
Case Study: Citable × DREAM Venture Labs

Citable Team · 12 min read
How an AI-native nonprofit increased its visibility across major AI engines by 270% in 30 days. Discover how DREAM Venture Labs went from appearing in 18% of relevant queries to 67% using Citable's M.A.P. Framework and GEO strategy.


Overview

How an AI-native nonprofit increased its visibility across major AI engines in 30 days

About DREAM Venture Labs

DREAM Venture Labs is a nonprofit organization supporting immigrant founders, students, and small businesses across Massachusetts. Despite strong programs, a newly revamped website, and recognition in the community, the organization faced a critical challenge: search engines were no longer the primary discovery channel. AI engines were.

By mid-2024, ChatGPT, Claude, Perplexity, Gemini, and LLaMA-based chatbots had become the default place where students, founders, and donors asked questions like "free programs for immigrant founders," "nonprofit visa support," "fundraising training in Boston," and "community business incubators."

DREAM wasn't appearing consistently. They turned to Citable to fix that.

The Challenge

The Traditional SEO Paradox

Even with strong SEO, DREAM Venture Labs faced three critical problems that traditional search optimization could not solve:

Outdated & Inaccurate Information

AI models referenced old programs that no longer existed, wrong eligibility requirements, out-of-date partner schools, and incorrect contact information.

Geographic & Persona Inconsistency

A "founder in Boston" got completely different answers than a "colleague in Denver"—often omitting DREAM entirely. The organization's reach was being artificially limited by AI's inconsistent understanding.

Fragmented Narrative

DREAM's story was spread across many subpages. Models struggled to piece together the full picture of what DREAM actually does, making them less likely to recommend the organization.

Core Problem: AI models didn't actually understand DREAM, so they ranked it below organizations with bigger budgets or more online content.

The Solution: Citable's AI Visibility Engine

Citable deployed its full GEO stack—the M.A.P. Framework (Memory Fit, Authority Graph, Prompt Surface), persona-based AI accounts, geographical sensitivity testing, and cross-engine campaign tracking.

The Approach

| Phase | Description | Scale |
|---|---|---|
| Multi-Persona Testing | Memory-enabled AI accounts simulating immigrant founders, nonprofit leaders, college advisors, grant writers, students, and small business owners | 200+ accounts |
| Geographic Mapping | Mapped visibility across all U.S. states and international markets, revealing blind spots | 50 states + 20 countries |
| Content Optimization | Generated model-friendly structured content matching how LLMs parse organization data | Full site optimization |
| Cross-Engine Campaigns | Structured test cycles tracking appearance, ranking, and content influence | Thousands of cycles |
| Program Analytics | Performance tracking for Launch Incubator, Growth Accelerator, and Build Fellowship | Ongoing monitoring |
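In outline, a cross-engine test cycle like the one described above repeatedly asks each engine the same queries and measures how often the brand appears. The sketch below is a hypothetical illustration, not Citable's actual implementation: the `ask_engine` stub, the engine names, and substring matching as a "mention" test are all assumptions.

```python
from collections import defaultdict

def ask_engine(engine: str, query: str, persona: str) -> str:
    """Placeholder for a real API call to an AI engine; returns answer text."""
    raise NotImplementedError

def visibility_rate(answers: dict, brand: str = "DREAM Venture Labs") -> dict:
    """Share of answers mentioning the brand, broken out per engine.

    `answers` maps (engine, query, persona) tuples to answer text.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for (engine, _query, _persona), answer in answers.items():
        totals[engine] += 1
        if brand.lower() in answer.lower():
            hits[engine] += 1
    return {engine: hits[engine] / totals[engine] for engine in totals}
```

Run across hundreds of personas and thousands of query cycles, a metric like this yields the per-engine visibility percentages reported throughout this case study.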

Why Memory-Based Personas Matter

AI engines personalize answers based on learned "memory clusters." This approach exposed inconsistencies invisible to standard SEO, revealing how different personas received wildly different information about DREAM.
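One simple way to quantify the persona inconsistency described above is to check whether each persona's answer contains the same core facts about the organization. This is a minimal sketch under assumptions: the fact list and the scoring rule are illustrative, not Citable's actual consistency metric.

```python
# Hypothetical canonical facts every persona should hear about the org.
CORE_FACTS = [
    "free",                 # programs are free
    "massachusetts",        # where it operates
    "immigrant founders",   # who it serves
]

def persona_consistency(answers_by_persona: dict) -> float:
    """Fraction of (persona, fact) pairs where the persona's answer
    contains the fact; 1.0 means every persona hears the same story."""
    checks = [
        fact in answer.lower()
        for answer in answers_by_persona.values()
        for fact in CORE_FACTS
    ]
    return sum(checks) / len(checks)
```

A score like this, tracked per week, is one way the jump from 22% to 81% persona consistency could be expressed.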

Results (30-Day Campaign)

+270% Increase in Total AI Visibility

Key Metrics

Overall Visibility: 18% → 67%
Across GPT-4.1, Claude 3.5 Sonnet, Perplexity, and Gemini

Persona Consistency: 22% → 81%
Students, founders, and nonprofit leaders now receive consistent information

Geographic Coverage: 8 → 48 states
(500% expansion), especially in areas with large immigrant populations

Hallucinations Reduced: -63%
Models stopped mentioning discontinued programs and corrected eligibility descriptions

Week-by-Week Progress

| Week | Visibility Rate | Key Milestone |
|---|---|---|
| Baseline | 18% | Initial audit complete. Identified outdated program info, fragmented site structure, zero geographic consistency. |
| Week 1 | 21% | Site content restructured with unified messaging. Schema markup implemented. Minimal citation change yet — models hadn't re-indexed. |
| Week 2 | 34% | First major jump. Structured content began appearing in ChatGPT and Perplexity responses. Hallucinations dropped as models picked up corrected program info. |
| Week 3 | 52% | Peak acceleration. Geographic expansion kicked in as persona-based testing revealed and addressed blind spots. Persona consistency jumped from 22% to 58%. |
| Week 4 | 67% | Stabilization at new baseline. Remaining gains came from long-tail queries and less common persona types. |

What Drove Which Results

| Action Taken | Expected Result | Actual Result | Timeline |
|---|---|---|---|
| Unified site messaging (single narrative across all pages) | Reduce hallucinations and contradictions | -63% hallucinations. Models stopped citing discontinued programs within ~10 days of re-indexing. | Week 1-2 |
| Added structured data (Organization, Program, FAQ schema) | Improve citation accuracy in Google AI and Gemini | Gemini visibility jumped from 5% to 41%. Google AI Overviews began featuring DREAM for nonprofit queries. | Week 2-3 |
| Geographic-specific content (state-by-state program availability pages) | Expand geographic coverage beyond Massachusetts | Coverage expanded from 8 to 48 states. Largest gains in states with high immigrant populations (CA, TX, NY, FL, IL). | Week 2-4 |
| Persona-aligned content (separate pages for students, founders, donors) | Improve persona consistency | Persona consistency rose from 22% to 81%. The biggest gap closed was between "student" and "nonprofit leader" personas, which had been receiving wildly different information. | Week 3-4 |
| Cross-platform distribution (LinkedIn articles, community forums) | Increase Authority Graph signals | Perplexity citations improved most (11% to 54%), likely because Perplexity heavily weights external authoritative mentions. | Week 3-4 |
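The structured-data work described above uses schema.org markup in JSON-LD, which is normally embedded on a page inside a `<script type="application/ld+json">` tag. Here's a minimal Organization block of the kind that might sit on DREAM's site; the field values and URL are illustrative placeholders, not taken from the real site.

```python
import json

# Illustrative schema.org Organization markup (values are placeholders).
org_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "DREAM Venture Labs",
    "description": "Nonprofit supporting immigrant founders, students, "
                   "and small businesses across Massachusetts.",
    "areaServed": "Massachusetts",
    "url": "https://example.org",  # placeholder, not the real domain
}

print(json.dumps(org_schema, indent=2))
```

Explicit, machine-readable fields like `areaServed` give engines that consume structured data an unambiguous statement of what the organization is, who it serves, and where, rather than leaving models to infer it from scattered prose.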

About the 48-State Coverage

DREAM's geographic coverage went from 8 to 48 states — but not 50. The two remaining states (Wyoming and North Dakota) had negligible query volume for nonprofit founder support programs during the testing period. With fewer than 5 relevant queries per state across all engines, the sample size was too small to measure meaningful visibility. These aren't failures — they reflect genuinely low demand in those geographies for DREAM's specific services. As query volume grows or DREAM expands programming, these gaps will be revisited.

Top-Performing Queries

| Query | Performance |
|---|---|
| Community incubators for immigrants | 2.5× increase |
| Nonprofit founder support MA | 2.3× increase |
| Training for immigrant entrepreneurs | 2.4× increase |
| No-cost fundraising course MA | 2.8× increase |

Real-World Impact

Within 45 days of campaign launch:

  • Student applications increased — More qualified candidates discovering programs
  • Partner programs reached out — New collaboration opportunities from AI discovery
  • Universities requested info sessions — Academic institutions seeking partnership
  • Immigrant founders discovered Build Fellowship — Directly fulfilling the organization's mission
  • Donors learned about DREAM from AI engines — New funding sources through AI-driven discovery

What Didn't Work

No campaign is perfect. Here's what we tried that underperformed or failed:

YouTube content had minimal impact. We created 3 short program overview videos and uploaded them to a new DREAM YouTube channel. After 30 days, these had no measurable effect on Gemini citations. The likely reason: Gemini's YouTube integration favors channels with established watch history and subscriber bases. A brand-new channel with 3 videos doesn't carry enough signal. Lesson: YouTube is a long-game channel. Don't expect citation impact in under 90 days unless you already have an established presence.

Wikipedia editing was deprioritized. Initial plans included creating or updating a Wikipedia article for DREAM. However, DREAM didn't meet Wikipedia's notability guidelines for organizations (which require significant coverage in independent, reliable sources). Attempting to create the page would have risked deletion and wasted effort. Lesson: Wikipedia is powerful for AI citations (14% of Perplexity responses reference it), but only pursue it when your organization genuinely meets notability standards. Premature Wikipedia pages get flagged and removed.

Generic "nonprofit support" content underperformed. Early content targeting broad queries like "nonprofit support programs" saw minimal citation lift. The queries were too competitive (large national organizations dominate), and DREAM's differentiation wasn't clear at that level of generality. Lesson: Niche, specific content (e.g., "free fundraising training for immigrant founders in Massachusetts") dramatically outperformed generic category content.

Effort and Resources Required

Transparency about what this campaign actually required:

| Resource | Amount | Notes |
|---|---|---|
| Citable team hours | ~40 hours over 30 days | Initial audit (8 hrs), content strategy (6 hrs), content creation/optimization (16 hrs), monitoring and iteration (10 hrs) |
| DREAM internal effort | ~12 hours over 30 days | Reviewing content for accuracy (4 hrs), providing program details and corrections (4 hrs), approving changes (2 hrs), internal alignment meetings (2 hrs) |
| Content produced | 8 optimized site pages, 3 new FAQ pages, 2 LinkedIn articles | Plus restructuring of 15+ existing pages |
| Tools used | Citable platform, Google Search Console, schema validator | No paid media or advertising budget was spent |

Total effective cost: This was a Citable-managed engagement. For organizations attempting similar work independently, expect to budget 40-60 hours of skilled content strategist time over 30 days, or roughly $4,000-8,000 at freelance rates for the content work alone (not including tooling).

60/90-Day Follow-Up

Results don't mean much if they don't last. Here's what happened after the initial 30-day campaign:

| Metric | Day 30 | Day 60 | Day 90 |
|---|---|---|---|
| Overall Visibility | 67% | 64% | 69% |
| Persona Consistency | 81% | 78% | 83% |
| Geographic Coverage | 48 states | 48 states | 48 states |
| Hallucination Rate | -63% from baseline | -67% from baseline | -71% from baseline |

Key takeaway: Visibility held steady with minimal maintenance effort (approximately 4 hours/month of content updates and monitoring). The slight dip at day 60 coincided with a 3-week gap in content updates — reinforcing Law 5 (visibility decays unless maintained). After resuming regular updates, metrics recovered and slightly exceeded the day-30 peak by day 90. Hallucinations continued declining as models incorporated more corrected information over time.

Why This Case Matters

DREAM Venture Labs is not a massive nonprofit with a million-dollar media budget. They're lean, mission-driven, and community-focused. AI discovery was supposed to be a disadvantage.

Instead, with Citable, it became a strategic advantage.

In a world where users increasingly ask AI instead of searching Google, DREAM became the organization most likely to appear—accurate, visible, and recommended.

The Transformation

| Before Citable | After Citable |
|---|---|
| ❌ Invisible to most AI queries | ✅ 67% visibility across major AI engines |
| ❌ Inconsistent persona responses | ✅ 81% persona consistency |
| ❌ Geographic limitations | ✅ 48-state coverage |
| ❌ Hallucinated information | ✅ 63% fewer hallucinations |
| ❌ Fragmented messaging | ✅ Unified, accurate messaging |

Key Takeaways

Lessons from the DREAM Campaign

AI visibility is persona-relative
Different audiences receive different narratives. Optimizing for "everyone" means optimizing for no one.

Geographic testing is essential
Your AI visibility in Boston might be strong while San Francisco sees nothing. Test across locations.

Memory-enabled personas reveal hidden issues
Traditional testing misses how AI evolves its understanding over time.

Consistency beats volume
A small, coherent message across AI engines outperforms scattered, inconsistent mentions.

Nonprofits can compete
Budget size doesn't determine AI visibility. Structure, consistency, and strategic optimization do.

The Technology Behind the Success

Citable's M.A.P. Framework in action:

Memory Fit

Tracked how each persona's understanding of DREAM evolved over weeks of interactions, optimizing for positive memory trajectories.

Authority Graph

Strengthened DREAM's trust spine by ensuring high-authority sources consistently mentioned the organization in the right contexts.

Prompt Surface

Expanded DREAM's answer footprint across the queries and intents that mattered most to their mission.

Campaign Scale

  • 200+ AI Accounts — Memory-enabled personas across multiple user types
  • 70 Locations Tested — Comprehensive geographic coverage
  • 1,000s of Test Cycles — Continuous monitoring and optimization
  • 4 Major AI Engines — ChatGPT, Claude, Perplexity, and Gemini

What's Next for DREAM

With their AI visibility foundation established, DREAM Venture Labs is now:

  1. Expanding program offerings with confidence that AI will accurately communicate new initiatives
  2. Launching targeted campaigns for specific personas (students vs. founders vs. donors)
  3. Monitoring visibility metrics to maintain and improve their strong position
  4. Leveraging AI insights to understand what questions their community is asking

"Citable didn't just increase our visibility—they gave us a systematic way to understand and shape how AI talks about our mission. That's invaluable for any organization in 2025."

— DREAM Venture Labs Team

Ready to Transform Your AI Visibility?

Whether you're a nonprofit, a B2B SaaS company, or a consumer brand, the principles that worked for DREAM Venture Labs can work for you:

  • Measure your current AI visibility across engines, personas, and geographies
  • Diagnose inconsistencies, hallucinations, and gaps in your story
  • Optimize your content and authority signals using the M.A.P. Framework
  • Monitor your progress with continuous testing and tracking

Start Your Free Trial

About Citable

Citable is the first AI visibility platform built specifically for the M.A.P. Framework. We help brands measure, diagnose, and improve how AI engines remember, trust, and recommend them across personas, regions, and time.

Join DREAM Venture Labs and hundreds of other organizations leveraging AI visibility as their competitive advantage.