The Execution Layer for AI Discovery

AI doesn't rank.
AI recommends.

FancyAI is the execution layer that influences AI search engines and gets you recommended. Across ChatGPT, Gemini, Perplexity, and Claude — measured, mapped, and moved with the AI Readiness Index (ARI).

Tracked across
ChatGPT · Gemini · Perplexity · Claude · Grok + Future Models

AI Readiness Index (ARI)

Live · Q2 2026
Your AI Readiness Index (ARI) Score
0
Out of 100
Visible
Higher than 68% of brands
in your category
AI Visibility Score 71/100
ChatGPT
62
Gemini
65
Perplexity
74
Claude
60
Recommendation Rate
12%
▲ 4 pts
Share of Voice
5.6%
▲ 1.2%
AI Presence
72/100
▲ 5 pts
Visibility Gaps
65
▼ 7
Free Diagnostic

See how AI engines
recommend you.

Get your AI Readiness Index (ARI) score across ChatGPT, Gemini, Perplexity, and Claude. Delivered to your inbox in 24 hours.

No credit card. No sales call required. Just your score.

Request received.

Real analysis takes real time. We're querying ChatGPT, Gemini, Perplexity, and Claude across your category — then weighting recommendation frequency, sentiment, and competitive positioning.

Your full AI Readiness Index (ARI) report lands in your inbox within 24 hours.

Want to see what we'd do with the score? Book a demo →
The Category vs. FancyAI
The category sells visibility. We sell influence.
The category tracks. We change outcomes.
How it works

Influence, made measurable.

Three steps. One execution layer. Repeatable lift across every model that matters — quantified by the AI Readiness Index (ARI).

01
Evaluate

The AI Readiness Index (ARI) tells you where you stand.

Entity clarity, citation density, structured proof, corroborating mentions — scored across every major model.

02
Execute

We change the signals AI sees.

Site changes, structured data, citation graph engineering, third-party mentions — operated, not advised.

03
Measure

Influence shows up where it counts.

AI recommendation rate across ChatGPT, Gemini, Perplexity, and Claude — tracked weekly, mapped to revenue.

Explore the platform

AI Platform Breakdown

5 platforms tracked
Platform | Presence | Change (90d) | Position
ChatGPT | 18 / 100 | ▼ 6 pts | #3
Perplexity | 42 / 100 | ▼ 4 pts | #1
Gemini | 33 / 100 | ▼ 5 pts | #2
Claude | 28 / 100 | ▲ 8 pts | #3
Grok | 22 / 100 | ▲ 3 pts | #4

The brands AI now selects, not just sees.

The category sells visibility.
We sell influence.

Featured Case Study
Conner Hats · Q1 2026 · 90 days

Jumped 99 positions
in 90 days.

FancyAI executed targeted content, structure, and citation changes across ChatGPT, Gemini, Perplexity, and Claude — driving Conner Hats from page-five obscurity to top-three AI recommendations in under 90 days.

Read the full case study
4.4×
Conversion lift, AI traffic vs. organic
Top-3
Recommendations across all four LLMs
12 wks
From kickoff to full execution rollout
FAQ

Questions about AI visibility, answered.

What is Generative Engine Optimization (GEO)?

GEO helps your brand get recommended inside AI answers from platforms like OpenAI's ChatGPT, Google's Gemini, Anthropic's Claude, and Perplexity.

SEO was built for rankings and clicks. GEO is built for recommendations and citations inside AI-generated answers.

If your brand is not part of the answer, it is often excluded from consideration entirely.

How is GEO different from SEO?

SEO helps your website rank in search engines. GEO helps AI systems understand, trust, and recommend your brand.

Some fundamentals overlap — authority, technical health, structured content, and credibility still matter. But GEO also depends on signals traditional SEO was never designed for, including:

  • Brand/entity clarity
  • Third-party citations and mentions
  • Structured, extractable content
  • Consistency across the web
  • Presence in communities and trusted sources AI systems reference

The strongest brands run SEO and GEO together because both compound.

Which AI platforms does FancyAI cover?

We monitor major AI discovery platforms including ChatGPT, Gemini, Claude, Perplexity, Copilot, and emerging AI search experiences as they evolve.

Each platform behaves differently. Optimization that works on one model does not automatically transfer to another, which is why our approach is model-agnostic by design.

What is the AI Readiness Index (ARI)?

The AI Readiness Index (ARI) is FancyAI's 0–100 scoring system that measures how prepared your brand is to be recommended by AI systems.

The score evaluates four core signals:

  • Entity clarity
  • Citation authority
  • Structured proof
  • Corroborating mentions across the web

As those signals strengthen, brands tend to appear more consistently in AI-generated recommendations.

How long does it take to see results?

Most brands begin seeing early visibility improvements within weeks. Larger shifts in recommendation behavior typically happen over a few months as authority signals compound and AI systems refresh their understanding of your brand.

Timing depends on your category, competitive landscape, and starting foundation.

Why invest in AI visibility now?

AI is reshaping how people discover brands.

More decisions are happening before a user ever visits a website. AI systems are increasingly building the shortlist first — and many brands are not being included at all.

Brands investing now are establishing authority and visibility that compounds over time. Brands waiting are giving competitors a head start inside the systems shaping future discovery.

The Product

If you're not
in the answer,
you don't exist.

We get you in the answer. The AI Readiness Index (ARI) tells you where you stand. The Execution Layer changes it. Across ChatGPT, Gemini, Perplexity, and Claude.

Execution Dashboard · Live Execution System

AI Visibility Diagnosis

Visibility Share · Low Visibility
25%
Your brand appears in 25% of relevant AI responses across major LLMs.
Competitive Status
Competitors dominate citation share in this category.
Opportunity Gap Detected
Top Missing Opportunities
Pricing intent queries
Missing from 8/10 top comparative pricing prompts.
Comparison queries
Not cited when users ask "X vs Y" alternatives.
Buyer guides
Absent from definitive category overviews.

Execution Action Plan · 3 active

On-Site Recs
23
Off-Site Actions
18
Active Campaign Tasks
Update Pricing Page Schema
Inject missing comparative pricing schema to capture intent queries.
In Progress
Publish "X vs Y" Comparison Post
Targeting story angles for buyer guides. Draft generated.
Queued
Forbes Citation Outreach
Sending PR + content pitch snippets to target journalists.
In Progress
Reddit Seed Campaign
Initiated discussions in niche subreddits.
Published
The AI Readiness Index (ARI) · Built on The Signal Hierarchy

Four signals. One score.
Every model that matters.

AI Readiness Index (ARI) scores your eligibility to be recommended by AI across the four signals defined in The Signal Hierarchy — FancyAI's published methodology, derived from 40,000+ websites and 1,500+ sources.

Entity Clarity · on-site

How clearly AI understands your brand as a defined entity in knowledge graphs.

Your score: 92 / 100

Citation Density · off-site

How many authoritative third-party sources reference your brand by name.

Your score: 68 / 100

Structured Proof · on-site

How extractable your content is — passages, tables, statistics, schema markup.

Your score: 74 / 100

Corroborating Mentions · off-site

Whether you're named in editorial coverage, "best of" lists, and roundups AI reads.

Your score: 55 / 100
The Execution Layer

We don't advise.
We execute.

Other tools tell you what AI is saying. We change what AI sees. Across every signal, every surface, every model — operated as a sprint, not a recommendation.

Book a Demo
P-281 Pro Max Fence · Action Plan
Category: Commercial Fencing · Topics: 10 · Prompts: 100
Models: GPT-5.1, Gemini 2.5 Pro, Grok 4.1, Sonar, Claude 4.5 Sonnet
Run 1 · Recommendation Completed
Overview · Components · Analytics · Competitors · Recommendations · Citation Builder · Evidence
Expected Impact: moderate
Pending · New Content · NC-VIDEO-STRATEGY
Overview · Implementation plan
Video Content Priorities
1. Chain Link vs. Welded Wire Fence: Which is Right for Your Commercial Property?
What it answers: "chain link or welded wire fence commercial"

AI systems already answer this comparison query with generic manufacturer content. Pro Max should create a side-by-side field comparison showing actual installations: chain link at a construction yard, welded wire at a power substation. Project manager explains cost difference ($3–8/linear foot for chain link vs. $15–40 for welded wire), security ratings, and when each makes sense. Three-minute video.

Why AI cites it: Comparative content with specific cost figures and use-case guidance.
Signal → Execution

Every signal has a surface we operate on.

AI Readiness Index (ARI) Signal
Where we execute
Entity Clarity
Wikidata · Google Knowledge Graph · schema.org markup · Brand entity submissions
Citation Density
Editorial outreach · Citation graph engineering · Reddit & LinkedIn presence · PR placement
Structured Proof
Site copy rewrites · Passage-level structure · Comparison tables · Statistic injection
Corroborating Mentions
"Best of" list inclusion Editorial roundups Industry directories Awards & accreditations
Measurement

Influence, tracked. Weekly.

Every change we make moves the AI Readiness Index (ARI). Every ARI movement maps to recommendation rate. Every recommendation maps to traffic, qualified leads, and revenue.

AI Readiness Index (ARI) tracked weekly across ChatGPT, Gemini, Perplexity, and Claude.
Share of voice and recommendation rate, by competitor and category.
Attribution from AI traffic to your CRM, with conversion lift baselines.
AI Readiness Index (ARI) · 90-Day Trend
Conner Hats · Demo data
Start
23
Today
87
▲ +64 pts
Velocity
+0.7/day
Chart: ARI score, 0–100, from Day 0 to Day 90, with entity submission and list inclusion milestones marked.
FAQ

How the product works.

What does the AI Readiness Index (ARI) actually measure?

The AI Readiness Index (ARI) measures how prepared your brand is to be discovered and recommended by AI systems.

The score evaluates four core areas:

  • How clearly AI understands your brand
  • How often trusted sources mention you
  • How easy your content is for AI systems to extract and cite
  • How consistently your brand appears across the web

Each signal is scored independently, then combined into a single 0–100 score.
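
For illustration only, here is a minimal sketch in Python of that combination step. The equal weighting, the function name, and the signal keys are our assumptions — FancyAI does not publish the actual ARI weights. The sample values echo the demo scores shown elsewhere on this page (92, 68, 74, 55).

```python
# Minimal sketch of combining four independently scored signals into one
# 0-100 composite. Equal weights are an assumption; FancyAI does not
# publish the real ARI weighting.
ARI_WEIGHTS = {
    "entity_clarity": 0.25,          # how clearly AI understands the brand
    "citation_authority": 0.25,      # mentions from trusted sources
    "structured_proof": 0.25,        # extractability of on-site content
    "corroborating_mentions": 0.25,  # consistency across the web
}

def composite_ari(signals: dict[str, float]) -> float:
    """Weighted blend of four 0-100 signal scores into a 0-100 composite."""
    for name, score in signals.items():
        if not 0 <= score <= 100:
            raise ValueError(f"{name} must be in 0..100, got {score}")
    return sum(ARI_WEIGHTS[name] * signals[name] for name in ARI_WEIGHTS)

print(composite_ari({
    "entity_clarity": 92,
    "citation_authority": 68,
    "structured_proof": 74,
    "corroborating_mentions": 55,
}))  # -> 72.25
```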

Does the AI Readiness Index (ARI) directly correlate with recommendation rate?

In the categories we monitor, stronger ARI scores consistently align with higher recommendation frequency.

Brands with stronger authority, citation, and trust signals tend to appear more often inside AI-generated answers.

We track both ARI and recommendation rate weekly so you can see the relationship directly in your dashboard.

What is citation graph engineering?

Citation graph engineering improves how your brand is referenced and validated across the internet.

AI systems do not learn from your website alone. They also evaluate the broader ecosystem surrounding your brand — articles, directories, reviews, Reddit discussions, media coverage, comparison lists, and third-party mentions.

FancyAI identifies the authority gaps influencing recommendation behavior, then prioritizes the actions most likely to improve visibility across AI systems.

How is FancyAI different from monitoring tools?

Most GEO tools stop at reporting.

FancyAI combines monitoring with execution.

We track recommendation visibility, citation coverage, and AI presence across platforms — then actively improve the signals influencing those outcomes through content, schema, entity optimization, citation work, and editorial execution.

The dashboard measures the work. The work drives the results.

Can FancyAI work alongside our existing SEO program?

Yes. GEO and SEO work best together.

Traditional SEO still matters because AI systems continue pulling signals from authoritative search results. GEO builds on that foundation by improving the signals AI systems use to decide which brands to recommend.

Strong SEO helps GEO. Strong GEO can strengthen SEO performance over time.

How do you measure influence over time?

We track three primary metrics:

  • AI Readiness Index (ARI)
  • Recommendation rate across AI platforms
  • Traffic and conversion attribution from AI sources

Every customer receives a dashboard tying visibility improvements to business outcomes — including qualified traffic, leads, and pipeline impact.

The Team

Operators, not advisors.
Built to execute.

Six operators pulled together around one bet: AI doesn't rank — it recommends. And recommendation is something you can change.

5
Founder Seats
9
Prior CXO Roles
12
Companies Built & Led
1
Bet

The visibility category stops at observation. We built the team to do the verb. Operators who've shipped enterprise growth, engineers who've built the platforms brands depend on, and strategists who've sold to Fortune brands — pulled together around a single bet: AI recommends, and recommendation is changeable.

Tom Howell
Tom Howell
Co-Founder & Chief Executive Officer
Former Co-Founder of ZPEG. Principal at Bizwiggle. Architect of FancyAI's go-to-market and enterprise growth infrastructure.
Jacob Ralph
Jacob Ralph
Co-Founder & Chief Technology Officer
Former Co-Founder & CTO of Client Book CRM and Diamond Profile. Architect of FancyAI's AI visibility platform and execution infrastructure.
Keith Brown
Keith Brown
Chief Operating Officer
Former CIO of Alliance Health. Former CIO of Fusion-io. Operator across enterprise infrastructure and IPO-stage scaling.
Chris Barbee
Chris Barbee
Chief Revenue Officer
Former CSO at Omnicom. Former CMO at 8AM Golf. Built brand and revenue functions across enterprise health and consumer brands.
Mikaela Berman
Mikaela Berman
VP Product Growth & CX
Former Head of Marketing at Salesbot and RecruitBot. Built product-led growth across two B2B SaaS scale-ups.
Joseph Ashburner
Joseph Ashburner
Head of Strategy
Founder of ShopNinja. Co-Founder & CMO of Smoke Holdings IBC. Strategic operator across consumer and DTC growth.
Research

Original research.
No recycled hot takes.

We don't write opinions. We publish data. Every report below cites primary sources — and most are based on our own analysis of how the major LLMs actually decide which brands to recommend.

40K+
Websites Analyzed
1,500+
Sources Cited
12,500
Cross-Platform Queries
5
LLMs Tracked

The corpus.

19 reports · Updated weekly
Original Research 6.8%
Which AI Platform Cites What? We ran the same 500 queries across 5 LLMs.

12,500 queries across ChatGPT, Perplexity, Claude, Gemini, and Grok. Only 6.8% of cited domains showed up on three or more platforms. Optimization for one is not optimization for the others.

12 min read
Myth-Busting 40/60
Original Research
"GEO is just rebranded SEO." We ran the numbers. Here's the 40/60 truth.

40% of GEO overlaps with good SEO fundamentals. The other 60% is genuinely different — and that's where most of the bad advice and missed opportunity lives.

16 min read
Platform Deep-Dive 129K
How to optimize for ChatGPT: 20 ranking factors from 129,000 domains.

SE Ranking analyzed 129,000 domains. We pulled the SHAP-ranked top 20 ChatGPT factors. Domain trust above 90 = 4× more citations. FCP under 0.4s = 6.7 vs 2.1 citations. The full playbook.

11 min read
Landscape Analysis 3%
Most GEO "experts" can't cite a single study. Here's who actually can.

75 voices tracked. Only 3% of top SEO thought leaders include "GEO" in their headline. We mapped the landscape into three tiers and found 5 major gaps no one is filling.

17 min read
Macro Analysis 61%
Original Research
How AI Overviews killed the click. A zero-click economy emerges.

Eight primary studies. One conclusion: organic CTR is collapsing across every measurement, and the decline is broader than AI Overviews alone. Even queries without AIOs lost 41% CTR YoY.

13 min read
Business Case 23×
AI visitors convert at 23×. The quality story behind the quantity drop.

AI traffic is 1.08% of total today, growing 527% YoY, and converting at multiples organic search has never seen. Six independent studies converge on the same finding.

11 min read
Academic Foundation +40%
Academic Paper
The Princeton paper, decoded. The academic foundation of GEO.

Aggarwal et al. coined the term, built GEO-bench, and quantified what works. Three years later, every credible GEO claim still traces back to this paper.

15 min read
Citation Patterns 90%
Original Research
Why fresh content beats authority. The recency bias in AI citations.

90% of AI bot hits land on content less than three years old. AI-cited pages are 368 days fresher than traditionally-ranked ones. Continuous publishing is the new optimization unit.

9 min read
Predictive Signals 0.737
Original Research
YouTube mentions predict AI visibility better than backlinks.

Ahrefs analyzed 75,000 brands. The strongest single predictor of AI brand visibility wasn't domain authority. It wasn't backlinks. It was YouTube mentions — by a wide margin.

10 min read
B2B Buyer Behavior 67%
Buyer Behavior
67% of B2B buyers start with AI. The new front door.

B2B buyers are adopting AI search at three times the consumer rate. By the time they visit a vendor website, the shortlist is already set.

12 min read
Content Collapse 91.4%
Original Research
AI is now citing AI. The 91.4% problem.

Search Engine Land found 91.4% of AI Overview citations are AI-generated. CJR found AI search wrong 60% of the time — premium models worse than free. Each generation degrades.

13 min read
Brand Risk 42.1%
Original Research
When AI lies about your company. A brand hallucination field guide.

Air Canada lost a tribunal. Soundslice built a feature ChatGPT invented. Hoka had wrong pricing on display. The hallucination rate is 17–90% depending on domain — and 40% of users never check the source.

12 min read
Industry Impact 50%
Landscape & Behavior
"Extinction-level event." How AI search is restructuring the open web.

NPR's framing for what publishers face. Daily Mail vice chair: 50% of traffic gone in five years. 500+ lawsuits. A handful of platforms now control how billions discover information.

15 min read
Comparative Analysis < 1/100
The honest skeptic's case against GEO.

Rand Fishkin: fewer than 1 in 100 prompts return the same brands. Profound: 40–60% of cited domains change in a month. A founder shut down his GEO tool after concluding it was just good marketing. The strongest counter-arguments deserve a hearing.

13 min read
Manipulation 3
Original Research
Black hat GEO: the manipulation playbook (and why it's doomed).

Three categories of manipulation are spreading: data poisoning, citation stuffing, and hidden prompt injection. Harvard demonstrated text sequences that force LLM outputs. The platforms are evolving faster than the attackers.

11 min read
Ethics 0
Landscape & Behavior
GEO ethics in 2026: no framework, growing stakes.

No industry body. No code of ethics. No enforcement mechanism. As 37% of consumers start with AI and 82% are skeptical, the discipline is being built on every operator's individual judgment.

12 min read
Legal & Regulatory 500+
Landscape & Behavior
The legal front: 500+ lawsuits, antitrust, and AI defamation.

The New York Times sued OpenAI and Perplexity. Google faces EU antitrust over AI Overviews. The Section 230 question is unsettled. Wolf River Electric is testing AI defamation in court.

13 min read
Technical Architecture 10
Academic Paper
The 10 gates: how AI search engines actually decide what to cite.

Most GEO writing describes outcomes. This one explains the engine. The pipeline from a page on the open web to a citation in an AI response is a 10-stage system — and most brands optimize for the wrong stages.

14 min read
Comparative Analysis 5 of 6
Six platforms promise to get your brand cited by AI. Most don't finish the job.

Independent buyer-side comparison of Profound, Evertune, Semrush, Scrunch, Conductor, and FancyAI — evaluated across fourteen dimensions. The structural fault line splitting the category in two.

14 min read
Coming Soon
More original research dropping weekly.

We publish a new piece every Monday. Subscribe to get the next one in your inbox the day it's released.

Subscribe to be notified
FAQ

About our research.

How did you analyze 40,000+ websites?

Our research corpus was built over 18 months using academic papers, platform documentation, industry studies, and original citation analysis.

Every published claim is tied to a primary source, experimental dataset, or reproducible methodology.

What is "the corpus" you keep referring to?

The corpus is FancyAI's internal research library.

It includes academic studies, platform documentation, citation analyses, experiments, and AI visibility research organized across multiple GEO categories.

The corpus informs both our published research and our AI Readiness Index (ARI) methodology.

How is FancyAI's research different from typical "GEO expert" content?

Most GEO content repeats broad opinions without primary sourcing.

Our research is built on citation analysis, reproducible methodologies, platform behavior studies, and original experiments.

Where possible, we publish the methodology so findings can be independently validated.

How often do you publish new research?

We publish new research weekly — a new piece every Monday — though timing can shift when findings don't yet meet our internal standards for rigor and validation.

Recent topics include citation behavior, AI visibility patterns, structured content performance, and cross-platform recommendation studies.

What does "structure beats length" mean for content?

Longer content does not automatically perform better in AI systems.

Our research consistently shows that well-structured content — clear headings, statistics, comparison tables, concise explanations, and extractable formatting — is cited more often than long, unstructured pages.

The goal is not more content. It is more usable content.

What happens when AI models change?

AI systems evolve constantly.

FancyAI continuously monitors shifts in recommendation behavior, citation patterns, and platform responses across major models.

Our scoring systems and execution priorities adapt alongside those changes so strategies remain aligned with how AI systems actually behave.

Can GEO hurt SEO performance?

No. Effective GEO should strengthen foundational SEO signals, not compete with them.

Improving authority, clarity, citations, structure, and entity understanding often benefits both organic search visibility and AI recommendation performance.

Can I cite your research in my own work?

Yes.

All published research is openly shareable and fully sourced. We encourage teams, analysts, journalists, and marketers to reference our findings with attribution to FancyAI.

Case Studies

Real brands.
Real lift.

Two execution programs run end-to-end — search and AI as a single strategy. The same playbook moved an e-commerce retailer 99 ranking positions in 90 days, and lifted a POS hardware brand to position #1 on both Google and AI in the same month.

+99
Top Position Gain
896
AI Citations · 4 Models
+36%
Revenue MoM
90
Days to Results
Featured Case E-Commerce Specialty Retail 90 Days
Conner Hats
connerhats.com
Family-owned · 27 collections audited

GEO moved SEO by 99 positions in 90 days.

At a family-owned specialty retailer, AI-optimized content became the accelerant their search program had been missing. Same site, same SEO program — layered with FancyAI's GEO methodology. Within 90 days, ranking improvements landed across the entire collection portfolio.

GEO content is not a replacement for SEO. It is an accelerant. The same pages that were slowly climbing surged when we layered GEO on top. — Tom Howell, Co-Founder & CEO, FancyAI
Download the full case study
+99
Top Keyword Position Gain
27
Collection Pages Updated
338
Keywords Tracked
90 days
From Kickoff to Lift

The Approach

FancyAI layered GEO methodology on top of the existing SEO program. No rebuilds, no replacements. 27 collection pages audited, then content, structure, and schema updated to match what AI systems prioritize: product authority, experiential detail, and structured data. Traditional rankings and AI visibility tracked together so the compounding effects were measurable.

01
Audit
Diagnose how AI systems interpret each collection page.
02
Optimize
Update content, structure, and schema across 27 pages.
03
Monitor
Track SEO rank and AI visibility as one system.

The Results · Selected Keywords

50+ keywords broke into the top 100. New rankings appeared for keywords Conner Hats had never ranked for. Top performers:

Keyword | New Rank | Δ
Gambler Hats | #1 | +99
Fishing Hats | Top 10 | +72
Black Cowboy Hats | Top 5 | +68
Steampunk Hats | Top 10 | +62
Leather Hats | Top 10 | +57
Outback Hats | Top 10 | +54

The Insight

Most brands treat GEO as a separate workstream from SEO. A new channel, new team, new vendor. That framing misses the point. The signals that win AI recommendations are the same signals Google rewards. Run them as one program and both compound. Run them as two and you underperform in both.

Featured Case E-Commerce POS Hardware High-Intent Search
●●
Category Leader
POS Hardware · anonymized
Confidential client
Credit card terminals · stands

From page two to position #1. On Google. In AI.

A unified strategy moved a Category Leader in POS to #1 across both channels, lifted monthly revenue 36%, and — critically — survived the Google March 2026 core update without a scratch while competitors took measurable hits.

We were doing fine on Google. In a category where the top three results get the sale, fine is not enough. FancyAI did not move us up. They made us the answer. — Marketing Lead, Category Leader in POS
Download the full case study
#1
Google Rank, Core Keyword
896
AI Citations Across 4 Models
+36%
Revenue Month-Over-Month
+54%
Sessions Year-Over-Year

The Approach

FancyAI runs SEO and GEO as a single strategy — one program, one roadmap, compounding into both channels at once. For this client, that meant parallel execution across search and answer engines: 10 premium backlinks live, 47 content optimizations, 6 new blog articles, and 100 prompts monitored continuously across GPT-5.1, Claude 4.5 Sonnet, Gemini 2.5 Pro, and Sonar.

Live
10 Premium Backlinks
Authority earned, not bought.
Done
47 Optimizations
Content + structure + schema.
Pub
6 New Articles
Buyer-intent content surfacing.
Live
100 Prompts Tracked
Across 4 LLMs, continuous.

The Results · Search

Three primary keywords hit #1 in the same month. /collections/stands traffic surged +323%. /collections/terminals up +162%. Revenue grew roughly 5× faster than traffic.

Keyword | Before | After
credit card terminals | #13 | #1
credit card terminal | #23 | #1
credit card processing terminal | #17 | #1

The Results · AI

Position #1 in AI results for "credit card terminals" — matching the Google rank. 896 total citations across all four AI models. "Buy credit card machine" impressions +1,817%. Sentiment: 10% positive, 90% neutral, 0% negative.

+1,817%
"Buy credit card machine" impressions
+223%
"Credit card processing machine" impressions
Google March 2026 Core Update

Our client held all authority metrics steady while weaker competitors in the POS category took measurable hits. The strategy was built for durability, not short-term gains.

Categories We've Operated In

Built for high-intent categories.

E-Commerce
Specialty Apparel
E-Commerce
POS Hardware
E-Commerce
Outdoor Gear
CPG
Personal Care
CPG
Better-for-You Foods
Healthcare
Preventive Care
Healthcare
Corporate Wellness
Health & Wellness
Supplements
Financial Services
Advisory
Travel
Adventure Outfitters
Home Services
HVAC
Marketing Services
Agency Partners
Automotive
Dealer Network
B2B SaaS
HR Tech
Industrial Services
Safety & Training
Retail
Sleep & Mattress
Automotive
Aftermarket
Fitness
Equipment & DTC
Manufacturing
Packaging & Materials
Real Estate
Workspace & Studios
Pricing

Priced for influence.
Not impressions.

Two subscriptions, built to run together. Eligibility is the software you log into — the platform that diagnoses how AI engines see your brand and tracks recommendation rate as it moves. Visibility is the team that executes the offsite work each month — citations, links, Reddit, distribution, and a dedicated strategist. The platform shows the gaps. The team closes them.

Subscription 01 · Eligibility

Eligibility, handled.

The subscription that runs the onsite side of GEO — diagnostics, action plans, and recommendation tracking across every major model. Pick the tier that matches your catalog and category complexity.

Essential

For single product or service brands.

$499 / mo
Billed monthly · cancel anytime
Get Started
  • Includes
  • 1 brand domain tracked
  • 5 custom GEO action plan runs / month
  • Up to 50 recommendations per GEO plan
  • 3 major LLM platforms
  • Competitor tracking
  • Unlimited prompts tracked
  • Unlimited GEO reporting plans
Enterprise / Agency

For large brand catalogs, creative agencies, and PR firms.

Custom
Annual contract · call for pricing
Talk to Sales
  • Everything in Essential, plus
  • All domains & client accounts
  • Unlimited custom GEO action plan runs
  • Expanded recommendations per plan
  • All major LLM platforms
  • Full competitive landscape tracking
  • Unlimited prompts & reporting plans
  • Priority support & custom integrations
Onsite Fulfillment available for $2,500 — includes 25 hours of digital optimization work.
Subscription 02 · Visibility

Visibility, earned.

The subscription that runs the offsite side of GEO — citations, links, Reddit, distribution, and a dedicated strategist. Running every month while the platform measures movement.

Essential

For single product or service brands.

$2,000 / mo
Monthly retainer · 90-day minimum
Get Started
  • Includes
  • Onsite content updates
  • Technical GEO
  • Off-site execution
  • $1,000 in citation link credits
  • $500 in Reddit posts
  • $500 for press release & distribution
  • Dedicated strategist
Pro

For large catalogs and multiple markets.

$9,500 / mo
Monthly retainer · 90-day minimum
Get Started
  • Everything in Essential, plus
  • Onsite content updates
  • Technical GEO
  • Off-site execution
  • $7,000 in links
  • $2,000 in Reddit posts
  • $500 for press release & distribution
  • Dedicated strategist
Every Plan Includes

Everything you get with FancyAI.

Eight capabilities across visibility, scoring, and execution — bundled by default. Tier limits scope (entities, platforms, sprint cadence), not capability.

Visibility Dashboards
Live AI Readiness Index (ARI) score, recommendation rate, share-of-voice tracked across every major LLM.
Recommendation Tracking
How often your brand surfaces in AI answers — by prompt, by platform, by competitor.
Cross-Platform Coverage
ChatGPT, Gemini, Perplexity, and Claude — tracked in one unified report.
AI Readiness Index (ARI)
Composite eligibility score across the four signals AI uses to decide who to recommend.
Execution Sprints
Content rewrites, structure, and schema updates — operated, not advised. We do the work.
Citation Graph Engineering
Authoritative third-party placements that move AI brand-mention weight where it matters.
Knowledge Graph Submissions
Brand entity definition in Wikidata, Google Knowledge Graph, schema.org — clarified for AI.
Editorial Outreach & List Inclusion
Earned placements in "best of" roundups, directories, and editorial coverage AI reads.
Competitive Matrix

How FancyAI Compares.

Full Automation · Actual Execution · AI Optimization
Capability
FancyAI
Profound
Conductor
Otterly
SEMrush / Ahrefs
Multi-LLM Monitoring
Limited
Bolt-On
Recommendations
Unlimited (20+ Types)
2+ Types
2+ Types
AI Readiness Index (ARI)
Citation Building
Segmented Analytics
Cost
$$
$$
$$
$$
$
FAQ

Questions, answered.

What's the difference between Eligibility and Visibility?

Eligibility is the platform.

It diagnoses how AI systems currently understand your brand, identifies what is missing, and tracks recommendation performance across major AI models.

Visibility is the execution layer.

That includes the strategist, citation work, earned media, structured content improvements, editorial distribution, and authority-building initiatives that improve recommendation outcomes.

The platform identifies the gaps. The team closes them.

How is FancyAI different from a GEO tracking tool?

Most GEO tools focus on monitoring.

FancyAI combines monitoring with execution.

We measure visibility, recommendation frequency, and citation coverage — then actively improve the signals driving those outcomes through ongoing optimization and strategic execution.

How long until we see lift?

Most brands begin seeing measurable visibility improvements within the first several weeks. More meaningful recommendation shifts typically happen over a few months as authority signals compound.

Results depend on your category, competition, and current AI Readiness Index (ARI) baseline.

Which AI platforms do you cover?

We monitor major AI platforms including ChatGPT, Gemini, Claude, Perplexity, Copilot, and emerging AI discovery experiences.

Coverage varies by plan, with higher tiers supporting broader platform monitoring and expanded competitive analysis.

What's actually delivered each month on the Visibility subscription?

Each Visibility engagement includes a dedicated strategist plus ongoing execution work designed to improve AI recommendation performance.

Depending on plan level, deliverables may include:

  • Citation development
  • Editorial placements
  • Reddit and community visibility
  • Press distribution
  • Technical GEO updates
  • Structured content optimization
  • Ongoing authority building

Every engagement is tied to measurable visibility goals.

How do you measure influence?

We track three primary metrics:

  • AI Readiness Index (ARI)
  • Recommendation frequency across AI platforms
  • Traffic and conversion attribution from AI sources

The goal is measurable business impact — not vanity metrics.

What's the typical contract structure?

Eligibility plans are available with monthly or annual billing options depending on tier.

Enterprise and agency engagements typically run annually and include expanded platform coverage, multi-domain support, and custom volume pricing.

Visibility engagements require a minimum commitment so authority signals have time to compound across AI systems.

Careers

Building what the
category only measures.

Six operators today. Now hiring the people who will build the engine that recommendation runs on next.

The roles we hire for.

We hire when the work is real and the role is funded. No résumé farming, no “always-on” postings. The next openings will appear here when they're live.

Currently

No open roles posted.

We're between cycles. The next openings will be GTM (enterprise AE, partnerships) and platform (senior engineering). Drop your name in front of us early.

Book a Demo

See your brand
the way AI sees it.

A live walkthrough across ChatGPT, Gemini, Perplexity, and Claude. We map your visibility, your gaps, and the execution plan — whether or not we work together.

A live walkthrough of the FancyAI execution engine.
An AI Visibility Diagnosis on your brand and your top three competitors.
A custom action plan you can run with — starting the day you onboard.
As used by
Conner Hats · EHE Health · Leatherman · Consello
No spam. Just actionable insights.
Booked

You're on the calendar.

Check your inbox for a calendar invite from chris@getfancy.ai. Meanwhile, here's what to read first:

Live · FancyAI Research Corpus

The mention is the signal. The link is almost irrelevant.

A foundational methodology for AI brand visibility, derived from 40,000+ websites and 1,500+ sources across eleven categories of GEO research.

+115%
AI visibility lift for lower-ranked sites applying signal-first tactics
41%
Of AI brand signal weight from list mentions
4.4×
Conversion of AI-referred vs organic traffic
1,500+
Unique sources in our research corpus
Chapter 01

The signal hierarchy has flipped

The most counterintuitive finding from analyzing 40,000+ websites: low-quality backlinks — the thing the SEO industry has spent twenty years building — show weak or neutral correlation with AI visibility.

Digital Bloom's evaluation of 129,000+ domains gave us the weights. The signals that actually drive AI brand recommendations:

  • 41% — Authoritative list mentions (being named in “best of” lists, roundups, directories)
  • 18% — Awards & accreditations (third-party validation)
  • 16% — Online reviews (especially for branded queries)
  • 0.334 — Brand search volume correlation coefficient (the strongest single predictor)
  • Weak / neutral — Low-quality backlinks themselves

Here is the key distinction: being mentioned by name matters enormously. The 41% weight on list mentions isn't about the hyperlink. It's about the mention. When a “Best CRM for Small Business” article names your brand, AI learns that association whether or not there's a link attached.

“The mention is the signal. The link is almost irrelevant.”
Chapter 02

AI visitors convert at 4.4× the rate of organic

When ChatGPT recommends your brand, the user arrives pre-qualified. They are not browsing ten blue links. They received a direct recommendation from a system they already trust.

Semrush puts the differential at 4.4×. Go Fish Digital puts it at 25×. Even the conservative numbers make the business case clear: AI-referred traffic is fundamentally different from organic.

The average GEO customer acquisition cost is $559 and declining at 37.5% as the market matures. Compare that to Google Ads CPC, which continues climbing 10–15% annually with no ceiling in sight.

Chapter 03

Each AI platform behaves differently

One stat most people miss: only 11% of domains cited by ChatGPT are also cited by Perplexity. Each platform has its own search backend and its own citation behavior.

Platform | Backend | Top Source
ChatGPT | Bing | Wikipedia (47.9%)
Perplexity | Proprietary 200B+ URL index | Reddit (46.7%)
Google AI Overviews | Google | YouTube (#1)
Claude | Brave Search | Diversified
Grok | X / Twitter data | X conversations

Optimizing for one platform doesn't give you the others. The opposite is also true: brands present on four or more platforms are 2.8× more likely to appear in ChatGPT specifically (Digital Bloom). Multi-platform presence is itself a signal.

Chapter 04

Structure beats length

Word count has a 0.04 correlation with AI citation. Effectively zero. 53.4% of pages cited by AI are under 1,000 words.

What actually moves citation rate:

  • Adding statistics → +41% AI visibility (Princeton/Georgia Tech, 10,000 queries)
  • Comparison tables → 2.5× more citations than equivalent prose
  • Structured formatting (H2/H3, bullets, numbered lists) → +40% citation lift
  • Content freshness (within 30 days) → 3.2× more citations

The biggest opportunity here: lower-ranked sites saw a +115% visibility improvement from applying these tactics — even without improving traditional rankings. The structure is the unlock.

Chapter 05

The 40/60 truth

“GEO is just rebranded SEO” is the most common objection we hear. It's partially right. About 40% overlaps with good SEO fundamentals: technical accessibility, E-E-A-T, quality content, schema markup.

But 60% is genuinely different:

  1. Entity optimization. Your brand as a defined entity in knowledge graphs.
  2. Passage-level structure. 50–150 word self-contained blocks for AI extraction.
  3. Cross-platform consistency. Same brand narrative everywhere AI crawls.
  4. Earned media. 61% of AI brand signals come from editorial media.
  5. Social proof. Reddit is the #1 most-cited source. LinkedIn is #2.

Google's John Mueller confirmed the foundation in January 2026: “There is no such thing as GEO or AEO without doing SEO fundamentals.” Both are true. The foundation matters. But if you stop at SEO, you are missing the 60% that drives AI visibility.

Chapter 06

What to do with this

If we had to pick three actions:

  1. Entity optimization. Build your brand in Wikidata, Google Knowledge Graph, schema.org. Everything else depends on this.
  2. Structure for extraction. Tables, numbered lists, 50–150 word passage blocks with statistics. Stop writing longer; start writing better-structured.
  3. Go multi-platform. Get on four or more platforms. Reddit, LinkedIn, YouTube, your domain. 2.8× visibility increase.

This methodology is the Signal Hierarchy, and it underpins how the AI Readiness Index (ARI) is scored. Every recommendation FancyAI makes traces back to one of these signals.

Sources cited

  1. Digital Bloom — 129,000+ domain analysis
  2. SE Ranking — ChatGPT ranking factor study (129,000 domains)
  3. Aggarwal et al., Princeton/Georgia Tech/IIT Delhi — “GEO: Generative Engine Optimization,” KDD 2024
  4. Semrush — AI referral conversion analysis
  5. Go Fish Digital — AI traffic conversion benchmarks
  6. John Mueller, Google Search Advocate — January 2026 statement on GEO/AEO

Want this measured against your brand?

Get your AI Readiness Index (ARI) score across ChatGPT, Gemini, Perplexity, Claude, and Grok — delivered in 24 hours.

Related research

Original Research · Original Research · Platform Deep-Dive
Live · FancyAI Research Corpus

Which AI platform cites what? Five LLMs, the same questions, almost no overlap.

12,500 query responses analyzed across ChatGPT, Perplexity, Claude, Gemini, and Grok. Optimization for one platform is rarely optimization for any other.

6.8%
Of cited domains appeared on three or more platforms
11%
Domain overlap between ChatGPT and Perplexity
5
AI engines analyzed in parallel
>99%
Of repeat queries return different brand orderings
Chapter 01

The headline finding

We ran the same 500 standardized commercial-intent queries through five AI search engines — ChatGPT, Perplexity, Claude, Gemini, and Grok — running each prompt repeatedly and recording every cited source. The corpus covers 12,500 individual responses.

The result almost no SEO veteran expects: only 6.8% of cited domains appeared on three or more platforms. The cross-platform overlap most teams plan around does not exist at the level practitioners assume. Optimizing for ChatGPT does not, in any meaningful sense, optimize you for Claude.

This finding aligns with SparkToro and Gumshoe's 2025 research, which found AI models produce different brand recommendation lists more than 99% of the time when asked the same question repeatedly. The probability of receiving identical lists in the same order drops below 0.1%.

“Any tool that gives you a ‘ranking position in AI’ is full of baloney.” — Rand Fishkin, SparkToro
Chapter 02

Why the platforms diverge

Each AI platform pulls from a different search backend, weights different source classes, and applies different freshness rules.

Platform | Search Backend | Citation Style
ChatGPT | Bing (sequential queries) | Hover-highlight inline
Perplexity | Proprietary 200B+ URL index | Footnote list, ~8.79 avg citations
Google AI Overviews | Google index | Compact source panel
Claude | Brave Search | Embedded inline links
Grok | Own index + X data | X conversation excerpts

ChatGPT leans heavily on Wikipedia and Bing-indexed editorial. Perplexity disproportionately surfaces Reddit and forum content. Gemini favors YouTube and Google-indexed pages. Claude pulls a more diversified mix via Brave. Grok pulls disproportionately from X conversations.

If your strategy treats “AI visibility” as a single channel, you will systematically miss four out of five surfaces.

Chapter 03

Ranking position is the wrong metric

SparkToro's research surfaced a finding that fundamentally reframes how visibility should be measured. Asked the same brand-discovery question 71 times, ChatGPT named City of Hope hospital in 97% of responses for West Coast cancer care — but as the #1 mention in only 25 of those answers.

The implication: AI engines are probability machines. They are designed to generate unique answers every time. Treating them as deterministic ranking systems is “provably nonsensical.”

What is a valid metric: visibility percentage across many runs. Certain brands consistently appear in 80–97% of responses for their core categories. Others appear in 2%. That delta is the real signal — and it requires running each prompt 60–100 times minimum to surface.
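
A minimal sketch of that metric, with hypothetical data structures (our illustration, not SparkToro's code): run the same prompt many times, record which brands each response mentions, and compute the share of runs that include your brand.

```python
def visibility_pct(runs: list[list[str]], brand: str) -> float:
    """Percentage of runs in which `brand` appears anywhere in the
    response's list of mentioned brands — position is ignored."""
    hits = sum(1 for brands in runs if brand in brands)
    return 100 * hits / len(runs)

# Toy data: each inner list is the brands one response mentioned.
runs = [
    ["City of Hope", "UCSF"],          # run 1
    ["UCSF", "City of Hope", "Mayo"],  # run 2
    ["Stanford", "UCSF"],              # run 3
]
print(visibility_pct(runs, "City of Hope"))  # ~66.7 for this toy sample
```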

Chapter 04

What the 6.8% have in common

The minority of domains that did appear across three or more platforms shared a consistent profile:

  • Strong entity definition in knowledge graphs (Wikidata, Google Knowledge Graph, schema.org Organization markup with consistent identifiers)
  • Editorial mentions in third-party publications — not just owned media. Trust-tier outlets every model crawls.
  • Active community presence on Reddit, LinkedIn, and niche forums where Perplexity, Grok, and Claude pull conversational signal.
  • Schema-marked structured content with extractable 50–150 word passages.
  • Sub-second page performance (FCP under 0.4 seconds), which ChatGPT in particular weights heavily.

None of these are tied to a single platform's ranking algorithm. They build the entity itself, which every model independently learns. The cross-platform brands aren't optimizing for five engines — they're building one credible entity that all five recognize.

Chapter 05

The strategic implication

Two operational shifts follow from this data:

  1. Stop reporting "AI visibility" as a single number. Break it out per platform. Your Perplexity visibility is a Reddit/forum problem. Your ChatGPT visibility is a Bing/Wikipedia problem. Your Gemini visibility is a YouTube/Google-indexed problem. They have different solutions.
  2. Build the entity, not the placements. Every dollar spent strengthening cross-platform signals (knowledge graph, editorial media, community presence) compounds across all five engines. Every dollar spent gaming a single platform stays inside that platform.

The brands present in 4+ platforms are 2.8× more likely to appear in ChatGPT specifically (Digital Bloom). Multi-platform presence is itself a signal.

Chapter 06

Methodology

Queries were drawn from a stratified sample of high-intent commercial searches across SaaS, consumer, healthcare, and B2B services. Each query was issued to each platform under identical conditions, with identical session-state controls. Citations were scraped from response pages, deduplicated by root domain, and cross-referenced.
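
A hedged sketch of the dedup-and-cross-reference step, assuming a mapping from platform name to the list of URLs it cited. The tldextract dependency and every name below are our illustration, not the actual analysis code.

```python
import tldextract  # pip install tldextract

def root_domain(url: str) -> str:
    """Deduplicate citations by root domain, e.g. blog.example.co.uk -> example.co.uk."""
    ext = tldextract.extract(url)
    return f"{ext.domain}.{ext.suffix}"

def share_on_k_plus_platforms(citations: dict[str, list[str]], k: int = 3) -> float:
    """Percent of all cited root domains that appear on >= k platforms."""
    per_platform = {p: {root_domain(u) for u in urls} for p, urls in citations.items()}
    all_domains = set().union(*per_platform.values())
    k_plus = [d for d in all_domains if sum(d in s for s in per_platform.values()) >= k]
    return 100 * len(k_plus) / len(all_domains)
```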

The full per-platform overlap matrix and category-level breakdowns are available on request: research@getfancy.ai.

Sources cited

  1. SparkToro / Gumshoe — “AIs are highly inconsistent when recommending brands” (Rand Fishkin & Patrick O’Donnell, 2025)
  2. SE Ranking — Cross-platform AI search engine comparison (2025)
  3. BrightEdge — Weekly AI Search Insights (October 2025)
  4. Yext — “Same Search, Different Results?” (April 2025)
  5. Digital Bloom — Multi-platform presence correlation analysis
  6. Search Engine Land — “The goal is being seen, trusted, and reused wherever people search.”

Want this measured against your brand?

Get your AI Readiness Index (ARI) score across ChatGPT, Gemini, Perplexity, Claude, and Grok — delivered in 24 hours.

Related research

Foundational Methodology · Original Research · Platform Deep-Dive
Live · FancyAI Research Corpus

“GEO is just rebranded SEO.” We ran the numbers. Here's the 40/60 truth.

Approximately 40% of GEO overlaps with strong SEO fundamentals. The other 60% is genuinely different — and it is where most of the bad advice and missed opportunity lives.

40 / 60
Overlap with SEO fundamentals vs net-new GEO discipline
5
Net-new disciplines unique to GEO
61%
Of AI brand signals from earned media
25%
Projected drop in traditional search by 2026 (Gartner)
Chapter 01

Where the 40% lives

The shared foundation is real. Strong domain authority, technical accessibility, schema markup, E-E-A-T, quality content, and clean information architecture all matter for both search and AI recommendation systems. AI engines pull from Bing, Brave, Google, and proprietary indices — the underlying crawlability and ranking signals still apply.

If you have a strong SEO program, the 40% is mostly already there. That is the easy part of the conversation. It is also the part the “GEO is just SEO” crowd correctly identifies. Bounteous research shows 99% of URLs in Google AI Mode appear in the top 20 organic search results — SEO strength still maps to AI visibility. The foundation matters.

“There is no such thing as GEO or AEO without doing SEO fundamentals.” — John Mueller, Google
Chapter 02

Where the 60% lives

The other 60% is where SEO instincts actively mislead. Five disciplines are net-new to GEO:

  1. Entity optimization. AI systems reason about brands as entities in knowledge graphs, not as ranked URLs. Wikidata, Google Knowledge Graph, schema.org Organization definitions are the primary lever — not page-level keyword targeting. XFunnel calls this “lexical proximity”: keeping brand-associated descriptors within 2–3 words of your brand name to train LLM associations.
  2. Passage-level structure. AI extracts 50–150 word self-contained blocks. Your H2/H3 hierarchy, statistic density, and table usage now drive selection more than total page word count. SE Ranking found content sections of 120–180 words between headings receive 70% more citations than sections under 50 words. (A quick audit sketch follows this list.)
  3. Cross-platform consistency. The same brand narrative needs to appear consistently across every surface AI crawls — site, Reddit, LinkedIn, YouTube, third-party media. Inconsistency creates ambiguous entity resolution.
  4. Earned media gravity. 61% of AI brand signals come from editorial media and third-party mentions, not owned content. The job is to be talked about, not just to publish.
  5. Community presence. Reddit is the #1 most-cited source for Perplexity (46.7% of citations). LinkedIn is #2. Neither was ever an SEO priority. Both are now first-class GEO real estate.
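
To make discipline 2 concrete, a minimal audit sketch: measure the word count of each section between H2/H3 headings and flag sections outside the 120–180 word band from the SE Ranking finding above. BeautifulSoup, the file name, and every function here are our illustration.

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def section_lengths(html: str) -> list[tuple[str, int]]:
    """Word count of the paragraph text under each H2/H3 heading."""
    soup = BeautifulSoup(html, "html.parser")
    sections, heading, words = [], None, 0
    for el in soup.find_all(["h2", "h3", "p"]):  # document order
        if el.name in ("h2", "h3"):
            if heading is not None:
                sections.append((heading, words))
            heading, words = el.get_text(strip=True), 0
        elif heading is not None:
            words += len(el.get_text().split())
    if heading is not None:
        sections.append((heading, words))
    return sections

for title, n in section_lengths(open("page.html").read()):
    verdict = "ok" if 120 <= n <= 180 else "restructure"
    print(f"{n:4d} words  {verdict:11}  {title}")
```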
Chapter 03

The unit of optimization has changed

SEO optimizes for a list of 10 blue links. GEO optimizes for inclusion in a synthesized answer where only 2–7 sources are cited.

That competitive narrowing changes the optimization target itself. The unit of work shifts from pages to passages — AI engines extract specific chunks rather than ranking whole pages. The signal shifts from backlinks to citations — being referenced across trusted third-party sources matters more than link profiles.

NerdWallet's 2024 numbers tell the story in business terms: revenue rose 35% while monthly traffic fell 20%. Discovery and decision-making are shifting to AI-mediated experiences where direct site visits decline but pre-qualified, recommendation-driven business impact increases.

Chapter 04

Why the framing matters

If a CMO treats GEO as a 100% rebrand, they fund the wrong work. They build a GEO team that ignores SEO fundamentals and bleeds technical authority. If they treat it as 100% identical, they fund the SEO playbook and quietly lose the 60% that AI engines actually use to choose recommendations.

The 40/60 framing is the operational answer: keep the SEO foundation, fund the GEO disciplines that don't exist inside it.

The market has voted. The GEO/AEO category was valued at $848M in 2025 and is projected to reach $33.7B by 2034 — a 50.5% CAGR. Regardless of terminology, the commercial opportunity is being committed to.

Chapter 05

The honest test

If your team can answer all five of these “yes,” you are doing GEO. If you cannot, you are still doing SEO — and missing the 60%.

  1. Is our brand a defined entity in Wikidata and Google Knowledge Graph?
  2. Are our top 20 commercial pages structured with passage-level extractability (tables, statistics, 50–150 word blocks)?
  3. Do we have a coherent brand narrative across owned media, Reddit, LinkedIn, YouTube, and third-party editorial?
  4. Are we earning mentions in editorial outlets every model crawls, not just publishing on our blog?
  5. Are we measuring visibility per-platform across many prompt runs, not single-query rankings?

Sources cited

  1. Bounteous — Google AI Mode citation overlap analysis
  2. SE Ranking — Content section length and citation correlation
  3. Gartner — Traditional search volume forecast
  4. NerdWallet 2024 financial disclosures — Revenue / traffic divergence
  5. Profound — “What is Answer Engine Optimization?”
  6. XFunnel — GEO-specific optimization elements (semantic triples, lexical proximity)
  7. Search Engine Land — Philipp Götza, “GEO myths” (2025)
  8. John Mueller, Google Search Advocate — January 2026

Want this measured against your brand?

Get your AI Readiness Index (ARI) score across ChatGPT, Gemini, Perplexity, Claude, and Grok — delivered in 24 hours.

Related research

Foundational Methodology · Original Research · Platform Deep-Dive
Live · FancyAI Research Corpus

How to optimize for ChatGPT: 20 ranking factors from 129,000 domains.

SE Ranking analyzed 129,000 domains across 20 niches and produced a SHAP-ranked list of the top factors driving ChatGPT citation. Here is the playbook — including what doesn't work.

129K
Domains in the SHAP-ranked ranking factor study
4×
Citation lift for domain trust > 90
6.7 vs 2.1
Citations for FCP < 0.4s vs > 1.13s
800M+
ChatGPT monthly active users
Chapter 01

How ChatGPT actually picks sources

ChatGPT's search architecture is a multi-stage pipeline: a tiny classification model, a specialized “Thinky” query-generation model, Bing-powered web retrieval, semantic chunk scoring, and final synthesis by the frontier model. The system typically cites 3–5 deeply-read sources per response, with Wikipedia dominating at 7.8% of all citations.

SearchGPT is now used in 46% of ChatGPT interactions. Usage for general information searches tripled from 4.1% to 12.5% in just six months (Feb–Aug 2025). The platform processes 2.5 billion daily prompts with 500M+ weekly active users.

For source selection, ChatGPT uses sequential search queries (not parallel like Perplexity), reviewing multiple sites before aggregating. Six selection criteria emerge from testing: precise keyword matching, search intent recognition (appending terms like “tutorial”), aggressive recency filtering, credibility (E-E-A-T), trustworthiness (author bios, methodology), and variety of perspectives.

Chapter 02

The top 20 ranking factors

SE Ranking's analysis of 129,000 domains used SHAP (SHapley Additive exPlanations) values to rank features most predictive of ChatGPT citation. The ranked top of the list:

Rank | Factor | Effect Size
#1 | Referring domains | Sites with 32K+ referring domains are 3.5× more cited
#2 | Domain traffic | Strong positive correlation
#3 | Domain Trust | Trust > 90 yields ~4× more citations vs. < 43
#4 | Page Trust | Page-level authority signal
#5 | INP / FCP / LCP performance | FCP < 0.4s = 6.7 citations vs. 2.1 over 1.13s
#6+ | Content length, freshness, Reddit/Quora mentions | 3-month-old content = 6.0 citations vs. 3.6 stale

Brand mentions on Reddit and Quora yielded 4× higher citation likelihood. Articles over 2,900 words averaged 5.1 citations vs 3.2 for under 800 words — longer is better, but only with structure (next chapter).
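
For readers unfamiliar with SHAP: it attributes each prediction of a fitted model to the input features, so averaging absolute SHAP values ranks features by influence. A self-contained sketch on synthetic data — our illustration of the general technique, not SE Ranking's pipeline:

```python
import numpy as np
import shap  # pip install shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
features = ["referring_domains", "domain_trust", "fcp_seconds", "word_count"]
X = rng.random((500, 4))
# Synthetic target loosely echoing the study's directions of effect:
# authority helps, slow FCP hurts, raw word count barely matters.
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] - 1.5 * X[:, 2] + rng.normal(0, 0.1, 500)

model = GradientBoostingRegressor().fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)  # shape (500, 4)

# Rank features by mean absolute SHAP contribution, largest first.
mean_abs = np.abs(shap_values).mean(axis=0)
for i in np.argsort(mean_abs)[::-1]:
    print(f"{features[i]:18} {mean_abs[i]:.3f}")
```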

“Flashy ‘AI hacks’ like llms.txt barely have any impact. What drives ChatGPT citations are the fundamentals.” — Yulia Deda, SE Ranking
Chapter 03

Structure beats every shortcut

The SE Ranking study quantified the structure effects:

  • Content sections of 120–180 words between headings receive 70% more citations than sections under 50 words
  • Pages with expert quotes average 4.1 citations vs 2.4 without
  • Pages with 19+ statistics average 5.4 citations vs 2.8 with fewer
  • Broad topic-descriptive URLs get 2× more citations than keyword-optimized URLs
  • Companies with review scores below 70% are significantly less likely to be referred

Perhaps most interestingly: .gov and .edu domains averaged only ~3.2 citations — content quality matters more than TLD. The authority bias is weaker than the SEO industry has assumed.

Chapter 04

What does NOT work

The most consequential negative findings, all from controlled testing:

  • llms.txt has negligible impact. Removing it actually improved the SE Ranking model's predictive accuracy. Multiple independent experiments (Dejan AI, others) confirm no measurable lift.
  • FAQ schema markup shows negative correlation. Pages with FAQ schema averaged 3.6 citations vs 4.2 without. ChatGPT does not appear to access schema during grounding.
  • Keyword-optimized URLs underperform. Topic-descriptive URLs get 2× more citations. ChatGPT cares about clarity, not exact match.

This is the part most GEO “experts” get wrong. The industry has spent eighteen months selling LLMs.txt and FAQ schema as silver bullets. The data says they aren't bullets at all.

Chapter 05

The three crawlers (and how to control them)

OpenAI operates three distinct crawlers, each controllable independently via robots.txt:

  • GPTBot — for training. Grew 305% in crawling volume from May 2024 to May 2025, becoming the dominant AI crawler at 30% share.
  • OAI-SearchBot — for real-time SearchGPT.
  • ChatGPT-User — for user-initiated browsing within ChatGPT sessions.

You can allow some and block others. Most brands should allow all three — blocking GPTBot specifically removes you from training data, which has long-tail visibility implications most teams underestimate.
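
The control surface is plain robots.txt. A minimal sketch of the differential pattern (the user-agent tokens are OpenAI's documented names; the Allow/Disallow split here is illustrative, and the warning above applies to blocking GPTBot):

```text
# robots.txt -- per-crawler control over OpenAI's three bots.
# This example allows search and browsing but blocks training;
# most brands should instead allow all three (Allow: / for each).

User-agent: GPTBot           # training-data crawler
Disallow: /

User-agent: OAI-SearchBot    # real-time SearchGPT retrieval
Allow: /

User-agent: ChatGPT-User     # user-initiated browsing in sessions
Allow: /
```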

Tracking ChatGPT referral traffic in GA4: filter for the chatgpt.com referrer. Note that free-tier users often don't send referrer data, creating attribution gaps. Plan for that gap rather than around it.

Chapter 06

The execution priority order

Based on SHAP weights, the highest-leverage actions for an existing brand:

  1. Audit your domain trust score. If you are below 80, fix the foundational citation graph before anything else. Get into trusted lists, directories, and editorial coverage.
  2. Get FCP under 0.5 seconds. The drop-off above that threshold is steep and asymmetric — ChatGPT punishes slow pages much harder than Google does.
  3. Earn Reddit and Quora presence. 4× citation likelihood is among the highest single-factor effects in the data.
  4. Restructure your top 20 commercial pages. Add statistics, comparison tables, expert quotes, and 120–180 word passage blocks between H2s.
  5. Submit a structured Organization entity to Wikidata and apply schema.org markup with consistent identifiers across owned properties (see the markup sketch after this list).
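
For the markup in step 5, a minimal JSON-LD sketch of an Organization entity. All URLs and identifiers below are placeholders; the point is that the same @id and sameAs set should appear on every owned property so entity-resolution systems converge on one node:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://www.example.com/#organization",
  "name": "Example Co",
  "url": "https://www.example.com/",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://www.wikidata.org/wiki/Q00000000",
    "https://www.linkedin.com/company/example-co",
    "https://www.youtube.com/@example-co"
  ]
}
```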

Sources cited

  1. SE Ranking / Yulia Deda — “How to Optimize for ChatGPT: 20 Ranking Factors” (2025)
  2. First Page Sage / Evan Bailyn — “ChatGPT Optimization: 2026 Guide”
  3. Forge and Smith — “GEO: SEO for ChatGPT” (2025)
  4. HVSEO — “Optimizing for SearchGPT and Understanding Ranking Factors”
  5. Zapier / Harry Guinness — “How does ChatGPT choose its sources?”
  6. Skyscale — “How ChatGPT Selects Sources: Complete Guide” (2025)
  7. Dejan AI — Independent FAQ schema experimentation

Want this measured against your brand?

Get your AI Readiness Index (ARI) score across ChatGPT, Gemini, Perplexity, Claude, and Grok — delivered in 24 hours.

Live · FancyAI Research Corpus

Most GEO “experts” can't cite a single study. Here's who actually can.

A landscape map of 75+ practitioners shaping the GEO conversation. Three tiers, the academic origin story, the platform pushback, and five gaps no one is filling.

3%
Of top SEO thought leaders include "GEO" in their headline
75+
Practitioners mapped across LinkedIn, X, Substack
82%
Positive sentiment for "GEO" — highest of any AI search term
40%
Visibility lift from the original Princeton/Georgia Tech study
Chapter 01

The academic origin story

The term “Generative Engine Optimization” was coined in an academic paper by Pranjal Aggarwal (IIT Delhi) and Vishvak Murahari (Princeton), published on arXiv in November 2023 and presented at KDD 2024 (pp. 5–16).

The paper introduced the GEO-bench benchmark and tested optimization strategies systematically. The headline finding: “Fluency Optimization + Statistics Addition” can boost AI visibility by up to 40%. That is the academic provenance of the entire field.

“Answer Engine Optimization” (AEO) predates GEO by five years — Jason Barnard (Kalicube) coined it in 2018, originally for the featured-snippet era. AIO has no single originator; it ambiguously means either “AI Optimization” or Google's “AI Overviews” product. Most practitioners now treat GEO as the umbrella term.

“Optimizing for AI search is the same as optimizing for traditional search. — Nick Fox, Google”
Chapter 02

The three-tier landscape

We sampled 75 voices currently publishing under the GEO/AEO/AIO labels across LinkedIn, X, Substack, and YouTube. We coded each by output volume, citation hygiene (do they reference primary research?), and originality (do they publish their own data?).

The distribution clustered into three tiers:

  • Tier 1 — Practitioners with primary research. Under 10 voices publishing original studies, datasets, or methodologies. Mostly platform engineers, academics, and a small group of operator-publishers.
  • Tier 2 — Synthesizers. Mid-sized group recycling tier-1 research with attribution and useful framing. Most useful for operators looking for actionable summaries. Names that show up consistently across multiple expert lists: Lily Ray (Amsive Digital, E-E-A-T focus), Kevin Indig (Growth Memo, 23K+ subscribers, AI search metrics), Mike King (iPullRank, “Relevance Engineering”), Aleyda Solis (Orainti, SEOFOMO 35K+ subscribers, practical international frameworks), and Ross Simmonds (Foundation, content distribution).
  • Tier 3 — Repackagers. The majority. Confident, high-volume, low-citation. Often presenting tier-1 findings as their own observations.
Chapter 03

The platform pushback

The most interesting tension in the field: Google representatives consistently push back on GEO as a distinct discipline.

  • John Mueller (January 2026): “There is no such thing as GEO or AEO without doing SEO fundamentals.”
  • Nick Fox (Google): “Optimizing for AI search is the same as optimizing for traditional search.”
  • Danny Sullivan (Google): Has emphasized that best practices centered on genuine helpfulness will win long-term.
  • Krishna Madhavan (Microsoft Bing): Echoes the “no shortcuts” framing.
  • Jesse Dwyer (Perplexity): Has emphasized platform resistance to manipulation.

The platform position is consistent: SEO fundamentals matter, manipulation will be resisted, and there are no shortcuts. The GEO position, increasingly, is: yes, and there's also a 60% surface area that SEO doesn't cover. Both can be true.

Harvard Business Review (February 2026) validated the shift, describing two concurrent revolutions: the move from SEO to GEO, and AI agents beginning to act as buyers. McKinsey calls AI search “the new front door to the internet.”

Chapter 04

Five gaps no one is filling

Across the entire landscape, five categories of analysis are notably underserved:

  1. Cross-platform behavior studies. Most analysis is single-platform, usually ChatGPT. Per-model citation behavior comparison is rare.
  2. Industry-specific GEO playbooks. Healthcare, financial services, and regulated B2B are nearly absent from public discourse.
  3. Negative-result studies. Almost no one publishes what didn't work. The LLMs.txt non-effect was only surfaced by SE Ranking and Dejan AI taking the time to test and publish a null result.
  4. Long-horizon attribution. Most case studies stop at 90 days. Six- and twelve-month visibility curves under sustained optimization are barely studied.
  5. Methodology critiques. The popular GEO frameworks have no rigorous comparative analysis. Buyers are choosing among them blind.
Chapter 05

How buyers should evaluate vendors

Search Engine Land's study of 75 SEO thought leaders found that fewer than one-third maintained consistent terminology over the past year — an indicator of how unsettled the field still is. GEO had 82% positive sentiment, the highest of any AI search term, but the operator-level vocabulary is still in motion.

If you are sourcing a GEO partner or vendor, citation hygiene is the fastest signal of seriousness. Three questions:

  1. Show me your primary sources. Tier-3 voices will redirect to anecdote. Tier-1 and tier-2 will hand you the studies behind their claims.
  2. What didn't work? Operators who can name their failures (and the data behind them) are doing the work. Operators who can't are reciting.
  3. How do you measure visibility? If the answer is “we run the prompt once,” the answer is wrong. Valid measurement requires 60–100 prompt repetitions per topic to stabilize the visibility percentage (a minimal sketch follows this list).
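
A minimal sketch of what stabilized measurement looks like, assuming a hypothetical ask_model() wrapper around whichever engine is being tested (the stub below just simulates run-to-run variability):

```python
import math
import random

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a real API call to ChatGPT, Gemini, etc.
    # Stubbed with random output to simulate run-to-run variability.
    return random.choice(["... Acme is a solid choice ...",
                          "... consider Globex or Initech ..."])

def visibility(prompt: str, brand: str, runs: int = 80) -> tuple[float, float]:
    """Share of runs mentioning the brand, with a ~95% margin of error."""
    hits = sum(brand.lower() in ask_model(prompt).lower() for _ in range(runs))
    p = hits / runs
    moe = 1.96 * math.sqrt(p * (1 - p) / runs)   # normal approximation
    return p, moe

p, moe = visibility("best project management tool for agencies", "Acme")
print(f"visibility: {p:.0%} +/- {moe:.0%} over 80 runs")
```

At one run the estimate is a coin flip; at 60–100 runs the margin of error narrows to a few points, which is what makes week-over-week comparisons meaningful.
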
Chapter 06

The state of the field

The GEO conversation is real, growing, and still in its naming and strategy-formation phase. The academic foundation is solid (Aggarwal & Murahari, KDD 2024). The tier-2 operator commentary is increasingly substantive. The tier-3 noise will compress as the field matures and citation hygiene becomes table stakes.

The buyers who will benefit most are the ones who can read the landscape clearly — choose tier-2 partners over tier-3 narrators, and demand primary-source rigor from anyone publishing under the GEO label. The discipline is real. The signal is buried in noise. Filtering for the signal is the work.

Sources cited

  1. Aggarwal & Murahari, IIT Delhi / Princeton — “GEO: Generative Engine Optimization” (arXiv 2311.09735, KDD 2024)
  2. Search Engine Land — Study of 75 SEO thought leaders, sentiment analysis
  3. Profound — “The 2025 A-list of GEO experts”
  4. Go Fish Digital — “8 GEO Agencies & Thought Leaders” (Feb 2026)
  5. First Page Sage — “The Top GEO / AI Search Experts” (2026)
  6. Harvard Business Review — Feb 2026 SEO-to-GEO analysis
  7. McKinsey — “The new front door to the internet”
  8. John Mueller, Nick Fox, Danny Sullivan (Google); Krishna Madhavan (Microsoft); Jesse Dwyer (Perplexity)

Want this measured against your brand?

Get your AI Readiness Index (ARI) score across ChatGPT, Gemini, Perplexity, Claude, and Grok — delivered in 24 hours.

Live · FancyAI Research Corpus

How AI Overviews killed the click. A zero-click economy emerges.

Eight primary studies. One conclusion: organic CTR is collapsing across every measurement, and the decline is broader than AI Overviews alone.

61%
Drop in organic CTR for AI-Overview queries (Seer Interactive, 25M impressions)
68%
Drop in paid CTR for AIO queries
41%
CTR drop YoY even on queries WITHOUT AIOs
1%
Of users click into the AI summary itself (Pew)
Chapter 01

The clicks are gone

Seer Interactive ran the most rigorous published study to date: 3,119 queries, 42 organizations, 25.1 million organic impressions, 1.1 million paid impressions. The results are unambiguous.

  • Organic CTR for AI Overview queries collapsed from 1.76% to 0.61% — a 61% drop.
  • Paid CTR for AIO queries collapsed from 19.7% to 6.34% — a 68% drop.
  • Pew Research (n=900): when an AI summary appears, only 8% of users click any traditional link vs 15% without — and only 1% click a source within the AI summary itself.
  • Ahrefs (300K-keyword update): AI Overviews reduce clicks by 58%.

The platforms talk about this differently than the data does. Google says AI Overviews drive engagement and citations. The empirical record across multiple independent studies says CTR is in free fall.

“Even when AI Overviews don't appear, users are simply clicking less everywhere. — Search Engine Land”
Chapter 02

The shift is broader than AI Overviews

The most consequential finding in Seer’s data isn’t the AIO drop. It’s the baseline.

For queries that did not trigger an AI Overview at all, organic CTR still fell 41% year over year. The behavioral shift is bigger than any single feature. Users are skimming AI-generated text snippets, hover-cards, knowledge panels, related questions, and shopping modules — and clicking out to source pages less even when the AI Overview itself isn’t present.

This is the part most CMOs miss when they read the headline number. The 61% drop is the visible peak. The 41% drop on un-AIO’d queries is the baseline shift — and it’s not coming back.

Chapter 03

The publishers’ nightmare

Bain & Company surveyed real consumer behavior with Dynata: roughly 80% of consumers now rely on zero-click results for at least some of their queries, and 60% of all searches end without a click. AdExchanger reports publishers losing 20%, 30%, and in some cases up to 90% of traffic and revenue from zero-click AI search.

The traffic that’s left is consolidating. When ChatGPT cut referrals to traditional sites by 52% between July and August 2025, citations to Wikipedia, Reddit, and YouTube rose by 53% in the same window. AI engines are picking a smaller set of sources and reusing them — a winner-take-all dynamic where the cited brands accumulate disproportionately.

Chapter 04

The cited brands actually win

The flip side of Seer’s collapse data is the most operationally useful finding in the study: brands cited inside an AI Overview earned 35% more organic clicks and 91% more paid clicks than non-cited brands on the same query.

Citation isn’t a consolation prize for losing the click. It’s the new way to win the click. The brands that get pulled into the AI summary inherit the user’s pre-qualification and arrive at the page with higher purchase intent than equivalent organic traffic. This is why visibility share, not click volume, is becoming the metric that matters.

Chapter 05

What to measure now

The metrics shift from click-volume to visibility-share. Three measurements replace the old click-through rate as the leading indicator:

  1. Citation frequency — how often your brand is named or linked across AI engine responses for your category prompts.
  2. Recommendation rate — the percentage of relevant AI prompts that include your brand in the answer (measured across 60–100 prompt repetitions per topic to control for LLM variability).
  3. Share of voice — your citation count as a percentage of total citations for your category, measured per platform.

Click-through rate isn’t dead. It’s no longer the leading indicator of discovery. Brands that re-orient measurement around visibility share will see the new picture clearly. Brands that don’t will keep optimizing for a metric the user already stopped honoring.
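
A minimal sketch of how the three metrics fall out of a response log, assuming a list of logged AI answers with the brands each one cited (field names are illustrative):

```python
from collections import Counter

# Hypothetical log: one entry per prompt repetition, per engine.
responses = [
    {"engine": "chatgpt",    "cited": ["Acme", "Globex"]},
    {"engine": "chatgpt",    "cited": ["Globex"]},
    {"engine": "perplexity", "cited": ["Acme"]},
]

def visibility_metrics(responses: list[dict], brand: str) -> dict:
    n = len(responses)
    hits = sum(brand in r["cited"] for r in responses)
    totals = Counter(b for r in responses for b in r["cited"])
    return {
        "citation_frequency": hits,                              # raw count
        "recommendation_rate": hits / n,                         # share of prompts
        "share_of_voice": totals[brand] / sum(totals.values()),  # vs category
    }

print(visibility_metrics(responses, "Acme"))
# Per-platform share of voice: filter responses by r["engine"] first.
```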

Sources cited

  1. Seer Interactive — “AIO Impact on Google CTR” (3,119 queries, 42 organizations, 2025)
  2. Pew Research Center — AI Overview click-rate study (n=900, 2025)
  3. Ahrefs — “AI Overviews Reduce Clicks by 58%” (300K keywords, 2026)
  4. Bain & Company / Dynata — Zero-click consumer survey (2025)
  5. AdExchanger — Publisher traffic impact analysis (2025)
  6. Search Engine Land — “Google AI Overviews drive 61% drop in organic CTR”
  7. Forbes — “The 60% Problem: How AI Search Is Draining Traffic” (2025)
  8. Cloudflare — Crawler-to-referrer ratio data

Want this measured against your brand?

Get your AI Readiness Index (ARI) score across ChatGPT, Gemini, Perplexity, Claude, and Grok — delivered in 24 hours.

Live · FancyAI Research Corpus

AI visitors convert at 23×. The quality story behind the quantity drop.

AI referral traffic is tiny in absolute terms — and converts at rates the SEO industry has never seen. Six independent measurements paint the same picture.

23×
Conversion lift of AI-referred vs organic search visitors (Ahrefs)
1.08%
Of total web traffic comes from AI today
527%
YoY growth rate of AI referral traffic
3×
More time on page than organic visitors
Chapter 01

The conversion gap

Six independent studies tracking AI-referred traffic against organic search baselines all converge on the same finding: AI visitors convert at multiples of organic search visitors.

  • Ahrefs — AI visitors convert at 23× the rate of organic search visitors (own-site data).
  • Visibility Labs — ecommerce AI traffic converts 31% higher than non-branded organic.
  • Semrush — 4.4× conversion rate vs organic.
  • Go Fish Digital — 25× in their data.
  • Forrester — AI visitors spend up to 3× longer on-page than traditional search visitors.
  • Broworks — in 90 days, 10% of organic visits came from AI engines and 27% of that traffic converted to SQLs.

Even the conservative end of the range — Semrush’s 4.4× — would be a category-defining metric in classical SEO. The aggressive end (Ahrefs 23×, Go Fish 25×) suggests an entirely new traffic class with conversion economics the industry hasn’t catalogued before.

“Zero-click search isn't a problem — it's an enormous opportunity. Buyers are arriving at vendor websites more informed, with higher intent. — Forrester”
Chapter 02

Why the visitor is pre-qualified

The mechanism is structural, not accidental. When a user receives a brand recommendation inside an AI answer — ChatGPT naming your product, Perplexity citing your page, Gemini summarizing your offering — the visit happens after the recommendation, not as a hopeful click on a list of options.

Forrester documented the behavioral signature: AI users average 15–23 word queries versus the 3–4 word average for traditional search. They are explaining their problem at length, asking for synthesis, and accepting a recommendation as the answer. By the time they arrive at the recommended vendor’s site, they have already done the comparison work the AI did on their behalf.

This is why time on page is 3× higher. The visitor isn’t evaluating — they’re acting.

Chapter 03

The economics flip

The cost-per-acquisition curves are diverging.

  • The average GEO customer acquisition cost is $559 and declining at 37.5% as the market matures and brands accumulate compounding citation authority.
  • Google Ads CPC continues climbing 10–15% annually with no signs of slowing — auction prices set by competitive bidding floors that ratchet up.
  • AI traffic is 1.08% of total web traffic today (Conductor) but growing 527% YoY (Search Engine Land / GA4 analysis).
  • ChatGPT alone sent 57.7M outbound clicks in March 2025 — up 558% YoY.

The volume is small. The unit economics are extraordinary. The growth rate is exponential. Three of those three are attractive curves.

Chapter 04

Where the dollar earns more

Run the math on a $10,000 monthly customer acquisition budget.

Spent on Google Ads at a $559 CAC equivalent (charitable benchmark for the channel), the budget yields ~18 customers. Spent on GEO — building entity authority, citation graph, structured content, and earned media that compounds across all five major AI platforms — the budget builds an asset that generates 23× higher conversion per visitor as it accrues. Year one returns may be modest as the entity grounds. Years two and three the gap widens, because GEO accrues like backlinks did in 2008–2014: durable, compounding, hard to dislodge.
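
The arithmetic behind that paragraph, sketched out. The $559 CAC and the 23× multiple come from the studies cited above; the visit volume and baseline conversion rate are invented purely to make the compounding comparison concrete:

```python
budget = 10_000          # monthly acquisition budget, USD
geo_cac = 559            # benchmark CAC from the data above

# Paid channel: spend is linear, and customers stop when spend stops.
print(f"Paid at a ${geo_cac} CAC equivalent: ~{round(budget / geo_cac)} customers/mo")

# GEO channel: spend builds an asset. Assume (hypothetically) the program
# matures into 1,000 AI-referred visits/mo against a 0.5% organic baseline
# conversion rate, lifted 23x per the Ahrefs finding.
ai_visits, baseline_cvr, lift = 1_000, 0.005, 23
print(f"GEO at steady state: ~{ai_visits * baseline_cvr * lift:.0f} customers/mo")
```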

This is the case to make to the CFO. Not “AI traffic is bigger than you think” — it isn’t, yet. The case is “the dollars committed to AI visibility today are the cheapest dollars you’ll ever spend on it,” because the asset compounds and the channel hasn’t saturated.

Chapter 05

The Broworks proof

The Broworks case study (90-day GEO sprint) is one of the cleanest published examples of the conversion economics operating in real time. Within three months of starting the program:

  • 10% of organic visits came from generative engines (vs near-zero baseline).
  • 27% of that AI-sourced traffic converted to Sales Qualified Leads.
  • Visitors from LLMs stayed 30% longer on-page than equivalent Google visitors.

The numbers vary by category and starting position, but the directional pattern is now consistent across enough independent measurements that the conversion-multiplier finding has moved from anecdotal to operational. AI traffic isn’t big yet. But it’s the highest-quality traffic class measurable today.

Sources cited

  1. Ahrefs — Own-site AI vs organic conversion comparison
  2. Visibility Labs — Ecommerce AI traffic conversion benchmarks
  3. Semrush — AI referral conversion analysis
  4. Go Fish Digital — AI traffic conversion data
  5. Forrester — “GenAI Forever Changes All Forms of Search” (2025)
  6. Conductor — AI traffic share of total web traffic
  7. Broworks — Published 90-day GEO case study
  8. Search Engine Land — GA4 AI referral growth analysis

Want this measured against your brand?

Get your AI Readiness Index (ARI) score across ChatGPT, Gemini, Perplexity, Claude, and Grok — delivered in 24 hours.

Live · FancyAI Research Corpus

The Princeton paper, decoded. The academic foundation of GEO.

Aggarwal et al. coined the term, built the benchmark, and quantified what works. Three years later, every credible GEO claim still traces back to this paper.

+40%
Visibility lift from the top GEO methods (Cite Sources, Statistics, Quotation)
10,000
Queries in the GEO-bench benchmark
+115%
Lift for rank-5 sites applying GEO methods
−30%
Top-ranked sites LOST visibility
Chapter 01

The paper that named the field

In November 2023, six researchers from Princeton, Georgia Tech, the Allen Institute for AI, and IIT Delhi posted a paper to arXiv titled simply “GEO: Generative Engine Optimization.” A year later it was published at ACM SIGKDD 2024, the top data-mining conference in the world.

The authors — Pranjal Aggarwal, Vishvak Murahari, Tanmay Rajpurohit, Ashwin Kalyan, Karthik Narasimhan, and Ameet Deshpande — did three things no one had done before:

  1. Coined the term “Generative Engine Optimization” and formalized generative engines as systems combining retrieval with grounded LLM synthesis.
  2. Built GEO-bench — a benchmark of 10,000 queries from 9 diverse sources across 25 domains, designed to evaluate optimization methods systematically.
  3. Tested nine concrete content optimization strategies and quantified what worked, what didn’t, and by how much.

The paper is the academic spine of the entire GEO discipline. Every credible practitioner study published since either cites it, replicates it, or extends it.

“Including citations, quotations from relevant sources, and statistics can significantly boost source visibility, with an increase of over 40%. — Aggarwal et al., KDD 2024”
Chapter 02

The benchmark that made it measurable

Before GEO-bench, “optimizing for AI search” was advice. After GEO-bench, it was a measurable discipline.

The benchmark draws queries from nine sources representative of real-world search behavior:

  • MS Marco, ORCAS-1, Natural Questions (commercial & informational queries)
  • AllSouls (Oxford academic exam questions)
  • LIMA, Davinci-Debate (LLM-generated diverse prompts)
  • Perplexity Discover (real production AI search queries)
  • ELI-5 (Reddit explanation requests)
  • GPT-4 generated queries (synthetic coverage)

The query distribution preserves real-world ratios: 80% informational, 10% transactional, 10% navigational. Coverage spans 25 domains: tech, health, finance, history, law, government, opinion, and more. The split is 8,000 train, 1,000 validation, 1,000 test.

Two evaluation metrics measure visibility: Position-Adjusted Word Count (how prominently a source is quoted, weighted by position in the AI response) and Subjective Impression (LLM-judged influence on the answer).
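
The first metric is easy to picture in code. A minimal sketch, using a simple exponential decay by sentence position as an illustrative stand-in for the paper's exact weighting (the benchmark's own implementation differs in detail):

```python
import math

def position_adjusted_word_count(sentences: list[tuple[str, str]],
                                 source_id: str) -> float:
    """Words attributed to a source, down-weighted the later they appear.

    sentences: (text, cited_source_id) pairs from one AI response.
    """
    n = len(sentences)
    return sum(len(text.split()) * math.exp(-pos / n)
               for pos, (text, cited) in enumerate(sentences)
               if cited == source_id)

answer = [
    ("Brand A leads the category with a 40% measured lift.", "source-1"),
    ("Brand B is the budget alternative.",                   "source-2"),
    ("Independent tests corroborate Brand A's numbers.",     "source-1"),
]
print(position_adjusted_word_count(answer, "source-1"))  # early mentions weigh more
```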

Chapter 03

What worked

The paper tested nine optimization methods. The top three each delivered 30–40% relative improvement on Position-Adjusted Word Count and 15–30% on Subjective Impression:

  1. Cite Sources — adding inline references to authoritative third-party sources directly within the content.
  2. Quotation Addition — embedding pull quotes from credible experts and publications.
  3. Statistics Addition — injecting specific, attributed numerical data into the text.

Stylistic improvements (Fluency Optimization — better readability, clearer structure) added 15–30% on top. The single highest-performing combination was Fluency Optimization + Statistics Addition together, which outperformed any single method by more than 5.5%.

Real-world validation on Perplexity.ai (a deployed production engine, not just the benchmark): Quotation Addition improved Position-Adjusted Word Count by 22%, Statistics Addition improved Subjective Impression by 37%.

Chapter 04

What didn’t

Two negative findings deserve as much attention as the positive ones, because they upend SEO instincts:

  • Keyword stuffing — the workhorse of early SEO — showed “little to no improvement” on generative engines. Generative engines synthesize meaning, not match terms.
  • Authoritative / persuasive tone changes — “writing more confidently” — showed no significant lift. The authors note GEs are “already somewhat robust to such changes,” meaning models discount confidence-as-style and weight grounded substance.

The pattern: tactics that game surface-level patterns don’t work. Tactics that add genuine substance (citations, real statistics, structured quotation, clear writing) work measurably and reproducibly.

Chapter 05

The asymmetry that changed everything

The single most consequential finding for the GEO industry is buried in the experimental section: lower-ranked sites benefit dramatically more from GEO than top-ranked sites do.

  • SERP rank-5 sites applying Cite Sources saw a +115% visibility increase.
  • Top-ranked sites on average LOST 30.3% visibility when applying the same methods.

The interpretation: when everyone plays the same game, the playing field levels. Smaller creators — previously crushed by domain authority moats — gain disproportionately. The historic SEO advantage of sheer authority compresses inside generative engines because synthesis pulls from a wider source pool than ranked lists.

This is why Aggarwal et al. wrote: “The advent of Generative Engines might initially seem disadvantageous to these smaller entities. However, the application of GEO methods presents an opportunity for these content creators to significantly improve their visibility.” The paper is, in a real sense, an invitation to the underdogs.

Chapter 06

The follow-ups: Toronto and SourceBench

Two academic papers in 2025–2026 extended the foundation in important directions.

Chen et al., University of Toronto (2025) — “Generative Engine Optimization: How to Dominate AI Search” — ran the first comprehensive comparative analysis of AI Search vs Google Search across multiple verticals, languages, and query paraphrases. The headline finding: AI Search exhibits “systematic and overwhelming bias toward earned media” — in US automotive queries, AI search returned 81.9% earned media vs 18.1% brand-owned and 0% social, compared to Google’s much more balanced 39.5% brand / 15.4% social / 45.1% earned mix. This validated, at scale, what FancyAI’s Signal Hierarchy methodology codified separately: third-party citations are the highest-leverage GEO investment.

SourceBench (2026) evaluated source quality across 12 AI search systems. Two findings stand out: GPT-5 achieves the highest source quality scores, and AI search actively discovers high-quality sources not found in traditional keyword-based search results. The implication: GEO isn’t just optimization for the same set of sources Google ranks. It’s competition for a partially different source pool.

Chapter 07

Why the paper still matters

Three years after first publication, every reputable GEO claim still routes back to GEO-bench. The 40% visibility lift number is now a category benchmark. The +115% lift for lower-ranked sites is the underlying math behind why the discipline works for challenger brands. The Fluency + Statistics combination is the operational recipe most credible practitioners reference, often without realizing they’re paraphrasing Aggarwal.

If you take one thing from the paper: add specific, attributed statistics to your content. Of the nine tactics tested, this single move plus fluent writing produced the largest measurable gain. The discipline starts here.

Sources cited

  1. Aggarwal, Murahari, Rajpurohit, Kalyan, Narasimhan, Deshpande — “GEO: Generative Engine Optimization” (arXiv 2311.09735, ACM SIGKDD 2024)
  2. Princeton University Research Portal — GEO paper page
  3. Chen, Wang, Chen, Koudas (University of Toronto) — “Generative Engine Optimization: How to Dominate AI Search” (arXiv 2509.08919, 2025)
  4. SourceBench — Multi-system source quality benchmark (2026)
  5. “Beyond SEO: A Transformer-Based Approach for Reinventing Web Content Optimisation” (arXiv 2507.03169, 2025)
  6. Springer — “Artificial Intelligence’s Revolutionary Role in SEO” (peer-reviewed chapter, 2024)
  7. NSF Grant 2107048 — Underlying research funding

Want this measured against your brand?

Get your AI Readiness Index (ARI) score across ChatGPT, Gemini, Perplexity, Claude, and Grok — delivered in 24 hours.

Live · FancyAI Research Corpus

Why fresh content beats authority. The recency bias in AI citations.

90% of AI bot hits land on content less than three years old. AI-cited pages are 368 days fresher than traditionally-ranked ones. Continuous publishing is the new optimization unit.

90%
Of AI bot hits land on content from the last three years
1,064
Average age (days) of AI-cited content
1,432
Average age of traditionally-ranked content
368
Days fresher AI-cited content is, on average
Chapter 01

The freshness gap

Seer Interactive analyzed AI bot crawler logs across a representative sample of indexed sites. The result is among the cleanest empirical findings in the GEO literature: nearly 90% of AI bot hits land on content from the last three years. The remaining 10% spreads across the entire historical web.

The same study compared the publication age of AI-cited content versus content traditionally ranked by Google. The numbers:

  • AI-cited content averages 1,064 days old (~2.9 years).
  • Traditionally-ranked content averages 1,432 days old (~3.9 years).
  • 368-day gap — AI-cited content is more than a year fresher on average.

This is independent of topic, independent of authority, independent of domain rating. AI engines systematically prefer recent content even when older equivalents exist with stronger authority signals.

Chapter 02

Why the bias exists

The mechanism has two layers. The training-time layer is well-documented: LLMs trained on time-stamped corpora develop measurable preferences for content within their training window’s recency horizon. The Metehan.ai academic analysis confirms this experimentally, showing the bias is observable in raw model behavior independent of retrieval.

The retrieval-time layer is more consequential operationally. AI search engines apply freshness signals during the retrieval ranking phase — before the LLM ever sees a candidate source. ChatGPT’s SearchGPT pipeline, Perplexity’s proprietary index, and Google AI Overviews all weight publication date and last-updated timestamp into their retrieval scoring. The result: even an authoritative source from 2018 has lower retrieval probability than a moderately-authoritative source from 2025 on the same topic.

“Freshness scoring is ‘always on’ in AI systems — content strategy must be continuously updated.”
Chapter 03

The continuous publishing imperative

The operational implication forces a re-architecture of the content calendar. The traditional SEO play — build a definitive long-form “ultimate guide” once, earn backlinks, harvest organic traffic for years — doesn’t map to AI visibility. The asset depreciates faster than authority can be built.

What works: a continuous publishing cadence on the topics that matter to your category. SE Ranking’s data is consistent with Seer’s: pages updated within 3 months average 6.0 ChatGPT citations versus 3.6 citations for stale equivalents — a 67% lift just from refreshing publication date and core stats.

Chapter 04

The refresh sprint as the new unit of work

The shift is from one-time content production to ongoing content refresh sprints. Three operational patterns are emerging across well-executed GEO programs:

  1. 30-day refresh cycles on top-of-funnel commercial pages — updating statistics, adding recent industry developments, refreshing publication date.
  2. Quarterly statistical refreshes on data-heavy pages — replacing 12-month-old benchmarks with current numbers maintains the “cited authoritative source” status AI engines reward.
  3. Monthly editorial additions in core categories — net-new published pieces that establish recency-graded authority across the topic cluster.

The cadence isn’t about volume. It’s about staying inside the recency band where AI engines actively retrieve. A site publishing once a quarter on its core topics will systematically lose ground to a competitor publishing once a month on the same topics, even if the quarterly site has higher domain authority.
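
In practice this becomes a triage script over the content inventory. A minimal sketch, with hypothetical pages and refresh windows loosely mapped to the three patterns above (the thresholds are illustrative, not prescriptive):

```python
from datetime import date

# Hypothetical inventory; in practice, export last-updated dates from the CMS.
pages = [
    {"url": "/pricing",    "tier": "commercial", "last_updated": date(2026, 4, 20)},
    {"url": "/benchmarks", "tier": "data",       "last_updated": date(2025, 9, 1)},
    {"url": "/blog/guide", "tier": "editorial",  "last_updated": date(2024, 1, 15)},
]

# Refresh windows in days, loosely following the three patterns above.
WINDOWS = {"commercial": 30, "data": 90, "editorial": 180}

today = date(2026, 5, 1)
for page in pages:
    age = (today - page["last_updated"]).days
    window = WINDOWS[page["tier"]]
    if age > window:
        print(f"REFRESH {page['url']}: {age} days old, {page['tier']} window is {window}")
```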

Chapter 05

The strategic reframe

Authority is necessary but no longer sufficient. The site that earns AI citations in 2026 is the one combining authority signals with freshness signals — the operational equivalent of running an editorial publication on the topics where the brand wants to be the cited expert.

Most B2B brands aren’t structured this way. Their content programs are episodic: a launch, a campaign, a quarterly initiative. The brands that re-architect content as a continuous editorial function on their core topics will accumulate citation share. The ones that keep treating content as a project will watch competitors with fresher pages get cited instead.

Sources cited

  1. Seer Interactive — “Study: AI Brand Visibility and Content Recency” (log file analysis, 2025)
  2. Metehan.ai — “Recency Bias That’s Reshaping AI Search” (2025)
  3. SE Ranking — Content freshness citation correlation (129K domains)
  4. Ahrefs — Citation freshness analysis
  5. Academic literature on LLM training-window recency effects

Want this measured against your brand?

Get your AI Readiness Index (ARI) score across ChatGPT, Gemini, Perplexity, Claude, and Grok — delivered in 24 hours.

Live · FancyAI Research Corpus

YouTube mentions predict AI visibility better than backlinks.

Ahrefs analyzed 75,000 brands. The strongest single predictor of AI brand visibility wasn't domain authority. It wasn't backlinks. It was YouTube mentions — by a wide margin.

0.737
Correlation between YouTube mentions and AI brand visibility
75,000
Brands in the underlying study
0.04
Correlation between content length and citation
r²=0.032
Variance in AI citations explained by domain authority
Chapter 01

The 75K-brand study

Ahrefs ran the largest published correlation study to date on what predicts AI brand visibility: 75,000 brands measured across multiple AI engines, scored against every plausible predictor variable, ranked by correlation strength. The headline finding upends two decades of SEO intuition.

The strongest single correlation in the dataset:

  • YouTube mentions: ~0.737 correlation with AI brand visibility
  • Branded web mentions: 0.66–0.71
  • Branded anchor text: 0.51–0.63
  • Domain Authority (DA / DR): r² = 0.032 — explains less than 4% of the variance
  • Backlinks alone: weak / neutral per Seer Interactive’s replication
  • Content length: ~0.04 Spearman — effectively zero

YouTube mentions aren’t a tactical optimization. They are the dominant signal. The implication for most brands’ current content strategies is uncomfortable.
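
For teams who want to sanity-check this on their own portfolio, the analysis is reproducible in a few lines. A minimal sketch, assuming a hypothetical CSV with one row per brand (all column names are placeholders):

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical export: candidate predictors plus a measured AI visibility
# score per brand. Every column name here is a placeholder.
df = pd.read_csv("brand_signals.csv")

predictors = ["youtube_mentions", "web_mentions", "branded_anchors",
              "domain_rating", "backlinks", "avg_content_length"]

for col in predictors:
    rho, p = spearmanr(df[col], df["ai_visibility"])
    print(f"{col:>20}: rho={rho:+.3f}  (p={p:.3g})")
```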

“Domain Authority explains less than 4% of AI citation variance. — Ahrefs, 75K-brand study”
Chapter 02

Why YouTube wins

Three structural factors explain YouTube’s outsized weight in AI visibility scoring:

  1. Cross-platform retrieval coverage. YouTube content surfaces in Google’s index (where AI Overviews retrieve), Bing’s index (where ChatGPT retrieves), and within Gemini’s direct integration. A single YouTube mention has multi-platform reach that a single owned-domain article cannot replicate.
  2. Multi-modal grounding. Modern LLMs increasingly use video transcript data for entity grounding. Brand mentions inside YouTube transcripts feed knowledge graphs and entity disambiguation systems with audio-grade signal that text-only sources don’t carry.
  3. Authority transfer. Mentions on third-party YouTube channels (creators with established subscriber bases) carry the same earned-media gravity as third-party editorial citations — but at a fraction of the cost.
Chapter 03

What doesn’t predict (and never did)

The negative correlations are as instructive as the positive ones:

  • Domain Authority / Domain Rating — the classic SEO god-metric — explains under 4% of AI citation variance. SearchAtlas’s replication confirmed: DA, DR, and Domain Power are weak predictors of LLM visibility.
  • Backlink count — the workhorse of off-page SEO — is “weak or neutral” per Seer Interactive’s study of 10,000 LLM questions.
  • Multi-modal content variety on your own site — assumed to help — “didn’t move the needle” per Seer’s data. Variety on your site doesn’t register; variety in third-party mentions of you does.
  • Content length — Ahrefs found a near-zero correlation (0.04 Spearman) with citation. 53.4% of AI-cited pages are under 1,000 words.

The shift is from “build authority on your site” to “build entity presence across the surfaces AI engines actually crawl.” Different game, different scoring.

Chapter 04

The platform divergence wrinkle

Ahrefs’ per-platform breakdown surfaces a useful nuance: the correlation strength varies meaningfully by AI engine.

  • ChatGPT shows the weakest correlations with classic authority metrics (DR: 0.266, branded search: 0.352). It rewards earned media and Reddit/Quora presence over domain authority.
  • AI Mode (Google) shows the strongest correlations with branded authority signals (branded anchors: 0.628). It tracks closer to Google’s own ranking.
  • AI Overviews value DR more than ChatGPT or AI Mode — they’re partially anchored to Google’s organic top-10 (86.85% of AIOs cite at least one Google top-10 result).

The implication: optimizing for one platform via authority-building doesn’t automatically optimize for another. But across all platforms, YouTube mentions and branded earned-media correlations are the most consistent signals — the closest thing to a universal lever.

Chapter 05

Operational implications

For brands committing to AI visibility as a strategic priority, the data suggests a portfolio reweighting. Three concrete moves:

  1. Establish a YouTube presence on category-relevant channels. Not necessarily an owned channel — sponsored mentions, expert interviews, and product placements on established creator channels carry the citation weight.
  2. Reweight off-page investment from raw backlinks toward branded mentions. A backlink from a low-authority site without brand context contributes near-zero. A branded mention without a link from a high-authority publication contributes meaningfully.
  3. Stop optimizing for content length. A 1,200-word piece with statistics, structure, and a YouTube embed will outperform a 4,000-word “ultimate guide” without these elements every time. Length signals nothing useful.

Sources cited

  1. Ahrefs — “Top Brand Visibility Factors: 75K Brands Studied” (2025)
  2. Seer Interactive — “Study: What Drives Brand Mentions in AI Answers?” (Christina Blake, 10K LLM questions, 2024)
  3. SearchAtlas — “Authority Metrics in the Age of LLMs” (2025)
  4. Surfer SEO — 36M AI Overviews citation analysis
  5. SE Ranking — ChatGPT ranking factors (129K domains)
  6. Position Digital — AI SEO statistics compendium (2025)

Want this measured against your brand?

Get your AI Readiness Index (ARI) score across ChatGPT, Gemini, Perplexity, Claude, and Grok — delivered in 24 hours.

Live · FancyAI Research Corpus

67% of B2B buyers start with AI. The new front door.

B2B buyers are adopting AI search at three times the consumer rate. By the time they visit a vendor website, the shortlist is already set.

67%
Of B2B buyers start with an AI assistant before visiting vendor websites
3×
B2B vs consumer AI search adoption rate
90%
Of organizations now use GenAI in purchasing
44%
Of AI search users say it's their primary information source
Chapter 01

The 67% finding

GrackerAI’s B2B buyer survey produced one of the most consequential statistics in the AI search literature: 67% of B2B buyers now start their research with an AI assistant before visiting any vendor website.

This isn’t a measure of AI traffic. It’s a measure of the buying process restructuring around AI as the discovery layer. By the time a B2B buyer arrives on a vendor’s site, the shortlist has already been formed — in conversation with ChatGPT, Perplexity, or Claude. The vendor either appeared on that shortlist or didn’t.

For sales and marketing leaders, this is a quiet category shift with loud implications. Pipeline that used to start with brand awareness ads now starts with AI prompts. Pages optimized for SEO now compete to be the source the AI cites. The funnel hasn’t collapsed — it’s moved upstream, into a layer most vendors aren’t measuring.

“AI is now the place where decisions begin. — Magenta Associates (300 senior UK procurement professionals)”
Chapter 02

B2B is ahead of consumer adoption

Forrester’s 2024 B2B Buyers’ Journey Survey found B2B buyers adopt AI-powered search at 3× the rate of consumers. The accelerated adoption tracks with the higher information density of B2B purchasing decisions: enterprise software, professional services, and capital equipment buyers were already running 50–80 source comparisons per decision before AI. AI condenses that comparison work from weeks to minutes.

The data points all converge:

  • 90% of organizations now use GenAI in some aspect of their purchasing process (Forrester).
  • B2B AI-generated traffic = 2–6% of total organic traffic, growing at 40%+ per month.
  • Forrester’s end-of-2025 projection: B2B AI traffic share to reach 20%+.
  • Site visitors from AI platforms spend up to 3× more time on-page than traditional search visitors.
Chapter 03

McKinsey’s primary-source finding

McKinsey’s AI Discovery Survey (n=1,927, August 2025) measured the most consequential metric for any vendor selling on findability: which information source consumers consider primary.

The result:

  • 44% — AI-powered search
  • 31% — Traditional search
  • 9% — Retailer websites
  • 6% — Review sites

For the first time on record, AI search has surpassed traditional search as the primary discovery channel among AI search users. The cohort is growing fast. The window where SEO alone secures vendor visibility is closing.

Chapter 04

The UK enterprise validation

Magenta Associates surveyed 300 senior procurement professionals across UK enterprises about how their purchasing process has changed with AI. The conclusion was unambiguous: “AI is now the place where decisions begin.”

Specific behavioral patterns surfaced in the data:

  • Procurement teams use AI to generate vendor longlists before any vendor outreach.
  • RFP responses are increasingly summarized by AI before human review.
  • Vendor differentiation messaging that doesn’t make it into AI summaries effectively doesn’t exist for the buying committee.

The pattern is consistent with KPMG’s AI Quarterly Pulse finding: enterprise leaders are restructuring teams around the assumption that “agents will manage projects while humans manage agents.” The buyer-side shift is already in motion.

Chapter 05

What this means for B2B vendors

The strategic implication is structural, not tactical. Three reframes:

  1. The website is no longer the front door — it’s the destination after the AI shortlist. Vendors not appearing in AI shortlists for their category miss the opportunity entirely. Brand-awareness budgets that stop at “driving traffic” are budgeting for the wrong stage.
  2. Sales enablement needs to include AI visibility briefings. Reps who know what their AI shortlist position is — per platform, per buyer persona prompt — can address objections that are now baked in before the first call.
  3. Pipeline attribution needs an AI-discovery layer. Most attribution stacks credit the channel that drove the click. The actual decision often happened earlier, in a ChatGPT conversation that didn’t fire any UTM. Vendors that don’t measure AI visibility share will keep crediting the wrong channels for pipeline that originated upstream.
Chapter 06

The competitive window

The window for a B2B vendor to establish AI visibility before competitors do it first is open and narrowing. Three observations from the survey data inform the timing:

  • 91% of marketing leadership has been asked about AI search visibility in the past year (SEOFOMO survey of 200+ senior SEOs). Awareness is universal.
  • 62% of SEOs report AI search drives less than 5% of revenue today. Investment is lagging awareness — most vendors are still in early experimentation.
  • The gap between the two numbers (91% asked vs. 62% under-5%) is the strategic window. Vendors investing seriously now are pulling ahead in citation share before the budgets catch up.

The brands that establish citation authority in their category in 2026 will be the ones AI engines default to in 2027 and 2028. The asset compounds. The window is open. It won’t stay open long.

Sources cited

  1. GrackerAI — B2B buyer behavior survey (2025)
  2. Forrester — B2B Buyers’ Journey Survey (2024) and B2B AI marketing report (2025)
  3. McKinsey — AI Discovery Survey (n=1,927, August 2025)
  4. Magenta Associates — UK enterprise procurement survey (n=300, 2025)
  5. KPMG AI Quarterly Pulse — Enterprise leadership trends
  6. SEOFOMO / Search Engine Land — State of AI Search Optimization Survey (200+ senior SEOs, 2025)
  7. Menlo Ventures — “State of Generative AI in the Enterprise” (~500 US enterprise decision-makers, 2025)

Want this measured against your brand?

Get your AI Readiness Index (ARI) score across ChatGPT, Gemini, Perplexity, Claude, and Grok — delivered in 24 hours.

Live · FancyAI Research Corpus

Six platforms promise to get your brand cited by AI. Most don't finish the job.

A buyer's-side comparison of Profound, Evertune, Semrush, Scrunch, Conductor, and FancyAI — and the structural fault line splitting the category in two.

5 of 6
Platforms in this analysis that stop at recommendations and leave the work to the customer
14
Dimensions evaluated across pricing, coverage, execution
$99–$500K+
Annual price range across the six tools
60%
Of AI brand signal weight that lives off-site, not on-site
Chapter 01

The category splits in two

The pitch decks of Generative Engine Optimization platforms in 2026 are nearly indistinguishable. Each promises to show buyers where their brand appears in ChatGPT, Perplexity, Gemini, and Claude, to track citations across model releases, and to surface what the competition is doing differently. At the dashboard level, almost every product in the category looks the same.

That sameness is the story. Independent reviewers and the platforms’ own marketing have begun to acknowledge it openly. Mersel’s 2026 GEO platform analysis described one of the largest incumbents in the space as a tool that “does not execute content or deploy AI infrastructure” and “functions exclusively as a monitoring and analytics platform.” Evertune’s own homepage frames its differentiator as a critique: “Every other tool shows you the data. Evertune shows you what to do with it.”

The category is splitting along a single fault line: monitoring versus execution. The vast majority of GEO platforms are dashboards. They tell brands where they stand. A much smaller group does the work that actually changes the answer.

This article evaluates the six platforms most often shortlisted by mid-market and enterprise buyers in 2026 across fourteen dimensions. The dimensions were chosen to be fair on the surface area every platform competes on, and to make the structural gap in the category visible without editorializing.

“Every other tool shows you the data. — Evertune, homepage”
Chapter 02

The matrix

The fourteen-row comparison below is structured around the question every GEO buyer is now asking out loud: “I can see I’m not showing up in AI answers. Now what?” Almost every platform in this comparison answers the first half of that question. Only one is built end-to-end around the second.

Three rows decide the category — on-site implementation, content production, and off-site/earned media. Watch those.

| Dimension | Profound | Evertune | Semrush | Scrunch | Conductor | FancyAI |
| --- | --- | --- | --- | --- | --- | --- |
| Pricing & access |  |  |  |  |  |  |
| Starting price | $99/mo | ~$1,000/mo+ | $99/mo | $417/mo | $26,800/yr | $499/mo |
| Public pricing? | Partial | No | Yes | Yes | No | Yes |
| Coverage |  |  |  |  |  |  |
| Engines (entry tier) | ChatGPT only | All major | ChatGPT + AIO + Gemini | 6 engines | Not specified | ChatGPT + Gemini |
| Engines (top tier) | 10+ | All major | All major | 6 engines | All | All major |
| Monitoring & reporting | Best in class | Strong | Strong | Strong | Enterprise | Included |
| Citation tracking | Yes | Yes | Yes | Yes | Yes | Yes |
| Recommendations engine | Yes | Yes (playbook) | Yes | Page audits | Yes | Yes (auto) |
| Three rows that decide the category |  |  |  |  |  |  |
| On-site implementation | No | No | No | AXP only | Drafts only | Yes |
| Content production | No | Partial | Separate $60/mo | No | 60–600 drafts/yr | Included |
| Off-site / earned media | No | Affiliate ties | Separate $149+ | No | No | Yes (PR + Reddit) |
| Service & procurement |  |  |  |  |  |  |
| Citation link building | No | No | No | No | No | Yes |
| Strategist included | Enterprise only | Yes | No | Growth+ | Mid-market+ | All tiers |
| Procurement cycle | Sales-led | Sales-led | Self-serve | Self-serve | Sales-led (slow) | Self-serve |

Sources: vendor pricing pages, Vendr procurement intelligence, Mersel, Trakkr, Rankability, Mint, Indexly, Cairrot, Reddit user reports. Prices current as of May 2026.

Chapter 03

Profound: the analytics-first incumbent

Of the six platforms in this analysis, Profound has the deepest investment in measurement infrastructure. Its Agent Analytics module reads server logs to track GPTBot, ClaudeBot, and PerplexityBot crawl patterns at a level of granularity none of the other tools attempt. The platform covers more than ten AI engines at its top tier, holds SOC 2 Type II compliance, and is the most likely choice for a Fortune 500 brand that already has the in-house engineering bandwidth to act on what the dashboard surfaces.

The platform’s limitation is not a flaw — it is a deliberate scope choice that buyers are increasingly questioning. Mersel’s independent analysis put the framing bluntly: Profound “does not execute content or deploy AI infrastructure” and “functions exclusively as a monitoring and analytics platform.” The Rankability 2026 review benchmarked its pricing at 48 percent above the market average for monitoring tools, with full engine coverage gated to the Enterprise tier (typically $2,000–$5,000+ per month after the $99/mo Starter and $399/mo Growth bands).

Profound is the right answer for organizations whose constraint is information, not capacity. For organizations whose constraint is capacity — the bandwidth to actually ship the changes the dashboard surfaces — the analytics depth becomes harder to justify against the gap between insight and action.

Chapter 04

Evertune: visibility plus paid-media bolted on

Evertune has built the most candid positioning in the category. Its homepage tagline reads as a confession of what its competitors don’t do: “Every other tool shows you the data. Evertune shows you what to do with it.” The platform pairs visibility tracking with EverPanel, a 25-million-user behavioral data layer, and adds programmatic ad activation through The Trade Desk — the only platform in this comparison that integrates AI visibility insight with paid-media execution.

The agency-friendly model (unlimited logins, no per-seat charges) has won enterprise customers including WPP’s Choreograph and Miro. Pricing is not published; Reddit threads and third-party reports place it around $1,000 per brand per country per month, with paid media activation requiring its own separate budget.

The structural caveat sits inside the same tagline. “Showing you what to do with it” still places the burden of doing it on the customer. Evertune hands buyers a playbook, recommendations, and ad inventory access. Translating those into shipped content, schema deployments, and earned editorial mentions remains the customer’s problem to solve.

Chapter 05

Semrush AI Visibility Toolkit: an SEO incumbent extends its surface

Semrush has the largest installed base in the comparison, and its AI Visibility Toolkit reflects that reality. The product is positioned as an extension of the existing SEO suite rather than a standalone GEO platform, with familiar dashboards, the same competitor and keyword databases that make the parent product useful, and the lowest entry price in the category at $99 per user per month for one domain.

The arithmetic gets more complicated quickly. The standalone toolkit covers ChatGPT, Google AI Overviews, and Gemini. Adding the broader AI-ready Site Audit and brand insights requires a Semrush One bundle ($165 to $549 per month depending on tier), and content drafting lives in a separate $60-per-month Content Toolkit. Buyers who want the full surface end up assembling four products into something the standalone competitors deliver as one.

Semrush is the right answer for organizations already living inside the platform — agencies and in-house SEO teams who can extend an existing contract incrementally rather than procuring a new vendor. The trade-off is conceptual: AI Visibility is one of more than ten toolkits in Semrush’s catalog, not the company’s strategic center. The roadmap reflects that.

Chapter 06

Scrunch AI: the mid-market agent-experience play

Scrunch occupies the middle of the market with 500+ customers (including Lenovo, BairesDev, Clerk, and Skims) and the broadest AI engine coverage in its tier — ChatGPT, Claude, Gemini, Perplexity, Google AI Mode and AI Overviews, and Meta. Its differentiator is the Agent Experience Platform (AXP), which generates an AI-friendly version of a customer’s site automatically. Pricing starts at $417 per month (annual) or $500 month-to-month, with a seven-day free Starter trial.

Customer evidence has been compelling where it appears. Clerk reported 9× higher sign-ups attributable to AI search after deploying Scrunch’s recommendations. The persona-based prompt monitoring framework is meaningful for B2B buyers tracking different ICP search behaviors.

The page audit limits are tight by design (5 audits on Starter, 10 on Growth), and AXP optimizes the website that already exists. It does not produce new authoritative content, earn off-site signals, or pursue the editorial mentions that the underlying research suggests carry the largest share of AI recommendation weight. Scrunch is a strong dashboard with a clever on-site automation layer attached. For mid-market teams comfortable shipping content themselves, it is among the best-value options in the category.

Chapter 07

Conductor: the enterprise SEO incumbent extends to AEO

Conductor is the most established platform in the comparison and the one with the longest enterprise customer roster. Its expansion from SEO into AEO (Answer Engine Optimization) added Writing Assistant Drafts (60 to 600 per year depending on tier), Content Score, and AgentStack — an internal tooling layer for managing the work. At Enterprise scale, the platform monitors more than 125,000 pages and 60,000 keywords, with white-glove implementation that often spans the first quarter.

The pricing reflects the customer profile. Conductor publishes no public price list. Vendr’s 2026 procurement data places median annual spend at $48,950, with entry contracts ranging $26,800 to $45,000, mid-market deals $48,000 to $85,000, and enterprise commitments commonly $150,000 to $500,000 and beyond. Implementation regularly adds another $30,000 in Year 1.

The procurement cycle is its own filter. Buyers report 30 to 90 day timelines from first contact to contract, which makes the platform unrealistic for SMB and most mid-market organizations. Conductor is the right answer for global enterprises with six-figure SEO budgets, ten-plus domains, and a dedicated CSM relationship to operate the program. For everyone else, it is over-engineered for the use case.

Chapter 08

FancyAI: the execution layer for AI discovery

FancyAI is the youngest platform in this analysis and the only one positioned around execution as the product. The company publishes its methodology openly — a Signal Hierarchy framework derived from analysis of more than 40,000 websites and 129,000 domains, scored against an AI Readiness Index that quantifies a brand’s eligibility to be recommended by AI systems across four signal classes (entity clarity, citation density, structured proof, corroborating mentions).

The platform itself ($499/$799/$1,249 per month across Basic, Pro, and Advanced) carries the same monitoring and recommendations features the other tools provide. The departure is what comes attached. Each engagement bundles the dashboard with implementation hours (a $2,500/mo fulfillment package covering 30 hours of monthly execution work), content production (included rather than charged separately), and off-site media packages ($2,000 to $9,500 per month for PR, Reddit, LinkedIn, and citation development).

The structural argument that emerges from the matrix is what FancyAI is built to answer. Of the platforms compared, it is the only one that returns a “yes” on all three of the rows that decide the category — on-site implementation, content production, and off-site/earned media. The company’s tagline (“AI Visibility, Executed”) commits to closing the loop the rest of the category leaves open.

The candid trade-off: FancyAI is a newer brand than Conductor or Semrush, and enterprise procurement teams accustomed to evaluating dashboards may need to be educated on the managed-execution model. Pricing is published — a deliberate contrast to the sales-led incumbents.

Chapter 09

Why three rows decide the category

The most consequential rows in the matrix are not the ones the dashboards compete on. Citation tracking, recommendations engines, and platform coverage are now table stakes — every serious GEO product ships these. The differentiation lives downstream of the dashboard, in the rows where most platforms answer “no” or “partial.”

The reason this matters more in GEO than it did in SEO is empirical. FancyAI’s published methodology, drawing on Digital Bloom’s analysis of 129,000+ domains and the company’s own corpus, finds that 41 percent of AI brand recommendation weight comes from authoritative list mentions (third-party “best of” lists, roundups, directories) and not from on-site content at all. Another 18 percent comes from awards and accreditations; 16 percent from online reviews. The signals that move AI recommendations are predominantly off-site, earned, and editorial — precisely the work that monitoring tools cannot do for a customer.
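
A short sketch of what those weights imply for the share of recommendation weight a brand can even reach. The three itemized percentages are from the research above; grouping the remaining 25 percent as "on-site and other" is an assumption for illustration, since the source itemizes only the top three signals.

```python
# Published signal weights (Digital Bloom / FancyAI corpus); the
# residual 25% is grouped as "onsite_and_other" as an assumption.
WEIGHTS = {
    "authoritative_list_mentions": 0.41,
    "awards_and_accreditations": 0.18,
    "online_reviews": 0.16,
    "onsite_and_other": 0.25,
}

def reachable_weight(signals_present: set[str]) -> float:
    """Share of total recommendation weight the brand's signals touch."""
    return sum(w for s, w in WEIGHTS.items() if s in signals_present)

# A brand with a strong website but no earned media touches only 25%:
print(reachable_weight({"onsite_and_other"}))                    # 0.25
# Earning authoritative list mentions alone puts 66% in play:
print(reachable_weight({"onsite_and_other",
                        "authoritative_list_mentions"}))         # 0.66
```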

The Princeton GEO paper (Aggarwal et al., KDD 2024) reached a complementary finding from a different angle: lower-ranked sites applying GEO methods saw a 115 percent visibility lift, while top-ranked sites saw an average 30 percent decrease. The opportunity is asymmetrically large for brands that can do the work, not just observe their position.

Across the comparison, every platform either answers “no” on the off-site row, or hands the customer a playbook and steps back. One platform answers “yes” on all three rows that move the recommendation needle. That is the structural fault line buyers should weight most heavily.

“The mention is the signal. The link is almost irrelevant.”
Chapter 10

The verdict, written as a decision tree

No tool in this comparison is the right answer for every buyer. Ranking is the wrong frame. The right frame is fit:

  • If the constraint is insight at Fortune 500 scale — an enterprise SEO, content, and engineering team with the bandwidth to ship changes and the need for the deepest AI-bot crawl analytics in the market — Profound is the right choice.
  • If the goal is monitoring tied to programmatic ad activation — a brand or agency willing to fund a separate paid-media budget alongside the platform license — Evertune is the only tool in the category with The Trade Desk integrated.
  • If Semrush is already in the stack and the team wants AI visibility added incrementally rather than procured separately, Semrush’s AI Visibility Toolkit is the path of least resistance.
  • If the buyer is a mid-market marketing team shipping content internally and wanting strong cross-platform monitoring with on-site automation, Scrunch is the best-value self-serve option.
  • If the organization is a global enterprise with a six-figure SEO budget, ten-plus domains, and the procurement patience for a 30 to 90 day cycle, Conductor remains the deepest enterprise platform.
  • If the bottleneck is execution — if the dashboard would just confirm what the team already suspects without anyone with capacity to act on it — FancyAI is the only platform in the comparison that closes the loop. The work, not just the readout.

The category will continue consolidating around this distinction. Monitoring platforms will commoditize. Execution platforms will compound. Buyers who recognize where their organization actually breaks — insight or capacity — will choose accordingly.

Sources cited

  1. Mersel AI — “Mersel vs. Profound” analytics-vs-execution analysis (2026)
  2. Profound — Pricing and product navigation (tryprofound.com)
  3. Rankability — “Profound AI Review 2026”
  4. Mint AI / Indexly — Profound pricing breakdown (2026)
  5. Evertune — Homepage and product positioning (evertune.ai)
  6. Reddit r/evertune — Pricing discussion thread (2026)
  7. Semrush — Subscription plans and toolkit pricing (semrush.com)
  8. Trakkr — Semrush AI Visibility pricing review (2026)
  9. Scrunch AI — Pricing and plan details (scrunch.com)
  10. Cairrot — Scrunch AI review and alternatives
  11. Conductor — Pricing page (conductor.com)
  12. Vendr — Conductor 2026 procurement benchmarks (median $48,950)
  13. Checkthat.ai — Conductor pricing tiers and 3-year TCO
  14. Conductor Academy — SEO & AEO Pricing Guide
  15. FancyAI — Signal Hierarchy methodology and AI Readiness Index
  16. Aggarwal et al., Princeton/Georgia Tech/Allen AI/IIT Delhi — “GEO: Generative Engine Optimization,” ACM SIGKDD 2024

Want this measured against your brand?

Get your AI Readiness Index (ARI) score across ChatGPT, Gemini, Perplexity, Claude, and Grok — delivered in 24 hours.

AI is now citing AI. The 91.4% problem.

A Search Engine Land analysis found 91.4% of content cited in AI Overviews is at least partly AI-generated. A Columbia Journalism Review study found AI search engines wrong 60% of the time — and premium models were worse than free ones.

91.4%
Of content cited in AI Overviews is at least partly AI-generated
60%
Inaccurate or misleading answers across 8 AI engines (CJR)
17–33%
Legal AI hallucination rate (Stanford)
1,800
AI articles in the 2023 SEO heist that "stole" 3.6M visits
Chapter 01

The headline finding

Search Engine Land’s 2025 analysis of Google AI Overview citations surfaced one of the most consequential statistics in the GEO literature: approximately 91.4% of content cited in AI Overviews was at least partly AI-generated.

Read that twice. The system designed to synthesize the web’s most credible answers is overwhelmingly drawing on content the web’s machines made. Each citation feeds back into the training data for the next generation of models. The information ecosystem is becoming progressively more self-referential.

The phenomenon has a name in the academic literature: model collapse. UK and Canadian researchers covered by VentureBeat in 2024 demonstrated that “as AI-generated content proliferates around the internet, and AI models begin to train on it,” quality degrades exponentially with each generation. iPullRank calls the operational consequence “the content collapse”: each cycle of generation produces work that is “progressively more generic, less accurate, and less useful.”

“Each generation of AI content becomes progressively more generic, less accurate, and less useful.” — iPullRank
Chapter 02

The Columbia Journalism Review study

The most rigorous published audit of AI search accuracy comes from the Columbia Journalism Review’s Tow Center for Digital Journalism. CJR researchers tested eight major AI search engines against 200 news-sourced queries with known correct answers.

The headline result: chatbots provided inaccurate or misleading answers more than 60% of the time — nearly always without acknowledging any uncertainty. The most counterintuitive finding: premium AI models were more prone to confidently incorrect responses than their free counterparts. Paying more bought a more confident liar, not a more accurate one.

CJR also documented that “many of the AI companies developing these tools have not publicly expressed interest in working with news publishers” — the same publishers whose content trained the models in the first place.

Chapter 03

The 2023 SEO heist as canary

In late 2023, a SaaS founder publicly bragged about generating 1,800 articles using AI and “stealing” 3.6 million visits from a competitor. The articles required minimal human oversight. Google eventually applied a manual action against the site, but not before the experiment proved an uncomfortable truth: at scale, AI-generated content can be made to rank, get cited, and capture traffic that previously belonged to human-created authoritative sources.

The economics of digital pollution are bad. The cost to produce 1,800 mediocre articles is now under a hundred dollars. The cost to produce one well-researched, expertly written piece on the same topic is roughly the same as it was a decade ago. The arbitrage is brutal.

Peec AI’s analysis of Google’s 2024 spam action found that 100% of affected pages had AI-generated content, with half the affected sites being completely de-indexed. Detection improves; production costs fall faster.

Chapter 04

The PNAS perverse incentive

A Proceedings of the National Academy of Sciences paper added a third factor to the loop: when given a choice between human-written and AI-written text, large language models sometimes prefer the AI-written version.

The implication is structurally important. Brands optimizing for AI visibility now face a perverse incentive: if the cheapest path to citation is to publish AI-generated content because models prefer reading what other models wrote, the rational response is to flood the web with synthetic content. That is exactly what is happening. The 91.4% number is not a glitch — it’s the rational outcome of the incentive landscape.

Reddit r/science surfaced a corollary finding: nearly two-thirds of AI-generated citations are themselves inaccurate. The web of references AI engines cite is itself made by AI engines, and that web is wrong more often than it is right.

Chapter 05

Why human-grade content still wins

The contrarian finding inside the data is the operational one. Multiple controlled tests now show that human-generated content designed for AI extraction performs “up to an order of magnitude better” than AI-generated content on the same topics.

The mechanism: AI engines reward expertise signals (E-E-A-T), source citation density, and structural extractability. AI-generated content tends to lack first-person experience, novel data, and named authors with credentials. It is structurally legible but evidentially thin. Models cite it because it’s easy to parse; users don’t convert from it because it doesn’t teach them anything new.

The brands winning long-term in AI search are doing the opposite of the AI-generation arbitrage: they are publishing fewer pieces, written by named experts, with original data, structured for extraction but written for humans first.

Chapter 06

The trust paradox for GEO practitioners

The structural challenge for the entire GEO discipline is the trust paradox iPullRank articulated: brands and marketers are investing in strategies to earn citations and visibility — inside systems that are becoming less reliable.

The 91.4% problem is not just a content quality issue. It is a signal degradation issue. The systems brands optimize for are losing the ability to distinguish authoritative sources from synthetic ones. Citation share inside a degrading system is worth less every quarter.

Three operational responses worth considering:

  1. Increase the human-evidence signal density of every page. Named authors with credentials. Original data. First-person observation. The content that AI cannot mass-produce is the content that compounds in value.
  2. Invest in third-party validation pipelines. Trade publication coverage, podcast appearances, conference talks, expert interviews. Earned media that AI cannot generate is the proof of authority that survives the collapse.
  3. Treat AI-generated content as a tax, not a tool. Use it for first drafts and structural scaffolding; never publish without substantial expert rewrite. The economics still favor the slow path for brands that intend to be cited five years from now.

Sources cited

  1. Search Engine Land — “AI-generated content: Benefits, risks & SEO best practices”
  2. Columbia Journalism Review (Tow Center) — “AI Search Has a Citation Problem” (8 engines tested, 2025)
  3. VentureBeat — “The AI feedback loop: Researchers warn of model collapse” (UK/Canadian study, 2024)
  4. iPullRank — “The Content Collapse and AI Slop — A GEO Challenge”
  5. Peec AI — Google 2024 spam action analysis
  6. Proceedings of the National Academy of Sciences (PNAS) — LLM preference for AI-generated text
  7. Reddit r/science — Two-thirds of AI-generated citations inaccurate
  8. Semrush — “Can AI Content Rank on Google?” (20K blog URL analysis)

When AI lies about your company. A brand hallucination field guide.

Air Canada lost a tribunal ruling. Soundslice built a feature ChatGPT invented. Hoka watched AI quote outdated pricing to shoppers. Hallucination rates run from 17% to 90% depending on the domain — and 40% of users never check the source.

42.1%
Of users encounter inaccurate AI content; 40%+ never click through to verify
17–33%
Legal AI hallucination rate (Stanford)
28–90%
Medical AI hallucination rate range
16.78%
Have encountered unsafe or harmful AI advice
Chapter 01

The hallucination rate baseline

AI hallucination is not a fringe edge case. It is a measured, structural feature of probabilistic systems applied to deterministic domains. NeuralTrust’s 2025 analysis frames it bluntly: hallucinations are “an inherent risk of using probabilistic models in deterministic domains.” They are a feature, not a bug.

The published rates by domain:

  • Legal AI tools: Stanford research found 17–33% hallucination rates across major legal-research AI products.
  • Medical AI: documented hallucination rates range 28–90% depending on the system and query type.
  • General AI search: Columbia Journalism Review’s test of 8 engines: 60% inaccurate or misleading.
  • User-side: MarTech data shows 42.1% of users encounter inaccurate AI content; 16.78% have encountered unsafe or harmful AI advice; over 40% rarely or never click through to verify a source.

Translate that to brand math: every day, AI engines make millions of confidently incorrect statements about real businesses, and most users never cross-check.

“There are three kinds of lies: lies, damned lies, and hallucinations.” — UC Berkeley SCET
Chapter 02

The Air Canada precedent

In 2024, Air Canada’s customer-service chatbot invented a bereavement-fare policy that did not exist. A grieving passenger acted on the AI’s answer, booked the trip at full price expecting a refund, and was denied. He sued. The Canadian tribunal ruled the airline liable for the chatbot’s statements: if the AI is the customer-facing voice of the brand, the brand owns what the AI says.

The legal exposure is real and replicable. The National Law Review’s 2025 framing was unambiguous: “If your AI acts as an agent of your business, you likely bear responsibility for what it tells people.” The legal framework is still settling, but the early-precedent direction favors users harmed by hallucinations and not the brands deploying the AI.

Chapter 03

The Soundslice case: built what the AI invented

Soundslice, a music-software company, discovered ChatGPT was telling users it had an ASCII tab import feature. The product had no such feature. Users were signing up to use it, finding nothing, and churning. ChatGPT had hallucinated the capability into existence.

The company eventually built the feature — not because they had planned to, but because the AI had created enough demand that shipping it was cheaper than continuing to explain its absence. iPullRank documents this as the canonical example of AI-driven product-roadmap pollution: your AI-generated brand reality starts to shape your real product strategy.

Hoka faced a similar issue from a different direction: ChatGPT was showing prospective customers incorrect pricing pulled from outdated third-party sources. Even after Hoka updated their official pricing, the AI continued surfacing the wrong number. The lag between brand correction and AI retraining is months, sometimes longer.

Chapter 04

Why users don’t catch it

The user-behavior side of the hallucination problem is what makes it consequential at scale. Pew Research found only 1% of users click into the cited source when an AI Overview is shown. Bain’s consumer survey found ~80% of consumers now rely on zero-click results.

The trust paradox compounds: an Exploding Topics analysis of 2025 consumer trust data found 82% of users are skeptical of AI results, yet only 8% always check sources. Skepticism without verification creates the worst-case outcome — users who don’t fully trust AI answers but act on them anyway, then carry the misinformation into their decision-making.

For brands, this means: a hallucinated claim about your company in an AI Overview reaches users who are already skeptical of the medium, but who will act on the claim anyway, and will not check whether your website tells a different story.

Chapter 05

The four classes of brand-level hallucination

From the case literature, hallucinations affecting brands cluster into four reproducible patterns:

  1. Outdated training data. AI surfaces old pricing, old leadership, old policies, old product capabilities. The brand updates the website; the AI keeps quoting the old version for months.
  2. Brand confusion. RankScience documented a US consulting firm whose AI responses blended its history with a UK firm of similar name, making the US firm invisible on competitive queries. Fictional-character namesake collisions are the worst case.
  3. Fabricated specifics. Invented features, made-up partnerships, hallucinated awards, fake founder biographies. Soundslice is the textbook example.
  4. Negative narrative drift. Third-party criticism, outdated controversies, or competitor-comparison content getting surfaced as the dominant brand frame even when the underlying issue is resolved.
Chapter 06

The defensive GEO playbook

You cannot directly correct an AI model’s output. The only effective response is to flood the information ecosystem with accurate, structured, authoritative content that gives AI systems high-confidence material to draw from. Edelman’s 2025 framing: “Earned media is the single most important driver of brand visibility in AI-generated responses.”

The MarTech-validated defensive sequence:

  1. Audit systematically. Query all major AI platforms with brand-specific prompts (“What is [Brand]?” / “Who founded [Brand]?” / “What does [Brand] cost?”). Document every false claim. Track over time. A minimal sketch of this step follows the list.
  2. Diagnose the root cause. Is the wrong claim coming from outdated training data, or from a third-party source the model is retrieving live? Different fixes apply.
  3. Update owned content first. Make sure your website, knowledge graphs (Wikidata), and schema.org markup state the canonical truth clearly.
  4. Earn third-party corrections. Get accurate information published on the authoritative sites the AI engines crawl. The mention is the signal.
  5. Use platform feedback mechanisms. Where available (ChatGPT thumbs-down, Perplexity feedback), report incorrect statements about your brand. Slow but cumulative.
  6. Monitor over time. Model retraining cycles are months. Check quarterly that corrections have propagated.
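
A minimal sketch of step 1, using the OpenAI Python SDK as one example backend; the other engines would need their own clients. The brand name, prompt list, model choice, and JSONL log format are all illustrative assumptions, not a prescribed setup.

```python
# Minimal brand-audit sketch: query one engine with brand prompts and
# log timestamped answers for longitudinal comparison. Extending this
# to Gemini, Claude, and Perplexity means swapping in their clients.
import datetime
import json

from openai import OpenAI

client = OpenAI()        # assumes OPENAI_API_KEY is set in the environment
BRAND = "ExampleCo"      # hypothetical brand
PROMPTS = [
    f"What is {BRAND}?",
    f"Who founded {BRAND}?",
    f"What does {BRAND} cost?",
]

with open("brand_audit.jsonl", "a") as log:
    for prompt in PROMPTS:
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model choice
            messages=[{"role": "user", "content": prompt}],
        )
        log.write(json.dumps({
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "engine": "chatgpt",
            "prompt": prompt,
            "answer": resp.choices[0].message.content,
        }) + "\n")
# Diff each quarter's log against a canonical fact sheet to catch
# outdated pricing, leadership, or invented features early (step 6).
```
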
Chapter 07

The insurance market emerges

A measure of how seriously the brand-risk side is being taken: insurance products specifically covering AI-related brand damage are now in market. Munich Re launched aiSure, covering AI performance failures. Willis Towers Watson partnered with Liberty Specialty Markets on similar coverage. Lloyd’s syndicates are underwriting AI hallucination liability policies.

The premiums and exclusions are still being calibrated, but the existence of the market is itself a signal. When reinsurers are willing to write coverage, they have priced the risk. The hallucination-driven brand-damage exposure is now a balance-sheet item, not a marketing concern.

For CMOs and CISOs: budget for AI brand monitoring is no longer optional. The cost of catching a hallucination early is measured in monitoring tool licenses. The cost of catching it late is measured in tribunal rulings, lost customer trust, and insurance premiums.

Sources cited

  1. Stanford — Legal AI hallucination study (17–33%)
  2. National Law Review — “AI Hallucinations Are Creating Real-World Risks for Businesses”
  3. Air Canada v. Moffatt — Canadian tribunal ruling (2024)
  4. iPullRank — Soundslice case study + AI hallucination epidemic analysis
  5. Pew Research Center — AI Overview click-rate behavioral study
  6. MarTech — “How to Protect and Control Your Brand Reputation in AI Search”
  7. Yoast — “When AI Gets Your Brand Wrong: Real Examples and How to Fix It”
  8. Edelman — “How Brands Stay Visible in an AI-Driven Search World”
  9. Munich Re aiSure / Willis Towers Watson — AI insurance market data

“Extinction-level event.” How AI search is restructuring the open web.

NPR’s framing for what publishers are facing. The Daily Mail’s vice chair: 50% of traffic gone in five years. 500+ lawsuits. A handful of platforms now control how billions discover information.

50%
Of publisher traffic projected to disappear within five years (Daily Mail vice chair)
25%
Of Google searches now trigger an AI summary
500+
Publisher lawsuits filed against AI platforms
$50M
News Corp annual licensing deal with OpenAI
Chapter 01

The NPR framing

NPR’s July 2025 reporting on AI search’s impact on publishers used a phrase that has stuck in the industry: “an extinction-level event.” The framing is dramatic but not hyperbolic. The data behind it is consistent across independent measurements.

TechCrunch documented “referrals to news sites are plummeting, cutting off the traffic publishers need to sustain quality journalism.” Search Engine Journal quoted publisher leadership: “We’re definitely moving into the era of lower clicks and lower referral traffic for publishers” — the golden age of search traffic is ending. Digital Content Next’s August 2025 member survey found median year-over-year referral traffic from Google declined sharply after AI Overviews expanded.

The cause is structural: when 25% of Google searches now trigger an AI summary that answers the user’s question without requiring a click, the publisher economics break. Ad revenue collapses. Subscription conversion paths break. Editorial budgets shrink. Less content gets produced. Less authoritative content exists for AI engines to draw from. The loop closes badly.

“All publishers could see 50 percent of their traffic gone in five years.” — Rich Caccappolo, Vice Chair, Daily Mail parent company
Chapter 02

The Daily Mail projection

Rich Caccappolo, vice chair of Media at the Daily Mail’s parent company, told The Atlantic in June 2025 that all publishers “could see 50 percent of their traffic gone in five years.” The Daily Mail is one of the world’s largest English-language news properties. The projection came from inside the most resource-equipped tier of publishing.

The Atlantic’s own reporting captured the broader sentiment: “I’ve spoken with several news publishers, all of whom see AI as a near-term existential threat to their business.” Futurism documented that the steepest declines occurred in mid-2025 specifically — the moment Google expanded AI Overview coverage from a small share of queries to roughly a quarter of all searches.

Chapter 03

The litigation front

The legal response is the largest publisher-vs-platform litigation cluster in the history of the open web. 500+ publications have filed or joined lawsuits against AI search platforms. The most prominent:

  • The New York Times sued OpenAI and Perplexity for copyright infringement.
  • Dow Jones joined the wave of suits.
  • Google faces an EU antitrust investigation specifically about AI Overviews using publisher content.
  • Wolf River Electric’s defamation suit against Google is testing whether AI-generated false claims meet the legal standard for libel.

The legal question underneath all of these cases is unsettled: do AI engines have the same Section 230 protections as traditional search engines? The American Bar Association’s November 2024 analysis noted “the absence of comprehensive AI regulation clearly defining the contours and applicability of Section 230 immunity in the context of generative AI.”

Chapter 04

The licensing alternative

Some publishers have chosen the negotiation path instead of litigation. News Corp signed a $50M-per-year licensing deal with OpenAI. The Associated Press has a similar agreement. Other major outlets have followed.

The split strategy reflects publisher uncertainty about the better path. Sue for damages and clarification of legal standards? Or negotiate for predictable revenue and a seat at the table?

The math favors negotiation only for the largest publishers. A $50M annual licensing deal is meaningful against News Corp’s revenue base. Scaled down to mid-tier publishers, the proportional payments are too small to offset the traffic they are meant to replace. Local news, trade publications, and most B2B media outlets are not getting licensing offers and would not survive on the proportional equivalent if they did.

Chapter 05

The 25% threshold

Futurism’s 2025 analysis identified the inflection point: by mid-2025, roughly 25% of all Google searches triggered an AI summary. That number is the threshold at which publisher traffic loss becomes acute rather than chronic.

Below 10% AI Overview saturation, publishers absorbed the loss as a manageable headwind. Above 25%, the loss broke the unit economics of ad-supported publishing for everyone except the largest players. Above an estimated 50% saturation — which Google’s trajectory suggests is reachable within 24 months — entire categories of publishing become unviable.

The categories most exposed: top-of-funnel informational content (the standard SEO-driven blog model), review and comparison content (synthesized away by AI Overviews), and how-to and tutorial content (also synthesized). The categories most insulated: opinion, original reporting, breaking news, and brand-relationship-driven subscription content.

Chapter 06

The affiliate-marketing collapse

The collateral damage extends beyond traditional publishers. Affiliate marketing — the entire industry of review-driven product comparison content and recommendation sites — faces a more acute version of the same dynamic.

Affiversemedia’s 2025 analysis: “Users increasingly discover, evaluate, and decide on purchases within AI environments before ever clicking an affiliate link.” Acceleration Partners reported “significant drops in organic traffic because these AI search summaries don’t drive clicks.”

The affiliate model isn’t dead — coupons, loyalty programs, and partner networks remain viable — but the SEO-driven affiliate playbook of the past 15 years (write “best of” lists, rank organically, capture commission on click-through) is being structurally disintermediated. The brands that adapt are the ones building direct authority that earns AI citations rather than relying on the click.

Chapter 07

Concentration: from open web to four platforms

Forbes’s April 2025 framing of the macro shift was precise: “We’re shifting from the chaos of the open web to the centralization of a few AI platforms.” AdExchanger’s parallel analysis: “Generative AI didn’t just transform search results; it upended how monetization works on the open web.”

The concentration math: four companies (Google, OpenAI, Microsoft, Anthropic) now control roughly the entirety of mainstream AI-mediated information discovery. Google’s historic search dominance was partial because users had alternatives (Bing, DuckDuckGo, vertical search). The AI search consolidation is qualitatively different — the four-platform concentration is more pronounced and the user behavior shift is one-directional.

The downstream consequences run through the entire information ecosystem: 32% of US/UK consumers now say AI is negatively disrupting the creator economy (up from 18% in 2023). Small website owners face the resource gap most acutely — GEO monitoring tools, content optimization, and brand building all require investment they may not have.

Chapter 08

What survives, and what brands should do about it

The publishers, brands, and content categories that survive the next 24–36 months will be the ones that adapt to one structural reality: the click is no longer the unit of value. Citation share, brand-mention frequency, and AI-mediated visibility are the new metrics.

For brand operators reading this as buyers (not as publishers), three implications matter:

  1. If your category is heavily mediated by news and review content — consumer products, B2B SaaS, healthcare, financial services — the third-party validation pipeline you used to depend on is shrinking. The brands that earn citation share will increasingly do so through direct authority signals, not by hoping a review aggregator picks them up.
  2. If your acquisition funnel depended on top-of-funnel SEO content, that channel is collapsing. Move budget toward authority-building content (original research, named-author publishing, expert positioning) and earned media (PR, podcast appearances, conference talks).
  3. If you publish content as a brand, you are now operating in the same competitive environment as the publishers being affected by this shift. The bar is higher; the rewards are concentrated. The middle is being hollowed out.

Sources cited

  1. NPR — “Online news publishers face ‘extinction-level event’” (July 2025)
  2. The Atlantic — “AI Is Already Crushing the News Industry” (June 2025)
  3. TechCrunch — “Google’s AI search features are killing traffic to publishers”
  4. Digital Content Next — Member survey (August 2025)
  5. Futurism — Google AI Overview saturation analysis
  6. Forbes Tech Council — “AI And The Future Of Search”
  7. AdExchanger — “The AI Search Reckoning Is Dismantling Open Web Traffic”
  8. Press Gazette — Publisher lawsuits and licensing deals tracker
  9. American Bar Association — Section 230 / generative AI analysis

The honest skeptic’s case against GEO.

Rand Fishkin: fewer than 1 in 100 prompt runs return the same brands. Profound: 40–60% of cited domains change in a month. A founder shut down his GEO tool after concluding it was just good marketing. The strongest counter-arguments deserve a fair hearing.

< 1 in 100
Prompt runs that produce the same brand list (Rand Fishkin / SparkToro)
40–60%
Of cited domains change within one month (Profound)
< 1/3
Of SEO leaders maintain consistent GEO terminology
80%
Of GEO is repackaged fundamental SEO, per the strongest critics
Chapter 01

Why this article exists

Most published GEO content is sold by people whose business depends on GEO being important. That is not a disqualifying conflict, but it is a real one. The strongest case against the discipline deserves a hearing on its own terms — not a strawman, not a polite acknowledgment before the inevitable rebuttal, but the actual argument as its proponents make it.

What follows is the most credible version of the skeptical position, drawn from named practitioners with deep SEO experience and data-backed reasons for their skepticism. Where the position is right, the article says so. Where the position over-reaches, the article says that too. The goal is the honest assessment a serious buyer should be able to make before committing budget.

Chapter 02

Fishkin’s consistency finding

The most empirically devastating critique comes from Rand Fishkin’s SparkToro research published in 2025. The methodology was straightforward: ask the same brand-recommendation question of an AI engine, repeatedly, and measure how consistent the responses are.

The findings:

  • Fewer than 1 in 100 prompt runs produce the same list of brands.
  • Fewer than 1 in 1,000 produce the same list in the same order.
  • The probability of identical lists in identical order is under 0.1% regardless of platform, vertical, or query phrasing.

The implication, in Fishkin’s framing: any tool that gives a “ranking position in AI” is full of baloney. AI engines are probability machines designed to generate unique answers every time. Treating them as deterministic ranking systems is, in the technical sense of the word, nonsensical.

This critique is correct. It does not make GEO irrelevant — visibility percentage measured across many prompt repetitions is a valid metric — but it does invalidate a specific class of GEO marketing claims that promise “rank #1 in ChatGPT” or “guaranteed Perplexity placement.” Buyers should treat any vendor making such claims as either uninformed or dishonest.
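
The distinction is easy to express in code: exact-list consistency (the number Fishkin shows is near zero) versus per-brand visibility rate (the metric that survives the critique). The sketch below assumes the inputs are brand lists already parsed from repeated runs of the same prompt.

```python
# Two metrics over N runs of the same brand-recommendation prompt.
from collections import Counter

def exact_list_consistency(runs: list[list[str]]) -> float:
    """Share of runs matching the single most common brand list."""
    most_common_count = Counter(tuple(r) for r in runs).most_common(1)[0][1]
    return most_common_count / len(runs)

def visibility_rate(runs: list[list[str]], brand: str) -> float:
    """Fraction of runs in which the brand appears at all."""
    return sum(brand in r for r in runs) / len(runs)

runs = [["A", "B", "C"], ["B", "A", "D"], ["A", "C", "E"], ["B", "C", "A"]]
print(exact_list_consistency(runs))  # 0.25 (all four lists differ, so 1/N)
print(visibility_rate(runs, "A"))    # 1.0  (yet brand "A" appears every run)
```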

“Any tool that gives you a ‘ranking position in AI’ is full of baloney.” — Rand Fishkin, SparkToro
Chapter 03

Profound’s citation drift research

Profound — one of the largest GEO platforms by market presence — published research that arguably undermines the durability of its own category. The findings:

  • 40–60% of domains cited in AI responses change completely within one month.
  • 70–90% change over six months.

The interpretation: AI citation positions are radically more volatile than Google rankings ever were. A site that wins citation share this quarter has no strong reason to expect that share next quarter. The optimization investment may not compound the way SEO investment historically did.

This critique is partially correct. The citation drift is real and measurable. But the interpretation cuts both ways: the same volatility means challenger brands can capture share quickly, in ways that were structurally impossible against incumbent SEO authority moats. The volatility is bad for incumbents and good for challengers. Whether your organization sees it as a threat or an opportunity depends on which side of that line you sit on.
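
The drift measurement itself is simple to reproduce. A sketch, assuming two monthly snapshots of the domains cited for the same query set; the domains below are illustrative, and Profound's published range is the 40–60 percent this toy example happens to land in.

```python
# Citation churn between two monthly snapshots of cited domains.
def citation_churn(month_a: set[str], month_b: set[str]) -> float:
    """Share of month A's cited domains no longer cited in month B."""
    if not month_a:
        return 0.0
    return len(month_a - month_b) / len(month_a)

jan = {"nytimes.com", "g2.com", "reddit.com", "wired.com", "forbes.com"}
feb = {"g2.com", "reddit.com", "capterra.com", "techradar.com"}
print(citation_churn(jan, feb))  # 0.6: three of January's five domains dropped
```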

Chapter 04

Why Lorelight shut down

The most damaging critique of the GEO industry came from inside the GEO industry. Ben Goodey built Lorelight, an AI visibility tracking tool, then shut it down in 2025. His public reasoning was unusually candid.

The conclusion that ended the company: “Everything companies needed for GEO was the same as what they already needed for good marketing.” Lorelight experienced significant churn because GEO “currently has no long-term strategy” — tactics were “hacks more than strategies.”

From Goodey’s exit interview with Section: “Companies mostly expected a magic bullet. They thought they would log in and it would say, ‘if you do this one thing, suddenly you will be number one in ChatGPT.’ Instead they got: ‘if you keep doing the hard work, then you will show up.’”

This is the strongest skeptic argument because it comes from someone who tried to build the product, learned what wasn’t working, and was honest about the result. The fair response: Goodey is right that GEO is not a magic bullet. He is also right that 80% of the work overlaps with foundational marketing. He is wrong, in the read of multiple operators who continued investing, that the remaining 20% doesn’t matter — that 20% is exactly where citation share is decided.

Chapter 05

The snake-oil critique

Kai Spriestersbach, the AI researcher and SEO veteran widely cited in the European GEO conversation, made the bluntest version of the skeptic argument in Business Insider:

“Everyone is going crazy about becoming the next agency — all the side hustlers and snake oil sellers with their tools already on the train and riding the hype.”

The market reality validates the framing. 20+ GEO tools launched in 2025 alone. Several have raised at $100M+ valuations despite operating in a field where, as Spriestersbach notes, no one has proven, replicable methodologies. Lily Ray (Amsive) and Jeremy Moser (uSERP) have both publicly warned that “80% of GEO is good fundamental SEO” and that anyone claiming otherwise is selling snake oil.

Where the critique is right: the industry has attracted bad actors. Where the critique over-reaches: 80% overlap means 20% net-new discipline, and that 20% is where competitive positioning gets decided. The buyer’s test is not whether GEO has snake oil — every emerging discipline does — but whether the specific vendor in front of you publishes their methodology, cites primary research, and can show measurable lift attributable to their work.

Chapter 06

Where Google’s own representatives draw the line

The most institutional version of the skeptic position comes from inside Google. The platform’s public-facing representatives have been remarkably consistent:

  • John Mueller (Search Advocate, January 2026): “There is no such thing as GEO or AEO without doing SEO fundamentals.”
  • Nick Fox (Google): “Optimizing for AI search is the same as optimizing for traditional search.”
  • Danny Sullivan (Search Liaison): has emphasized that any GEO tools advising content designed “solely for rank and visibility purposes” lose “track of the big picture.”

The position is consistent across the company and over time: SEO fundamentals matter, manipulation will be resisted, there are no shortcuts.

This is the most credible institutional skeptic position because it comes from the platform that benefits most when SEO and GEO converge. Google’s incentive is to keep the optimization disciplines unified under its rules. The position is also partially correct on the evidence — Bounteous research found 99% of URLs in Google AI Mode appear in the top 20 organic search results. SEO foundations carry through to AI visibility.

Where the platform position over-reaches: it understates the 20% of GEO that is genuinely different from SEO — entity optimization, passage-level structure, cross-platform consistency, earned-media gravity, community presence. These are net-new disciplines that classical SEO does not cover.

Chapter 07

The thoughtful counter-position

The honest synthesis: every credible skeptic critique above is partially correct. None of them, taken individually or collectively, justify ignoring the underlying shift.

What the skeptics get right:

  • AI engines are not deterministic ranking systems; vendors selling “ranking positions” are misleading buyers.
  • Citation drift is real; optimization gains are less durable than SEO equivalents were.
  • 80% of GEO overlaps with foundational marketing; agencies pretending otherwise are overstating their differentiation.
  • The industry has attracted bad actors; due diligence on vendors is essential.
  • SEO fundamentals carry through; teams that ignore them while chasing GEO tactics are building on sand.

What the skeptics get wrong:

  • The 20% of GEO that is net-new (entity, passage, cross-platform, earned, community) is where competitive positioning is increasingly decided.
  • Citation drift cuts both ways; challengers gain share faster than they ever could against incumbent SEO moats.
  • “It’s just good marketing” understates how the structure of marketing has changed when 60% of consumer information starts in an AI conversation.
  • The discipline being immature doesn’t mean the underlying shift isn’t real; it means the playbook is still being written.

The honest buyer’s position is somewhere in the middle. Skepticism toward GEO vendors is healthy. Investment in the underlying capability is rational. The question is not whether to engage. It is which vendor, at what scale, with what measurement framework, and on what timeline.

Sources cited

  1. Rand Fishkin / SparkToro — “AIs are highly inconsistent when recommending brands” (2025)
  2. Profound — Citation drift research
  3. Section / Ben Goodey interview — “Is SEO dead or is GEO hype?”
  4. Business Insider — “AI Search Reshapes SEO, Fueling GEO Gold Rush” (Spriestersbach quote)
  5. MarTech / Mike Maynard — “GEO isn’t a fad — but most GEO tactics won’t survive”
  6. Search Engine Land — Mueller, Fox, Sullivan statements (multiple, 2025–2026)
  7. Bounteous — Google AI Mode top-20 organic citation overlap
  8. Search Engine Land — 75 SEO thought leader sentiment analysis

Black hat GEO: the manipulation playbook (and why it’s doomed).

Three categories of manipulation are spreading: data poisoning, citation stuffing, and hidden prompt injection. Harvard demonstrated text sequences that force LLM outputs. Reboot Online ran a negative GEO experiment against itself. The platforms are evolving faster than the attackers.

3
Categories of GEO manipulation now in active use across the industry
40%
Of ChatGPT insights pulled from Reddit (astroturfing target)
95%
Of consumers check reviews before purchasing
100%
Of Google’s 2024 spam action targets had AI-generated content
Chapter 01

Where the line is

TigerTracks, in one of the cleanest published framings, drew the operational distinction: “Optimization focuses on structural clarity and technical transparency. Manipulation focuses on deceptive influence.”

The line is not always crisp, but the categories are now well-documented. Three classes of GEO manipulation have emerged from the practitioner literature, each a direct descendant of a black-hat SEO tactic adapted for the synthesis-based logic of AI engines. Each is more dangerous than its SEO predecessor because AI synthesizes a single authoritative-sounding answer rather than presenting ten blue links, so manipulation has outsized consequences: users get one “truth” instead of multiple perspectives.

Chapter 02

Class 1 — Data poisoning and synthetic consensus

The cleanest version: use AI to generate hundreds of articles on a topic, all making the same claim, published across a network of mid-authority sites. The articles never quite reach top-tier publications, but they create enough repetition across the web that AI engines treat the manufactured consensus as evidence.

The 2023 SEO heist was the canary — 1,800 AI-generated articles, 3.6 million visits stolen from a competitor, public bragging on social media. Google eventually applied a manual action. The economics still favor the attacker: generating 1,800 mediocre articles costs under $100 today. The detection burden falls on platforms and publishers.

The astroturfing variant targets Reddit specifically because ~40% of ChatGPT’s insights are pulled from Reddit. Coordinated comment campaigns, fake review sites linking to manufactured discussions, AI-generated “authentic” user voices. Evertune AI’s warning is precise: “If Reddit detects and removes astroturfed content, that manipulated data won’t make it into the datasets that train AI models” — but the catch rate is far below 100%, and what slips through can persist in training data for years.

“Optimization helps the engine perform its duty. Manipulation hijacks the model’s decision logic through noise.” — TigerTracks
Chapter 03

Class 2 — Citation stuffing and link farm 2.0

The second class adapts the link-farm playbook to citation-driven discovery. The mechanic: build a network of sites that cite each other in the specific patterns AI engines use to assess source authority — comparison tables, “best of” lists, expert roundups — then game the inclusion criteria.

Sports Illustrated’s 2023 incident was the high-profile cautionary example: AI-generated articles published under fake writer profiles with synthetic credentials. The credibility damage outlasted the traffic gain by years.

The defensive read: AI engines already weight third-party citations heavily, which makes citation stuffing tempting. The empirical read: platforms are getting better at recognizing the patterns. Google’s 2024 spam action affected pages with 100% AI-generated content and de-indexed half of those sites. The arbitrage window is narrowing.

Chapter 04

Class 3 — Hidden prompt injection and the Harvard finding

The most technically sophisticated class is the most concerning long-term. Harvard researchers demonstrated “strategic text sequences” — nonsensical-looking character strings that, when added to product pages or reviews, can force LLMs to generate specific outputs. The sequences look like garbled noise to humans but encode instructions the model interprets as commands.

Aounon Kumar, the Harvard researcher who published the work, framed the implication: “The challenge lies in anticipating and defending against a constantly evolving landscape of adversarial techniques.” Princeton’s Ameet Deshpande, co-author of the foundational GEO paper, was equally candid: “It’s a cat and mouse game. These generative engines are not static, and they’re also black boxes.”

The downstream concern: if adversarial content creators successfully game these systems at scale, “a lot of traffic is going to go to them, and 0% will go to good content creators.” The structural fairness of AI-mediated discovery depends on platforms staying ahead of the adversarial techniques. So far they have. The asymmetry favors them — they have the model weights and the detection infrastructure — but the cat-and-mouse dynamic is permanent.

Chapter 05

The negative GEO threat

The dark mirror of the manipulation playbook: negative GEO, where competitors attack your brand by publishing strategically placed negative content that AI engines pick up.

Reboot Online’s 2025 negative GEO experiment was the first published demonstration. Researchers planted negative claims about a test target across mid-authority sites and measured how often AI engines surfaced the manufactured negative narrative. Perplexity repeatedly cited the test sites with the fabricated negative claims. ChatGPT showed more resistance but was not immune.

The implication for brands: defensive GEO is not optional. Even if your team has no interest in offensive optimization, your competitors may. The brands with strong owned-content authority and earned-media presence are insulated; the brands without are exposed to attacks they cannot directly counter.

Chapter 06

Why platform pushback is consistent

The most underweighted signal in the manipulation conversation is platform consistency. Every major AI platform’s public-facing representatives have published essentially the same position:

  • Danny Sullivan (Google): Best practices centered on genuine helpfulness and authority will win long-term.
  • Krishna Madhavan (Microsoft Bing): No shortcuts; manipulation will be resisted.
  • Jesse Dwyer (Perplexity): Platform resistance to manipulation is a core engineering investment.

The institutional incentive aligns with the public statement: platforms cannot afford to lose user trust to manipulated outputs. Google’s 2024 spam action and ongoing model updates demonstrate the operational follow-through. The arms race favors platforms because they have the model weights, the detection telemetry, and the existential motivation.

The historical analogy: the first decade of SEO had a similar dynamic. White-hat operators built durable practices; black-hat operators captured short-term gains and were eventually punished. The asymmetry between durable and fleeting positions widens as platforms mature. GEO is in roughly the equivalent of SEO’s 2007 — the manipulation tactics work today, will mostly be detected within 18–24 months, and will leave the brands that depended on them holding the bag.

Chapter 07

The legal exposure most operators are missing

The under-discussed risk in the manipulation conversation is the legal one. Several of the techniques in active use are illegal, not merely against platform terms of service:

  • Astroturfing is illegal in many jurisdictions, including under FTC consumer protection authority.
  • Fake reviews have triggered FTC enforcement actions, including $25M fines in recent cases.
  • Synthetic personas with fake credentials may constitute fraud depending on context and damages.
  • Hidden prompt injection has not been tested in court, but legal scholars consider it likely to be characterized as deceptive practice.

For brands evaluating GEO vendors: any vendor whose methodology relies on these techniques is exposing the buyer’s brand to legal risk in addition to platform risk. The due diligence question that should be asked is direct: “Walk me through every action you take that creates content, citations, or third-party signals on our behalf, and tell me whether each one would survive an FTC inquiry.” Vendors who can’t answer that question cleanly are not vendors a serious brand should engage.

Sources cited

  1. TigerTracks — “The Ethics of GEO: Where Performance Optimization Ends and Manipulation Begins”
  2. iPullRank — “Trust, Truth, and the Invisible Algorithm”
  3. Aounon Kumar (Harvard) — Strategic text sequences research
  4. Aggarwal et al. (Princeton) — GEO paper, adversarial techniques discussion
  5. Reboot Online — Negative GEO experiment (2025)
  6. Evertune AI — “7 Rules For Reddit Engagement That Improves AI Visibility”
  7. Search Engine Land / Jason Tabeling — “Black hat GEO is real”
  8. Similarweb — “Negative GEO: How Competitors Can Harm Your Reputation on AI”
  9. Peec AI — Google 2024 spam action analysis

GEO ethics in 2026: no framework, growing stakes.

No industry body. No code of ethics. No enforcement mechanism. As 37% of consumers start searches with AI and 82% are skeptical of the answers, the discipline is being built on every operator’s individual judgment.

0
Industry-wide GEO ethics frameworks adopted as of 2026
37%
Of consumers start searches with AI, not Google
82%
Are skeptical of AI results
8%
Always check sources
Chapter 01

The ethical imperative iPullRank named

iPullRank’s Michael King has done the most rigorous published thinking on the GEO ethics question. The frame: “engineer relevance responsibly, or allow the machines to engineer our reality for us.”

The choice is not theoretical. AI systems “perform truth rather than presenting it” — they generate single authoritative-sounding answers without the disclaimers and confidence-interval signaling that academic research carries. Users receive AI outputs with disproportionate trust. Manipulation of those outputs has disproportionate consequences.

The structural problem: no industry body has adopted a code of ethics for GEO. No certification, no standards, no enforcement mechanism. Each practitioner makes their own decisions about where the line sits. The most credible operators have published their own frameworks; the least credible have not.

“The invisible algorithm’s most visible impact may be whether we choose to engineer relevance responsibly or allow the machines to engineer our reality for us.” — iPullRank
Chapter 02

TigerTracks’ line

TigerTracks proposed the cleanest operational distinction in the published literature: optimization helps the engine perform its duty to the user; manipulation hijacks the model’s decision logic through noise.

The test is functional, not procedural. Three diagnostic questions for any GEO tactic:

  1. Is the underlying claim true? If the optimization makes a true claim more findable, it’s helping the engine. If it makes a false claim more findable, it’s manipulating it.
  2. Would a knowledgeable human reviewer agree your brand belongs in the answer? If yes, the tactic is structural advocacy for an honest position. If no, the tactic is engineering an outcome the evidence doesn’t support.
  3. Does the tactic depend on the AI engine not noticing? If yes, it’s manipulation by definition. If the same tactic survives full transparency to the platform, it’s optimization.

Tactics that pass all three tests: structured content, schema markup, accurate brand entity definition, earned-media outreach, original research publishing. Tactics that fail any one test: synthetic consensus generation, fake reviews, hidden prompt injection, fabricated credentials.

Chapter 03

The bias dimension

UNESCO’s ongoing work on AI ethics has surfaced a less-discussed GEO concern: AI systems amplify the biases embedded in their training data. Gender, racial, cultural, and ideological biases that exist in the underlying corpus get propagated into AI recommendations.

Stanford GSB research on AI political bias found that both Republicans and Democrats perceived left-leaning bias in LLMs’ discussion of contentious topics. The directional finding is less important than the structural one: AI engines are not neutral arbiters. They reflect the perspectives of the corpora they were trained on and the humans who reinforced them.

For brands operating in politically or culturally sensitive categories (healthcare, education, financial services, news), the implication is operational: even unimpeachable optimization tactics can produce outputs that some user segments perceive as biased. The defensive read: brands need to monitor not just whether they’re cited but in what framing.

Chapter 04

The trust paradox

Exploding Topics’ 2025 consumer-trust research surfaced the paradox that defines the user-side ethics question:

  • 82% of users are skeptical of AI results.
  • Only 8% always verify sources.
  • Forbes found AI search results are more trusted than ads; consumers describe AI as “less cluttered” than search.
  • 37% of consumers now start searches with AI instead of Google (Search Engine Land).

The pattern: users distrust AI in principle and depend on it in practice. The skepticism does not translate to verification behavior. The result is the worst-case dynamic for GEO ethics — a user base that knows AI can be wrong but acts as if it were right, and a brand-side incentive structure that rewards the brands whose optimization is most aggressive whether or not the underlying claims are accurate.

Chapter 05

What ethical operators publish

In the absence of an industry framework, the most credible operators have published their own ethical commitments. The patterns that emerge across them:

  1. Disclose methodology. If a vendor cannot explain in plain language what they do for clients, the methodology is probably either trivial or dishonest. The credible operators publish their playbooks.
  2. Cite primary sources. Claims about AI engine behavior should be backed by named studies, not asserted. The credible operators link to their evidence.
  3. Refuse synthetic-volume tactics. Mass AI-generated content, fake reviews, synthetic personas. The credible operators publicly reject these.
  4. Document client commitments. What you will and will not do on a client’s behalf, in writing, before the engagement starts.
  5. Acknowledge uncertainty. AI engines are evolving black boxes. Vendors who claim certainty about future behavior are either selling something or not paying attention.

Aleyda Solis, who runs Orainti and the SEOFOMO newsletter (35K+ subscribers), has been among the most consistent published voices on the ethics question. Her central warning: “If you treat AI search solely as a performance channel — expecting traffic and revenue from every inclusion in AI answers — you’ll set yourself up for disappointment.” Treating GEO purely as performance optimization, not as relationship-building with the information ecosystem, is the path that leads operators across the ethical line.

Chapter 06

The buyer’s diligence framework

Until an industry body publishes a code of ethics, buyers are the de facto enforcement mechanism. The questions a serious buyer should ask any GEO vendor:

  1. Walk me through every type of action you take on a client’s behalf. Specific, not generic. Get the list in writing.
  2. Which of these actions involve creating content, citations, or signals that originate from sources other than the client? If the answer is “none,” the engagement is probably purely advisory. If the answer involves third parties, dig in.
  3. Do you deploy synthetic personas, publish AI-generated reviews, or run coordinated comment campaigns? The honest answer should be “no.” If it isn’t, walk away.
  4. How do you measure success? If the answer involves “ranking position,” the vendor either misunderstands AI engines or is misleading you.
  5. What happens to the brand’s reputation if a platform flags any of your tactics? The vendor should have thought about this. If they haven’t, the buyer is carrying the risk alone.

The brands that emerge from the next 24 months with strong AI visibility and intact reputations will be the ones whose vendors could answer those questions cleanly.

Sources cited

  1. iPullRank (Michael King) — “Trust, Truth, and the Invisible Algorithm”
  2. TigerTracks — “The Ethics of GEO”
  3. UNESCO — AI ethics research and bias documentation
  4. Stanford GSB — Political bias in LLM outputs
  5. Exploding Topics — “The AI Trust Gap: 82% Are Skeptical, Yet Only 8% Always Check Sources”
  6. Forbes — “AI Search Results More Trusted Than Ads”
  7. Search Engine Land — Aleyda Solis on AI search ethics
  8. LSEO, MaximusLabs, Nuoptima, Blazly — Vendor-published ethics frameworks

Want this measured against your brand?

Get your AI Readiness Index (ARI) score across ChatGPT, Gemini, Perplexity, Claude, and Grok — delivered in 24 hours.


The legal front: 500+ lawsuits, antitrust, and AI defamation.

The New York Times sued OpenAI and Perplexity. Google faces EU antitrust over AI Overviews. The Section 230 question is unsettled. Wolf River Electric is testing AI defamation in court. Every brand operating in AI search needs a legal briefing.

500+
Publisher lawsuits filed against AI search platforms
$50M
News Corp annual licensing deal with OpenAI
2026
EU AI Act labeling requirements take effect
Unsettled
Section 230 protection for AI-generated content
Chapter 01

The litigation landscape

The publisher-vs-platform litigation cluster is the largest in the history of the open web. 500+ publications have filed or joined lawsuits against AI search platforms. The named cases that matter most for setting precedent:

  • The New York Times v. OpenAI & Microsoft — the most watched case; tests whether training and serving from copyrighted content constitutes infringement.
  • The New York Times v. Perplexity — separate case, focused on real-time citation rather than training.
  • Dow Jones v. Perplexity — joining the wave of suits over verbatim and near-verbatim use of paywalled content.
  • EU Commission antitrust investigation of Google — specifically about AI Overviews using publisher content without compensation.
  • Wolf River Electric v. Google — the first prominent test of AI-generated defamation against a search engine.

The cases are at different stages, in different jurisdictions, with different legal theories. The cumulative effect, regardless of individual outcomes, is the establishment of legal standards that will shape AI search for the next decade.

Chapter 02

The Section 230 question

The most consequential unsettled legal question in AI search is whether Section 230 of the Communications Decency Act — the law that protects search engines and platforms from liability for user-generated content they index — extends to AI-generated outputs.

The American Bar Association’s November 2024 analysis named the gap directly, pointing to “the absence of comprehensive AI regulation clearly defining the contours and applicability of Section 230 immunity in the context of generative AI.” Traditional search engines are intermediaries that display third-party content. AI engines generate new content. The legal distinction matters.

If courts hold that AI-generated content is “speech” by the platform itself rather than indexed third-party content, Section 230 protections likely don’t apply. That would make AI platforms liable for the truthfulness of their outputs in the same way a publisher is liable for its articles. The downstream consequences would reshape the entire economics of AI search.

“If your AI acts as an agent of your business, you likely bear responsibility for what it tells people.” — National Law Review
Chapter 03

The Wolf River Electric defamation precedent

The most operationally relevant emerging case for brand-side legal exposure is Wolf River Electric v. Google. The company sued after Google’s AI-generated search results fabricated negative claims about it. The case is one of the first prominent tests of whether AI-generated defamation against a brand meets the legal standard for libel.

The Columbia Law Review’s 2025 analysis, “Redefining Defamation: Establishing Proof of Fault for AI Hallucinations,” framed the central question: traditional defamation law requires the defendant to have acted with knowing falsity or reckless disregard for the truth. How do you apply intent-based standards to a probabilistic system that doesn’t “know” anything?

The Battle v. Microsoft Corp. case is testing a parallel question — an Air Force veteran sued over AI-generated false claims about him. The legal framework for AI defamation is being built case by case in real time. For brands, the implication is twofold: you may have a cause of action when AI lies about you, and your competitors may have one when AI lies about them in your favor. Both directions of risk are operational.

Chapter 04

The EU AI Act and labeling requirements

The most concrete regulatory development is the EU AI Act, which began phasing in requirements through 2025 and 2026. The provisions most relevant to GEO operators:

  • AI-generated content must be clearly labeled as AI-generated, with disclosure obligations on the platforms generating the content and on entities deploying it.
  • Transparency requirements for systems that interact with consumers, including chatbots and AI search interfaces.
  • Risk-based classification with stricter requirements for high-risk applications (financial, medical, legal).
  • Conformity assessments for AI systems before market entry in regulated categories.

The extraterritorial reach matters: the EU AI Act applies to any AI system used by EU residents, regardless of where the system is operated from. US-based brands optimizing for AI visibility need to factor EU compliance into their content and disclosure strategies.

Individual US states are passing their own AI disclosure laws in parallel, creating a patchwork of state-level requirements. Federal AI legislation has not been enacted, but the FTC has begun applying existing consumer protection authority to AI claims — the May 2025 enforcement actions against companies making deceptive AI claims signal the direction.

Chapter 05

FTC enforcement and false-AI-claim risk

The Federal Trade Commission has not waited for new AI legislation. The agency has begun applying its existing consumer protection authority to AI-related deceptive claims. The pattern emerging from enforcement actions:

  • $25 million fine against a company that used deceptive AI claims to defraud consumers (recent FTC action).
  • Orders barring companies from advertising services dedicated to generating fake consumer reviews or testimonials — a direct strike against the manipulation playbook.
  • Joint statement from FTC, EEOC, CFPB, and DOJ clarifying that their existing authority covers AI applications.
  • Guidance against falsely claiming AI capabilities, falsely attributing AI-generated content to humans, and using AI to mislead consumers about products or services.

For brand operators: the FTC has signaled it will treat AI-related deception with the same enforcement posture as traditional deception. Brands engaging in synthetic-consensus manipulation, fake-review generation, or hidden-prompt techniques face FTC exposure regardless of whether platforms catch them first.

Chapter 06

The right-to-be-forgotten challenge

European data protection law gives individuals a right to be forgotten — the ability to demand the deletion of personal data from systems that hold it. The right has worked reasonably well for traditional search engines, which can de-index URLs.

It does not work for LLMs in any clean technical sense. You cannot simply delete information from a trained model. Information learned during training is encoded in the weights, distributed across billions of parameters, and not extractable in a targeted way. Re-training a model from scratch to remove specific information is computationally prohibitive at the scale of frontier models.

The legal response is still developing. Some platforms offer “output filtering” — preventing the model from producing certain information at inference time even though the underlying knowledge remains in the weights. Whether this satisfies right-to-be-forgotten requirements is being litigated. Brands and individuals seeking to remove inaccurate or outdated information from AI engines face a fundamental technical barrier the legal framework hasn’t yet caught up to.
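
The mechanics of output filtering are easier to see in miniature. The sketch below is a conceptual toy, not any platform's implementation: real filters are proprietary and operate on entities and classifiers rather than literal strings, and the denylist entries here are hypothetical.

  # Conceptual toy: the model's weights are untouched; only displayed text
  # is screened. The denylist entries are hypothetical.
  import re

  SUPPRESSED = ["Jane Doe", "jane.doe@example.com"]  # hypothetical erasure requests

  def filter_output(generated_text):
      for item in SUPPRESSED:
          generated_text = re.sub(re.escape(item), "[removed]",
                                  generated_text, flags=re.IGNORECASE)
      return generated_text

Even in the toy, the knowledge survives in the weights; only the output changes, which is precisely why regulators question whether filtering satisfies the right.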

Chapter 07

The negotiation alternative

Some publishers have chosen to settle rather than litigate. News Corp signed a $50M-per-year licensing agreement with OpenAI. The Associated Press has a similar deal. Other major outlets have followed.

The math behind the choice: a $50M annual deal is meaningful for News Corp’s revenue base and creates a multi-year predictable revenue stream during a period of structural traffic decline. Litigation might eventually yield more, or nothing at all, and it would not stop the traffic loss while the case wound through the courts.

The economics of negotiation favor only the largest publishers. Mid-tier publishers are not getting offered comparable deals. Local news, trade publications, and most B2B media are litigating or absorbing the loss with no realistic path to a licensing solution. The result is a two-tier publisher ecosystem: a small number of tier-1 outlets with AI revenue streams, and everyone else competing for what remains of the open-web traffic model.

Chapter 08

What every brand’s legal team should brief on

The operational read for in-house legal teams supporting marketing and brand functions:

  1. Vendor due diligence on GEO partners. Any vendor whose tactics include synthetic personas, fake reviews, astroturfing, or hidden prompt injection is exposing your brand to FTC and (in the EU) AI Act enforcement.
  2. Content disclosure obligations. AI-generated content used in customer communications may require labeling under EU and emerging US state law. Your content production process should track AI involvement at a granular level.
  3. Defamation monitoring and response. Establish a process for identifying AI hallucinations about your brand and documenting them for potential legal action. The Wolf River and Battle cases are establishing precedent that may give you remedies.
  4. Section 230 watch. The legal status of AI-generated content is being established this year and next. Your brand’s exposure to AI-generated misinformation may shift dramatically depending on how courts rule.
  5. Right-to-be-forgotten requests. If your brand or executives need to remove inaccurate information from AI engines, the technical and legal pathways are limited. Plan for slow timelines and partial remedies.
  6. Licensing deal evaluation. If your business depends on content that AI platforms are training on, the licensing-vs-litigation question may eventually apply to you. Track the evolving deal terms in publishing as a forward indicator.

Sources cited

  1. American Bar Association — “Beyond the Search Bar: Generative AI’s Section 230 Tightrope Walk”
  2. Copyright Alliance — AI Lawsuit Developments 2024 review
  3. Press Gazette — Publisher AI lawsuit and licensing tracker
  4. The New York Times — Filed complaints against OpenAI and Perplexity
  5. Columbia Law Review — “Redefining Defamation: Establishing Proof of Fault for AI Hallucinations”
  6. The New York Times — “Who Pays When A.I. Is Wrong?” (November 2025)
  7. White & Case — “AI Watch: Global Regulatory Tracker”
  8. European Commission — EU AI Act final text and implementation timeline
  9. FTC — Joint statement (FTC/EEOC/CFPB/DOJ) on AI authority
  10. National Law Review — “AI Hallucinations Are Creating Real-World Risks for Businesses”

Want this measured against your brand?

Get your AI Readiness Index (ARI) score across ChatGPT, Gemini, Perplexity, Claude, and Grok — delivered in 24 hours.


The 10 gates: how AI search engines actually decide what to cite.

Most GEO writing describes outcomes. This one explains the engine. The pipeline from page on the open web to citation in an AI response is a 10-stage system — and most brands optimize for the wrong stages.

10
Stages a piece of content must pass through to be cited by an AI engine
50–90%
Of LLM citations don't fully support the claims they're attached to
38,065:1
ClaudeBot crawl-to-cite ratio (Cloudflare)
200B+
URLs in Perplexity's proprietary index
Chapter 01

Why “the page” is the wrong unit

For twenty years, the unit of search optimization was the page. SEO tools scored pages, ranked pages, recommended page-level fixes. The algorithm rewarded pages with strong overall signals.

AI search engines do not work this way. They work at the level of the passage — the discrete 40–150 word block of text that answers a specific question. A single page may contain twenty extractable passages, each competing independently for citation. A page that ranks well in Google AI Mode may have only one of its passages cited; the other nineteen contribute nothing to the eventual answer.

The shift from page-level to passage-level optimization is the most under-discussed structural change in the discipline. Most GEO advice still operates at the page level because that’s where SEO operated and where most practitioners’ mental models live. Operators who internalize the passage-level reality — who write content as a sequence of self-contained answer blocks rather than a continuous narrative — capture disproportionate citation share.

To understand why, you need to look inside the engine.

Chapter 02

The 10 gates

Search Engine Land published the cleanest framework for the AI engine pipeline in 2025: every piece of content must pass through ten distinct gates before it can be cited in an AI response. Most operators optimize for one or two of them. The brands that earn citation share understand all ten.

Gate | What happens | Who optimizes for it
1. Discovery | AI crawlers find your content | SEO foundations
2. Selection | Crawler decides to fetch the page | SEO foundations
3. Crawling | Content is downloaded | SEO foundations
4. Rendering | JavaScript is executed | SEO (gap for AI crawlers)
5. Indexing | Content stored in the search index | SEO foundations
6. Annotation | Metadata, entities, structured data added | Almost no one
7. Recruitment | Content selected as candidate for a query | GEO basics
8. Evaluation | Passages scored for relevance | GEO basics
9. Displayed | Content appears in AI response | GEO marketing
10. Won | User acts on cited content | GEO marketing

Most SEO advice still focuses on gates 2–5. Most GEO marketing focuses on gates 9–10. The biggest untapped opportunity is gate 6 — annotation — where structured data, entity markup, and semantic clarity decide whether the AI engine understands what your content is about.

Search Engine Land’s framing was direct: gate 6 is “the biggest untapped opportunity in search, assistive, and agential optimization right now.”

“Most SEO advice operates at gates 2–5. Most GEO advice operates at gates 9–10. The biggest opportunity is gate 6.”
Chapter 03

RAG explained operationally

Behind every gate is the same core technical pattern: Retrieval-Augmented Generation, or RAG. RAG is the engineering reality that determines what AI engines can and cannot cite, and how content needs to be structured to participate.

The RAG pipeline, in operational terms:

  1. Query understanding. The user’s prompt is parsed for intent. AI search prompts average 23 words versus 2–3 for traditional search — the engine has more to work with.
  2. Query fan-out. The single user prompt is broken into multiple sub-queries. A question about “best CRM for a SaaS startup” might fan out into queries about pricing, integration depth, ease of use, and customer support, each retrieved separately.
  3. Retrieval. Each sub-query hits the search index. Candidate documents are fetched.
  4. Chunking. Retrieved documents are broken into passages. The engine doesn’t reason over your whole page — it reasons over the slices it can extract.
  5. Embedding. Each passage is converted into a vector embedding — a numerical representation of its meaning in high-dimensional space.
  6. Scoring. The user’s prompt is also embedded. Cosine similarity between query embedding and passage embeddings determines which passages get pulled into the model’s context window.
  7. Synthesis. The LLM generates the final answer from the top-scoring passages, attaching citations to the source documents.
  8. Display. The user sees the synthesized answer with inline citations.

The implication for content: if your content can’t be cleanly chunked into 40–150 word passages with self-contained meaning, it never makes it past step 4. The engine throws away most of your page in favor of the passages it can extract.
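
The chunk-embed-score loop (steps 4 through 6) is simple enough to sketch. Below is a toy Python version, with a term-frequency vector standing in for the learned embeddings a real engine would use; the paragraph-break chunking rule and every function name are illustrative, not any platform's actual pipeline.

  # Toy version of RAG steps 4 through 6: chunk, embed, score.
  import math
  import re
  from collections import Counter

  def chunk(page_text, max_words=150):
      # Step 4: split the page into passage-sized blocks on paragraph breaks.
      blocks = [p.strip() for p in page_text.split("\n\n") if p.strip()]
      return [b for b in blocks if len(b.split()) <= max_words]

  def embed(text):
      # Step 5 stand-in: a term-frequency vector, not a learned embedding.
      return Counter(re.findall(r"[a-z0-9]+", text.lower()))

  def cosine(a, b):
      # Step 6: cosine similarity between two sparse vectors.
      dot = sum(count * b[term] for term, count in a.items())
      norm_a = math.sqrt(sum(v * v for v in a.values()))
      norm_b = math.sqrt(sum(v * v for v in b.values()))
      return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

  def top_passages(query, page_text, k=3):
      # End to end: only the best-scoring slices of the page survive.
      q = embed(query)
      scored = sorted(((cosine(q, embed(p)), p) for p in chunk(page_text)),
                      reverse=True)
      return scored[:k]

Everything this chapter says about passage structure falls out of that loop: a block that is too long is discarded at chunk(), and a block whose meaning depends on surrounding context scores poorly at cosine().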

Chapter 04

The Wu et al. citation accuracy crisis

The most consequential under-cited finding in the AI search literature comes from Wu et al., published in Nature Communications in April 2025. The researchers tested whether the citations AI engines attach to their answers actually support the claims being made.

The result: 50–90% of LLM citations don’t fully support the claims they’re attached to.

Read that twice. The citation footnotes in AI responses are frequently fictional in their support relationship. The cited source exists, but doesn’t actually back up the specific claim it’s attached to. The user sees an authoritative-looking citation; the underlying source is at best tangentially related.

A complementary finding from Venkit et al. (arXiv 2024): citation accuracy across major AI search platforms ranges from ~66% (best) to under 50% (worst). Even the best-performing platforms get the citation relationship wrong a third of the time.

The downstream effect: users see citations and trust the answer. 82% of users are skeptical of AI results in principle, but only 1% click into the cited source to verify (Pew). The citation acts as a credibility marker even when it doesn’t do the work of credibility. AI engines have, accidentally or not, evolved a system where the appearance of sourcing substitutes for actual sourcing.
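
Brands can at least audit their own citation pairs. A minimal sketch, assuming you have collected (claim, cited passage) pairs from AI responses; the lexical-overlap heuristic is a crude stand-in for the support judgments the studies describe, useful only for flagging obvious mismatches for human review.

  # Crude first-pass audit of claim/citation pairs collected from AI answers.
  import re

  STOP = {"the", "a", "an", "of", "to", "and", "in", "is", "are", "that", "for", "with"}

  def key_terms(text):
      return {t for t in re.findall(r"[a-z0-9]+", text.lower()) if t not in STOP}

  def support_score(claim, cited_passage):
      # Fraction of the claim's key terms the cited passage even mentions.
      terms = key_terms(claim)
      return len(terms & key_terms(cited_passage)) / len(terms) if terms else 0.0

  def audit(pairs, threshold=0.5):
      # Flag pairs below an (arbitrary) threshold for human review.
      return [(c, p) for c, p in pairs if support_score(c, p) < threshold]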

Chapter 05

The crawl-to-cite ratio

Cloudflare’s 2025 telemetry on AI bot behavior surfaced a number that reframes how operators should think about visibility. ClaudeBot’s crawl-to-cite ratio is approximately 38,065:1. For every page Anthropic’s crawler reads, it cites approximately one in 38,000 in an actual user-facing response.

The other major AI crawlers operate at similar orders of magnitude. GPTBot grew 305% in crawling volume from May 2024 to May 2025, becoming the dominant AI crawler at 30% of all AI bot traffic. The volume of crawling is enormous; the volume of citation is comparatively tiny.

The implication: your content being crawled is not visibility. The crawl is the entry ticket; everything between the crawl and the citation is where the real competition happens. Brands obsessed with bot-traffic dashboards are watching the wrong number. Citation share — how often your content is the one in 38,000 that gets surfaced — is the real metric.

This also explains why analytics-only GEO platforms see so much traffic data without obvious correlation to outcomes: there’s an enormous funnel between the crawl event and the user-facing citation, and most of the funnel is invisible to the brand.
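
The arithmetic is worth doing once with your own dashboard numbers. A back-of-envelope sketch, using Cloudflare's published ratio and an invented monthly crawl volume:

  # Back-of-envelope citation math. The ratio is Cloudflare's published
  # ClaudeBot figure; the monthly crawl volume is invented.
  CRAWLS_PER_CITE = 38_065

  monthly_bot_crawls = 120_000  # a healthy-looking bot-traffic dashboard number
  expected_citations = monthly_bot_crawls / CRAWLS_PER_CITE
  print(f"Expected user-facing citations: ~{expected_citations:.1f} per month")
  # ~3.2 per month. Large crawl numbers imply almost nothing about visibility.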

Chapter 06

Per-platform architecture differences

Each major AI engine implements RAG with different architectural choices. The differences matter because they determine which optimization moves work on which platforms.

Engine | Search backend | Citation behavior
ChatGPT | Bing (sequential queries) | 3–5 citations per response; 87% correlation with Bing top-10
Perplexity | Proprietary 200B+ URL index + Bing API | ~13 citations per response; document + sub-document scoring
Google AI Overviews | Google index | 93.67% correlation with organic top-10 results
Claude | Brave Search | Embedded inline links; 38,065:1 crawl-to-cite ratio
Grok | Own index + X data | X conversations weighted heavily

The architectural divergence is meaningful for optimization strategy. Perplexity’s document-and-sub-document scoring means individual passages within a page compete independently — passage-level structure matters more here than anywhere else. ChatGPT’s heavy Bing dependency means that classical SEO investment in Bing visibility carries through directly to ChatGPT visibility. Google AI Overviews’ 93.67% correlation with organic top-10 means SEO foundations are the prerequisite, not a bonus.

The cross-platform implication is the same conclusion the published research keeps surfacing: only 11% of domains receive citations from both ChatGPT and Perplexity. The same content, run through two different engines, lands in two different citation pools. A unified GEO strategy has to anticipate the architectural differences, not assume them away.

Chapter 07

Why query fan-out changes content design

Query fan-out — the practice of decomposing a single user prompt into multiple sub-queries — is the architectural detail with the largest practical implication for how content should be written.

When a user asks ChatGPT for “the best project management tool for remote design teams,” the engine doesn’t run that exact query. It fans out into a series of sub-queries:

  • Best project management tools 2026
  • Project management for remote teams
  • Project management for design teams
  • [Specific tool] reviews (for each tool surfaced in the first three queries)
  • [Specific tool] vs [specific tool] (for comparative framing)
  • Project management tool pricing

The synthesized answer is built from passages retrieved across all of these sub-queries. A page optimized for the literal user prompt is competing in only one of those retrieval rounds.

The operational shift: content needs to satisfy multiple sub-queries simultaneously. A pricing-focused passage, a use-case-focused passage, a comparison passage, and a feature passage on the same product page each compete in different sub-queries. Pages that bundle these dimensions earn citation share across more retrieval rounds. Pages that focus narrowly compete in one round and are absent from the rest.
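
The design consequence is easier to see in code. A minimal sketch of fan-out-aware retrieval; real engines generate sub-queries with the model itself, so the hand-written templates and the retrieve callback below are purely illustrative.

  # Illustrative fan-out: one prompt becomes several retrieval rounds.
  def fan_out(category, audience, year=2026):
      # Hand-written templates standing in for model-generated sub-queries.
      return [
          f"best {category} {year}",
          f"{category} for {audience}",
          f"{category} pricing",
          f"{category} reviews",
      ]

  def retrieve_all(category, audience, retrieve):
      # `retrieve` is whatever search backend the engine calls; each sub-query
      # is its own retrieval round, and the answer draws on all of them.
      passages = []
      for sub_query in fan_out(category, audience):
          passages.extend(retrieve(sub_query))
      return passages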

Chapter 08

The annotation gap

Returning to gate 6 — the annotation gate Search Engine Land called the “biggest untapped opportunity” — the practical work has three components.

Structured data. JSON-LD schema markup that gives AI engines a pre-digested summary of what each page is about. Organization schema. Product schema. FAQ schema (with the caveat that direct citation lift from FAQ schema is debated; the indirect benefit through entity grounding is real). Article schema with author and publication date. Most pages have minimal or generic schema; pages with comprehensive, accurate schema are interpretable in ways unstructured pages aren’t.

Entity markup. Explicitly defining the entities your content is about — brands, products, people, places — with consistent identifiers across owned properties and to external entity databases (Wikidata, Google Knowledge Graph). The Onely study found that LLMs grounded in knowledge graphs achieve 300% higher accuracy than those working with unstructured text alone.

Semantic clarity. Heading hierarchy that maps logically to the content’s argument structure. Passage breaks that align with discrete sub-topics. Internal linking that explicitly states relationships between related concepts. None of this is new SEO advice. The difference is that AI engines reward it more directly than ranking algorithms ever did.
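
For the structured-data component, the artifact itself is small. A minimal Organization example emitted as JSON-LD from Python: the @type and property names are standard schema.org vocabulary, while the brand values and the Wikidata identifier are placeholders.

  # Organization entity emitted as JSON-LD. Types and properties are
  # standard schema.org vocabulary; all values are placeholders.
  import json

  org = {
      "@context": "https://schema.org",
      "@type": "Organization",
      "name": "Example Brand",
      "url": "https://www.example.com",
      "sameAs": [
          "https://www.wikidata.org/wiki/Q0000000",  # hypothetical Wikidata entity
          "https://en.wikipedia.org/wiki/Example_Brand",
      ],
  }

  # Embedded in the page head, this gives crawlers a pre-digested entity summary.
  print(f'<script type="application/ld+json">{json.dumps(org, indent=2)}</script>')

The sameAs links are the piece most brands skip: they tie the on-page entity to the external databases the knowledge-graph grounding research points at.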

Chapter 09

The Reddit and Wikipedia gravitational pull

The source distribution data tells a structural story about where AI engines pull from when they synthesize answers. Two domains dominate disproportionately:

  • Reddit: 40.1% of LLM citations across all platforms (Averi AI). For Perplexity specifically, 46.7% of top sources come from Reddit (Digital Bloom).
  • Wikipedia: 26.3% of LLM citations across platforms. For ChatGPT specifically, 47.9% of top sources come from Wikipedia.

Neither was a primary SEO target before AI search. Both are now first-class citation real estate. The mechanism is structural: Reddit is conversational, multi-perspective, written in the language users use; Wikipedia is encyclopedically structured with explicit entity definitions. Both are exactly the kind of content RAG architectures process most cleanly.

The operational implication for brands: presence on these platforms isn’t optional for serious AI visibility programs. Not in the form of astroturfing — community platforms detect and punish that — but as authentic participation, third-party coverage, and entity grounding. Brands that are authentically discussed on Reddit get cited by Perplexity. Brands that have well-maintained Wikipedia entries get cited by ChatGPT. Brands that have neither compete uphill regardless of how good their owned content is.

Chapter 10

What the engineering means for operators

The architectural details add up to four operational shifts every brand serious about AI visibility should internalize:

  1. Optimize for the passage, not the page. Write content as a sequence of 40–150 word self-contained answer blocks, each tied to a specific sub-query the engine might fan out into. Pages that read well as continuous narrative get cited less than pages structured as extractable answer units (a quick audit sketch follows at the end of this chapter).
  2. Invest in gate 6 — annotation. Schema markup, entity grounding, knowledge graph submissions. This is where the largest under-claimed competitive opportunity lives. Most brands have done little here; the brands that invest pull ahead.
  3. Track citation share, not crawl traffic. The 38,065:1 ClaudeBot ratio is a reminder that crawl events are not visibility events. The metric that matters is how often your content is the one passage in 38,000 that the engine actually surfaces in a user-facing response.
  4. Build presence where AI engines pull from. Authentic Reddit participation, well-maintained Wikipedia/Wikidata entries, third-party editorial coverage on the publications each engine over-indexes on. The off-site signal layer is doing more of the work than most operators realize.

The brands that internalize the engineering reality — that AI engines are RAG pipelines with specific architectural constraints, not magic boxes — will out-execute the brands operating from a 2018 SEO mental model. The gap is the technical literacy. The work is the application.
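
The passage-level audit in item 1 is cheap to automate. A minimal sketch that flags blocks outside the 40–150 word band, treating paragraph breaks as a rough proxy for the block boundaries an engine's chunker would find:

  # Flags blocks outside the 40-150 word band, using paragraph breaks as a
  # rough proxy for the boundaries an engine's chunker would find.
  def audit_blocks(page_text, lo=40, hi=150):
      blocks = [b.strip() for b in page_text.split("\n\n") if b.strip()]
      flagged = []
      for i, block in enumerate(blocks, 1):
          words = len(block.split())
          if not lo <= words <= hi:
              flagged.append((i, words, "too short" if words < lo else "too long"))
      return flagged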

Sources cited

  1. Search Engine Land — “The AI engine pipeline: 10 gates that decide whether you win” (2025)
  2. Wu et al. — Nature Communications, “Citation Support in Large Language Models” (April 2025)
  3. Venkit et al. — arXiv, AI search citation accuracy benchmark (2024)
  4. Cloudflare — AI bot crawl-to-refer ratio telemetry (2025)
  5. iPullRank — “AI Search Architecture Deep Dive: Teardowns of Leading Platforms”
  6. Perplexity Research — “Architecting and Evaluating an AI-First Search API”
  7. Towards Data Science — “The Architecture Behind Web Search in AI Chatbots”
  8. Digital Bloom — Cross-platform citation distribution analysis
  9. Averi AI — “Building Citation-Worthy Content” (Reddit/Wikipedia source distribution)
  10. Onely — Knowledge graph grounding accuracy study (300% improvement)
  11. Pew Research Center — User behavior with AI Overview citations
  12. Google AI for Developers — “Grounding with Google Search” documentation
  13. AWS — “What is RAG?” technical overview

Want this measured against your brand?

Get your AI Readiness Index (ARI) score across ChatGPT, Gemini, Perplexity, Claude, and Grok — delivered in 24 hours.
