Wayfinder AI

Google's Biggest Number Is Its Best Lie

The industry keeps citing 8.5 billion daily Google searches to dismiss AI search. But when you actually decompose that number, the searches that shift consumer decisions are a fraction of the headline.

March 31, 2026 · Shaun Myandee · 20 min read
AEO · search · AI search · Google · research

Google processes 8.5 billion searches a day. You've heard this number. If you work in SEO or digital marketing, you've probably used it yourself. It's the industry's favourite security blanket, trotted out every time someone suggests that AI search might be eating into Google's dominance. "ChatGPT is a rounding error. Perplexity is a toy. Google handles 8.5 billion searches a day. Nothing has changed."

And that number is real. I'm not here to dispute it.

What I am here to dispute is what it means. Because when you actually look at what those 8.5 billion searches consist of, when you decompose the headline number into its component parts using publicly available data, the picture that emerges is very different from the one the industry is selling. The number of searches where someone's opinion is genuinely being formed, where a brand they haven't heard of might enter their consideration set, where a decision is actually being made, is dramatically smaller than the headline suggests.

I wrote a companion piece to this a few weeks ago arguing that AI search is not a performance channel and shouldn't be measured like one. That post was about how we measure AI search. This one is the other side of that coin: how we measure Google search, and why the numbers the industry relies on to dismiss AI search are, to put it politely, misleading. To put it less politely, they're bollocks.

Here's my thesis, stated up front so you can decide whether to keep reading: Of 8.5 billion daily Google searches, our modelling estimates that roughly 600 million to 1.3 billion represent genuine discovery, searches where a consumer's mind might actually be changed. The rest is navigational traffic, branded queries, zero-click searches, clicks that never leave Google's ecosystem, and demand harvesting of intent that was already formed. And the genuine discovery layer, the 7-15% that actually matters, is exactly the layer AI search is eating.

Those 600 million to 1.3 billion daily discovery searches are still an enormous number. But they're not 8.5 billion. ChatGPT alone now has over 800 million weekly active users and processes billions of prompts per day[1]. Not all those prompts can be called "searches," but when you compare what we might class as AI "search volumes" against the discovery layer of Google rather than the headline number, they're competing on comparable terms. The people calling AI search a rounding error are comparing it to the wrong number.

When you compare AI search to the discovery layer rather than the headline, the scale difference collapses.


What 8.5 Billion Searches Actually Looks Like

The best data we have on what people actually do on Google comes from SparkToro and Datos, who analysed 332 million search queries across 130,000 US devices over 21 months[2]. It's the largest publicly available clickstream study of Google Search behaviour, and the numbers are striking.

44% of all Google searches are branded. Not informational, not discovery, not "I wonder what's out there." Branded. People typing "Amazon," "Netflix login," "Gmail," "Nike." They already know exactly what they want and where they want to go. Google is functioning as a glorified address bar for nearly half of all searches.

It's worth noting that Google Chrome, which launched in 2008 and now holds 68% global browser market share[3], was the first major browser to merge the address bar with the search bar. Before Chrome, if you wanted to visit Amazon, you typed "amazon.com" in the address bar and that was a direct navigation, not a Google search. After Chrome, you type "amazon" and that becomes a Google search query. Chrome's design documentation explicitly states that the merged "omnibox" was intended to reduce the cognitive load of choosing between search and navigation[4]. How much of Google's branded search volume is simply an artefact of the omnibox converting direct navigations into search queries is an open question. But it's an interesting one, particularly when you consider that five years after launching Chrome, Google stopped telling us which organic searches were actually driving clicks to our websites. The "(not provided)" decision starts to look a little different in that context.

33% are navigational, meaning the searcher is trying to reach a specific known destination, which overlaps heavily with but isn't identical to branded searches. 15% of all Google search volume comes from just 148 keywords, and the list reads like a roll call of services people use daily: YouTube, Facebook, Gmail, WhatsApp Web, Google Translate, Amazon, ChatGPT[5]. These aren't discovery queries. They're people using Google as a launcher for apps and websites they already know about.

Then there's the zero-click problem. 58.5% of US Google searches end without a click to any website at all[6]. The searcher either gets their answer directly from Google's interface, changes their query, or gives up. For every 1,000 Google searches, only 360 result in a click to the open web. And of the clicks that do happen, 28.5% go to Google-owned properties: YouTube, Google Maps, Google Flights, Google Shopping. Nearly a third of all search clicks never leave Google's own ecosystem.

How the Model Works

These categories overlap. A single search can be branded AND zero-click AND navigational simultaneously, so you can't just add the percentages and call it a day. To handle this, we built a Monte Carlo simulation[7].

Here's what that means in plain English. Imagine you have a bag of 10,000 marbles, and each marble represents a Google search. For each marble, you flip a series of weighted coins to determine its characteristics. The first coin is weighted 44/56, so it lands on "branded" 44% of the time, matching the SparkToro data. Then you flip a "zero-click" coin, but here's the key bit: the weight of that coin depends on what the first coin said. If the search is branded, the zero-click coin is weighted to 72% (because searches like "Gmail" are almost always answered without a click). If it's not branded, the zero-click coin drops to about 48%. This is how we model the fact that these categories are correlated, not independent.
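As a sanity check on that conditional-coin logic: the two conditional rates have to blend back to the observed marginal, otherwise the model isn't calibrated. Only the 44% branded and 58.5% zero-click figures are measured; the 72%/48% split is the illustrative pair from the text:

```python
# The conditional zero-click rates must reproduce the observed marginal.
# 44% branded and 58.5% zero-click come from SparkToro/Datos; the 72%/48%
# conditional split is the illustrative pair described above.
p_branded, p_zc_branded, p_zc_unbranded = 0.44, 0.72, 0.48
marginal_zero_click = p_branded * p_zc_branded + (1 - p_branded) * p_zc_unbranded
print(f"{marginal_zero_click:.1%}")  # ≈ 58.6%, within rounding of the 58.5% marginal
```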

You do this for every attribute: branded, zero-click, clicks to Google properties, navigational, demand harvesting. Each marble ends up with a full set of characteristics, and you count how many made it through every filter to qualify as "genuine discovery."

Then you do the whole thing 50,000 times. But each time, you wobble the coin weights slightly, drawing them from a distribution rather than using fixed values. Instead of "branded is exactly 44%," the model says "branded is 44% give or take 3 percentage points." After 50,000 runs, you get a distribution of outcomes, which is why we can report a 90% confidence interval rather than a single number.
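For readers who want to kick the tyres, here is a minimal, scaled-down sketch of that simulation. The branded, zero-click, Google-properties, and demand-harvesting rates follow the figures quoted in this post; the conditional zero-click split, the navigational rate, and all the wobble widths are illustrative assumptions, so expect output in the same ballpark as the reported numbers rather than an exact match:

```python
import numpy as np

rng = np.random.default_rng(42)
N_RUNS, N_SEARCHES = 2_000, 10_000  # scaled down from the article's 50,000 runs

shares = np.empty(N_RUNS)
for i in range(N_RUNS):
    # Draw this run's rates from distributions rather than fixed values.
    # Means for branded / zero-click / Google properties / harvesting follow
    # the figures quoted above; the navigational rate and the spreads are
    # illustrative assumptions.
    p_branded = rng.normal(0.44, 0.03)
    p_zc_branded = rng.normal(0.72, 0.03)   # zero-click rate for branded searches
    p_zc_other = rng.normal(0.48, 0.03)     # zero-click rate for everything else
    p_google = rng.normal(0.285, 0.02)      # clicks that stay on Google properties
    p_nav = rng.normal(0.10, 0.03)          # navigational-but-unbranded (assumed)
    p_harvest = rng.normal(0.45, 0.05)      # demand harvesting of surviving clicks

    branded = rng.random(N_SEARCHES) < p_branded
    # The zero-click "coin" is weighted differently for branded searches,
    # which is how the correlation between categories is modelled.
    zero_click = rng.random(N_SEARCHES) < np.where(branded, p_zc_branded, p_zc_other)
    google_click = rng.random(N_SEARCHES) < p_google
    navigational = rng.random(N_SEARCHES) < p_nav
    harvested = rng.random(N_SEARCHES) < p_harvest

    # Genuine discovery = survived every filter.
    discovery = ~branded & ~zero_click & ~google_click & ~navigational & ~harvested
    shares[i] = discovery.mean()

lo, med, hi = np.percentile(shares, [5, 50, 95])
print(f"median {med:.1%}, 90% interval {lo:.1%}-{hi:.1%}")
```

Because the conditionals here are assumed rather than fitted, the interval this sketch produces is narrower than the published 7.2%-14.9%; the point is the mechanism, not the exact bounds.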

The Waterfall

Here's what happens to 1,000 Google searches:

The Discovery Waterfall: from 1,000 Google searches to 107 genuine discovery. Data: Monte Carlo simulation (50K runs) using SparkToro/Datos base rates.

107 out of 1,000. Roughly 10.7%, with a 90% confidence interval of 7.2% to 14.9%.

Applied to the 8.5 billion headline: somewhere between 610 million and 1.27 billion daily searches represent genuine discovery, moments where a consumer doesn't already know what they want and might actually have their mind changed by what they find.
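Translating the interval back to the headline is one line of arithmetic per bound (prints roughly 0.61B, 0.91B, and 1.27B):

```python
headline = 8.5e9  # daily Google searches
for label, share in [("P5", 0.072), ("median", 0.107), ("P95", 0.149)]:
    daily = share * headline
    print(f"{label}: {share:.1%} of the headline = {daily / 1e9:.2f}B discovery searches/day")
```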

What's solid: The base rates for branded search (44%), zero-click (58.5%), and Google properties (28.5%) come directly from the SparkToro/Datos clickstream study, the largest publicly available dataset of its kind. The conditional probabilities (how much more likely a branded search is to be zero-click, for example) are modelled rather than directly observed, but they're calibrated to reproduce the known marginal rates, which is the mathematically correct approach.

The biggest uncertainty is the demand harvesting rate. We estimate that 45% of non-branded, non-navigational search clicks are capturing existing intent rather than creating it, based on Google's own Messy Middle and ZMOT research. But there's no direct measurement. The sensitivity analysis below shows this is the single most impactful assumption, with a ±3.9 percentage point swing on the result. Even so, if you set demand harvesting to zero, if you assume every non-branded non-navigational click is genuine discovery, you still only get about 19%. The structural filters do most of the work before we even ask that question.
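That ~19% ceiling falls straight out of the arithmetic, assuming harvesting acts as the final multiplicative filter at 45%:

```python
median_discovery = 0.107  # model median with the harvesting filter applied
harvest_rate = 0.45       # the model's assumed demand-harvesting rate
# Harvesting is the last multiplicative filter, so removing it entirely
# rescales the surviving share by 1 / (1 - harvest_rate).
no_harvest_discovery = median_discovery / (1 - harvest_rate)
print(f"{no_harvest_discovery:.1%}")  # ≈ 19.5%
```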

Sensitivity analysis: which assumptions matter most? The demand harvesting rate dominates. The structural filters (zero-click, branded, Google properties) are the most robust inputs.

Other honest limitations: The SparkToro data is US-only and potentially desktop-biased, while we're applying it to a global 8.5 billion figure. "Discovery" is a spectrum rather than a binary — someone searching "best running shoes 2026" has some existing preferences even if the query looks open-ended. And the conditional probabilities between categories are estimated, not directly measured from the clickstream data, though they're calibrated so the marginals come out correctly.

The distribution below shows the full output of 50,000 simulation runs. The model doesn't produce a single number; it produces a range. The median lands at 10.7%, with 90% of runs falling between 7.2% and 14.9%. Even the most optimistic runs rarely push past 20%.

Distribution of genuine discovery % across 50,000 Monte Carlo runs. Red line: median (10.7%). Yellow dashes: 90% confidence interval (7.2%–14.9%).

None of these limitations change the core finding. The headline number massively overstates discovery.

This is like counting footfall in a shopping centre and including everyone who walked through the car park to get to a specific shop they'd already decided to visit. The number is real. It just doesn't measure what you think it measures.


Search Is the Cash Register, Not the Salesperson

Even the 107 per 1,000 overstates the case. Because the industry's assumption isn't just that Google has lots of searches, it's that those searches drive decisions. That Google is where people go to make up their minds. And the evidence for that is surprisingly weak, including from Google's own research.

In 2011, Google partnered with Shopper Sciences on a study they called the Zero Moment of Truth[8]. They found that the average consumer used 10.4 sources of information before making a purchase decision. Search was one of those sources, not the source. This was 2011, before social commerce, before AI, before the current explosion of discovery channels. The number of sources has only grown since.

When ZMOT first came out, I found it genuinely compelling. It seemed to confirm that search was central to how people discover and decide. But over time, looking at it with a slightly more critical eye, I think that was the conclusion Google wanted us to draw. What the research actually shows, if you read it without the Google branding, is that search is one of many touchpoints, and one that sits closer to the demand harvesting end of the spectrum, trimming consideration sets and processing existing preferences, rather than the demand generation end, where new ideas enter your world for the first time. The difference matters, because AI search can inherently do both. It can harvest existing intent, but it can also generate demand, surfacing a brand or product you might never have encountered through a keyword search.

In 2020, Google went further with their "Messy Middle" research[9], conducted with The Behavioural Architects across 310,000 simulated purchase scenarios. Their finding: consumer decision-making is non-linear. People loop between exploration and evaluation across multiple touchpoints (Google's words, not mine). The old model of search-as-funnel-entry was wrong. Search is one node in a complex, looping decision web.

This matters because the marketing industry has a term for what Google Search predominantly does: demand harvesting. And it's distinct from demand generation. Demand generation creates intent, it puts a brand or product into someone's consideration set for the first time. Demand harvesting captures existing intent, it processes a decision that's already been made, or very nearly made, somewhere else.

When someone searches "running shoes," they've already decided they want running shoes. That intent was generated somewhere upstream, a conversation with a friend, an Instagram ad, a recommendation from ChatGPT, seeing someone at the gym wearing a pair they liked. They probably already have brands in mind. Google processes that existing intent. It's the cash register, not the salesperson. And the attribution models treat the cash register like the salesperson.

The most convincing evidence for this comes from an experiment that eBay ran and that was subsequently published as a peer-reviewed NBER working paper by Steve Tadelis and colleagues[10]. eBay stopped all branded search advertising on Yahoo and Microsoft, using Google as a control. The result: 99.5% of the traffic was retained through organic search. The paid ads were contributing almost nothing. The paper's conclusion was blunt: branded search advertising has "neither persuasive nor informative value" for well-known brands. The users were going to eBay anyway. The ads were just claiming credit.

The same logic applies to organic branded search. If a user types your brand name into Google and clicks your organic result, Google didn't create that intent. They just couldn't be bothered to type the "www" and the ".com." Whether they clicked the ad or the organic link is irrelevant to the question of influence, it's the same user with the same pre-formed intent arriving at the same destination via a slightly different link on the same page. The 99.5% substitution rate between paid and organic branded results tells you that the channel is interchangeable. The decision was already made.

Broader incrementality studies corroborate this pattern. Branded search typically shows 20-50% incrementality[11], meaning 50-80% of the conversions attributed to branded search would have happened regardless. The industry is paying for a coupon handed to someone already walking through the door, and then telling its clients that the coupon drove the sale.
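To make that concrete, here's the arithmetic with a hypothetical 1,000 attributed conversions (the count is invented purely for illustration):

```python
attributed = 1_000  # conversions a dashboard credits to branded search (hypothetical)
for incrementality in (0.20, 0.50):
    caused = attributed * incrementality
    anyway = attributed - caused
    print(f"{incrementality:.0%} incremental: {caused:.0f} driven by the ad, "
          f"{anyway:.0f} would have converted regardless")
```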

There's an irony here that search practitioners will recognise. The SEO and paid search industry has spent years criticising affiliate networks and voucher code sites for claiming attribution on conversions they didn't influence, for dropping a coupon in front of someone who was already buying and then taking credit for the sale. Branded search does exactly the same thing, just at a different level of abstraction.

How We Got Here

There's a name for this phenomenon. Goodhart's Law, as expressed by the anthropologist Marilyn Strathern in 1997: "When a measure becomes a target, it ceases to be a good measure"[12]. The marketing industry targeted search volume, keyword rankings, click-through rates, and optimised relentlessly toward them. And in doing so, it stopped asking whether any of it was actually changing anyone's mind. The metrics became the goal. The outcomes became an afterthought. Optimising for a bigger slice of a shrinking pie.

Campbell's Law makes the same point from the other direction: "The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor"[13]. The industry didn't just optimise toward the wrong metrics. The metrics themselves degraded because they were being optimised toward.

And then there's the moment Google took the data away. In 2013, Google stopped passing keyword-level organic search data into analytics platforms. Organic search referrals became "(not provided)." At the time it was framed as a privacy decision. In retrospect, knowing what we now know about what Google's search volume actually consists of, it's easy to conclude that Google believed removing this data would increase their bottom line. And it did. The industry lost the ability to measure keyword-level incrementality for organic search, and instead of treating that as a crisis, it developed workarounds to workarounds and carried on. Nobody asked why Google might not want you to see which keywords were actually driving your business, because the answer might have been uncomfortable.


A Third of Search Clicks Never Leave Google

So Google's volume is overstated and the attribution model gives search credit for decisions it didn't make. But the defenders of the status quo have a broader argument: even if Google search is imperfect, the overall digital advertising ecosystem works. The machinery of SEO, paid search, programmatic display, and social advertising is a proven system. AI search is unproven. Why risk the known for the unknown?

I'd find this argument more compelling if the known system wasn't falling apart.

Let's start with a number that doesn't get enough attention: 28.5% of all Google search clicks go to Google-owned properties[6]. Nearly a third of all the clicks generated by Google search never leave Google's ecosystem. YouTube, Google Maps, Google Flights, Google Shopping, Google Images. When you search for a restaurant, Google shows you its own Maps result. When you search for a product, Google shows you its own Shopping carousel. When you search for a video, Google surfaces YouTube.

Where do your clicks actually go? For every 1,000 Google searches, only 297 clicks reach the open web. Data: SparkToro/Datos 2024.

You don't have to take my word for it, because the European Court of Justice agreed this was a problem and upheld a €2.4 billion fine against Google for systematically favouring its own Shopping results over competitors'[14]. Google's response, over time, has been to expand this approach, not curtail it. And in January 2026, they were granted a patent for a system that would replace advertisers' landing pages with AI-generated versions[15]. The patent says "optimised for conversion metrics," but the question worth asking is: whose conversion metrics? Optimised for the outcomes that matter to your business, or optimised for the outcomes that keep users inside Google's ecosystem and clicking Google's ads? If you've spent any time in this industry, you already know the answer.

Then there's AI Overviews, Google's own AI-generated answers that now appear at the top of many search results. A Seer Interactive study of 3,119 queries across 42 organisations found that organic click-through rates fell 61% on queries where AI Overviews appeared[16]. Google is cannibalising its own organic results to keep users on Google's page. If you're an SEO professional, your client's organic visibility is being eaten not by ChatGPT, but by Google itself.

The Broader Ecosystem

The problems extend well beyond Google. The Association of National Advertisers published their Programmatic Transparency Benchmark in Q2 2025, and the numbers are damning[17]. $26.8 billion in global programmatic ad spend is wasted annually, up 34% in two years. Only 36% of programmatic budgets actually reach consumers. Over a quarter is consumed by intermediary fees before a single ad is served. And they identified 128,000 "Made for Advertising" sites, websites that exist solely to serve ads, with no genuine audience and no genuine content.

In March 2026, Publicis Groupe, the world's largest advertising holding company, told its clients to stop using The Trade Desk after an audit revealed transparency concerns and hidden fees that couldn't be verified[18]. WPP and Dentsu had already pulled back. When three of the four biggest ad holding groups can't verify where their clients' money goes through one of the industry's most prominent platforms, the system has a problem.

Anyone who's spent time on both sides of digital advertising, buying and selling, knows the tension between what the dashboards report and what the business actually experiences. The challenge of getting straight answers from vendors on where your money went is not new. What's new is that the gap between reported performance and actual performance is becoming too large and too well-documented to ignore.

This is the system people are defending when they dismiss AI search. A system where nearly a third of search clicks stay on Google, a third of your programmatic budget vanishes into fees, the dominant platform has been fined billions for self-dealing, and the industry's largest holding groups are telling clients they can't verify their spend. This isn't a healthy ecosystem being disrupted by an upstart technology. It's a broken ecosystem being exposed by an alternative that consumers actually prefer.


AI Is Shit and People Love It Anyway

Here's the thing nobody in the AI-is-eating-Google camp wants to admit: AI search is, in many respects, still quite bad. It hallucinates. It can't do maths reliably. It can't count the words in a sentence. It confidently invents citations that don't exist. It's the subject of a thousand memes, and most of them are deserved.

And it's still growing at 130-150% year-over-year[19].

AI-powered search now accounts for 12-18% of total referral traffic, up from 5-8% in late 2024. ChatGPT search referrals alone have increased by more than 200% since mid-2025[20]. Even on queries where Google doesn't show AI Overviews, organic click-through rates are declining by 25-41%[16]. People aren't just being pushed away from Google by AI Overviews. They're leaving voluntarily, before Google even pushes.

A product this flawed growing this fast tells you everything you need to know about how consumers feel about the alternative.

The Canary Was the Original ChatGPT

When ChatGPT launched in late 2022, it didn't have web access. It had stale training data. It hallucinated with absolute confidence. Ask it who the president was and it'd give you the answer from two years ago. Ask it for a source and it'd invent one that sounded plausible but didn't exist.

And people used it as a search engine from day one. They asked it what shoes to buy. They asked it for restaurant recommendations. They asked it medical questions. They found the hallucinations and the wrong dates and the invented citations because they were stress-testing it as a Google replacement, not because they were poking around out of curiosity. That isn't technology hype. That's latent consumer demand exploding on contact with the first viable alternative, however imperfect.

Stack Overflow is the clearest parallel. Stack Overflow didn't get worse. A better alternative appeared, and the exodus was practically overnight. The speed of the switch tells you the depth of the frustration that was already there. The same dynamic is playing out with Google Search, just on a slower timescale because the stakes are higher and the habits are more entrenched.

Google's Monetisation Trap

There's a reason Google took so long to ship an AI search product, despite the fact that Google researchers literally wrote the paper that made all of this possible. "Attention Is All You Need," published by eight Google Brain researchers in June 2017, introduced the transformer architecture that powers every major large language model in existence, ChatGPT, Claude, Gemini, all of them[21]. It has since been cited over 150,000 times, making it the seventh most-cited scientific publication of the twenty-first century[22]. The most transformative paper in modern computer science, written by their own researchers.

OpenAI, which was a small non-profit research lab at the time, read that paper and shipped GPT-1 in June 2018, one year later. GPT-2 followed in February 2019. GPT-3 in June 2020. Three years, three models, and the third one changed the landscape completely. Meanwhile, it took Google nearly six years to ship a consumer product based on their own research, and when they did, it was Bard, which was so bad they killed it within a year and started again with Gemini. A research lab with a fraction of Google's resources read Google's paper and built the future with it faster than Google could.

From Google's own paper to Google's belated response: how a small research lab built the future faster than the company whose researchers invented the underlying technology.

Google's business model, the one that generates over $200 billion in annual revenue, is built on selling ads alongside search results. They didn't launch an AI product because they couldn't answer one question: Where do the ads go?

Meanwhile, the alternative is now available to anyone. You can download an open-source model, connect it to a web search API, and have a functional Google replacement running on your own hardware. No ads. No tracking. No clicks staying on someone else's platform. The fact that this is even possible is an existential threat to an advertising-funded search monopoly.

Anthropic's Super Bowl campaign[23] brought this tension into mainstream public consciousness. "Ads are coming to AI. But not to Claude." The ads were funny, Sam Altman got upset, and Claude jumped to number 7 on the App Store with an 11% user boost. The "no ads" pitch resonated because consumers already feel the problem. They already know what an ad-saturated discovery experience feels like, because they've been living in one for twenty years.

The people dismissing AI search as a fad have something to sell that AI disrupts. SEO consulting, programmatic media buying, paid search management, they're all built on the existing system. They have established playbooks and revenue models. AI search has neither. It's unsolved. It's the black box to end all black boxes. And it's easier to dismiss what you can't control than to admit that the thing you can control might not be working as well as you've been telling your clients.


What Your Best Marginal Return Actually Looks Like

None of this means SEO is dead. Informational search, genuine discovery, the 7-15% of Google searches where someone doesn't know what they want and is open to influence, that's still a real and significant volume of traffic. Nobody should abandon it.

But the industry's habit of waving the 8.5 billion number around and concluding that nothing has changed is a Goodhart's Law trap. You're optimising a metric that stopped measuring what matters. The volume is real. The influence is not.

The question I'd suggest asking instead of "how much traffic does Google send?" is "how many decisions does each of my channels actually influence?" Not attribute. Influence. They're different things, and the gap between them is where billions of pounds of marketing spend disappears every year.

Apply incrementality thinking to your entire channel mix, not just paid media. Ask where your best marginal return on investment is across every channel, including the ones that are harder to measure. Strictly speaking, if we're talking about pure influence-per-pound, the honest answer is probably email marketing. It's not sexy, it's not going to get you invited to Cannes Lions, and nobody has ever raised $96 million in Series B funding to build an email automation dashboard. But in terms of actually changing someone's mind about spending money, it's quietly devastating. The same is true of half a dozen channels that don't get keynote speeches because they're boring and they work. You know what else has incredible influence metrics? Charity muggers. The clipboard-wielding optimists stationed outside train stations who convince you to set up a direct debit for African donkeys when the plight of the African donkey had never once crossed your mind before that moment. If that isn't demand generation, putting an entirely new idea into someone's consideration set and converting it to revenue on the spot, I don't know what is. But nobody has ever won an Effie for mugging people in the street. Yet.

In the context of online discovery, though, where you're competing for the attention of someone who doesn't know you exist yet, the answer is AI search. It's a discovery and consideration channel that's growing at 130-150% year-over-year. It's uncrowded. Your competitors haven't figured it out yet. And it will never be cheaper or less competitive than it is right now. Every month you wait, the window narrows.

The next generation of consumers are native AI users. They're forming brand preferences through AI conversations, not keyword searches. They're asking Claude what laptop to buy, not scrolling through ten blue links on Google. If you're not in that conversation, you're not in the consideration set. And you can't buy your way in with a display ad. OpenAI is trying. ChatGPT's early ad pilot has been, charitably, underwhelming: one brand reported a 0.91% click-through rate against a 6.4% benchmark in the same category on Google, and an enterprise advertiser managed to spend just 3% of a $250,000 budget after weeks because the platform couldn't deliver at scale[24]. The most valuable ChatGPT users, the paying subscribers, are excluded from ads entirely. The old playbook doesn't transfer. AI search isn't a media channel you buy. It's a trust channel you earn.

8.5 billion searches a day. 107 out of every 1,000 are genuine discovery. The rest is theatre.

The question is whether you want to keep optimising for the theatre, or start showing up where the decisions are actually being made.


Footnotes

  1. OpenAI, weekly active user figures reported February 2026. ChatGPT surpassed 800 million weekly active users.

  2. SparkToro & Datos, "New Research: We Analyzed 332 Million Queries Over 21 Months," 2024.

  3. Backlinko, "Google Chrome Statistics for 2026."

  4. Chromium Project, "Omnibox: User Experience," design documentation.

  5. Search Engine Land, "15% of Google's Search Volume Comes from Just 148 Terms," 2024.

  6. SparkToro & Datos, "2024 Zero-Click Search Study," 2024.

  7. Wayfinder AI, Monte Carlo simulation: 50,000 runs × 10,000 searches, conditional probability model using SparkToro/Datos base rates. Full methodology and Python source available on request.

  8. Google & Shopper Sciences, "Zero Moment of Truth," 2011.

  9. Google & The Behavioural Architects, "Decoding Decisions: Making Sense of the Messy Middle," 2020.

  10. Blake, T., Nosko, C. & Tadelis, S., "Consumer Heterogeneity and Paid Search Effectiveness: A Large-Scale Field Experiment," NBER Working Paper 20171.

  11. Recast, "The Conundrum of Brand Search: Is it Incremental?"

  12. Strathern, M. (1997), "'Improving ratings': audit in the British University system," European Review, 5(3), 305-321.

  13. Campbell, D. T. (1979), "Assessing the impact of planned social change," Evaluation and Program Planning, 2(1), 67-90.

  14. Court of Justice of the European Union, Google Shopping case, €2.4 billion fine upheld, 2024.

  15. Google patent US12536233B1, "AI-generated content page tailored to a specific user," granted January 27, 2026. Search Engine Land coverage.

  16. Seer Interactive, "How AI Overviews Impact CTR," study of 3,119 queries across 42 organisations.

  17. Association of National Advertisers (ANA), "Q2 2025 Programmatic Transparency Benchmark," 2025.

  18. Adweek, "Publicis Advises Clients to Avoid The Trade Desk," March 2026.

  19. upGrowth, "AI Traffic Share Report 2026."

  20. Exposure Ninja, "AI Search Statistics 2026: CMO Cheatsheet."

  21. Vaswani, A., Shazeer, N., et al. (2017), "Attention Is All You Need," NeurIPS 2017.

  22. Nature, "Exclusive: the most-cited papers of the twenty-first century," April 2025.

  23. TechCrunch, "Anthropic's Super Bowl ads mocking AI with ads helped push Claude's app into the top 10," February 2026.

  24. Early reports from ChatGPT's advertising pilot: eMarketer, Search Engine Land.