Does Google actually care about the health of the open web? Yes, but only in the way a parasite cares about the host not dying.
For the last few years, the goal of AI Overviews has been to kill the click. Why send a user to a blog when the LLM can just synthesize the answer into a neat little paragraph? It’s efficient for the user. It’s great for Google’s metrics. And it’s an absolute disaster for anyone who actually writes the content. We have moved from a search engine—which was essentially a map to where the information lived—to an answer engine, which acts as a filter that strips away the context and the incentive for the original creator to keep publishing. Now, suddenly, Google is pivoting toward citing its sources more aggressively. (And it’s about time.)
The shift is a recognition that the current trajectory is unsustainable. If you remove the traffic, you remove the revenue. If you remove the revenue, you remove the writers. If you remove the writers, you eventually run out of fresh, human-generated data to train the next version of the model. Google is finally realizing that they cannot simply feast on the existing corpus of the internet without contributing something back to the ecosystem.
Integrate links more prominently
The new approach is less about altruism and more about the cold math of data availability. If every high-quality publisher decides that the cost of being scraped exceeds the value of the residual traffic, they’ll just flip the switch on their robots.txt. Google knows that training a model on a dead web is a recipe for stagnation. According to Ars Technica, the company plans to integrate links more prominently, making it easier for users to jump from the summary to the source.
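For concreteness, that switch really is just a few lines. Here’s a minimal sketch of a publisher’s robots.txt, assuming they target Google-Extended—the token Google documents for opting content out of its AI training and grounding uses—while leaving ordinary Search crawling alone:

```
# Sketch of a publisher opting out of Google's AI uses via robots.txt
# Block the Google-Extended token (AI training/grounding)
User-agent: Google-Extended
Disallow: /

# Keep classic Search indexing intact
User-agent: Googlebot
Allow: /
```

Because Google-Extended governs the AI uses without pulling a site out of regular Search, it’s a low-risk lever for publishers to pull—which is exactly why Google should be nervous about how many of them do.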
This is a survival tactic. Google has spent a decade moving from a directory of links to a walled garden of answers, but they’ve finally hit the ceiling. When you strip away the incentive for people to create original, deeply researched content, you stop the flow of new training data. You can’t just loop the LLM on its own output forever without the whole thing devolving into a digital incestuous mess. We’ve seen this in smaller-scale model collapse experiments: when each generation of a model trains on the synthetic output of the previous one, quality and diversity degrade rapidly.
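Here’s a deliberately toy sketch of that failure mode—a repeatedly refit Gaussian standing in for an LLM, which is obviously nothing like Google’s actual training pipeline. Each generation fits only to samples produced by the previous one, and with no fresh human data entering the loop, the learned distribution’s spread tends to collapse while its mean drifts:

```python
# Toy model-collapse sketch: a Gaussian "model" retrained each generation
# on nothing but its own synthetic samples. Exact numbers vary by seed,
# but over enough generations the fitted spread tends to collapse and
# the tails of the original distribution disappear.
import numpy as np

rng = np.random.default_rng(42)
human_corpus = rng.normal(loc=0.0, scale=1.0, size=100_000)  # the "real" web

mu, sigma = human_corpus.mean(), human_corpus.std()
print(f"gen   0: mean={mu:+.3f}  std={sigma:.3f}")

for generation in range(1, 301):
    synthetic = rng.normal(mu, sigma, size=25)   # small, all-synthetic "corpus"
    mu, sigma = synthetic.mean(), synthetic.std()
    if generation % 50 == 0:
        print(f"gen {generation:3d}: mean={mu:+.3f}  std={sigma:.3f}")
```

The point isn’t the statistics; it’s that nothing in the loop ever adds information back—which is exactly the position Google puts itself in if publishers stop writing.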
Do they really think a few more blue links will stop the bleeding?
A link is not a visit
The problem is that a link is not a visit. It’s like a restaurant that serves a tasting menu of your favorite dishes but expects you to pay the original chefs via a tip jar on the way out. The friction is still there. The user already has the answer—the “tasting” was the AI Overview—so they have no reason to click through unless they are looking for deeper nuance or a specific tool.
The latency of a click is also a factor. In a world where the answer appears in 200ms, asking a user to wait three seconds for a heavy WordPress site to load is a big ask. Google is trying to maintain the facade of a search engine while operating as an answer engine, hoping that if they throw a few more citations at the wall, the publishers will stop treating the crawler like a thief. (I suspect this is just a defensive play to avoid antitrust heat.) But citations are a consolation prize. They don’t pay the server bills or the writers’ salaries.
It’s a band-aid on a gunshot wound.
Google is not fixing the search ecosystem; they are just trying to slow down the rate of decay. They want the prestige and the data of the open web without the responsibility of delivering the traffic that keeps that web alive. By Q4, we’ll see a significant uptick in high-authority sites blocking AI Overviews entirely despite these links.
They are wrong to think this is enough.