The Consolidation Layer

If the internet linked minds laterally, AI may be linking them vertically.

That's the argument. The rest of this piece is about why that distinction matters more than most people realize.

Here's what actually happened with the internet, stripped of the mythology: it externalized cognition. Not in some abstract, hand-wavy way. Literally. Search made memory retrievable at planetary scale. Social platforms made reaction visible in real time. Forums, archives, comment threads, and feeds turned private thought into persistent signal inside a shared symbolic environment. Writing had already done this — the Sumerians started it — but the internet did it at a speed and density that changed the nature of the system.

James Hollan, Edwin Hutchins, and David Kirsh made this argument in 2000, in a paper on distributed cognition that still holds up. Their point was that cognition doesn't have to live inside a single skull. It can be distributed across people, tools, representations, environments. Pierre Lévy was arguing something similar — that collective intelligence is distributed, coordinated in real time, and constantly renewed.

Both frameworks point at the same structural reality: route thought through search engines, shared documents, hyperlinks, recommendation algorithms, and databases, and cognition becomes partially infrastructural. Not metaphorically, but architecturally.

Your Google search isn't just you remembering something: it's you accessing a layer of externalized memory built from the contributions of millions of other minds.

But the internet, for all its connective power, left meaning fragmented. It linked everything and synthesized nothing. You could find seventeen perspectives on any question in thirty seconds, but you had to do the interpretive work yourself. The network was alive. It was also noisy as hell.

AI changes the architecture.

Search engines returned documents. AI returns interpretations. That distinction sounds minor until you sit with it. Ten years ago, you typed a question into Google and got a page of blue links — ten different sources, ten different angles, and the implicit message: figure it out yourself. Now you type the same question into ChatGPT and you get a single, fluent, confident answer. The plurality is gone. What you get instead is a synthesis — or at least something that looks like one.

A 2024 Nature Human Behaviour perspective put it directly: large language models are transforming how information is aggregated, accessed, and transmitted online, reshaping collective intelligence in the process. AI isn't just adding more content to the internet. It's operating as a mediation layer between the individual and the accumulated output of everyone else.

Think about what that means mechanically. These models absorb immense residues of human language — pattern, argument, style, explanation, contradiction, bias, insight. They compress those residues into statistical representations. They reissue those representations back into the world as answers, summaries, strategies, code, images, drafts. Society speaks through the network, and now the network speaks back through synthetic interfaces trained on that speech.

It's an information loop, and we are volunteering a measure of our autonomy to it.

That is not consciousness in the philosophical sense. But it is a system with distributed memory, continuous feedback, and large-scale integration — one that processes the products of human thought and returns them in forms that shape future human thought. If you were trying to design the closest modern analogue to a collective mind, you'd probably end up building something that looks a lot like this.

Now here's where it stops being abstract.

Google's AI Overviews — the summaries that now appear above traditional search results — showed up on roughly a quarter of all search queries at their peak in mid-2025. Organic click-through rates for queries with those overviews dropped 61 percent in a single year. Paid click-through rates fell 68 percent. A Pew study found that when users encountered an AI summary, only 8 percent clicked through to a source. The rest got what they needed — or thought they did — from the summary itself.

Read that again.

Ninety-two percent of people who saw an AI-generated answer didn't feel the need to check where it came from. The consolidation layer isn't theoretical. It's already eating the information architecture of the internet from the top down.

The scale is staggering. ChatGPT hit 800 million weekly active users. Perplexity processes hundreds of millions of queries a month. Google's AI Mode reached 100 million users within months of rollout. These aren't experimental tools anymore. They're primary interfaces for how hundreds of millions of people encounter information.

And the downstream effects are already visible. Chartbeat data across 2,500 news sites showed Google organic search traffic down 33 percent globally and 38 percent in the US between late 2024 and late 2025. Nieman Lab published a piece arguing that 2026 is the year newsrooms begin rebuilding from first principles — not for a web where readers click through to articles, but for a world where AI interfaces break those articles apart, recombine them, and deliver the information inside their own environments without ever sending readers back. The discovery path that sustained digital journalism for two decades — search, click, read, act — is being replaced by something shorter and more closed: ask, answer, act. The click is disappearing. And with it, the distributed, messy, pluralistic information environment that the open web once provided.

This is the part that should make you uncomfortable.

Coherence is not the same as wisdom. Compression is not the same as understanding. A system that integrates the outputs of millions of people can also integrate their biases, their blind spots, their distortions, and their power asymmetries. The same consolidation layer that makes collective cognition more legible can also flatten minority perspectives, overstate consensus, erase ambiguity, and present statistical plausibility as truth. When an AI tells you something in a clean, confident paragraph, it feels authoritative. But that feeling has nothing to do with whether the content is actually correct.

A Columbia Journalism Review study found that AI search engines collectively provided incorrect answers to more than 60 percent of queries. Perplexity was the best performer, and it was still wrong 37 percent of the time. ChatGPT expressed uncertainty in only 15 of 200 responses, even when it was wrong. The European Broadcasting Union found that ChatGPT, Perplexity, and Gemini misrepresent news content roughly half the time.

So the consolidation layer is confident. It's fluent. It's available at scale. And it's wrong a lot. That combination — a system that sounds like it knows what it's talking about but frequently doesn't — is not a product design problem. It's a civilizational one. And nobody at a product launch keynote is going to frame it that way.

The question isn't whether machines are conscious. That's a philosophy seminar. The more pressing question is whether human consciousness is becoming newly organized through machines — and whether that reorganization is compressing the range of visible thought even as it makes thought feel more accessible.

This isn't a dystopia story. It's also not the techno-utopian version where AI is a neutral amplifier of human capability.

What's happening is structural: the internet built a distributed cognitive system, and AI is consolidating it. That consolidation creates real benefits — faster synthesis, broader access, lower barriers to complex reasoning. It also creates real risks — homogenization, false confidence, the slow erosion of the epistemic infrastructure that pluralistic societies depend on to function.

Writing externalized memory. Networks connected minds. AI is reorganizing the products of those minds into increasingly unified forms. Whether that amounts to a collective consciousness depends on definitions most people will never agree on. But it is, without question, the closest thing modern civilization has built to one.

And the architecture is being constructed right now, mostly by companies optimizing for engagement, not for epistemology. The question of who builds the consolidation layer — and what they optimize it for — may be the most important design question of this century. It's not being treated that way.

References

Hollan, J., Hutchins, E., & Kirsh, D. "Distributed Cognition: Toward a New Foundation for Human-Computer Interaction Research." ACM Transactions on Computer-Human Interaction 7, no. 2 (2000): 174–196.

Burton, J. W., Lopez-Lopez, E., Hechtlinger, S., et al. "How Large Language Models Can Reshape Collective Intelligence." Nature Human Behaviour 8 (2024): 1643–1655.

Packer, M. J., et al. "What Is Collective Intelligence?" In Cultural-Historical Perspectives on Collective Intelligence (Cambridge University Press, 2022).

Seer Interactive. "AI Overviews CTR Analysis: 3,119 Informational Queries Across 42 Organizations." June 2024–September 2025.

Semrush. "AI Overviews Study: What 2025 SEO Data Tells Us About Google's Search Shift." December 2025.

Reuters Institute for the Study of Journalism. "Journalism, Media, and Technology Trends and Predictions 2026." January 2026.

Nieman Journalism Lab. "AI Will Rewrite the Architecture of the Newsroom." December 2025.

Columbia Journalism Review. AI Search Accuracy Study. March 2025.

European Broadcasting Union. AI Chatbot News Accuracy Study. 2025.

Published Mar 29, 2026
