Are we doomed to be STUPID?!!

Are we doomed to become more stupid?

Please excuse the click-bait headline, but now that you are here, consider staying and examining with me the deeper, more long-term impact AI is having on society and humankind. This is an important topic, and we really need to think about it and have conversations about where our current trajectory is leading us.

Artificial intelligence has rapidly grown from a distant vision of the future into a present-day reality that shapes countless aspects of our personal and professional lives. From healthcare and finance to manufacturing and social media, AI-driven innovations are transforming industries and redefining how humans interact with information. In this article, we will explore the shifting landscape of information retrieval and generative AI, examining the potential consequences of these advancements on society in the short, medium, and long term. I will discuss key historical milestones from libraries to search engines, and delve into how generative AI might undermine or reinforce our critical thinking skills, content creation practices, and the broader information ecosystem.

A Brief History of Information Search and Retrieval

The Library Era

Before the advent of the internet, accessing information often meant visiting libraries or perusing newspapers. Libraries functioned as carefully curated repositories of knowledge, offering books, journals, and other publications to those who needed them. People tended to trust library collections because they were vetted by professionals (e.g., librarians and archivists) who ensured that the materials met minimum standards of accuracy and relevance.

 

People in library - licensed from Envato.



Within this paradigm, finding new ideas or answers to difficult questions usually meant a combination of talking to experts, reading extensively, and piecing together insights from multiple sources. In such an environment, we enjoyed certain freedoms and benefits:

 

  • Choice: Although libraries offered a limited selection, patrons could still choose from various books, references, or scholarly articles.
  • Multiple Perspectives: Readers could compare and contrast different authors’ viewpoints, developing critical thinking skills.
  • Focused Learning: Because there was a finite set of references, people spent more time engaging deeply with the material.

 

The Search Era

The arrival of web search engines (think AltaVista, Yahoo, and later Google) completely upended how we discover and consume information. Suddenly, the entire World Wide Web became accessible at the click of a button. The role of newspapers and traditional media outlets began to shift, as more and more people sourced news and insights online. Over the years, search engines refined their algorithms to the point where a single query could return millions of results.

 

Search image - licensed from Envato


 

There were immediate benefits:

  • Vast Quantity of Results: Instead of relying on a finite set of library materials, users could theoretically explore an endless ocean of information.
  • Democratization of Knowledge: People who had the means to go online could access the same (or very similar) cache of data as experts.
  • User Autonomy: Critically, individuals had to determine which results were most relevant to them. If someone felt that the first result was unconvincing or biased, they could continue scrolling, or refine their search, to find alternative viewpoints.

However, the search era introduced challenges:

  • Information Overload: The vast number of results could overwhelm users, sometimes prompting them to settle for the easiest or most convenient answer.
  • Bias in Algorithms: Search engines are not neutral tools. Algorithms that measure relevance, popularity, or personalization can inadvertently push some views into the spotlight while overshadowing others.
  • Decline in Critical Reading: While users were theoretically free to explore multiple perspectives, the sheer volume sometimes nudged people to skim quickly and trust top-ranked links.

Even with these limitations, choice persisted. Those who were determined to dig deeper could discover obscure facts or niche sources by sifting through page after page of search results. This freedom to self-direct the learning process encouraged broad exposure to diverse viewpoints.

The Impact of Generative AI Technologies

Pre-ChatGPT Generative AI

Prior to the breakthrough of large language models like ChatGPT, generative AI was generally used in specialized domains: text summarization, machine translation, content recommendation, or creative work such as music and art generation. While these advancements showed potential, they were not adopted on a massive scale by the general public. People still relied heavily on search engines for most information.

The ChatGPT Revolution

Then came ChatGPT and similar generative AI models. Suddenly, tools with a human-like capacity to converse, write, and create content captured global attention and rapidly gained traction. People discovered that many tasks once requiring manual effort and consultation of multiple sources (e.g., drafting an article, writing code, brainstorming ideas) could be significantly streamlined. Let's look at some pros and cons of this impact:

Pros

  • Efficiency: Tasks that once took hours of research, writing, and editing can now be completed in minutes.
  • Accessibility: Non-experts gain a near-instant resource for learning new concepts or verifying details.
  • Creativity: Generative AI can serve as a muse for brainstorming or prototyping, providing countless variations of ideas.

Cons

  • Loss of Control: A single AI-generated answer might be accepted without critical evaluation, narrowing exposure to alternative perspectives.
  • Quality Concerns: While large language models have improved dramatically, they still produce inaccuracies, or “hallucinations”, that lead to misinformation. (Although 'hallucinations' is the term generally used in this context, I think it humanises the technology too much; it is better thought of as randomised content generation that has no basis in fact and simply follows a pattern similar to something the gen-AI has observed previously.)
  • Dependence: People relying solely on these tools risk eroding their own capacity for critical thinking and problem-solving. Scientific research in cognitive psychology supports the notion that mental faculties follow the “use it or lose it” principle. By not regularly practicing critical thinking and problem-solving, individuals can experience a degradation in neural pathways responsible for higher-order cognition (Dayan & Cohen, 2011).

 

Where AI Companies Get Their Data

An often-overlooked reality is that AI companies acquire much of their training data by scraping websites, social media platforms, and digital archives across the internet. This widespread scraping process has provoked a wave of litigation, as many content producers and publishers see it as unauthorized use—or even outright theft—of their intellectual property. The impact has been devastating for numerous online platforms that thrive on advertising and subscription revenue: when users choose to consult ChatGPT or similar AI models instead of visiting websites directly, traffic declines dramatically, and with it, the revenue streams that keep these platforms operational.

 


Growth of training data over time (ref: NY Times)

AI companies argue that their models benefit from broad web-scraped data to achieve comprehensive coverage, but critics point out that these businesses are profiting from other people’s work without adequate (or in most cases, any) compensation or even acknowledgement. Meanwhile, faced with plunging page views and reduced ad impressions, publishers struggle to survive, and it seems clear that many will close and go out of business as a result of this dramatic shift in how information is searched for and delivered to users on the Internet.

On top of these legal and economic concerns, experts such as Ilya Sutskever, co-founder and former Chief Scientist of OpenAI, have remarked that we may already have reached “peak data,” meaning the most valuable and relevant information on the internet has largely been harvested for training. If this is true, and there is no new wellspring of genuine, high-quality human data to feed AI models, the question becomes: where does such data come from going forward?

For many AI companies, the answer lies in generating synthetic data, which, as outlined above, derives its foundations and insights from the original human-provided content. Yet this approach only underscores the risk of a self-referential loop. If users and platforms cease producing fresh, human-origin content because AI has siphoned away much of the audience and revenue, the well of genuine knowledge might run dry. In such a scenario, future AI models, starved of real human context and experience, could become increasingly detached from the realities they aim to represent.

Decline in User-Generated Content

A striking consequence of the rise of generative AI is the noticeable slowdown in user-generated content. Platforms like Stack Overflow, and many other online communities that I am personally involved with, have reported sharp declines in human-contributed content (both question-asking and answer-providing activity), which some attribute to the fact that ChatGPT can handle basic instructional or conceptual queries instantaneously.

The following chart, showing a decline in Stack Overflow traffic around the time of ChatGPT's release, is interesting. I first saw it on Tom Alder's LinkedIn feed, which highlighted the irony of Stack Overflow's fall and ChatGPT's growth.

 

The crux is that if fewer individuals are motivated to create and share knowledge, online information repositories risk stagnation. High-quality human contributions come from real experiences, insights, and creative processes that AI cannot replicate (until, or unless, it becomes sentient; but even then, its 'lived' reality will not be that of humans, so it would still be different). Without these deeply human contributions, the very training data on which AI models rely could rapidly dwindle in quality and volume.

Additionally, synthetic data, by its very nature, derives its patterns, structures, and content from an original corpus of human-generated input or ground truth. While it can be useful in certain contexts, such as filling gaps in datasets or augmenting specific use cases, it fundamentally lacks the spontaneity, cultural context, and novel insights borne out of human experience and creativity. Synthetic data does not possess lived experiences or genuine motivations; it can only remix or extrapolate from existing human-provided information. As a result, it cannot develop brand-new conceptual breakthroughs in the same way humans, living and engaging with the real world, can.

Over time, if more and more data becomes artificially generated, the overall depth and authenticity of the dataset risk devolving into repetitive patterns and echo chambers. Without a steady influx of genuine human insights and experiences, AI models lose their grounding in the ever-evolving reality they are meant to reflect, and the quality of knowledge they produce will inevitably degrade.
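To make this self-referential risk concrete, here is a toy sketch (my own illustration, not drawn from any cited study): fit a simple statistical model to "human" data, sample synthetic data from it, refit on the synthetic sample, and repeat. With no fresh human data entering the loop, the model's variance, a crude stand-in for the diversity of the content, tends to shrink over the generations.

```python
import random
import statistics

def fit_gaussian(samples):
    """Estimate mean and standard deviation from data (the 'model')."""
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)  # ML estimate; slightly biased low
    return mu, sigma

def generate_synthetic(mu, sigma, n, rng):
    """Sample synthetic 'content' from the fitted model."""
    return [rng.gauss(mu, sigma) for _ in range(n)]

def simulate_collapse(generations=10, n=200, seed=42):
    rng = random.Random(seed)
    # Generation 0: genuine human-origin data, drawn from N(0, 1).
    data = [rng.gauss(0.0, 1.0) for _ in range(n)]
    variances = []
    for _ in range(generations + 1):
        mu, sigma = fit_gaussian(data)
        variances.append(sigma ** 2)
        # The next generation trains only on the previous model's output.
        data = generate_synthetic(mu, sigma, n, rng)
    return variances

variances = simulate_collapse()
print(f"initial variance: {variances[0]:.3f}, final variance: {variances[-1]:.3f}")
```

Each refit discards a little of the original distribution's spread (the maximum-likelihood variance estimate shrinks in expectation by a factor of (n-1)/n per generation, with random drift on top), which is the statistical analogue of the "repetitive patterns and echo chambers" described above and one reason researchers call this failure mode model collapse.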

What the Future Holds

Short-Term Outlook

In the short term, over the next few years, generative AI will likely continue to spread across various industries and personal use cases. Companies will integrate AI assistants into their products, and people will grow increasingly comfortable relying on these models for everything from text editing to complex research.

 

  • Productivity Gains: Automation will lighten the burden of routine tasks, enabling businesses to focus on higher-level strategic planning.
  • Need for Guardrails: As misinformation threats loom, expect a surge in regulations and ethical guidelines for AI deployments.
  • Hybrid Workflows: Human-in-the-loop systems will emerge to validate AI outputs. We will still rely on experts to confirm the correctness of information.

 

Medium-Term Outlook

Looking five to ten years ahead, generative AI will likely be deeply woven into our daily lives.

 

  • Cultural Shift: The notions of “research” and “learning” may increasingly mean interacting with an AI agent rather than actively searching across multiple sources.
  • Skill Erosion: If people increasingly depend on AI to do the hard work of finding, verifying, and presenting information, core skills like critical thinking, in-depth reading, and domain-specific knowledge risk atrophy.
  • Content Dearth: As motivation for creating new, publicly available content diminishes, high-quality data pools shrink. AI models may “feed” on older or synthetic data, leading to a cycle of decaying quality over time.

 

Long-Term Outlook

In the decades to come, society may confront a reckoning if we do not maintain robust mechanisms to cultivate fresh ideas and quality information.

 

  • Self-Referential AI: Relying on AI-generated outputs to train future AI systems could risk data staleness and degrade their value.
  • Fragmented Reality: With less human-generated content to serve as a common basis for truth, we risk siloed understanding and heightened echo chambers.
  • Emergence of Specialized Communities: People who value deeper analysis and primary-source data may cluster into smaller, dedicated circles, reminiscent of the library era’s reliance on expert gatekeepers.

However, all is not doom and gloom if we actively steer the evolution of AI:

 

  • Reinventing Trust Models: Society could develop new systems of peer review, verification, and regulation that incentivize genuine human-driven content creation.
  • Hybrid Intelligence: We may balance AI’s convenience with continual human input, ensuring that data remains grounded in real-world experiences.
  • Ethical and Responsible AI: As governments and private entities collaborate, we could develop transparent models that promote truthfulness, reliability, and accountability.

 

Wrapping it up...

The journey from libraries to search engines and now to generative AI underscores humanity’s relentless pursuit of convenience and efficiency. Each paradigm shift in information access has granted us new abilities while also introducing vulnerabilities. With generative AI technologies like ChatGPT, we stand at a crossroads: we could embrace unprecedented creativity and productivity, or we could succumb to complacency, losing our capacity for independent thought and diminishing the pool of human-created knowledge.

To ensure a healthy information ecosystem in which AI and humanity can coexist fruitfully, we must:

 

  1. Maintain Critical Thinking: Encourage education systems and professional environments to reward scepticism and curiosity.
  2. Foster Human Contribution: Develop incentives for individuals to continue creating and sharing high-quality, reliable content.
  3. Implement Robust AI Governance: Establish guidelines and best practices that hold AI systems (and their creators) accountable for misinformation or bias.
  4. Preserve Diverse Perspectives: Resist algorithmic echo chambers by designing AI platforms that highlight multiple viewpoints.

Ultimately, the future of AI and society is a story we are writing in real time. By harnessing these powerful tools responsibly—without forsaking our innate human capabilities—we can ensure that AI evolves as a transformative force that elevates, rather than diminishes, the depth and breadth of human knowledge.

 

References

Some references you may find useful to read:

Park, D. C., & Reuter-Lorenz, P. A. (2009). The adaptive brain: Aging and neurocognitive scaffolding. Annual Review of Psychology, 60, 173–196.

Dayan, E., & Cohen, L. G. (2011). Neuroplasticity subserving motor skill learning. Neuron, 72(3), 443–454.

Training data is exhausted - TechCrunch

Tech giants harvest data - The New York Times
