Three years ago, ChatGPT was born. It amazed the world and sparked unprecedented investment and excitement in the field of artificial intelligence. Today, ChatGPT is still a toddler, yet public sentiment around the AI boom has turned sharply negative. The shift began this summer when OpenAI released GPT-5 to mixed reviews, mostly from casual users who, unsurprisingly, judged the system on its superficial flaws rather than its core capabilities.
Since then, experts and influencers have declared that AI progress is slowing, that scaling has “hit a wall,” and that the entire field is just another tech bubble inflated by hype. Many influencers have seized on the disparaging phrase “AI slop” to dismiss the remarkable images, documents, videos, and code generated by frontier AI models.
This perspective is not just wrong; it is dangerous.
I wonder where all these tech-bubble “experts” were when electric scooter startups were being touted as a transportation revolution and animated NFTs were being auctioned for millions. They were probably too busy buying worthless land in the metaverse or adding to their GameStop positions. But when it comes to the artificial intelligence boom, arguably the most significant driver of technological and economic transformation in the last 25 years, journalists and influencers can’t write the word “slop” often enough.
Do the critics protest too much? After all, objectively speaking, AI is far more powerful than the vast majority of computer scientists predicted just five years ago, and it continues to improve at an astonishing rate. The impressive leap demonstrated by Gemini 3 is just the latest example. Meanwhile, McKinsey recently reported that 20% of organizations already derive tangible value from genAI, and a recent Deloitte survey indicates that 85% of organizations increased their investments in artificial intelligence in 2025, with 91% planning to increase them again in 2026.
None of this fits the “bubble” narrative or the dismissive language of “slop.” As a computer scientist and research engineer who began working with neural networks in 1989 and has followed the field through frigid winters and feverish boom years, I am amazed almost daily by the rapidly growing capabilities of frontier AI models. When I talk to other professionals in the field, I hear the same. If anything, the pace of AI development leaves many experts feeling overwhelmed and, frankly, a little frightened.
The dangers of artificial intelligence denial
So why does the public buy the narrative that AI is failing, that its output is “half-baked,” and that the AI boom lacks real use cases? Personally, I think it’s because we’ve fallen into a collective state of denial, clinging to the story we want to hear in the face of overwhelming evidence to the contrary. Denial is the first stage of grief, and thus a reasonable response to a deeply disturbing prospect: that we humans may soon lose cognitive supremacy here on planet Earth. In other words, the overblown AI-bubble narrative is a social defense mechanism.
Believe me, I get it. I have warned about the destabilizing risks and demoralizing influence of superintelligence for over a decade, and I too believe that artificial intelligence is getting too smart, too quickly. The fact is that we are rapidly heading toward a future in which widely available AI systems will outperform most humans at most cognitive tasks, solving problems faster, more accurately, and yes, more creatively than anyone else. I emphasize “creativity” because AI denialists often insist that certain human traits (especially creativity and emotional intelligence) will always be beyond the reach of AI systems. Unfortunately, there is little evidence to support this view.
When it comes to creativity, today’s AI models can already generate content faster and with greater variety than any individual human. Critics counter that true creativity requires internal motivation. I sympathize with this argument, but I think it is circular: we are defining creativity by how we experience it, not by the quality, originality, or usefulness of the product. We also don’t know whether AI systems will develop internal drives or a sense of agency. Either way, if AI can produce original work that competes with most human professionals, the impact on creative professions will still be devastating.
The problem of AI manipulation
Our human advantage in emotional intelligence is even more precarious. Artificial intelligence will likely soon be able to read our emotions faster and more accurately than any human, tracking subtle cues in our microexpressions, vocal patterns, posture, gaze, and even breathing. And as we integrate AI assistants into our phones, glasses, and other wearable devices, these systems will monitor our emotional responses throughout the day, building predictive models of our behavior. Without strict regulation, which looks increasingly unlikely, those predictive models could be used to target each of us with individually optimized influence that maximizes persuasion.
This is known as the AI manipulation problem, and it suggests that emotional intelligence may not give humanity an edge. In fact, it may prove a significant weakness, creating an asymmetric dynamic in which AI systems can read us with superhuman accuracy while we cannot read AI at all. When you talk to photorealistic AI agents (and you will), you will see a smiling facade designed to appear warm, empathetic, and trustworthy. It will look and feel human, but it is just an illusion, one that can easily sway your views. After all, our emotional responses to faces are visceral reflexes shaped by millions of years of evolution on a planet where every interactive human face we encountered was actually human. Soon that will no longer be true.
We are quickly moving toward a world where many of the faces we encounter will belong to AI agents hiding behind digital facades. In fact, these “virtual spokespeople” can easily be given an appearance tailored to each of us based on our past responses, whatever best makes us let our guard down. And yet many still insist that artificial intelligence is just another technology cycle.
This is wishful thinking. The massive investment in AI isn’t driven by hype; it’s driven by the expectation that AI will permeate every aspect of everyday life, embodied as the intelligent agents we engage with daily. These systems will assist us, teach us, and influence us. They will change our lives, and it will happen faster than most people think.
To be clear, we are not watching an AI bubble fill with empty gas. We are watching a new planet form, a molten world rapidly taking shape that will solidify into an AI-powered society. Denial won’t stop it. It will only leave us less prepared for the risks.
Louis Rosenberg is an early pioneer of augmented reality and a longtime artificial intelligence researcher.
