Amara’s Law: How the AI Hype Cycle Leads to Disillusionment

This blog post was written by Dmitry Shironosov, CEO of Everypixel, and explores the current state of the AI hype cycle, arguing that we are moving from inflated expectations towards a more realistic understanding of AI’s role in our world:

The Journey from Hype to Reality

In recent years, society has experienced a wave of euphoria over the “limitless” possibilities of AI. Now we are gradually sobering up, even becoming disillusioned, as we come to understand AI’s real role relative to humans: humans are the source, the idea, the impulse; AI is merely a professional tool.

When we look at the Gartner Hype Cycle, it’s evident that we are moving from the Peak of Inflated Expectations toward the Trough of Disillusionment. What’s happening with AI is a classic demonstration of Amara’s Law: people tend to overestimate the short-term impact of new technologies while underestimating their long-term effects.

History is full of inventions that seemed revolutionary at first, initially disappointed the public, and only later proved to have a huge long-term impact. The Economist cites the example of the tractor, which appeared long ago but fundamentally transformed agriculture only decades later. Despite its efficiency, the tractor did not eliminate manual labor or the role of humans as managers. The article suggests that AI may follow a similar path, noting that the tractor “conquered the world with a whimper, not a bang.”

The AI Hype Bubble and Investments

The excitement surrounding AI’s capabilities and its perceived success is partly driven by the enormous investments pouring into the industry. By various estimates, AI startups attracted up to $50 billion in investment in 2023, a sum larger than the individual GDP of half the countries on the planet.

This financial influx is amplified by a media frenzy around AI’s potential impact. AI is indeed changing many aspects of our lives, including the language we use: it gives words new meanings and noticeably shifts how often existing words appear. We even conducted a study confirming that “everyone is talking about AI” is not a perception bias of people inside the industry’s bubble but a genuine reflection of how widely it is being discussed.

Immersed in this informational backdrop, where incredible technological revolutions are advertised at every turn, the average user gets inspired and turns to ChatGPT. Often, they end up disappointed, like this Reddit user:

It’s pretty clear that big tech has no solutions for the fundamental problems infecting AI today, and they’re both hiding these problems under the carpet with fancy useless demonstrations.

Reddit user EuphoricScreen8259

Such reactions often stem from fear of AI’s capabilities, acting as a defense mechanism. At the other extreme, threads on X praising AI tools with exclamations like “this is insane,” along with stories of people earning $5,000 a month with ChatGPT, inflate expectations, setting users up for disappointment the moment they receive a wrong answer or encounter an error.

At the same time, it’s remarkable how differently people can view the same thing. One person dismisses it as “useless,” while another is amazed, looks past the visible hype, and recognizes the significant problems AI already solves. This user seems to have reached the Slope of Enlightenment:

A blind man can now walk through a city with the help of AI. And you see a singular move like ‘it just sees a street lamp..how is that special?’

Reddit user Such–Balance

Humanizing Technology: The Path to Disillusionment

AI is a mathematical tool that humans tend to humanize out of psychological habit. This was vividly illustrated by the story of OpenAI and Scarlett Johansson, in which Sam Altman’s company, whether intentionally or not, released a voice assistant closely resembling the character Johansson voiced in the film “Her,” without the actress’s consent.

We name AI models, assign them personalities, joke with them, and debate whether we should greet chatbots and say “thank you.” We literally treat them as humans. Yet AI responds like a human only because it is trained on datasets created by humans.

Thus, I find the comparison of AI to a mirror, in which humans seek to see themselves, quite fitting:

If AI can help us as a society to not only save the environment, cure disease and explore the universe, but also better understand ourselves—well, that may prove one of the greatest discoveries of them all.

DeepMind’s co-founder and CEO Demis Hassabis

AI mirrors our input and needs human-created datasets as its foundation. Many people mistake the apparent flickers of consciousness in language models for genuine awareness, not realizing that these are reflections of human input. The emergence of a market for datasets, one of the trends in AI’s development in 2024, further underscores this point.

To create a model akin to human consciousness, we would first have to understand its complexity, then meticulously analyze how it works, describe it, and only then emulate it. But what is the mechanism of consciousness, our internal mirror of reality? How can we emulate consciousness if we don’t understand how the original works? Will we ever advance beyond the research phase in the foreseeable future? We have not yet come close to solving either the “easy” or the “hard” problem of consciousness, as David Chalmers calls them.

As Luc Julia said in a 2019 interview:

Descartes said ‘Cogito ergo sum,’ ‘I think therefore I am.’ AI doesn’t think, AI doesn’t exist.

Co-founder of Siri and Samsung’s VP of Innovation, Luc Julia

I am sure that AI is not, and will not become, a subject; it will not feel qualia or have a will of its own, at least not until we ourselves understand how our own consciousness does these things.
