November 21

The Rise and (Slight) Fall of GEN AI: What Happened and What's Next?

Let’s be real: nothing has created more excitement (and confusion) in the past few decades than Artificial Intelligence. But if you really look at it, most of the AI "revolution" has been more noise than substance.

Sure, there have been some legit breakthroughs, but they've been drowned out by the deafening sound of hype. This isn’t new, guys. Every new technology goes through the same thing — just think about the internet bubble in the 90s. AI is no different.

Then came ChatGPT, and suddenly, AI wasn’t just a techie’s plaything — it was the talk of the boardroom. But here's the catch: with all this buzz, we're missing the real story. So, let’s cut through the noise and focus on Generative AI, the star (and now maybe the scapegoat) of the past year or so.

What’s really going on with AI?

There are three big schools of thought when it comes to Large Language Models (LLMs). Let's break them down:

  1. Skepticism: AI, My Foot!

Some heavyweights like Noam Chomsky see LLMs as nothing more than fancy calculators. They argue that these models don’t "understand" anything — they’re just really good at predicting what comes next in a sentence because they’ve seen a gazillion examples. Mathematically, they've nailed conditional probability, but that's about it.

Our two cents: Sure, they might be underestimating the subtle ways data modeling can mimic cognition. But then again, how do we know humans aren’t doing the same thing? We’re constantly processing data from our senses. So, maybe understanding and mimicking understanding aren’t that different. Check out this paper for more on how much of LLM behavior can be explained by simple statistical rules.
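
To make the "fancy calculator" view concrete, here is a deliberately toy sketch in Python (a bigram word counter, nothing remotely like a real transformer) of what "predicting what comes next" means in terms of conditional probability:

```python
# Toy illustration only: "predict the next word" as conditional probability,
# estimated from raw co-occurrence counts (a bigram model).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word and P(next word | word)."""
    counts = following[word]
    total = sum(counts.values())
    next_word, count = counts.most_common(1)[0]
    return next_word, count / total

print(predict_next("the"))  # ('cat', 0.5): "cat" follows "the" in 2 of 4 cases
```

A real LLM swaps the count table for a neural network trained on trillions of tokens, but the training objective is the same in spirit: estimate the probability of the next token given everything that came before.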

  2. Hopeful insight: AI’s got a brain!

On the flip side, folks like Ilya Sutskever (OpenAI co-founder and its former chief scientist) and Geoffrey Hinton think LLMs have developed some kind of internal model of the human world. They argue that since the internet is a treasure trove of human thoughts and experiences, LLMs, by predicting the next word in a text, have somehow soaked up this knowledge and become "intelligent."

Our two cents: This might be giving too much credit. These models are great at processing data, but that’s not the same as real understanding. Plus, if they’re so smart, why do they flunk simple tasks?

  3. Pragmatism: AI is a tool, not a thinker

Then there are the pragmatists like Yann LeCun and Subbarao Kambhampati. They see LLMs as powerful aids but not as entities with human-like intelligence. LLMs can help with tasks like writing but fall short on genuine reasoning and understanding.

Our two cents: They’re spot on. LLMs are like cognitive crutches — good for support but not for running a marathon. Future AI advancements will need new principles, not just bigger models. LeCun even said, "Don’t work on LLMs."

The hype is dying: what’s next?

Remember, every real-world "exponential" is an S-curve in disguise: growth looks unstoppable right up until it hits a constraint and flattens out. AI is no exception. Take CPU clock speeds in the 2000s or airplane speeds in the 1970s; growth hit a wall. The same might be happening with AI.

Figure: Flight airspeed records over time. The SR-71 Blackbird record from 1976 still stands today.
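
To see why the distinction matters, here is a toy comparison (made-up numbers, not benchmark data) between an exponential and a logistic S-curve that grows at the same rate early on and then saturates:

```python
# Made-up numbers, purely to illustrate the shape of the curves.
# A logistic (S-curve) tracks an exponential early on, then saturates at a ceiling.
import math

def exponential(t, rate=0.5):
    return math.exp(rate * t)

def logistic(t, rate=0.5, ceiling=100.0):
    # Starts at 1.0 like the exponential, grows at the same initial rate,
    # but flattens out as it approaches `ceiling`.
    return ceiling / (1.0 + (ceiling - 1.0) * math.exp(-rate * t))

for t in range(0, 25, 4):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):6.1f}")
```

Early on the two curves are nearly indistinguishable, which is exactly why extrapolating from the steep part of the curve is so tempting, and so risky.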

Synthetic data

Some think synthetic data is the future of AI scaling. But let’s be real: using synthetic data to replace high-quality human data is like trying to fuel a car with coffee grounds instead of gas. It can work in narrow domains with a clear, checkable reward signal, like AlphaGo Zero teaching itself Go through self-play, but that’s not a universal solution.

How good are current AI systems, really?

Here’s a bombshell: we don’t really know how good or bad current AI systems are. Accuracy measurements that don’t account for cost are useless. Some fancy, costly models aren’t any more accurate than simpler, cheaper ones.

The real cost: If we don’t measure cost, researchers might just keep building expensive models to top leaderboards. And let’s be honest — most AI research papers? Not super useful. We need a real scientific theory of intelligence, not just more models.
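
As a minimal sketch of what cost-aware evaluation could look like (the model names, accuracy figures, and prices below are invented for illustration):

```python
# Hypothetical models, accuracies, and prices, invented for illustration.
# Ranking by accuracy alone picks "big_expensive"; a cost-aware rule does not.
models = {
    # name: (benchmark accuracy, dollars per 1,000 queries)
    "big_expensive": (0.87, 30.00),
    "small_cheap":   (0.85, 1.50),
}

accuracy_target = 0.80

# Keep only models that clear the accuracy bar, then take the cheapest one.
good_enough = {name: v for name, v in models.items() if v[0] >= accuracy_target}
name, (acc, cost) = min(good_enough.items(), key=lambda item: item[1][1])

print(f"Pick: {name} ({acc:.0%} accuracy at ${cost:.2f} per 1k queries)")
# -> small_cheap: 2 accuracy points lower, roughly 20x cheaper
```

The point is not the specific numbers but the selection rule: once cost is part of the comparison, "top of the leaderboard" stops being the automatic winner.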

The financial fiasco

Let’s talk cash. A lot of people don’t get the economics of AI. Companies are throwing money at AGI dreams (looking at you, Sam Altman, with your $7 trillion plan). But here’s the kicker: even if we build AGI, what then? If machines do all the work, who’s going to buy the products? The economic system would collapse.

And let’s not forget the current AI gold rush. Half of the startups are just wrappers on OpenAI’s API, burning through VC money. That gravy train is slowing down.

What could burst the AI bubble?

  1. Unsustainable valuations: Overhyped AI companies might not live up to their sky-high valuations.
  2. Lack of profitable revenue streams: Many AI companies haven’t figured out how to make real money.
  3. Regulatory challenges: More scrutiny on AI safety and ethics could slow things down.
  4. Economic downturn: A recession could dry up AI investment.

The Future: Less hype, more reality

AI CEOs are already dialing down AGI expectations. What’s next? We might see a dilution of interest in AI, with only the truly dedicated sticking around to build better, more controllable systems.

What we need: Human-augmenting intelligence — systems that make us more productive and connected, not more distracted. The mindless proliferation of AI products has created more problems than solutions.

Conclusion

Let’s not normalize AI friends and girlfriends or use AI in education without thinking through the consequences. We need to be smarter about how we use AI, not just make more of it.

So, there you have it. The GEN AI boom might be fading, but the story of AI is far from over. Let’s hope the next chapter is more about substance and less about spectacle.

