I think we’re tipping over the ‘peak of inflated expectations’ now and heading down towards the ‘trough of disillusionment.’
I think that's pretty much correct. The thing with current GenAI technology is how "magical" it seems. These models are impressive because they can perform in ways nothing else ever has, but once you understand how they actually work, the illusion breaks and you're left with the cruel reality of hallucinations and pretension.
Right now, CEOs and other high-ranking executives see that ChatGPT and the like act like humans and talk like humans, and they conclude they can replace their workforce and save money.
But that's just the thing: they "act." They aren't actually intelligent; they have no agency and no real understanding of any function they may be performing. The most serious problem with current GenAI technology is hallucination. There is no known way to stop it, and as long as that problem exists, these systems cannot be trusted to perform substantial tasks on their own.
Mark Zuckerberg and his buddies know this, so the AI industry is racing to develop Artificial General Intelligence (AGI), which is supposed to actually, and for real, emulate human intelligence. The only problem: nobody in the entire industry has even a guess at how to achieve such a thing. It would require a huge breakthrough, and the current AI technologies receiving ridiculous amounts of investment don't appear to be making any real, measurable contributions toward that goal.
LLMs are cool, but their output should be taken with a massive grain of salt, because there is simply no reliability guarantee or even an approximate measure of correctness. Plenty of people rely on them as if there were, and those people run a real risk of being proverbially burned later for their misplaced trust.
I had some fun with ChatGPT the other month. I wanted to test its ability to assess grammar correctness, so I gave it a sentence that I already knew was wrong. It incorrectly assessed it as being right. I corrected it, and it accepted its own mistake. But I didn't stop there. I then gave it some bullshit explanation (that sounded plausible) for why it was actually correct. It thanked me and again agreed that the sentence was grammatically correct—even though it wasn't. I could probably repeat that charade forever.
The reason it failed so badly is precisely that ChatGPT isn't actually intelligent. It literally just predicts the most likely answer based on a dataset made from random and stolen Internet data. It doesn't evaluate a sentence against linguistic rules like a real human would. What's worse, it can easily be influenced by the way you prompt it.
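The "predicts the most likely answer" idea can be sketched with a toy bigram model. This is a drastic simplification of what an LLM actually does (real models use neural networks over tokens, not word-count tables), but the core statistical principle is the same: pick whatever most often came next in the training data, with zero grasp of grammar or meaning. The corpus and function names here are made up for illustration.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" — stand-in for the Internet-scale data an LLM sees.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # "cat" — it follows "the" most often above
print(predict_next("fish"))  # None — nothing ever followed "fish"
```

Note that the model will happily produce fluent-looking continuations for any prompt it has statistics for, whether or not the result is true or grammatical; that gap between fluency and understanding is exactly where hallucinations live.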
Let my experiment be a lesson. ChatGPT can be a useful tool. Use it, but use it with caution and an understanding of its limitations. Don't let its confident tone fool you into thinking it's correct. That said, Claude is a little bit better than ChatGPT: it's less biased towards pleasing the user, so it's more likely to give you sound answers.
Either way, as more and more people become aware of these serious limitations, we'll indeed head for the "trough of disillusionment." The world will eventually see through the false magic behind AI and the excitement will die down. AGI could still come along, but again, there is no real indication beyond fearmongering and AI-CEO talk that it will.