Things AI can do (but there aren't many)

I think we’re tipping over the ‘peak of inflated expectations’ now and heading down towards the ‘trough of disillusionment.’
I think that's pretty much correct. The thing with current GenAI systems is how "magical" they seem. They are impressive because they can perform in ways nothing else ever has, but once you understand how they actually work, the illusion breaks and you're left with the cruel reality of hallucinations and pretence.

Right now, CEOs and other high-ranking executives see how ChatGPT and the like act like humans and talk like humans, and they think they can replace the workforce and save money.

But that's just the thing. They act. They aren't actually intelligent; they have no agency and no real understanding of any function they may be performing. The serious problem with current GenAI technology is hallucination. There is no way to stop it, and as long as that problem exists, these systems cannot be trusted to perform substantial tasks on their own.

Mark Zuckerberg and his buddies know this, so the AI industry is in a race to develop Artificial General Intelligence (AGI), which is supposed to actually, and for real, emulate human intelligence. The only problem is: nobody in the entire industry has even a guess at how to achieve such a thing. It would require a huge breakthrough, and the current AI technologies receiving ridiculous amounts of investment don't appear to be making any real, measurable contribution toward that goal.

LLMs are cool, but their output should be taken with a massive grain of salt because there is simply no real reliability or guarantee of correctness. Plenty of people rely on them as if there were, and those people run a real risk of getting proverbially burned later for their misplaced trust.

I had some fun with ChatGPT the other month. I wanted to test its ability to assess grammatical correctness, so I gave it a sentence that I already knew was wrong. It incorrectly assessed the sentence as correct. I pointed out the error, and it admitted its mistake. But I didn't stop there. I then gave it a bullshit explanation (that sounded plausible) for why the sentence was actually correct after all. It thanked me and once again agreed that the sentence was grammatically correct, even though it wasn't. I could probably repeat that charade forever.

The reason it failed so hard is precisely that ChatGPT isn't actually intelligent. It literally just predicts the most likely answer based on a dataset scraped (and often stolen) from random Internet data. It doesn't evaluate a sentence against linguistic rules the way a real human would. What's worse, it can easily be swayed by the way you prompt it.
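
To make the "most likely continuation" idea concrete, here is a minimal toy sketch in Python (a hypothetical bigram counter I made up purely for illustration, nothing like ChatGPT's actual implementation). It picks the next word from frequency counts over a tiny corpus; nowhere in the process is there a grammar rule or a check for truth, and real LLMs, for all their scale, share that basic character.

```python
# Toy next-word "model": it picks whatever word most often followed the
# previous word in its training data. Pure statistics, no notion of grammar.
# (Hypothetical illustration only; real LLMs predict tokens with huge neural
# networks, but the core idea of "most likely continuation" is the same.)
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat sat on the rug".split()

# Count which word follows which in the "training data".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word`, right or wrong."""
    seen = following.get(word)
    return seen.most_common(1)[0][0] if seen else None

print(predict_next("the"))  # prints 'cat', simply because 'cat' followed 'the' most often
```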

Let my experiment be a lesson. ChatGPT can be a useful tool. Use it, but use it with caution and an understanding of its limitations. Don't let its confident tone fool you into thinking that it's correct. Though I will say, Claude is a little bit better than ChatGPT. It's less biased towards pleasing the user, so it's more likely to give you better answers.

Either way, once more and more people become aware of these serious limitations, we'll indeed head for the "trough of disillusionment". The world will eventually see through the false magic behind AI and the excitement will die. AGI could come along, but again, there is no real indication beyond fearmongering and AI CEO talk that it will.
 
Funny aside: even the fearmongering was to drum up hype for AI.

I think the fact that it appears to communicate enough like a human, the kind of human many of us want to hear, is more likely to damn us all into accepting it. Didn't we already have to move the goalposts on the Turing test? The average person will not be so discerning.

"I asked it a question. It answered. Therefore it knows things."

I can see corporate disillusionment, possibly, but I don't expect normal people to lose faith in it.
 
Without even talking about LLMs, there will always be people who have faith in even the stupidest of things. That's why false information is rampant on social media. People make it because there are people who believe it.

I'm pretty sure I read about some sort of ChatGPT religion. It certainly wouldn't surprise me if someone started one. The point is, some people just don't know any better. But that will change as resources that help people understand the limitations of these technologies grow in availability and quality. Right now, I can see how even someone who's pretty smart could fall victim to the hype.

I remember when I started computer science nearly three years ago (I'm near the finish line now): ChatGPT scared me to death because it talked and produced code in such a human-like way. I was depressed because I thought: "Welp, that's it. In five years, ChatGPT will be ten times better, and many jobs will be history."

In retrospect, that was a pretty stupid thing to think. ChatGPT has a hard limit on how much it can evolve because of its fundamental technological constraints. Without another breakthrough, these technologies can and will improve further, but they won't cross any real boundaries beyond those they have already crossed.

Not that I could see all of that back then. I only saw how well it appeared to perform; there wasn't any science behind those thoughts. I was under the spell of the AI hype, magic and fearmongering. But once I slowly came to understand how these technologies actually work, those fears disappeared and my mindset around them changed completely.

There are job losses, mind you. CEOs think that one human programmer with ChatGPT equals three human programmers. Even that claim is still being argued over and studied, but the point is: with the technology in its current form, demand for certain jobs will drop, but those jobs won't vanish the way I thought they would.

Of course, CEOs will always dream of being ever richer, so they will be rather adventurous and bold in these matters. Who knows what the future holds? It is pretty uncertain indeed.

None of that covers art-related jobs. I hear graphic artists are struggling pretty hard now. The reality is that the error-prone work GenAI produces is often good enough for jobs where excellence is simply irrelevant.
 