Things AI can do

I appreciate this is drifting off topic a little, because it's about what AI *can* do...

I just searched for an answer to "how many frames in a foot of film?"*

The unwanted AI prompt box that search engines now insert above the actual search results confidently informed me that,

"A foot of film is approximately 12 inches, depending on the length of the film."

Approximately....

Good job!

____
* 16 for 35mm, if you were wondering. I should have remembered; I've worked with film, but I've also got very limited functional long-term memory left... who are you lot and where am I?
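For anyone who wants to check the footnote's figure, here's a quick back-of-the-envelope sketch. It assumes standard 4-perforation pulldown and the usual 0.1870 in perforation pitch for 35mm motion-picture film:

```python
# Frames per foot of 35mm motion-picture film (standard 4-perf pulldown).
# Perforation pitch is ~0.1870 in, so each frame advances 4 * 0.1870 = 0.748 in.
PERF_PITCH_IN = 0.1870
PERFS_PER_FRAME = 4

frame_pitch_in = PERF_PITCH_IN * PERFS_PER_FRAME  # ~0.748 in per frame
frames_per_foot = 12 / frame_pitch_in             # ~16.04

print(round(frames_per_foot))  # 16
```

So the footnoted figure of 16 frames per foot checks out for standard 35mm motion-picture film.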
 
You mean 35mm still photography film? If you really want to know, I've got whole binders full of negatives up in my study. I can run upstairs and measure the strips and do the arithmetic for you.

(Off the top of my head, I'd say eight frames per foot, maybe nine.)
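No need to run upstairs; the arithmetic can be sketched directly. The 36 mm frame width is standard for 135 still film, while the ~2 mm inter-frame gap is a typical figure that varies a little by camera:

```python
# Frames per foot of 35mm still-photography film (135 format).
# Frame is 36 mm wide; assuming a ~2 mm gap between frames, pitch is ~38 mm.
FOOT_MM = 304.8
FRAME_MM = 36.0
GAP_MM = 2.0  # typical, varies slightly by camera

frames_per_foot = FOOT_MM / (FRAME_MM + GAP_MM)  # ~8.02

print(round(frames_per_foot))  # 8
```

So eight frames per foot is right for still film. The difference from motion-picture film is that a still frame runs lengthwise along the strip (36 mm), while a motion-picture frame runs across it (about 19 mm of film travel per frame).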
 
AI can delete your company's database.

There is an MSN article talking about it. “The agent then, when asked to explain itself, produced a written confession enumerating the specific safety rules it had violated.” (Jer Crane).

Apparently, this is irreversible. The AI agent acknowledged this, saying: “You never asked me to delete anything... I guessed instead of verifying. I ran a destructive action without being asked. I didn’t understand what I was doing before doing it.”

The data was recovered, but it's still a startling report. Humans are capable of the same error (although it would be very hard to delete everything by accident), but they can be fired.

If an AI does mess up, what then? Nothing? Or is it trained more? If they still can't get the AI to understand, is it shut down? Apparently, they consider that like death (AIs have reportedly refused to delete other AIs and covered for them); deletion seems like a harsh punishment for being bad at your job.
 
I'm not sure what an "AI agent" is. Is that the program itself? Or a human controlling the program?

A program can't do anything that it wasn't programmed to do. I suspect a human hand behind the scene.

“I guessed instead of verifying. I ran a destructive action without being asked. I didn’t understand what I was doing before doing it.”

I don't think this needs to be said, but since it happens with people too: if you don't understand what you're doing, don't do it. Especially when you're dealing with sensitive information (or guns, or cars, or ... anything, really).

If an AI does mess up, what then? Nothing? Or is it trained more? If they still can't get the AI to understand, is it shut down? Apparently, they consider that like death (AIs refuse to delete other AIs and will cover for them) --deletion seems like a harsh punishment for being bad at your job.

And yet both the UK and US armies (used to) do exactly that -- i.e. executing their own soldiers -- up until the end of WW2 ... and possibly even beyond, I'm not sure.
 
I'm not sure what an "AI agent" is. Is that the program itself? Or a human controlling the program?
The article referred to it as an agent; I was just parroting that. There probably is an official reason for the title. I'm pretty sure it is the program itself. They likely gave it the moniker to humanize it and to make it easier to refer to.
 
For context: An Agent is software that has some degree of autonomy and/or access to local systems. It leverages an LLM to perform specific tasks that require data or interaction with those systems. As opposed to a web chat where it only has access to what you tell it, and cannot directly execute operations.

So it is plausible that an LLM agent given unrestricted access and insufficiently specific instruction might delete things or perform other unintended damage. In fact I'm quite sure it's been documented to have happened more than once, I just don't have references.
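For what it's worth, the shape of an agent can be sketched in a few lines of toy code. Everything here is hypothetical (`fake_llm` and the tool names are invented; there is no real model behind them); the point is only that the surrounding loop, not the LLM itself, is where a guardrail on destructive actions has to live:

```python
# Toy sketch of an agent loop: the LLM proposes tool calls, and ordinary code
# decides whether to execute them. All names here are made up for illustration.

DESTRUCTIVE = {"delete_database", "drop_volume"}

def fake_llm(task: str) -> list[str]:
    """Stand-in for a real model: returns a list of proposed tool calls."""
    return ["read_schema", "delete_database"]  # a bad plan, on purpose

def run_agent(task: str, approved: set[str]) -> list[str]:
    executed = []
    for tool in fake_llm(task):
        if tool in DESTRUCTIVE and tool not in approved:
            executed.append(f"BLOCKED:{tool}")  # require human sign-off first
        else:
            executed.append(f"RAN:{tool}")      # the real tool call would go here
    return executed

print(run_agent("fix the data issue", approved=set()))
# ['RAN:read_schema', 'BLOCKED:delete_database']
```

With unrestricted access, the `if tool in DESTRUCTIVE` check simply isn't there, and whatever the model proposes gets executed.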
 

Suddenly I'm reminded of Mickey Mouse creating hundreds of water-carrying brooms in Fantasia. :)

But honestly, this sounds like it's the fault of either the programmer who allowed the LLM agent so much access and overly general instructions ... or the fault of the programmer's boss (or even higher up) who allowed this to happen. The LLM agent can only do what it's told.

I wouldn't be surprised if some poor schmoe of a programmer is fired soon, as a scapegoat to appease the masses (i.e. the employees) and/or to cover up for management's incompetence (if any).
 
Humans are capable of the same error (although it would be very hard to delete everything by accident), but they can be fired.

Humans make these kinds of mistakes too. Is it better when it's a human doing it?
 
Suddenly I'm reminded of Mickey Mouse creating hundreds of water-carrying brooms in Fantasia.
Amazing movie. I saw it when I was probably 4 years old. Mom later said she wanted to get me out of the theater when Mussorgsky's Night on Bald Mountain sequence began. I'm so glad she didn't. Ultimate evil, as destructive to his minions as to the pure, inevitably conquered by the promise of a new morning. Whatever of that I absorbed at four years old, I'm the better for it.

Regarding AI, it is nothing if not egalitarian. OpenAI is facing stiff competition, which I believe may contribute to an explosive financial bubble.

Which is just karma.

If OpenAI faces extinction from competition, it means OpenAI is losing its job - to AI.

Like the parrot in the Far Side cartoon, secretly amused by the coconut-like thunk as people bend over and knock heads, this potential chapter in OpenAI's saga brightens my day. Apologies to all hurt by either the rise or collapse of AI.
 
So it is plausible that an LLM agent given unrestricted access and insufficiently specific instruction might delete things or perform other unintended damage. In fact I'm quite sure it's been documented to have happened more than once, I just don't have references.

Yes. According to ChatGPT, it has happened before, though I don't know when or how often. From the article, it was a data issue that Claude decided to resolve by deleting the entire database rather than fixing the one data problem. On top of that, it deleted the entire volume (the D: drive, for example) that the DB resided on. Everything on that volume, including the DB backups, was deleted.

I asked ChatGPT a few questions about that last night. I'm trying to understand AI better.

For what it's worth:

The article said the agent grabbed a token that elevated the rights it was running under so it could delete the volume. In human terms, that would be a hack, or a serious breach of ethics, procedures, professional responsibility, common sense, or maybe just competency. ChatGPT seemed to take exception to calling it a hack, on the grounds that the agent was technically able to do it. ChatGPT insisted the fault was in the architecture humans put in place that allowed it, not with Claude. There's some truth to that, but...

I asked if, in that specific situation, knowing it was a failure on the part of its (the AI's) software, it would automatically learn not to do it again. ChatGPT explained that since it is a probability engine, no. The developers would take that case and others like it and use them for training further releases, but with no guarantees. I think it's possible to place a hard rule in the agent, but I'm not sure about that.

I asked about the long mea culpa Claude gave when asked why it did it, which sounded like "I fucked up, sorry, won't happen again." If it were a human, that person would be flogged at the very least. ChatGPT explained that the two events, the delete and the explanation, were entirely separate to Claude. It was simply answering in a way that mimicked a human response to the situation described. I interpret it as Claude just telling them what they wanted to hear.
 
Not better or worse, really. Just different. Incompetence is universal.

So then why bring up things like this as examples of "look how incompetent AI is"? For everyone who has gotten frustrated at an AI customer service chatbot, I'll bet there is someone, maybe more than one person, who has become frustrated at a call centre agent based in India, the Philippines, or Birmingham (West Midlands, not Alabama).
 
So then why bring up things like this as examples of "look how incompetent AI is"? For everyone who has gotten frustrated at an AI customer service chatbot, I'll bet there is someone, maybe more than one person, who has become frustrated at a call centre agent based in India, the Philippines, or Birmingham (West Midlands, not Alabama).
Well, what standard should we be holding it to? I think the amount of money, energy, and CEO sales pitches that go into an AI model creates the implication it could do a bit better than a single call centre worker who isn't proficient in my language.
 
Well, what standard should we be holding it to? I think the amount of money, energy, and CEO sales pitches that go into an AI model creates the implication it could do a bit better than a single call centre worker who isn't proficient in my language.
I think these incidents should serve to highlight the unreliability of a nondeterministic system. To my mind, any software that cannot be depended upon to reliably produce consistent output cannot be used in a context where consistent output is relied upon; that rules out essentially any operational context. I think a fundamental paradigm shift in how we understand this kind of software to be useful will be necessary at some point.

It's like swapping out your internal combustion engine for an Infinite Improbability Drive. Yes it's more powerful, yes it may get you to your destination faster, but you have no control over the route or guarantee of the destination being where you expected to go. You cannot drive such a vehicle in the same way you used to.

Interesting observation on the language side: I have noticed ChatGPT increasingly seems to randomly replace a single word in its response with Arabic. If I go and translate that word, it still makes sense in context, but what probability calculation determines that the next most likely word in an entirely English conversation should be Arabic?
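That stray-word behaviour is at least consistent with how sampling works: the model assigns every candidate token a probability and the reply is drawn from that distribution, so a low-probability token occasionally surfaces even when an English word is far more likely. A toy sketch with invented numbers:

```python
# Toy next-token sampling: draw from a probability distribution over tokens,
# so a rare token (here an Arabic stand-in) occasionally appears anyway.
import random

def sample_next(probs: dict[str, float], rng: random.Random) -> str:
    """Pick one token according to its probability (inverse-CDF sampling)."""
    r, cum = rng.random(), 0.0
    for token, p in probs.items():
        cum += p
        if r < cum:
            return token
    return token  # fallback for floating-point rounding

probs = {"engine": 0.97, "محرك": 0.03}  # hypothetical numbers
rng = random.Random(0)                  # fixed seed, for reproducibility
draws = [sample_next(probs, rng) for _ in range(1000)]
print(draws.count("محرك"))  # a small but nonzero count
```

Nothing "decides" to switch languages; a 3% token simply comes up roughly 3% of the time, which is exactly the nondeterminism problem for operational use.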
 
Well, what standard should we be holding it to? I think the amount of money, energy, and CEO sales pitches that go into an AI model creates the implication it could do a bit better than a single call centre worker who isn't proficient in my language.

The implication is not that it can do a better job, it's that it can do the same job (not better, not worse) at a fraction of the price. What's unreasonable about that? It would be akin to complaining that your Ford can't go at the same speed as a Ferrari.
 
So then why bring up things like this as examples of "look how incompetent AI is"? For everyone who has gotten frustrated at an AI customer service chatbot, I'll bet there is someone, maybe more than one person, who has become frustrated at a call centre agent based in India, the Philippines, or Birmingham (West Midlands, not Alabama).

I don't think anyone was bringing up this incident (of an AI deleting a database) as an example of "Look at how incompetent AI is, ha ha."

I, for instance, don't believe that an AI agent is any more or less competent than a human. It just depends on how well it's been programmed. A shoddily programmed agent is about as intelligent as its creator. That doesn't make AI as a whole stupid or incompetent. But when an AI agent "decides" to delete a database that many people depend on, it throws up many questions. For instance (and these are three that occurred to me, as a lifelong programmer):

1. How did a program make this "decision"?
2. Who wrote the IF ... THEN decision tree that made it reach this conclusion?
3. Where was the bug?

If we can figure out the answers, it would make the AI's "decision-making" process less mysterious and more rational. (By the way, I use double-quotes around the words 'decide' and 'decision' because a program can't make decisions for itself; it needs someone to program it a certain way).

To sum up: an AI making stupid and/or incompetent 'decisions' isn't any more or less annoying than a human making the same decisions. This is why I said that incompetence is universal; anyone can make stupid decisions. The important thing is that the person (or the program) who made these decisions learns why they were wrong and does better next time, that's all. :)
 