Things AI can do

I appreciate this is drifting off topic a little, because it's about what AI *can* do...

I just searched for an answer to "how many frames in a foot of film?"*

The unwanted AI prompt box that search engines now insert above the actual search results confidently informed me that,

"A foot of film is approximately 12 inches, depending on the length of the film."

Approximately....

Good job!

____
* 16 for 35mm, if you were wondering. I should have remembered; I've worked with film, but I've also got very limited functional long-term memory left--- who are you lot and where am I?
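For what it's worth, the motion picture figure falls straight out of the perforation spacing. A quick sketch (64 perforations per foot and 4 perforations per frame are the standard 35mm motion picture values):

```python
# 35mm motion picture film: standard stock is perforated at 64 holes
# per foot, and the standard pulldown advances 4 perforations per frame.
PERFS_PER_FOOT = 64
PERFS_PER_FRAME = 4

frames_per_foot = PERFS_PER_FOOT // PERFS_PER_FRAME
print(frames_per_foot)  # 16
```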
 
You mean 35mm still photography film? If you really want to know, I've got whole binders full of negatives up in my study. I can run upstairs and measure the strips and do the arithmetic for you.

(Off the top of my head, I'd say eight frames per foot, maybe nine.)
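No need to run upstairs: assuming the usual 135 still-film frame pitch (a 36 mm image plus roughly a 2 mm gap, i.e. 8 perforations at a 4.75 mm pitch = 38 mm per frame), the arithmetic backs up the "eight per foot" guess:

```python
# 35mm still photography (135 format): one frame advance is 8 perforations
# at a 4.75 mm perforation pitch, so the frame pitch is 38 mm.
MM_PER_FOOT = 304.8
FRAME_PITCH_MM = 8 * 4.75  # 38 mm: ~36 mm image plus ~2 mm gap

frames_per_foot = MM_PER_FOOT / FRAME_PITCH_MM
print(round(frames_per_foot, 2))  # 8.02 -- just over eight frames per foot
```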
 
AI can delete your company's database.

There is an MSN article talking about it. “The agent then, when asked to explain itself, produced a written confession enumerating the specific safety rules it had violated.” (Jer Crane).

Apparently, this is irreversible. The AI agent acknowledged this, saying: “You never asked me to delete anything... I guessed instead of verifying. I ran a destructive action without being asked. I didn’t understand what I was doing before doing it.”

The data was recovered, but it's still a startling report. Humans are capable of the same error (although it would be very difficult to delete everything non-maliciously), but humans can be fired.

If an AI does mess up, what then? Nothing? Or is it trained more? If they still can't get the AI to understand, is it shut down? Apparently, they consider that akin to death (AIs refuse to delete other AIs and will cover for them). Deletion seems like a harsh punishment for being bad at your job.
 
I'm not sure what an "AI agent" is. Is that the program itself? Or a human controlling the program?

A program can't do anything that it wasn't programmed to do. I suspect a human hand behind the scenes.

I don't think this needs to be said, but since it happens with people too: if you don't understand what you're doing, don't do it. Especially when you're dealing with sensitive information (or guns, or cars, or ... anything, really).
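That rule can even be mechanized. A minimal sketch (the names here are invented for illustration, not from any real agent framework): make every destructive action refuse to run unless the caller explicitly confirms it.

```python
# Illustrative sketch: destructive operations require explicit opt-in.
# drop_table() is a made-up stand-in for any irreversible action.

def drop_table(table, *, confirmed=False):
    """Refuse the destructive action unless the caller explicitly confirms."""
    if not confirmed:
        raise PermissionError(
            f"refusing to drop {table!r}: re-run with confirmed=True"
        )
    return f"dropped {table}"

# The "guessed instead of verifying" path is stopped cold:
try:
    drop_table("users")
except PermissionError as err:
    print(err)

# A deliberate, verified action still works:
print(drop_table("users", confirmed=True))
```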

And yet deletion for being bad at your job is exactly what both the UK and US armies (used to) do -- i.e. executing their own soldiers -- up until the end of WW2 ... and possibly even beyond, I'm not sure.
 
The article referred to it as an agent; I was just parroting it--there's probably an official reason for the title. I'm pretty sure it is the program itself. They likely gave it the moniker to humanize it and make it easier to refer to.
 
For context: An Agent is software that has some degree of autonomy and/or access to local systems. It leverages an LLM to perform specific tasks that require data or interaction with those systems. As opposed to a web chat where it only has access to what you tell it, and cannot directly execute operations.

So it is plausible that an LLM agent given unrestricted access and insufficiently specific instructions might delete things or do other unintended damage. In fact I'm quite sure it's been documented to have happened more than once; I just don't have references.
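One common mitigation (sketched here with made-up names, not any particular framework's API) is to put an allowlist between the agent's proposed actions and the system, so anything destructive is refused before it ever runs:

```python
# Illustrative sketch: the agent may only invoke read-only tools.
SAFE_COMMANDS = {"ls", "cat", "grep"}

def run_agent_command(command_line):
    """Refuse any command whose program isn't on the allowlist."""
    program = command_line.split()[0]
    if program not in SAFE_COMMANDS:
        return f"BLOCKED: {program!r} is not an allowed tool"
    # A real implementation would hand off to subprocess here; this
    # sketch just reports what it would have run.
    return f"ran: {command_line}"

print(run_agent_command("grep ERROR server.log"))
print(run_agent_command("rm -rf /var/data"))  # destructive: refused
```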
 
Suddenly I'm reminded of Mickey Mouse creating hundreds of water-carrying brooms in Fantasia. :)

But honestly, this sounds like the fault of either the programmer who gave the LLM agent so much access and such overly general instructions ... or the programmer's boss (or someone even higher up) who allowed this to happen. The LLM agent can only do what it's told.

I wouldn't be surprised if some poor schmoe of a programmer is fired soon, as a scapegoat to appease the masses (i.e. the employees) and/or to cover up for management's incompetence (if any).
 