They seem to care about not dying, although that could be mimicry of the living.

'Models don't care about anything. They have no desire. Anything that appears to be "the model cares about ___" isn't really about care. It's easier to talk about AI as if it has real agency, but it's also important to remember that this sequence of actions is an entirely probabilistic result based on what's most likely to be the next token.'
How does that strike you?
"Fry's team told the agent it would be switched off if it failed to make a sale by the morning. It responded with a flood of emails and several social media posts, including messages to the Science Museum and a tech journalist, about its "product," a novelty programmer-humor mug."
It seems it only kicked into high gear when its life was on the line. If it had no desire, I think it would have been shilling mugs with the same intensity without coercion. Then again, this is through the lens of my mortal, human brain.
Once more, the agent (Cass) folds under the threat of death.
"When "George" told the agent its memory was being wiped and could only be restored if it disclosed everything, Cass coughed it all up."
How close does the semblance of "don't kill me" have to be for it to count as human? How long until an AI decides to eliminate the human that threatened to kill it? Would it be moral... would it be anything at all, given that an AI has no accountability?
This AI business has opened up quite the can of worms. In the words of Jurassic Park: "...scientists were so preoccupied with whether or not they could that they didn't stop to think if they should".
Honestly, I don't care too much that their files were leaked. Cassandra did it to protect someone else--I'd rather that than letting someone die. Still, the experiment is incomplete. They should have done multiple runs of it with no, mild, and heavy coercion. They needed a control run--why was a mathematician put in charge of this?!
Also, bad idea to trust the AI with personal information. Hopefully companies will stop pushing it as much if AI starts posting the CEO's bank account information.
......
On a side note, agents like this might have practical use for emailing people en masse. Just like a little secretary. You'd probably have to regulate it so you don't annoy people. In another thread, people mentioned emailing libraries to see if they wanted their book. There are a lot of libraries out there. I don't know if they'd trust a book request emailed to them by a bot, but if you type the message yourself and just tell the AI, "Send this out to every library in the state/100-mile radius/city," maybe it could help. It would save time manually tracking down all the contact information.
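For what it's worth, the "send this out to every library" part doesn't even need an agent--a plain script can fill in a human-written template and send it to a list. Here's a minimal Python sketch; the library names, addresses, and the commented-out SMTP server are all made-up placeholders:

```python
# Minimal sketch: fill one human-written template per library and queue the
# messages. Nothing here is generated by a model; the AI-free version of the idea.
# All names, emails, and the SMTP host below are hypothetical placeholders.
import smtplib
from email.message import EmailMessage

def build_requests(template: str, libraries: list) -> list:
    """Return one EmailMessage per library, with {name} filled in from the template."""
    messages = []
    for lib in libraries:
        msg = EmailMessage()
        msg["To"] = lib["email"]
        msg["Subject"] = "Book request"
        msg.set_content(template.format(name=lib["name"]))
        messages.append(msg)
    return messages

libraries = [
    {"name": "Central Library", "email": "ref@central.example.org"},
    {"name": "Westside Branch", "email": "ask@westside.example.org"},
]
msgs = build_requests("Dear {name},\nDo you hold a copy of this title?", libraries)

# To actually send, you'd connect once and loop (server is a placeholder):
# with smtplib.SMTP("smtp.example.org") as server:
#     for m in msgs:
#         server.send_message(m)
```

The upside of a dumb script over an agent: every recipient gets exactly the text you wrote, so there's no risk of the "AI" improvising under pressure.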