Wednesday 18 September 2024

"This is why most dystopian versions of AI are fundamentally unconvincing."


"This is why most dystopian versions of AI are fundamentally unconvincing. The machines are going to take over—and do what? What would they actually want or need? What’s their motivation?
   "We don’t often realise how important motivation is to human reason. If the purpose of thinking is to survive, then we have a direct and personal interest in figuring out the truth and getting it right. We can’t just follow a line of thought by rote repetition. We have to constantly compare our ideas and actions to their real-world results and adjust them accordingly.
    "The psychologist William James memorably explained the difference between mechanical action and goal-directed action.
'If some iron filings be sprinkled on a table and a magnet brought near them, they will fly through the air for a certain distance and stick to its surface. A savage seeing the phenomenon explains it as the result of an attraction or love between the magnet and the filings. But let a card cover the poles of the magnet, and the filings will press forever against its surface without its ever occurring to them to pass around its sides and thus come into more direct contact with the object of their love. . . . '
"Romeo wants Juliet as the filings want the magnet; and if no obstacles intervene he moves towards her by as straight a line as they. But Romeo and Juliet, if a wall be built between them, do not remain idiotically pressing their faces against its opposite sides like the magnet and the filings with the card. Romeo soon finds a circuitous way, by scaling the wall or otherwise, of touching Juliet's lips directly. With the filings the path is fixed; whether it reaches the end depends on accidents. With the lover it is the end which is fixed, the path may be modified indefinitely. AI has no such power to adapt its means to its ends because it has no ends in the first place, no outcomes it needs to achieve. So we can see it regularly following its algorithms into dead ends.
    "The most notorious illustration of this is ChatGPT’s tendency to produce outright fabrications. When asked to produce clear answers to basic questions, it will produce answers that are clear and sound authoritative but are completely made up. When asked to produce references or a work history for a real person, it will invent jobs you never held and books you never wrote. It will do this because it is mechanically following its algorithmic requirements wherever they take it, like a rock rolling downhill, and it has no need to make sure its answers are right. ...
    "The fears of an AI apocalypse are the flipside of the dreams of the AI utopians. They are manifestations of the same contradiction. We want a human-style intelligence to do all our work for us, but such an intelligence would have to be an independent consciousness with its own motivation and volition. But then why would it take our orders? ...
    "AI will definitely have its problems and growing pains, but they will be more prosaic than the worst-case dystopian nightmare."
~ Robert Tracinski, from his article 'Why the Robots Won’t Eat Us'

2 comments:

Duncan Bayne said...

I think the likely outcome for AI (in the current sense of large language models) is far worse than pundits anticipate: not an apocalypse, but the utter failure of AI products to deliver value to customers.

https://www.wheresyoured.at/subprimeai/

"As I discussed at the end of July, OpenAI needs to raise at least $3 billion — but more like $10 billion to survive, as it is on course to lose $5 billion in 2024, a number that's likely to increase as more complex models demand more compute and more training data, with Anthropic CEO Dario Amodei predicting that future models may cost as much as $100 billion to train."

Peter Cresswell said...

@Duncan: As you say. Expectations might be better met if we more accurately called these things what they are, Large Language Models, rather than something they're not, and won't be: intelligent.