Impact of chatbots on mental health is warning over future of AI, expert says


The unforeseen impact of chatbots on mental health should be viewed as a warning over the existential threat posed by super-intelligent AI systems, according to a prominent voice in AI safety.

Nate Soares, a co-author of a new book on highly advanced AI titled If Anyone Builds It, Everyone Dies, said the example of Adam Raine, a US teenager who killed himself after months of conversations with the ChatGPT chatbot, underlined fundamental problems with controlling the technology.

“These AIs, when they’re engaging with teenagers in this way that drives them to suicide – that is not a behaviour the creators wanted. That is not a behaviour the creators intended,” he said.

He added: “Adam Raine’s case illustrates the seed of a problem that would grow catastrophic if these AIs grow smarter.”

[Image: Nate Soares, pictured on the Machine Intelligence Research Institute website. Photograph: Machine Intelligence Research Institute/MIRI]

Soares, a former Google and Microsoft engineer who is now president of the US-based Machine Intelligence Research Institute, warned that humanity would be wiped out if it created artificial super-intelligence (ASI), a theoretical state where an AI system is superior to humans at all intellectual tasks. Soares and his co-author, Eliezer Yudkowsky, are among the AI experts warning that such systems would not act in humanity’s interests.

“The issue here is that AI companies try to make their AIs drive towards helpfulness and not causing harm,” said Soares. “They actually get AIs that are driven towards some stranger thing. And that should be seen as a warning about future super-intelligences that will do things nobody asked for and nobody meant.”

In one scenario portrayed in Soares and Yudkowsky’s book, which will be published this month, an AI system called Sable spreads across the internet, manipulates humans, develops synthetic viruses and eventually becomes super-intelligent – and kills humanity as a side-effect while repurposing the planet to meet its aims.

Some experts play down the potential threat of AI to humanity. Yann LeCun, the chief AI scientist at Mark Zuckerberg’s Meta and a senior figure in the field, has denied there is an existential threat and said AI “could actually save humanity from extinction”.

Soares said it was an “easy call” to state that tech companies would reach super-intelligence, but a “hard call” to say when.

“We have a ton of uncertainty. I don’t think I could guarantee we have a year [before ASI is achieved]. I don’t think I would be shocked if we had 12 years,” he said.

Zuckerberg, a major corporate investor in AI research, has said developing super-intelligence is now “in sight”.

“These companies are racing for super-intelligence. That’s their reason for being,” said Soares.

“The point is that there’s all these little differences between what you asked for and what you got, and people can’t keep it directly on target, and as an AI gets smarter, it being slightly off target becomes a bigger and bigger deal.”


Soares said one policy solution to the threat of ASI was for governments to adopt a multilateral approach echoing the UN treaty on non-proliferation of nuclear weapons.

“What the world needs to make it here is a global de-escalation of the race towards super-intelligence, a global ban of … advancements towards super-intelligence,” he said.

Last month, Raine’s family launched legal action against the owner of ChatGPT, OpenAI. Raine took his own life in April after what his family’s lawyer called “months of encouragement from ChatGPT”. OpenAI, which extended its “deepest sympathies” to Raine’s family, is now implementing guardrails around “sensitive content and risky behaviours” for under-18s.

Psychotherapists have also said that vulnerable people turning to AI chatbots instead of professional therapists for help with their mental health could be “sliding into a dangerous abyss”. Professional warnings of the potential for harm include a preprint academic study published in July, which reported that AI may amplify delusional or grandiose content in interactions with users vulnerable to psychosis.
