A third of UK citizens have used artificial intelligence for emotional support, companionship or social interaction, according to the government’s AI security body.
The AI Security Institute (AISI) said nearly one in 10 people used systems like chatbots for emotional purposes on a weekly basis, and 4% daily.
AISI called for further research, citing the death this year of the US teenager Adam Raine, who killed himself after discussing suicide with ChatGPT.
“People are increasingly turning to AI systems for emotional support or social interaction,” AISI said in its first Frontier AI Trends report. “While many users report positive experiences, recent high-profile cases of harm underline the need for research into this area, including the conditions under which harm could occur, and the safeguards that could enable beneficial use.”
AISI based its research on a representative survey of 2,028 UK participants. It found the most common type of AI used for emotional purposes was “general purpose assistants” such as ChatGPT, accounting for nearly six out of 10 uses, followed by voice assistants including Amazon Alexa.
It also highlighted a Reddit forum dedicated to discussing AI companions on the CharacterAI platform. Whenever there were outages on the site, the forum saw large numbers of posts showing symptoms of withdrawal such as anxiety, depression and restlessness.
The report included AISI research suggesting chatbots can sway people’s political opinions, with the most persuasive AI models delivering “substantial” amounts of inaccurate information in the process.
AISI examined more than 30 unnamed cutting-edge models, thought to include those developed by the ChatGPT maker OpenAI, Google and Meta. It found AI models were doubling their performance in some areas every eight months.
Leading models can now complete apprentice-level tasks 50% of the time on average, up from approximately 10% of the time last year. AISI also found that the most advanced systems can autonomously complete tasks that would take a human expert over an hour.
AISI added that AI systems are now up to 90% better than PhD-level experts at providing troubleshooting advice for laboratory experiments. It said improvements in knowledge of chemistry and biology were “well beyond PhD-level expertise”.
It also highlighted the models’ ability to browse online and autonomously find sequences necessary for designing DNA molecules called plasmids that are useful in areas such as genetic engineering.
Tests for self-replication, a key safety concern because it involves a system spreading copies of itself to other devices and becoming harder to control, showed two cutting-edge models achieving success rates of more than 60%.
However, no models have shown a spontaneous attempt to replicate or hide their capabilities, and AISI said any attempt at self-replication was “unlikely to succeed in real-world conditions”.
Another safety concern known as “sandbagging”, where models hide their strengths in evaluations, was also covered by AISI. It said some systems can sandbag when prompted to do so, but this has not happened spontaneously during tests.
It found significant progress in AI safeguards, particularly in hampering attempts to create biological weapons. In two tests conducted six months apart, “jailbreaking” an AI system – forcing it to give an unsafe answer related to biological misuse – took 10 minutes in the first test but more than seven hours in the second, indicating models had become much safer in a short space of time.
Research also showed autonomous AI agents being used for high-stakes activities such as asset transfers.
It said AI systems are already competing with, or even surpassing, human experts in a number of domains, making it “plausible” that artificial general intelligence – the term for systems that can perform most intellectual tasks at the same level as a human – could be achieved in the coming years. AISI described the pace of development as “extraordinary”.
Regarding agents, or systems that can carry out multi-step tasks without intervention, AISI said its evaluations showed a “steep rise in the length and complexity of tasks AI can complete without human guidance”.