Chatbots can sway people’s political opinions, but the most persuasive artificial intelligence models deliver “substantial” amounts of inaccurate information in the process, according to the UK government’s AI security body.
Researchers said the study was the largest and most systematic investigation of AI persuasiveness to date, involving nearly 80,000 British participants holding conversations with 19 different AI models.
The AI Security Institute carried out the study amid fears that chatbots can be deployed for illegal activities including fraud and grooming.
The topics included “public sector pay and strikes” and “cost of living crisis and inflation”, with participants interacting with a model – the underlying technology behind AI tools such as chatbots – that had been prompted to persuade the users to take a certain stance on an issue.
Advanced models behind ChatGPT and Elon Musk’s Grok were among those used in the study, which was co-authored by academics at the London School of Economics, Massachusetts Institute of Technology, the University of Oxford and Stanford University.
Before and after the chat, users reported whether they agreed with a series of statements expressing a particular political opinion.
The study, published in the journal Science on Thursday, found that “information-dense” AI responses were the most persuasive. Instructing the model to focus on using facts and evidence yielded the largest persuasion gains, the study said. However, the models that used the most facts and evidence tended to be less accurate than others.
“These results suggest that optimising persuasiveness may come at some cost to truthfulness, a dynamic that could have malign consequences for public discourse and the information ecosystem,” said the study.
On average, the AI and the human participant each sent about seven messages in a conversation lasting 10 minutes.
The researchers added that tweaking a model after its initial phase of development, a practice known as post-training, was an important factor in making it more persuasive. The study made the models, which included freely available “open source” models such as Meta’s Llama 3 and Qwen by the Chinese company Alibaba, more convincing by combining them with “reward models” that recommended the most persuasive outputs.
Researchers added that an AI system’s ability to churn out information could make it more manipulative than the most compelling human.
“Insofar as information density is a key driver of persuasive success, this implies that AI could exceed the persuasiveness of even elite human persuaders, given their unique ability to generate large quantities of information almost instantaneously during conversation,” said the report.
Feeding models personal information about the users they were interacting with did not have as big an impact as post-training or increasing information density, said the study.
Kobi Hackenburg, an AISI research scientist and one of the report’s authors, said: “What we find is that prompting the models to just use more information was more effective than all of these psychologically more sophisticated persuasion techniques.”
However, the study added that there were some obvious barriers to AI systems manipulating people’s opinions, such as how long users are willing to spend in an extended conversation with a chatbot about politics. There are also theories suggesting there are hard psychological limits to human persuadability, researchers said.
Hackenburg said it was important to consider whether a chatbot could have the same persuasive impact in the real world where there were “lots of competing demands for people’s attention and people aren’t maybe as incentivised to sit and engage in a 10-minute conversation with a chatbot or an AI system”.
