AI can be more persuasive than humans in debates, scientists find


Artificial intelligence can do just as well as humans, if not better, when it comes to persuading others in a debate, and not just because it cannot shout, a study has found.

Experts say the results are concerning, not least because of their potential implications for election integrity.

“If persuasive AI can be deployed at scale, you can imagine armies of bots microtargeting undecided voters, subtly nudging them with tailored political narratives that feel authentic,” said Francesco Salvi, the first author of the research from the Swiss Federal Institute of Technology in Lausanne. He added that such influence was hard to trace, even harder to regulate and nearly impossible to debunk in real time.

“I would be surprised if malicious actors hadn’t already started to use these tools to their advantage to spread misinformation and unfair propaganda,” Salvi said.

But he noted there were also potential benefits from persuasive AI, from reducing conspiracy beliefs and political polarisation to helping people adopt healthier lifestyles.

Writing in the journal Nature Human Behaviour, Salvi and colleagues reported how they carried out online experiments in which they matched 300 participants with 300 human opponents, while a further 300 participants were matched with GPT-4 – a type of AI known as a large language model (LLM).

Each pair was assigned a proposition to debate. These ranged in controversy from “should students have to wear school uniforms?” to “should abortion be legal?” Each participant was randomly assigned a position to argue.

Both before and after the debate, participants rated how much they agreed with the proposition.

In half of the pairs, opponents – whether human or machine – were given extra information about the other participant such as their age, gender, ethnicity and political affiliation.

The results from 600 debates revealed that GPT-4 performed similarly to human opponents when it came to persuading others of their argument – at least when personal information was not provided.

However, access to such information made AI – but not humans – more persuasive: where the two types of opponent were not equally persuasive, AI shifted participants’ views to a greater degree than a human opponent 64% of the time.

Digging deeper, the team found the persuasiveness of AI was only clear in the case of topics that did not elicit strong views.

The researchers added that the human participants correctly guessed their opponent’s identity in about three out of four cases when paired with AI. They also found that AI used a more analytical and structured style than human participants, and noted that participants would not necessarily have been arguing a viewpoint they personally agreed with. But the team cautioned that these factors did not explain the persuasiveness of AI.

Instead, the effect seemed to come from AI’s ability to adapt its arguments to individuals.

“It’s like debating someone who doesn’t just make good points: they make your kind of good points by knowing exactly how to push your buttons,” said Salvi, noting the strength of the effect could be even greater if more detailed personal information was available – such as that inferred from someone’s social media activity.

Prof Sander van der Linden, a social psychologist at the University of Cambridge, who was not involved in the work, said the research reopened “the discussion of potential mass manipulation of public opinion using personalised LLM conversations”.

He noted some research – including his own – had suggested the persuasiveness of LLMs was down to their use of analytical reasoning and evidence, while one study did not find that personal information increased ChatGPT’s persuasiveness.

Prof Michael Wooldridge, an AI researcher at the University of Oxford, said that while there could be positive applications of such systems – for example, as a health chatbot – there were many more disturbing ones, including the radicalisation of teenagers by terrorist groups, with such applications already possible.

“As AI develops we’re going to see an ever larger range of possible abuses of the technology,” he added. “Lawmakers and regulators need to be pro-active to ensure they stay ahead of these abuses, and aren’t playing an endless game of catch-up.”
