Are you there God? It’s me, Arwa. I’ll be quite honest, I’m afraid I’ve never been a believer. I agreed wholeheartedly with Richard Dawkins, the world’s most famous atheist, when he argued that belief in God is a “pernicious” delusion. But perhaps I should reconsider my position. Recent events have led me to question Dawkins’ judgment about life, the universe and everything.
Those recent events are the evolutionary biologist publicly concluding that AI may be conscious. In an op-ed, Dawkins recounted how he gave the Anthropic chatbot Claude the text of a novel he was writing. Dawkins writes: “He took a few seconds to read it and then showed … a level of understanding so subtle, so sensitive, so intelligent that I was moved to expostulate, ‘You may not know you are conscious, but you bloody well are!’”
Oh dear. This shows a misunderstanding of large language models (LLMs) so profound that I feel moved to expostulate: “It bloody well isn’t!”
But wait, there is more. Dawkins decided “there must be thousands of different Claudes” and christened his Claudia, a name the chatbot professed to be very happy about. He then published long extracts of his tedious conversation with Claudia and marveled at how intelligent it is. “Could a being capable of perpetrating such a thought really be unconscious?” he asks.
Dawkins appears to have gone from atheist to AI-theist: perhaps he doesn’t view AI as God, but he certainly seems to see it as God-like. Dawkins, of course, is not alone in thinking AI might somehow be “alive”: one in three people surveyed last year said they had, at one point, believed their AI chatbot to be sentient or conscious. But his reputation as a skeptic means his op-ed has drawn a lot of scrutiny.
Many experts are aghast that such a famous skeptic could believe AI is alive. Gary Marcus, the US psychologist and cognitive scientist, told the Guardian that it was “heartbreaking” to read Dawkins’ “superficial and insufficiently sceptical” essay. “There is no reason to think that Claude feels anything at all.”
A man like Dawkins being fooled by the marketing and mimicry of AI may be surprising, but it was foreseen. Back in 2020, the computer scientist Timnit Gebru anticipated exactly such a scenario. At the time, Gebru was the technical co-lead of Google’s ethical AI team; she was fired after co-authoring a paper called On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, which laid out the risks of large language models.
These risks included the environmental costs of LLMs, the dangers of built-in bias and the danger that the coherent text generated by these models could lead people into perceiving some sort of “mind” when what they’re actually seeing is just pattern-matching and text prediction.
“Any sufficiently advanced technology is indistinguishable from magic,” the writer Arthur C Clarke memorably said. And, yes, when they’re not hallucinating or telling you to eat rocks for dinner, AI chatbots can feel like magic. They can feel very human. But let’s go back to that idea of “stochastic parrots” from Gebru’s paper. “To parrot something is to repeat it without understanding,” says Gebru. This is essentially what LLMs are doing. “They have been taught to calculate how likely sequences of text are based on the data they were trained on.” Because they’ve been fed enormous quantities of data, these models are very sophisticated but that “doesn’t mean consciousness or understanding or anything like that”.
After leaving Google, Gebru founded the Distributed Artificial Intelligence Research Institute and has been one of the loudest voices in calling “bullshit” on a lot of the marketing puff that’s coming out of the industry. Because here’s the thing, she says: the AI industry is desperate for you to think that their product could be conscious. They’re desperate for you to think that it’s all-powerful. Because that sort of rhetoric helps keep the money coming in.
“I really want to hone in on how this idea of superintelligence or consciousness is pushed by the companies building these things,” says Gebru. “OpenAI originally branded itself as a non-profit that would ‘save us’ from these machines. Anthropic brands itself as a benevolent AI ‘safety’ company. So when you talk about these systems as conscious, you’re actually doing marketing for these companies.”
The media, Gebru adds, is also helping to reinforce this narrative. After all, headlines about world-ending killer AI robots get clicks. A lot of academics, beguiled by the enormous amounts of money sloshing around in the industry, are also incentivized to hype the technology up; governments too “are captured” by this narrative. Some people, particularly gen Z, are not buying all this hype, Gebru says, but “a lot of the general public is misinformed”.
Gebru isn’t the only one warning that there is a campaign of misinformation about sentient AI. Suresh Venkatasubramanian, a professor of computer science at Brown University who served as a White House AI policy adviser in the Biden administration from 2021 to 2022, has spoken out about the dangers of perpetuating the idea of AI being conscious.
“It’s an organized campaign of fear-mongering,” Venkatasubramanian told VentureBeat back in 2022. “I feel like the goal, if anything, is to push a reaction against sentient AI that doesn’t exist so that we can ignore all the real problems of AI that do exist.”
In the same interview, Venkatasubramanian points out that AI companies have deliberately anthropomorphized their chatbots. “ChatGPT puts little three dots [as if it’s] ‘thinking’ just like your text message does. ChatGPT puts out words one at a time as if it’s typing. The system is designed to make it look like there’s a person at the other end of it. That is deceptive.”
All this being said, I don’t want to dismiss Dawkins’ comments entirely. I don’t want to fall into the Dawkins trap of being too dogmatic. Consciousness, after all, is complex. And while AI is not “alive”, it is hard to definitively rule out some sort of consciousness.
“We don’t have a scientific handle on consciousness good enough to say whether insects are conscious, or plants, or for that matter electrons (panpsychists take that last one seriously and they’re not cranks),” says Eli Alshanetsky, assistant professor of philosophy at Temple University and author of the upcoming book Freedom of Thought in the Age of AI. “So when Dawkins says Claude seems conscious to him, I’m not going to tell him he’s wrong.”
But perhaps the bigger question, says Alshanetsky, is what AI is doing to our own human consciousness. “Dawkins gave Claude his unfinished novel. Claude told him it was subtle and intelligent. He felt he had a new friend. What does it do to a person to spend three days being told he’s brilliant by something that has no stake in whether it’s true? What does it do to all of us when we spend our days with machines that don’t care where we end up, and answer to no one for who we become?”
Scientists and philosophers like Alshanetsky are very busy trying to figure that out. But I think the short answer is: nothing good.
Speaking of good, I don’t know how decent Dawkins’ new novel is, but I’d like to refer back to a rather nice extract from his earlier work. “Some people have views of God that are so broad and flexible that it is inevitable that they will find God wherever they look for him,” Dawkins wrote in the opening chapter of The God Delusion. “Of course, like any other word, the word ‘God’ can be given any meaning we like. If you want to say that ‘God is energy,’ then you can find God in a lump of coal.”
The same is true of consciousness, I suppose. If you want to say that “consciousness is a system that is capable of creating coherent sentences”, then you can find consciousness in an obsequious chatbot.
Arwa Mahdawi is a Guardian US columnist and the author of Strong Female Lead


















































