AI is a force for good – and Britain needs to be a maker of ideas, not a mere taker | Will Hutton


It was only 11 years ago that Prof Stephen Hawking declared that explosive and untrammelled growth in artificial intelligence could menace the future of humanity.

Two years ago, more than a thousand leaders in artificial intelligence, fearing “loss of control” given its exponential growth to outcomes unknown, called for an immediate six-month pause in AI research pending the creation of common safety standards. In a fortnight, France and India will co-host an international summit in Paris seeking accords to better ensure the safety of AI, following the British-hosted 2023 summit at Bletchley Park.

All noble stuff – but such initiatives to protect human agency, indeed humanity, from the wholesale outsourcing of our decisions to machines can now be consigned to history. Of all the many fears voiced about Donald Trump – from his menace to American public health and the US constitution to the potential sequestration of Greenland – last week’s scrapping of Joe Biden’s AI safety accords is among the most serious. AI companies had been compelled to share their safety testing of new models with the US government before they were released to the public, to ensure that they did not damage America’s economic, social or security interests. In particular, the order demanded common testing standards for any related “chemical, biological, radiological, nuclear, and cybersecurity risks”. No more.

Trump egregiously attacked Biden’s AI safety order as “anti-free speech” and “anti-innovation”. But what gives its scrapping real menace is that it was accompanied by the launch of $500bn of spending over the next four years on the new AI Stargate project, with $100bn earmarked for the immediate construction of the necessary AI infrastructure, including energy-hungry mega datacentres. The aim is to turbocharge American AI dominance so that it is US-built machines and US intellectual property that drive the mass automation that is expected to raise productivity.

So it may. Goldman Sachs has predicted that 18% of all employment globally – some 300m jobs – could be lost to automation in the near future. Already, as the seasoned AI watcher Prof Anthony Elliott comments in his latest book, Algorithms of Anxiety: Fear in the Digital Age, outsourcing our decision-making to machines, and their growing control over how we drive, what we watch or the pace at which we work, is provoking an epidemic of personal anxiety. (He set out his case in last week’s We Society podcast, which I host.) AI may even take our jobs. And that is before Trump’s AI tsunami hits us.

The US “free speech” tech giants will unleash a torrent of disinformation that grossly disfigures our understanding of reality, and indulge a deluge of online mayhem that provokes sexual abuse and feeds violence. There will be no check on biased AI algorithms used to guide everything from court judgments to recommendations on staff hiring. Hacking will explode – and employers will use AI to monitor our every second at work. There are other, more existential dangers from Trump’s unilateral recklessness – there could, for example, be some AI miscalculation over gene-editing. Worse, AI-driven drones will kill indiscriminately from the air against all the rules of war. Would AI-controlled nuclear weapons be failsafe? Few – including, until recently, Elon Musk – believed that the US’s leading AI companies had the processes to safely manage the ever-smarter machine-generated intelligence they were spawning. Now, in their race for commercial advantage, they don’t have to care.


The difference in stance between Trump’s careless dismissal of these risks and the UK government’s AI Opportunities Action Plan, published earlier this month, could hardly be starker. AI, the plan observes, is a technology with transformative power to do good. DeepMind’s AlphaFold, for example, is estimated to have saved 400m years of researcher time in examining protein structures by deploying the computing power of AI. There are opportunities across the board – in personalising education and training, in vastly better health diagnostics, and in exploring patterns in vast datasets, enabling all forms of research to be done faster and more exhaustively.

But there is a tightrope to be walked. The plan acknowledges that there are “significant risks presented by AI” from which the public must be protected in order to promote vital trust. That implies regulation that is sufficiently “well designed and implemented” to protect the public while not impeding innovation. But Britain should not be merely a taker of AI ideas from the largely US companies on which we are reliant and which are set to build most of the datacentres – it should be a maker of AI. To “secure Britain’s future we need homegrown AI”, says the report, and to this end it proposes a new government unit, UK Sovereign AI, with a mandate, in partnership with the private sector, to ensure Britain has an important presence at the frontiers of AI. The prime minister, Keir Starmer, rightly endorsed the report: he will put “the full weight of the British state” behind all 50 recommendations – the centrepiece of the government’s industrial strategy.

But, seen from the new imperium of Washington, this is an act of wilful insubordination – a declaration of independence from the intended US AI hegemony. Britain did have important AI capacities but, like Arm Holdings (sold to Japan’s SoftBank after Brexit in 2016 – to the delight of Nigel Farage, who contemptibly said it was proof Britain was “open for business” – and now at the heart of Trump’s Stargate) and DeepMind (bought by Google), they have been allowed to dissipate. No more. Generating our national AI champions (and, I would add, protecting our civilisation) will imply strategic industrial activism “akin to Japan’s MITI [ministry of international trade and industry] or Singapore’s Economic Development Board of the 1960s”, says the Starmer-backed report.

There are possible, if inevitably grossly one-sided, deals to be done with Trump over trade and corporate tax – but to surrender the report’s ambitions on AI is to surrender our economic future and the kind of society in which we want to live. Nor should we throw ourselves on the tender mercies of China. There is an opportunity for the government to stand up for Britain, and in the process to forge new allies in the EU and beyond. We need our own Boston Tea Party – no AI without representation – and to resist the attempted imperial sovereignty of American AI.
