Can AI, Artificial Intelligence, be regulated sensibly to ensure we get the benefits of its development while controlling for the risks and threats? It’s one of the toughest questions facing policymakers right now.
Next week is a big week on the AI front, with a two-day international summit taking place at Bletchley Park, the scene of Britain’s wartime code-cracking operation.
Rishi Sunak kicked off the government’s push on the subject today by announcing that Britain will establish the world’s first AI safety institute. Speaking in London, Sunak confirmed that this new institute would “carefully evaluate and test new types of AI so we can understand what new models are capable of”.
It is a vital task, Sunak insisted, because, if left unchecked, the threat posed by AI could include terrorist groups using the technology to build bioweapons. And then there is the even more existential threat, he added: the kind of AI sometimes referred to as “super intelligence” could outsmart us to such a degree “that humanity could lose control of AI completely”.
Yet the PM was also clear: “The UK’s answer is not to rush to regulate.” First, we need to undertake research so that we can fully grasp the risks. “How can we write laws that make sense for something that we don’t yet fully understand?” he said.
Sunak was speaking ahead of the international AI Safety Summit he is hosting next week at Bletchley Park, which will attempt to find a consensus on how to approach frontier AI.
World leaders and senior Silicon Valley executives are expected to jet in to attend the event, including the chief executives of the world’s three major artificial intelligence labs: OpenAI, Google’s DeepMind and Anthropic.
Fears around AI have intensified over the past year as – in a Frankenstein-esque turn of events – even those involved in its creation have started to voice fears over AI’s unforeseen and threatening potential.
Earlier this week, DeepMind’s chief, Demis Hassabis, declared that the world must act immediately to tackle the technology’s dangers and called for the creation of an IPCC-style panel on AI.
Back in May, Hassabis was one of the signatories of an open letter from AI experts in which they called for greater regulation and warned that the threat of extinction from AI should be treated as a societal risk comparable to pandemics or nuclear weapons.
The fact that developers of AI don’t even know what their own models are capable of is the very reason we need an AI safety institute, Sunak claimed today. “We shouldn’t rely on them marking their own homework… only governments can properly assess the risks to national security,” he insisted.
Number 10 says Sunak is determined to make Britain a global leader on this front, citing his ambition today to make the UK the safest place in the world to develop the technology.
But it is America – the global tech leader – where much of the action is. Congress is homing in on early attempts at regulation and lawmaking. Senate Majority Leader Chuck Schumer this week held the second of his AI Insight Forums, a closed-door session featuring tech leaders, policymakers and academics, discussing how to make AI safe.
One of the attendees criticised the “crazy, reckless” rush towards super-intelligence. Others are reported to have demanded urgent regulation.
The White House is expected to announce its long-awaited executive order on Monday, laying down rules for the use of AI in government.
Write to us with your comments to be considered for publication at letters@reaction.life