UK AI summit: US-led AI pledge threatens to overshadow Bletchley Park
Nations are vying to see who can sign up the most countries to their AI safety agreements, with a surprise US announcement threatening to overshadow the UK’s declaration
By Chris Stokel-Walker
2 November 2023
US vice president Kamala Harris spoke about artificial intelligence at the US embassy in London on 1 November
Maja Smiejkowska/Reuters
This week, UK prime minister Rishi Sunak is hosting a group of more than 100 representatives from the worlds of business and politics to discuss the potential and pitfalls of artificial intelligence.
The AI Safety Summit, held at Bletchley Park, UK, began on 1 November and aims to agree a set of global principles for developing and deploying “frontier AI models” – the terminology favoured by Sunak and key figures in the AI industry for powerful models that don’t yet exist, but may be built very soon.
While the Bletchley Park event is the focal point, there is a wider week of fringe events being held in the UK, alongside a raft of UK government announcements on AI. Here are the latest developments.
Participants sign agreement
The key outcome of the first day of the AI Safety Summit yesterday was the Bletchley Declaration, which saw 28 countries and the European Union agree to hold further meetings to discuss the risks of AI. The UK government was keen to tout the agreement as a massive success, while impartial observers were more muted about the scale of its achievement.
While the politicians on stage wanted to highlight the successes, a good proportion of those at the summit felt more needed to be done. At 4pm yesterday, just before the closing plenary summing up the first day’s panels was due to begin, nearly a dozen civil society groups present at the conference released a communiqué of their own.
The letter urged those in attendance to consider a broader range of risks to humanity beyond the fear that AI might become sentient or be misused by terrorists or criminals. “The call for regulation comes from those who believe AI’s harm to democracy, civil rights, safety and consumer rights is urgent now, not in a distant future,” says Marietje Schaake at Stanford University in California, who was one of the signatories. Schaake was also keen to point out that the discussion “process should be independent and not an opportunity for capture by companies”.