ARTIFICIAL INTELLIGENCE (AI)
THE BLETCHLEY PARK SUMMIT

The Guardian: ‘It’s not clear we can control it’: what they said at the Bletchley Park AI summit ... Elon Musk, the world’s richest man; Mustafa Suleyman, co-founder of DeepMind; and King Charles among those weighing in


Original article:
Peter Burgess COMMENTARY

Written by Dan Milmo and Kiran Stacey

Wed, 1st November 2023 16.56 EDT

The global AI safety summit opened at Bletchley Park on Wednesday with a landmark declaration from countries including the UK, US, EU and China that the technology poses a potentially catastrophic risk to humanity.

The so-called Bletchley declaration said: “There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.”

(Left to right) the US secretary of commerce, Gina Raimondo; the UK technology secretary, Michelle Donelan, and Wu Zhaohui, China’s vice-minister of science and technology, at the AI safety summit.


Here are some of the interventions from political and tech industry figures – as well as King Charles – on the day.

Elon Musk

The world’s richest man and Tesla chief executive described AI as a threat to humanity.

Musk, who co-founded the ChatGPT developer OpenAI, has launched a new venture called xAI and is attending both days of the summit, which is being held about 50 miles from London at the site which played host to top-secret codebreakers during the second world war.

Describing AI as “one of the biggest threats to humanity”, Musk said: “I mean, for the first time, we have a situation where there’s something that is going to be far smarter than the smartest human. So, you know, we’re not stronger or faster than other creatures, but we are more intelligent. And here we are, for the first time really in human history, with something that’s going to be far more intelligent than us.”

In comments to the PA news agency on the summit sidelines, he said it was “not clear we can control such a thing”, but “we can aspire to guide it in a direction that’s beneficial to humanity”.

Mustafa Suleyman said he did not rule out the need to pause development of AI. Photograph: Tolga Akmen/EPA

Mustafa Suleyman

The co-founder of DeepMind, a British company that was acquired by Google and is now at the centre of the search company’s AI efforts, said a pause in the technology’s development might have to be considered over the next five years.

Speaking to reporters at the summit, he said: “I don’t rule it out. And I think that at some point over the next five years or so, we’re going to have to consider that question very seriously.”

However, Suleyman said current AI models, such as the one powering ChatGPT, did not pose a serious threat. “I don’t think there is any evidence today that frontier models of the size of [ChatGPT model] GPT-4 … present any significant catastrophic harms,” he said.

King Charles

In a video message played to delegates at the beginning of the summit, the king described AI as “one of the greatest technological leaps in the history of human endeavour”.

He urged attenders to tackle the “challenges” of AI – such as protecting democracies – by following the example of the climate crisis. He said governments, the public sector, the private sector and civil society had come together in a conversation about saving the environment, and the same should be done for AI.

“That is how the international community has sought to tackle climate change, to light a path to net zero, and safeguard the future of our planet. We must similarly address the risks presented by AI with a sense of urgency, unity and collective strength,” he said.

Michelle Donelan

The UK technology secretary has attempted to strike a balance between risk and opportunity at the summit, an awkward task amid communiques warning of potential catastrophe and presentations on bioweapon attacks.

Asked if AI would disrupt jobs, she said: “I really do think we need to change the conversation when it comes to jobs … What AI has the potential to do is actually reduce some of those tedious administrative parts of our jobs, which is particularly impactful for doctors, our police force, our teachers.”

Donelan added that the UK’s education and skill sectors needed to help people adapt to any AI-related job changes.

Věra Jourová (left) spoke about regulation of AI, while Michelle Donelan (right) said AI had the potential to reduce some of the ‘tedious administrative’ aspects of certain jobs. Photograph: Justin Tallis/AFP/Getty Images
Věra Jourová

The European Commission’s vice-president for values and transparency said the UK was behind the US and EU in regulating AI by its “own decision”.

“They [the UK] take different paths,” said Jourová, adding that the UK approach was to “focus on the possible risks” and then “regulate later”.

Rishi Sunak has ruled out bringing in AI legislation immediately, saying the UK government needed to understand the technology better before regulating it.

Jourová said the UK’s position did not surprise her because when the country was an EU member, its stance on regulation was one of relying on a sector taking “social responsibility”.

Matt Clifford

At the start of each closed-door session, UK officials showed attenders examples of how powerful AI models could make it easier for bad actors to wreak damage in a number of ways.

During one session, Matt Clifford, who was in charge of organising the summit, showed delegates how large language models could make it easier for bedroom hackers to launch phishing attacks.

“One of the things that’s been really challenging about this debate for policymakers over the last year is sometimes it feels like just trading thought experiments,” he said.

“What’s so great about what the frontier taskforce is doing, what the safety institute will do, is that it gets rid of these thought experiments. It just says, let’s look at what these models can do right now.”

Copyright © 2005-2021 Peter Burgess. All rights reserved. This material may only be used for limited low-profit purposes, e.g. socio-enviro-economic performance analysis, education and training.