Date: 2025-02-05
ARTIFICIAL INTELLIGENCE (AI)
THE BLETCHLEY PARK SUMMIT Global Leaders Warn A.I. Could Cause ‘Catastrophic’ Harm ... At a U.K. summit, 28 governments, including China and the U.S., signed a declaration agreeing to cooperate on evaluating the risks of artificial intelligence.
NYT-Bletchley-Park-AI-Summit-25798-01-a.jpg
The mansion house at Bletchley Park, north of London. The country estate, home of Britain’s code-breaking efforts in World War II, is the site of a two-day summit focused on A.I. safety.
Credit: Matt Dunham/Associated Press
Original article: https://www.nytimes.com/2023/11/01/world/europe/uk-ai-summit-sunak.html
Peter Burgess COMMENTARY
Global Leaders Warn A.I. Could Cause ‘Catastrophic’ Harm
By Adam Satariano and Megan Specia ... Adam Satariano reported from Bletchley Park, England, and Megan Specia from London. Nov. 1, 2023

In 1950, Alan Turing, the gifted British mathematician and code-breaker, published an academic paper. His aim, he wrote, was to consider the question, “Can machines think?”

The answer runs to almost 12,000 words. But it ends succinctly: “We can only see a short distance ahead,” Mr. Turing wrote, “but we can see plenty there that needs to be done.”

More than seven decades on, that sentiment sums up the mood of many policymakers, researchers and tech leaders attending Britain’s A.I. Safety Summit on Wednesday, which Prime Minister Rishi Sunak hopes will position the country as a leader in the global race to harness and regulate artificial intelligence.

On Wednesday morning, his government released a document called the Bletchley Declaration, signed by representatives from the 28 countries attending the event, including the U.S. and China, which warned of the dangers posed by the most advanced “frontier” A.I. systems.

“There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these A.I. models,” the declaration said. “Many risks arising from A.I. are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible A.I.”

The document fell short, however, of setting specific policy goals. A second meeting is scheduled to be held in six months in South Korea and a third in France in a year.
Governments have scrambled to address the risks posed by the fast-evolving technology since last year’s release of ChatGPT, a humanlike chatbot that demonstrated how the latest models are advancing in powerful and unpredictable ways. Future generations of A.I. systems could accelerate the diagnosis of disease, help combat climate change and streamline manufacturing processes, but also present significant dangers in terms of job losses, disinformation and national security. A British government report last week warned that advanced A.I. systems “may help bad actors perform cyberattacks, run disinformation campaigns and design biological or chemical weapons.”

Mr. Sunak promoted this week’s event, which gathers governments, companies, researchers and civil society groups, as a chance to start developing global safety standards.

Elon Musk, the CEO of SpaceX, Tesla and X (formerly Twitter), on Wednesday at the artificial intelligence summit. Credit: Leon Neal/Getty Images

The two-day summit in Britain is at Bletchley Park, a countryside estate 50 miles north of London, where Mr. Turing helped crack the Enigma code used by the Nazis during World War II. Considered one of the birthplaces of modern computing, the location is a conscious nod to the prime minister’s hopes that Britain could be at the center of another world-leading initiative.

Bletchley is “evocative in that it captures a very defining moment in time, where great leadership was required from government but also a moment when computing was front and center,” said Ian Hogarth, a tech entrepreneur and investor who was appointed by Mr. Sunak to lead the government’s task force on A.I. risk, and who helped organize the summit.
“We need to come together and agree on a wise way forward.”

With Elon Musk and other tech executives in the audience, King Charles III delivered a video address in the opening session, recorded at Buckingham Palace before he departed for a state visit to Kenya this week. “We are witnessing one of the greatest technological leaps in the history of human endeavor,” he said. “There is a clear imperative to ensure that this rapidly evolving technology remains safe and secure.”

Vice President Kamala Harris and Gina Raimondo, the secretary of commerce, were taking part in meetings on behalf of the United States.

Wu Zhaohui, China’s vice minister of science and technology, told attendees that Beijing was willing to “enhance dialogue and communication” with other countries about A.I. safety. China is developing its own initiative for A.I. governance, he said, adding that the technology is “uncertain, unexplainable and lacks transparency.”

In a speech on Friday, Mr. Sunak addressed criticism he had received from China hawks over the attendance of a delegation from Beijing. “Yes — we’ve invited China,” he said. “I know there are some who will say they should have been excluded. But there can be no serious strategy for A.I. without at least trying to engage all of the world’s leading A.I. powers.”

Vice President Kamala Harris and her husband, Douglas Emhoff, arriving at Stansted Airport, north of London, on Tuesday night. Credit: Joe Giddens/Press Association, via Associated Press

With development of leading A.I. systems concentrated in the United States and a small number of other countries, some attendees said regulations must account for the technology’s impact globally.
Rajeev Chandrasekhar, a minister of technology representing India, said policies must be set by a “coalition of nations rather than just one country to two countries.”

“By allowing innovation to get ahead of regulation, we open ourselves to the toxicity and misinformation and weaponization that we see on the internet today, represented by social media,” he said.

Executives from leading technology and A.I. companies, including Anthropic, Google DeepMind, IBM, Meta, Microsoft, Nvidia, OpenAI and Tencent, were attending the conference. Also sending representatives were a number of civil society groups, among them Britain’s Ada Lovelace Institute and the Algorithmic Justice League, a nonprofit in Massachusetts.

In a surprise move, Mr. Sunak announced on Monday that he would take part in a live interview with Mr. Musk on his social media platform X after the summit ends on Thursday.

Some analysts argue that the conference will be heavier on symbolism than substance, with a number of key political leaders absent, including President Biden, President Emmanuel Macron of France and Chancellor Olaf Scholz of Germany. And many governments are moving forward with their own laws and regulations. Mr. Biden announced an executive order this week requiring A.I. companies to assess national security risks before releasing their technology to the public. The European Union’s A.I. Act, which could be finalized within weeks, represents a far-reaching attempt to protect citizens from harm. China is also cracking down on how A.I. is used, including censoring chatbots.

Britain, home to many universities where artificial intelligence research is being conducted, has taken a more hands-off approach. The government believes that existing laws and regulations are sufficient for now, while announcing a new A.I. Safety Institute that will evaluate and test new models.

Mr. Hogarth, whose team has negotiated early access to the models of several large A.I.
companies to research their safety, said he believed that Britain could play an important role in figuring out how governments could “capture the benefits of these technologies as well as putting guardrails around them.”

In his speech last week, Mr. Sunak affirmed that Britain’s approach to the potential risks of the technology is “not to rush to regulate.”

“How can we write laws that make sense for something we don’t yet fully understand?” he said.
Adam Satariano is a technology correspondent based in Europe, where his work focuses on digital policy and the intersection of technology and world affairs. Megan Specia is an international correspondent for The Times, based in London, covering the United Kingdom and Ireland. Since early 2022, she has also covered the war in Ukraine. She joined The Times in 2016.

A version of this article appears in print on Nov. 2, 2023, Section A, Page 6 of the New York edition with the headline: Governments Warn A.I. Poses Risk of ‘Catastrophic’ Harm.