Things to know about AI safety summit in Seoul

SEOUL, South Korea (AP) — South Korea is set to host a mini-summit this week on the risks and regulation of artificial intelligence, following an inaugural AI safety meeting in Britain last year that drew a diverse crowd of tech luminaries, researchers and officials.

The aim of the meeting in Seoul is to build on the work started at the United Kingdom gathering on curbing threats from cutting-edge artificial intelligence systems.

Here’s what you need to know about the Seoul AI Summit and AI safety issues.

WHY IS AI SAFETY AN INTERNATIONAL ISSUE?

The Seoul summit is one of many global efforts to create guardrails for a rapidly advancing technology that promises to transform many aspects of society, but that has also raised concerns about new risks, both everyday ones, such as algorithmic bias that skews search results, and potential existential threats to humanity.

The UK summit in November, held at a secretive former wartime code-breaking base at Bletchley, north of London, brought together researchers, government leaders, tech executives and members of civil society groups, many with sharply differing views on AI, for closed-door talks. Tesla CEO Elon Musk and OpenAI CEO Sam Altman mingled with politicians such as British Prime Minister Rishi Sunak.

Delegates from more than two dozen countries, including the United States and China, signed the Bletchley Declaration, agreeing to work together to contain the potentially “catastrophic” risks posed by advances in artificial intelligence.

In March, the United Nations General Assembly approved its first resolution on artificial intelligence, backing an international effort to ensure the powerful new technology benefits all nations, respects human rights and is “safe, secure and trustworthy.”

Earlier this month, the United States and China held their first high-level talks on artificial intelligence in Geneva to discuss how to address the risks of the fast-evolving technology and set shared standards to manage it. There, U.S. officials raised concerns about China’s “misuse of AI,” while Chinese representatives criticized U.S. “restrictions and pressure” on China over artificial intelligence, according to their governments.

WHAT’S ON THE AGENDA AT THE SEOUL SUMMIT?

The May 21-22 meeting is being co-hosted by the governments of South Korea and the UK.

On the first day, Tuesday, South Korean President Yoon Suk Yeol and Sunak will meet other leaders virtually. A number of global industry leaders have been invited to provide updates on how they are fulfilling the commitments made at the Bletchley summit to ensure the safety of their AI models.

On the second day, digital ministers will gather for an in-person meeting hosted by South Korean Science Minister Lee Jong-ho and British Technology Secretary Michelle Donelan. Participants will share best practices and concrete action plans. They will also share ideas on how to protect society from the potential negative impacts of AI in areas such as energy use, workers and the spread of misinformation and disinformation, according to the organizers.

The meeting is being dubbed a mini virtual summit, serving as an interim gathering until a full-fledged in-person edition that France has pledged to hold.

The digital ministers’ meeting will include representatives from countries such as the United States, China, Germany, France and Spain, and companies including ChatGPT maker OpenAI, Google, Microsoft and Anthropic.

WHAT PROGRESS HAVE AI SAFETY EFFORTS MADE?

The accord reached at the UK meeting was light on details and did not propose a way to regulate the development of AI.

“The United States and China came to the last summit. But when we look at some of the principles announced after the meeting, they were similar to what had already been announced after some U.N. and OECD meetings,” said Lee Seong-yeob, a professor at the Graduate School of Technology Management at Korea University in Seoul. “There was nothing new.”

It is important to hold a global summit on AI safety issues, he said, but it will be “considerably difficult” for all participants to reach agreements, since each country has different interests and different levels of domestic AI technologies and industries.

The meeting is being held as Meta, OpenAI and Google roll out the latest versions of their AI models.

The original AI Safety Summit was designed as a venue for finding solutions to so-called existential risks posed by the most powerful “frontier models” that underpin general-purpose AI systems such as ChatGPT.

Pioneering computer scientist Yoshua Bengio, known as one of the “godfathers of AI,” was tapped at the UK meeting to lead a panel of experts tasked with drafting a report on the state of AI safety. An interim version of the report, released on Friday for discussion in Seoul, identified a range of risks posed by general-purpose AI, including its malicious use to increase the “scale and sophistication” of frauds and scams, supercharge the spread of disinformation, or create new bioweapons.

Malfunctioning AI systems could spread bias in areas such as health care, job recruitment and financial lending, while the technology’s potential to automate a wide range of tasks also poses systemic risks to the labor market, the report said.

South Korea hopes to use the Seoul summit to take the initiative in shaping global governance and norms for AI. But some critics say the country lacks an AI infrastructure developed enough to play a leadership role in such governance issues.

__

Chan reported from London.
