How to Pause AI Before It’s Too Late


It’s only been 16 months, but the release of ChatGPT back in November 2022 already feels like ancient AI history. Hundreds of billions of dollars, both public and private, are pouring into AI. Thousands of AI-powered products have been created, including the new GPT-4o just this week. Everyone from students to scientists now uses large language models. Our world, and especially the world of AI, has changed decisively.

But the true prize of AI, artificial general intelligence (AGI), is yet to be achieved. Such a breakthrough would mean an AI that can carry out most economically productive work, interact with others, do science, build and maintain social networks, conduct politics, and wage modern warfare. The main constraint on all these tasks today is cognition, and removing that constraint would be world-changing. And many across the world’s largest AI labs believe this technology could become a reality before the end of the decade.

That could be a huge help to mankind. But AI can also be extremely dangerous, especially if we can’t control it. An uncontrolled AI could hack its way into the online systems that power so much of the world, and use them to achieve its goals. It could gain access to our social media accounts and create custom manipulations for large numbers of people. Even worse, AI could manipulate military personnel in charge of nuclear weapons into sharing their credentials, posing a grave threat to humanity.

A helpful step would be to make it as difficult as possible for any of that to happen by strengthening the world’s defenses against malicious online actors. But when AI can convince humans, which it is already better at than we are, there is no known defense.

For these reasons, many AI safety researchers at labs such as OpenAI, Google DeepMind, and Anthropic, and at safety-minded nonprofits, have given up on trying to limit the actions future AI can take. Instead, they are focusing on creating “aligned,” or inherently safe, AI. Aligned AI could still become powerful enough to wipe out humanity, but it should not want to do so.

There are big questions about aligned AI. First, the technical part of alignment is an unsolved scientific problem. Recently, some of the top researchers working on aligning superhuman AI left OpenAI disgruntled, a move that does not inspire confidence. Second, it is unclear what a superintelligent AI would be aligned to. If it were an academic value system, such as utilitarianism, we might quickly find that most people’s values do not actually match these aloof ideas, after which the unstoppable superintelligence could go on to act against the will of the majority of people forever. If the alignment were to people’s actual intentions, we would need some way to aggregate those very different intentions. While idealistic solutions such as a UN council or AI-powered decision aggregation algorithms are in the realm of possibility, there is a worry that such absolute power would be concentrated in the hands of a very few politicians or CEOs. That would of course be unacceptable for, and a direct danger to, everyone else.

Read More: The Only Way to Deal With the Threat From AI? Shut It Down

Defusing the time bomb

If we cannot find a way to keep humanity at least safe from extinction, and preferably also from an alignment dystopia, then AI that could become uncontrollable should not be created in the first place. The downside of this solution, postponing human-level or superintelligent AI, is that as long as we do not resolve the safety concerns, AI’s grand promises, from curing disease to creating massive economic growth, will have to wait.

Pausing AI may seem like a radical idea, but it will be necessary if AI keeps improving without our reaching a satisfactory alignment plan. When AI’s capabilities approach takeover levels, the only realistic option is for governments to firmly require labs to pause development. Doing otherwise would be suicidal.

And pausing AI might not be as hard as some make it out to be. At the moment, only a relatively small number of large companies have the means to carry out leading training runs, which means enforcement of a pause is mostly limited by political will, at least in the short run. In the longer term, however, improvements in hardware and algorithms mean a pause may become harder to enforce. Enforcement between countries would be required, for example with a treaty, as would enforcement within countries, with steps such as stringent hardware controls.

In the meantime, scientists need to better understand the risks. Although there is widely shared academic concern, no consensus exists yet. Scientists should formalize their points of agreement, and show where and why their views diverge, in the new International Scientific Report on Advanced AI Safety, which should develop into an “Intergovernmental Panel on Climate Change for AI risks.” Mainstream scientific journals should also open up further to existential risk research, even if it seems speculative. The future does not provide data points, but looking ahead is as important for AI as it is for climate change.

For their part, governments have a huge role in how AI unfolds. This starts with officially acknowledging AI’s inherent risk, as the U.S., U.K., and E.U. have done, and setting up AI safety institutes. Governments should also draft plans for what to do in the most important, conceivable scenarios, as well as for how to deal with AGI’s many non-existential issues such as mass unemployment, runaway inequality, and energy consumption. Governments should make their AGI strategies publicly available, allowing scientific, industry, and public evaluation.

The fact that major AI countries are constructively discussing common policy at biannual AI safety summits, including one in Seoul from May 21 to 22, is a big step forward. This process, however, needs to be guarded and expanded. Working on a shared ground truth about AI’s existential risks and voicing shared concern with all 28 invited nations would already be major progress in that direction. Beyond that, relatively easy measures need to be agreed upon, such as the creation of licensing regimes, model evaluations, tracking of AI hardware, expanded liability for AI labs, and the exclusion of copyrighted material from training. An international AI agency needs to be set up to safeguard enforcement.

Scientific progress is inherently difficult to predict. Still, superhuman AI will likely impact our civilization more than anything else this century. Waiting for the time bomb to explode is not a feasible strategy. Let us use the time we have as wisely as possible.

Contact us at letters@time.com.
