The rise of ChatGPT and similar artificial intelligence systems has been accompanied by a sharp increase in concern about AI. For the past few months, AI executives and safety researchers have been offering predictions, dubbed “P(doom),” about the likelihood that AI will bring about a large-scale catastrophe.
Concerns peaked in May 2023 when the non-profit research and advocacy organization Center for AI Safety issued a one-sentence statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” Many key players in the field signed the statement, including the heads of OpenAI, Google DeepMind and Anthropic, as well as two of the “Godfathers” of AI: Geoffrey Hinton and Yoshua Bengio.
You might ask how such existential fears are supposed to play out. One famous scenario is the “paper clip maximizer” thought experiment proposed by Oxford philosopher Nick Bostrom. The idea is that an AI system tasked with producing as many paper clips as possible might go to extraordinary lengths to find raw materials, like destroying factories and causing car accidents.
In a less resource-intensive variation, an AI tasked with securing a reservation at a popular restaurant shuts down cellular networks and traffic lights to prevent other patrons from getting a table.
Office supplies or dinner reservations, the basic idea is the same: AI is fast becoming an alien intelligence, good at accomplishing goals but dangerous because it won’t necessarily align with the moral values of its creators. And, in its most extreme version, this argument morphs into explicit anxieties about AIs enslaving or destroying the human race.
Actual damage
In recent years, my colleagues and I at UMass Boston’s Applied Ethics Center have been studying how engagement with AI affects people’s understanding of themselves, and I believe these catastrophic fears are overblown and misdirected.
Yes, AI’s ability to create convincing deep-fake audio and video is frightening, and it can be abused by people with bad intent. In fact, that is already happening: Russian operatives apparently tried to embarrass Kremlin critic Bill Browder by ensnaring him in a conversation with an avatar of former Ukrainian President Petro Poroshenko. Cybercriminals have been using AI voice cloning for a variety of crimes – from high-tech heists to ordinary scams.
AI decision-making systems that offer loan approval and hiring recommendations carry the risk of algorithmic bias, since the training data and decision models they rely on reflect long-standing social prejudices.
These are big problems, and they demand the attention of policy makers. But they’ve been around for a while, and they’re hardly cataclysmic.
Not in the same league
The statement from the Center for AI Safety lumped AI in with pandemics and nuclear weapons as a major risk to civilization. There are problems with that comparison. COVID-19 resulted in almost 7 million deaths worldwide, brought on a massive and continuing mental health crisis and created economic challenges, including chronic supply chain shortages and runaway inflation.
Nuclear weapons probably killed more than 200,000 people in Hiroshima and Nagasaki in 1945, claimed many more lives from cancer in the years that followed, generated decades of profound anxiety during the Cold War and brought the world to the brink of annihilation during the Cuban Missile Crisis in 1962. They have also changed the calculations of national leaders on how to respond to international aggression, as is currently playing out with Russia’s invasion of Ukraine.
AI is nowhere near capable of doing this kind of damage. The paper clip scenario and others like it are science fiction. Existing AI applications execute specific tasks rather than making broad judgments. The technology is a long way from being able to decide on, and then plan, the goals and subordinate goals necessary for shutting down traffic in order to get you a seat in a restaurant, or blowing up a car factory in order to satisfy your itch for paper clips.
Not only does the technology lack the complex, multi-layered judgment capabilities involved in these scenarios, it also lacks autonomous access to sufficient parts of our critical infrastructure to start causing that kind of damage.
What it means to be human
In reality, there is an existential danger to the use of AI, but that risk is existential in a philosophical rather than apocalyptic sense. AI as it stands today can change the way people see themselves. It can degrade abilities and experiences that people consider essential to being human.
For example, humans are judgment-making creatures. People rationally weigh particulars and make daily judgment calls at work and in leisure time about whom to hire, who to lend to, what to watch and so on. But more and more of these judgments are being automated and farmed out to algorithms. As that happens, the world won’t end. But people will gradually lose the capacity to make these judgments themselves. The fewer of them people make, the worse they are likely to become at making them.
Or consider the role of chance in people’s lives. People appreciate serendipitous encounters: coming across a place, person or activity by accident, being drawn into it and realizing in retrospect the role accident played in those meaningful discoveries. But the role of algorithmic recommendation engines is to reduce that kind of serendipity and replace it with planning and prediction.
Finally, consider ChatGPT’s writing capabilities. The technology is in the process of eliminating the role of writing assignments in higher education. If it does, educators will lose a key tool for teaching students how to think critically.
Not dead but diminished
So, no, AI won’t blow up the world. But the increasingly uncritical embrace of it, in a variety of narrow contexts, means the gradual erosion of some of humans’ most important skills. Algorithms are already undermining people’s capacity to make judgments, enjoy serendipitous encounters and hone critical thinking.
The human species will survive such losses. But our way of existing will be impoverished in the process. The fantastic anxieties around the coming AI cataclysm, singularity, Skynet, or however you might think of it, obscure these more subtle costs. Recall T.S. Eliot’s famous closing lines of “The Hollow Men”: “This is the way the world ends,” he wrote, “not with a bang but a whimper.”
This article is republished from The Conversation, a non-profit, independent news organization that brings you reliable facts and analysis to help you make sense of our complex world. It was written by: Nir Eisikovits, UMass Boston
The Applied Ethics Center at UMass Boston receives funding from the Institute for Ethics and Emerging Technologies. Nir Eisikovits serves as a data ethics consultant for Hour25AI, a startup dedicated to reducing digital distractions.