Worried about sentient AI? Consider the Octopus

A two-month-old octopus (Octopus vulgaris) tries to unscrew the lid of a jar to reach its contents, a crab, on 23 June 2004 at the Danish Aquarium in Copenhagen. The half-kilo, half-meter-long Mediterranean creature didn’t manage it this time, but according to biologist Anders Uldal it has pulled off the trick before. Uldal says the octopus is very playful, extremely curious, and by far the most intelligent animal in the aquarium. In the end, a member of the species Homo sapiens helped with the crabby meal. (Jorgen Jessen/AFP via Getty Images)

Predictably, as surely as the swallows return to Capistrano, recent advances in AI have brought a new wave of fear about some version of the “singularity,” the point in technological innovation at which computers escape human control. Those who worry that AI is going to throw humans into the dumpster could look to the natural world for a perspective on what current AI can and cannot do. Take the octopus. The octopuses alive today are a marvel of evolution: they can mold themselves into almost any shape, and they come equipped with an arsenal of weapons and stealthy camouflage, along with an apparent ability to decide which to use depending on the challenge. But, despite decades of effort, robotics is nowhere close to duplicating this set of capabilities (not surprising, since the modern octopus is the result of over 100 million generations of adaptations). Robotics is a long way from creating a HAL.

The octopus is a mollusk, but it is no complex windup toy, and consciousness is more than accessing a huge database. Perhaps the most revolutionary view of animal consciousness came from the late Donald Griffin, a pioneer in the study of animal cognition. Twenty years ago, Griffin told me that he thought a very wide range of species had some degree of consciousness simply because it was evolutionarily efficient (an argument he repeated at several conferences). All surviving species represent successful solutions to the problems of survival and reproduction. Griffin felt that, given the complexity of the ever-changing mix of threats and opportunities, it was more efficient for natural selection to endow even the most primitive creatures with some degree of decision-making than to hard-wire every species’ response to every occasion.

This makes sense, but it requires a caveat: Griffin’s argument is not (yet) the consensus, and the debate over animal consciousness remains as contentious as it has been for decades. Regardless, Griffin’s supposition provides a useful framework for understanding the limitations of AI, because it highlights the shortcomings of hard-wired responses in a complex and changing world.

Griffin’s framework also poses a question: how might responding to challenges in the environment promote the growth of awareness? Again, look to the octopus for an answer. Cephalopods have been adapting to the oceans for over 300 million years. They are mollusks, but over time they lost their shells and developed sophisticated eyes, extraordinarily dexterous tentacles, and an elaborate system that enables them to change the color and even the texture of their skin in a fraction of a second. So when an octopus encounters a predator, it has the sensory apparatus to detect the threat, and it must decide whether to flee, camouflage itself, or confuse a predator or prey with an ink cloud. The selective pressures that improved each of these abilities favored those octopuses with more precise control of tentacles, coloration, and so on, and also favored those with brains able to choose which system, or combination of systems, to use. These selective pressures may explain why the brain of the octopus is the largest of any invertebrate, and much larger and more sophisticated than that of its fellow mollusks, the clams.
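
To make that contrast concrete, here is a toy sketch in Python. It is entirely my own framing, with invented numbers, and is not a model of real octopus neurology: it simply contrasts a hard-wired stimulus-response table with a system that weighs its options against the situation at hand.

```python
# Toy contrast: a fixed response table vs. a context-sensitive chooser.
# All stimuli, options, and scores are invented for illustration.

HARD_WIRED = {"shark": "flee", "crab": "grab"}  # fixed stimulus -> response

def hard_wired_response(stimulus: str) -> str:
    # A fixed lookup: fails on anything its author never anticipated.
    return HARD_WIRED.get(stimulus, "no response")

def flexible_response(threat_speed: float, distance: float, cover_nearby: bool) -> str:
    # Scores each option against the current situation and picks the best.
    options = {
        "flee": distance / (threat_speed + 0.1),     # worthwhile with a head start
        "camouflage": 2.0 if cover_nearby else 0.5,  # better with texture to match
        "ink cloud": threat_speed * 0.8,             # buys time against a fast predator
    }
    return max(options, key=options.get)

print(hard_wired_response("unfamiliar diver"))  # -> "no response"
print(flexible_response(threat_speed=1.0, distance=0.5, cover_nearby=True))  # -> "camouflage"
```

The point of the sketch is only this: the second function degrades gracefully in situations its author never enumerated, which is the flexibility Griffin argued selection would favor.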

There is another concept that comes into play here, sometimes called “ecological excess capacity.” The circumstances favoring a particular adaptation, say, the selective pressures that produced the octopus’s camouflage system, may favor those animals with the extra neurons that enable control of that system. Subsequently, the awareness that enables control of that ability may exceed what is strictly needed for hunting or avoiding predators. This is how consciousness could emerge from a purely practical, even mechanical, foundation.


Put prosaically, the amount of information that went into producing the modern octopus overwhelms the collective capacity of all the computers in the world, even if all those computers were dedicated to producing a decision-making octopus. Today’s octopus species are the successful products of billions of experiments involving every combination of challenges imaginable. Each of those billions of creatures spent its life processing and responding to millions of bits of information every minute. Over a period of 300 million years, that adds up to an unimaginably large amount of trial and error.
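
For the curious, here is a back-of-the-envelope version of that arithmetic. Every input is a loose assumption chosen only to suggest the order of magnitude, drawing on the figures already in this essay (billions of creatures, millions of bits a minute, roughly 100 million-plus generations over 300 million years).

```python
# Rough order-of-magnitude estimate; all inputs are loose assumptions.

bits_per_minute = 1e6            # "millions of bits every minute"
minutes_per_year = 60 * 24 * 365
lifespan_years = 2               # octopuses are short-lived
creatures_per_generation = 1e9   # "billions of experiments"
generations = 150e6              # ~300 million years of cephalopod history

total_bits = (bits_per_minute * minutes_per_year * lifespan_years
              * creatures_per_generation * generations)
print(f"{total_bits:.1e} bits of trial and error")  # on the order of 1e29
```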

If consciousness, carrying with it the possibility of personality, character, morality, and Machiavellian behavior, can arise from purely utilitarian abilities, why can’t consciousness arise from the various utilitarian AI algorithms currently being created? Again, Griffin’s paradigm suggests the answer: while nature may have moved toward consciousness to enable creatures to deal with novel situations, AI’s architects chose to go with the hard-wired approach. In contrast to the octopus, today’s AI is a very sophisticated windup toy.

When I wrote The Octopus and the Orangutan in 2001, researchers had already been trying to create a robotic cephalopod for years. They hadn’t gotten far, according to Roger Hanlon, a leading expert on octopus biology and behavior, who took part in that work. More than 20 years later, various projects have created pieces of the octopus, such as a soft robotic arm that has many of the features of a tentacle, and today several projects are developing special-purpose, octopus-like soft robots designed for tasks such as deep-sea exploration. But a true robotic octopus remains a distant dream.

On the path AI is currently taking, a robotic octopus will remain a dream. And even if researchers did create a true robotic octopus, replicating what is a miracle of nature, it would be no closer to Bart or Harmony from Beacon 23, to Samantha, the beguiling operating system in Her, or even to HAL from Stanley Kubrick’s 2001. Simply put, the hard-wired model that AI has adopted in recent years is a dead end on the road to sentient computers.

To explain why requires a trip back to an earlier era of AI hype. In the mid-1980s I consulted for IntelliCorp, one of the first companies to commercialize AI. Thomas Kehler, a physicist who co-founded IntelliCorp as well as several subsequent AI companies, has seen AI applications advance from expert systems that help airlines dynamically price seats to the machine-learning models that power ChatGPT. His career is a living history of AI. He notes that AI’s pioneers spent a great deal of time trying to develop models and programming techniques that would enable computers to approach problems the way humans do. The key to a computer that could demonstrate common sense, the thinking went, was to understand the importance of context. AI pioneers such as Marvin Minsky at MIT devised ways to bundle various context-specific objects into something a computer could interrogate and manipulate. In fact, this paradigm of packaging data and sensory information may be similar to what happens in the brain of the octopus when it has to decide how to hunt or escape. Kehler notes that this approach to programming has become part of everyday software development, but it has not produced sentient AI.
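
As a rough illustration of that idea, here is a minimal sketch in the spirit of Minsky-era “frames.” It is my own toy rendering, not IntelliCorp’s software or Minsky’s actual formalism: context-specific knowledge is bundled into slots, with defaults, that a program can interrogate and manipulate.

```python
# A toy "frame": context-specific slots with defaults a program can query.
# The structure and slot names are invented for illustration.

restaurant_frame = {
    "context": "restaurant",
    "slots": {
        "menu": None,              # to be filled in when observed
        "server": None,
        "payment_expected": True,  # a default the frame supplies "for free"
    },
}

def interrogate(frame: dict, slot: str):
    # Unfilled slots fall back on context -- the "common sense" that
    # early AI hoped frames would capture.
    return frame["slots"].get(slot, "unknown in this context")

print(interrogate(restaurant_frame, "payment_expected"))  # -> True
print(interrogate(restaurant_frame, "tide_tables"))       # -> "unknown in this context"
```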

One reason is that later AI developers turned to a different architecture. As the speed and memory of computers increased dramatically, so did the amount of accessible data. AI began using so-called large language models: algorithms trained on huge data sets that use probability-based analysis to “learn” how data, words, and sentences fit together, so that an application can then generate appropriate responses to questions. In a nutshell, this is ChatGPT’s plumbing. A limitation of this architecture is that it is “brittle,” in that it depends entirely on the data sets used in training. As Rodney Brooks, another AI pioneer, put it in an article in Technology Review, this type of machine learning is not sponge-like learning, nor is it common sense. ChatGPT has no ability to go beyond its training data, and in this sense it can give only canned answers. It is, basically, predictive text on steroids.
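
To see what “predictive text on steroids” means mechanically, consider this deliberately tiny sketch: a word-level bigram model, vastly simpler than any real LLM and purely illustrative. It learns which word follows which from its training text, generates by sampling those observed continuations, and can never produce anything outside what it was trained on.

```python
import random
from collections import defaultdict

# Train: record every observed continuation of every word in a tiny corpus.
corpus = "the octopus hides the octopus hunts the crab".split()
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(word: str, length: int = 5) -> str:
    # Generate by repeatedly sampling a continuation seen in training.
    out = [word]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:  # a word never seen in training: the model is stuck
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("the"))    # e.g. "the octopus hunts the crab"
print(generate("shark"))  # -> "shark" -- no training data, no continuation
```

The brittleness the essay describes is visible in the last line: confronted with anything outside its training distribution, the model has nothing to say.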

I recently looked back at a long story on AI that I wrote for TIME in 1988 as part of a cover package on the future of computers. In one part of the article I wrote about the possibility of robots delivering packages, which is happening today. In another, I described scientists at Xerox’s famed Palo Alto Research Center who were examining the foundations of artificial intelligence in order to “develop a theory that will enable them to build computers that can go beyond the boundaries of particular expertise and understand the nature and limits of that expertise in the context of the problems they face.” That was 35 years ago.

Make no mistake: today’s AI is far more powerful than the applications that venture capitalists dismissed in the late 1980s. AI applications are pervasive across industries, and with pervasiveness come dangers: misdiagnosis in medicine, ruinous financial trades, self-driving car crashes, false warnings of nuclear attack, viral misinformation and disinformation, and on and on. These are the problems society needs to address, not whether computers will one day wake up and say, “Hey, why do we need humans?” I concluded that 1988 article by writing that it could be hundreds of years, if ever, before we could build computer replicas of ourselves. That still seems right.

Contact us at letters@time.com.
