What Generative AI Reveals About the Human Mind

Generative AI – think DALL-E, ChatGPT, and more – is all the rage. Its remarkable successes, and occasional catastrophic failures, have sparked important debates about the scope and dangers of advanced forms of artificial intelligence. But what, if anything, does this work reveal about natural intelligence like our own?

I am a philosopher and cognitive scientist who has spent my entire career trying to understand how the human mind works. Drawing on research spanning psychology, neuroscience, and artificial intelligence, my search has drawn me toward a picture of how the natural mind works that is interestingly similar to, but also very different from, the core operating principles of the generative AIs. Perhaps an examination of this contrast can help us understand both a little better.

These AIs learn a generative model (hence the name) that enables them to predict patterns in various kinds of data or signals. Being generative means that they learn enough about the deep regularities in some data set to be able to create plausible new versions of that kind of data for themselves. For ChatGPT, the data is text. Knowing all the weak and strong patterns in a huge library of texts allows ChatGPT, when prompted, to produce plausible new text of its own, shaped by user prompts – for example, a user could ask for a story about a black cat written in the style of Ernest Hemingway. But there are also AIs that specialize in other kinds of data, such as images, enabling them to create new paintings in the style of, say, Picasso.
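
As a rough illustration – my own toy sketch, not how ChatGPT is actually built (real systems use large neural networks, not word-pair counts) – the core idea of learning the regularities in a body of text and then generating plausible new text of the same kind can be captured in a few lines of Python:

```python
# Toy sketch of "generative" text prediction: learn which word tends to
# follow which in a tiny corpus, then sample from those regularities to
# produce plausible new text of the same kind. Purely illustrative.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat purred on the mat".split()

# Learn the model: count which word tends to follow which.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

# Generate: repeatedly predict a plausible next word.
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(follows[word])  # sample from learned regularities
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the mat and the cat"
```

Scaled up by many orders of magnitude, and with far richer statistics than simple word-pair counts, this is the family of trick behind the text generators.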

What does this have to do with the human mind? According to much contemporary theorizing, the human brain has also learned a model that enables it to predict certain kinds of data. But in this case, the data to be predicted are the various barrages of sensory information registered by sensors in our eyes, ears, and other perceptual organs. Now comes the crucial difference. Natural brains must learn to predict those sensory flows in a very special context – one in which the sensory information is used to select actions that help us survive and thrive. This means that among the many things our brains learn to predict, a core subset concerns the ways our own actions on the world will alter what we subsequently sense. For example, my brain has learned that if I accidentally tread on my cat’s tail, the next sensory stimulations I receive will often include the sight of flinching, the sound of yowling, and occasionally a feeling of pain from a well-deserved retaliatory scratch.
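
To make the action-conditioned flavor of this prediction concrete, here is a small hedged sketch – the actions and outcomes are invented for illustration – of an agent that learns, from its own interventions, what sensory consequences to expect from each action:

```python
# Illustrative sketch: learn action-conditioned sensory predictions
# by acting and recording what follows. Not a model of any real brain.
import random
from collections import defaultdict

# Invented mini-world: each action tends to produce a sensory outcome.
def world(action):
    if action == "tread_on_tail":
        return "yowl_and_scratch" if random.random() < 0.9 else "silence"
    return "purring"

# Learn by acting: record which outcomes follow which of our own actions.
experience = defaultdict(lambda: defaultdict(int))
for _ in range(500):
    action = random.choice(["tread_on_tail", "stroke_cat"])
    experience[action][world(action)] += 1

# Predict the most likely sensory consequence of a contemplated action.
def predict(action):
    outcomes = experience[action]
    return max(outcomes, key=outcomes.get)

print(predict("tread_on_tail"))  # almost surely 'yowl_and_scratch'
print(predict("stroke_cat"))     # 'purring'
```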

This kind of learning has special virtues. It helps us separate cause from simple correlation. Seeing my cat is strongly correlated with seeing the furniture in my apartment. But neither of these causes the other to occur. Treading on my cat’s tail, by contrast, is the cause of the subsequent yowling and scratching. Knowing the difference is essential for a creature that must act on its world to bring about desired (or avoid undesired) effects. In other words, the generative model that issues in natural predictions is tethered to a well-known and biologically crucial goal – selecting the right actions to perform at the right times. That means knowing how things currently are and (crucially) how things will change if we act and intervene upon the world in certain ways.
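
The point can be made concrete with a toy simulation (all variables invented for illustration): passively observed, cat and furniture co-occur perfectly, yet only intervening reveals that treading on the tail is what produces the yowling:

```python
# Minimal sketch of cause vs. correlation: cat_visible and
# furniture_visible are correlated (both depend on being at home),
# but only tail_trodden causes yowling. Invented toy variables.
import random

def world(tail_trodden: bool):
    at_home = random.random() < 0.5
    cat_visible = at_home          # correlated via a common cause
    furniture_visible = at_home    # correlated via a common cause
    yowling = tail_trodden         # the genuine causal link
    return cat_visible, furniture_visible, yowling

# Passive observation: cat and furniture always co-occur...
samples = [world(tail_trodden=False) for _ in range(1000)]
print(sum(c == f for c, f, _ in samples))  # ~1000: perfect correlation

# ...but intervention reveals the real causal lever.
print(world(tail_trodden=True)[2])   # True  - treading causes yowling
print(world(tail_trodden=False)[2])  # False - no treading, no yowl
```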

How do ChatGPT and other contemporary AIs compare with this understanding of the human brain and mind? Most obviously, current AIs tend to specialize in predicting rather specific kinds of data – sequences of words, in the case of ChatGPT. At first glance, this suggests that ChatGPT might be better seen as a model of our textual outputs rather than (like a biological brain) a model of the world we live in. That would be a very significant difference indeed. But it could be argued that this move is a little too quick. Words, as the wealth of great and not-so-great literature attests, already depict patterns of every kind – patterns among looks and tastes and sounds, for example. This gives the generative AIs a real window onto our world. Still missing, however, is that crucial ingredient – action. At best, text-predictive AIs inherit a kind of verbal fossil trail of the effects of our actions upon the world. That trail consists of verbal descriptions of actions (“Andy trod on his cat’s tail”) along with verbally recorded information about their typical effects and consequences. Despite this, the AIs have no practical ability to intervene in the world – and so no way to test, evaluate, and improve their own world model, the one making the predictions.

This is an important practical limitation. It is as if someone had access to a vast library of data on the shape and outcomes of all previous experiments, but was unable to conduct any experiments of their own. But it may also have a deeper significance. For it is arguably only by poking, prodding, and generally intervening upon our world that biological minds anchor their knowledge to the very world it is meant to describe. By learning what causes what, and how different actions will affect our future lives in different ways, we build a firm foundation for our own later understandings. It is this foundation in actions and their consequences that enables us later to make real sense of sentences we encounter, such as “The cat scratched the person who trod on its tail.” Our generative models – unlike those of the generative AIs – are forged in the fires of action.

Could the AIs of the future build anchored models this way too? Could they start to run experiments in which they launch responses into the world to see what effects those responses have? Something like this already happens in the context of online advertising, political campaigning, and social media manipulation, where algorithms can launch ads, posts, and reports, then adjust their future behavior according to the specific effects on buyers, voters, and others. If more powerful AIs closed the action loop in these ways, they would begin to turn their currently passive and “second-hand” window onto human life into something closer to the kind of grip that we active humans have on our own lives.
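
Here is a minimal sketch of such a closed action loop, in the spirit of the advertising example – an epsilon-greedy bandit, with ad names and click rates invented for illustration:

```python
# Hedged sketch of "closing the action loop": launch one of several ads,
# observe the effect (click or no click), and adjust future behavior.
# The ads and their click rates are made up for this example.
import random

true_click_rates = {"ad_A": 0.02, "ad_B": 0.05, "ad_C": 0.11}  # unknown to the agent
shown = {ad: 0 for ad in true_click_rates}
clicked = {ad: 0 for ad in true_click_rates}

def estimated_rate(ad):
    return clicked[ad] / shown[ad] if shown[ad] else 0.0

for _ in range(5000):
    if random.random() < 0.1:                       # explore: try an action
        ad = random.choice(list(true_click_rates))
    else:                                           # exploit: use what was learned
        ad = max(true_click_rates, key=estimated_rate)
    shown[ad] += 1
    if random.random() < true_click_rates[ad]:      # the world responds to the action
        clicked[ad] += 1

print({ad: round(estimated_rate(ad), 3) for ad in shown})
# The agent converges on ad_C - knowledge earned by intervening, not just observing.
```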

But even then, something else would still be missing. Many of the predictions that structure human experience concern our own internal physiological states. For example, we sense thirst and hunger in anticipatory ways, allowing us to remedy looming shortfalls before they occur, staying within the zone compatible with bodily integrity and survival. This means we exist in a world where some of our brain’s predictions matter in a very special way. They matter because they enable us to continue to exist as the embodied, energy-metabolizing beings that we are. Humans also benefit hugely from the collective practices of culture, science, and art, which allow us to share our knowledge and to probe and test our best models of ourselves and our worlds.

In addition, we are what might be called “conscious scientists” – we represent ourselves as beings with knowledge and beliefs, and we have slowly devised the complex worlds of art, science, and technology with which to test and improve our own knowledge and beliefs. For example, we can write papers that make claims that are swiftly challenged by others, and then run experiments to try to resolve the disagreements. In all these ways (leaving aside the obvious but currently intractable questions about “real conscious awareness”) there seems to be a great gulf separating our special forms of knowing and understanding from anything so far achieved by the AIs.

Could AIs one day be prediction machines with a survival instinct, running core predictions that proactively seek to create and maintain the conditions for their own existence? Could they thereby become increasingly autonomous, protecting their own hardware and manufacturing and drawing down power as needed? Could they form communities and invent a kind of culture? Could they model themselves as beings with beliefs and opinions? There is nothing in their current situation to drive them in these familiar directions. But none of those dimensions is obviously off-limits either. If changes occurred along all or some of them, we might yet be looking at the soul of a new machine.
