Language enables people to transmit ideas to each other because everyone’s brain responds in the same way to the meaning of words. In our newly published research, my colleagues and I developed a framework for modeling the brain activity of speakers during face-to-face conversations.
We recorded the electrical brain activity of two people while they engaged in unscripted conversations. Previous research has shown that when two people are conversing, their brain activity is coupled, or aligned, and that the degree of neural coupling is associated with a better understanding of the speaker’s message.
A neural code refers to the particular patterns of brain activity associated with specific words in their contexts. We discovered that the brains of the speaker and the listener align on such a shared neural code. Importantly, the brain’s neural code resembled the artificial neural code of large language models, or LLMs.
Neural patterns of words
A large language model is a machine learning program that can generate text by predicting which words are most likely to follow other words. Large language models excel at learning the structure of language, generating human-like text and holding conversations. They can even pass the Turing test, making it difficult for a person to know whether they are interacting with a machine or another human. Like people, LLMs learn to use language by reading or listening to text produced by others.
By giving the LLM a transcript of the conversation, we were able to extract its “neural activities” – that is, how it translates words into numbers – as it “reads” the script. Next, we correlated the LLM’s activities with the brain activity of both the speaker and the listener. We found that the LLM’s activities could predict the shared brain activity of the speaker and the listener.
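As an illustration, a minimal version of this kind of analysis could look like the following Python sketch, which pulls word-level embeddings from an off-the-shelf model and fits a simple linear “encoding model” to simulated brain data. The choice of GPT-2, the word-averaging step and the ridge regression are assumptions made for the example, not a description of our actual pipeline.

# Sketch: extract contextual word embeddings from a transcript with an
# off-the-shelf language model, then fit a linear encoding model that
# maps those embeddings to (here, simulated) word-aligned brain activity.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def word_embeddings(transcript_words):
    """Return one contextual embedding per word of the transcript."""
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2")
    model.eval()

    enc = tokenizer(" ".join(transcript_words), return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]      # (n_tokens, 768)

    # Average the sub-word token vectors belonging to each word
    # (one simple, common way to get word-level features).
    word_ids = enc.word_ids(0)
    n_words = max(i for i in word_ids if i is not None) + 1
    embs = np.zeros((n_words, hidden.shape[1]))
    counts = np.zeros(n_words)
    for tok_idx, w_idx in enumerate(word_ids):
        if w_idx is not None:
            embs[w_idx] += hidden[tok_idx].numpy()
            counts[w_idx] += 1
    return embs / counts[:, None]

# Toy demonstration with simulated word-aligned "brain" activity.
words = ("he visited the museum yesterday and said it was cold "
         "outside but the paintings were wonderful").split()
X = word_embeddings(words)                  # LLM features, one row per word
y = X @ np.random.randn(X.shape[1]) + 0.1 * np.random.randn(len(words))
# y stands in for the activity of one electrode, aligned to each word.

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
encoding_model = Ridge(alpha=10.0).fit(X_tr, y_tr)
r = np.corrcoef(encoding_model.predict(X_te), y_te)[0, 1]
print(f"encoding-model correlation on held-out words: r = {r:.2f}")

In a real analysis, the simulated electrode signal would be replaced with word-aligned recordings from the speaker or the listener, and the held-out correlation would be evaluated across many electrodes and many minutes of conversation.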
To understand each other, people must share an agreement on the rules of grammar and the meaning of words in context. For example, we know to use the past tense form of a verb to talk about actions in the past, as in the sentence: “He visited the museum yesterday.” We also understand implicitly that the same word can have different meanings in different situations. The word “cold” in the sentence “you’re cold as ice,” for instance, can refer to a person’s body temperature or to a personality trait, depending on the context. Because of this complexity and richness, we did not have an accurate mathematical model for describing natural language until the recent success of large language models.
Our study found that large language models can predict how linguistic information is encoded in the human brain, providing a new tool for interpreting human brain activity. The similarities between the linguistic code of the human brain and that of the large language model enabled us, for the first time, to trace how information in the speaker’s brain is encoded into words and transferred, word by word, to the brain of the listener during face-to-face conversations. For example, we found that brain activity associated with a word’s meaning occurs in the speaker’s brain before that word is uttered, and that the same activity re-emerges rapidly in the listener’s brain after the word is heard.
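The toy simulation below illustrates how such a speaker-before, listener-after timing pattern can be recovered with a simple lagged correlation. The sampling rate, the 300-millisecond shifts and the noise level are invented for the example and are not taken from our recordings.

# Sketch: simulate a signal that appears in the "speaker" shortly before
# each word and in the "listener" shortly after, then recover that
# asymmetry by correlating the two signals at a range of time lags.
import numpy as np

rng = np.random.default_rng(0)
fs = 100                                  # samples per second (assumed)
n = 60 * fs                               # one minute of signal
word_onsets = np.arange(2 * fs, n - fs, 2 * fs)   # a "word" every 2 seconds

def response(onsets, shift_s):
    """Brief bumps of activity shifted by shift_s seconds from word onsets."""
    sig = 0.3 * rng.standard_normal(n)    # background noise
    for onset in onsets:
        idx = onset + int(round(shift_s * fs))
        sig[idx - 5:idx + 5] += 1.0       # ~100 ms bump of activity
    return sig

speaker = response(word_onsets, shift_s=-0.3)   # active before the word
listener = response(word_onsets, shift_s=+0.3)  # active after the word

# Correlate the two signals at each lag and find where they match best.
lags_s = np.arange(-1.0, 1.001, 0.05)
corrs = []
for lag in lags_s:
    shift = int(round(lag * fs))
    a = speaker[max(0, -shift): n - max(0, shift)]
    b = listener[max(0, shift): n + min(0, shift)]
    corrs.append(np.corrcoef(a, b)[0, 1])

best_lag = lags_s[int(np.argmax(corrs))]
print(f"listener activity best matches the speaker's about "
      f"{best_lag:.2f} s later")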
A powerful new tool
Our study provided insights into the neural code for language processing in the human brain and into how humans and machines can use this code to communicate. We found that large language models were able to predict shared brain activity better than features that capture individual aspects of language, such as syntax, or the order in which words are combined into phrases and sentences. This is partly due to the LLM’s ability to incorporate the contextual meaning of words, as well as to integrate multiple levels of the linguistic hierarchy into a single model: from words to sentences to conceptual meaning. This suggests important similarities between the brain and artificial neural networks.
An important aspect of our research is the use of recordings of natural, everyday conversations to ensure that our findings capture real-world brain processing, a property known as ecological validity. In contrast to experiments where participants are told what to say, we relinquished control over the study and let the participants converse as naturally as possible. This loss of control makes the data harder to analyze, because each conversation is unique and involves two people speaking spontaneously. Our ability to model neural activity as people engage in everyday conversations demonstrates the power of large language models.
Other dimensions
Now that we have developed a framework to assess the neural code shared between brains during everyday conversations, we are interested in what factors encourage or inhibit this coupling. For example, does linguistic coupling increase if a listener better understands the speaker’s intent? Or perhaps complex language, such as jargon, may reduce neural coupling.
Another factor that may influence linguistic coupling is the relationship between the speakers. For example, you might be able to convey a lot of information in a few words to a good friend but not to a stranger. And you may couple neurally more readily with political allies than with rivals. This is because differences in how we use words across groups may make it easier to align with people inside our social groups than with those outside them.
This article is republished from The Conversation, a non-profit, independent news organization that brings you reliable facts and analysis to help you make sense of our complex world. It was written by: Zaid Zada, Princeton University
Zaid Zada does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.