New neurotechnology is blurring the lines around mental privacy – but are new human rights the answer?

Neurotechnologies – devices that interact directly with the brain or nervous system – were once dismissed as the stuff of science fiction. No more. Several companies are developing, and some are even testing, “brain-computer interfaces,” or BCIs, the most high-profile of which is probably Elon Musk’s Neuralink. He announced on January 29, 2024, that the first participant in the company’s clinical trials had received a brain implant.

Like other companies, Neuralink’s immediate goal is to improve independence for patients with severe paralysis or other neurological disorders.

But not all BCIs are designed for medical use: EEG headsets that detect electrical activity in the wearer’s brain are marketed for a wide range of applications, from entertainment and fitness to education and the workplace. Musk’s ambitions, however, go beyond these therapeutic and non-medical uses: Neuralink ultimately aims to help people “surpass able-bodied human performance.”

Neurotechnology research and patents have at least doubled in the past two decades, according to a United Nations report, and devices are becoming more powerful. Newer devices have the ability to collect data from the brain and other parts of the nervous system more directly, with higher resolution, in larger volumes and in more widespread ways.

However, these improvements have raised concerns about mental privacy and human autonomy—questions that I consider in my research on the ethical and social implications of brain science and neural engineering. Who owns the data generated, and who should have access? Could this type of device threaten the ability of individuals to make independent decisions?

In July 2023, UNESCO, the U.N. agency for science and culture, held a conference on the ethics of neurotechnology and called for a framework to protect human rights. Some critics have even argued that societies should recognize a new category of human rights, “neural rights.” In 2021, Chile became the first country whose constitution addresses concerns about neurotechnology.

Advances in neurotechnology do raise important privacy concerns. However, I believe these debates can overlook more fundamental threats to privacy.

An inner insight

Concerns about neurotechnology and privacy focus on the idea that an observer can “read” a person’s thoughts and feelings directly from recordings of their brain activity.

It is true that some neurotechnologies can record brain activity with great specificity: high-density electrode arrays, for example, allow high-resolution recording from multiple parts of the brain.


Researchers can make inferences about mental phenomena and interpret behavior based on this type of information. However, “reading” recorded brain activity is not straightforward: the data have already passed through filters and algorithms before any human sees the output.

With these complexities in mind, my colleague Daniel Susser and I wrote an article in the American Journal of Bioethics – Neuroscience asking whether some concerns about mental privacy might be misplaced.

Although neurotechnologies do raise significant privacy concerns, we argue that the risks are similar to those of more familiar data-collection technologies, such as everyday online surveillance: the kind most people experience through internet browsers and advertising, or through wearable devices. Even the browser history on a personal computer can reveal very sensitive information.

It is also worth remembering that a key aspect of being human has always involved inferring other people’s behaviors, thoughts and feelings. Brain activity alone does not tell the whole story; other behavioral or physiological measures are also needed to reveal this type of information, along with social context. The same surge in brain activity might indicate either fear or excitement, for example.

However, that is not to say that there is no cause for concern. Researchers are exploring new directions in which multiple sensors – such as headbands, wrist sensors and room sensors – can be used to capture various types of behavioral and environmental data. Artificial intelligence could be used to combine that data into more powerful interpretations.

Think for yourself?

Another hotly debated topic related to neurotechnology is cognitive liberty. According to the Center for Cognitive Liberty & Ethics, founded in 1999, the term refers to “every person’s right to think independently and autonomously, to use the full power of his or her mind, and to engage in multiple modes of thought.”

More recently, other researchers have revived the idea, such as legal scholar Nita Farahany in her book “The Battle for Your Brain.” Proponents of cognitive liberty generally argue for the need to protect individuals from having their mental processes manipulated or monitored without their consent. They argue that stronger regulation of neurotechnology may be necessary to protect individuals’ freedom to determine their own inner thoughts and control their own mental functions.


These are important freedoms, and specific aspects – such as BCI neurotechnology and non-medical neurotechnology applications – certainly raise important questions. But I have argued that the way cognitive liberty is discussed in these debates treats each individual as an isolated, independent agent, neglecting the relational aspects of who we are and how we think.

Ideas don’t just spring out of nothing in one’s head. Part of my mental process in writing this article, for example, is remembering and reflecting on my colleagues’ research. I’m also drawing on my own experiences: who I am today is a combination of my upbringing, the society I grew up in and the schools I attended. Even the ads my web browser pushes at me can shape my thoughts.

How unique are our ideas? How much are my mental processes already being manipulated by other influences? With that in mind, how should societies protect privacy and freedom?

I believe that recognizing the extent to which our thoughts are already being shaped and monitored by many different forces can help set priorities as neurotechnologies and AI become more common. Looking beyond the new technology to strengthening existing privacy laws may provide a more holistic view of the many threats to privacy, and of the freedoms that need protecting.

This is an updated version of an article originally published on August 7, 2023.

This article is republished from The Conversation, a non-profit, independent news organization that brings you reliable facts and analysis to help you make sense of our complex world. It was written by: Laura Y. Cabrera, Penn State


Laura Y. Cabrera receives funding from the National Institutes of Health, and the National Network of Depression Centers. She is affiliated with IEEE, and the International Neuroethics Society.
