Several companies are testing brain implants – why is Neuralink so hyped? Two professors unpack the ethical issues

Putting a computer inside a person’s brain once sounded like science fiction. Today, it is a reality. Academic and commercial groups are testing “brain-computer interface” devices to enable people with disabilities to function more independently. But Elon Musk’s company, Neuralink, has pushed this technology into debates about safety, ethics and neuroscience.

In January 2024, Musk announced that Neuralink had implanted its first chip in a human subject’s brain. The Conversation reached out to two scholars at the University of Washington School of Medicine – Nancy Jecker, a bioethicist, and Andrew Ko, a neurosurgeon who implants brain chip devices – for their thoughts on the ethics of this new horizon in neuroscience.

How does a brain chip work?

Neuralink’s coin-sized device, called N1, is designed to enable patients to carry out actions just by thinking about them, without moving their bodies.

Subjects in the company’s PRIME study – short for Precise Robotically Implanted Brain-Computer Interface – undergo surgery to place the device in a part of the brain that controls movement. The chip records and processes the brain’s electrical activity, then transmits this data to an external device, such as a phone or computer.

The external device “decodes” the patient’s brain activity, learning how to associate certain patterns with the patient’s goal: moving a computer cursor up a screen, for example. Over time, the software can identify a pattern of neural firing that occurs consistently while the participant imagines that task, and then execute the task for the person.

Neuralink’s current trial is aimed at helping people with paralyzed limbs control computers or smartphones. Brain-computer interfaces, commonly known as BCIs, can also be used to control devices such as wheelchairs.

A number of companies are testing BCIs. What is different about Neuralink?

Non-invasive devices that sit on the outside of a person’s head have long been used in clinical trials, but have not received approval from the Food and Drug Administration for commercial development.

A visitor tries out a BCI system during the 2023 China International Fair for Trade in Services in Beijing. Li Xin/Xinhua via Getty Images

There are other brain-computer devices, like Neuralink’s, that are fully implanted and wireless. However, the N1 implant combines more technologies in a single device: It can target individual neurons, record from thousands of sites in the brain and recharge its small battery wirelessly. These are important advances that could lead to better outcomes.

Why is Neuralink drawing criticism?

Neuralink received FDA approval for human trials in May 2023. Musk announced the company’s first human trial on his social media platform, X – formerly Twitter – in January 2024.

Information about the implant is scarce, however, apart from a brochure aimed at recruiting trial subjects. Neuralink did not register its trial at ClinicalTrials.gov, as is customary and as some academic journals require.

Some scientists are concerned about this lack of transparency. Sharing information about clinical trials is important because it helps other investigators learn about areas related to their research and can improve patient care. Academic journals can also be biased towards positive results, preventing researchers from learning from unsuccessful experiments.

Fellows at the Hastings Center, a bioethics think tank, have cautioned that Musk’s brand of “science by press release, while increasingly common, is not science.” They advise against relying on someone with a significant financial stake in a research outcome to act as the sole source of information.

When government agencies or philanthropic groups fund scientific research, the aim is to promote the public good. Neuralink, on the other hand, incorporates a private equity model, which is becoming increasingly popular in science. Firms that pool funds from private investors to support scientific breakthroughs may strive to do good, but they also strive to maximize profits, which may conflict with the interests of patients.

Neuralink’s first human implant was announced on Elon Musk’s social media platform X, formerly known as Twitter, in January 2024. NurPhoto via Getty Images

In 2022, the US Department of Agriculture investigated animal cruelty at Neuralink, according to a Reuters report, after employees accused the company of rushing tests and botching procedures on test animals in a race for results. The agency’s inspection found no violations, according to a letter from the USDA secretary to lawmakers, reviewed by Reuters. However, the secretary noted a self-reported “surgical adverse event” in 2019 by Neuralink.

In a separate incident also reported by Reuters, Neuralink was fined by the Department of Transportation for violating rules on transporting hazardous materials, including a flammable liquid.

What other ethical issues does the Neuralink trial raise?

When brain-computer interfaces are used to help patients with disabling conditions function more independently, for example by helping them communicate or move around, this can greatly improve their quality of life. In particular, it helps people gain a sense of their own agency or autonomy – one of the main principles of medical ethics.

However, medical interventions can have unintended consequences. In the case of BCIs, scientists and ethicists are particularly concerned about the potential for identity theft, password hacking and blackmail. Given that these devices access users’ thoughts, there is also the possibility that third parties could manipulate users and undermine their autonomy.

The ethics of medicine require physicians to help patients, and to minimize potential harm. In addition to errors and privacy risks, scientists worry about the potential adverse effects of a fully implantable device like Neuralink, since device components are not easily replaced after implantation.

When considering any invasive medical intervention, patients, providers and developers seek a balance between risk and benefit. At current levels of safety and reliability, a permanent implant would have to be of great benefit to justify the uncertain risks.

What lies ahead?

So far, Neuralink’s trials have focused on patients with paralysis. Musk has said that his ultimate goal for BCIs, however, is to help humanity – including healthy people – “keep pace” with artificial intelligence.

This raises questions about another central principle of medical ethics: justice. Certain types of supercharged brain-computer interfaces could exacerbate social inequalities if only wealthy citizens had access to enhancements.

Of more immediate concern, however, is the possibility that the device could prove helpful to people with disabilities, but then become unavailable due to a loss of research funding. For patients whose access to a device is tied to a research study, the prospect of losing it after the study ends can be devastating. This raises major questions about whether it is ever ethical to provide early access to breakthrough medical interventions before they receive full FDA approval.

Clear ethical and legal guidelines are needed to ensure that the benefits arising from scientific innovations such as the Neuralink brain chip are balanced with patient safety and the benefit of society.

This article is republished from The Conversation, a non-profit, independent news organization that brings you facts and analysis to help you make sense of our complex world.

It was written by: Nancy S. Jecker, University of Washington and Andrew Ko, University of Washington.

The authors do not work for, consult with, or own shares in, or receive funding from, any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
