Consumer devices may soon be able to directly access and interfere with the human brain - but this raises unprecedented ethical and legal questions. How can we leverage the benefits of this technology for therapeutic purposes while preventing its misuse?
Just about every technological revolution raises a crucial question: how can society unlock the potential of the new technology while limiting the risk of it being used for malevolent purposes? Today, that question is being applied to neurotechnology, with advances in neural interfaces, bioelectronics, brain-computer interfaces and more. This progress could improve the lives of millions of patients by restoring their motor function, mental and cognitive capabilities or ability to communicate. But it also opens the door to businesses or even criminals making use of our most personal data, including our thoughts, moods and memories, and could widen the gap between those who can afford such technology - to treat an illness or improve their performance, for example - and those who can't. In other words, combined breakthroughs in neuroscience and engineering are ushering in a new era that can transform our understanding of the human brain and augment our capacities in unimaginable ways. Yet they're also raising very thorny ethical and legal questions, in a world where established principles are increasingly being cast aside.
Compared with conventional biological data, neural data come with an entirely new set of risks and opportunities. They reveal our mental, cognitive and emotional states, getting right to the essence of who we are as individuals - our thinking capability and experiences, our memory, our planning process and our intentions, perceptions and emotions. These states influence our behavior and shape our decisions. If devices can interpret our neural data, they could eventually alter our cognitive and emotional processes - changes that would, in turn, be reflected in the neural data themselves. For instance, they could apply stimuli such as electrical impulses or magnetic fields, which - coupled with the brain's natural neuroplasticity - could give rise to new mechanisms of brain activity or change existing ones.

"It's imperative that we speed the development of neurotechnology," says Marcello Ienca from the Technical University of Munich, a philosopher and neuroethics expert who headed EPFL's Hybrid Minds Project from 2021 to 2024. "Around a third of the world's population will experience a neurological disorder and around half will suffer from a mental health condition at some point in their lives. Advancements in neurotechnology can help us understand the human brain and treat these people. But once we go beyond R&D and medical applications, it's a slippery slope. We live in a world where our brain is our most important asset - it underpins the data-driven business models of social media operators and online retailers. These companies want to know all about our psychological preferences so that they can influence our purchasing decisions, exploit our vulnerabilities and keep us captive. Neural data can make their data-driven applications more accurate and effective. What's more, companies can use brain-computer interfaces to combine the behavioral data they already have with more complete datasets collected directly from the brain. It's as if they could go straight to the source. Consumers today aren't aware of all the information contained in their brain signals."
Portable mini EEG machines
A number of businesses, generally working in conjunction with neuroscientists, have developed implants for treating disorders such as depression, Parkinson's disease and paraplegia. For now, brain implants can be used only for therapeutic purposes. But some companies have their sights set higher. Neuralink, co-founded by Elon Musk, is already venturing into this territory. "It's the only company that has clearly stated it wants to implant brain chips in millions of people for nonmedical purposes - that is, solely to improve their performance," says Ienca. However, he's skeptical. "I'd be surprised if the company manages to implant 100 brain chips over the next five years. But the idea that Neuralink could convince a significant number of people to share their neural data is still worrying. My biggest concern is that if the company slips up - if a patient dies or there's a major data leak, for example - that will undermine trust in neurotechnology and the scientific community as a whole."
For now, noninvasive technology in the form of portable mini EEG machines is being integrated into headphones, fitness bands and sleep trackers in order to monitor our brain activity. "These devices don't require surgery or come with any physical risks, so companies can use them for nonmedical purposes to support their business goals, whether to control devices remotely, provide neurogaming or entertainment services or offer something else entirely," says Ienca. "These devices aren't as effective as brain implants but they allow companies to collect data from a larger pool of consumers - and today, more data means better-trained AI programs and more robust predictive algorithms."
Collecting data on consumer preferences
It's no coincidence that Musk has branched out into neurotechnology and that his AI business acquired Twitter, now X. Nor that Apple has patented brain activity sensing technology that can be integrated into future generations of its AirPods. "These neurotechnological advancements are what data-driven business models have been waiting for - they let companies collect data on consumer preferences, intentions and wishes straight from the neurobiological source," says Ienca. "And there's a real chance that our mental privacy is at stake, with the risk that the technology will be used to manipulate and exploit consumers."
"We're about to witness a boom in neurotechnology," according to Ienca. "This decade could be to neurotechnology what the 1980s were to personal computers. Neurotechnology will become personal, and we'll use it not because it's useful but because someone has convinced us it's useful. The gaming industry could be the Trojan horse for this kind of shift by developing neurogames where players can control avatars with their thoughts, for example." And we won't even go into the military applications.
"In 2017, we worked with colleagues at ETH Zurich to introduce the concept of neurorights, which is based on the idea that protecting our thoughts and mental processes is a fundamental human right - one that understandably didn't occur to the individuals who drafted the Universal Declaration of Human Rights back in 1948. So we suggested updating the Declaration by either adding new rights or interpreting the existing ones so they also apply to neural data," says Marcello Ienca.
The first legally binding international document to address the issue of data protection was the Council of Europe's Convention 108 on data protection, adopted in 1981. The Council of Europe has 46 member states, including Switzerland, but 55 countries have signed the convention. Ienca recently served on a Council of Europe ad hoc committee that wrote guidelines for neural data protection and proposed an amendment to Convention 108, which will be discussed in a plenary council meeting in June 2025. He also sat on a UNESCO working group that drafted recommendations on the ethics of neurotechnology, which are currently being reviewed by UNESCO member states. In addition, EPFL's College of Humanities, in collaboration with the Swiss Commission for UNESCO, hosted the Western European regional consultation on the first draft text of the UNESCO Recommendation on the Ethics of Neurotechnology last July.
A number of local governments are already rolling out neural data protection initiatives. In April 2024, Colorado became the first US state to adopt an act protecting neural data in the same way as other types of personal data such as DNA, fingerprints, facial images and biometric data. California followed suit with a similar law in September.
"It's true that as things currently stand, rules and regulations don't seem to be a priority and can easily be ignored - but that doesn't mean we should give up our efforts to establish a legal framework," says Ienca. "That's why we need to combine this top-down regulatory approach with a bottom-up one based on making consumers aware of the value of their data, the actual risks involved and how they can mitigate those risks."
He also points to another important issue: regulators must take steps to prevent monopolies from being formed, like the ones we're seeing today in AI and the IT industry. "Having a monopoly on the human brain is the riskiest thing that could happen to our species," says Ienca. "Preventing that from happening will be very difficult, but one thing we could do is adopt a strong European strategy. European countries still abide by the rule of law, and it's important that we introduce democratic strategies to guide neurotechnology development. Also, Europe - and especially EPFL - is ideally positioned in the field of neurotechnology. We need to strike the right balance between a complete lack of regulation - which is what people like Musk are advocating - and overly burdensome regulations that stifle innovation. European universities could serve as a catalyst by promoting responsible innovation that upholds our fundamental rights and aims to limit the societal impacts of neurotechnology, thereby asserting their global leadership in this field."