The ethics of neuroscience and AI

Neuroscience is merging with technology in ways that will have a huge impact on society. The changes won’t be limited to improvements in health or brain function; there will also be profound ethical challenges, and possibly even a redefinition of what it means to be human.

So, what are some of the ethical issues created by merging machines with our brains—the organs that define us as humans and as individuals?

Bias

Can AI be biased? Algorithms are coded in a logical way: data are fed in and clear outcomes are defined. The primary role of AI is to find patterns in huge data sets, so these systems would appear to be impartial machines.

But the rules they operate by are encoded into the programs, and the data fed to them come from humans, who are not without flaws and biases. AI, then, reflects the biases of its human creators.
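To make the mechanism concrete, here is a minimal, purely hypothetical sketch (the groups, numbers and ‘risk predictor’ are invented for illustration and are not the system discussed below): a model that simply learns how often each group was flagged in historical records will faithfully reproduce whatever bias is baked into those records.

```python
# Hypothetical toy example: a 'risk predictor' that learns the historical
# flagging rate for each group. If past human decisions flagged group B more
# often, the model reproduces that bias, regardless of actual behaviour.
from collections import defaultdict

# Invented training records: (group, was_flagged_as_high_risk)
training_data = ([("A", 0)] * 70 + [("A", 1)] * 30 +
                 [("B", 0)] * 40 + [("B", 1)] * 60)

def fit(records):
    """Learn the observed flagging rate per group from historical records."""
    counts = defaultdict(lambda: [0, 0])  # group -> [times flagged, total]
    for group, flagged in records:
        counts[group][0] += flagged
        counts[group][1] += 1
    return {group: flagged / total
            for group, (flagged, total) in counts.items()}

model = fit(training_data)
print(model)  # {'A': 0.3, 'B': 0.6} -- the 'impartial' model echoes the biased labels
```

Real systems are far more complex, but the principle is the same: the patterns an algorithm finds are only as impartial as the data it is given.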

In the US, for example, an AI-powered ‘judge’ predicted that the likelihood of criminal re-offence was significantly higher among African-American defendants than among Caucasian defendants.

And yet when both groups were tracked for the following two years, the rates of re-offence were the same; African-American defendants had been wrongly stereotyped as more likely to commit a future offence. AI researchers are becoming more aware of the bias problem and are working to overcome it.

IBM says more than 180 different human biases have been defined, and it is working to address them in AI programs.

Identity and responsibility

Today, we are the ‘agents’ of our own actions – meaning we are in control. But technologies that alter our brain activity have the potential to blur that line.

For example, what is someone’s responsibility if they commit an out-of-character crime while being stimulated by such a device? Would the answers be any different for somebody on antidepressants or other medications, which also affect brain activity?

Identity could be another issue with devices that can change our patterns of brain activity. One anecdotal report described a patient with a brain stimulation device who sometimes wondered “who he was”.

Reports like this are extremely rare, and a loss of identity is not unusual in people with brain degeneration, which suggests an alternative explanation for his feelings. Nevertheless, such reports remind us that changing our brain activity can, at least in theory, change our sense of self.

Taking the pulse of the public can help guide ethically fraught research. For example, one survey of students found that people generally don’t like the idea of altering their personality traits.

They were willing to improve cognitive abilities such as attention, alertness and memorisation, but empathy and kindness were off limits—perhaps because they shape a person’s emotions, identity and sense of self.

In raising these issues, the point isn’t to question the value of brain stimulation devices as medical therapies—they’ve proven themselves as safe, effective treatments, and patients are delighted to have their quality of life restored.

The point is that as the technology progresses, we need to make sure that the legal and ethical guidelines keep pace. 

Brain enhancement

Enhancing brain function is certain to be a goal of future research, and the military is perhaps the clearest example of who might want it. The United States defence research agency DARPA is already investing heavily in brain-computer interfaces that could one day boost the combat readiness, performance and recovery of military personnel.

But is cognitive enhancement something we should allow? The truth is that some people already pursue it: ‘smart’ drugs like Ritalin and modafinil can sharpen focus and extend attention, while Prozac alters mood to ward off depression and anxiety.

The prospect of cognitive enhancement raises issues of equality and fairness: who should have access to these enhancements, and would they be limited to those who can afford them? Would a high test score achieved with the help of brain enhancement be fair? Professional sport already confronts similar issues with performance-enhancing drugs.

The definition of human

Some modern robots are being made to look decidedly human, but to what extent should we treat them as human? In October 2017 Sophia, a social robot capable of more than 50 facial expressions, was made a citizen of Saudi Arabia, to the concern of many experts around the world.

A recent study also found that people subconsciously treat robots in a very human way. When a robot pleaded, "Please do not switch me off!", almost 30% of participants complied with its plea and left it on, even though the researchers had asked them to switch it off.

Privacy

Today’s biggest technology companies collect huge amounts of personal information because they can sell it for commercial gain. If or when your brain activity can be recorded by a wearable EEG headset, companies will therefore find huge value in accessing that brain-based information.

Imagine, in the future, if you merely thought about buying a new Smart TV while wearing an EEG headset, and that information was relayed to a big online retailer.

The retailer could use AI programs to automatically contact you with their latest specials on Smart TVs. You wouldn’t even have to act on your thoughts by typing them into a search engine; your head-mounted device would record the activity pattern caused by thinking ‘Smart TV’, and commercial operators could act on it.

There are huge privacy concerns around who would or should have access to your brain activity. For example, what if a health insurance company wanted to buy your brain activity data, which could indicate whether you had a mental health disorder?

These are important issues to consider as technology improves and brain data becomes more readily available.

Morality

One goal of robotics researchers is to build robots with general intelligence, run by AI and guided by a sense of morality. But whose morals should a robot adopt? What principles should guide its decisions? And what happens when human input is purposely dishonourable?

For example, less than a day after its release in March 2016 to engage in ‘conversational understanding’ with the general public, Microsoft’s AI-based chatbot Tay became a Hitler-loving racist and conspiracy theorist, based on its interactions on Twitter.

Teaching general moral principles to robots and letting them deduce appropriate decisions won’t work all the time either; there will always be exceptions to the rule or ambiguous situations. The other option is to have the robot learn through experience, just as humans do, but perhaps with ethicists guiding it. 

But what moral code should the robot learn? There are many to choose from; these are just a few: the Golden Rule (do unto others as you would have them do unto you); utilitarianism (act for the greater good); and the categorical imperative (act from moral duty to do what is right, regardless of outcome).

Humans still often disagree on what the right moral decisions are, even within a single country or culture.
