A primary concern with the home/DIY use of neurotechnology is safety and regulation. One example of a neurotechnology is transcranial direct current stimulation (tDCS), which delivers a low-intensity direct current via electrodes placed on the scalp to stimulate specific areas of the brain. Although there is some evidence that tDCS can improve working memory, attention, and decision-making (Kuo and Nitsche, 2015), tDCS and other noninvasive brain stimulation techniques remain highly experimental in the U.S. Nonetheless, as word of tDCS’s potential reaches the public, tDCS blogs and consumer devices have become more accessible and commonplace. Because most tDCS devices are not marketed for clinical purposes, none of the consumer kits have undergone the testing process that a drug or medical device must pass for FDA approval. The FDA defines a medical device as “an instrument that is…intended to affect the structure or any function of the body” (FDA, 2018). Under this definition, tDCS can be considered a medical device; however, the DIY use of these devices is not regulated. Furthermore, scientists who study tDCS have shown that it can cause undesired effects (Wurzman et al., 2016). For example, enhancement of some cognitive abilities may come at the cost of others, and tDCS effects vary widely across individuals. Stimulation duration, electrode placement, and current intensity all influence the outcome. Suspicion about enhancement claims has led to more controlled laboratory experiments; however, most research studies use brain stimulation for different purposes than do home tDCS users. The technical specifications for medical uses have yet to be clearly defined, let alone the parameters for cognitive enhancement (Landhuis, 2017).
This lack of regulation, and of knowledge about how to produce reliable results, calls into question the safety of DIY noninvasive brain stimulation. Safety and regulation are also issues in the field of pharmacological neuroenhancement. For example, the use of stimulants and “smart drugs” inevitably raises safety concerns about adverse side effects and abuse potential. While their use may not be inherently unethical, steps must be taken to ensure that they are safe for their intended purposes. Even so, some FDA-approved drugs are widely used for off-label, nontherapeutic purposes. A special set of concerns arises when these cognitive drugs or neurotechnologies are used for enhancement. As with performance-enhancing drugs, some individuals may use enhancements regardless of the risk if the expected benefits are great enough. With neurotech companies claiming that their devices can “increase concentration” or “amp up brain function,” it is hard to determine whether the neurohype surrounding such devices will promote responsible home use.
A second issue surrounding the DIY use of neurotechnology is a question of justice. What would happen if the standard of “normal” cognitive performance were raised, but only the affluent had the means to attain it? Such a gap would ripple through the very fabric of society. For instance, Halo Neuroscience, a large neurotech company, sells $700 headgear aimed at athletes; it is meant to stimulate the motor cortex and promises gains in strength and endurance (Landhuis, 2017). These expensive neurotechnologies, like performance-enhancing drugs, raise concerns about access. However, the possibility of unequal distribution of resources is not a reason to reject neurotechnology and drug enhancements outright. Education itself can be considered a cognitive enhancer that is distributed very inequitably, yet society does not oppose private education. Justice issues are not new when it comes to innovative technologies and drug enhancements, such as steroids or Adderall; here, too, some will be able to afford these enhancements and others will not. Although social justice issues are common in our society, neurotechnology and pharmacological enhancement raise a different set of concerns than, say, organ transplants. Organ transplants and cognitive enhancements both have the potential to improve quality of life. However, the scarcity in organ transplantation is not primarily one of limited financial resources but of the limited supply of organs available for transplant. Neurotechnology is different because it will not necessarily be in short supply, although the financial means to acquire it may be (Vaseashta, 2012). Still, it does not seem immoral to allow some people to benefit from something simply because not everyone can.
Nonetheless, it is difficult to predict whether neurotechnologies will become cheaper and more affordable in the long term, or whether their long-term effect will be to exacerbate inequality and discrimination in our society. Overall, the justice question shifts the focus away from the technology itself to the social consequences of providing it, invoking overarching principles about inequality and resource distribution.
A third concern with the use of neurotechnology is that some devices might compromise a patient’s privacy. For example, neurotechnologies that aim to decode people’s intentions from brain activity raise concerns about data integrity. Brain-computer interface technologies are currently being developed to help disabled patients perform certain tasks, including robotic limbs that can “autonomously” interpret and execute a patient’s motor intentions (Muller and Rotter, 2017). The more precisely a device can interpret and share someone’s internal state, the more sensitive the read-out of brain activity becomes. Manipulation of such brain data could harm patients whose minds are involuntarily “read.” Neuroprosthetic devices might even be hacked or taken over by viruses, just like computers, telephones, and industrial facilities. Furthermore, if a user’s intentions are transferred to a machine, computer-based translation may transform the user’s identity and obscure our concept of responsibility. While brain-computer interfaces hold great potential for research and medicine, they pose a great challenge: determining whether, or under what conditions, it is legitimate or ethical to gain access to, or interfere with, another person’s neural activity. Attempts to decode mental information are already occurring in legal and commercial settings, where neuroimaging has become an advertising or forensic tool. For instance, Google uses neurotechnology to detect consumer preferences and hidden impressions of its ads or products (Ienca, 2017). As neurotechnology becomes more advanced, we may need new laws that protect mental privacy and regulate responsibility on both the human and the artificial side.
Unlike issues of safety and access, privacy concerns have no direct parallel in pharmacological neuroenhancement: data security does not apply to cognitive enhancement drugs, since no brain information is being recorded or manipulated. Yet in another sense, the data security issue is not novel, because traditional notions of privacy are already being eroded by our increasingly digital world. For instance, Facebook combines location and other personal data to suggest friends and display personalized advertisements, and information gleaned from personal profiles feeds large-scale market research. The constant threat of breaches, surveillance, and online data collection is familiar territory. In this sense, neurotechnology can be seen as simply another technological trend that might jeopardize our privacy. However, when mental information is no longer private, nothing is private, and the very notion of subjectivity (the quality of existing in someone’s mind rather than in the external world) becomes meaningless. In this way, neurotechnology raises privacy and data security concerns with even greater urgency.