The Emerging Life Sciences and the National Security State

13 Sep 2016

Jonathan Moreno believes that the ability of the US military’s Third Offset Strategy to blunt a new generation of “disruptive” technologies will largely depend on the successful mixing of neuroscience and engineering. Today, Moreno describes the “game-changing” capabilities that neurotechnologies may soon provide.

This article was originally published by the Air Force Research Institute in the Fall 2016 issue (Vol. 10, Issue 3) of Strategic Studies Quarterly.

In 2014 Secretary of Defense Chuck Hagel described a new “game-changing offset strategy” intended to counter a new generation of disruptive technologies being developed by China and Russia, innovations that could undermine US military advantages. Secretary Hagel’s strategy has come to be known as the third offset, following in the line of the Eisenhower administration’s “New Look” that emphasized massive nuclear retaliation and the Carter administration’s “Offset Strategy” that led to precision-guided munitions like laser-guided “smart bombs” and computerized command-and-control systems. These technologies were cutting edge in their day, but in the past two decades possibilities have emerged that require new ways of thinking about defense research and development, particularly in the life sciences.

So far the concept of a third offset seems mainly to be a convenient handle for a menu of new defense capabilities, many based on the convergence of neuroscience and engineering. These novel capabilities include autonomous "deep learning" machines and systems for early warning based on crunching big data, human-machine collaboration to help human operators make decisions, assisted human operations in which machines like exoskeletons help humans operate more efficiently, and advanced human-machine teaming in which a human works with an unmanned system.

Notably, all of these technologies involve a combination of applied neuroscience and engineering. For example, so-called autonomous systems may benefit from software that has been developed with improved knowledge derived from basic science about how the brain processes information. Although the brain is often called a computer, it is more accurate to say that the brain is an evolved biological system that computes while it adapts. The adaptive abilities of the brain are the salient properties that underlie deep learning and set it apart from artificial systems that have historically been “dumb,” relying on their original programming. As a colleague at the University of Pennsylvania remarked to me a few years ago, Google has much more memory than humans do, but the software is not as good.
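
To make that contrast concrete, here is a minimal sketch in Python (mine, not the article's): a hard-coded rule that behaves only as originally programmed, alongside a tiny perceptron that adapts its weights from labeled examples. The data and parameters are invented purely for illustration.

```python
# Minimal illustration (not from the article): a fixed rule vs. a system
# that adapts. The "dumb" classifier applies a hand-written threshold;
# the perceptron adjusts its own weights from labeled examples.

def fixed_rule(x):
    # Hard-coded: behaves only as originally programmed.
    return 1 if x[0] > 0.5 else 0

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    # Weights and bias start at zero and adapt to the data.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred                      # learning signal
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy data: the label actually depends on the *second* feature,
# so the fixed rule is wrong and only the adaptive model recovers it.
samples = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.9), (0.2, 0.8)]
labels  = [0, 0, 1, 1]

w, b = train_perceptron(samples, labels)
for x, y in zip(samples, labels):
    adaptive = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
    print(x, "fixed:", fixed_rule(x), "adaptive:", adaptive, "truth:", y)
```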

There is currently an argument not only about whether offensive autonomous weapons systems can be accountable but also about whether they can be controlled (leaving aside all the technical and epistemological issues about the meaning of autonomy in this setting). A system capable of making suitably complex decisions independent of a human operator could challenge conventions about accountability. That is a solvable problem; presumably new conventions for the laws of autonomous armed conflict can be devised. Some have suggested that, far from creating new problems for commanders, these complex devices can have ethics rules built into their programming so they will be less likely than humans to violate military ethics. However, the philosopher Nick Bostrom, in his book Superintelligence: Paths, Dangers, Strategies, has argued that silicon-based machine intelligence is not only inevitable but inherently quite dangerous, whether in the context of armed conflict or not. An intelligent machine equipped with adaptive deep learning could both program itself and develop other machines to integrate into its system, thereby vastly expanding its computational capacity to the point that it would achieve what Bostrom calls superintelligence. Suppose such a device were to develop certain goals that would serve the completion of its computational task—for example, the solution of a seemingly impossible mathematical problem. In that case it could in principle subjugate every bit of matter on Earth—and perhaps beyond—to the job of information processing. Such an outcome would not only leave human beings entirely dependent on the superintelligence for their survival but could lead to the end of human life itself.

This doomsday scenario is met with skepticism among computer scientists—who regard their devices as exceptionally vulnerable to hacking, plug-pulling, or even a swift kick—and by biologists, who do not believe any inorganic system can master all the skills of even a fairly simple biological brain. By contrast, human-machine collaboration is already here, from iPhones pulling information off the cloud to augmented-reality-equipped visors to military pack animals like Boston Dynamics' "BigDog" (though the prototype needs to get a lot quieter to be viable for its intended purpose). But these devices require the use of eyes and hands and entail some delay in response. Some medical devices are implantable and respond immediately, such as implantable cardioverter-defibrillators for patients at risk of cardiac arrest and cochlear implants for those with hearing impairments. In neuroscience, strides have been made with brain implants to relieve symptoms of movement disorders and perhaps even depression. Currently these chips have only 96 electrodes, but the Defense Advanced Research Projects Agency (DARPA) is supporting work on a new implantable neural array that would include hundreds of thousands of electrodes. Clearly, advances in materials science will be required to achieve that goal, but if these super neural chips can be developed and safely introduced into the brain with reliable results—all very high bars—the relationship between an operator and a machine will be utterly transformed (think of the thought-controlled fighter Clint Eastwood steals in the film Firefox). At that point we would be led to ponder important questions about the nature and limits of the human being in relation to the machine.

Not all neurotechnology-related developments entail such a high level of advanced science or engineering. According to some, improved decision making and accelerated learning can be achieved with relatively simple neural stimulation devices used in the right way. A number of studies have reported that a painless technology called transcranial magnetic stimulation (TMS) can improve visual perception in healthy people.1 In TMS, a magnetic coil is placed above the head, and electrically produced magnetic pulses pass through the cortex. These pulses can alter the firing rate of certain neurons. Researchers hope that TMS may someday be used to treat stroke patients or those with dementias or depression. Research also suggests that TMS could help healthy people attain better-than-normal visual perception. The military application is provocative: soldiers on reconnaissance duty, snipers, or fighter pilots operating in a target-rich environment could benefit. A 2009 National Research Council (NRC) report, Opportunities in Neuroscience for Future Army Applications, lists in-helmet and in-vehicle TMS as long-term projects to keep on the research and development radar.
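
Purely to illustrate the underlying idea that an externally induced current can shift a neuron's firing rate, here is a toy leaky integrate-and-fire simulation in Python. It is a sketch of the general principle only; every parameter is invented for the example and has no physiological authority.

```python
# Toy leaky integrate-and-fire (LIF) neuron, purely illustrative:
# it shows only that adding an external pulse train (a crude stand-in
# for stimulation-induced current) changes the firing rate. All
# parameters are invented for the sketch, not physiological values.

def firing_rate(pulse_amp=0.0, t_total=1000, dt=1.0):
    v, v_rest, v_thresh = 0.0, 0.0, 1.0   # membrane state, arbitrary units
    tau, drive = 20.0, 0.04               # leak time constant, baseline input
    spikes = 0
    for step in range(int(t_total / dt)):
        i_ext = drive
        if step % 100 < 5:                # brief "pulse" every 100 ms
            i_ext += pulse_amp
        v += dt * (-(v - v_rest) / tau + i_ext)
        if v >= v_thresh:                 # threshold crossed: spike and reset
            spikes += 1
            v = v_rest
    return spikes / (t_total / 1000.0)    # spikes per second

# Baseline input alone never reaches threshold; the pulses do.
print("baseline rate:", firing_rate(0.0))
print("with pulses:  ", firing_rate(0.08))
```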

Of course, in the twenty-first century, national security strategists face a multipolar world that also includes non-state actors capable of terror attacks that pose mainly a psychological rather than an existential threat. Some technology disruptors are, in the language of a 2014 NRC report, "emerging and readily available."2 To use one example, the cheaper cousin of TMS, called transcranial direct current stimulation (tDCS), might turn out to be just as effective at improving cognitive abilities as TMS. All tDCS requires is a 9-volt battery and a couple of electrodes.3 Enhanced cognition might also be accomplished with new and better pharmaceuticals. A trailblazer in this regard is modafinil, the generic form of the antisleep stimulant marketed as Provigil, which is already approved for use in the Air Force. In a different vein, both terrorist organizations and conventional militaries would like their fighters to be stronger and faster. There is no reason in principle why prosthetic devices like exoskeletons and artificial limbs could not improve or even replace physical functions. Terrorist groups might not be as inhibited as conventional forces about recruiting fighters to undergo deliberate amputation for the sake of significantly improved performance.
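
As a rough check on what "a 9-volt battery and a couple of electrodes" actually delivers, the arithmetic below uses stimulation parameters typical of the tDCS literature: a regulated current of about 1-2 mA through sponge electrodes of roughly 25-35 cm². These specific numbers are assumptions for illustration, not figures from this article.

```python
# Back-of-the-envelope tDCS arithmetic (illustrative; typical values
# from the published literature, not from this article). Devices
# regulate current, so the 9 V battery is only the supply; what
# reaches the scalp is on the order of 1-2 mA.

current_ma = 2.0          # regulated stimulation current, mA (assumed typical)
electrode_cm2 = 35.0      # sponge electrode area, cm^2 (assumed typical)

density = current_ma / electrode_cm2          # mA per cm^2 at the scalp
charge_per_session = current_ma * 20 * 60     # mA*s over a 20-minute session

print(f"current density: {density:.3f} mA/cm^2")
print(f"delivered charge: {charge_per_session / 1000:.1f} coulombs")
```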

Especially in the context of terrorism, looming in the background are variations of the age-old problem of biosecurity. Since ancient times, and even in the biblical account of the plagues unleashed against pharaonic Egypt, microorganisms have represented a special kind of scourge. In the American War of Independence, George Washington worried that the British were spreading smallpox in Boston, and during the Civil War, Confederate forces dropped horse carcasses in wells as they retreated from Union armies. Modern biology presents new opportunities to add to the list of select biological threat agents. Synthetic biology uses engineering principles to create new biological entities. Cells can be engineered to perform novel functions and provide new drugs, materials, and energy sources. But besides carrying the risk of unintended consequences, such engineered organisms may also be designed to be harmful to humans, animals, and the environment. Increasingly, any bright high school biology student can master "synbio" techniques, and the cost of raw materials like yeast and Escherichia coli (E. coli) is dropping rapidly.

Besides synthetic biology—which generally builds DNA molecules out of smaller parts—powerful and efficient new laboratory technologies grouped under the heading of gene editing use an ancient bacterial immune system to modify genes with great precision. Gene editing techniques like clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 are already being used in agriculture and can modify genes in pests like mosquitoes to render them infertile. Using these techniques, genes have been inactivated in human cell lines in the laboratory, but experiments on human beings are not permitted by any national regulatory system.
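
To give a concrete sense of how that targeting works computationally, the sketch below scans a DNA string for candidate Cas9 sites: a 20-nucleotide protospacer immediately followed by the "NGG" PAM motif that the commonly used SpCas9 enzyme requires. It is illustrative only; real guide design relies on dedicated tools that also score off-target matches across a whole genome. The demo sequence is made up.

```python
import re

# Illustrative only: find candidate SpCas9 target sites in a DNA string.
# Cas9 cuts where a 20-nt protospacer is immediately followed by an
# "NGG" PAM; real guide-design tools also score genome-wide off-targets.

COMP = str.maketrans("ACGT", "TGCA")

def candidate_sites(seq):
    seq = seq.upper()
    sites = []
    # Scan the given strand and its reverse complement.
    for strand, s in (("+", seq), ("-", seq.translate(COMP)[::-1])):
        # Lookahead so overlapping candidate sites are all reported.
        for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", s):
            sites.append((strand, m.start(), m.group(1), m.group(2)))
    return sites

demo = "ATGCGTACCTGAGGATCCATTGCAGGTTACGCTAGCAAGGCTA"  # made-up sequence
for strand, pos, protospacer, pam in candidate_sites(demo):
    print(strand, pos, protospacer, pam)
```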

What is especially remarkable and controversial about gene editing is that the DNA in fertilized human eggs can be modified in germ cells so that novel traits can be inherited. Human germ-line modifications have long been viewed as unethical, unlike modifications in the somatic or body cells of an individual. These techniques bring germ-line changes closer to practical reality. There are plausible arguments for eliminating, say, breast cancer-related genes. The techniques also stimulate visions of armies made up of "designer soldiers." However, apart from the fact that no one can predict the results of such experiments (genomes are vastly complex, and their manifestations depend on environmental triggers that cannot be factored in with confidence), the payoff for an aggressor would lie nearly two decades in the future, and in the meantime concealment of the project would prove very difficult. Such science-fiction scenarios are compelling, but from a security-planning standpoint they are ludicrous.

Of more immediate interest is the need to bring certain neurotechnologies under extant international conventions as "dual use," research that can be used for malign as well as benign purposes. TMS and tDCS are among the most likely neurotechnological candidates for consideration in the periodic revisions of the Biological and Toxin Weapons Convention (later in 2016) and the Chemical Weapons Convention (2017). In addition, "calmatives" for crowd control—such as the opioid carfentanil—have been used by Russian special forces and have attracted the attention of the US military. Of interest for interrogation operations, neuroeconomists have studied the usefulness of the artificially introduced brain hormone oxytocin in enhancing trust. The Briton Malcolm Dando and his colleagues have taken the lead in bringing these issues to the attention of the convention revision bodies, while my former postdoctoral fellow Nick Evans and I have initiated a project to catalogue other neurotechnologies that are candidates for regulation.

Finally, I offer a word about the changing politics and sociology of national security research. Discussions about national security and science usually focus on the physical sciences and engineering, but the life sciences, including biology and the social and behavioral sciences, have played a distinctive role in defense and intelligence research and development. Especially in the past 50 years, these sciences’ fortunes have ebbed and flowed depending on political events, cultural trends, and developments in the sciences themselves. In the late 1960s, much social and behavioral science undertaken on behalf of national security agencies was seen as politically objectionable and moved away from university campuses to contract research organizations. Especially in the case of cultural studies of problems like communist insurgency, some argue that the result was an inherent conflict of interest, with paymasters getting the answers they wanted and research receiving inadequate peer review. But social and behavioral sciences are increasingly converging with basic physical science. Developments such as those described here in fields like genetics and neuroscience have brought much of this activity back to campus and appear to be the leading edge of a new era in the academic-industrial complex and the national security state.

Notes

1. See, for example, Michael L. Waterston and Christopher C. Pack, “Improved Discrimination of Visual Stimuli Following Repetitive Transcranial Magnetic Stimulation,” PLoS One 5, no. 4 (28 April 2010): 1–10, doi: 10.1371/journal.pone.0010354.

2. Jean-Lou Chameau, W. F. Ballhaus Jr., Herbert Lin, and National Research Council, Emerging and Readily Available Technologies and National Security: A Framework for Addressing Ethical, Legal, and Societal Issues (Washington, DC: National Academies Press, 2014), ix, doi: 10.17226/18512.

3. Alexandre F. DaSilva, Magdalena Sarah Volz, Marom Bikson, and Felipe Fregni, “Electrode Positioning and Montage in Transcranial Direct Current Stimulation,” Journal of Visualized Experiments 51 (2011): e2744, doi: 10.3791/2744.

About the Author

Jonathan D. Moreno, Ph.D., is the David and Lyn Silfen University Professor of Ethics at the University of Pennsylvania, where he is also Professor of Medical Ethics and Health Policy, of History and Sociology of Science, and of Philosophy.

