The Military and the Machine

25 Nov 2016

Since artificial intelligence (AI) and autonomous systems (AS) are already acting as “disruptive technologies” in the civilian realm, Zoe Stanley-Lockman isn’t surprised that their influence is becoming more prevalent in the military domain. Here are some of the effects AI and AS may soon have, particularly on command structures.

This article was originally published by the European Union Institute for Security Studies (EUISS) on 18 November 2016.

Autonomous systems (AS) and artificial intelligence (AI) are already widely recognised as ‘disruptive’ in civilian spheres. Technologies with military applications are not exempt: the ways of war will also undergo technological disruption due to advances in AI, deep learning and AS. In the US, this has become the focal point of the Defense Innovation Initiative. Regardless of how the Trump administration chooses to continue the initiative as a whole, 2017 will be a decisive year, both because the current Pentagon directive on ‘autonomy in weapons systems’ reaches its expiration date and because research and development (R&D) funding from the ‘third offset strategy’ is expected to enter the US defence budget. In Europe, 2017 also marks a tide of change with the Preparatory Action on CSDP-related research, through which the EU will provide defence R&D funding for the first time. To stay competitive and prevent the transatlantic technology gap from widening further, it is worth exploring how autonomous systems and AI could affect future military capabilities and organisations.

The ethical and legal implications of lethal autonomous weapons systems (LAWS) – colloquially referred to as ‘killer robots’ – are at the centre of this debate. Necessary conversations on ethics and legality are taking place to determine the ‘appropriate level’ of human judgement and control – for example, over drones capable of selecting and engaging their own targets, or buttons that can essentially press themselves to prevent a missile attack. But further away from the frontlines, non-lethal robots are also changing the less visible dimension of conflict.

The needle in the haystack

These technological advances boil down to a symbiosis between humans and machines that enhances the human role. Serving non-lethal ends, AI and AS are most immediately relevant to command, control, communications, computers, intelligence, surveillance and reconnaissance (C4ISR). This includes anything from facial recognition software to software that stabilises shaky video footage to algorithms that identify patterns in images and text.
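To make the flavour of such pattern-finding concrete, below is a minimal, purely illustrative Python sketch of extracting equipment mentions and timestamps from free text. The watchlist, message format and regular expression are invented for this example; real C4ISR tooling relies on trained models rather than simple keyword matching.

```python
# Illustrative sketch only: flag equipment mentions and pull out
# timestamps from a free-text report. The keyword watchlist is
# hypothetical; operational systems would use trained classifiers.
import re
from datetime import datetime

EQUIPMENT_KEYWORDS = {"BUK", "T-72", "Grad"}  # hypothetical watchlist

def scan_message(text: str) -> dict:
    """Return equipment keywords and ISO-style timestamps found in text."""
    hits = {kw for kw in EQUIPMENT_KEYWORDS if kw.lower() in text.lower()}
    stamps = [
        datetime.fromisoformat(m)
        for m in re.findall(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}", text)
    ]
    return {"equipment": hits, "timestamps": stamps}

print(scan_message("Convoy with BUK launcher seen 2014-07-17T13:05 near Torez"))
```

Even this toy version hints at why context matters: a keyword list tuned for one theatre says nothing about another, which is part of why algorithms trained in one setting transfer poorly.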

The commercial-tech sector houses most of the talent in these domains. Defence communities seeking out military applications for burgeoning technologies are up against deep-pocketed venture capitalists and market forces. Competition for AI expertise is especially fierce because the talent pool is small. Even when the software is open source, each context is so unique that many algorithms become non-transferable. On top of the other challenges of utilising commercial technologies in the defence business, this heightened competition means it is not assured that AI will lead to the next revolution in military affairs. In other words, this technological disruption of conflict is likely to occur, but it is not inevitable.

Many innovations focus on harnessing big data to obtain a complete operational picture. It is difficult for intelligence and military officers – whose shifts tend to be longer than those of civilians – to avoid natural human error caused by fatigue and other distractions.

Machines can be faster, more accurate and more consistent. Through deep learning algorithms, robotic systems can be taught to find the needle in the haystack for C4ISR. With the right interconnected grids in place in battle networks, phenomena like the ‘little green men’ in Crimea could be treated as data analytics problems. In Ukraine, even the prevalence of time-stamped, geo-located selfies has proved useful in identifying weapons and their users. Here, AI does not necessarily alter the core task at hand, but catalyses processes with greater efficiency.
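As a toy illustration of treating such open-source material as a data analytics problem, the sketch below filters a feed of geo-located, time-stamped posts down to those inside an area of interest during a given window. The Post structure, the sample coordinates and the bounding box are all invented for illustration.

```python
# Toy sketch: reduce a feed of geo-tagged, time-stamped posts to
# those inside a bounding box and time window. All data is invented.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    user: str
    lat: float
    lon: float
    when: datetime

def in_window(posts, lat_range, lon_range, start, end):
    """Keep only posts inside the bounding box and time window."""
    return [
        p for p in posts
        if lat_range[0] <= p.lat <= lat_range[1]
        and lon_range[0] <= p.lon <= lon_range[1]
        and start <= p.when <= end
    ]

feed = [
    Post("a", 44.95, 34.10, datetime(2014, 3, 1, 9, 0)),   # near Simferopol
    Post("b", 50.45, 30.52, datetime(2014, 3, 1, 9, 30)),  # Kyiv: filtered out
]
hits = in_window(feed, (44.3, 46.2), (32.5, 36.6),
                 datetime(2014, 3, 1), datetime(2014, 3, 2))
print([p.user for p in hits])  # -> ['a']
```

The filtering itself is trivial; the analytical value comes from doing it continuously and at scale, which is precisely where machines outpace fatigued human analysts.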

The human dimension

Just as citizens grapple with the prospect of robots capable of doing their jobs and driving their cars, militaries are also considering how more AI and AS will affect their organisations. Technological disruption is often measured by the number and type of jobs that are lost and subsequently created. AI and autonomous systems have already impacted the blue- and white-collar workforces; indeed, analysts estimate that, in 20 years’ time, almost half of today’s jobs could be automated. For defence ministries concerned with military readiness levels, this could remedy recruitment issues and cut bloated personnel expenses.

But a more nuanced approach to disruption lies in measuring the types of activities – rather than jobs – that machines will replace. A recent McKinsey study found that 45% of professional activities can already be automated. The question then becomes: how will military personnel fill the time that machines free up for them? There is no one-size-fits-all answer: individuals of all types will ultimately be impacted differently. Across services and countries, at least, a few common threads stand out.

First, a new taxonomy of warrant officers – senior technical experts embedded in the military – is likely to emerge, with greater emphasis on networks rather than platforms. Rather than knowing where every last nut and bolt fits, future warrant officers will have to work with living, dynamic algorithms and networks in order to manage – or at least understand – what machines may teach themselves. Next, the multitasking workload demanded of battle captains will shrink if machines take over secondary tasks such as guard duty or paperwork, allowing them to re-focus their attention on the core task of managing the information flow of operations ‘behind the screens’.

These technological advances can also be expected to change command relationships. On the one hand, if the top brass come to regard machine-assisted C4ISR networks as more reliable and consistent than human-controlled ones, this could enhance trust and confidence between commanders and their subordinates. On the other hand, more machines cooperating with other machines could also lessen the need for inter-rank interaction. The question then will be whether technological advancements make room for stronger relationships between chains of command rather than within them.

Multiplying force multipliers

It is also vital to consider the side effects that an increased emphasis on AI and AS may produce. Concretely, many individuals already believe that information gathering by governments and companies infringes upon data privacy rights. These concerns have to be taken into account, as AI and AS will catalyse information gathering and analysis. Even as non-lethal instruments, AI and autonomous systems sharpen the trade-off between security and privacy – a tension likely to intensify as the military dimension grows.

Somewhat more abstractly, futurists warn that smart machines may replace human-set goals with ones they teach themselves. It would not be difficult to imagine deep-learning algorithms adjusting their objectives based on self-identified patterns. What begins as a series of small adjustments could lead to smart robots redefining their own ontologies: the common science-fiction plotline of machines mastering humans is not out of the question.

In addition to these very real ethical questions, an increased reliance on machines will also affect the very ways militaries conduct business and relate to broader society. Civil-military relations need to be considered, especially in countries where soldiers and veterans are a resounding source of national pride. Paradoxically, more robotics-dependent militaries could also hamper recruitment. And in an already fast-paced environment, decision-making processes will accelerate – especially if adversaries with similar capabilities can act just as quickly. If kinetic contexts change too rapidly for personnel to process, this could inadvertently increase reliance on smart machines that are capable of teaching themselves their own lessons.

Just as with lethal autonomous weapons systems, delegating greater control to machines for non-lethal purposes demands closer attention. These technological advances are not new, but they are snowballing. C4ISR networks are already recognised as significant force multipliers for a variety of civilian and military operations. If AI and AS have the potential to catalyse the processes of these force multipliers, their impact on military organisations should be considered as early, and as comprehensively, as possible.

About the Author

Zoe Stanley-Lockman is the Defence Data Research Assistant at the EUISS.

