Artificial intelligence ‘could be worth a thousand Napoleons’ to the military

Credit: Heathcliff O’Malley

Artificial intelligence can be worth “a thousand Napoleons” for the armed forces – as long as military chiefs understand how to use it, a senior officer has said.

Advances in technology and the benefits of machine-learning systems offer great potential for the military, provided commanders are able to keep pace with the data produced.

Dr Keith Dear, an officer in the RAF, says artificial intelligence (AI) could make the world a more stable place if it “presents more of the truth about what is actually happening in any given scenario” as “more transparency should lead to less uncertainty.”

However, he warns that the predictability of much of human behaviour could be exploited as never before by AI systems identifying vulnerabilities in military decision-making processes, or even in individual commanders, faster than human analysts can.

“We should not always assume that technological advances will be good for humanity,” Dr Dear told me.

Speaking in a personal capacity, having completed a PhD at the Department of Experimental Psychology at Oxford, Dr Dear believes psychologists and AI computer scientists are in a similar position to that experienced by physicists on the Manhattan Project (the development, during the Second World War, of the world’s first nuclear weapons).

Psychology has a leading role to play as AI systems evolve. “Sometimes advances in technology can be profoundly damaging,” he suggests.

“If we think of ourselves as biochemical algorithms and our decisions the results of inputs leading to outputs, that makes our behaviour probabilistically predictable which allows us to be manipulated in ways which up until now have not been possible.”

Multiple experiments have shown how easily humans can be manipulated into making errors by presenting information in a particular manner or by overloading the individual with data, he says. Handled poorly, then, AI could be a threat to sound military decision-making, he suggests.

“We are definitely not fully rational, no psychologist would make the argument that we are. It’s the exploitation of those vulnerabilities that concerns me.”

Dr Dear suggests most Western armed forces have been trying to create a human form of artificial intelligence “as far back as the Prussian staff system” of the early nineteenth century.

Instead of the French model of relying on the natural elan of a commander to come up with a fantastic battle-winning idea, he says the Prussians decided “not to wait for another Frederick the Great and instead thought ‘we need a system to produce a thousand Napoleons because we keep losing’.”

The idea was to build military headquarters with very narrow specialised functions to get the maximum out of each individual and to make the decision-making process more efficient and rigorous, and therefore more reliable.

“That’s what we’re trying to do with AI,” he says.

Humans, he suggests, are tightly constrained because we can see only a small number of possibilities. Not so computers.

As an example he points to AlphaGo, a program developed by the technology company DeepMind to compete in the ancient Chinese strategy game of Go. The program used “other-worldly moves” that no human had ever considered.

In 2016 it beat the world’s best human Go player by using moves that had never been seen before.

“As we move more and more into this world of AI, people will become more used to there being things that we can’t explain so well.

“This is already happening in a highly strategic game that we thought humans would always win at, with the program doing things we would never do ourselves. It’s not unreasonable to start to wonder what that means.”

What would be the human response in the run up to any future conflict if an AI system working in a military strategic headquarters recommended such an ‘other-worldly’ move?

“Because so much of AI is a black box – you don’t see the computational layer; it just has an input and works out, probabilistically, an output – what happens when we get the order to do that and all the evidence from wargaming suggests it usually is right?” he asks.

“Do we risk lives?”

2 Comments

  1. Danny O’Neill says:

    Thanks Dom. As mentioned in the article, AI, if used correctly, can be a powerful enabler in many areas, whether enhancing cyber security or better informing and empowering decision-makers in the military. It can help to address skills shortages and vastly improve analysis, processing vast amounts of data at a speed the human brain cannot, to rapidly provide a comprehensive picture of complex scenarios from multiple data sources. On the side of caution, however, it will not completely replace human expertise. AI can reduce the dependency on human intervention, increasing efficiency, effectiveness and scalability, which will enable organisations to optimise resources and decision-making. Finally, we should also acknowledge the inevitability that our adversaries, too, will leverage similar capabilities and technologies. The game of cat and mouse continues.


    1. Dom Nicholls says:

      Thanks Danny, always good to hear from you. In the Cat & Mouse game you describe, how do you think the UK is getting on?

