Artificial intelligence at war
20 Aug 2024

There’s a global arms race under way to work out how best to use artificial intelligence for military purposes. The Gaza and Ukraine wars are now accelerating this. These conflicts might inform Australia and others in the region as they prepare for a possible AI-fuelled ‘hyperwar’ closer to home, given that China envisages fighting wars using automated decision-making under the rubric of what it calls ‘intelligentization’.

The Gaza war has shown that the use of AI in tactical targeting can drive military strategy by encouraging decision-making bias. At the start of the conflict, an Israel Defense Forces AI system called Lavender reportedly identified 37,000 people linked to Hamas. Its function quickly shifted from gathering long-term intelligence to rapidly identifying individual operatives to target. Foot soldiers were easier to locate and strike quickly than senior commanders, so they came to dominate the attack schedule.

Lavender created a simplified digital model of the battlefield, allowing dramatically faster targeting and much higher rates of attack than in earlier conflicts. Human analysts did review Lavender’s recommendations before authorising strikes, but they quickly came to trust the system, judging it more reliable than their own assessments. Analysts often spent just 20 seconds on a recommendation before approving it.
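The arithmetic alone explains much of this. A minimal sketch, using only the figures reported above and an invented review rule (purely illustrative, implying nothing about Lavender’s actual design), shows how a large recommendation queue and a short review window turn ‘human review’ into approving the machine’s score:

```python
from dataclasses import dataclass
import random

@dataclass
class Recommendation:
    target_id: int
    machine_score: float  # classifier confidence in [0, 1]

def independent_check(rec: Recommendation) -> bool:
    """Stand-in for a slow, genuinely independent human assessment."""
    return rec.machine_score > 0.9  # placeholder logic only

def human_review(rec: Recommendation, seconds_available: float) -> bool:
    if seconds_available < 60:
        # Automation bias in miniature: with no time to verify
        # independently, the machine's score becomes the decision.
        return rec.machine_score > 0.5
    return independent_check(rec)

queue = [Recommendation(i, random.random()) for i in range(37_000)]
seconds_each = 20  # review time reported in the Gaza coverage
total_hours = len(queue) * seconds_each / 3600
approved = [r for r in queue if human_review(r, seconds_each)]
print(f"Even at {seconds_each}s per target, reviewing the full queue costs "
      f"{total_hours:.0f} person-hours; {len(approved)} targets were approved "
      f"on the machine's score alone.")
```

Even a 20-second glance at each of 37,000 targets is more than 200 person-hours of work, so the pressure to rubber-stamp is structural, not a personal failing.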

These analysts displayed automation bias, deferring to the machine’s output, and action bias, preferring to act on a recommendation rather than wait. Indeed, Lavender arguably encouraged and amplified both. In effect, the humans offloaded their thinking to the machine.

Human-machine teams are considered by many, including the Australian Defence Force, to be central to future warfighting. The way Lavender’s tactical targeting drove military strategy suggests that the machine element should be designed to work with humans on the specific task they are undertaking, not treated as a component that can be quickly switched between functions. Otherwise, humans may lose sight of the strategic or operational context and focus instead on the machine-generated answers.

The purpose-designed Ukrainian GIS Arta system, for example, takes a bottom-up approach to target selection: it gives people a well-fused picture of the battlespace, not an opaquely derived recommendation of what to attack. It has been described as ‘Uber for artillery’. Human users apply the context as they understand it to decide what to target.
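The design difference is easy to make concrete. Here is a hedged sketch of the two roles a picture-first system separates (the structure is assumed for illustration; GIS Arta’s internals are not public): fusion displays the evidence per grid cell, and automated matching happens only after a human has chosen the target:

```python
from dataclasses import dataclass, field

@dataclass
class SensorReport:
    source: str          # e.g. 'drone', 'radar', 'observer'
    grid: tuple          # (easting, northing) grid cell
    observed: str        # what was seen, in the sensor's own terms

@dataclass
class FusedPicture:
    """Picture-first design: fuse and display, never recommend."""
    cells: dict = field(default_factory=dict)

    def ingest(self, report: SensorReport) -> None:
        self.cells.setdefault(report.grid, []).append(report)

    def evidence(self, grid: tuple) -> list:
        # All raw reports for a cell, so the human applies the context.
        return self.cells.get(grid, [])

def dispatch_nearest(target_grid: tuple, fire_units: list) -> tuple:
    """The 'Uber for artillery' step: only after a *human* has picked
    the target does the system match the closest available fire unit."""
    def dist(u):
        return abs(u[0] - target_grid[0]) + abs(u[1] - target_grid[1])
    return min(fire_units, key=dist)

picture = FusedPicture()
picture.ingest(SensorReport('drone', (51, 32), 'vehicles moving east'))
picture.ingest(SensorReport('observer', (51, 32), 'engine noise, armoured'))
for report in picture.evidence((51, 32)):   # human reads evidence, decides
    print(report)
print(dispatch_nearest((51, 32), [(50, 30), (55, 35)]))  # -> (50, 30)
```

The contrast with a Lavender-style design is in where the opacity sits: here the machine fuses and dispatches, but the judgment of what is a legitimate target never leaves the human.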

Ukraine offers further insights into the application of AI to knowing what is happening on the battlefield. Advanced digital technology has made the close and deep battlespace almost transparent. Strategy now forms around finding enemy forces while fooling their surveillance systems to avoid being targeted. The result is that the frontline, extending about 40km to either side, has become a deadly zone that neither side can break through to win.

This tactical crisis appears likely to deepen as Ukraine and Russia progressively upgrade their semi-autonomous air, land and sea systems with AI. Such upgrades will make these robots much less vulnerable to electronic-warfare jamming and allow them to recognise and attack hostile targets autonomously. Sensing significant battlefield advantages, the US has launched the large-scale Replicator program, aiming to field ‘autonomous systems at scale of multiple thousands, in multiple domains, within the next 18 to 24 months’.
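Why AI blunts jamming is worth spelling out. A minimal control-loop sketch (entirely hypothetical; the 0.85 confidence threshold and the loiter fallback are invented for illustration) shows the key property: when the datalink dies, guidance degrades to an onboard classifier instead of failing:

```python
from enum import Enum, auto

class Mode(Enum):
    REMOTE = auto()       # operator in the loop via datalink
    AUTONOMOUS = auto()   # onboard recognition; link assumed jammed

def control_step(link_ok: bool, command, detection):
    """One tick of a hypothetical terminal-guidance loop."""
    if link_ok:
        return Mode.REMOTE, command                 # human steers
    if detection and detection['confidence'] > 0.85:
        return Mode.AUTONOMOUS, detection['track']  # self-guided attack
    return Mode.AUTONOMOUS, 'loiter'                # hold until confident

# A jammer cuts the link mid-engagement; the loop degrades, not fails.
print(control_step(True,  'steer_left', None))
print(control_step(False, None, {'confidence': 0.92, 'track': 'vehicle_7'}))
print(control_step(False, None, {'confidence': 0.40, 'track': 'unknown'}))
```

Jamming the link, in other words, no longer defeats the weapon; it merely removes the human from its final decision.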

Given AI’s use in Gaza and Ukraine, it appears likely that in a potential war with China the principal utility of AI will similarly be find-and-fool. Consider clashes over the first island chain, which runs from Indonesia through Taiwan and Okinawa to mainland Japan. With China to the west and the United States to the east, military forces would use AI to quickly find targets against cluttered backgrounds while attempting to fool the enemy’s AI systems.
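The ‘find’ half is as much classic signal processing as machine learning. As a grounding illustration, here is a standard cell-averaging CFAR detector, a textbook radar technique for pulling a target out of clutter (offered only as an example of the problem class, not as any particular system’s method); the ‘fool’ half consists precisely of keeping one’s signature below this kind of adaptive threshold:

```python
import numpy as np

def ca_cfar(signal, num_train=16, num_guard=2, scale=8.0):
    """Cell-averaging CFAR: flag cells that stand well above the
    average of their neighbours, so the detection threshold adapts
    to whatever the local clutter level happens to be."""
    half = num_train // 2 + num_guard
    hits = []
    for i in range(half, len(signal) - half):
        lead = signal[i - half : i - num_guard]          # training cells
        lag = signal[i + num_guard + 1 : i + half + 1]   # on either side
        noise = np.mean(np.concatenate([lead, lag]))
        if signal[i] > scale * noise:
            hits.append(i)
    return hits

rng = np.random.default_rng(0)
returns = rng.exponential(1.0, 1000)   # cluttered background
returns[400] += 40.0                   # one real target buried in it
print(ca_cfar(returns))                # -> [400], perhaps a false alarm or two
```

Because the threshold tracks the local background, raising the clutter everywhere does not hide a target; staying hidden means shaping emissions and signatures so they never stand out from their immediate surroundings.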

Helped by AI, US-led coalition kill webs and Chinese kill webs will readily find and target hostile air and naval forces on their respective sides of the island chain. The first island chain might then become a stabilised but very dangerous land, sea and air battlespace, with US and allied forces dominating on the eastern side and Chinese forces dominating on the western side. The island chain would become a no man’s land that neither side could pass through without suffering prohibitive losses.

How to win in a war so shaped by AI may be the major question facing defence forces today. The Ukraine war suggests some options: wearing the other side down in a protracted attrition battle; using massed frontal attacks to overwhelm a weakly defended sector; infiltrating with small assault groups backed by heavy firepower; or quickly exploiting a fleeting technological advantage to break through. Such options may become practicable as more AI-enabled weapon systems enter service.

The operational balance seems to have swung to favour defence over offence, to the advantage of status quo powers such as India, Japan, South Korea, Taiwan, Singapore and Australia. But this may prompt a revisionist power like China to seize territory quickly, before others can respond, and then rely on that same defensive advantage to make pushing it back prohibitively costly. As Japanese Prime Minister Fumio Kishida warned, ‘Ukraine of today may be East Asia of tomorrow.’