Future soldier
From simpler tools to complex deep learning networks, militaries are harnessing semi-assisted and autonomous AI systems to streamline logistical operations, improve battlefield awareness and defend their bases from attack. Aware of the radical potential of ruthlessly efficient algorithms to reshape military operations, the US government has invested heavily in these technologies. In 2021, it was estimated to have $6bn tied up in AI-related research and development projects. In 2024, the US military will ask for more than $3bn to advance its AI and networking capabilities.

Unsurprisingly, the UK doesn’t want to get left behind. It too has recognised the advantages of autonomous ‘learning’ systems and AI, and the integral role they are likely to play in the future of defence. In June 2022, the Ministry of Defence published its ‘Defence AI Strategy’, which brazenly declared ambitions to make the UK a global leader in the responsible use of AI as part of a once-in-a-generation defence modernisation plan. The mood inside the MoD about AI and its warfighting potential might best be described as cautiously optimistic. Brigadier Stefan Crossfield, principal AI officer at the British Army, says that while many AI systems are still being used experimentally or at the discovery phase, they are “maturing at pace” and “supporting defence from the back office to the front lines”.

$6bn: the total cost of the US government’s AI-related R&D projects in 2021 (source: Bloomberg Government).

Crossfield talks of AI technologies exhibiting “the potential to be incorporated into a wide range of systems to enable various degrees of autonomous or semi-autonomous behaviours”. These include enhancing the speed and efficiency of business processes and support functions; increasing the quality of decision making and the tempo of operations; and improving the security and resilience of interconnected networks. Crossfield also sees AI as playing a vital, albeit supportive, role on the battlefield by “enhancing the mass, persistence, reach and effectiveness of our military forces” and protecting soldiers from harm by automating dull, dirty and dangerous tasks.

Robots have been used to defuse bombs for over 40 years, but most machines currently deployed in the field cannot perform contextual decision making or operate autonomously. By embracing sophisticated algorithms and image recognition technology, AI-powered machines could theoretically learn to recognise the type of bomb technicians are dealing with and choose the best option for neutralising it.

A similar tactic is being deployed by the Royal Navy, which uses three vessels capable of working manually, remotely or autonomously to collect and analyse data in real time, detecting and classifying mines and maritime ordnance. Known as Project Wilton, this £25m initiative has developed sophisticated vessels capable of controlling and communicating with fellow machines.

Calling the shots
Due to the proliferation of more advanced technologies, Crossfield says, computer-aided military decision-making is occurring more and more frequently. “The areas which show the most promise are systems that take vast amounts of data to help inform commander’s decision making under the stress of battle,” he says, arguing that this talent for sifting through public data sources offers senior military personnel unique insights and a better understanding of local and international geopolitical environments.

If clandestine Cold War ops like Igloo White were focused on stealing data, today’s military and intelligence communities are swimming in it. In 2011, for example, it was reported that US drones had captured 327,000 hours – 37 years – of footage for counterterrorism purposes. By 2017, it was estimated that the footage US Central Command collected in that year alone could amount to 325,000 feature films – approximately 700,000 hours, or 80 years. Militaries are turning to AI systems to curate, analyse and deliver novel or valuable insights from these data streams. Powered by convolutional neural network algorithms – a class of artificial neural networks commonly used to analyse images – an AI system named Project SPOTTER is being trained by MoD experts to identify specific objects of interest from classified satellite imagery.
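The MoD has not published details of how Project SPOTTER works, but the general technique described above, a convolutional network that maps an image to a set of class scores, can be sketched in a few lines. The following is an illustrative minimal example in Python using PyTorch; the input size, the number of classes and the layer sizes are arbitrary assumptions for the sketch, not details of any real defence system.

# Illustrative sketch only: Project SPOTTER's actual design is not public.
# A minimal convolutional image classifier of the general kind described above,
# built with PyTorch; all sizes and class counts here are hypothetical.
import torch
import torch.nn as nn

class SmallImageClassifier(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        # Two convolution-and-pooling stages extract spatial features from the image.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A small fully connected head turns those features into one score per class.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, num_classes),  # assumes 64x64-pixel input images
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

model = SmallImageClassifier(num_classes=5)
scores = model(torch.randn(1, 3, 64, 64))  # one random 64x64 RGB image stands in for real input
print(scores.argmax(dim=1))                # index of the highest-scoring class

In practice a classifier like this is trained on a large labelled archive of imagery, and it is only as good as the data it learns from.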
Inevitably, AI systems are largely defined by the content and quality of the information they consume. This places a huge responsibility on those tasked with developing algorithms and deep neural networks, and carries the Frankensteinian risk that these machines grow to inherit unwanted biases. When applied to the military sphere, the gravity of these problems becomes somewhat more acute. A challenge of implementing deep learning systems in defence operations is the inherent bias hidden in datasets.

“It becomes really important to take a long, hard look at the data, so that it is not under-representative of any particular category or misrepresentative of the sample that you have,” explains Shimona Mohan, research assistant at the Centre for Security, Strategy and Technology at the Observer Research Foundation in New Delhi.
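Part of the audit Mohan describes, checking that no category is badly under-represented before a model is trained on the data, can be sketched in code. The example below is a hypothetical illustration in Python; the labels and the 10% threshold are invented for the sketch rather than drawn from any real defence dataset.

# Hypothetical sketch of a pre-training dataset audit: flag any label that makes up
# less than a minimum share of the data. Labels and threshold are invented examples.
from collections import Counter

def class_balance_report(labels, min_share=0.10):
    """Print the share of each category and flag any that fall below min_share."""
    counts = Counter(labels)
    total = sum(counts.values())
    for label, count in sorted(counts.items()):
        share = count / total
        flag = "  <-- under-represented" if share < min_share else ""
        print(f"{label:<12} {count:>6} {share:6.1%}{flag}")

# Made-up labels for an imagery dataset: 'aircraft' would be flagged here.
labels = ["vehicle"] * 900 + ["vessel"] * 850 + ["aircraft"] * 40
class_balance_report(labels)

A report like this does not remove bias on its own, but it makes obvious gaps in the data visible before a model starts learning from them.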