Future soldier


Delhi. To illustrate the point, Mohan gives a simple example pertaining to gender bias. “If you just show your system images of male doctors, it’s going to think that every doctor in the world is male.”


In a paper titled ‘Managing Expectations: Explainable AI and its Military Implications’, Mohan cites a study tracking available instances of bias in 133 AI systems across industries from 1988 to 2021. It found that 44.2% demonstrated a gender bias, and 25.7% exhibited both gender and racial biases. In a conflict environment, Mohan explains that “deploying AI systems could mean that a woman of a race against which this programme is biased […] could be misidentified by the computer vision or facial recognition software of an autonomous weapon system as a non-human object”.


What’s in the box?


Another potentially horrifying bind for AI military researchers is the ‘black box’ problem: the inability to see how and why complex deep learning systems make their decisions. Deep neural networks are composed of thousands of simulated neurons that can be structured into hundreds of interconnected layers that interact in enigmatic ways. Even when these systems have yielded remarkable outcomes – like combing through medical data to predict diseases – their developers and users can’t explain how they work. For this reason, Mohan advocates for the use of explainable AI systems in a military context. This encompasses ante-hoc models that are less sophisticated and by default more transparent, and post-hoc versions that transfer the knowledge from the black box model to a simpler, smaller one, known as the ‘white-box surrogate model’. “The best thing that we can look at now is regulating these systems as soon as we can and in as robust a manner as we can,” Mohan says. “We don’t know what kind of processing takes place within them, we don’t know what these deep neural networks are taking in,


and why they’re spitting out what they’re spitting out.”

For soldiers to adequately trust and understand AI, Crossfield says that defence personnel must have “an appropriate, context-specific understanding of the AI-enabled systems they operate and work alongside”. He cites a lack of knowledge, skills and experience with these technologies as a major impediment to implementing AI in military operations. Crucially, he talks of personnel being trained, and therefore competent, enough to understand these tools, enabling them to verify that all is working as intended. “While the ‘black box’ nature of some machine learning systems means that they are difficult to fully explain, we must be able to audit either the systems or their outputs to a level that satisfies those who are duly and formally responsible and accountable,” Crossfield says.

With a growing number of countries now building on decades’ worth of research and development in AI to boost their defence capabilities, the spectre of lethal autonomous weapons looms ominously on the horizon. Mohan worries about the lack of rigorous global safeguards currently in place around the use of AI in military contexts. “There’s not enough hesitancy and not enough urgency in policy circles around military-specific AI,” she says. As for ChatGPT and the hyperbole it has generated, Mohan argues that while the technology seems mind-bogglingly proficient, it is making the same mistakes AI systems have been making for the past two decades. That paradox, she argues, is what makes these technologies so interesting, and so terrifying.
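The post-hoc approach Mohan describes can be sketched in a few lines: a simple, inspectable model is fitted to reproduce the *predictions* of an opaque one, yielding a ‘white-box surrogate’ whose decision rules a human can audit. The models and synthetic data below are illustrative stand-ins only, not a representation of any deployed military system.

```python
# Hedged sketch of post-hoc explainability via a white-box surrogate model.
# A small neural network stands in for the opaque "black box"; a shallow
# decision tree is then trained to mimic its outputs.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in data (illustrative only).
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# 1. Train the opaque model on the true labels.
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                          random_state=0).fit(X, y)

# 2. Fit a shallow surrogate on the black box's *predictions*, not the labels,
#    so the tree approximates the black box's behaviour.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box. A high score
# means the readable tree is a faithful proxy for auditing purposes.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate))  # human-readable decision rules
```

The surrogate never replaces the black box in operation; it only offers auditors a readable approximation of its behaviour, which is precisely why fidelity, not accuracy, is the figure that matters here.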


“AI systems are so advanced, but they can be so stupid,” Mohan concludes. “There’s really no standard explanation or expectation of where and how AI systems are progressing. It could be as stupid as a dumb computer from the 1990s, but at the same time, it could do things that can spin beyond our wildest imaginations.” ●


$3bn


The amount the US military has requested in the 2024 budget to advance its AI and networking capabilities.


US Department of Defense


Defence & Security Systems International / www.defence-and-security.com



