Big interview
challenging use case that will help the DAIC to better understand barriers to deploying AI across defence, and to develop common enablers to overcome those barriers. This is the case for Project SPOTTER, which explores the automated detection and identification of objects within classified satellite imagery. Working directly on AI projects, whether in a leading or supporting role, gives the DAIC credibility when providing advice and other products and services under the 'enable' function.
“If we were not safe and responsible, then we’d [...] lose the trust of end users and the public. And [...] that would actually make us less ambitious.”
Problem solver

6: The number of problem spaces in defence where AI offers great potential.
Source: DAIC Defence AI Playbook
The DAIC sees six main problem spaces in defence where AI has clear potential to deliver step changes in performance, which it lays out in the 'Defence AI Playbook': recognise, comprehend, predict, simulate, generate and decide.

The first, 'recognise', involves using AI to help detect subjects of interest within the vast volumes of sensor data the MoD currently collects. This includes the previously mentioned Project SPOTTER, alongside other projects concerned with object detection in satellite images, analysing radio signals, and so on.

'Comprehend' covers AI deriving insights from large unstructured or semi-structured datasets, which Woodward explains is associated more with back-office functions. This includes the Intelligent Search & Document Discovery programme, which aims to simplify policies and boost understanding by enabling efficient searching and navigation through documents, uncovering valuable insights and identifying previously unseen connections.

Woodward describes the 'predict' function as "forecasting what might happen in the future, based on what's happened in the past". This is something AI excels at, covering uses like predictive maintenance and parts-failure forecasting, or training regime optimisation.

'Simulate', on the other hand, involves running scenarios and generating data to inform planning and different courses of action. Here, Woodward notes that AI-enabled tools can come to different conclusions than a human would in certain contexts, citing the Playfun/Learnfun AI system created by computer scientist Tom Murphy back in 2013. Designed to 'solve' how to play classic Nintendo Entertainment System (NES) games like Super Mario Bros., the AI's methodology completely failed when it came up against Tetris. While attempting to gain a higher score by laying blocks randomly on top of each other, the AI would pause the game as soon as the screen had filled up – just before it was about to lose – leading its creator to quote the 1983 film WarGames: "the only way to win is not to play".

The exploding space, however, is 'generate', which concerns the use of generative AI to create new content that appears natural or human-made, and which has been grabbing headlines in the public sphere for some time. For the DAIC, the focus here is on the huge amount of work required to use this form of AI in "an ambitious, safe and responsible manner," Woodward says – the watchwords that the DAIC applies across all of its work.

Finally, there is 'decide', which refers to creating autonomous or automated behaviours by selecting the actions needed to achieve a goal – the kind of system that would drive how drone swarms behave, or last-mile resupply solutions. Here, Woodward cites November's DAIC-sponsored hackathon, which saw 40 programmers demonstrate how AI-enabled robotic dogs could carry out potentially dangerous tasks that would otherwise be undertaken by army bomb-disposal experts.

Ultimately, the AI solutions applied to these six problem spaces enhance the UK's defence capabilities in key areas highlighted in the Defence AI Strategy: boosting the efficiency of business processes and support functions; increasing the quality of decision-making and the speed of operations; improving the security and resilience of interconnected networks; enhancing the mass, persistence, reach and effectiveness of the UK MoD's armed forces; and protecting its personnel from harm by automating "dull, dirty and dangerous" tasks.
The human touch

Returning to the DAIC's watchwords, established in its eponymous 'Ambitious, safe and responsible' policy paper, Woodward believes they serve as the perfect statement of the UK MoD's approach in this space. The paper lays out the MoD's aspiration to accelerate best-in-class AI-enabled capabilities in accordance with five ethical principles jointly developed with leaders in the field, but crucially stresses the need to incorporate AI only where it is the appropriate tool for the task at hand, concluding: "We will not adopt AI for its own sake; it is not an 'end' in itself."

"We want to be ambitious – we want to move at pace and make sure that defence can harness this technology," notes Woodward, reflecting on what these watchwords mean to the DAIC's work. "But at the same time, if we were not safe and responsible, then we'd rightly lose the trust of end users and the public. And ironically, that would actually make us less ambitious, because we'd effectively be slowed down."
Defence & Security Systems International /
www.defence-and-security.com