DIGITAL & IT | AI & SAFETY
AI and the nuclear safety case
What is the role of AI in the nuclear industry? In one example, Palantir is exploring using its AI to help nuclear operators write safety cases more quickly and more reliably. Janet Wood spoke to the company’s Nick Prettejohn
Artificial intelligence (AI) is an idea that is pervasive at the moment, both talked up as the solution to many current problems and talked down as a risky technology that could take control out of human users’ hands. Nicolas Prettejohn, Head of AI, UK & Nordics at Palantir, summarises it as “software that helps companies tackle their most critical problems”. Prettejohn explains that AI is not about doing simple
things more quickly, as has often been the rationale for types of automation, whether hardware or software. Instead, it is about giving people the ability to do difficult things.
He gives a simple example of a task that is more complex than it seems: asking questions in unstructured language about consenting, in an international company where documents are in different languages and refer to different permitting regimes. He says, “It’s those things that are most useful. Civil society will talk about AI not being used in critical circumstances – but that’s where all the benefit is. We need to find a way of thinking about risk and mitigation.”
It is true there is a risk that the AI will give the ‘wrong’
answer to an unexpected question. An AI issue that is often highlighted is the risk of so-called ‘hallucination’. Prettejohn compares this with the experience of working with junior
employees. They may also come up with outliers, he says, “but you manage that within the organisation. You have oversight, checks and balances, and people checking each other’s work. Of course, there are some risks that can never be fully eradicated and that is where companies use insurance.” Prettejohn says most people have proxies for AI – referring
to the control rooms in nuclear power plants, which have evolved over the decades from hard-wired ‘red light’ alarms to screens providing both detailed information that can be interrogated and recommended actions. But, he suggests, the AI is more of a colleague. He explains, “If everyone had an assistant you would be able to debate ideas all the time, especially if they had your own knowledge base and operations experience – it would be a collaboration. The big objective of some of our most successful deployments has been that the end user sees an interesting perspective they had not thought about.” “With an AI ‘assist’, a language model can also access
enterprise knowledge and maybe explore something from the data store that the user had not appreciated.” Because the AI has the full sum of all the knowledge, “you actually uplift your operators,” he says. Prettejohn speaks from Palantir’s experience of using AI in the health industry. He stresses that the first aim has
Right: AI is able to make a new safety case that looks broadly like something already existing, and pulls in all the relevant content
20 | June 2024 | www.neimagazine.com