FUSION | REVEALING UNCERTAINTY


AI, uncertainty and fusion


The Uncertainty Engine, a platform designed to build, deploy, and manage probabilistic machine-learning models, is helping to direct research efforts in understanding the complex world of fusion plasma physics. Are there lessons here for fission too?


20 | February 2026 | www.neimagazine.com

Above: Fusion has emerged as an early adopter of advanced digital and AI-driven methods. Source: UKAEA

ARTIFICIAL INTELLIGENCE HAS MOVED rapidly from academic research into industrial practice, but its adoption in nuclear engineering remains cautious. This reticence is not ideological; it reflects the reality that nuclear systems operate under extreme regulatory, safety, and liability constraints. Under these conditions, any analytical tool must be demonstrably reliable, auditable, and explainable. As a result, while the nuclear sector is increasingly interested in the opportunities that AI can present, any such approaches must be able to quantify uncertainty, expose decision logic, and integrate explicitly with physical models and engineering judgement.

A UK-based technology company spun out of the University of Exeter, digiLab, has positioned itself squarely within this space. Its work focuses on uncertainty quantification, probabilistic machine learning, and explainable AI methods intended for safety-critical engineering environments such as nuclear power. According to the company, the goal is not to replace engineering expertise but to augment it with mathematically rigorous tools that allow engineers to understand not just what a model predicts, but how confident that prediction is and why. "We specialise in trustworthy, explainable AI," Amanda Niedfeldt, Head of Business Development at digiLab, explains. "That means methods that are auditable, traceable, and explainable, which is essential for regulatory and safety-critical sectors such as nuclear, energy infrastructure, and environmental monitoring."

From academic research to applied engineering
Emerging from research emphasising methods that combine physical understanding, data, and probabilistic reasoning, digiLab's academic lineage continues to shape the company's approach. "The lab is a high-tech research organisation," Niedfeldt notes, adding: "Our technology is built on that research base, similar in depth to what you might see at organisations like DeepMind, but focused on practical applications rather than purely theoretical advances."

At the centre of digiLab's offering is what it refers to as its Uncertainty Engine, a platform designed to build, deploy, and manage probabilistic machine-learning models. The platform supports both code-based workflows for advanced users and graphical workflows intended to make advanced methods accessible to engineers without deep software backgrounds.

While much public attention has focused on large language models, digiLab argues that these represent only a narrow subset of artificial intelligence and are often ill-suited to nuclear applications. "When people hear AI, they immediately think of ChatGPT and large language models," Niedfeldt says. "Those are useful for some things, but they are a small sliver of what artificial intelligence actually is." Instead, digiLab's work centres on probabilistic machine learning rooted in Bayesian statistics, an approach whose theoretical foundations date back several centuries but whose practical application has only recently become feasible at scale.
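To make the Bayesian idea concrete, the sketch below shows the simplest possible update: a conjugate Beta-Binomial model in plain Python. This is purely illustrative and is not digiLab's implementation; the scenario (inspection pass rates) and all function names are invented for the example. The point is that the output is a full posterior distribution over the unknown quantity, not a single number.

```python
# Illustrative Bayesian update using a Beta-Binomial conjugate pair.
# (Hypothetical example; not digiLab's code or data.)
# Prior Beta(a, b) encodes belief about a success probability p;
# observing k successes in n trials yields posterior Beta(a + k, b + n - k).

def beta_binomial_update(a: float, b: float, k: int, n: int):
    """Return posterior Beta parameters after k successes in n trials."""
    return a + k, b + (n - k)

def beta_mean_std(a: float, b: float):
    """Mean and standard deviation of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var ** 0.5

# Weakly informative prior Beta(1, 1), then 9 passes in 10 inspections.
a_post, b_post = beta_binomial_update(1.0, 1.0, k=9, n=10)
mean, std = beta_mean_std(a_post, b_post)
print(f"posterior mean = {mean:.3f}, posterior std = {std:.3f}")
```

The posterior standard deviation is the payoff: it shrinks as evidence accumulates, giving an explicit, auditable measure of how much the data actually supports the estimate.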


Probabilistic models and uncertainty quantification
Uncertainty quantification (UQ) lies at the core of digiLab's platform. In contrast to deterministic machine-learning models that return a single answer, probabilistic models produce distributions of possible outcomes. This distinction is critical in engineering contexts, where understanding confidence and risk is as important as the nominal result. "Many machine-learning methods will give you an answer without telling you how they got there," Niedfeldt explains. "With probabilistic methods, you get a distribution of potential results, which tells you how confident the model is based on the data and information available." The company's research and applied work makes extensive use of Gaussian processes, a class of
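The behaviour described above can be sketched with a toy Gaussian-process regressor in NumPy. This is a minimal textbook GP, not the Uncertainty Engine; the kernel choice, hyperparameters, and data are assumptions made for illustration. What it demonstrates is the key property the article describes: the model returns a predictive standard deviation alongside each prediction, and that uncertainty grows where data is absent.

```python
# Toy Gaussian-process regression (illustrative sketch; not digiLab's
# Uncertainty Engine). The posterior gives a mean AND a standard deviation,
# so the model reports its own confidence at every query point.
import numpy as np

def rbf_kernel(x1, x2, length=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two 1-D point sets."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_query, noise=1e-4):
    """Posterior mean and std of a zero-mean GP at the query points."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_query)
    K_ss = rbf_kernel(x_query, x_query)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)                        # hypothetical training data
xq = np.array([1.5, 10.0])           # one point near the data, one far away
mean, std = gp_predict(x, y, xq)
# Far from the training data the predictive std reverts toward the prior,
# flagging that the model is extrapolating rather than interpolating.
```

Near the data the predictive standard deviation is small; at the distant query point it approaches the prior variance, which is exactly the "how confident is this prediction" signal that deterministic models cannot provide.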

