FEATURE Automation & AI
HOW TO OVERCOME THE AI ‘FEAR FACTOR’
XpertRule’s Technical Director, Darren Falconer, says a decision-centric approach in manufacturing can help build trust in AI
It’s no secret that trust is the foundation for successful AI adoption. By addressing scepticism, prioritising data quality and ensuring algorithms are explainable and auditable, AI can become a powerful force-multiplier in manufacturing operations. Manufacturers are increasingly looking to AI to automate routine tasks, with 75% planning to step up their AI spending in 2025. However, much of this attention is focused on Generative AI – something that we believe is poorly suited to factory settings.
Part of this misalignment stems from a lack of understanding of AI’s practical applications in industry. With only 7% of manufacturing leaders feeling “very knowledgeable” about AI applications, trust issues loom large. Feedback from vendors and end-users consistently points to trust as a leading barrier to adoption. Without trust, AI cannot deliver on its full potential, leaving many manufacturers hesitant to go beyond pilot projects.
Business leaders are having to overcome not only technical hurdles but also the misconception that AI solutions are uncontrollable or inherently risky. To counter this, companies must approach AI with transparency and explainability at every stage, showing that AI is a tool to amplify human capability, not replace it.
For a simple comparison, think about cruise control in a car. Traditional cruise control maintains a set speed, but that’s all. Compare that to adaptive cruise control, which considers real-time conditions, adapts to your driving preferences and responds intelligently. Similarly, AI in manufacturing must adapt to the unique needs and complexities of each operation. For those implementing these systems, understanding the ‘mechanics’ – how algorithms interact with data inputs – builds confidence and trust. Explainable AI bridges the gap between automation and operator oversight, providing a clear view of how the system reaches its conclusions and fostering trust in AI’s outputs among users. But of course, building trust also requires a mindset shift – from a data-centric focus to a decision-centric approach.
8 September 2025 | Automation
A common misstep in AI adoption is starting with the data instead of focusing on the desired outcomes. Many manufacturers think: “We have all this data – what can we do with it?” However, this approach often leads to complex systems that lack focus and transparency, fail to deliver meaningful outcomes and reinforce doubt over AI’s value. A decision-centric approach begins by asking: what do we want to achieve, and what decisions need to be made to deliver those outcomes? Only then should businesses ask: what data supports those decisions, and what models link these decisions to that data? From there, manufacturers must focus on ensuring data quality – calibrating sensors, cleaning data streams, validating inputs and standardising formats. Remember, the vast majority of AI success lies in data preparation and only a small percentage in the modelling itself.
Imagine a manufacturer aiming to improve quality control. They might gather extensive data from every step of the production process, only to end up with an overwhelming volume of disjointed data and no clear path to action. Using a decision-centric approach, they would instead:
Define the outcome: improve product quality, aiming to reduce defects by 10% over the next quarter.
Identify the key decisions: what factors directly impact product quality? What parameters should trigger quality checks? How can inspection processes be optimised to catch defects earlier? What actions should be taken when deviations are detected?
Model the data: build AI models that analyse historical production data to discover explainable patterns relating outcomes to metrics like machine settings, material consistency or environmental conditions. The system can then use these patterns to indicate potential defects and recommend adjustments to maintain product quality.
This clarity of purpose makes AI implementations transparent, explainable and, ultimately, more trustworthy. It also provides a clear framework for measuring success for engineers, users and management alike.
A key factor in building trust is recognising
that AI doesn’t replace human insights and experience – quite the opposite. Human operators and engineers bring a level of expertise, contextual knowledge and intuition that machines cannot replicate. Having a ‘human in the loop’ is therefore critical to any AI deployment.
Decision Intelligence connects Explainable AI principles with operational trustworthiness by embedding human oversight at its core. For example, experienced technicians possess knowledge built up over years of practice. While they can’t be everywhere at once, their expertise can be integrated into AI systems to automate routine decisions while reserving complex or ambiguous scenarios for human intervention. For AI adoption to move from pilot projects to the heart of manufacturing operations, trust must come first – and a transparent, decision-centric approach with a human in the loop is the surest way to achieve this.
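To make the decision-centric quality-control example above concrete, here is a minimal sketch in Python of the data-preparation and explainable-rules steps: validating sensor inputs before modelling, then applying named rules whose firing can be shown to an operator. Every field name and threshold here is a hypothetical illustration, not XpertRule’s implementation.

```python
# Hypothetical sketch: validate sensor readings, then apply explainable
# rules that say exactly WHY a quality check was triggered.
# All fields and thresholds are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Reading:
    machine_temp_c: float    # machine temperature, degrees C
    feed_rate_mm_s: float    # material feed rate, mm/s
    humidity_pct: float      # ambient humidity, %


def validate(reading: Reading) -> bool:
    """Data-quality step: reject physically implausible sensor values."""
    return (
        -20.0 <= reading.machine_temp_c <= 200.0
        and 0.0 <= reading.feed_rate_mm_s <= 500.0
        and 0.0 <= reading.humidity_pct <= 100.0
    )


def quality_check_needed(reading: Reading) -> list[str]:
    """Explainable rules: each returned string names the rule that fired,
    so an operator can see the reason for every triggered check."""
    reasons = []
    if reading.machine_temp_c > 85.0:
        reasons.append("machine temperature above 85 C")
    if reading.feed_rate_mm_s > 120.0:
        reasons.append("feed rate above 120 mm/s")
    if reading.humidity_pct > 70.0:
        reasons.append("humidity above 70%")
    return reasons


if __name__ == "__main__":
    r = Reading(machine_temp_c=92.0, feed_rate_mm_s=100.0, humidity_pct=45.0)
    if validate(r):
        for reason in quality_check_needed(r):
            print("trigger quality check:", reason)
```

Because each rule reports its own reason, the output is directly auditable – the transparency the article argues is essential for trust.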
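The ‘human in the loop’ idea above can be sketched as a simple confidence threshold: routine, high-confidence decisions are automated, while ambiguous ones are escalated to an experienced technician. The threshold value and labels below are assumptions for illustration only.

```python
# Minimal human-in-the-loop routing sketch: automate routine decisions,
# escalate ambiguous ones to a person. The 0.9 cut-off is an assumption.

CONFIDENCE_THRESHOLD = 0.9


def route_decision(decision: str, confidence: float) -> str:
    """Return 'auto: ...' for high-confidence (routine) decisions,
    'escalate: ...' when a human technician should review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto: {decision}"
    return f"escalate: {decision}"


if __name__ == "__main__":
    print(route_decision("within tolerance", 0.97))
    print(route_decision("possible surface defect", 0.62))
```

In practice the threshold would be tuned against the cost of a wrong automated decision versus the cost of interrupting a technician.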
XpertRule
xpertrule.com
automationmagazine.co.uk