FEATURE Industrial AI


BUILDING TRUST BY DESIGN


Iain Bowes, Head of Technical Assessment Services for TÜV SÜD Business Assurance, explains how ISO/IEC 42001 can help organisations build trustworthy AI


As part of a smart manufacturing drive to support Industry 4.0 processes, AI is increasingly being integrated into


systems to support productivity and address skill shortages. Areas of AI support include predictive maintenance and quality control. However, simply deploying AI is not enough. Investors and customers are demanding that companies prove their AI systems are trustworthy, transparent and responsible. Venture capital and procurement decisions are increasingly prioritising companies that can demonstrate robust AI governance and ethical practices. Organisations must therefore treat AI trustworthiness and transparency as a strategic priority.

Trust by Design builds on the legacy of Secure by Design and Privacy by Design, approaches that shifted security and privacy considerations to the earliest design stages, and expands them into a broader mandate of trustworthiness. Whereas Secure by Design embedded cybersecurity from the start, and Privacy by Design mandated data protection by default, Trust by Design calls for governance and continuous assurance across the entire AI lifecycle, proactively engineering trust into AI systems from the outset.

ISO/IEC 42001 provides a concrete framework to meet those challenges by specifying requirements for an AI management system within an organisation. It guides organisations in managing the whole AI lifecycle and ensures responsible AI use that is aligned with emerging regulatory requirements. With an AI management system aligned to the international standard ISO/IEC 42001, legal requirements can be better understood and implemented.

20 April 2026 | Automation


ISO/IEC 42001 provides a structured framework for establishing, implementing, maintaining and continuously improving an AI management system. It outlines what processes and controls need to be in place for responsible AI management. The companion standard ISO/IEC 42005 (Information technology – Artificial intelligence (AI) – AI system impact assessment) helps organisations systematically evaluate and document the impacts of AI on individuals, groups and society across the entire lifecycle.


Key pillars of ISO/IEC 42001 include:
• Governance – Establishing accountability and AI ethics.
• Risk Management – Identifying and mitigating AI risks.
• Impact Assessment – Evaluating how AI affects individuals, groups and society.
• Transparency – Ensuring AI decisions are explainable.

This provides a holistic governance framework, ensuring that an organisation addresses all the key dimensions of trustworthy AI: ethical use, risk management, security/privacy, transparency, human oversight and compliance. Certification against this set of requirements validates an organisation’s commitment to trustworthy AI and adherence to international best practices. Companies already familiar with


other ISO management system standards will recognise its harmonised high-level structure, including clauses on context, leadership, planning, support, operation, performance evaluation and continual improvement, allowing easy integration with existing corporate governance systems. These standards support alignment with existing management processes. However, organisations must establish mechanisms for monitoring AI, not just during deployment but throughout the whole AI lifecycle.

The standard’s structure addresses both AI technical controls and the organisational processes and cultural elements required for trust. Its key components – governance, impact assessment, risk management, security, oversight, third-party management, incident handling and improvement – provide a multi-dimensional assurance framework. Implementing ISO/IEC 42001 extends beyond compliance: by building trustworthiness into their AI systems and processes, organisations also gain operational advantages.

Implementing Trust by Design goals with ISO/IEC 42001 should be approached as a change programme involving people, processes and technology. The implementation pathway helps embed a sustainable capability for trustworthy AI. Following the pathway, organisations set trust goals; those goals translate into processes and controls; the controls are executed by teams; and outcomes are monitored and fed back into improvements. However, ISO/IEC 42001 adoption does not need to happen all at once, as a phased approach can yield quick wins and lessons that inform a broader rollout.


TÜV SÜD Business Assurance www.tuvsud.com/en-gb/cybersecurity


automationmagazine.co.uk

