AI Technology


Building trust and AI governance


By Iain Bowes, head of technical assessment services for TÜV SÜD Business Assurance


As with all industries, the electronics components design and manufacturing sector is experiencing a profound transformation as AI becomes increasingly integrated into business systems and end products. AI is becoming part of the entire lifecycle of electronics production, from the initial design phase to final production. Key benefits that AI brings to the sector include design optimisation, where AI algorithms automate PCB layouts so that thermal management and signal integrity are quickly optimised.

In electronics manufacturing, AI supports predictive maintenance through real-time monitoring of equipment health, so that failures can be predicted more effectively and costly downtime avoided. Likewise, high-speed automated optical inspection systems are using machine learning to identify minute defects, such as problems with solder joints or incorrect component placement, with exceptional accuracy. Overall, AI systems are already proving to reduce time-to-market, minimise material waste and support the production of reliable components at lower cost.
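The predictive maintenance idea described above can be sketched in a few lines: flag equipment readings whose deviation from a rolling baseline exceeds a threshold. This is a deliberately minimal illustration, not any vendor's algorithm; the function name, sensor values and threshold are all invented for the example.

```python
from collections import deque
from statistics import mean, stdev

def anomaly_flags(readings, window=5, threshold=3.0):
    """Flag readings that deviate strongly from the recent rolling window.

    A toy stand-in for the real-time equipment-health monitoring described
    above: each reading is compared against the mean and standard deviation
    of the previous `window` readings.
    """
    recent = deque(maxlen=window)
    flags = []
    for value in readings:
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            # Flag values more than `threshold` standard deviations from the rolling mean.
            flags.append(sigma > 0 and abs(value - mu) / sigma > threshold)
        else:
            flags.append(False)  # not enough history to judge yet
        recent.append(value)
    return flags

# Hypothetical vibration readings from a pick-and-place machine; only the spike is flagged.
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.02, 5.0, 1.01]
print(anomaly_flags(readings))
```

Production systems would of course use richer models than a rolling z-score, but the governance questions the article raises (how the threshold was chosen, who reviews the flags) apply even to a sketch this simple.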


However, simply having systems that benefit from advanced AI features is not enough. Investors and customers are demanding that companies prove their AI systems are trustworthy, transparent and responsible. Venture capital and procurement decisions are increasingly prioritising companies that can demonstrate robust AI governance and ethical practices. Organisations must therefore treat AI trustworthiness and transparency as a strategic priority.


AI challenges


With the proliferation of AI comes a parallel rise in concern over trust, as AI systems can be biased, opaque, insecure, or otherwise misaligned with human values. Organisations therefore face multiple challenges when pursuing responsible, trustworthy AI. Unlike previous technological environments, AI lacks widely accepted checklists, frameworks or operational standards, so translating abstract concepts like trust, ethics and human oversight into measurable, actionable policies remains a central challenge. Moreover, AI systems are probabilistic, not deterministic: they learn and adapt, which introduces “emergent properties” that can be hard to predict.


Also, complex AI models often act as “black boxes”, limiting trust when users cannot understand or challenge decisions. Likewise, protecting sensitive data throughout the AI lifecycle is critical. Alongside all these challenges, rapidly changing global AI regulations necessitate compliance strategies that can adapt without hindering innovation.


Trust by Design is therefore emerging as the next evolution: creating trustworthy systems from day one, addressing not only security and privacy but also fairness, accountability and transparency at each step. Trust by Design builds on the legacy of “Secure by Design” and “Privacy by Design”, approaches that shifted security and privacy considerations to the earliest design stages, and expands them into a broader mandate of trustworthiness. It recognises that trust in AI is both a technical and a sociological outcome: systems must not only be secure and reliable, but also explicable, unbiased and aligned with societal values. Trust by Design calls for governance and continuous assurance across the entire AI lifecycle, proactively engineering trust into AI systems rather than retrofitting it later.

ISO/IEC 42001 provides a concrete framework to meet those challenges by defining requirements for establishing an effective AI governance programme. It guides organisations in managing the whole AI lifecycle and ensuring responsible AI use that is aligned with emerging regulatory requirements. With an AI Management System (AIMS) certified according to the international standard ISO/IEC 42001, legal requirements can be better understood and implemented. By aligning closely with Trust by Design principles, the standard enables companies to “build trust by design” in a systematic and certifiable way.


At its core, ISO/IEC 42001 provides a structured framework for establishing, implementing, maintaining and continuously improving an AI management system. Rather than prescribing specific technical solutions, it outlines what processes and controls need to be in place for responsible AI management. In addition, the companion standard ISO/IEC 42005 helps organisations systematically evaluate, document and manage the potential benefits and risks of AI on individuals, groups and society across the entire lifecycle. Overall, ISO/IEC 42001 provides a holistic governance framework for AI, ensuring that an organisation addresses all the key dimensions of trustworthy AI: ethical use, risk management, security/privacy, transparency, human oversight and compliance. Certification by an accredited body against this set of requirements externally validates an organisation’s commitment to trustworthy AI and adherence to international best practices. Companies already familiar with implementing ISO standards will find a common high-level structure, including clauses on context, leadership, planning, support, operation, performance evaluation, and continual improvement. This means it can be integrated into existing corporate governance systems.
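The common high-level clause structure mentioned above lends itself to a simple readiness checklist. The sketch below is purely illustrative: the clause names come from the article, while the helper function and the example evidence entries are invented for this example and carry no normative weight.

```python
# The high-level clauses ISO/IEC 42001 shares with other ISO management-system
# standards, as listed in the article.
AIMS_CLAUSES = [
    "context", "leadership", "planning", "support",
    "operation", "performance evaluation", "continual improvement",
]

def gap_analysis(evidence):
    """Return clauses with no documented evidence yet (a toy readiness check)."""
    return [clause for clause in AIMS_CLAUSES if not evidence.get(clause)]

# Hypothetical state of an organisation part-way through implementation;
# the evidence items are invented examples, not requirements of the standard.
evidence = {
    "context": ["AI system inventory"],
    "leadership": ["AI policy signed off"],
    "planning": ["AI risk register"],
}
print(gap_analysis(evidence))
```

Because the clause structure mirrors other ISO management-system standards, a checklist like this could sit alongside an existing ISO 9001 or ISO/IEC 27001 tracking process rather than requiring a separate tool.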


The standard’s structure addresses both technical AI controls and the organisational processes and cultural elements required for trust. Its key components - governance, impact assessment, risk management, security, oversight, third-party management, incident handling and improvement - provide a multi-dimensional assurance framework.


Adopting a Trust by Design approach through the implementation of ISO/IEC 42001 extends beyond compliance, as by building trustworthiness into their AI


www.cieonline.co.uk

