ARTIFICIAL INTELLIGENCE

De-risking AI deployment
By Chris Whyborn, head of cybersecurity services (UK & Europe) for TÜV SÜD business assurance
Any organisation deploying AI within its systems must successfully integrate technical safeguards with ethical and legal compliance across the entire AI lifecycle. It must fully understand its responsibilities, liabilities and potential exposure, and ensure that any AI systems are trustworthy, fair and secure.
Regulations
The EU AI Act requires companies to take a risk-based approach to comply with legal requirements for AI applications. Non-compliance can result in fines of up to €15 million or up to three per cent of global annual revenue. While the Act does not apply to organisations based solely in the UK, it does apply if they operate in the EU. There is also the “Brussels Effect”: the EU’s large market size often means its regulations become a global standard. UK businesses trading with or supplying EU partners will therefore need to align their practices with the EU AI Act for commercial and practical reasons. An organisation should classify its AI systems into the relevant risk classes, determine the applicable requirements, and establish whether it is affected by the regulation.
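To make the penalty exposure concrete, the arithmetic above can be sketched as a short calculation. This is an illustrative simplification only: it assumes the common “whichever is higher” reading of the €15 million / three per cent formulation, and the figures are taken from the text rather than from legal advice.

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Illustrative maximum exposure: the greater of the EUR 15m
    fixed cap or 3% of global annual revenue (assumes the
    'whichever is higher' reading; simplified for illustration)."""
    FIXED_CAP = 15_000_000
    REVENUE_SHARE = 0.03
    return max(FIXED_CAP, REVENUE_SHARE * global_annual_revenue_eur)

# A company with EUR 2 billion revenue: 3% (EUR 60m) exceeds the cap.
print(max_fine_eur(2_000_000_000))  # 60000000.0
# A company with EUR 100 million revenue: the fixed cap applies.
print(max_fine_eur(100_000_000))    # 15000000
```

Even for a mid-sized business, three per cent of revenue can quickly exceed the fixed figure, which is why risk classification is worth doing early.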
The UK is taking a different approach, currently favouring a “pro-innovation”, principles-based framework rather than the EU’s prescriptive legislation. The UK’s framework rests on five cross-sector principles:
• Safety, security and robustness
• Transparency and explainability
• Fairness
• Accountability and governance
• Contestability and redress
The UK Government is currently relying on existing industry regulators to apply these principles using current law, rather than introducing a single piece of legislation. However, it has indicated that targeted legislation is likely in the future, particularly for the most powerful AI models.
AI deployment challenges

Organisations that want to harness the full potential of AI must assess organisational readiness and ensure risks are managed before large-scale deployment. These include risks associated with safety, security, legality, ethics, societal impact, performance and sustainability.
Effective due diligence reduces the likelihood of these risks occurring and demonstrates that proportionate measures have been taken to address the risk or its consequences. All aspects of the lifecycle of an AI system and its data must be covered, including data preparation and quality, model development and fairness, explainability, deployment readiness, and ongoing monitoring and feedback. The organisation must therefore be fully prepared, both technically and culturally, for widespread adoption.
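The lifecycle coverage described above can be tracked as a simple checklist. The sketch below is hypothetical: the stage names come from the text, but the structure and function are illustrative, not drawn from any specific standard or tool.

```python
# Lifecycle stages named in the text; the checklist mechanics
# are an illustrative assumption, not a prescribed process.
LIFECYCLE_STAGES = [
    "data preparation and quality",
    "model development and fairness",
    "explainability",
    "deployment readiness",
    "ongoing monitoring and feedback",
]

def coverage_gaps(completed: set[str]) -> list[str]:
    """Return the lifecycle stages not yet covered by due diligence,
    in the order they appear in the lifecycle."""
    return [stage for stage in LIFECYCLE_STAGES if stage not in completed]

done = {"data preparation and quality", "model development and fairness"}
print(coverage_gaps(done))
# ['explainability', 'deployment readiness', 'ongoing monitoring and feedback']
```

A gap report like this makes “fully prepared” auditable: deployment waits until the list is empty.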
Managing the challenges

To ensure AI quality during development and operations, the following considerations must be addressed:
• AI system – accuracy, reliability, explainability and performance, including infrastructure
• AI data – bias, data pollution, lifecycle and how it relates to the AI system
• AI governance – compliance, strategy and human resources
• AI process management – continuous lifecycle and risk management

ISO/IEC 42001 is the internationally recognised standard for AI management systems (AIMS). It provides a structured framework for organisations to develop, implement and operate AI systems, analyse risks and impacts, and establish protective measures. The standard can be easily integrated into existing management systems, such as the widely used quality management standard ISO 9001 or the information security management standard ISO 27001.

14 MARCH 2026 | ELECTRONICS FOR ENGINEERS
As AI moves from “experimental” to “essential”, ISO 42001 helps organisations meet a new set of pressures. These include building trust through transparency and ensuring readiness for laws like the EU AI Act. It also provides a competitive edge when dealing with clients who require proof of AI safety. By replacing manual processes with repeatable governance, it also reduces the “unknown unknowns” that lead to project failures or data leaks.
However, AI governance is not solely a topic for the technical department, as seemingly harmless systems can introduce bias. This is where an AI system delivers systematically distorted results due to faulty or one-sided training data. An AIMS helps keep processes and decisions transparent and controllable by prompting companies, through structured risk management, to address bias risks and fairness criteria and to regularly monitor and optimise the algorithms used.
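One common way to monitor for the systematically distorted results described above is a demographic parity check: comparing positive-outcome rates across groups. The sketch below is a minimal illustration under assumed data and an assumed tolerance; real fairness monitoring would use an established toolkit and metrics agreed with the governance function.

```python
def parity_difference(outcomes_by_group: dict[str, list[int]]) -> float:
    """Largest gap in positive-outcome rate between any two groups,
    where outcomes are 0/1 decisions (e.g. loan approved / rejected)."""
    rates = [sum(v) / len(v) for v in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Illustrative decision log, split by a protected attribute.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% positive outcomes
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 25% positive outcomes
}
gap = parity_difference(decisions)
print(round(gap, 2))  # 0.5
if gap > 0.2:  # tolerance is an illustrative assumption
    print("flag model for bias review")
```

Running a check like this on a schedule is exactly the kind of “regular monitoring” an AIMS would prompt for, and it surfaces bias without requiring anyone to inspect the model internals.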
While ISO 42001 certification is not a regulatory requirement, it represents a solid basis for compliance with current and future AI regulations, helping organisations manage their AI responsibly in the long term.