Software solutions
WHAT MACHINE BUILDERS NEED TO KNOW ABOUT THE EU AI ACT
By Derek Coulson, Safe Machine
Artificial intelligence is now a reality and it is starting to impact our personal and working lives. Machine builders are already looking at using artificial intelligence (AI) for a variety of tasks, including within control systems for automation and robotics. Depending on how AI systems are deployed, there can be safety implications – for example, AI might determine how a robot reacts when a person enters its operating zone. If you think the idea of using AI for safety-related control systems is far-fetched, this author can remember the widespread scepticism when programmable electronic safety systems were first introduced. Many engineers declined to rely on software for safety functions, yet programmable safety systems are now well proven and commonplace.
The European Commission has decided to legislate with the aim of promoting the uptake of AI that is human-centric and trustworthy. The AI Act (sometimes known as the AI Regulation) also refers to protecting fundamental rights and ethical principles. Certain types of AI system that have unacceptable risks are being outlawed, such as those that exploit individuals’ vulnerabilities, and those that utilise subliminal techniques to distort behaviours. Having a legislative framework in place is intended to help promote innovation, investment and the adoption of AI in general.
KEY DATES
On 1 August 2024, the Artificial Intelligence Regulation (EU) 2024/1689 entered into force as the AI Act and most of the legislation is applicable from 2 August 2026. However, the date that is of most interest to machine builders is 2 August 2027, as this is when high-risk AI systems used as safety components become regulated. In the UK, the government is not introducing AI legislation, as it feels existing laws are sufficient. Nevertheless, the situation will remain under review as the government waits to see how the risks and opportunities develop.
Meanwhile, UK machine builders supplying to the EU will have to comply with the AI Act. If an AI system is operating outside the EU and its output impacts people within the EU, then it also needs to comply – though this is unlikely to be the situation for AI systems used as safety components on machines.
As with most regulations, the AI Act is not retrospective. This means an AI system does not need to comply if it is placed on the market before the AI Act becomes applicable. However, if the AI system is substantially modified after the AI Act becomes applicable, then it will have to be conformity assessed and CE marked to indicate its compliance.
STEPS TO COMPLIANCE
Before an AI system is placed on the market or put into use for the first time in the EU, it needs to be CE marked. This is true whether an AI system is supplied on its own or embedded within a product such as a machine. If an AI system is embedded within a product, then the product manufacturer becomes responsible for the AI system and takes on the obligations of a ‘provider’. Note that the AI Act differentiates between ‘providers’ and ‘deployers’. For example, a machine builder would be the provider of an AI system and the end user would be the deployer.
The CE marking process has several similarities to that for CE marking to the Machinery Directive. Most of the steps along the route to compliance will therefore be familiar to anyone who has CE marked a machine to the Machinery Directive.
Several categories of AI system are defined in the AI Act. For machine builders, the one of interest is high-risk AI systems used as safety components. ‘High-risk AI systems’ are ones with the potential to have a significant impact on the health, safety or fundamental rights of persons. Much of the AI Act applies to other types of AI system, so not all 144 pages of Regulation (EU) 2024/1689 are relevant to machine builders. As with the Machinery Directive, the AI Act lays down procedures to be followed for conformity assessment. The Act covers both self-certification and the use of third-party assessment bodies (Notified Bodies). Fortunately, self-certification should be adequate for AI systems
April 2025 Instrumentation Monthly