COMMENT

Can AI be functionally safe in embedded software development?

Jill Britton, director of compliance at Perforce, says that AI removes many limitations from the embedded software development process, enabling greater levels of innovation and automation. This is especially important given that modern vehicles typically require over 100 million lines of code.


However, the continuous evolution of AI means that traditional ways of verifying and validating functional safety are no longer universally applicable. Accounting for evolving AI applications will require a new approach, especially in embedded applications that involve functional safety. This must be a priority for the embedded industry, supported by new and updated standards, and it needs to happen fast. Users are aware of this, even if they do not yet know the solution: Perforce's 2025 State of Automotive Software Development Report found that professionals cited safety as the leading concern for AI in connected and autonomous vehicle development.

Traditionally, functional safety standards concentrate on the risks caused by a system malfunctioning, and their requirements are deterministic. For instance, ISO 26262 for automotive applications requires code to have one entry point and one exit point. With AI, the nature of the algorithms means that the system is no longer deterministic: there is no sequential 'this happens, so that will happen again'. Defining the system's functionality has therefore become essential to ensure that AI does not do something unexpected.
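To make the deterministic rule concrete, here is a minimal sketch of the kind of check a coding-standard tool might apply: flagging functions with more than one exit point, in the spirit of the one-entry/one-exit requirement mentioned above. The function names and the rule formulation are illustrative only, not taken from ISO 26262 or any real analyser.

```python
# Illustrative single-exit check using Python's standard-library ast module.
# A real coding-standard checker for C would be far more involved; this
# only sketches the shape of a deterministic, rule-based check.
import ast

def count_exits(source: str) -> dict:
    """Return the number of `return` statements per function."""
    exits = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            returns = [n for n in ast.walk(node) if isinstance(n, ast.Return)]
            exits[node.name] = len(returns)
    return exits

def violates_single_exit(source: str) -> list:
    """Flag functions that have more than one exit point."""
    return [name for name, n in count_exits(source).items() if n > 1]

sample = """
def ok(x):
    y = x + 1
    return y

def bad(x):
    if x > 0:
        return 1
    return 0
"""
print(violates_single_exit(sample))  # ['bad']
```

Checks like this are fully deterministic: the same source always yields the same verdict, which is exactly the property that a non-deterministic AI component lacks.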


The need for a new approach

The new focus is on defining and applying the concept of requirements: what are you expecting this piece of software to do, and is it still fulfilling that objective? Requirements need to be precise and agreed upon, and prompts must be clearly tied to the requirements.
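One way to picture "prompts clearly tied to requirements" is a simple traceability check: every requirement must be covered by at least one prompt or test case derived from it. The requirement IDs, texts, and data shapes below are invented for the example; real traceability lives in a requirements-management tool.

```python
# Hypothetical traceability sketch: detect safety requirements that no
# prompt/test case has been derived from. All IDs are made up.
requirements = {
    "REQ-001": "Vehicle shall brake within 2 s of obstacle detection",
    "REQ-002": "Lane-keeping shall disengage on driver override",
}

# Each prompt/test case records which requirement it was derived from.
test_cases = [
    {"id": "TC-01", "covers": "REQ-001"},
]

def uncovered(requirements, test_cases):
    """Return requirement IDs with no linked prompt/test case."""
    covered = {tc["covers"] for tc in test_cases}
    return sorted(set(requirements) - covered)

print(uncovered(requirements, test_cases))  # ['REQ-002']
```

A gap in this mapping is exactly the situation the article warns against: software whose expected behaviour has never been pinned down.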


This new way of addressing functional safety is difficult, but the electronics industry has risen to the challenge: the latest generation of functional safety standards looks more at the risks than at fulfilling a sequential set of steps.


For example, the new AI road safety standard, ISO/DPAS 8800 "Road vehicles – Safety and artificial intelligence", extends ISO 26262 to cover AI systems and the concepts and guidance provided by SOTIF (ISO 21448). The new standard proposes risk reduction measures during the design and operation phases, using an iterative approach to reducing risk. This is relevant for all uses of AI where safety requirements can be foreseeably allocated, whether AI provides the functionality itself or is used as a safety mechanism.

6 MAY 2025 | ELECTRONICS FOR ENGINEERS


Specifically, ISO/DPAS 8800 is designed to cover:

• Verification of AI systems;
• Methods for testing AI components;
• AI system integration and verification;
• Virtual testing vs physical testing;
• AI system safety validation; and
• Measures to assure the safety of the AI system during operation.
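The last point, assuring safety during operation, is often realised as a deterministic monitor wrapped around a non-deterministic AI component. The sketch below is a hedged illustration of that pattern only: the "AI" is a trivial stub, and the envelope limits, function names, and fallback value are all invented for the example.

```python
# Illustrative runtime safety-monitor pattern: a deterministic envelope
# check around a non-deterministic model output. The AI component is a
# stub; the limits and fallback are made-up example values.
def ai_speed_recommendation(sensor_reading: float) -> float:
    """Stand-in for an AI component (in practice, a learned model)."""
    return sensor_reading * 1.2

def monitored_speed(sensor_reading: float,
                    limit: float = 130.0,
                    fallback: float = 0.0) -> float:
    """Accept the AI output only inside a verified safe envelope.

    If the recommendation leaves the envelope, return a deterministic
    safe fallback rather than trusting the model.
    """
    recommendation = ai_speed_recommendation(sensor_reading)
    if 0.0 <= recommendation <= limit:
        return recommendation
    return fallback
```

The monitor itself contains no AI, so it can be verified with conventional functional-safety techniques even though the component it guards cannot.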


In development for over a year, the standard has reached the first pass stage and will hopefully be introduced by the end of 2025. Certainly, everyone involved is aware of the urgency, given the speed at which AI adoption is accelerating: in Perforce's 2025 State of Automotive Software Development Report, 71% of the industry professionals surveyed indicated that ISO/DPAS 8800 would be important to their team.

To assist with compliance to the new AI-focused standards, generative AI tools can be used in the development process for writing, reviewing, and testing code. For example, static analysis tools can be used to scan AI-generated code to ensure it addresses strict security, compliance, and quality requirements.


Some static analysis tools are themselves now incorporating AI, for example using it to automatically create new analysis checks from natural language requirements, or to provide enhanced searching and filtering of detected issues, helping teams decide which to prioritise.

The potential benefits of AI are clear, but we need transparency around how a decision was reached and why, and this is why functional safety in embedded product development is being turned on its head. Can AI be functionally safe in embedded software applications? In theory, yes, if the concept of requirements is applied, which is why standards such as ISO/DPAS 8800 need to be adopted as soon as possible. As the draft standard is already available, now is the time for organisations to start preparing for this major change in how functional safety can be achieved and maintained while benefiting from all generative AI has to offer.

