Feature: AI in automotive applications


Tailoring AI and ML models for safety-critical embedded systems


By Ricardo Camacho, Director of Safety & Security Compliance, Parasoft


The integration of artificial intelligence (AI) and machine learning (ML) into embedded systems is revolutionising industries by enabling smarter, more adaptive devices. However, embedding AI/ML into safety-critical systems presents unique challenges beyond the high stakes of failure and stringent compliance requirements. Embedded systems often operate in harsh conditions, such as extreme temperatures, constant vibration or even underwater. They must also adhere to strict energy, memory and computational limits, with no room for a bulky computer chip or a cooling fan. Equally, many embedded systems run on batteries that must last years, yet AI models, especially large ones, consume a lot of power.

All these constraints make running AI in embedded systems challenging. To make it fit, developers use strategies like pruning (removing less important neural pathways) and quantisation (compressing numerical data into low-bit formats) to reduce model size and computational overhead whilst preserving system performance.
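As a rough illustration of those two techniques, the sketch below applies magnitude pruning and int8 dynamic quantisation to a toy network using PyTorch's torch.nn.utils.prune and torch.ao.quantization utilities. The network, the 30% pruning ratio and the int8 target are arbitrary assumptions chosen for demonstration, not settings drawn from any particular deployment.

    # Illustrative sketch only: magnitude pruning plus dynamic quantisation.
    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    # A small stand-in model; a real embedded workload would differ.
    model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 4))

    # Pruning: zero out the 30% of weights with the smallest magnitude.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.3)
            prune.remove(module, "weight")  # make the sparsity permanent

    # Quantisation: convert Linear layers to int8 dynamic quantisation,
    # shrinking weights from 32-bit floats to 8-bit integers.
    quantised = torch.ao.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    print(quantised(torch.randn(1, 64)))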


In safety-critical systems there are the added requirements of determinism, certifiability and resilience. Safety-critical systems, like automotive braking or flight controls, require deterministic behaviour: consistent, predictable outputs for given inputs within guaranteed time frames. However, AI/ML models (especially neural networks) are inherently probabilistic and often exhibit non-deterministic outputs.
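To make that gap concrete, here is a minimal sketch, assuming a PyTorch inference setting, of the bookkeeping needed just to get repeatable outputs on a single machine. Repeatability of this kind still falls well short of the timing guarantees and cross-platform determinism that safety standards demand.

    # Illustrative sketch only: pinning down run-to-run repeatability
    # of neural-network inference in PyTorch.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)                      # fix weight initialisation
    torch.use_deterministic_algorithms(True)  # fail loudly on non-deterministic ops

    model = nn.Sequential(nn.Linear(8, 8), nn.Dropout(0.5), nn.Linear(8, 2))
    model.eval()  # eval() disables dropout, a common source of varying outputs

    x = torch.ones(1, 8)
    with torch.no_grad():
        out1 = model(x)
        out2 = model(x)

    # With the settings above, repeated runs on the same input agree exactly.
    assert torch.equal(out1, out2)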




Safety-critical systems must also comply with stringent certification standards (e.g., ISO 26262 for automotive, IEC 62304 for medical devices), yet AI/ML models are largely “black boxes” with opaque decision-making processes, making it difficult to trace decisions and prove system robustness. Regulators demand evidence that outputs derive from verifiable logic, not uninterpretable statistical patterns; this is the challenge of certifiability.


Embedded systems often operate in uncontrolled environments (e.g., industrial robots, drones) and face adversarial attacks: malicious inputs designed to trick ML models, such as perturbed sensor data causing misclassification.
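The sketch below shows one well-known attack of this kind, a fast-gradient-sign-method (FGSM) perturbation, applied to an untrained toy classifier standing in for a sensor-driven model. The model, labels and epsilon value are assumptions made purely for illustration.

    # Illustrative sketch only: an FGSM-style adversarial perturbation.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 3))
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(1, 16, requires_grad=True)  # a "sensor reading"
    label = torch.tensor([0])                   # class the system should report

    # FGSM: nudge every input element in the direction that most
    # increases the loss, by a small step epsilon.
    loss = loss_fn(model(x), label)
    loss.backward()
    epsilon = 0.1
    x_adv = x + epsilon * x.grad.sign()

    # The tiny perturbation may flip the predicted class.
    print("clean prediction:    ", model(x).argmax(dim=1).item())
    print("perturbed prediction:", model(x_adv).argmax(dim=1).item())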


To address these unique demands,