Feature: Embedded AI


• Static analysis: conventional code, including components such as safety monitors, can be rigorously analyzed using established tools to detect vulnerabilities or coding-standard violations, but AI components (e.g., neural networks) introduce new complexities. Emerging tools are beginning to address AI-specific risks, such as unsafe operators in frameworks like TensorFlow or PyTorch, but the field remains in its infancy. Static analysis must therefore evolve to encompass both conventional code and AI framework configurations, ensuring compliance with safety standards like MISRA C/C++ or CERT C/C++.
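To illustrate the conventional-code side of this, the short C++ sketch below shows the kind of defect, an off-by-one loop bound causing an out-of-bounds write, that static analysis tools and MISRA/CERT-style rule checkers are built to flag; the function and buffer names are illustrative rather than taken from any particular codebase.

```cpp
#include <array>
#include <cstddef>

// Hypothetical fault-log buffer used by a safety monitor.
constexpr std::size_t kFaultSlots = 8;
std::array<int, kFaultSlots> g_faultCodes{};

// Defective version: the loop bound is off by one, so the final
// iteration writes past the end of the array. This is exactly the
// sort of out-of-bounds access that static analysis and MISRA/CERT
// C++ rules are designed to catch before it reaches the target.
void clearFaultsUnsafe() {
    for (std::size_t i = 0; i <= kFaultSlots; ++i) {  // <= is the bug
        g_faultCodes[i] = 0;
    }
}

// Compliant version: bounded iteration, no out-of-bounds write.
void clearFaultsSafe() {
    for (std::size_t i = 0; i < kFaultSlots; ++i) {
        g_faultCodes[i] = 0;
    }
}
```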


• Unit testing continues to validate low-level functionality, particularly for non-AI components such as communication protocols, fault handlers and hardware interfaces. However, AI models, which derive behaviour from data rather than explicit code, demand a shift toward data-driven validation. This includes testing for edge cases, adversarial inputs and scenario coverage. While the AI model itself may not be amenable to traditional unit tests, the surrounding code, such as data preprocessing pipelines, inference engines and output handlers, must still undergo rigorous unit testing to isolate and mitigate defects.
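As a hedged illustration of what unit testing this surrounding code might look like, the sketch below exercises a hypothetical preprocessing routine that scales a raw 12-bit sensor reading into the [0, 1] range an inference engine expects, using plain assertions so the example stays self-contained; the function name and ADC range are assumptions made for the example.

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical preprocessing step: scale a raw 12-bit sensor reading
// into the [0.0, 1.0] range expected by the inference engine,
// clamping out-of-range values rather than passing them through.
float normaliseSensorReading(int raw) {
    constexpr int kMaxRaw = 4095;                     // 12-bit ADC full scale
    const int clamped = std::clamp(raw, 0, kMaxRaw);  // reject out-of-range input
    return static_cast<float>(clamped) / static_cast<float>(kMaxRaw);
}

int main() {
    // Nominal values map linearly into [0, 1].
    assert(normaliseSensorReading(0) == 0.0f);
    assert(normaliseSensorReading(4095) == 1.0f);

    // Edge cases: faulty or saturated sensors must not produce
    // out-of-range inputs for the model downstream.
    assert(normaliseSensorReading(-50) == 0.0f);
    assert(normaliseSensorReading(10000) == 1.0f);
    return 0;
}
```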


• Code coverage metrics, such as modified condition/decision coverage (MC/DC), remain mandatory at the highest integrity levels in domains like aviation (DO-178C) or automotive (ISO 26262). These metrics ensure exhaustive testing of decision logic in conventional code. For AI components, traditional code coverage is not directly applicable, but analogous concepts are emerging. Techniques like input space coverage and neuron activation coverage aim to quantify the sufficiency of testing for neural networks, ensuring that diverse scenarios and model behaviours are exercised during validation.
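One commonly cited way to compute neuron activation coverage, counting the fraction of neurons whose activation exceeds a threshold for at least one test input, is sketched below; the threshold and the layer data are illustrative, and this is one possible metric rather than a standardised one.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Neuron activation coverage: over a test set, record which neurons
// ever fire above a threshold, then report the fraction of neurons
// exercised at least once.
double neuronCoverage(const std::vector<std::vector<float>>& activationsPerInput,
                      float threshold) {
    if (activationsPerInput.empty()) { return 0.0; }

    const std::size_t numNeurons = activationsPerInput.front().size();
    std::vector<bool> covered(numNeurons, false);

    for (const auto& activations : activationsPerInput) {
        for (std::size_t n = 0; n < numNeurons; ++n) {
            if (activations[n] > threshold) { covered[n] = true; }
        }
    }

    std::size_t hits = 0;
    for (bool c : covered) { if (c) { ++hits; } }
    return static_cast<double>(hits) / static_cast<double>(numNeurons);
}

int main() {
    // Toy example: activations of a 4-neuron layer for 3 test inputs.
    std::vector<std::vector<float>> acts = {
        {0.9f, 0.1f, 0.0f, 0.2f},
        {0.2f, 0.8f, 0.1f, 0.1f},
        {0.1f, 0.3f, 0.0f, 0.4f},
    };
    std::cout << "coverage: " << neuronCoverage(acts, 0.5f) << "\n";  // prints 0.5
    return 0;
}
```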


• Requirements traceability is non-negotiable in safety-critical systems, as it provides an auditable chain from system requirements to implementation and testing. AI complicates this process due to its data-driven, often opaque decision-making. Practices such as explainable AI (XAI), model introspection and thorough documentation of training data, model architecture and validation results are essential in maintaining traceability. For example, a requirement such as “the system shall detect obstacles in low-light conditions” must map not only to sensor code but also to the AI model’s training data diversity and robustness testing.
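As a purely illustrative sketch of what such a traceability record might contain, the structure below links the quoted requirement not only to code and tests but also to training-data slices and robustness evidence; all field names, identifiers and file names are hypothetical, not a standard schema.

```cpp
#include <string>
#include <vector>

// Illustrative traceability record: one requirement linked not only to
// code and tests but also to the dataset and robustness evidence that
// substantiate the AI model's behaviour.
struct RequirementTrace {
    std::string requirementId;
    std::string text;
    std::vector<std::string> implementingModules;   // conventional code
    std::vector<std::string> verificationTests;     // unit/integration tests
    std::vector<std::string> trainingDataSlices;    // data evidence for the model
    std::vector<std::string> robustnessArtifacts;   // adversarial/edge-case results
};

// Example drawn from the requirement quoted above (all entries hypothetical).
const RequirementTrace kLowLightObstacle{
    "SYS-REQ-042",
    "The system shall detect obstacles in low-light conditions.",
    {"sensor_driver.cpp", "obstacle_detector_model.onnx"},
    {"test_sensor_lowlight.cpp"},
    {"dataset/night_urban", "dataset/tunnel_entry"},
    {"reports/adversarial_lowlight.pdf"},
};
```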


• Regulatory frameworks (e.g., IEC 62304 for medical devices) implicitly mandate these practices, regardless of AI involvement. Hybrid approaches that combine classical verification with AI-specific methods, such as adversarial testing, runtime monitoring and bias detection, are necessary to address both traditional software risks (e.g., memory leaks) and AI-specific failure modes (e.g., dataset bias). Furthermore, system-level safety depends on the seamless integration of AI and non-AI components; flaws in either can cascade into catastrophic failures.
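As a minimal sketch of the runtime-monitoring idea mentioned above, the fragment below wraps a hypothetical model output in a plausibility check and falls back to a system-defined safe state when confidence or physical bounds are violated; the structure, threshold and action names are illustrative assumptions.

```cpp
#include <cmath>

// Hypothetical model output: a detection confidence plus an estimated
// distance to the nearest obstacle.
struct Inference {
    float confidence;      // expected in [0, 1]
    float distanceMetres;  // expected to be finite and non-negative
};

enum class Action { UseModelOutput, FallBackToSafeState };

// Illustrative runtime monitor: accept the model's output only when it
// is physically plausible and sufficiently confident; otherwise demand
// the degraded/safe behaviour defined at system level.
Action monitorInference(const Inference& out) {
    const bool confidenceValid = out.confidence >= 0.0f && out.confidence <= 1.0f;
    const bool distanceValid   = std::isfinite(out.distanceMetres) &&
                                 out.distanceMetres >= 0.0f;
    const bool confidentEnough = out.confidence >= 0.6f;  // illustrative threshold

    if (confidenceValid && distanceValid && confidentEnough) {
        return Action::UseModelOutput;
    }
    return Action::FallBackToSafeState;
}

int main() {
    const Inference implausible{1.4f, -2.0f};  // corrupted or nonsensical output
    return monitorInference(implausible) == Action::FallBackToSafeState ? 0 : 1;
}
```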


While AI introduces novel challenges, traditional verification practices remain indispensable. The solution lies in augmenting these practices with AI-aware techniques, ensuring comprehensive coverage of both code and model risks. By maintaining rigorous static analysis, testing, coverage and traceability whilst adapting tools and methods to address AI’s unique demands, developers can achieve the robustness required for safety-critical certification and operational deployment.



