Security & Monitoring
Building a foundation of trust is critical for AI adoption
By Rod Cope, CTO, Perforce Software
Using AI to develop software for electronics applications is no longer just an experiment in a lab: it is now used in live production environments, supporting real-world applications. Of course, safety-critical and heavily regulated environments will be more cautious and risk-averse than others, and we all know that AI is far from perfect. Even so, AI is now mature enough to support a multitude of use cases.
This is why 2025 is also the year when AI governance comes to the fore as a fundamental requirement, and a big part of that is building trust in AI through capabilities such as explainability. We must be able to interrogate AI and have it show in detail why it came to a particular decision. In addition, there needs to be a way to test whether that answer is correct, by simulating an action based on the AI's recommendation and demonstrating how it works in practice.
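To make the idea concrete, the sketch below shows one way a recommendation and its rationale might be captured so a human reviewer can interrogate it. This is a minimal illustration only; the Decision schema and all of its field names are assumptions for the example, not drawn from any particular product.

```python
# A minimal sketch of an "explainable decision" record. The schema is
# hypothetical -- the point is that the recommendation travels together
# with the factors behind it, so a reviewer can interrogate both.
from dataclasses import dataclass


@dataclass
class Decision:
    recommendation: str    # what the AI proposes to do
    rationale: list[str]   # the factors it weighed, in order
    confidence: float      # model-reported confidence, 0.0 to 1.0

    def explain(self) -> str:
        """Render the decision in a form a human reviewer can question."""
        lines = [f"Recommendation: {self.recommendation} "
                 f"(confidence {self.confidence:.0%})"]
        lines += [f"  - {factor}" for factor in self.rationale]
        return "\n".join(lines)


decision = Decision(
    recommendation="Apply patch for CVE-2025-0001 to the web tier",
    rationale=["CVSS score 9.8", "exploit observed in the wild",
               "patch passed vendor regression suite"],
    confidence=0.92,
)
print(decision.explain())
```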
In the electronics industry, a robust AI governance strategy will help reduce the risk of physical products being shipped with faults and needing to be patched or recalled. In the world of IT infrastructure, AI governance will mean testing the impact of patching a critical security vulnerability before it is implemented across hundreds or even thousands of servers.
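As an illustration of that test-before-rollout principle, here is a minimal sketch of a canary-gated patch deployment: the AI-recommended patch is applied to a small canary group first, and the fleet-wide rollout proceeds only if every canary stays healthy. The apply_patch() and health_check() functions are hypothetical stand-ins for whatever configuration-management and monitoring tooling is actually in place.

```python
# A hedged sketch of gating a fleet-wide patch behind a canary dry run.
# apply_patch() and health_check() are placeholders, not a real API.
import random


def apply_patch(server: str) -> None:
    print(f"patching {server} ...")


def health_check(server: str) -> bool:
    # Stand-in for a real post-patch probe (service up, latency normal, etc.)
    return random.random() > 0.05


def rollout(servers: list[str], canary_size: int = 5) -> bool:
    canaries, rest = servers[:canary_size], servers[canary_size:]
    for server in canaries:
        apply_patch(server)
        if not health_check(server):
            print(f"canary {server} failed -- aborting fleet rollout")
            return False
    for server in rest:    # all canaries healthy: patch the rest of the fleet
        apply_patch(server)
    return True


rollout([f"srv-{i:03d}" for i in range(20)])
```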
Safety first
Developments like these are already happening and represent a safe way to carry out a dry run. In turn, the human ultimately responsible for a decision can feel confident about the AI's recommendations. The AI will be able to present everything to that person: the plan, the steps involved, the tests performed, and whether all the compliance requirements were met, so there is a clear audit trail and confidence that the right actions have been taken.
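One way to picture that audit trail is as a single record capturing the plan, the steps, the tests, and the compliance checks, which a human approver reviews before signing off. The sketch below is illustrative only; the AuditRecord schema is an assumption for the example, not taken from any specific governance product.

```python
# A minimal sketch of an audit-trail record for human sign-off.
# The field names are hypothetical; the substance is that every element
# the article lists (plan, steps, tests, compliance) is captured together.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AuditRecord:
    plan: str
    steps: list[str]
    tests_passed: dict[str, bool]
    compliance_checks: dict[str, bool]
    timestamp: str = ""

    def ready_for_approval(self) -> bool:
        # Approval is only offered when every test and check has passed.
        return (all(self.tests_passed.values())
                and all(self.compliance_checks.values()))


record = AuditRecord(
    plan="Patch CVE-2025-0001 across the web tier",
    steps=["stage patch", "dry run on canaries", "fleet rollout"],
    tests_passed={"regression suite": True, "canary health": True},
    compliance_checks={"change ticket approved": True, "rollback plan": True},
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print("Ready for sign-off:", record.ready_for_approval())
```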
Beyond compliance, that level of reassurance is also going to be essential to overcome cultural resistance. Otherwise, there is a strong chance that people will look at what AI can do but still fall back on old processes, even if those are slower and riskier. Conversely, the more trust is established and the more people see that AI works, the more they will use it, which will be vital for any organisation in the future. While AI still gets things wrong, choosing not to adopt it is arguably a far more dangerous strategy, because organisations that do not embrace its benefits (the speed, the greater productivity, the reduced errors) will be left behind, possibly far more rapidly than many of us can imagine. This brings us back to why a thorough AI governance strategy is now non-negotiable: it gives businesses the foundation to use AI, but with appropriate guardrails and oversight in place.