Cybersecurity & AI
Companies deploying best practice guidelines for AI might find themselves ahead of the competition.
Symons says this goes even further, to include educating businesses about the biases AI can introduce, explaining how their data is processed, and offering clarity on where and how that data is stored. The company launched a responsible GenAI solution, Live Intelligence, for customers wishing to harness the benefits of GenAI without compromising data security. Orange also created a Data and AI Ethics Council, comprising 11 independent experts, to advise and ensure that "humans remain central to technology and its benefits, and it's crucial we stay in control", says Symons, who considers responsible practices business critical, as they directly impact an organisation's reputation and long-term success.
Sally Epstein, chief innovation officer at Cambridge Consultants, the deep-tech arm of IT services company Capgemini, agrees that companies deploying best-practice guidelines might find themselves ahead of the competition. Epstein warns against ignoring the principles of responsible AI, drawing a parallel with the development of genetically modified seeds. "The science behind GM foods was fundamentally good, but public lack of confidence held the technology back. Decades on, we are still feeling the consequences of this, a technology that has the potential to reduce the amount of pesticides, herbicides and fertilisers," says Epstein.

As in the case of GM foods, Epstein puts user trust above all. "If people don't trust AI, they won't use it, no matter how advanced or beneficial it might be," she adds.

Trust begins with purpose-driven applications, says Ulf Persson, CEO of AI software company ABBYY. "Tailored AI solutions that address real business challenges while ensuring measurable outcomes not only boost efficiency but also mitigate risks associated with human error or bias, which is critical in regulated industries like healthcare and logistics," he says. ABBYY's own AI-driven intelligent document
processing has helped some customers deliver goods to market 95% faster and created efficiencies that continue to build trust.

Transparency is key to building trust in AI, says Nathan Marlor, head of data and AI at software services company Version 1, who recommends prioritising explainability in AI solution design. "The technical complexity of many AI models, especially those driven by neural networks, often limits their inherent explainability, making it challenging for users to understand how decisions are made," says Marlor. "AI is being used in some high-stakes scenarios that can have life-changing consequences – making understanding why an AI made a particular decision essential. People are more likely to trust AI if they understand how it arrives at decisions," he adds.

To address this, tools such as counterfactual explanations, LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) can be leveraged to shed light on these 'black-box' systems. "Whether through LIME's local approximations, SHAP's comprehensive global insights, or counterfactuals offering actionable feedback, explainable AI is key to building AI systems that we can truly understand and rely on when it comes to decision-making processes," says Marlor.
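The intuition behind a counterfactual explanation can be shown in a few lines of Python. The toy loan-approval model and its thresholds below are entirely hypothetical, invented for illustration only; real-world tools such as LIME, SHAP or dedicated counterfactual libraries operate on genuine trained models and are far more sophisticated. The sketch simply asks the question a counterfactual answers: what is the smallest change to an input that would flip the decision?

```python
# Hypothetical "black-box" decision model: approve a loan when income
# sufficiently exceeds debt. The rule and numbers are illustrative only.
def approve_loan(income: float, debt: float) -> bool:
    return income - 2 * debt > 30_000


def counterfactual_income(income: float, debt: float, step: float = 500.0) -> float:
    """Find the smallest income increase (in `step` increments) that
    turns a rejection into an approval - a simple counterfactual."""
    if approve_loan(income, debt):
        return 0.0  # already approved, no change needed
    extra = 0.0
    while not approve_loan(income + extra, debt):
        extra += step
    return extra


# A rejected applicant asks: "what would have had to be different?"
needed = counterfactual_income(income=40_000, debt=10_000)
print(f"Approval would require roughly {needed:,.0f} more in annual income")
```

The actionable feedback Marlor describes is exactly this output: rather than an opaque "rejected", the user learns which change to which feature would alter the outcome.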
AI hallucinations further risk undermining trust in AI. The antidote, says Marlor, is education. "Organisations should prioritise user education by clearly communicating the limitations of AI, offering guidance on validating outputs, and implementing mechanisms to identify or correct inaccuracies," he adds.

The concept of responsible AI is still fluid; it will likely change over time and vary by region, making it difficult to hold businesses accountable to the same standards globally. Indeed, a consensus on the definition of responsible AI may still be some way off. But businesses must still act now to ensure future compliance. ●
Defence & Security Systems International /
www.defence-and-security.com