Cybersecurity & AI
related to security, data governance, transparency, use cases, model management and auditing that organisations can strive to implement. "But the concept of responsible AI is still fluid," adds Bhattacharya. "Companies should look towards their corporate ethics policies for guidance and pull together multidisciplinary teams that track and review not only how AI is being used across the organisation, but also understand the data sources informing the models."

Susan Taylor Martin, CEO of the British Standards Institution (BSI) – the UK's national standards body – has worked with the UK Department for Science, Innovation and Technology towards the creation of a self-assessment tool that aims to help organisations assess and implement responsible AI management systems and processes. Taylor Martin says there is an onus on leaders to show how they are taking steps to manage AI responsibly within their companies. "The way forward on the safe use of AI may be unclear at this stage, but we are not powerless to influence the direction of travel, or to learn lessons from the past," she says. However, this is far from straightforward, with companies facing a patchwork of global regulation: more than 70 pieces of legislation are under review, and different jurisdictions are taking very different approaches.
The white paper
In 2023, the UK government published a white paper outlining its pro-innovation approach to AI. In February 2024, Rishi Sunak's Conservative government issued guidance that built on the white paper's five pro-innovation regulatory principles. At press time, there is no statutory regulation of AI in the UK; the Labour government's AI Blueprint, published on 13 January, referred to regulation only briefly: "the government will set out its approach on AI regulation". Europe leads on AI regulation: the EU AI Act entered into force in August 2024, with potential fines of up to €35m or up to 7% of global annual revenue for companies that breach its risk-based rules.
China aims to become the world's AI leader by 2030, with top-down edicts for AI governance. According to GlobalData TS Lombard macroeconomic research, the Chinese government has shown a surprising emphasis on promoting AI innovation and a willingness to tolerate some experimentation, albeit within CCP-defined and CCP-controlled areas. In the US, President Trump made an election manifesto promise to advance AI development with light-touch regulation. This follows outgoing President Biden's Executive Order on Safe, Secure and Trustworthy Development and Use of AI, issued in October 2023, which established a US AI Safety Institute. While AI is by its very nature "boundaryless", there are practical "no regrets" actions that organisations can take in the short term, according to Taylor Martin. There are myriad training opportunities available, from instructor-led training to on-demand e-learning courses, such as those the BSI offers. While global regulation takes shape, the BSI's AI management system standard (BS ISO 42001), released a year ago, is already being used by organisations including KPMG Australia. "Becoming certified will help organisations be prepared for the regulation coming down the track," says Taylor Martin. Other AI risk management standards have been under development since 2017 by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). And, to Taylor Martin's point about preparing for more formal regulation, the EU AI Act draws heavily on ISO's risk management standards and ISO/IEC's AI terminology standards.

UK government's five principles for the development of responsible AI regulation
■ Fairness: AI systems should treat all individuals and groups impartially, equitably and free from prejudice. Regulators should assess systems for potential bias and ensure transparency around decisions.
■ Explainability and transparency: Regulators should encourage system developers to ensure people understand AI decisions by providing clear explanations. Systems should be transparent about development, capabilities, limitations and real-world performance.
■ Accountability: Regulators should ensure that AI is human-centric, with appropriate oversight and controls. Organisations remain responsible for systems and their impacts. Proportionate governance measures should manage risks.
■ Robustness and security: AI systems should reliably behave as intended while minimising unintentional and potentially harmful consequences. Threats like hacking should be anticipated and systems designed accordingly.
■ Privacy: The privacy rights of individuals, groups and communities should be respected through adequate data governance. The collection, use, storage and sharing of data must be handled sensitively and lawfully.
Source: UK Department for Science, Innovation and Technology
Businesses can devise AI good practice
Aside from formal standards, many companies are devising their own internal risk-based frameworks. Steffen Hoffmann of Bosch UK & Ireland says any regulatory standards will first require a more common understanding of AI's potential risks and effects. Bosch acknowledges the risks but believes they are manageable, and the company has devised its own AI Code of Ethics, based on best practice, as a starting point. Its central premise is that humans should be the ultimate arbiter of any AI-based decisions. "With innovation comes social responsibility – that is basically in the DNA of Bosch, coming from our founder," says Hoffmann.
French telecoms giant Orange is another large corporation prioritising responsible AI. "Transparency and accountability are key – our algorithms must allow users to understand how decisions are made, what data is being used, and whether any biases are present in these systems," said Kristof Symons, former international CEO of Orange Business.
Defence & Security Systems International / www.defence-and-security.com