Standards support ethical artificial intelligence


Many countries, including most developed countries, have an Artificial Intelligence strategy, aware of the enormous potential AI holds for economic growth and improving our lives, but also wary of the ethical and security issues that its wider deployment raises. In 2017 PwC estimated that AI has the potential to bring substantial benefits to the UK economy, adding 10.3% to GDP between 2017 and 2030. The speed at which AI has developed in the last few years means this is probably an underestimate.


However, while certain sectors are embracing AI, others are more reluctant. Research carried out by Ernst & Young in 2021 for the Department for Digital, Culture, Media and Sport showed that AI is still an emerging technology: 27% of organizations surveyed were at an advanced level of implementing AI and 38% were piloting the technology, although uptake was particularly low among SMEs and the third sector. There are multiple reasons why organizations may hesitate to adopt AI, from simply not understanding its potential to a lack of skills and funding. One that rings many alarm bells is that AI can introduce or reinforce bias, meaning it may not be used fairly and could cause harm in the real world, leading to loss of consumer confidence, liability issues and reputational damage.


The UK government has a ten-year plan to turn the UK into an AI ‘superpower’, and its National AI Strategy sets out how to achieve this, balancing good governance with encouraging innovation.


One of the three core pillars of the government strategy is governing AI effectively, reflecting how critical it is to build trust in AI so that it becomes more widely accepted. The government has recognized that standards will play a central role in this. BSI’s Sector Lead (Digital), Tim McGarr, sees potential parallels with the lack of trust in other technologies, such as the Internet of Things, social media and, more recently, blockchain, that have developed in a context of little or no governance. “The rise of the web has seen enormous benefits for the way we trade and communicate, but it has also seen issues including bullying, prejudice and exploitation, concerns that may also be replicated as the metaverse takes shape,” Tim points out. “There are plenty of examples where technological development has led to situations that we want to avoid. Another example is blockchain where, despite spurring innovative approaches, not enough thought has been given so far to its enormous environmental impact in terms of carbon emissions.”


“With AI, the potential to improve outcomes in all sorts of areas is enormous. However, there is justified concern about how inbuilt bias could impact people in their real lives. There are cases in the US of people not getting parole because they fit a certain profile, and examples of candidates not getting job interviews because of the way an AI system was trained.” The problem for regulators and legislators is that the speed of the

