In Focus: Risk


Developing an AI framework


Reflecting on the progress made in developing the ICO's approach to auditing artificial intelligence, and some of the broad themes emerging from the feedback received


Simon McDougall, Executive Director, The Information Commissioner's Office


Earlier this year, we set out the proposed overall structure of the framework and explored the data-protection challenges and possible controls in relation to five specific risk areas:

• Meaningful human reviews in non-solely automated AI systems.
• Accuracy of AI systems' outputs and performance measures.
• Known security risks exacerbated by AI.
• Explainability of AI decisions to data subjects.
• Human biases and discrimination in AI systems.

We have received some detailed technical feedback on a number of aspects of AI and data protection, but some common, broader themes have also emerged.

Several people asked us to clarify what we mean by ‘AI’. In reality, we use the term because it has become a mainstream way for organisations to refer to a range of technologies that mimic human thought. Some of these are quite new and groundbreaking, such as modern deep-learning approaches to object recognition, while others, such as decision trees, have been around for a while. But they have all been referred to as AI at some point.


Definition
Before we launched our blogs, we gave some thought to refining our definition of AI, but we decided to focus our efforts on the underlying data-protection issues instead. This is because, while AI has become an umbrella term, any technology underneath it will involve, or enable, the large-scale automated processing of data. If this processing involves the use of personal data for the purposes of profiling, identification, and decision-making, then we are concerned with any new, or heightened, data-protection risks that may arise.

The primary objective of our framework is to give us a solid methodology to assess whether organisations using any such technologies are compliant with data-protection law. When necessary, the framework will also call out when identified risks are specific to certain technologies, such as facial recognition.

Some respondents have also argued that a number of issues, such as security, are not unique to AI. We agree that only a few of the risks arising from the use of AI will be completely unique.


For those risks that are not new, our job is to understand how, and to what extent, they can be exacerbated by AI, and whether additional controls are required to identify, measure, mitigate, and monitor them. We do not expect organisations to redesign their risk-management practices from scratch, but we do expect them to review those practices and make sure they remain fit for purpose if AI is used to process personal data.

A number of stakeholders have highlighted the trade-offs that having a ‘human in the loop’ may entail: either a further erosion of privacy, if human reviewers need to consider additional personal data in order to validate or reject an AI-generated output, or the possible reintroduction of human biases at the end of an automated process. Trade-offs are a key area of risk in our framework, and we will therefore make sure to reflect this feedback in the associated blog, which we will publish soon.


Shortcoming
Finally, some of the comments have also highlighted an important shortcoming in our initial approach. We were aware that AI may be developed or run partially or completely by third parties, rather than in-house. However, the feedback strongly suggests that this is the case the majority of the time. As we finalise our framework, we will need to consider further the challenges this presents for data controllers in exercising adequate levels of oversight and control.

Our plan remains to publish the formal consultation paper on our AI Auditing Framework no later than January 2020. CCR


Edited from a blog on the ICO website. September 2019

