INSIGHT


Future Technology
Turing’s red flag


Over the last few years artificial intelligence (AI) has been woven into more and more of the products and services we use every day. Sometimes the AI is very much front and centre, but more often than not it is quietly working away in the background – powering everyday things like our Facebook feeds, Snapchat filters and Zoom backgrounds, and automatically classifying our Google Photos images.


We often hear people pontificating about the future of AI, but the truth is that it’s already here and it has proliferated beyond most people’s wildest expectations.


What does this mean in practice? We have an AI problem: every day our actions are influenced, and decisions are made on our behalf, using AI, but this process is generally neither transparent nor accountable.


AI professor Toby Walsh has suggested that we need a ‘Turing Red Flag’ law, analogous to the Locomotive Act 1865, which required a man on foot carrying a red flag to walk at least 60 yards ahead of any self-propelled road vehicle – a provision that led to it becoming known as the Red Flag Act.


In a few cases AI forms the majority of the processing that takes place, such as an app which lets you take a picture of a mole and compares it against a dataset of malignant melanoma images. However, AI is increasingly commonplace as a discrete part of a complex website, app, business or scientific process. Examples range from the relatively innocuous, such as recommendation engines for social media and online shopping, to more obviously sensitive use cases, such as plagiarism detection in exams or predictive policing systems that try to identify people and places likely to be involved in crimes. These are all real-world examples of how AI is used right now.


Algorithmic accountability

We know (or suspect) that AI plays a part in systems and processes we interact with every day, but we don’t necessarily have an easy way to find out how and why, or to influence the algorithm by providing feedback. Some of these processes can have direct, consequential outcomes for our lives and livelihoods, such as being refused insurance cover, or getting a visit from the police because our profile matches a potential offender. Others are more subtle – such as the discovery that Facebook was inadvertently promoting far-right and fringe groups such as QAnon and anti-vaxxers in its news feed.

Some providers of AI-enabled products and services have started to surface their methodologies to an extent, such as Facebook’s labelling of some of the factors influencing why you see a particular advert or why a particular group or page is being recommended to you. The Dataset Nutrition Label project has been developing a standardised approach to “build labels that highlight the key ingredients in a dataset such as meta-data and populations”, inspired by approaches used for food labelling. A similar approach could help establish how AI has been used as part of a process, which in turn could help facilitate explainable AI models.
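By way of illustration, here is a minimal sketch in Python of what a machine-readable dataset label along these lines might look like. The DatasetLabel class and all of its fields are invented for this example; they are not the Dataset Nutrition Label project’s actual schema.

    from dataclasses import dataclass, field

    # A hypothetical, simplified dataset "nutrition label", loosely inspired
    # by the Dataset Nutrition Label project. All field names are invented
    # for illustration; this is not the project's actual schema.
    @dataclass
    class DatasetLabel:
        name: str
        source: str                # where the data came from
        collected: str             # collection period
        population: str            # who or what the records describe
        known_gaps: list[str] = field(default_factory=list)  # known blind spots
        intended_use: str = ""     # what the data was gathered for

    label = DatasetLabel(
        name="Example melanoma image set",
        source="Hypothetical teaching hospital archive",
        collected="2015-2019",
        population="Adult patients, predominantly lighter skin tones",
        known_gaps=["Few images of darker skin tones", "No paediatric cases"],
        intended_use="Training and evaluating image classifiers",
    )
    print(label)

A procurement exercise could simply ask vendors to supply a label along these lines for every dataset used to train the models behind their product.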


What to do?

Librarians and information scientists aren’t always in a position to ensure that the products and services they procure follow best practice in transparency and explainability where AI technologies such as machine learning are used. However, there may be opportunities to introduce these topics as part of product selection processes, such as the requirements specification for a procurement.

Martin Hamilton is a strategist and innovation adviser. See https://martinh.net for his portfolio and contact details.


You should also consider probing how and why vendors are using AI techniques, and what those techniques actually are. In my experience it’s not uncommon for vendors to embellish the “AI” credentials of systems that are in reality based on simple flowchart-type choices rather than sophisticated approaches such as machine learning. Whilst machine learning approaches tend to function as black boxes, flowchart-type approaches (sometimes described as expert systems) are inherently more explainable – see the sketch at the end of this article.

It’s also very important to understand how AI products and services use data, particularly personal data. Is it used to train a machine learning model and then thrown away? Where will the data be sent and who will have access to it? All of the questions and issues that you are already familiar with around the General Data Protection Regulation (GDPR) are equally relevant in an AI context – in many ways more so, because areas like subject access requests and the right to erasure can be significantly more complex. IP
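As a footnote to the point about flowchart-type systems, the sketch below shows in Python why a rule-based approach lends itself to explanation. It is a toy example – every rule and threshold is invented – but each decision it returns can be traced to the exact rule that produced it, whereas a machine learning model making the same call would typically return only a score, with no equivalent built-in audit trail.

    # A toy "expert system": an ordered list of (condition, outcome) rules.
    # Every rule and threshold here is invented for illustration.
    RULES = [
        (lambda a: a["age"] < 18,            ("refer", "applicant is under 18")),
        (lambda a: a["claims"] > 3,          ("decline", "more than 3 prior claims")),
        (lambda a: a["years_licensed"] >= 5, ("accept", "licensed for 5+ years")),
    ]

    def decide(applicant: dict) -> tuple[str, str]:
        """Return (decision, reason); the reason names the rule that fired."""
        for condition, (decision, reason) in RULES:
            if condition(applicant):
                return decision, reason
        return "refer", "no rule matched; needs human review"

    decision, reason = decide({"age": 30, "claims": 1, "years_licensed": 7})
    print(decision, "-", reason)   # accept - licensed for 5+ years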



