CILIP – AI Report.


…information professional work. So it’s about learning how to apply the tools to our own services.”


Having an engaged profession will be increasingly important as the technology moves forward – without it there is a danger of missing opportunities. Andrew says: “Some ways of using AI are only now emerging. These could be powerful in terms of enabling better access to knowledge, e.g. machines trained to read documents to provide summaries or look for patterns. This is very exciting for our work, but there seems to be a sense in the profession that quite a lot of the promises from the tech firms around these applications are still hype. The tools may not do much new, and may require a lot more resource to get them to work than is promised. It’s not clear they do things that are actually needed. They only do some tasks, and humans are very much still needed to complete the job.

“Some strands of AI promise to save us time and do things more reliably, some could enhance what we do, some might change what we can do. Over the last decades, information professionals have shown themselves able to adapt to change. AI presents another wave of change that we want to be active in responding to and shaping positively. I do not think we can choose not to engage with AI. We haven’t done so far!”

The report delivers a number of recommendations that feed into the notion of engagement, and the practicalities of skills and training. However, Andrew believes that while information professionals should be interested and engaged, it does not mean that all of them will need to become technology experts. For many it will be about awareness, and being able to articulate that to their end-users so that they have a clearer understanding of AI.

He says: “I don’t see all of us becoming data scientists. A common recommendation from the people I interviewed for the research was to get your hands on and play with increasingly accessible AI tools to get a feel for how they can be used. This seems to be excellent advice. But it may be as much about developing a vision of how AI can be developed for the benefit of our users and contributing to a multi-professional team to develop AI-enabled services.”


Because the fuel for AI is data, the skills and knowledge information professionals already have are highly relevant and need to be tweaked rather than completely relearned. Andrew points out: “AI is founded on data. We have always been a data profession. We know a lot about managing data (though we are not the only profession to claim to do so), e.g. about the importance of metadata, of standards in descriptions, of interoperability. Data governance is part of information governance. I think we also have important insights on the limits of data. For example, we understand that collections have a history, a provenance – that makes them only partially reliable. A certain scepticism about data is essential for responsible AI. So our professional knowledge base enables us to play an important role in an AI world.

“But it would be wrong to think AI is nothing new and that we only need to translate our existing practices. I think there is a language shift we need to make, to think and talk more in terms of data, and to work on translating our information skills to data.”


The question of ethics is perhaps just as important as the applications that can be found for AI. From transparency issues over how data is collected and used, and issues of inbuilt bias in algorithms, to concerns about surveillance and privacy or the use of AI to make decisions about individuals – there are real reasons for concern. The report covers these issues, and more, in detail. Andrew says: “AI poses many ethical, safety and security challenges. The impact of unrecognised bias in both data and algorithms has become increasingly apparent. It is often not transparent how AI works, or clear who is responsible for mistakes that it might make. AI’s use of data can impact individual privacy.


June 2021

