“Data uses in AI should be based on consent. AI can also be seen as creating surveillance and a feeling of being surveilled that may dampen freedom of expression. The paternalistic or even blatantly manipulative idea of ‘nudging’ behaviour seems particularly problematic. Ultimately the development of AI poses dilemmas around human agency and values. So it is increasingly understood that we need responsible AI.” The information profession already has a strong ethical framework in place, and this can be a huge asset – however, engagement is key once again. Andrew says: “Information and knowledge professionals need to work through how our existing code of ethics and our values apply to the new challenges to define what responsible AI means in our context.
“Given the profession’s commitment to access to knowledge we will be excited by the way that AI technologies give easier access to knowledge, e.g. through much better translation tools and summarisation techniques. But we should also be aware of the risks to free expression, such as through automated forms of moderation.
“Our ethical commitment to spreading information skills is highly relevant. As AI is applied in more and more contexts, we need all employees, indeed all citizens, to understand the implications for them. There is a need for widespread AI literacy, building on data literacy. Our stress on privacy and confidentiality is highly relevant to ensuring how data is used and to avoiding creating systems of surveillance. Our code of ethics mentions avoiding bias in our own decision-making and advice, and this feels relevant to thinking about bias in AI.
“So, our core ethical principles and values seem to speak to AI. AI may not make us change our fundamental values or ethical principles. I do think, though, that AI is posing new forms of the old ethical dilemmas and puzzles, and that we need wide discussion in the profession, across sectoral boundaries, and with other professions to understand the implications.”
What next?
The report has a number of key recommendations for services and libraries, individual information workers, educational organisations, training providers, and also for CILIP.

CILIP Chief Executive Nick Poole has responded to the report, describing it as “a milestone in the development of our professional community.” He adds: “There is a tremendous opportunity ahead for the information professions and thanks to this research we are well-placed to capitalise on it. Working together, we can help our users look ahead with confidence to a near future that is being transformed by AI, machine learning, automation and robotics.”

CILIP has also set out the next steps for each of the report’s recommendations:

Recommendation: Develop and promote a shared vision of the role of the information professions in unlocking the potential of AI and machine learning through good data stewardship.
Action: The publication of the Research Report and our response mark the beginning of this process. We will work with other sector representative organisations and our membership to set out and advocate for a shared vision of the role of information professionals.

Recommendation: Ensure the PKSB is aligned to the future skills needs associated with AI and robotics, including computational sense, data science, data stewardship and soft skills.
Action: This work is underway already. The Review Group leading the revision of the new Professional Knowledge and Skills Base (PKSB) worked alongside Dr Cox to ensure that the new version is closely aligned to the future skills needs he has identified. This new PKSB will be launched to the profession in the Summer of 2021 and will thereafter form the basis of our ongoing training and CPD support.

Recommendation: Facilitate discussion within the profession about how CILIP’s Ethical Framework applies to the specific case of AI and robotics.
Action: We will work with the CILIP Ethics Committee and Policy Committee to examine the Ethical Framework in the context of the findings of this research, and where possible provide additional guidance and supporting materials for information professionals.

Recommendation: Identify and support pathfinder organisations and individuals who can demonstrate how AI and robotics can be introduced for the benefit of users and organisations.
Action: We will work with the CILIP Community (our Member Networks, Devolved Nations, Special Interest Groups and Diversity Networks) to identify people and exemplars which demonstrate the ‘art of the possible’ when it comes to AI, machine learning, automation and robotics.

Recommendation: Work with other professional bodies to foster a culture of knowledge sharing across the profession and with adjacent professions.
Action: We will seek out and engage with other professional bodies in related disciplines to form an ongoing network for knowledge exchange around AI, machine learning, automation and robotics.

Recommendation: Support our Member Networks and Special Interest Groups to develop communities of practice to support learning about the new technologies.
Action: We are currently developing a Community Strategy with our CILIP Community which will set out how we propose to develop ‘communities of practice’ that will facilitate collaboration and knowledge exchange. IP
Read the full report at
www.cilip.org.uk/page/ResearchReport

AI and robots, Information Professional, June 2021