Generative AI isn’t the only way to use AI


Whilst generative AI has quickly generated a lot of excitement (and trepidation), there are other hugely valuable ways to use AI, or what was traditionally referred to as “machine learning”. These include “text classification”, where a piece of text is put into a category (for example, an email is labelled “spam” or “not spam”), and “image object detection”, where a box is placed around particular objects in an image (for example, all of the cats in an image, or all of the flysheets in a digitised book). Whilst these systems are less flexible than generative AI systems, that constraint is also one of their strengths: a model trained to put text into one of two categories can only “make a mistake” by putting text into the wrong category. It cannot make a mistake by producing undesirable or factually incorrect text, as a generative system can.

Machine learning systems that focus on these narrower tasks have other advantages: they are often more constrained in the resources they require and can run on a normal PC without expensive and environmentally damaging server clusters; they are easier to interpret; and they can often be easier to integrate into existing workflows. Again, when we talk about “training” a model or a model “making a mistake”, we should ensure that the anthropomorphising language used around this technology is not misunderstood as revealing any inherent consciousness within the technology. These models are complex probability machines, not proto-robot librarians.
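As a toy illustration of the “text classification” idea above, note that a classifier’s entire output space is a fixed set of labels. The sketch below is a hypothetical keyword-scoring example (real systems learn their weights from labelled training data rather than hard-coding them), but it shows the key property: the system can mislabel an email, yet it can never produce anything other than one of its two categories.

```python
# Toy text classifier: the only possible outputs are the two labels.
# A hypothetical keyword-scoring sketch -- real classifiers learn
# these associations from labelled training data.

SPAM_WORDS = {"winner", "free", "prize", "urgent", "claim"}

def classify_email(text: str) -> str:
    """Label an email 'spam' or 'not spam' based on keyword hits."""
    words = set(text.lower().split())
    hits = len(words & SPAM_WORDS)
    # The model can put text into the wrong category, but it cannot
    # generate undesirable or factually incorrect text.
    return "spam" if hits >= 2 else "not spam"

print(classify_email("claim your free prize now"))            # spam
print(classify_email("Minutes from the library staff meeting"))  # not spam
```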


Natural Language Processing, known as NLP, is a sub-field of AI enabling computers to understand language. Chatbots, voice assistants, automatic translation, text summarisation, speech recognition and synthetic voices are applications that fall under NLP. Picture © Teresa Berndtsson / Better Images of AI / Letter Word Text Taxonomy / CC-BY 4.0


exhibit. A single system can respond to a wide range of prompts and produce a wide range of responses. However, this flexibility is also one of the significant limitations of generative AI: because the responses these systems produce are so open-ended, they can generate text which has the appearance of being a sensible response to a prompt but which contains subtle or not-so-subtle errors.


18 INFORMATION PROFESSIONAL

Where can information skills help?

There are various ways in which you can try to ‘steer’ the outputs of a generative model. For example, with a system like ChatGPT, you can manipulate the prompt in various ways, with the aim of receiving a ‘better’ response. These manipulations of prompts include asking for a response in a specific format, providing examples to the system, or adding encouragements such as “Take a deep breath and work on this problem step-by-step” to prompts. These strategies all fall under the heading of “prompt engineering”.
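The prompt manipulations described above amount to shaping the text a model receives. The sketch below assembles a prompt from a format instruction, a worked example and an encouragement; the template, wording and example are illustrative assumptions, not a prescribed format for any particular system.

```python
# Sketch of "prompt engineering": steering a model by shaping its input.
# The template, example and encouragement are illustrative assumptions,
# not a required format for ChatGPT or any other system.

def build_prompt(question: str) -> str:
    format_instruction = "Answer in one short sentence."
    worked_example = (
        "Q: What does NLP stand for?\n"
        "A: Natural Language Processing.\n"
    )
    encouragement = "Take a deep breath and work on this problem step-by-step."
    return (
        f"{format_instruction}\n\n"
        f"{worked_example}\n"
        f"Q: {question}\nA:\n\n"
        f"{encouragement}"
    )

print(build_prompt("What is text classification?"))
```

The assembled string would then be sent to the model in place of the bare question.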


This is one area where traditional information retrieval skills held by librarians can potentially play a huge role. Many information professionals have experience in constructing fairly complex searches, and the skills required to do this are not dissimilar to many of the skills one needs to become good at prompting a model to generate the desired response. The fact that some of these prompts appear to address the system as a living thing, i.e. one that can take a deep breath, should not be misunderstood. Under the surface, these systems are all essentially producing a set of probabilities: based on its training data, a model will return the series of words with the highest probabilities given the prompt.
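“Returning the words with the highest probabilities” can be illustrated with a toy next-word distribution. The candidate words and their probabilities below are invented for illustration; real models score tens of thousands of candidates at every step.

```python
# Toy illustration of next-word prediction: the model assigns a
# probability to each candidate continuation and, under the simplest
# (greedy) strategy, the highest-probability word is returned.
# These probabilities are invented for illustration only.

next_word_probs = {
    "librarian": 0.05,
    "cat": 0.10,
    "mat": 0.70,
    "moon": 0.15,
}

def greedy_next_word(probs: dict) -> str:
    """Pick the candidate word with the highest probability."""
    return max(probs, key=probs.get)

print(greedy_next_word(next_word_probs))  # mat
```

No “understanding” is involved at this step: the selection is a lookup over scores, which is why encouraging phrases work only insofar as they shift those probabilities.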


AI across the industries. A data scientist has found that their work (the algorithm depicted on their laptop screen) has ‘jumped’ out of the screen and threatens to cause problems in a variety of different industries. Here a hospital, bus and siren could represent healthcare, transport and emergency services. The data scientist looks shocked and worried about what trouble the AI may cause there. Picture © Yasmin Dwiputri & Data Hazards Project / Better Images of AI / AI across industries / CC-BY 4.0


October-November 2023

