AI has been a prominent emerging technology for at least a decade. The speed of progress in the past few years has been remarkable – and possibly unsettling. Generative AI, most notably in the form of large language models (LLMs) such as ChatGPT, has gone viral as the first general-purpose AI technology to reach a global audience.

As AI systems integrate more deeply into everyday life and complex decision-making, Professor Michael Wooldridge argued, there is a need to cultivate a balanced, critically aware approach to these technologies. Since every one of us interacts with AI in our daily lives – from chatbots to recommendation algorithms, and from computer software to wearable technology such as Apple Watches – we need to be aware that all our interactions form training data for AI systems. He told the audience that the key to balancing technology with the human touch is learning to critically question AI’s outputs, actively engage in “prompt engineering” to get better results from AI, and ensure that we treat personal data with the importance it deserves.


HOW & WHY AI EXPLODED IN 2024

Professor Wooldridge explained how AI has been around since the 1960s, but its development has only begun to gather pace in the last decade. This is because it took the advent of massive computing power, and an understanding of how to train AI, for it to progress. “The key point in this century was that we learned how to be able to train neural networks,” he said, explaining why AI has been going through a recent period of rapid expansion. “When we train a neural network, we show it an input and we show it the desired output. We adjust the network so that it is producing an answer closer to the answer that we want. That is how we train it.”
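The training loop he describes can be illustrated with a deliberately tiny sketch. The example below is not Professor Wooldridge’s code and uses a single made-up “neuron” rather than a real neural network, but it follows the same idea: present an input, compare the answer with the desired output, and adjust the parameters so the next answer is a little closer to what we want.

```python
# Illustrative sketch only: a single "neuron" (prediction = w * x + b)
# trained with gradient descent on made-up data for the rule y = 2x + 1.
w, b = 0.0, 0.0
learning_rate = 0.01

# Training pairs: (input, desired output)
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

for epoch in range(2000):
    for x, target in data:
        prediction = w * x + b        # show it an input, get its answer
        error = prediction - target   # compare with the desired output
        # Adjust the parameters so the answer moves closer to the target
        # (gradients of the squared error with respect to w and b).
        w -= learning_rate * 2 * error * x
        b -= learning_rate * 2 * error

print(f"learned w = {w:.2f}, b = {b:.2f}")  # converges towards w = 2, b = 1
```

Real LLMs apply the same adjust-towards-the-target step to billions of parameters over vast amounts of text, which is why the massive computing power he mentions was the turning point.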


Yet these systems, at their core, are limited by the data they learn from and the structure of their algorithms. Blindly trusting AI can lead to unintended consequences, whether in over-relying on automated decisions or in inadvertently surrendering control over personal information.

He cited the example of checking what AI said about his own academic background. It claimed he had studied at Cambridge, something he had never done and which does not appear in any information about him or in any CV.


“In 2020, I’m playing with GPT-3 and I say, ‘Tell me about Michael Wooldridge’. It says: ‘Michael Wooldridge is a professor at the University of Oxford known for his research in artificial intelligence and studied at the University of Cambridge’. I didn’t study at Cambridge. I’ve never had any affiliation with Cambridge whatsoever. Never worked there and I wasn’t a student there. Remember the way it works. It’s not designed to tell you the truth. It’s designed to take the most plausible thing. It’s read probably thousands of biographies of Oxford professors and studying at Oxford or Cambridge is very, very common. So, in the absence of any information, it’s filling in the gap with what it thinks is the most plausible. And it’s very plausible. If you read that, you wouldn’t have batted an eyelid. You would have taken it for granted.”
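His point about plausibility reflects how these models generate text: at each step they pick (or sample) a likely next word, with no notion of whether the resulting claim is true. The toy example below uses entirely made-up probabilities to illustrate this; it is not how GPT-3 was queried, just a sketch of “most plausible wins”.

```python
# Made-up next-word probabilities after a phrase like
# "...is a professor at Oxford who studied at the University of"
next_word_probabilities = {
    "Oxford": 0.41,
    "Cambridge": 0.38,   # very common in biographies of Oxford academics
    "Manchester": 0.06,
    "Edinburgh": 0.05,
}

# The model simply favours the most plausible continuation; nothing in this
# step checks the claim against reality.
most_plausible = max(next_word_probabilities, key=next_word_probabilities.get)
print(most_plausible)
```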


AI IS NOT A SUPER BRAIN: QUESTIONING ITS OUTPUTS

He explained that the programmes “lie convincingly” a lot. “They don’t know what the truth is, but the fact that they lie plausibly makes those lies particularly problematic in terms of bias and toxicity,” said Professor Wooldridge. For this reason, he said one of the most critical skills in the digital age is learning to treat AI with a healthy dose of scepticism. AI systems are designed to predict, optimise and assist, but they aren’t infallible or omniscient. They operate on patterns derived from data. This fundamental nature of AI means that anyone using it needs to be aware of its shortcomings. This is particularly true in cases where accuracy is of vital importance.

He cited the example of the Harm Assessment Risk Tool (HART) introduced by Durham Constabulary. This AI system was created to assist police in making decisions about whether a detainee should be


“AS AI SYSTEMS GROW IN SOPHISTICATION & AUTONOMY, OUR RESPONSIBILITY AS USERS AND DEVELOPERS IS TO ENGAGE WITH THESE SYSTEMS ACTIVELY & THOUGHTFULLY.”


MICHAEL WOOLDRIDGE, PROFESSOR OF COMPUTER SCIENCE AT THE UNIVERSITY OF OXFORD & DIRECTOR OF FOUNDATIONAL AI RESEARCH, ALAN TURING INSTITUTE



