a law where you do need to be mindful when it comes to bias and discrimination, because you have groups that are protected from indirect discrimination under the Equality Act." The legal landscape also takes in the GDPR, whose obligations will apply when it comes to putting candidate data through your systems or using employee data to generate performance evaluations, she said. That extends to decisions you might take based on the data collected, and to informing employees how you are going to use the data they provide. "When you look outside the UK, there are obligations around works councils and employee representative bodies, and often those existing obligations may go even further than something like the EU AI Act," she said. "Under the EU AI Act, there is an obligation to notify workers' representatives. There will be countries where you need to consult them and obtain their agreement, for example Germany." It was therefore important, she said, not to look at the legal landscape around AI in a vacuum. "While we do not have new legislation in the UK, we have a lot of existing legislation which will affect it," she told the panel.
BIAS & UNFAIRNESS: HOW TO RECOGNISE & MITIGATE DAMAGE

One of the most pressing challenges for AI in HR is inbuilt bias, which can manifest in various ways. AI algorithms have historically shown unintended discrimination because of biased training data: if the data on which a model is trained is biased, the model will produce biased results. For example, some AI recruitment tools have inadvertently marginalised women or certain ethnic groups because the datasets used to train them reflected societal biases.

"Bias is something that has long been an issue when it comes to rolling out tech," said Furat Ashraf. "Thinking about the adverse impact on certain demographics of the population – those with protected characteristics in the context of the Equality Act – is particularly important. There were a few very prominent cases some time ago, although technology has improved since then. Some of the big tech giants that were early adopters found that AI as a recruitment tool was deselecting women because the training data hadn't been put together properly."
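The kind of disparity Ashraf describes can be surfaced with a simple selection-rate comparison. Below is a minimal Python sketch, not drawn from the panel; the decisions data is invented, and the 0.8 threshold reflects the US "four-fifths" rule of thumb for adverse impact, used here purely as an illustrative benchmark:

```python
# A minimal adverse-impact check: compare the rate at which a
# screening model selects candidates from each group.

def selection_rate(outcomes):
    """Fraction of candidates marked as selected (1)."""
    return sum(outcomes) / len(outcomes)

# Hypothetical screening decisions from a model trained on
# historical data in which men were hired more often.
decisions = {
    "men":   [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],   # 80% selected
    "women": [1, 0, 0, 0, 1, 0, 0, 1, 0, 0],   # 30% selected
}

rates = {g: selection_rate(d) for g, d in decisions.items()}
reference = max(rates.values())

for group, rate in rates.items():
    ratio = rate / reference
    flag = "ADVERSE IMPACT?" if ratio < 0.8 else "ok"
    print(f"{group:6s} selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

A check like this does not explain why the disparity arises, but it is a cheap first test to run before and after deploying any screening tool.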
She said it was important to take the legal perspective into account when looking at bias and discrimination, because discrimination is one of the areas, together with whistleblowing, where tribunal damages are uncapped.

"Every HR practitioner in the room will say that if you've met a manager, you know that human bias is real," she said. "You can't remove your difficult manager." She raised the point that, in theory, if you can find the point of bias within an AI system, change the algorithm and retrain it on additional data, the issue should be fixed; something that is arguably much harder to do with humans.
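Her point about retraining on additional data can be illustrated with one well-known mitigation technique, reweighing (in the style of Kamiran & Calders), which rebalances the training set so that group membership and the hiring outcome look statistically independent. The sketch below uses invented numbers and is only one of several possible approaches, not a description of any system discussed by the panel:

```python
from collections import Counter

# Hypothetical training records as (group, hired) pairs, skewed so
# that men are mostly hired and women mostly rejected.
samples = [("men", 1)] * 8 + [("men", 0)] * 2 \
        + [("women", 1)] * 3 + [("women", 0)] * 7

n = len(samples)
group_p = Counter(g for g, _ in samples)
label_p = Counter(y for _, y in samples)
joint_p = Counter(samples)

# weight = P(group) * P(label) / P(group, label): this up-weights
# under-represented pairs (hired women, rejected men) and
# down-weights over-represented ones.
weights = {
    (g, y): (group_p[g] / n) * (label_p[y] / n) / (joint_p[(g, y)] / n)
    for (g, y) in joint_p
}

for pair, w in sorted(weights.items()):
    print(pair, round(w, 2))

# These weights would then be passed to the learner (e.g. via a
# sample_weight argument) when the model is retrained.
```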
She also talked about how disability fits into discrimination and bias, and how video technology in recruitment has shortcomings when it comes to picking up neurodiverse conditions or cultural differences. "That's another area where I think we have to be cautious," she said. "In that scenario, what are the emotional behaviours it is reading, and what dataset is the AI trained on?"
THE LEGAL IMPLICATIONS OF AI BIAS IN RECRUITMENT & HR

Emily Campbell-Ratcliffe is head of AI assurance at the UK's Department for Science, Innovation and Technology (DSIT). She leads DSIT's efforts to support the growth of an ethical, trustworthy and effective AI assurance ecosystem in the UK, a key pillar of the UK's AI governance framework. She represents the UK at the OECD's working party on AI governance and is part of the OECD.AI network of experts, contributing to its expert groups on AI risk and accountability, and on compute and climate.

She sounded a warning note about managing bias in AI from a legal point of view, and about the perception that algorithms can be fairer than humans if you can remove bias from the AI programme. "Bias mitigation is an extremely difficult legal issue," she said. "It's a very big grey area. The Equality and Human Rights Commission (EHRC) are struggling to understand where the lines are with bias mitigation algorithms and we are working with them on this, but it is slightly controversial and a legal grey area."
Another pitfall is organisations assuming that AI is a less biased recruitment tool even though it has been trained on internal data. "They're just reinforcing existing biases within the organisation, and people are expecting results they just fundamentally won't get," she said. "So it's about how you can enhance your training data through synthetic data. Otherwise, organisations are trying to bring in tools to make different decisions, but they end up fundamentally reinforcing existing hiring practices."
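One simple way to read "enhancing training data through synthetic data" is generating new examples for under-represented groups by interpolating between existing records, in the spirit of SMOTE. The sketch below is a hypothetical illustration with invented feature vectors, not a description of any tool mentioned by the panel:

```python
import random

def synthesise(records, n_new, rng=random.Random(0)):
    """Create n_new synthetic rows by interpolating random pairs."""
    out = []
    for _ in range(n_new):
        a, b = rng.sample(records, 2)
        t = rng.random()  # interpolation factor between the pair
        out.append([x + t * (y - x) for x, y in zip(a, b)])
    return out

# Hypothetical encoded features for an under-represented group,
# e.g. (years_experience, skills_score).
minority = [[2.0, 0.71], [5.0, 0.64], [3.5, 0.80], [7.0, 0.90]]

augmented = minority + synthesise(minority, n_new=4)
for row in augmented:
    print([round(v, 2) for v in row])

# The augmented rows would then be combined with the majority-group
# data so the retrained model no longer sees a skewed balance.
```

Interpolation only works for numeric features and can amplify artefacts in small samples, so synthetic augmentation is a complement to, not a substitute for, collecting more representative data.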
Nikki Sun explained that AI recruitment video interviews also have inbuilt flaws: sometimes the AI picks up random features that are not actually relevant to the job, such as the background to the interview, the lighting, the candidate's facial expressions and other details that might count against them. "It is an area we need