…to be very careful about,” she said. “For example, if there is a bias, we don’t know what bias the machine has.”

Furat Ashraf agreed that this example was completely prohibited under the EU AI Act. “One specific system is the emotional recognition tools, which are often used in recruitment to tell how engaged and enthusiastic someone is,” she said. “That has been marked in the EU AI Act, and also in the TUC’s draft bill, as something that people are particularly concerned about and on the red list.”

Other potential pitfalls that might cause bias in the system are special rules around sick leave, maternity leave or when people are out of the office. “Does your system take account of those, and is there sufficient human augmentation to make sure that you’re not just going off the AI’s end decision?” she asked. “We have redundancy selection tools, and it might seem as though you have mitigated all the risk, but have you looked at that tool and considered whether there is somebody in there who has made a complaint of discrimination?”

Whistleblowing is on the rise and is a protected disclosure, but it is also something that an algorithm will not know about, she said.

STEPS TO MITIGATE BIAS IN AI & RECRUITMENT
Emily Campbell-Ratcliffe suggested that there should always be a human checking the outcomes, reading through CVs and personal statements and ensuring that the AI is not making mistakes. “It is very easy to game the system nowadays,” she said. “If you’re just using a machine to read CVs, it’s very easy. So, from a business perspective, having a human there is very important.”

Furat Ashraf suggested that all organisations should audit their recruitment processes and tools to find out what they are already using. “The starting point for clients is auditing what you already use, because people say they don’t want to use an AI tool and then we find most of the recruitment tools already use AI,” she said. “When you use them, you never want to be the case example of something that wasn’t fully tried and tested. I’m always going to talk to clients about what you agree contractually with your provider and how you mitigate some risk that way.”

THE IMPORTANCE OF TRANSPARENCY
Emily Campbell-Ratcliffe acknowledged that, at the negotiation stage, it is very hard to agree who picks up the liability for an inaccurate or biased output. “For HR, it is all very well your commercial or legal colleagues saying the provider will pick up the liability if there is a discriminatory output. As an employer, the claim is going to be brought against you and you are going to be the one in the tribunal explaining the decision. It doesn’t then matter that you have a contractual right to make a claim against the provider of the tool.”

She said companies that have really good early engagement strategies, and that engage with employee representative bodies and bring them on board, can decrease some of the suspicion around what the tool is doing. “When it comes to litigation, the US goes first, and we are already seeing in the US a whole new work stream of litigation: class actions against providers that deliver AI screening.”

She cited the example of one man, who was Black and had health conditions, who was rejected for a job by 100 different companies. All of the companies were using the same AI tool, and that tool was deselecting him. He is now pursuing a case against the provider rather than the companies that rejected him.

She called for meaningful transparency in organisations, with an emphasis on the deployer of the system, not the developer. “So if you are the deployer, there are certain questions you should be asking developers, and they should be explaining those in a way that you understand and in plain English. Even if you don’t have the technical expertise internally, you can feel confident that a system is going to do what it is intended to, and in the right way.”


LOOKING FORWARD – USING AI SAFELY
As organisations embrace AI, building trust and ensuring accountability are as critical as technological sophistication. By focusing on transparency, human oversight and working through potential risks and legal liabilities, companies can leverage AI’s benefits while minimising legal or reputational damage. This balanced approach, creating a transparent, responsible AI system, will be essential for any organisation looking to use AI as a tool for sustainable and ethical growth.


“Bias is something that has long been an issue when it comes to rolling out tech. Thinking about the adverse impact on certain demographics of the population – those with protected characteristics in the context of the Equality Act – is particularly important.”


FURAT ASHRAF, PARTNER, BIRD & BIRD IN THE INTERNATIONAL HR SERVICES GROUP



