Technological advancements, coupled with a desire to optimise recruitment processes and to attract higher-quality candidates, have led to an increase in the use of artificial intelligence (AI) by employers during recruitment.
AI software is increasingly mainstream and features in at least one stage of the recruitment process at many global organisations, including Vodafone, McDonald’s and Unilever; at face value, its benefits are easy to see.
In 2019, Unilever reported that its use of AI had saved its human recruiters approximately 100,000 hours of interviewing time and nearly £1m per year. However, overreliance on AI when making recruitment decisions can easily wrong-foot employers, leading to inadvertent breaches of UK data protection and anti-discrimination laws.
In the data privacy arena, the UK GDPR restricts employers from making solely automated decisions that have a significant impact on job applicants except in limited circumstances, such as where the decision is necessary for entering into or performing the employment contract or where the data subject has consented.
An employer makes a solely automated decision where the decision is reached through AI without human scrutiny. Employers are unlikely to meet the UK GDPR exemptions and should always ensure meaningful human involvement in the outcome of any employment decision involving AI. The ability to process a job applicant’s health data in solely automated decisions is even more limited and must be avoided.
Within UK employment law, the use of AI can give rise to indirect discrimination claims where an algorithm’s output, derived from analysis of historical data sets, puts people who share a protected characteristic at a particular disadvantage.
In October 2018, an industry-leading retailer was reported to have scrapped an algorithm for recruiting new staff after its machine learning system learned to prefer male candidates over female ones. The algorithm had reportedly been trained on data sets built from patterns in CVs the company had previously received. Because the overwhelming majority of those CVs came from men, the resulting algorithm inadvertently discriminated on the basis of sex.
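The mechanism behind that incident can be illustrated with a deliberately simplified sketch. The data and the word-frequency scorer below are entirely hypothetical (they are not the retailer's actual system): a naive model scores each CV word by how often it appeared in previously hired versus rejected CVs, so a word that merely correlates with sex, rather than with ability, picks up a negative weight when past hires skewed male.

```python
# Hypothetical illustration: how a naive CV scorer trained on historically
# male-dominated hiring data can absorb sex bias. All data is invented.
from collections import Counter

# Invented historical CVs as (words, hired?) pairs; most hires are male CVs.
history = [
    (["software", "engineer", "chess", "club"], True),
    (["software", "engineer", "football"], True),
    (["software", "engineer", "women's", "chess", "club"], False),
    (["software", "developer", "rugby"], True),
    (["software", "developer", "women's", "coding", "society"], False),
]

def train_weights(history):
    """Weight each word by (hired CVs containing it) - (rejected CVs containing it)."""
    hired, rejected = Counter(), Counter()
    for words, was_hired in history:
        (hired if was_hired else rejected).update(set(words))
    vocab = set(hired) | set(rejected)
    return {w: hired[w] - rejected[w] for w in vocab}

def score(cv_words, weights):
    """Sum the learned weights of the words in a CV."""
    return sum(weights.get(w, 0) for w in cv_words)

weights = train_weights(history)

# "women's" is a proxy for sex, not skill, yet it receives a negative weight
# purely because past hires skewed male - so otherwise-identical CVs diverge.
cv_a = ["software", "engineer", "chess", "club"]
cv_b = ["software", "engineer", "women's", "chess", "club"]
print(weights["women's"])                           # negative
print(score(cv_a, weights) > score(cv_b, weights))  # True
```

No single rule in this sketch mentions sex; the disadvantage emerges entirely from the skew in the training data, which is precisely why indirect discrimination of this kind is hard to spot without auditing the model's inputs and outputs.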
In September 2021, UK campaign group Global Witness accused Facebook of discrimination in the way it showed job advertisements, after an experiment suggested certain jobs were being advertised predominantly to one sex. By way of example, Global Witness said it created two job advertisements on Facebook: of the people shown an advertisement for mechanics, 96 per cent were men, whereas 95 per cent of those shown an advertisement for nursery nurses were women. Global Witness complained that Facebook’s algorithm, which decided to whom the advertisements were shown, exhibited a clear bias in its application.