Workday is facing a collective action lawsuit after a federal judge allowed claims to move forward alleging that its AI-driven hiring software unfairly filters out applicants over 40. The case was brought by multiple plaintiffs who say they were repeatedly rejected for jobs because of biased algorithms.
The leading HR software company has denied the claims, calling the judge’s decision procedural and stating that the lawsuit relies solely on allegations. The outcome could significantly affect how AI is used in recruitment across industries.
How the lawsuit against Workday began
Workday’s AI hiring tool at the center of the case is an automated screening system embedded in its platform. It is used by thousands of companies globally. Plaintiffs argue that, often within minutes of applying, the tool disproportionately excludes:
- Candidates over 40.
- Individuals who are Black.
- Those who have disabilities.
The legal action began with Derek Mobley, who says that despite being qualified and experienced, he was rejected from more than 100 roles over several years without ever being interviewed. Additional plaintiffs have since joined with similar complaints.
Patterns in rejections raise red flags
According to the complaint, the speed of the rejection emails suggests decisions were made automatically, without human involvement. Mobley cited an instance in which he applied for a job at 12:55 a.m. and was rejected before 2:00 a.m., a turnaround he says points to an algorithm dismissing his application.
Another plaintiff, Jill Hughes, reported receiving hundreds of rejection notices within hours, usually during overnight hours. She also noted that some rejections wrongly stated she didn’t meet basic qualifications, raising further concerns about potential flaws in the screening process.
AI’s influence in hiring decisions
The lawsuit against Workday highlights broader concerns about AI bias in hiring. Research from the University of Washington shows these systems can inherit and amplify racial and gender biases, reinforcing worries about fairness in recruitment technology.
Bias extends beyond resume screening. AI used in interviews, language analysis, and candidate rankings can also yield flawed results, increasing the risk of unfair decisions when human oversight is minimal. In one reported case, a make-up artist lost a job at a top brand after an AI tool using facial recognition flagged her body language negatively.
Protecting job seekers against AI discrimination
As scrutiny of algorithmic hiring intensifies, some lawmakers and advocacy groups are pushing for new legislation that mandates transparency and fairness in AI-driven recruitment.
These efforts could include requiring companies to disclose when automated tools are used, to explain how decisions are made, and to offer candidates a pathway to request human review. Experts warn that without such guardrails, technology designed to streamline hiring may instead deepen inequality in the job market.