
Recent class action litigation, advancing a variety of liability theories, underscores the rapidly increasing legal risk associated with using poorly understood AI-driven hiring and applicant screening tools. Courts and regulators are signaling that both vendors and employers may face direct liability when AI systems are used to screen, rank, or reject job applicants without adequate governance and oversight.
Key Case to Watch: Mobley v. Workday (N.D. Cal.)
In this federal class action, a job applicant alleges, among other claims, that Workday’s AI screening tools disproportionately rejected older applicants. The court has allowed the case to proceed under a federal disparate impact theory, rejecting Workday’s argument that it is merely a neutral software provider. Importantly, the court held that Workday may be liable as an agent of its employer customers. Class notice has now been approved, and Workday has been ordered to produce its customer list in discovery.
Why this matters:
- The putative class potentially includes millions of applicants.
- The court emphasized the plaintiff’s allegations of fully automated decision-making without meaningful human oversight.
- The ruling raises the risk of direct vendor liability, joint liability, indemnification disputes, and joinder or downstream claims against employers using AI tools.
Other AI Hiring Cases Gaining Momentum
- Kistler v. Eightfold AI (N.D. Cal., filed Jan. 2026): Challenges AI-driven applicant profiling under the FCRA and California law, alleging opaque data scraping, automated “dossiers,” and lack of applicant transparency.
- Harper v. Sirius XM (E.D. Mich.): Title VII class claims alleging that an AI-powered applicant tracking system perpetuated racial bias.
The Takeaway for Employers
Courts and regulators are increasingly skeptical of “black box” hiring tools. Employers using AI—whether custom-built or third-party—should expect scrutiny around:
- Bias testing and documentation
- Human oversight and appeal processes
- Vendor diligence and contractual risk allocation
- Compliance with emerging AI laws (e.g., New York City, Illinois, Colorado, California, EU)
If your organization relies on AI in hiring, or works with a third-party provider that includes AI tools in the applicant screening process, and has not recently reviewed its governance, auditing, or vendor agreements, these cases should serve as a wake-up call.
We are actively advising clients on AI hiring compliance, risk mitigation, and governance best practices. Please reach out to Catherine Castaldo or any professionals on our Cybersecurity and Data Privacy team to discuss next steps.