
When Algorithms Fail: Exposing Age Discrimination Through AI Hiring Tools

A class action lawsuit alleging age discrimination by an AI screening tool provider highlights important considerations for employers seeking to reduce bias and discrimination.


By Melissa A. Silver, Brightmine Principal Legal Editor

A recent court ruling in Mobley v. Workday is making waves across HR when it comes to age discrimination and the tools used to screen job applicants. A federal district court in California granted preliminary certification of a nationwide collective action on plaintiff Derek Mobley’s claim under the Age Discrimination in Employment Act (ADEA). The action covers all individuals aged 40 and over who applied for jobs through Workday, Inc.’s application platform from September 24, 2020, to the present and were denied employment recommendations.

Workday offers human resource management services, including applicant screening, across various industries. Its platform claims to reduce hiring time by using artificial intelligence (AI) systems to move candidates forward in the recruiting process. Mobley argues that these AI systems are designed in a manner that reflects employer biases and relies on biased training data, preventing qualified candidates from advancing in the hiring process unless they pass Workday’s screening algorithms.

Mobley claims that, since 2017, he has applied to more than 100 positions with companies that use Workday’s features and was rejected each time. In support of his motion, Mobley submitted declarations from four other individuals who described similar experiences of automated rejections despite meeting the stated qualifications.

The court held that whether Workday’s system had a disparate impact on applicants over 40 is a common issue that can be addressed collectively.

While this case is still in its infancy and its ultimate outcome is unknown, it is a gut check for HR to choose and use AI tools wisely when looking to streamline workflows. Although the Mobley case may leave HR unsettled about using AI screening applications, this is not the first age discrimination case involving artificial intelligence to make the news.

In 2023, the Equal Employment Opportunity Commission (EEOC) settled with iTutorGroup over allegations of systemic age discrimination. That case also involved algorithmic discrimination: the company’s application software allegedly rejected female applicants over 55 and male applicants over 60 automatically, eliminating more than 200 qualified candidates. As part of the settlement, iTutorGroup agreed to pay $365,000 to those affected and to revise its hiring practices.

Key takeaways

As AI technologies continue to advance and organizations increasingly adopt them, employees, particularly those in HR functions, are actively seeking AI solutions that streamline repetitive tasks and enhance productivity. The talent acquisition process exemplifies this trend. Companies face many recruitment challenges, such as lengthy hiring timelines and the difficulty of filtering out unqualified applicants from a large pool of candidates. It is therefore not surprising that HR departments are turning to AI tools to facilitate parts of the selection process.

However, with every benefit comes inherent risks. The Mobley case underscores the necessity for organizations to establish quality and safety standards when integrating AI into their operations, especially when collaborating with third-party vendors. To mitigate potential issues, organizations should take several proactive steps:

  • When utilizing a third-party vendor, thoroughly review their policies, agreements and product documentation to gain insight into the quality and safety standards they uphold.
  • Set stringent quality standards for AI applications—including those provided by third-party vendors—that address essential aspects such as data quality, security measures, privacy protocols, safeguards against AI errors and strategies for preventing bias and discrimination.
  • Conduct regular audits of new AI tools to ensure compliance with established criteria and legal requirements concerning quality assurance, security integrity, bias mitigation and discrimination prevention. Notably, New York City already mandates bias audits for employers using automated employment decision tools. Additionally, effective February 1, 2026, Colorado will require businesses to take reasonable care in protecting residents from algorithmic discrimination based on any protected characteristic under federal or state law.
  • Implement a structured auditing schedule for the ongoing evaluation of deployed AI systems to protect against bias and discrimination in employment decisions.
  • Ensure that all vetting and auditing practices associated with AI comply with relevant laws and regulations and continue to track legal developments.

By leveraging AI effectively, organizations can significantly boost productivity in their hiring process while enhancing performance and fostering innovation. Nevertheless, it is crucial that these advancements do not inadvertently lead to bias or discrimination against employees or job candidates. Companies must therefore commit to employing AI responsibly and maintain human oversight of AI tools in order to avoid potential negative consequences.

Get access to our extensive HR resources and expertise

In an ever-changing regulatory environment, we have everything you need to stay in control and compliant.

For full access, sign up for a subscription to the HR & Compliance Center today.

About the author

Melissa A. Silver, JD
Principal Legal Editor, Brightmine

Melissa Silver is a former practicing employment law attorney with 10 years of experience. As content manager of the Brightmine Policy Solutions team, she supports Automated Handbook Management by ensuring the tool includes the most up-to-date and legally compliant local, state and federal employment policies available.

Melissa holds a Juris Doctor degree from Syracuse University College of Law. Before joining Brightmine, she defended clients in a variety of employment-related matters involving harassment, discrimination and retaliation claims, as well as the enforcement of restrictive covenants.
