AI in Employment Practices: How to Mitigate the Risk of Bias and Discrimination

By Jonathan J. Brown

June 3, 2024

Jonathan J. Brown, a Senior Associate Attorney at Pearlman, Brown & Wax, LLP, represents employers in all areas of employment law including wrongful discharge, discrimination, harassment, retaliation, accommodation, interactive process, and wage and hour claims.

The legal landscape surrounding AI in employment practices is evolving rapidly as employers, including the vast majority of Fortune 500 companies, adopt artificial intelligence.

While the widespread integration of AI into recruitment and management tasks has promised enhanced efficiency and objectivity, there are concerns over potential biases and discrimination. Lawmakers and regulators are responding with updated guidelines and new standards to address these emerging challenges.

Regulation surrounding AI in employment is multifaceted, with initiatives spanning federal, state, and international levels. Federally, significant developments include updated guidelines from the Equal Employment Opportunity Commission (EEOC), a White House executive order establishing new AI safety and security standards, and the formation of the U.S. AI Safety Institute Consortium (AISIC). 

Meanwhile, states like California are spearheading their own AI law-related initiatives, with numerous legislative efforts underway. The European Union has also taken strides by approving a comprehensive AI legal framework. Most recently, on April 29, 2024, the Department of Labor (DOL) issued guidance in the form of a Field Assistance Bulletin, which not only addresses bias and discrimination but also delves into other employment law issues such as wage and hour regulations and the Family and Medical Leave Act (FMLA).

iTutorGroup, Workday Lawsuits Loom Large

Recent discrimination lawsuits serve as cautionary tales for companies deploying AI in their hiring and decision-making processes. In what has been called the first AI discrimination suit of its kind, iTutorGroup settled with the EEOC and agreed to pay $365,000 to resolve allegations of age and gender discrimination.

Similarly, a pending class-action lawsuit against Workday highlights the potential risks associated with automated screening platforms. Plaintiff Derek Mobley’s allegations of systematic discrimination based on race, age, and disability raise fundamental questions about accountability and liability in the AI discrimination space. As companies increasingly rely on AI for hiring decisions, the need for clear guidelines and legal frameworks becomes paramount to protect against discriminatory practices.

Vendors, Liability Are Prime Concerns

The question of liability in AI discrimination cases is complex and multifaceted. While companies like Workday argue that their products are adaptable and customizable by customers, the extent of control delegated to AI vendors remains contested. Under most discrimination statutes, including Title VII of the Civil Rights Act of 1964, covered employers are liable for the discriminatory actions of their agents. As AI systems play an increasingly significant role in hiring, the line between AI vendors, employers, and their agents becomes blurred, complicating the determination of liability.

The widespread adoption of AI in business and employment presents both opportunities and challenges. While AI offers the potential to streamline processes and enhance decision-making, the risks of bias and discrimination cannot be overlooked. 

To navigate the evolving legal landscape and mitigate the risks of bias and discrimination, employers should prioritize several key measures:

  1. Ensure contracts with AI vendors clearly allocate responsibilities for compliance with antidiscrimination laws and bias prevention.
  2. Carefully vet AI vendors and products, and demand transparency regarding the algorithms and data used in AI systems.
  3. Conduct regular audits of hiring processes and AI tools to monitor for bias and ensure fairness.
  4. Provide training for employees involved in the hiring process so they can recognize and address AI bias effectively.
  5. Stay informed about legal developments and seek guidance from legal counsel to navigate the complex legal landscape surrounding AI.

Implementing these proactive measures will help employers minimize the risks associated with AI-related discrimination and ensure fair and equitable hiring practices.
