Why We Need To Be Cautious About the Use of AI in Human Resources

By Leah Stiegler and Anne Bibeau

January 13, 2025

Leah M. Stiegler and Anne Bibeau are principals in the Labor & Employment practice at Woods Rogers in Virginia. They advise company leaders and their human resources departments on compliance with employment laws. Woods Rogers hosts the biweekly video series “What’s the Tea in L&E,” available on YouTube.

The giant leap in artificial intelligence has caused equal parts fascination and fear for legal departments. While AI can lend great efficiency to the workplace, the technology is error-prone and, let's face it, threatens to replace us all. This tension between curiosity and caution is on full display in the use of AI in human resources. HR professionals use AI to cull resumes, monitor employees, evaluate performance, set compensation, and streamline other operational functions. But users beware: AI can generate significant liability for your business.

Bias in Recruitment

In 2023, iTutor Group used an AI recruitment tool that caught the attention of the U.S. Equal Employment Opportunity Commission (EEOC). The tool decided that certain candidates for tutoring positions were preferable and automatically rejected female applicants aged 55 and older and male applicants aged 60 and older; more than 200 qualified applicants were rejected based on their age alone. iTutor Group ultimately agreed to pay $365,000 and furnish other relief to settle the EEOC's employment discrimination claim over its use of the tool.

In an earlier example of AI gone awry, from 2018, Amazon developed an AI tool to sort through resumes and score applicants based on patterns drawn from the resumes of previously successful candidates. Because most of those successful candidates were male, the AI taught itself that male candidates were preferable and rejected or downgraded female candidates. Amazon tried to correct the problem but ultimately abandoned the program after concluding the bias could not be eliminated.

Data Privacy Risks and Responsibility

In addition to discrimination risks, be aware that any data you enter into a generative AI tool, such as ChatGPT, could be used to train the AI. Although some generative AI services allow users to opt out of having their data used for training, once shared with the AI, the data is neither confidential nor secure.

Our colleague Ross Broudy, an attorney in our firm’s Cybersecurity & Data Privacy Practice, stresses the importance of handling sensitive personal information carefully.

“Generally, you should not enter any of the following types of data into an AI system: proprietary information, confidential information, such as an employee’s personal identifiers (e.g., Social Security Number, date of birth, etc.), medical information, or financial information,” he said.
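That advice lends itself to a simple technical control: scrub recognizable identifiers before a prompt ever leaves your systems. The Python sketch below is a minimal, hypothetical illustration; the regex patterns and the `redact` helper are our own assumptions, not a vetted tool, and a real deployment should rely on purpose-built data-loss-prevention software (notably, names and free-text medical details would slip past patterns this simple).

```python
import re

# Hypothetical, non-exhaustive patterns for identifiers of the kind
# mentioned above. A few regexes are illustrative only; they are not a
# substitute for a vetted data-loss-prevention (DLP) tool.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # Social Security Number
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),      # date of birth
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
}

def redact(text: str) -> str:
    """Replace recognizable personal identifiers with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Draft a termination letter for Jane Doe, SSN 123-45-6789, DOB 4/2/1961."
print(redact(prompt))
# Draft a termination letter for Jane Doe, SSN [SSN REDACTED], DOB [DOB REDACTED].
```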

This is not to say that businesses should steer clear of AI. However, businesses need to be mindful that they are responsible for AI outputs just as they would be for decisions made without the technology. If the AI discriminates against a protected class, the business, not the AI, will be liable. In other words, AI is neither an exception to employment laws nor a defense against liability. It is therefore important to include robust indemnification clauses both in contracts to purchase AI applications and in contracts with vendors that may use AI to deliver services to your business.

Follow and Establish Clear Guidelines

The EEOC, the Department of Labor, and the Office of Federal Contract Compliance Programs have issued guidance encouraging businesses to validate any AI used in recruiting, performance evaluations, promotions, and other HR functions to determine whether such programs have a disparate impact on a protected class. Businesses should use the Uniform Guidelines on Employee Selection Procedures for this analysis, and they should conduct validation studies periodically, as bias can creep into an AI program over time as it continues to train and retrain itself.
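For context, the Uniform Guidelines' familiar screening heuristic is the four-fifths rule (29 C.F.R. § 1607.4(D)): a selection rate for any group that is less than 80% of the rate for the group with the highest rate is generally regarded as evidence of adverse impact. The Python sketch below walks through that arithmetic on hypothetical screening numbers of our own invention; an actual validation study involves far more than this single ratio.

```python
def four_fifths_check(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, float]:
    """Compute each group's selection rate and its impact ratio against the
    group with the highest rate, per the four-fifths rule in the Uniform
    Guidelines on Employee Selection Procedures (29 C.F.R. 1607.4(D))."""
    rates = {group: selected[group] / applicants[group] for group in applicants}
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}

# Hypothetical results from an AI resume-screening tool.
applicants = {"men": 400, "women": 400}
selected = {"men": 120, "women": 60}   # selection rates: 30% vs. 15%

for group, ratio in four_fifths_check(selected, applicants).items():
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
# men: impact ratio 1.00 -> ok
# women: impact ratio 0.50 -> possible adverse impact
```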

Of course, the cost and time of these validation studies may eat into the monetary and time savings AI's efficiencies provide, but they are important steps to mitigate discrimination risks and liability.

If your organization uses AI in human resources functions, be upfront and transparent about it: inform both applicants and employees. Avoid AI programs that rely on facial characterization or emotional inference in hiring, as such programs could disparately impact individuals with disabilities.

One of the chief challenges in implementing AI is the absence of controls over its deployment and use. As a critical first step, your legal department should establish basic AI policies and guidelines to keep employee use of AI from going off the rails. In the long run, engage in a cross-department collaborative effort to learn how AI is being used across your organization and develop an AI governance framework to ensure its responsible development, deployment, and use.

Finally, businesses should adhere to other best practices in HR. Have a human HR professional review the AI’s output to ensure compliance with applicable laws. Keep records supporting all employment decisions and maintain accurate job descriptions.
