
How In-House Counsel Should Address Risk When Deploying New AI Tools

By David J. Oberly

April 19, 2024


David Oberly is Of Counsel in the Washington, D.C. office of Baker Donelson, and leads the firm’s dedicated Biometrics Team. David is also the author of Biometric Data Privacy Compliance & Best Practices. He can be reached at doberly@bakerdonelson.com or followed on X at @DavidJOberly.

In 2024, harnessing the immense benefits of artificial intelligence (AI) will remain a top priority for corporate boards and C-suite executives across all industries. Consequently, in-house legal teams must be equipped to develop and implement effective AI governance programs that allow these cutting-edge technologies to be deployed in a legally compliant, responsible, and safe manner.

Legal landscape

At the federal level, Congress has introduced a number of AI-focused bills in recent years, but none have gained significant traction or support. In the absence of clear guidance from lawmakers in Washington, D.C., federal agencies have stepped up to fill the void. At the forefront of this activity is the nation’s de facto federal privacy regulator, the Federal Trade Commission (FTC), which has been extremely active in pursuing investigations and enforcement actions in the AI space over the last 12 months. Moreover, recent FTC guidance has reinforced the agency’s commitment to regulating AI tools—including in the hiring and workplace context.

The Equal Employment Opportunity Commission (EEOC) has also been a key player in scrutinizing and policing the use of AI, including by releasing two guidance documents detailing the ways AI can run afoul of federal equal employment opportunity laws. Last September, the EEOC also settled its first action specifically targeting allegedly discriminatory AI employment practices, a matter relating to automated job application software.

Illinois and Maryland currently have laws on the books governing the use of AI tools in the hiring context. More recently, New York City enacted Local Law 144, which places significant restrictions on employers in the city that use AI to assist with employment decisions.

Moving forward, in-house counsel should expect lawmakers and regulators at the federal, state, and local levels to continue their efforts to enact greater regulation over the use of AI technology, especially with respect to addressing the significant bias- and discrimination-related concerns shared by legislators and policymakers at all three levels of government.

Key considerations and practical strategies for in-house legal teams

As technology-focused regulation continues to expand, so too does the desire of more corporate boards and C-suite executives to capitalize on the range of strategic opportunities presented by AI tools. At this critical crossroads, in-house counsel must guide their organizations in deploying AI and assist in charting a path forward that both maximizes potential benefits and manages increasing legal risk. In doing so, corporate legal teams should be mindful of the following issues and consider the strategies below:

  • Determine applicability: As a starting point, legal teams should complete an initial scoping analysis to evaluate the extent of the organization’s legal obligations under the patchwork of AI-related regulation. Given the broad definitions of “AI” and “algorithmic decision-making” in these laws, a wide range of use cases is likely to trigger compliance obligations.
  • AI governance program frameworks: At a high level, legal teams should map out a framework for an enterprise-wide AI governance program, which can provide internal guideposts and direction as the organization evaluates AI tools for deployment to achieve particular objectives. Key aspects of any AI governance program include (among other things) the principles of fairness, transparency, “explainability,” privacy, and accountability.
  • Bias and fairness audits: Another key component of an effective AI compliance program is auditing AI tools for potential discrimination and fairness-related issues, both before initial deployment and at regular intervals thereafter, even where such audits are not required by law (see the illustrative sketch after this list).
  • Third-party risk management: In-house legal teams should also be mindful of the significantly increased risk associated with deploying AI tools developed by outside, third-party vendors. The end users of this technology can often find themselves legally responsible for discriminatory or otherwise unfair outputs, even when the tool was designed and developed exclusively by an outside, unrelated entity.
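
To make the bias-audit concept concrete, the sketch below shows one metric commonly computed in such audits: selection rates and impact ratios across demographic categories, compared against the familiar four-fifths (80 percent) rule of thumb from the disparate impact context. The category names, counts, and threshold are hypothetical illustrations only, not a statement of what any particular statute or regulation requires.

```python
# Illustrative sketch only: a simplified selection-rate / impact-ratio
# calculation of the kind a bias audit might include. All figures and
# category labels are hypothetical, and the 0.8 cutoff reflects the
# "four-fifths" rule of thumb, not a legal compliance threshold.

# Hypothetical outcomes of an AI screening tool, by demographic category.
results = {
    "category_a": {"applicants": 400, "selected": 120},
    "category_b": {"applicants": 250, "selected": 45},
}

# Selection rate = selected / applicants for each category.
selection_rates = {
    category: counts["selected"] / counts["applicants"]
    for category, counts in results.items()
}

# Impact ratio = each category's selection rate divided by the highest rate.
highest_rate = max(selection_rates.values())

for category, rate in selection_rates.items():
    impact_ratio = rate / highest_rate
    flag = "flag for review" if impact_ratio < 0.8 else "ok"
    print(f"{category}: selection rate {rate:.2f}, "
          f"impact ratio {impact_ratio:.2f} ({flag})")
```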

In-house attorneys should consider seeking the assistance of experienced outside AI counsel, who can offer practical solutions to the increasing number of complex legal challenges and potential pitfalls companies must navigate when deploying AI tools in today’s highly regulated but fractured legal environment.
