By David J. Oberly
April 19, 2024
David Oberly is Of Counsel in the Washington, D.C. office of Baker Donelson, and leads the firm’s dedicated Biometrics Team. David is also the author of Biometric Data Privacy Compliance & Best Practices. He can be reached at doberly@bakerdonelson.com or followed on X at @DavidJOberly.
In 2024, harnessing the immense benefits of artificial intelligence (AI) will remain a top priority for corporate boards and C-suite executives across all industries. Consequently, in-house legal teams need to be armed with the proper tools for developing and implementing effective AI governance programs that facilitate the deployment of these cutting-edge tools in a legally compliant, responsible, and safe manner.
Legal landscape
At the federal level, Congress has introduced a number of AI-focused bills in recent years, but none have gained significant traction or support. In the absence of clear guidance from lawmakers in Washington, D.C., federal agencies have stepped up to fill the void. At the forefront of this activity is the nation’s de facto federal privacy regulator, the Federal Trade Commission (FTC), which has been extremely active in pursuing investigations and enforcement actions in the AI space over the last 12 months. Moreover, recent FTC guidance has reinforced the agency’s commitment to regulating AI tools—including in the hiring and workplace context.
The Equal Employment Opportunity Commission (EEOC) has also been a key player in scrutinizing and policing the use of AI, including by releasing two guidance documents detailing the different ways AI can run afoul of federal equal employment opportunity laws. Last September, the EEOC also settled its first action specifically targeting allegedly discriminatory AI employment practices, a matter involving automated job applicant screening software.
Illinois and Maryland currently have laws on the books governing the use of AI tools in the hiring context. More recently, New York City enacted Local Law 144, which significantly restricts Big Apple employers from using AI to help with employment decisions.
Moving forward, in-house counsel should anticipate that lawmakers and regulators at the federal, state, and local levels will continue their efforts to enact greater regulation over the use of AI technology, especially with respect to addressing the significant bias- and discrimination-related concerns shared by legislators and policymakers at all three levels of government.
Key considerations and practical strategies for in-house legal teams
As technology-focused regulation continues to expand, so too does the desire of corporate boards and C-suite executives to capitalize on the range of strategic opportunities presented by AI tools. At this critical crossroads, in-house counsel must guide their organizations in deploying AI and assist in charting a path forward that both maximizes potential benefits and manages increasing legal risk. In doing so, corporate legal teams should be mindful of the following issues and consider the strategies below:
In-house attorneys should consider seeking the assistance of experienced outside AI counsel, who can offer practical solutions to the increasing number of complex legal challenges and potential pitfalls companies must navigate when deploying AI tools in today’s highly regulated but fractured legal environment.