
Legal Operations Should Focus on AI Transparency and Explainability

December 20, 2023


As artificial intelligence (AI) and generative AI (GenAI) become essential to businesses, it is necessary to understand the risks involved. While there is no comprehensive AI law yet, regulators are focused on transparent and explainable solutions, ensuring that users and key stakeholders understand how these systems operate and make decisions, according to an article from Foley & Lardner LLP.

Transparency helps users understand how a system arrived at a particular result, increasing user trust and confidence in its capabilities and allowing for external auditing and evaluation. Explainability refers to the ability to describe how GenAI makes decisions and to offer rational justifications in understandable terms. Transparency and explainability are necessary components of AI governance and increase the credibility and trustworthiness of AI solutions.

The European Union (E.U.) and U.S. regulators have stressed the need for transparency in AI systems and technical documentation. The E.U.'s Artificial Intelligence Act and General Data Protection Regulation require transparency in decision-making processes and detailed descriptions of how systems function. The U.S. Federal Trade Commission has emphasized detailed AI model descriptions, data usage, and risk assessment protocols in its investigation into OpenAI.

The opaque nature of many AI systems poses challenges in risk management. This is where Legal Operations comes in. To mitigate risk, Legal Operations needs to provide thorough documentation, policies for human intervention in case of adverse outcomes, and compliance with copyright laws. Further, when contracting with third parties, Legal Operations needs to include governance structures and risk assessment frameworks, monitoring and auditing protocols, and technical safeguards.

Businesses can help minimize the risk of potential liability if they can clearly explain how their AI solutions work, how they are trained, and why they made certain predictions or decisions.
