Companies Need Explainable AI

September 15, 2022


The only way to extract insight from the vast amounts of data that exist today is through artificial intelligence. AI identifies complex mathematical patterns across thousands of variables and the relationships among them, and the resulting insights help companies make predictions. Yet AI can be a black box: while we see its inputs (variables) and outputs (analyses or predictions), we often cannot answer crucial questions about how it operates. Is it making reliable predictions? Is it making those predictions on solid, justified grounds? What we need is explainable AI, that is, AI whose predictions can be explained.
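To make this concrete, below is a minimal sketch of one widely used explainability technique, permutation importance, which estimates how much each input variable drives a model's predictions. The scikit-learn dataset and random-forest model are illustrative assumptions chosen for the example, not anything prescribed by this article.

```python
# A minimal sketch of permutation importance: shuffle each input
# variable in turn and measure how much the model's accuracy drops.
# A large drop means the model leans heavily on that variable.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative dataset and model; any trained estimator would do.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the five variables the model depends on most.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Variables whose shuffling causes the largest accuracy drop are the ones the model depends on most, which is one concrete way to check whether predictions rest on justified grounds.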

In general, companies need explainability in AI when (1) regulation requires it, (2) it’s important for understanding how to use the tool, (3) it could improve the system, and (4) it can help determine fairness. Organizations should create a framework allowing them to prioritize explainability in each of their AI projects. The framework would enable data scientists to build models that work and empower executives to make informed decisions about what should be designed and when systems are reliable enough to deploy.
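As a rough illustration of what such a framework might look like in practice, the sketch below scores hypothetical projects against the four criteria above. The project names, fields, and equal weighting are assumptions for demonstration only; a real framework would tailor the criteria and weights to the organization.

```python
# A hypothetical prioritization rubric: count how many of the four
# explainability criteria apply to each AI project, then rank projects
# by their need for explainability (0 = low need, 4 = high need).
from dataclasses import dataclass

@dataclass
class AIProject:
    name: str
    regulated: bool           # (1) regulation requires explainability
    user_facing: bool         # (2) users must understand how to use the tool
    under_development: bool   # (3) explanations could improve the system
    fairness_sensitive: bool  # (4) explanations can help determine fairness

def explainability_priority(p: AIProject) -> int:
    """Count how many of the four criteria apply to this project."""
    return sum([p.regulated, p.user_facing,
                p.under_development, p.fairness_sensitive])

# Illustrative projects, not drawn from the article.
projects = [
    AIProject("credit scoring", True, True, False, True),
    AIProject("warehouse routing", False, False, True, False),
]
for p in sorted(projects, key=explainability_priority, reverse=True):
    print(f"{p.name}: priority {explainability_priority(p)}/4")
```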
