To Reduce Risk, Make AI “Explainable”
December 9, 2021
The challenge for companies that employ AI, according to a post from Federal News Network, is to make sure it's implemented “in an ethical way,” meaning that it operates “within scope and its behavior is well-investigated in terms of fairness and harm.” The solution, the writer says, is to make it explainable, through what's being called “XAI,” short for “Explainable AI,” which he calls “an essential part of a risk management framework.”
For those wondering what this might mean in practice, the writer cites the Commerce Department's National Institute of Standards and Technology (NIST) and its Four Principles of Explainable Artificial Intelligence. Among its imperatives: the explanation must be accurate and meaningful to stakeholders, and the system must acknowledge the limits of its own knowledge. (“The system only operates under conditions for which it was designed or when the system reaches a sufficient confidence in its output.”)
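To make the “knowledge limits” idea a little more concrete, here is a minimal sketch of how a team might gate an AI system's output on a confidence threshold and abstain otherwise. It is an illustration only, not drawn from the NIST document or the cited post; the threshold value, names, and example outputs are assumptions.

# Minimal sketch of a "knowledge limits" gate: the system returns a decision
# only when its confidence clears a threshold; otherwise it abstains and
# routes the case to human review. Threshold and names are hypothetical.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # assumed policy value, set by the deploying organization

@dataclass
class Decision:
    label: str          # e.g. "approve" / "deny" / "abstain"
    confidence: float   # model's estimated probability for that label
    explanation: str    # human-readable rationale shown to stakeholders

def gated_decision(label: str, confidence: float, rationale: str) -> Decision:
    """Return the model's decision only if it is confident enough;
    otherwise abstain and flag the case for a human reviewer."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(label, confidence, rationale)
    return Decision(
        label="abstain",
        confidence=confidence,
        explanation=(
            f"Confidence {confidence:.2f} is below the {CONFIDENCE_THRESHOLD:.2f} "
            "threshold; referring to a human reviewer."
        ),
    )

# Example usage with made-up model output:
print(gated_decision("approve", 0.97, "Income and credit history meet policy criteria."))
print(gated_decision("deny", 0.55, "Insufficient data on employment history."))

The point of such a gate is the one NIST names: the system operates only where it was designed to operate, and it says so, in terms its stakeholders can understand, when it cannot.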
It may sound a bit abstract, but the consequences of wayward AI are anything but, according to “Products liability law as a way to address AI harms,” a report from the Artificial Intelligence and Emerging Technology (AIET) Initiative at The Brookings Institution. AI sometimes makes mistakes, notes the author. Driverless cars get into accidents. Mortgage applications get rejected for unacceptable reasons. Robotic surgeries harm patients. He goes on to provide an overview of what he identifies as key concepts in products liability law – including design defects, manufacturing defects, failure to warn, misrepresentation, and breach of warranty – and their potential application to AI.