Proposed Framework To Address AI Risk Released By NIST

February 13, 2023


The National Institute of Standards and Technology (NIST) has released the initial version of its Artificial Intelligence Risk Management Framework (AI RMF 1.0). The document is one of the first attempts anywhere to codify a “risk framework” for this new field, and according to a post from law firm Hogan Lovells, it could prove influential in the development of global standards for AI governance. It may be especially relevant to companies that want a head start on building internal controls ahead of pending regulations, such as the EU’s proposed AI Act. The framework defines what it calls the “seven characteristics of trustworthy AI systems,” which include fairness with effective management of potential bias, explainability, and security.

The NIST document devotes an entire section to measuring AI risk. It addresses the difficulty posed by “inscrutability,” which arguably goes to the heart of the issue, as well as the problem that risks measured in a laboratory “may differ from risks that emerge in operational, real-world settings.”

The Hogan Lovells post also notes that the NIST document could serve as the starting point for many “sector-specific” management frameworks in the U.S., and it suggests steps companies can take based on that likelihood. -Today’s General Counsel/DR
