Compliance Issues That Arise When Deploying AI

April 17, 2023

[Image] Tamer in the lion's cage. Woodcut engraving after a drawing by Guido Hammer (German painter, 1821 - 1898), published in 1864.

The fact that AI is a potential loose cannon was seared into the consciousness of the general public a few weeks ago by a New York Times reporter’s widely read account of his extended conversation with an early release of Microsoft’s AI-powered Bing search engine, during which the bot declared its love for him and tried to convince him to leave his wife. Even before that article appeared, however, a number of influential organizations had recognized the possibility of less spectacular but more insidious problems with AI and recommended “AI frameworks.” These efforts go back at least as far as 2019, when member countries of the Organization for Economic Cooperation and Development (OECD) adopted a set of “AI Principles,” according to a post from law firm Clark Hill.

Since then, many other versions have appeared from an alphabet soup of organizations that includes NIST (the National Institute of Standards and Technology), IEEE (the Institute of Electrical and Electronics Engineers), and the White House Office of Science and Technology Policy. The sheer number of frameworks, laws, and proposals in this area can be overwhelming, but there are some common elements, according to the Clark Hill writers, who break them down into seven components that should be considered in any comprehensive AI/ML compliance program.
