Why We Need to Rethink the Muddled State of AI Governance

By Mark Diamond

June 5, 2025

Mark Diamond, founder & CEO of Contoural, is a leading expert in records management, privacy, AI governance, and compliance strategies. He and his company help bridge legal, compliance, security, and business needs and policies with effective processes, technology, and change management. He can be reached at markdiamond@contoural.com.

Rarely has a risk and compliance topic taken off quite like generative AI. Organizations are rushing to adopt generative AI tools like ChatGPT or to embed calls to generative AI systems within their processes and applications, and many are now asking, “How do we govern this stuff?” Unfortunately, the current state of AI governance is, to put it bluntly, muddled. And the forecast? Continued cloudiness.

Many organizations are solving this backward. They start with frameworks—the EU Artificial Intelligence Act and guidance from the National Institute of Standards and Technology (NIST), the International Organization for Standardization (ISO), and the IEEE Standards Association—and try to bolt those onto their own AI initiatives. But because these are risk management or risk assessment frameworks, organizations end up with a pile of risk controls that may or may not apply to known or unknown risks. That’s like mixing flour, eggs, and yeast before you know whether you’re baking bread or muffins.

Before launching an AI governance program, let’s ask some basic questions. First, what exactly are organizations trying to govern? Regulators are, understandably, focused on the overarching models, and therefore on the model creators, like OpenAI and Google. These model developers are training their Large Language Models (LLMs), intending them to be used as broadly as possible, and regulators are looking at the broad societal impact of the LLMs. This is an interesting debate, but the average user has little effect on how OpenAI trains its LLM. Instead, legal and compliance professionals should focus on how their companies’ employees and applications are using these models. 

A Flood of Frameworks

Then we have the flood of AI frameworks. The EU AI Act applies to developers and “deployers” and classifies systems by risk, mostly focused on risk to persons through the use of personal data in AI. NIST’s Risk Management Framework is broader, focusing on managing AI risk to achieve trustworthiness. ISO 42001 offers requirements for a comprehensive “AI Management System.” IEEE focuses on designing and computing for ethics and human rights. Both Colorado and Utah have passed laws regulating AI. These frameworks are, again, mainly directed at model developers or “deployers” rather than model users.

Many of the regulatory bodies foresaw the sea change AI would bring and are rushing to become the “gold standard” of AI regulation and compliance. Each framework offers something useful, but none of them answers the larger questions many companies are facing.

Also, the tool vendors have arrived. There is no shortage of products promising to easily automate AI governance for everything. The reality, however, is that many of these “governance” products are dashboards that help track compliance with one framework, maybe two. A few allow cross-mapping between standards. There is very little actual risk mitigation, though, and that is a big problem.

Check-the-box compliance is not the same thing as real governance. Throwing a handful of controls at a dashboard might feel good, but it may do little to manage or even identify the actual risks AI brings to your organization. It can even create a false sense of security. Governance must be more than a spreadsheet exercise, even if automated within a tool.

So, what should organizations do instead?

Flip the Approach

Start with how you’re using AI—or plan to. Are you automating content creation? Using it to help identify potential applicants? Analyzing customer data? Each of these carries different risks. Some use personal information. Others impact decisions about individuals. A few might require rigorous oversight to ensure safety and accuracy. Map your use cases first.
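
To make “map your use cases first” concrete, here is a minimal sketch of what a use-case inventory might look like, written in Python. The class, fields, and example entries are hypothetical illustrations drawn from the examples above, not part of any framework or standard.

```python
# A minimal, hypothetical use-case inventory. Field names and entries
# are illustrative assumptions, not drawn from any framework.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    business_owner: str
    uses_personal_data: bool
    affects_decisions_about_people: bool
    needs_human_oversight: bool
    notes: str = ""

inventory = [
    AIUseCase("Automated content creation", "Marketing", False, False, False),
    AIUseCase("Applicant screening assistance", "HR", True, True, True,
              notes="impacts decisions about individuals"),
    AIUseCase("Customer data analysis", "Analytics", True, False, True),
]

# Surface the use cases that deserve the closest governance attention.
for uc in inventory:
    if uc.uses_personal_data or uc.affects_decisions_about_people:
        print(f"Review first: {uc.name} ({uc.business_owner}). {uc.notes}")
```

Even a simple table like this forces the conversation about which uses carry which risks before any framework is opened.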

Let’s look at the five core areas that should be at the center of your AI governance program:

  1. Compliance – Are you complying with all the existing and emerging laws and regulations? Do you have policies in place that tell your employees how the rules apply to AI? Do you know how AI is being used in your organization? Know what rules you’re subject to, but also make sure your own policies support how you’re using AI.
  2. Data Governance and Provenance – Understand the source of your training data. Is it biased? Copyrighted? Outdated? If you don’t know what went in, you can’t trust what comes out. And when you don’t know the source of the training data at all, as is the case with a commercially available LLM such as ChatGPT, managing your AI program and the other areas of governance becomes even more critical.
  3. Sensitive Information – AI tools can leak private or proprietary data, whether by accident or by design. Do you have controls for what goes in and guardrails for what comes out (see the sketch after this list)? Do these controls extend to all the types of media generative AI touches, including unstructured data and email? Consider not just customer information but also employee information, and not just personal data but your organization’s confidential information as well.
  4. Ethical Use – Bias is real. So is reputational risk. Does your AI use violate anything you already claim to stand for? Would it create bias? Don’t assume the AI will make ethical decisions—it won’t. That’s your job.
  5. Accuracy and Safety – AI is confident, fast, and sometimes wrong. Don’t let it make decisions without human oversight, especially in high-stakes scenarios. What are your processes for safe and accurate use? How will you keep testing once the system is in production? Do you have processes in place to validate the output and make sure that what comes out makes as much sense as what went in?
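
As one illustration of the input-side controls in item 3 (the sketch referenced above), the hypothetical Python snippet below redacts a few obvious sensitive patterns before a prompt leaves the organization. The patterns are deliberately simplistic assumptions; a real deployment would typically lean on dedicated data loss prevention or PII-detection tooling.

```python
# A minimal sketch of an input guardrail: redact likely sensitive
# values before text is sent to an external generative AI service.
# These regexes are illustrative assumptions, not production-grade.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely sensitive values with placeholder tokens."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize the ticket from jane.doe@example.com, SSN 123-45-6789."
print(redact(prompt))
# -> Summarize the ticket from [EMAIL REDACTED], SSN [SSN REDACTED].
```

A mirror-image check on model output (scanning responses before they reach users or downstream systems) covers the “guardrails for what comes out” half of the question.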

All the above are internal questions that you need to answer yourself. A framework won’t help you make the initial decisions regarding the use of AI in your organization.

Only after you answer these questions do you want to apply the frameworks. Use them to find the right controls, not to define your whole program. Maybe ISO 42001 helps structure your governance function. Maybe NIST helps you think about lifecycle risks. Maybe the EU AI Act flags requirements you hadn’t considered, even if you’re not subject to it. But the point is to start with your needs and apply the tools, not the other way around.
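
As a small illustration of this ordering, the hypothetical sketch below records the needs surfaced by your internal questions and looks up which frameworks are worth consulting for each. The crosswalk simply restates the examples in the paragraph above; it is an illustrative assumption, not an authoritative mapping.

```python
# A hypothetical needs-to-frameworks crosswalk, restating the
# examples above. Entries are illustrative, not authoritative.
NEEDS_TO_FRAMEWORKS = {
    "structure the governance function": ["ISO 42001"],
    "think through lifecycle risks": ["NIST AI Risk Management Framework"],
    "flag requirements for high-risk uses": ["EU AI Act"],
}

def candidate_frameworks(need: str) -> list[str]:
    """Return the frameworks worth consulting for a given need, if any."""
    return NEEDS_TO_FRAMEWORKS.get(need, [])

print(candidate_frameworks("think through lifecycle risks"))
# -> ['NIST AI Risk Management Framework']
```

Keeping the lookup in this direction matters: the need comes first, and a framework is consulted only once a need exists.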

Develop Repeatable Processes

After you develop governance for the first few use cases, start thinking about turning these steps into a repeatable process. There are certain to be more findings and more steps, and a repeatable process will help you catch what you didn’t know to look for before. Even though generative AI is new, developing its governance process will feel familiar to many experienced compliance professionals. We’ve seen this type of problem before.

If you find your AI governance effort is going in circles—stuck in policy reviews, buried in framework mapping, or frustrated by underwhelming tools—take a step back. Maybe the issue isn’t what you’ve done. Maybe it’s how you’ve started. A backward approach leads to fragmented, check-the-box, let’s-just-throw-a-tool-at-it governance. A forward-looking, use-case-first strategy builds real control, trust, and defensibility. And that’s what we’re after: compliance with confidence.
