AI Governance That Holds: From Principles to Operating Infrastructure
By Sasha A. Carbone
March 5, 2026
Sasha A. Carbone is Senior Vice President, General Counsel, and Assistant Corporate Secretary at the American Arbitration Association (AAA), the world’s leading provider of dispute resolution services. Carbone oversees AAA’s legal, AI governance, corporate governance, inclusion, and enterprise risk management functions. She advises on the ethical, legal, and operational risks associated with emerging technologies, data privacy, and cybersecurity.
Much of the public conversation around AI governance is still stuck at the level of principles. Fairness. Transparency. Accountability. Human oversight. These concepts are necessary, but they are not what determines whether governance works.
In practice, AI governance fails for a different reason: it is treated as static, episodic, and external to operations. Policies are approved. Tools are reviewed. Boxes are checked. Then AI systems evolve—models change, data shifts, use cases expand—and governance quietly falls out of sync.
Artificial intelligence does not fail governance because it is novel. It fails governance because it is dynamic. Governing AI requires an operational framework, not a policy document.
From principles to operating infrastructure
The first step toward durable AI governance is structural. Governance must be designed as an enterprise operating model, not an overlay on technical teams.
In a framework-driven approach, AI governance begins with a formal mandate and scope that applies to all AI systems and AI-enabled use cases that create, support, influence, or process organizational data or outputs. Governance is not limited to decision-making systems; it also applies to assistive, analytical, and generative use cases that may indirectly shape outcomes.
Accountability and decision rights are distributed across existing enterprise structures. Boards and audit committees set expectations and risk appetite. Executive leadership and enterprise risk management (ERM) committees approve thresholds and receive structured reporting. A dedicated AI governance body operates lifecycle controls. Individual AI owners and technical teams are responsible for execution and evidence. This separation ensures governance is independent of delivery while remaining operationally grounded.
Risk acceptance, not tool approval
A second, often-missed distinction is that effective AI governance is about risk acceptance, not tool approval.
In many organizations, governance discussions center on whether a specific tool or model should be allowed. That framing breaks down quickly as AI systems become embedded, interconnected, and continuously updated.
A framework-based approach instead requires that every AI use case be classified according to risk. Typical classifications range from “minimal” to “unacceptable risk,” based on potential legal, operational, ethical, and reputational impact. This classification determines approval authority, required controls, testing depth, evidence expectations, and monitoring cadence. Use cases assessed as unacceptable risk do not proceed unless mitigations can bring them into a lower category of risk.
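To make that mapping concrete, the sketch below shows one way a risk-tier configuration might encode those consequences as data. It is a minimal illustration only; the tier names, approval authorities, control lists, and monitoring cadences are assumptions for the example, not a prescribed standard.

# Hypothetical sketch: each risk classification drives approval authority,
# required controls, and monitoring cadence. All names here are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskTier:
    name: str                     # e.g., "minimal" through "unacceptable"
    approval_authority: str       # who may accept residual risk at this tier
    required_controls: list       # controls that must be evidenced pre-deployment
    monitoring_cadence_days: int  # how often production metrics are reviewed

RISK_TIERS = {
    "minimal": RiskTier("minimal", "ai_owner",
                        ["usage_policy_ack"], 180),
    "limited": RiskTier("limited", "governance_body",
                        ["usage_policy_ack", "bias_testing"], 90),
    "high": RiskTier("high", "erm_committee",
                     ["usage_policy_ack", "bias_testing",
                      "human_oversight", "third_party_assessment"], 30),
    # Unacceptable-risk use cases have no approval path; they must be
    # mitigated into a lower tier before proceeding.
    "unacceptable": RiskTier("unacceptable", "none", [], 0),
}

Encoding the tiers as data rather than prose is what lets approval authority, testing depth, and monitoring cadence follow automatically from classification instead of being renegotiated use case by use case.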
Crucially, risk is assessed twice during the early stages. Inherent risk is evaluated during planning and design, before controls are applied. Residual risk is confirmed prior to deployment, after required controls have been implemented and tested. This forces explicit, evidence-based risk acceptance decisions rather than implicit approvals driven by momentum or perceived benefit.
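A minimal, self-contained sketch of those two gates might look like the following. The control sets, tier names, and function names are assumptions for illustration.

# Hypothetical sketch of the two assessment gates described above.
REQUIRED_CONTROLS = {
    "minimal": {"usage_policy_ack"},
    "limited": {"usage_policy_ack", "bias_testing"},
    "high":    {"usage_policy_ack", "bias_testing", "human_oversight"},
}

def design_gate(inherent_risk: str) -> str:
    """Gate 1: classify inherent risk during planning, before controls exist."""
    if inherent_risk == "unacceptable":
        raise ValueError("Mitigate into a lower risk tier before proceeding.")
    return inherent_risk

def deployment_gate(residual_risk: str, evidenced_controls: set) -> None:
    """Gate 2: confirm residual risk after controls are implemented and tested."""
    missing = REQUIRED_CONTROLS[residual_risk] - evidenced_controls
    if missing:
        raise ValueError(f"Risk cannot be accepted; missing evidence: {missing}")
    # An explicit, recorded risk-acceptance decision happens here,
    # rather than an implicit approval driven by delivery momentum.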
Lifecycle governance is the differentiator
Where most governance models fail is at deployment. Approval is treated as an endpoint. In reality, deployment is the moment when risk becomes operational.
A durable governance framework recognizes that AI risk is dynamic and applies oversight across the full AI lifecycle, from ideation through decommissioning. Defined lifecycle phases—such as intake, design, development, approval, production, monitoring, improvement, and sunset—create the structure for applying risk-based controls consistently over time.
At each phase, required governance actions are triggered automatically based on the system’s risk classification. Material changes, performance drift, expanded use cases, new data sources, or incidents prompt reassessment and, where necessary, escalation. Governance is continuous by design, not dependent on discretionary reviews.
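That trigger logic can itself be expressed as configuration rather than discretion. The sketch below assumes hypothetical event and action names; the point is that the mapping from event to required action is defined in advance, not decided ad hoc.

# Hypothetical sketch: lifecycle events mapped to required governance actions,
# so reassessment is triggered by the framework, not by discretionary review.
LIFECYCLE_TRIGGERS = {
    "material_change":   ["reassess_risk", "reapprove"],
    "performance_drift": ["reassess_risk", "notify_owner"],
    "expanded_use_case": ["reclassify", "reapprove"],
    "new_data_source":   ["reassess_risk", "update_registry"],
    "incident":          ["reassess_risk", "escalate_to_erm"],
}

def on_event(system_id: str, event: str) -> list:
    """Return the governance actions required when an event is observed."""
    actions = LIFECYCLE_TRIGGERS.get(event, [])
    for action in actions:
        # Stand-in for dispatching into a real workflow or ticketing system.
        print(f"[{system_id}] required action: {action}")
    return actions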
This lifecycle approach is what allows organizations to govern AI systems that learn, adapt, and scale without losing control.
Separation of builders and governors
One of the least discussed but most important elements of effective AI governance is structural independence. Governance cannot be owned solely by the teams building or deploying AI systems.
A mature framework explicitly separates delivery from oversight. Builders build. Owners operate. Governance bodies assess risk, confirm controls, review evidence, and escalate issues through ERM channels. This separation reduces conflicts of interest and ensures that risk decisions are reviewed independently, using consistent standards across the enterprise.
Without this separation, governance becomes advisory at best and symbolic at worst.
Evidence is the output of governance
Trust in AI is not created by assurances. It is created by records.
Framework-driven governance produces evidence as a natural byproduct of operation. A centralized AI registry and evidence library serves as the system of record for governance decisions across the enterprise. For each AI system, it captures risk classifications and reassessments, approval records, testing and validation artifacts, monitoring metrics, incident logs, third-party assessments, material change approvals, and decommissioning documentation.
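One way to picture a single registry record is sketched below, using hypothetical field names that mirror the artifact categories above. This is an illustration of the record's shape, not a reference schema.

# Hypothetical sketch of one record in a centralized AI registry
# and evidence library. Field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RegistryRecord:
    system_id: str
    risk_classifications: list = field(default_factory=list)   # incl. reassessments
    approval_records: list = field(default_factory=list)
    testing_artifacts: list = field(default_factory=list)       # validation evidence
    monitoring_metrics: list = field(default_factory=list)
    incident_logs: list = field(default_factory=list)
    third_party_assessments: list = field(default_factory=list)
    material_change_approvals: list = field(default_factory=list)
    decommissioning_docs: Optional[str] = None                  # populated at sunset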
This infrastructure supports audit readiness, executive oversight, ERM reporting, and regulatory inquiry without retroactive reconstruction. More importantly, it allows organizations to demonstrate, not assert, that AI-related decisions were made deliberately and within defined risk parameters.
Governance as an enabler of scale
There is a persistent misconception that strong AI governance slows innovation. In practice, the opposite is true.
Framework-based governance enables scale by making risk explicit, controls predictable, and approvals repeatable. When teams understand the conditions under which AI use cases can proceed, and what evidence is required, they can design systems accordingly. Automation becomes possible not because risk is ignored, but because it is governed.
What organizations should do in the next 90 days
Building durable AI governance does not require a multi-year transformation. It requires structural clarity, executive sponsorship, and disciplined execution from the outset.
Organizations should formally establish a cross-functional AI governance committee with clearly defined objectives. This committee should include leaders from legal, technology, operations, risk, compliance, and ethics. The C-suite should formally endorse the committee and communicate that endorsement early across the organization, signaling that responsible AI governance is an organization-wide priority, not a technical side initiative.
Organizations should also develop a framework for evaluating AI use cases, prioritize cataloging all existing AI use cases, and move them through a formal governance review process. Governance cannot begin only with new deployments; it must address what is already live.
These efforts should include assessing whether new AI governance roles are needed or whether existing roles should be refactored to support oversight, monitoring, documentation, and risk management.
All governance structures and processes should be designed to withstand internal audit scrutiny, with clear documentation, defined accountability, and consistent approval workflows.
Durable governance emerges when clarity, structure, and accountability are operationalized early, before scale makes course correction costly or impractical.
The general counsel’s role
As AI systems increasingly influence legal outcomes, governance cannot remain a technical concern. It sits squarely within the general counsel’s remit because it defines accountability, risk acceptance, and institutional legitimacy.
Organizations that lead on AI will be those that can demonstrate how risk is identified, accepted, monitored, and escalated across the AI lifecycle. Governance that holds is not defined by principles alone, but by operating systems that make accountability, control, and evidence routine.