How GCs Can Get Ahead of California’s Transparency in Frontier Artificial Intelligence Act
By Corey P. Gray and Rachel E. Noveroske
January 29, 2026
Corey P. Gray is a Partner at Boies Schiller Flexner. His practice areas include high-stakes business litigation, antitrust actions, and energy litigation that often involve issues of first impression.
Rachel E. Noveroske is an Associate at Boies Schiller Flexner. Her practice focuses on complex business litigation, antitrust litigation, and commercial arbitration disputes on behalf of plaintiffs and defendants in federal and state courts.
AI is transforming business operations by boosting productivity and innovation, but its “black box” nature makes risk assessment challenging. California’s Transparency in Frontier Artificial Intelligence Act addresses this by requiring certain AI developers to meet new transparency and reporting rules. The act also establishes whistleblower protections and civil penalties, and general counsel should move quickly to adapt their compliance programs.
Transparency reports and incident reporting
Starting January 1, 2026, AI developers that train models using more than 10^26 computational operations (an astronomically large amount of computation) must post transparency reports online detailing each model’s release date, intended uses, and restrictions. Developers with annual revenues over $500 million (“large frontier developers,” or LFDs) must also publish a frontier AI framework addressing ten statutory categories, update it annually, and report material changes within 30 days to inform consumers about their catastrophic risk management.
The act also introduces new reporting obligations. All frontier developers must notify the Office of Emergency Services (OES) of “critical safety incidents” within 15 days, and any such incident posing an imminent risk of death or serious injury must be reported within 24 hours. In addition, LFDs must file a confidential summary assessing catastrophic risks every three months. OES has not yet released its official reporting system; in the interim, the OES website provides contact information for submitting reports. General counsel should monitor OES updates for further direction.
New whistleblower provisions
The act includes robust enforcement mechanisms overseen by the California Attorney General, with civil penalties of up to $1 million per violation. It protects “covered employees” who report critical risks or public safety incidents from retaliation and requires employers to give clear notice of these rights and provide an anonymous reporting process for disclosing violations or substantial public dangers.
Civil litigation under the act can also be costly: once a covered employee shows a violation by a preponderance of the evidence, the company must prove independent, legitimate reasons for its conduct. Courts may award attorneys’ fees when granting injunctive relief, and the act’s remedies are cumulative to other California laws, with injunctive relief not stayed on appeal.
The act imposes several new requirements on frontier AI developers. GCs are well-suited to address these challenges and to coordinate compliance across functional teams. Here is a rundown of what GCs should address in light of the new law:
Understand the requirements
- Determine whether your organization qualifies as a frontier developer and, if so, whether it is a “large frontier developer” (LFD).
- Identify which reporting, transparency, and framework obligations apply.
- Understand California AG enforcement authority and whistleblower-based civil liability.
- Establish required transparency reports and (if applicable) a frontier AI framework.
- Implement an internal whistleblower reporting process.
- Calendar periodic submissions to OES.
- Confirm reporting obligations for new or substantially modified frontier models.
Assess organizational risk
- Define key undefined statutory terms (e.g., “discover”) that trigger reporting deadlines.
- Set internal protocols for identifying and reporting critical safety incidents within 15 days.
- Identify all uses that may constitute “deployment” of a frontier model.
- Identify “covered employees” and provide them with the required whistleblower protection notices.
- Track whistleblower-notice compliance for both existing and new employees.
- Review employment practices liability insurance (EPLI) and directors and officers (D&O) insurance to confirm alignment with compliance and reporting risks.
- Monitor evolving regulatory guidance, including OES and AG reports and updated definitions.
Implement a cross-functional compliance program
- Assign a single executive owner for compliance and reporting accuracy.
- Establish cross-functional input from development, policy, safety, HR, and legal.
- Require GC review of AI framework documentation, transparency reports, and critical safety incident reports.
- Implement incident response plans and escalation protocols.
- Review employee agreements and NDAs to ensure covered employees can report safety incidents without restriction.
- Promote internal buy-in and a culture of compliance.
An evolving AI regulatory landscape
The AI regulatory landscape is evolving rapidly. With definitions in the act, like “large frontier developer,” set for review this year, the scope of the act could change significantly. States such as Colorado, Utah, Tennessee, Connecticut, Delaware, and Indiana are also implementing AI regulations that will affect companies operating across state lines. At the federal level, several executive orders targeted AI in 2025. GCs must keep watch and stay informed; companies will need their timely advice as they navigate the AI frontier.
Jon Mills and Joshua Quaye contributed to this article.