Decoding Algorithmic Price Fixing: A New Frontier for Antitrust Law

By Jeffery M. Cross

April 1, 2026

Jeffery M. Cross is a columnist for Today’s General Counsel and a member of the Editorial Advisory Board. He is Counsel in the Litigation Practice of Smith, Gambrell & Russell, LLP. Cross was a Partner at Freeborn & Peters, which merged with SGR in 2023. He can be reached at jcross@sgrlaw.com.

Artificial intelligence (AI) is arguably the most important technical innovation of the twenty-first century, with tech firms spending trillions to develop it and seemingly every company racing to find a new way to use it. However, this power has put antitrust watchdogs on high alert. They are grappling with a new legal frontier: Could AI be used to rig markets, and can algorithms “conspire” to fix prices entirely on their own?

To answer these questions, it is important to define AI, as numerous forms have emerged since researchers began studying artificial intelligence in the 1950s. For purposes of this analysis, let's define AI as the large language model generally known as generative AI, which has been pre-trained on data. Such an AI is an algorithm that responds to a prompt or query by repeatedly predicting the next word or number.

Applying the Sherman Act

Section 1 of the Sherman Act is the principal antitrust law applicable to this situation. Section 1 requires an agreement by two or more independent economic entities to restrain trade. Price fixing is clearly a restraint of trade; indeed, it may be the quintessential restraint of trade.

Speculation about AI “conspiring” without human intervention makes for a great parlor game, but it ignores the fundamental architecture of the tech. Since generative models rely on human-defined data and queries—and the deliberate choice by entities to use them—the antitrust inquiry doesn’t change. That means we are looking for the persons or entities pulling the levers.

Let's consider a scenario where two competitors agree on the data used to train the large language model; agree on the algorithm the AI uses to determine the outcome of the query or prompt; agree on the query or prompt; and agree to follow the prices the AI predicts. This is clearly an agreement to restrain trade. It leaves no room for independent decision-making by the human actors, and such independence is the antithesis of a Section 1 agreement.

Let's change the hypothetical and assume that the two competitors independently decide to use the same AI algorithm, one trained on publicly available data. The parties do not agree to use the prices generated by the AI but treat them only as an input into their own independent decisions about the prices to charge. In cases such as Mach v. Yardi Systems, courts have held that it is not a violation of the antitrust laws for companies to independently decide to use the same algorithm and independently select the prices they charge.

However, a variation of this hypothetical probably establishes an antitrust violation. Suppose that the competitors decide to use the same AI model and agree to adopt the prices chosen by the AI algorithm. This clearly establishes the joining of separate decision-makers: distinct economic actors pursuing their own economic interests. The agreement to use the same algorithm and abide by its output deprives the marketplace of independent decision-making.

Sharing confidential data

One issue that has arisen in cases involving pricing algorithms is the sharing of confidential data used to train the AI algorithm. If companies were to share data in order to fix prices, that would be an antitrust violation.

Using an AI model to generate data-driven weights from nationwide statistics for individual competitors is legally permissible, according to court decisions like Mach v. Yardi. In other words, the AI takes broad national trends and applies them to a company’s private data. This allows the generative AI to craft a personalized pricing strategy.

However, using AI to share confidential proprietary data among competitors is more complex. Unless competitors explicitly agree to fix prices, courts generally analyze information exchanges under the Rule of Reason, weighing the procompetitive benefits of the exchange against its potential to harm competition. A "give-to-get" arrangement, in which companies share data to receive shared insights, may be justifiable if it increases market transparency. While anyone can walk into a rival's store to check a price, the real legal risk isn't the AI data exchange itself; it's whether competitors have agreed to actually use the AI-generated prices.

The “gray zone”

Exchanging confidential proprietary data could also trigger "per se" antitrust analysis. If the exchange lacks any plausible procompetitive benefit, making it a "naked" restraint, the court applies the per se rule. In such cases, the law conclusively presumes harm to competition and forbids any defense or justification. Such violations can lead to criminal charges.

In United States v. United States Gypsum Co. (1978), the US Supreme Court held that the exchange of pricing information falls in a "gray zone" of behavior: it could be procompetitive, or it could violate the antitrust laws. The Court held that a criminal violation based on an information exchange requires two kinds of intent: an intent to enter into the agreement being challenged and an intent that the conduct have an anticompetitive effect.

Following the Gypsum decision, the American Bar Association (ABA) Antitrust Section included instructions for "information exchange" in its 1984 Sample Jury Instructions in Criminal Antitrust Cases. When those instructions were updated in 2009, however, the Department of Justice (DOJ) Antitrust Division successfully pushed to remove that section, stating that it would not pursue criminal charges for information exchange alone. Recent DOJ actions suggest the department may be reconsidering that stance.

The key to determining an antitrust violation is whether there is an agreement among the competitors. Agreement is the sine qua non of a violation of Section 1 of the Sherman Act. The use of AI to determine prices complicates the issue because there are multiple stages of the process at which parties can form agreements. But fundamental antitrust principles can still guide the analysis.
