Since OpenAI released ChatGPT, debates over the regulation of artificial intelligence (AI) have intensified. But for all the interest in regulating AI, there has been little discussion of AI’s industrial organization and market structure. This is surprising because parts of the AI supply chain (i.e., the “layers” in the “AI technology stack”) are highly concentrated.

In this article, we make the case for an antimonopoly approach to governing artificial intelligence. We show that AI’s industrial organization, which is rooted in AI’s technological structure, evinces market concentration at and across a number of layers. And we argue that an unregulated AI oligopoly has undesirable economic, national security, social, and political consequences.

Our analysis of AI’s industrial organization leads to some important conclusions: that important elements of AI are stable enough to invite regulation, notwithstanding ongoing technical development; that ex ante tools of competition regulation are likely to prove more effective than modes of ex post enforcement, as under antitrust law; that regulation can help facilitate more downstream innovation, and that the current market structure may in fact inhibit it; and that some of the most prominent worries about AI — such as bias and privacy — might themselves be partly the result of market structure concerns.

In light of these conclusions, we show how antimonopoly market-shaping tools—the law of networks, platforms, and utilities; industrial policy; public options; and cooperative governance—can all help facilitate competition and combat inequality. As policymakers debate governing AI at this early stage in its technological lifecycle, antimonopoly tools must be part of the conversation.

About the authors

Tejas N. Narechania

Professor of Law, UC Berkeley

Ganesh Sitaraman

New York Alumni Chancellor's Chair in Law, Vanderbilt University