All eyes on AI: Promoting competition
Victor Peng, the president of chip giant AMD, has seen plenty of innovations come to market in his 42 years in the tech business — from personal computers to social media to mobile devices. None, however, can compare to the boom in artificial intelligence.
“I’ve never seen a concept commercialized so fast,” Peng said of the race to profits set in motion when ChatGPT showed the world in late 2022 just how advanced AI had become.
AI’s rapid adoption is the reason Peng and an all-star cast of top business leaders, academics and government officials from Europe, Washington, D.C., and California convened for a daylong public workshop at the Stanford Institute for Economic Policy Research (SIEPR) on May 30. SIEPR co-hosted the event with the U.S. Department of Justice and Stanford Graduate School of Business (GSB) to examine AI’s fast-changing competitive landscape — and the steps needed to ensure fair odds for companies and their customers.
Jonathan Kanter, the Justice Department’s assistant attorney general and chief antitrust enforcer, kicked off the event with a warning: There is no AI exemption when it comes to fair competition.
“We are actively examining the AI ecosystem,” said Kanter. “If firms in the AI ecosystem violate the antitrust laws, the antitrust division will have something to say about that.”
Susan Athey, a SIEPR senior fellow and The Economics of Technology Professor at the GSB, oversaw the program for the event in her role as the chief economist in the Justice Department’s antitrust division.
In a sideline interview, Athey said that policymakers in the U.S. and around the world have been educating themselves about AI, including its technical aspects, “in a way that I have never seen before.”
But there’s a lot that’s missing from today’s policy conversations around AI business models.
“We need to understand AI’s competitive challenges — and the trade-offs that will have to be made — at a deeper level,” she said.
As Kanter noted in his remarks, there’s “a degree of seriousness and urgency” in bringing together people “who would not ordinarily come together in the same room and talk about issues that are important and sometimes difficult.”
Different perspectives, some common ground
The event was marked by a diversity of perspectives. In addition to Peng, invited speakers included Andrew Ng, an AI pioneer and founder of DeepLearning.AI; Věra Jourová, vice president of the European Commission; Amy Klobuchar, a U.S. senator from Minnesota; Condoleezza Rice, the Tad and Dianne Taube Director of the Hoover Institution who served as secretary of state under George W. Bush; and Duncan Crabtree-Ireland, chief negotiator of the Screen Actors Guild-American Federation of Television and Radio Artists.
Also core to the lineup were business and investment leaders from across the “AI stack” — a term that refers to layers in the AI ecosystem, from chip makers at the bottom to end-user applications at the top and all of the players in between, including cloud platforms and tool developers. If one company dominates one layer, its market power threatens players along other layers of the stack.
“It’s very easy to talk past each other when [an AI technology] has potential advantages in one area or disadvantages in another,” said Alex Gaynor, deputy chief technologist at the Federal Trade Commission (FTC). Lina Khan, who is Kanter’s counterpart at the FTC, has also expressed concern about AI’s potential antitrust implications at a SIEPR event.
A range of views has emerged from AI’s various stakeholders, but on one point, workshop participants agreed: Open source is critical to fostering innovation and fair competition in AI.
Percy Liang, a Stanford associate professor of computer science and director of The Center for Research on Foundation Models, suggested that concerns about bad actors abusing open-source AI may be overblown. “Right now,” Liang said, “I don’t see substantial evidence that the marginal risk is high” compared to the marginal benefits.
Throughout the day, several workshop speakers echoed this point, further highlighting concerns that lobbying by providers of closed AI models might succeed at blocking the future development and release of open models.
David George, a partner at venture capital firm Andreessen Horowitz (also known as a16z), called on regulators to make sure small companies have a fair shot. “Big Tech” companies, from a business and regulatory standpoint, “are throwing their weight around in ways that we haven’t seen [before],” he said.
A key message from the workshop speakers to policymakers was this: Don’t rush to erect safety guardrails around AI that might harm competition, especially if open AI models would be threatened.
“Regulators are considering acting fast, or reacting fast, in a world where it’s very difficult to estimate the side effects of those actions and reactions,” said Blanche Savary de Beauregard, the general counsel of French startup Mistral AI. Next month the European Union AI Act — the world’s first legal framework governing AI — takes effect.
Athey, who is also the founding director of the Golub Capital Social Impact Lab at the GSB, said the pressure that regulators feel to safeguard competition in the AI era only underscores that research is critical to effective policymaking.
“Right now, policymakers do feel the weight of this moment and understand that there are trade-offs if they act too fast or too slow,” she said. “Whenever there are trade-offs that need to be weighed, for example between safety concerns about AI and the benefits to competition from open AI models, economists have an important role to play.”
The Justice Department invites comments from the public on the topics covered by this workshop. Interested parties may submit public comments now through July 15 at ATR.2024AIworkshop@usdoj.gov.
All photos by David Kim.