US, EU and UK issue joint statement on competition in generative AI models

The United States, the European Union and the United Kingdom have signed a joint statement setting out shared principles to protect competition in generative AI technologies.

The statement from the US Federal Trade Commission and Department of Justice, the European Commission and the UK’s Competition & Markets Authority says AI, particularly powerful foundation models, has the potential to spur innovation and growth, and to bring about transformational positive change in how we live and work.

It also recognises that these technologies may pose risks to competition and consumers which, if they materialise, would require action before they become “entrenched or irreversible harms”.

“A technological inflection point”

The statement says that while there are many unknowns about the precise trajectory of artificial intelligence tools, generative AI has evolved rapidly and could prove to be one of the most significant technological developments in decades.

“Technological inflection points can introduce new means of competing, catalyzing opportunity, innovation and growth,” it adds.

“This requires being vigilant and safeguarding against tactics that could undermine fair competition.”

Risks to competition

While the authorities recognise the great potential benefits of new services that AI is bringing to market, they identify a number of risks that require “ongoing vigilance”.

  1. Concentrated control of key inputs.

Specialized chips, substantial compute, data at scale and specialist technical expertise are critical ingredients to develop foundation models.

This could put a small number of companies in a position to exploit existing or emerging bottlenecks across the AI stack and to exert outsized influence over the future development of these tools.

This could limit the scope of disruptive innovation or allow companies to shape it to their own advantage at the expense of fair competition that benefits the public and our economies.

  2. Entrenching or extending market power in AI-related markets.

Foundation models are arriving at a time when large incumbent digital firms already enjoy strong accumulated advantages. For example, platforms may have substantial market power at multiple levels related to the AI stack.

This can give these firms the ability to protect against AI-driven disruption, or harness it to their particular advantage, including through control of the channels through which AI and AI-enabled services are distributed to people and businesses.

This may allow such firms to extend or entrench the positions that they were able to establish through the last major technological shift to the detriment of future competition.

  3. Arrangements involving key players could amplify risks.

Partnerships, financial investments and other connections between firms related to the development of generative AI have been widespread to date. In some cases these arrangements may not harm competition, but in others they could be used by major firms to undermine or co-opt competitive threats and steer market outcomes in their favour at the expense of the public.

In addition, the agencies highlight the risk that AI algorithms could allow competitors to share competitively sensitive information and fix prices, or enable firms to undermine competition through unfair price discrimination or exclusion.

“In light of these risks, we are committed to monitoring and addressing any specific risks that may arise in connection with other developments and applications of AI, beyond generative AI,” the statement says.

The statement comes after 10 countries and the EU, including the US, Japan, Singapore and the UK, agreed earlier this year to launch the first international network to cooperate on advancing AI safety.

The agreement, reached at the AI Seoul Summit in South Korea, commits the countries to developing a common understanding of AI safety and aligning work on research, standards and testing.
