Nicholas A. Caputo, Haydn Belfield, Jakob Mökander, Matteo Pistillo, Huw Roberts, Sophie Williams, Robert Trager
Executive summary
The government of the United Kingdom is currently developing a bill to regulate frontier AI. Such a bill must have an international scope because the companies seeking to create these systems are spread across the world and because AI models, and the effects they cause, travel easily across borders. Yet the international implications of frontier AI regulation have been relatively neglected in discourse about the bill so far.
To address this neglect, the Oxford Martin AI Governance Initiative recently convened a group of experts to explore how the forthcoming UK frontier AI bill can be best shaped to achieve the UK government’s goals of effective regulation while remaining narrow and pro-innovation. The core takeaways from the convening are as follows:
- The United Kingdom should act now to secure a position of leadership in frontier AI: The field of international regulation of frontier AI remains relatively empty. A strong and well-designed bill that sets the model for frontier regulation would put the United Kingdom in a leadership position to shape further developments in the area. Furthermore, a clear bill would both improve safety and clear the way for innovation and economic growth by providing predictable rules that facilitate compliance on the part of AI developers.
- Domestic law is a key part of international regulation: While direct regulation of foreign AI companies and other entities is likely a necessary part of ensuring the safe development of frontier AI, well-designed domestic laws can shape activity abroad without necessarily raising hard issues of extraterritoriality. Mechanisms like a ‘London Effect’ and the modeling of best practices allow domestic law to influence foreign actors. As such, designing domestic laws with international effects in mind would let the government maximize its regulatory effectiveness.
- The government must balance expanding its own reach against relying on others: Frontier AI regulation must cover foreign entities, but the government should not go too far in asserting domestic power outside its own jurisdiction. An international system of evaluators and regulatory authorities providing mutual safety assurances across jurisdictions would help resolve this dilemma, but no single state can create such a system alone. As such, the government should shape the bill to rely on credible assurances from foreign regulatory authorities where possible while retaining the power to prevent harm directly in emergencies. A clause providing that the law applies to companies whose systems could have ‘substantial and foreseeable harmful effects’ on the United Kingdom would supply this backstop while also keeping it constrained.
- Robust international regulation promotes the domestic economy: Much of the United Kingdom’s economic advantage in AI will likely come from specialized AI development and the deployment of models in particular use cases. Regulation at the frontier level would enable downstream users to deploy frontier models without worrying about safety risks and focus the regulatory burden on international frontier companies rather than on smaller domestic start-ups and other enterprises.
- The UK AI Safety Institute (AISI) should continue to advance the state of the science as an arm’s-length body (ALB): UK AISI is the world leader in frontier model evaluations and has made strides in the science of AI safety and in coordinating with companies and with other evaluators. This cooperative approach has been effective so far and illustrates the value of making the United Kingdom the best place to do business and work with regulators. Turning AISI into an ALB will allow it to continue exerting influence to achieve the government’s goals. How to supplement AISI in its scientific role with direct regulation (for example, by creating an independent frontier regulator or by expanding the responsibilities of existing bodies) is outside the scope of this report but is a key question for lawmakers.
- The government should emphasize offering free evaluations and safety certification to open-weight AI developers to incentivize participation in the regulatory regime: Open-source models are generally transparent, decentralized, and accessible to consumers and companies that cannot afford proprietary models. However, they can be difficult to regulate under a frontier framework because their harms cannot be as easily attributed to the model provider that would otherwise be subject to regulation. Offering free evaluations and similar safety tools to open-source developers would help bring them into the frontier regulation regime without imposing excessive costs on these groups.
The convening focused on six core means of international regulation of frontier AI, each with its own benefits and costs. Different ways of drafting the frontier AI bill will draw on different combinations of these methods, and lawmakers should carefully consider how to accomplish their goals of safety and growth with these tools.