The Oxford Martin AI Governance Initiative
The Challenge
Artificial intelligence (AI) has the potential to bring profound benefits to all. AI can improve medical care, legal services, education and scientific innovation. AI systems also have the capacity to improve the productivity of workers individually and of economies as a whole.
However, AI also has the potential to cause harm, for example by displacing workers, reducing transparency, reinforcing societal biases, and threatening public safety. AI poses substantial risks both from unintended consequences and from deliberate misuse by malicious actors.
The social and political challenges posed by advanced AI are in the news daily, but rigorous work to understand these challenges and to deliver helpful interventions remains sparse. Furthermore, although future AI capabilities and their associated risks are difficult to predict, it is vital to analyse now how we can safeguard against potential future harms.
The Oxford Martin AI Governance Initiative aims to understand and anticipate the lasting risks from AI through rigorous research into the technical and computational elements of AI development, combined with deep policy analysis.
The work will include understanding the form and frameworks of national and international AI regulation, exploring the technological feasibility of using AI and machine learning technologies themselves to facilitate the governance of AI, and investigating how the AI industry can and should cooperate with international institutions for public benefit. The Initiative’s research will then be used to support decision-makers from industry, government, and civil society to mitigate AI’s challenges and to realise its benefits.
Publications
What Should Be Internationalised in AI Governance?
Voice and Access in AI: Global AI Majority Participation in Artificial Intelligence Development and Governance
Public vs Private Bodies: Who Should Run Advanced AI Evaluations and Audits? A Three-Step Logic Based on Case Studies of High-Risk Industries
Open Problems in Technical AI Governance
The Future of International Scientific Assessments of AI’s Risks
AISIs’ Roles in Domestic and International Governance
Governing Through the Cloud: The Intermediary Role of Compute Providers in AI Regulation
Structured access for third-party research on frontier AI models: Investigating researchers’ model access requirements
International AI Governance