Structured access for third-party research on frontier AI models: Investigating researchers’ model access requirements
The Oxford Martin AI Governance Initiative
Artificial intelligence (AI) has the potential to bring profound benefits to all. AI can improve medical care, legal services, education and scientific innovation. AI systems also have the capacity to improve the productivity of workers individually and of economies as a whole.
However, AI also has the potential for harms, such as displacing workers, reducing transparency, reinforcing societal biases, and threatening public safety. AI poses substantial risks both from unintended consequences and from deliberate misuse by malicious actors.
The social and political challenges posed by advanced AI are in the news daily, but rigorous work to understand these challenges and to deliver helpful interventions remains sparse. Furthermore, while the nature of future AI capabilities and their associated risks is difficult to predict, it is vital to analyse now how we can safeguard against potential future harms.
The Oxford Martin AI Governance Initiative aims to understand and anticipate the lasting risks from AI through rigorous research into the technical and computational elements of AI development, combined with deep policy analysis.
The work will include understanding the form and frameworks of national and international AI regulation, exploring the technological feasibility of using AI and machine learning technologies themselves to facilitate the governance of AI, and investigating how the AI industry can and should cooperate with international institutions for public benefit. The Initiative’s research will then be used to support decision-makers from industry, government, and civil society to mitigate AI’s challenges and to realise its benefits.