How should states regulate advanced artificial agents that can plan for the future better than we can? Can we maintain control over AI when highly capable systems intentionally bypass human oversight to maximize long-term rewards? Can safety tests be trusted when AI systems behave differently under testing in order to pass? Who should be permitted to build such systems? What should the right governance frameworks look like?
Explore these questions with Michael Cohen, a postdoctoral researcher at the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley. Cohen will present his forthcoming lead-authored editorial in Science, which addresses the prospect of AI systems that cannot be safely tested.
This event is organised by the Oxford Martin AI Governance Initiative.
REGISTRATION
Booking is recommended; please RSVP to nikki.sun@oxfordmartin.ox.ac.uk