The Oxford Martin Programme on Ethical Web and Data Architectures

The Challenge

Thirty years ago, the World Wide Web launched as an open, common, universal infrastructure that anyone with a computer and a modem could use to communicate, publish and access information. In recent years, however, it has radically diverged from the values upon which it was founded, and is now dominated by a small number of platform companies whose business models and services generate huge profits.

While an original ambition was to foster a Digital Enlightenment, what has developed instead is the large-scale collection of sensitive data about people’s beliefs, interests, activities and ways of life. This data feeds artificial intelligence (AI) analytics and machine learning (ML), and is used to target advertising and to direct us towards particular content, groups and viewpoints. Individuals are treated as a means of value extraction, with no long-term control or agency over their personal data or many of the decisions made using it.

As the remaining half of the world’s population comes online, we will need digital infrastructures that promote and support human flourishing, individual autonomy and self-determination in our emerging digital societies. To do this we will need to redesign the fundamental information architectures which underpin the web, and deploy new legal and regulatory infrastructures. We use the term architecture to evoke the notion of a carefully designed structure that may be technical or regulatory, a software system, a social process, or typically a combination of all of these.

Our work is organised around four themes:

Data Autonomy
We wish to empower people with their data, allowing users to control, manage, maintain and use their personal data towards their own goals, objectives and preferences. We have established a radical new technical approach, backwards-compatible with the existing web, that enables web users to control where their data is hosted. In addition, we will further develop and prototype methods, tools and techniques to understand and control the flow of data from the apps and devices they use.
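
As an illustration only, and not a description of the programme's actual architecture, the sketch below shows the kind of per-application, per-category access control a user-controlled data store could enforce; all names here (PersonalDataStore, grant, read) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalDataStore:
    """Toy personal data store: the user decides which apps may read which data categories."""
    data: dict = field(default_factory=dict)    # category -> value, held wherever the user chooses
    grants: dict = field(default_factory=dict)  # app -> set of categories the user has permitted

    def grant(self, app: str, category: str) -> None:
        self.grants.setdefault(app, set()).add(category)

    def revoke(self, app: str, category: str) -> None:
        self.grants.get(app, set()).discard(category)

    def read(self, app: str, category: str):
        # Apps only ever see data the user has explicitly granted.
        if category not in self.grants.get(app, set()):
            raise PermissionError(f"{app} has no grant for '{category}'")
        return self.data[category]

# Example: a fitness app may read step counts but not location history.
store = PersonalDataStore(data={"steps": 8042, "location": "51.75N, 1.26W"})
store.grant("fitness-app", "steps")
print(store.read("fitness-app", "steps"))   # 8042
# store.read("fitness-app", "location")     # would raise PermissionError
```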

Data Privacy
We will develop a range of privacy-preserving machine learning (PPML) methods that allow AI training to be decentralised, so that data can be processed locally rather than in large, centralised data centres. The aim is to process data in a way that maintains user privacy and prevents subversion of the results, while still extracting wider collective value.
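
One widely studied family of techniques here is federated learning, in which only model updates, never the raw data, leave each participant's device. The following is a minimal sketch of a federated-averaging loop over a simple least-squares model on synthetic data; it illustrates the decentralised-training idea rather than the programme's specific PPML methods.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's training: gradient steps on its own data; the raw data never leaves the device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

def federated_average(weights, clients):
    """Server step: average the locally trained models (FedAvg), weighted by client data size."""
    sizes = np.array([len(y) for _, y in clients])
    updates = np.stack([local_update(weights, X, y) for X, y in clients])
    return (updates * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Synthetic example: three clients, each holding its own slice of data locally.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):               # communication rounds: only model weights are exchanged
    w = federated_average(w, clients)
print(w)                          # approaches [2.0, -1.0] without pooling the raw data
```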

Algorithmic Accountability
We will develop methods to assess whether AI or algorithmic decision-making is fair and equitable and complies with regulatory requirements, and we will seek ways to promote a right to explanation of the internal decision-making processes of algorithms.
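
Which fairness criterion is appropriate is itself contested and context-dependent, so the sketch below is only an illustration of two common statistical measures such assessments often draw on: the demographic parity gap and the disparate impact ratio. The data and names are made up for the example.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-outcome rates between two groups (0 means parity)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def disparate_impact_ratio(y_pred, group):
    """Ratio of the lower positive-outcome rate to the higher one; the informal
    'four-fifths rule' treats values below 0.8 as a warning sign."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = sorted([y_pred[group == 0].mean(), y_pred[group == 1].mean()])
    return rates[0] / rates[1]

# Example: automated loan decisions (1 = approve) for two demographic groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(demographic_parity_gap(decisions, groups))   # 0.2  (60% vs 40% approval)
print(disparate_impact_ratio(decisions, groups))   # 0.67 (below the 0.8 threshold)
```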

Data Sharing
We will explore new institutional and legal constructs within which to hold data or algorithmic outputs. New forms of holding and sharing data include concepts such as data trusts, mutuals or cooperatives. Such constructs would comprise stated purposes, a legal structure, rights and duties over stewarded data, defined decision-making processes, and a description of how benefits are shared. These architectures could allow data to be shared in more flexible and innovative ways, respecting individual autonomy while generating wider societal benefits.

The programme brings together researchers from Oxford’s Department of Computer Science, Faculties of Law and Philosophy, the Oxford Internet Institute and the Blavatnik School of Government.

Visit the Ethical Web and Data Architectures website