Ethical AI Kills Too: An Assessment of the Lords report on AI in the UK
19 Apr 2018
Developing AI that does not eventually take over humanity or turn the world into a dystopian nightmare is a challenge. It also has an interesting effect on philosophy, and in particular ethics: suddenly, a great deal of the millennia-long debates on the good and the bad, the fair and unfair, need to be concluded and programmed into machines. Does the autonomous car in an unavoidable collision swerve to avoid killing five pedestrians at the cost of its passenger’s life? And what exactly counts as unfair discrimination or privacy violation when “Big Data” suggests an individual is, say, a likely criminal?
The recent House of Lords Artificial Intelligence Committee’s report puts the ethics of AI front and centre. It engages thoughtfully with a wide range of issues: algorithmic bias, the monopolised control of data by large tech companies, the disruptive effects of AI on industries, and its implications for education, healthcare, and weaponry.
Many of these are economic and technical challenges. For instance, the report notes Google’s continued inability to fix its visual identification algorithms, which it emerged three years ago could not distinguish between gorillas and black people. For now, the company simply does not allow users of Google Photos to search for gorillas.
But many of the challenges are also ethical – in fact, central to the report is that while the UK is unlikely to lead globally in the technical development of AI, it can lead the way in putting ethics at the centre of AI’s development and use.
To that end, the report proposes developing a national and international “AI code” based on five overarching principles:
- Artificial intelligence should be developed for the common good and benefit of humanity
- Artificial intelligence should operate on principles of intelligibility and fairness
- Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families, or communities
- All citizens should have the right to be educated to enable them to flourish mentally, emotionally, and economically alongside artificial intelligence
- The autonomous power to hurt, destroy, or deceive human beings should never be vested in artificial intelligence
This seems a good starting point from which to develop a shared ethical AI framework. But note the last principle on “autonomous power”. Though the report’s focus here is on cyber security in particular, it also highlights the UK government’s hazy distinction between an automated weapon system and an autonomous one. For the latter, current Ministry of Defence guidance suggests that an autonomous weapon must be “aware and show intention” – criteria that are vague while also setting the bar quite high for what constitutes autonomy in this context, at least compared to the other government definitions the report surveys.
Without a sharper and internationally shared definition of what constitutes an autonomous weapon, the report argues we could “easily find ourselves stumbling through a semantic haze into dangerous territory”.
This is a level-headed argument and a useful observation. The problem is that the principle is not limited to AI as applied to weaponry or cyber security; it covers AI in general. And that seems implausible: in fact, the most critical task ahead for developing ethical AI is to determine exactly when it can, and should, autonomously hurt and destroy human beings. The autonomous car is a prime example.
In an unavoidable collision, does an autonomous car give priority to its passengers, or does it impartially try to minimise harm, potentially killing its passenger(s) if that means saving a greater number overall? And does the car take into account the age, or perhaps other physical or intellectual attributes, of those involved in determining its course of action?
It’s easy to assume that such unavoidable collisions, reminiscent of the Trolley Dilemma, would arise infrequently. But once millions of autonomous cars roam the streets, the unlikely becomes inevitable, and programming cars with the right values to respond to such life-and-death trade-offs becomes necessary. And that means knowing precisely when it is right to hurt or destroy humans.
On a cynical side note, there is evidence that people in principle want manufacturers to make “utilitarian autonomous cars” that aim for impartial harm minimisation in collisions, but that when it comes to their own cars those same people would only really be willing to buy egoistic cars that protect them over others. Ironically, if the promise of autonomous vehicles is that they will massively reduce traffic accidents, then building egoistic cars that people are willing to use may be how to minimise overall harm.
Autonomous vehicles aside, decisions by AI that harm and destroy thousands, if not millions, of human beings become inevitable as we increasingly use it to configure our economies and allocate resources more efficiently. These decisions inevitably involve trade-offs, and in many cases – such as how we allocate resources to healthcare, and to which sectors of it – they entail some people dying prematurely or suffering in ways they otherwise would not have.
What all this means is that to design AI with the right values we need to be able to straightforwardly answer what have often been intractable questions in ethics: what criteria determine who gets to die and who gets to live, and what is the optimal balance between fairness and individual well-being, freedom and security?
Of course, policy and regulatory efforts in any society have always attempted to answer these questions, albeit in an indirect, hodgepodge, and typically uncoordinated way. Ethical AI may not be compatible with this approach: the values programmed into it need to be settled and clear cut. As the report highlights, this is a philosophical rather than a merely technical problem – and it entails a radical shift in comfort for the armchair philosopher.
This opinion piece reflects the views of the author, and does not necessarily reflect the position of the Oxford Martin School or the University of Oxford. Any errors or omissions are those of the author.