CEO of Google DeepMind Demis Hassabis after the Go match between South Korean Lee Sedol and Google’s AI program, AlphaGo. Photograph: Lee Jin-man/AP

Whatever happened to the DeepMind AI ethics board Google promised?


When the search giant bought the artificial intelligence company, part of the deal was setting up an ethics board. Three years on, where is it?

Three years ago, the artificial intelligence research firm DeepMind was acquired by Google for a reported £400m. As part of the acquisition, Google agreed to set up an ethics and safety board to ensure that the technology would not be abused.

The existence of the ethics board wasn’t confirmed at the time of the acquisition announcement, and the public only became aware of it through a leak to industry news site The Information. But in the years since, senior members of DeepMind have publicly confirmed the board’s existence, arguing that it is one of the ways that the company is trying to “lead the way” on ethical issues in AI.

But in all that time DeepMind has consistently refused to say who sits on the board or what it discusses, or even to confirm publicly whether it has officially met. The Guardian has asked DeepMind and Google multiple times since the acquisition on 26 January 2014 for transparency around the board, and has received just one answer on the record.

In January 2016, during a press conference in which DeepMind announced that its AlphaGo system had successfully defeated a high-level human player at the ancient board game Go, the Guardian asked DeepMind co-founder and chief executive Demis Hassabis whether it would make any information about the ethics board public.

“We have convened our ethics board, that’s progressing very well,” Hassabis replied. “It’s an internal board, so confidential matters are discussed on that. And so far, we feel that a lot of it, the purpose of the board currently is to educate the people on that board as to the issues and bring everyone up to speed.

“So there hasn’t really been anything major yet that would warrant announcing in any way. But in the future we may well talk about those things more publicly,” he added.

Asked now whether DeepMind still stands by the comment, a spokesperson said: “It’s crucial that we bring in independent third-party experts from outside the field of AI, who have huge experience of ethical questions in other areas of science and beyond. That involves a big investment of time in getting everyone up to speed on the current state of the art in AI and how the field may develop in the years ahead, so they can bring their expertise to bear on this too.”

Other AI companies have similar boards, which do have a public presence. For instance, Texan AI startup Lucid.AI has a six-person ethics board including Unicef’s Liz Gibbons and Imperial College’s Murray Shanahan.

Importance of committee

There is no doubting that DeepMind values the ethics board highly. Jaan Tallinn, an early investor in the company, says that Google’s offer to create the board was a strong motivation in picking the company over other potential suitors. “To the best of my knowledge (I wasn’t privy to the details of the negotiations) … Google’s offer wasn’t the best one on the table in pure financial terms, but DeepMind decided to go with Google nevertheless,” Tallinn says, “partly because of Google’s promise to establish a neutral ethics and safety board to oversee and consult DeepMind’s future operations.”

And while DeepMind’s main ethics board hasn’t progressed publicly, the company has managed to get similar projects successfully off the ground. A second ethics board, created specifically to oversee DeepMind’s health-related projects such as its partnerships with NHS hospitals, is public, and met for the first time in June 2016. The intention is that it will meet four times a year and issue an annual statement outlining its findings. It includes the editor of the Lancet medical journal, Richard Horton; the NHS’s former “kidney tsar”, Prof Donal O’Donoghue; and the chair of Tech City UK, Eileen Burbidge.

Mustafa Suleyman, the DeepMind co-founder who heads up the company’s applied research arm, said in May that the health board would be fully independent. “They’re not going to be contracted, they’re not going to be paid, and they’re going to be free to speak publicly about what we’re doing. I’m really proud to be able to say that and to be able to open ourselves up for scrutiny proactively.”

Externally, DeepMind has also been fighting for wider ethical oversight for the AI industry. The company was one of the inaugural five members of the “Partnership on Artificial Intelligence to Benefit People and Society”, along with Facebook, Amazon, IBM and Microsoft. Suleyman is one of the partnership’s two interim co-chairs, along with Eric Horvitz. When the Partnership was announced, Suleyman said it wouldn’t replace the internal ethics board, but that it would complement it.

The Partnership on AI hasn’t done much since it was formed, but that may soon change: Bloomberg reports that its numbers have swelled, with Apple joining the organisation.

‘The best world we can imagine’

Why is the board so important that DeepMind would turn down higher offers to ensure its creation, and then spend three years perfecting its composition?

Tallinn says that such boards are “a partial solution to navigating a potentially hazardous moral landscape”.

“Giving the control over powerful AI to the highest bidder is unlikely to lead to the best world we can imagine. For example, one great concern is an arms race in autonomous weapons where AI would be literally killing people,” he added.

Dr Nick Bostrom, a philosopher at the University of Oxford, provided one of the most compelling arguments for the ethical oversight of artificial intelligence in his 2014 book, Superintelligence. Bostrom argues that super-intelligent AIs could end up destroying humanity by pursuing their simple goals to the exclusion of all else.

“In my opinion, it’s very appropriate that an organisation that has as its ambition to ‘solve intelligence’ has a process for thinking about what it would mean to succeed, even though it’s a long-term goal,” Bostrom says. “The creation of AI with the same powerful learning and planning abilities that make humans smart will be a watershed moment. When eventually we get there, it will raise a host of ethical and safety concerns that will need to be carefully addressed. It is good to start studying these in advance rather than leave all the preparation for the night before the exam.”

Bostrom does, however, defend the quiet around DeepMind’s ethics and safety committee. “Since it’s meant to focus on long-term issues, it’s probably more important that it be done well than that it be done quickly.

“I don’t know what is involved internally inside Alphabet [DeepMind and Google’s parent company] in making this happen, but I can’t think of another case of a small group being acquired by a large company and setting up a mechanism to oversee and regulate that the large company will only use their inventions ethically and safely.”

The Guardian again asked DeepMind for specific details of the ethics and safety board for this article; DeepMind again declined to comment.

