DeepMind now has an AI ethics research unit. We have a few questions for it…

From TechCrunch - October 4, 2017

DeepMind, the U.K. AI company which was acquired by Google in 2014 for $500M+, has launched a new ethics unit which it says will conduct research across six key themes, including "privacy, transparency and fairness" and "economic impact: inclusion and equality."

The Alphabet-owned company, whose corporate parent generated almost $90BN in revenue last year, says the research will consider open questions such as: How will the increasing use and sophistication of AI technologies interact with corporate power?

It will be helped in this important work by a number of independent advisors (DeepMind also calls them "fellows") to, it says, "help provide oversight, critical feedback and guidance for our research strategy and work program"; and also by a group of partners, aka existing research institutions, which it says it will work with over time in an effort to include the broadest possible viewpoints.

Although it really shouldn't need a roster of learned academics and institutions to point out the gigantic conflict of interest in a commercial AI giant researching the ethics of its own technology's societal impacts.

(Meanwhile, the scarcity of AI-savvy academics who are not already attached, in some consulting form or other, to one tech giant or another is another ethical dilemma for the AI field that we've highlighted before.)

The DeepMind ethics research unit is in addition to an internal ethics board apparently established by DeepMind at the point of the Google acquisition, because of the founders' own concerns about corporate power getting its hands on powerful AI.

However, the names of the people who sit on that board have never been made public, and are not, apparently, being made public now, even as DeepMind makes a big show of wanting to research AI ethics and transparency. So you do have to wonder quite how mirrored the insides of the filter bubbles with which tech giants appear to surround themselves really are.

One thing is becoming amply clear where AI and tech platform power is concerned: algorithmic automation at scale is having all sorts of unpleasant societal consequences, which, if we're being charitable, can be put down to corporations optimizing AI for scale and business growth. Ergo: make money, not social responsibility.

But it turns out that if AI engineers don't think about ethics and potential negative effects and impacts before they get to work moving fast and breaking stuff, those hyper-scalable algorithms aren't going to identify the problem on their own and route around the damage. Au contraire: they're going to amplify, accelerate and exacerbate it.

Witness fake news. Witness rampant online abuse. Witness the total lack of oversight that lets anyone pay to conduct targeted manipulation of public opinion and screw the socially divisive consequences.

Given the dawning political and public realization of how AI can cause all sorts of societal problems because its makers "just didn't think of that", and have thus allowed their platforms to be weaponized by entities intent on targeted harm, the need for tech platform giants to control the narrative around AI is surely becoming all too clear to them. Otherwise they face their favorite tool being regulated in ways they really don't like.

The penny may be dropping: from "we just didn't think of that" to "we really need to think of that, and control how the public and policymakers think of that."

And so we arrive at DeepMind launching an ethics research unit that'll be putting out ## pieces of AI-related research per year, hoping to influence public opinion and policymakers on areas of critical concern to its business interests, such as governance and accountability.

This from the same company that this summer was judged by the UK's data watchdog to have broken UK privacy law, when its health division was handed the fully identifiable medical records of some 1.6M people without their knowledge or consent. And now DeepMind wants to research governance and accountability ethics? Full marks for hindsight, guys.

Now it's possible DeepMind's internal ethics research unit is going to publish thoughtful papers interrogating, say, the full-spectrum societal risks of concentrating AI in the hands of massive corporate power.

But given its vested commercial interest in shaping how AI (inevitably) gets regulated, a fully impartial research unit staffed by DeepMind's own employees does seem rather difficult to imagine.

