UK report urges action to combat AI bias

From TechCrunch - April 16, 2018

The need for diverse development teams and truly representational datasets, to avoid biases being baked into AI algorithms, is one of the core recommendations in a lengthy Lords committee report looking into the economic, ethical and social implications of artificial intelligence, published today by the upper House of the UK parliament.

The main ways to address these kinds of biases are to ensure that developers are drawn from diverse gender, ethnic and socio-economic backgrounds, and are aware of, and adhere to, ethical codes of conduct, the committee writes, chiming with plenty of extant commentary around algorithmic accountability.

"It is essential that ethics take centre stage in AI's development and use," adds committee chairman Lord Clement-Jones in a statement. "The UK has a unique opportunity to shape AI positively for the public's benefit and to lead the international community in AI's ethical development, rather than passively accept its consequences."

The report also calls for the government to take urgent steps to help foster "the creation of authoritative tools and systems for auditing and testing training datasets to ensure they are representative of diverse populations, and to ensure that when used to train AI systems they are unlikely to lead to prejudicial decisions", recommending a publicly funded challenge to incentivize the development of technologies that can audit and interrogate AIs.
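To make the committee's ask concrete, here is a minimal sketch of the kind of representativeness check such auditing tools might perform: comparing the demographic mix of a training dataset against reference population shares and flagging groups that are over- or under-represented. The data, group labels and tolerance threshold below are all hypothetical, chosen purely for illustration.

```python
# Hypothetical sketch of a dataset-representativeness audit.
from collections import Counter

def representation_gaps(samples, population_shares, tolerance=0.05):
    """Return groups whose share in `samples` deviates from the
    reference population share by more than `tolerance`."""
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Hypothetical example: a dataset that over-represents group "A".
dataset = ["A"] * 700 + ["B"] * 200 + ["C"] * 100
reference = {"A": 0.5, "B": 0.3, "C": 0.2}
print(representation_gaps(dataset, reference))
# → {'A': 0.2, 'B': -0.1, 'C': -0.1}
```

A real auditing tool would of course need far richer demographic categories, intersectional breakdowns and statistical tests, but even this simple ratio check shows why authoritative reference data on "diverse populations" is a prerequisite for the tooling the committee wants.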

"The Centre for Data Ethics and Innovation, in consultation with the Alan Turing Institute, the Institute of Electrical and Electronics Engineers, the British Standards Institute and other expert bodies, should produce guidance on the requirement for AI systems to be intelligible," the committee adds. "The AI development sector should seek to adopt such guidance and to agree upon standards relevant to the sectors within which they work, under the auspices of the AI Council", the latter being a proposed industry body it wants established to help ensure transparency in AI.

The committee is also recommending a cross-sector AI Code to try to steer developments in a positive, societally beneficial direction, though not for this to be codified in law (the suggestion is it could provide the basis for statutory regulation, if and when this is determined to be necessary).

Among the five principles they're suggesting as a starting point for the voluntary code are that AI should be developed for the common good and benefit of humanity, and that it should operate on principles of intelligibility and fairness.

Though, elsewhere in the report, the committee points out it can be a challenge for humans to understand decisions made by some AI technologies, going on to suggest it may be necessary to refrain from using certain AI techniques for certain types of use-cases, at least until algorithmic accountability can be guaranteed.

"We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual's life, unless it can generate a full and satisfactory explanation for the decisions it will take," it writes in a section discussing intelligible AI. "In cases such as deep neural networks, where it is not yet possible to generate thorough explanations for the decisions that are made, this may mean delaying their deployment for particular uses until alternative solutions are found."

A third principle the committee says it would like to see included in the proposed voluntary code is: "AI should not be used to diminish the data rights or privacy of individuals, families or communities."

Though this is a curiously narrow definition: why not push for AI not to diminish rights, period?

"It's almost as if 'follow the law' is too hard to say," observes Sam Smith, a coordinator at the patient data privacy advocacy group medConfidential, discussing the report.

"Unlike other AI ethics standards, which seek to create something so weak no one opposes it, the existing standards and conventions of the rule of law are well known and well understood, and provide real and meaningful scrutiny of decisions, assuming an entity believes in the rule of law," he adds.

Looking at the tech industry as a whole, it's certainly hard to conclude that self-defined ethics offer much of a meaningful check on commercial players' data processing and AI activities.

Topical case in point: Facebook has continued to claim there was nothing improper about the fact that millions of people's information was shared with professor Aleksandr Kogan. People "knowingly provided their information" is the company's defensive claim.

Yet the vast majority of people whose personal data was harvested from Facebook by Kogan clearly had no idea what was possible under its platform terms, which, until 2015, allowed one user to consent to the sharing of data on all their Facebook friends. (Hence ~270,000 downloaders of Kogan's app being able to pass on data on up to 87M Facebook users.)

So Facebook's self-defined ethical code has been shown to be worthless, aligning completely with its commercial imperatives rather than supporting users to protect their privacy. (Just as its T&Cs are intended to cover its own rear end, rather than clearly inform people about their rights, as one US congressman memorably put it last week.)

"A week after Facebook were criticized by the US Congress, the only reference to the Rule of Law in this report is about exempting companies from liability for breaking it," Smith adds in a medConfidential response statement to the Lords report. "Public bodies are required to follow the rule of law, and any tools sold to them must meet those legal obligations. This standard for the public sector will drive the creation of tools which can be reused by all."

Health data should not be shared lightly

The committee, which took evidence from Google-owned DeepMind as one of a multitude of expert witnesses during more than half a year's worth of enquiry, touches critically on the AI company's existing partnerships with UK National Health Service Trusts.

The first of these, dating from 2015 and involving the sharing of ~1.6 million patients' medical records with the Google-owned company, ran into trouble with the UK's data protection regulator. The UK's information commissioner concluded last summer that the Royal Free NHS Trust's agreement with DeepMind had not complied with UK data protection law.

Patients' medical records were used by DeepMind to develop a clinical task management app wrapped around an existing NHS algorithm for detecting a condition known as acute kidney injury. The app, called Streams, has been rolled out for use in the Royal Free's hospitals, complete with PR fanfare. But it's still not clear what legal basis exists to share patients' data.

"Maintaining public trust over the safe and secure use of their data is paramount to the successful widespread deployment of AI and there is no better exemplar of this than personal health data," the committee warns. "There must be no repeat of the controversy which arose between the Royal Free London NHS Foundation Trust and DeepMind. If there is, the benefits of deploying AI in the NHS will not be adopted or its benefits realised, and innovation could be stifled."

The report also criticizes the current piecemeal approach being taken by NHS Trusts to sharing data with AI developers, saying this risks the inadvertent under-appreciation of the data, and NHS Trusts exposing themselves to inadequate data-sharing arrangements.

"The data held by the NHS could be considered a unique source of value for the nation. It should not be shared lightly, but when it is, it should be done in a manner which allows for that value to be recouped," the committee writes.

A similar point, about not allowing the huge store of potential value contained within publicly funded NHS datasets to be cheaply asset-stripped by external forces, was made by Oxford University's Sir John Bell in a UK government-commissioned industrial strategy review of the life sciences sector last summer.

Despite similar concerns, the committee also calls for a framework for sharing NHS data to be published by the end of the year, and is pushing for NHS Trusts to digitize their current practices and records, with a target deadline of 2022, in consistent formats so that people's medical records can be made more accessible to AI developers.
