Report calls for algorithmic transparency and education to fight fake news

From TechCrunch - March 12, 2018

A report commissioned by European lawmakers has called for more transparency from online platforms to help combat the spread of false information online.

It also calls for urgent investment in media and information literacy education, and strategies to empower journalists and foster a diverse and sustainable news media ecosystem.

The High-Level Expert Group (HLEG), which authored the report, was set up last November by the European Union's executive body to help inform its response to the fake news crisis, which is currently challenging Western lawmakers to come up with an effective and proportionate response.

The HLEG favors the term "disinformation," arguing (quite rightly) that the "fake news" badge does not adequately capture the complex problems of disinformation, which also involves content that blends fabricated information with facts.

Fake news has also, of course, become fatally politicized (hi, Trump!), and the label is frequently erroneously applied to try to close down criticism and derail debate by undermining trust and being insulting. (Fake news really is best imagined as a self-feeding ouroboros.)

"Disinformation, as used in the Report, includes all forms of false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit," says the HLEG's chair, professor Madeleine de Cock Buning, in a foreword to the report.

"This report is just the beginning of the process and will feed the Commission's reflection on a response to the phenomenon," writes Mariya Gabriel, the EC commissioner for digital economy and society, in another foreword. "Our challenge will now lie in delivering concrete options that will safeguard EU values and benefit every European citizen."

The Commission's next steps will be to work on coming up with those tangible options to better address the risks posed by disinformation being smeared around online.

The group's 42-page report recommends a multi-dimensional approach to tackling online disinformation over the short and long term: it emphasizes the importance of media literacy and education, advocates support for traditional media industries, warns of censorship risks, and calls for more research to underpin strategies that could help combat the problem.

It does suggest a Code of Principles for online platforms and social networks to commit to, with increased transparency about how algorithms distribute news being one of several recommended steps.

The report lists five core pillars which underpin its various interconnected and mutually reinforcing responses, all of which are in turn aimed at forming a holistic overarching strategy to attack the problem from multiple angles and time-scales.

These five pillars are:

- enhance the transparency of online news;
- promote media and information literacy;
- develop tools for empowering users and journalists;
- safeguard the diversity and sustainability of the news media ecosystem;
- promote continued research on the impact of disinformation.

Zooming further in, the report discusses and promotes various actions, such as clearly identifiable disclosures for sponsored content, including political ads; and making information on payments to human influencers and on the use of bot-based amplification techniques available, so that users can understand whether the apparent popularity of a given piece of online information, or of an influencer, is the result of artificial amplification or is supported by targeted investment.

It also promotes a strategy of battling bad speech by expanding access to "more, better speech," promoting the idea that disinformation could be diluted with quality information.

Although, on that front, a recent piece of MIT research investigating how fact-checked information spreads on Twitter, studying a decade's worth of tweets, suggests that without some form of very specific algorithmic intervention such an approach could well struggle to triumph against human nature: information that had been fact-checked as false was found to spread further and faster than information that had been fact-checked as true.

In short, humans find clickbait more spreadable. And that's why, at least in part, disinformation has scaled into the horribly self-reinforcing problem it has become.

A bit of algorithmic transparency

The report's push for a degree of algorithmic accountability, by calling for a little disinfecting transparency from tech platforms, is perhaps its most interesting and edgy aspect. Though its suggestions here are extremely cautious.

And, yes, staffers from Facebook, Google and Twitter are listed as members, so the major social media tech platforms and disinformation spreaders are directly involved in shaping these recommendations. (See the end of this post for the full list of people/organizations in the HLEG.)

A Twitter spokesman confirmed the company has been engaged with the process from the beginning but declined to provide a statement in response to the report. At the time of writing, requests for comment from Facebook and Google had not been answered.

The presence of powerful tech platforms in the Commission's advisory body on this issue may explain why the group's suggestions on algorithmic accountability come across as rather dilute.

Though you could say that at least the importance of increased transparency is being affirmed, even by social media's giants.

But are platforms the real problem?

One of the HLEG's members, the European consumer advocacy organization BEUC, voted against the report, arguing the group had missed an opportunity to push for a sector inquiry to investigate the link between the advertising revenue policies of platforms and the dissemination of disinformation.

And this criticism does seem to have some substance. For all the report's discussion of possible ways to support a pluralistic news media ecosystem, the unspoken elephant in the room is that Facebook and Google are gobbling up the majority of digital advertising profits.

Facebook very deliberately made news distribution its business, even if it's dialing back that approach now, in the face of a backlash.

In a critical statement, Monique Goyens, director general of BEUC, said: "This report contains many useful recommendations but fails to touch upon one of the core causes of fake news. Disinformation is spreading too easily online. Evidence of the role of behavioral advertising in the dissemination of fake news is piling up. Platforms such as Google or Facebook massively benefit from users reading and sharing fake news articles which contain advertisements. But this expert group chose to ignore this business model. This is head-in-the-sand politics."

Giving another assessment, academic Paul Bernal, IT, IP and media law lecturer at the UEA School of Law in the UK, and not himself a member of the HLEG, also argues the report comes up short by failing to robustly interrogate the role of platform power in the spread of disinformation.

His view is that the whole idea of "sharing" as a mantra is inherently linked to disinformation's power online.

"[The report] is a start, but it misses some fundamental issues. The point about promoting media and information literacy is the biggest and most important one. I don't think it can be emphasized enough, but it needs to be broader than it immediately appears. People need to understand not only when news is misinformation, but to understand the way it is spread," Bernal told TechCrunch.

"That means questioning the role of social media, and here I don't think the High Level Group has been brave enough. Their recommendations don't even mention addressing this, and I find myself wondering why."

"From my own research, the biggest single factor in the current problem is the way that news is distributed: Facebook, Google and Twitter in particular."

"We need to find a way to help people to wean themselves off using Facebook as a source of news. The very nature of Facebook means that misinformation will be spread, and politically motivated misinformation in particular," he added. "Unless this is addressed, almost everything else is just rearranging the deckchairs on the Titanic."

Beyond filter bubbles

But Lisa-Maria Neudert, a researcher at the Oxford Internet Institute, who says she was involved with the HLEG's work (her colleague at the Institute, Rasmus Nielsen, is also a member of the group), played down the notion that the report is not robust enough in probing how social media platforms are accelerating the problem of disinformation, flagging its call for increased transparency and for strategies to create a media ecosystem that is more diverse and more sustainable.

Though she added: "I can see, however, how one of the common critiques would be that the social networks themselves need to do more."

She went on to suggest that negative results following Germany's decision to push for a social media hate speech law, which requires valid takedowns to be executed within 24 hours and includes a regime of penalties that can scale up to €50M, may have influenced the group's decision to push for a far more light-touch approach.

The Commission itself has warned it could draw up EU-wide legislation to regulate platforms over hate speech. Though, for now, it's been pursuing a voluntary Code of Conduct approach. (It has also been turning up the heat over terrorist content specifically.)
