Social media handed “one-hour rule” for terrorist takedowns in Europe

From TechCrunch - March 2, 2018

The European Commission is still considering whether to regulate social media platforms to ensure they promptly remove illegal content, be it terrorist propaganda, child sexual exploitation or hate speech, but also commercial scams and even copyright breaches.

Yesterday it revealed the next steps in trying to rule social sharing platforms in the meanwhile, placing a big squeeze on tech companies to take down terrorist content specifically by setting out what it's calling the “one-hour rule”, which requires companies to take down this type of illegal content within one hour of it being reported (or at least as a general rule).

It says this time frame is needed because this type of content poses a particularly grave risk to the security of Europeans, and thus its spread must be treated as a matter of the utmost urgency.

And while the Commission is informally using the word “rule”, this is not (yet) a new law.

Rather, it's putting pressure on firms to comply with an informal (and, say critics, arbitrary) recommendation, or face the risk of actual legislation being drafted to rule social media, potentially with penalties attached (as has already happened in Germany).

The Commission defines terrorist content as any material which amounts to terrorist offences under the EU Directive on combating terrorism or under national laws, including material produced by, or attributable to, EU or UN listed terrorist organisations.

So as well as ISIS propaganda it would, for example, include content created by the banned UK Far Right hate group, National Action, too.

Last fall the UK government put its own squeeze on tech giants to radically shrink the time it takes to remove extremist content from their platforms, saying it wanted the average to shrink from 36 hours down to just two. So it's perhaps been providing the inspiration for the EU executive body's even more stringent clampdown, to a one-hour rule.

Although it is giving companies and EU Member States three months' grace before they need to submit relevant information on terrorist content to enable the Commission to monitor their performance.

Commenting in a statement, Andrus Ansip, VP for the Digital Single Market, said: “Online platforms are becoming people's main gateway to information, so they have a responsibility to provide a secure environment for their users. What is illegal offline is also illegal online.”

“While several platforms have been removing more illegal content than ever before, showing that self-regulation can work, we still need to react faster against terrorist propaganda and other illegal content which is a serious threat to our citizens' security, safety and fundamental rights.”

Last month the UK government also revealed it had paid an AI company to develop a machine learning tool that it said can automatically detect online propaganda produced by the Islamic extremist hate group ISIS with an extremely high degree of accuracy.

It said the tool could be integrated into platforms to block such content before it's uploaded to the Internet. And UK Home Secretary Amber Rudd said she was not ruling out forcing tech firms to use the tool.

The Commission is also pushing platforms to implement what it calls proactive measures, including automated detection, to, as it puts it, “effectively and swiftly remove or disable terrorist content and stop it from reappearing once it has been removed”.

It's also following the UK government's lead by saying it wants social media giants to share learnings and techniques with smaller platforms, and says it wants tech firms to put in place working arrangements for better cooperation with the relevant authorities, including Europol.

“Fast-track procedures should be put in place to process referrals as quickly as possible, while Member States need to ensure they have the necessary capabilities and resources to detect, identify and refer terrorist content,” it adds.

EU Member States are being instructed to report regularly to the EC on tech firms' performance regarding terrorist content referrals, and also on overall cooperation.

The Commission also says it will launch a public consultation in the coming weeks.

While terrorist content is the clear priority here, the EC is continuing to apply pressure on platforms to tighten the screw on all illegal content, as it defines it.

Though it seems to have picked up on some of the criticisms of bundling up so many different types of content issues into one “illegal” package, and the associated risk of applied measures being disproportionate, as its Recommendation also specifies the need for safeguards against unjust and/or improper content takedowns, including by improving transparency for citizens on platforms' content decisions.

