YouTube: More AI can fix AI-generated “bubbles of hate”
From TechCrunch - December 19, 2017

Facebook, YouTube and Twitter faced another online hate crime grilling today by UK parliamentarians visibly frustrated at their continued failures to apply their own community guidelines and take down reported hate speech.

The UK government has this year pushed to raise online radicalization and extremist content as a G7 priority, and has been pressing for takedown timeframes for extremist content to shrink radically.

The broader issue of online hate speech has continued to be a hot-button political issue, especially in Europe, with Germany passing a social media hate speech law in October and the European Union's executive body pushing for social media firms to automate the flagging of illegal content to accelerate takedowns.

In May, the UK's Home Affairs Committee also urged the government to consider a regime of fines for social media content moderation failures, accusing tech giants of taking a laissez-faire approach to moderating hate speech content on their platforms.

It revisited their performance in another public evidence session today.

What it is that we have to do to get you to take it down?

Addressing Twitter, Home Affairs Committee chair Yvette Cooper said her staff had reported a series of violent, threatening and racist tweets via the platform's standard reporting systems in August, many of which still had not been removed months on.

She did not try to hide her exasperation as she went on to question why certain antisemitic tweets previously raised by the committee during an earlier public evidence session had also still not been removed, despite Twitter's Nick Pickles agreeing at the time that they broke its community standards.

"I'm kind of wondering what it is we have to do," said Cooper. "We sat in this committee in a public hearing and raised a clearly vile antisemitic tweet with your organization but it is still there on the platform. What it is that we have to do to get you to take it down?"

Twitter's EMEA VP for public policy and communications, Sinead McSweeney, who was fielding questions on behalf of the company this time, agreed that the tweets in question violated Twitter's hate speech rules but said she was unable to provide an explanation for why they had not been taken down.

She noted the company has newly tightened its rules on hate speech and said specifically that it has raised the priority of bystander reports, whereas previously it would have placed more priority on a report if the person who was the target of the hate was also the one reporting it.

"We haven't been good enough at this," she said. "Not only we haven't been good enough at actioning, but we haven't been good enough at telling people when we have actioned. And that is something that, particularly over the last six months, we have worked very hard to change, so you will definitely see people getting much, much more transparent communication at the individual level and much, much more action."

"We are now taking actions against 10 times more accounts than we did in the past," she added.

Cooper then turned her fire on Facebook, questioning the social media giant's public policy director, Simon Milner, about Facebook pages containing violent anti-Islamic imagery, including one that appeared to be encouraging the bombing of Mecca, and pages set up to share photos of schoolgirls for the purposes of sexual gratification.

He claimed Facebook has fixed the problem of lurid comments being able to be posted on otherwise innocent photographs of children shared on its platform, something YouTube has also recently been called out for, telling the committee: "That was a fundamental problem in our review process that has now been fixed."

Cooper then asked whether the company is living up to its own community standards, which Milner agreed do not permit people or organizations that promote hate against protected groups to have a presence on its platform. "Do you think that you are strong enough on Islamophobic organizations and groups and individuals?" she asked.

Milner avoided answering Cooper's general question, instead narrowing his response to the specific individual page the committee had flagged, saying it was not obviously run by a group and that Facebook had taken down the specific violent image highlighted by the committee but not the page itself.

"The content is disturbing but it is very much focused on the religion of Islam, not on Muslims," he added.

This week a decision by Twitter to close the accounts of far right group Britain First has swiveled a critical spotlight on Facebook, as the company continues to host the same group's page, apparently preferring to selectively remove individual posts even though Facebook's community standards forbid hate groups if they target people with protected characteristics (such as religion, race and ethnicity).

Cooper appeared to miss an opportunity to press Milner on the specific point, and earlier today the company declined to respond when we asked why it has not banned Britain First.

Giving an update earlier in the session, Milner told the committee that Facebook now employs over 7,500 people to review content, having announced a 3,000 bump in headcount earlier this year, and said that overall it has around 10,000 people working in safety and security, a figure he said it will be doubling by the end of 2018.

Areas where he said Facebook has made the most progress vis-a-vis content moderation are around terrorism, and nudity and pornography (which he noted is not permitted on the platform).

Google's Nicklas Berild Lundblad, EMEA VP for public policy, was also attending the session to field questions about YouTube, and Cooper initially raised the issue of racist comments not being taken down despite being reported.

He said the company is hoping to be able to use AI to automatically pick up these types of comments. "One of the things that we want to get to is a situation in which we can actively use machines in order to scan comments for attacks like these and remove them," he said.

Cooper pressed him on why certain comments reported to it by the committee had still not been removed, and he suggested reviewers might still be looking at a minority of the comments in question.

She flagged a comment calling for an individual to be "put down", asking why that specifically had not been removed. Lundblad agreed it appeared to be in violation of YouTube's guidelines but appeared unable to provide an explanation for why it was still there.

Cooper then asked why a video made by the neo-Nazi group National Action, which is proscribed as a terrorist group and banned in the UK, had kept reappearing on YouTube after it had been reported and taken down, even after the committee raised the issue with senior company executives.

Eventually, after about eight months of the video being repeatedly reposted on different accounts, she said it finally appears to have gone.

But she contrasted this sluggish response with the speed and alacrity with which Google removes copyrighted content from YouTube. "Why did it take that much effort, and that long, just to get one video removed?" she asked.

"I can understand that's disappointing," responded Lundblad. "They're sometimes manipulated so you have to figure out how they manipulated them to take the new versions down."

"And we're now looking at removing them faster and faster. We've removed 135 of these videos, some of them within a few hours with no more than 5 views, and we're committed to making sure this improves."

He also claimed the rollout of machine learning technology has helped YouTube improve its takedown performance, saying: "I think that we will be closing that gap with the help of machines and I'm happy to review this in due time."

You're actually actively recommending racist material


Continue reading at TechCrunch »