No one wants to build a “feel good” internet

From TechCrunch - March 3, 2018

If there is one policy dilemma facing nearly every tech company today, it is what to do about content moderation, the almost-Orwellian term for censorship.

Charlie Warzel of BuzzFeed pointedly asked the question a little more than a week ago: How is it that the average untrained human can do something that multibillion-dollar technology companies that pride themselves on innovation cannot? And beyond that, why is it that, after multiple national tragedies politicized by malicious hoaxes and misinformation, such a question even needs to be asked?

For years, companies like Facebook, Twitter, YouTube, and others have avoided putting serious resources behind content moderation, preferring relatively small teams of moderators coupled with basic crowdsourced flagging tools to prioritize the worst-offending content.

There has been something of a revolution in that thinking over the past few months, though, as opposition to content moderation retreats in the face of repeated public outcries.

In his message on global community, Mark Zuckerberg asked, “How do we help people build a safe community that prevents harm, helps during crises and rebuilds afterwards in a world where anyone across the world can affect us?” (emphasis mine). Meanwhile, Jack Dorsey tweeted this week that “We’re committing Twitter to help increase the collective health, openness, and civility of public conversation, and to hold ourselves publicly accountable towards progress.”

Both messages are wonderful paeans to better community and integrity. There is just one problem: neither company truly wants to wade into the politics of censorship, which is what it will take to make a “feel good” internet.

Take just the most recent example. The New York Times on Friday wrote that Facebook will allow a photo of a bare-chested male on its platform, but will block photos of women showing the skin on their backs. “For advertisers, debating what constitutes adult content with those human reviewers can be frustrating,” the article notes. Goodbye Bread, an edgy online retailer for young women, said it had a heated debate with Facebook in December over the image of a young woman modeling a leopard-print mesh shirt. Facebook said the picture was too suggestive.

Or rewind a bit in time to the controversy over Nick Ut’s famous Vietnam War photograph, “Napalm Girl.” Facebook’s content moderators initially banned the photo, then the company unbanned it following a public outcry over censorship. Is it nudity? Well, yes, there are breasts exposed. Is it violent? Yes, it is a picture from a war.

Whatever your politics, and whatever your proclivities toward or against suggestive or violent imagery, the reality is that there is simply no obviously right answer in many of these cases. Facebook and other social networks are determining taste, but taste differs widely from group to group and person to person. It’s as if you have melded the audiences of Penthouse and Focus on the Family Magazine together and delivered to them the same editorial product.

The answer to Warzel’s question is obvious in retrospect. Yes, tech companies have failed to invest in content moderation, and for a specific reason: it’s intentional. There is an old saw about work: if you don’t want to be asked to do something, be really, really bad at it, so that no one will ask you to do it again. Silicon Valley tech companies are really, really bad at content moderation, not because they can’t do it, but because they specifically don’t want to.

It’s not hard to understand why. Suppressing speech is anathema not just to the U.S. Constitution and its First Amendment, and not just to the libertarian ethos that pervades Silicon Valley companies, but also to the safe harbor legal framework that shields online platforms from liability for the content their users post in the first place. No company wants to cross so many simultaneous tripwires.
