Facebook security chief rants about misguided “algorithm” backlash

From TechCrunch - October 7, 2017

“I am seeing a ton of coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech cos,” wrote Facebook Chief Security Officer Alex Stamos on Saturday in a reeling tweetstorm. He claims journalists misunderstand the complexity of fighting fake news and deride Facebook for thinking algorithms are neutral when the company knows they aren't, and he encourages reporters to talk to the engineers who actually deal with these problems and their consequences.

Yet this argument minimizes many of Facebook's troubles. The issue isn't that Facebook doesn't know algorithms can be biased, or that people don't realize these are tough problems. It's that the company failed to anticipate abuses of its platform and work harder to build algorithms or human moderation processes that could have blocked fake news and fraudulent ad buys before they impacted the 2016 U.S. presidential election, rather than after. And the tweetstorm completely glosses over the fact that Facebook will fire employees who talk to the press without authorization.

Stamos' comments hold weight because he's leading Facebook's investigation into Russian election tampering. He was Chief Information Security Officer at Yahoo before taking the CSO role at Facebook in mid-2015.

The sprawling response to the recent backlash comes just as Facebook starts making the changes it should have implemented before the election. Today, Axios reports that Facebook has emailed advertisers to inform them that ads targeting politics, religion, ethnicity or social issues will have to be manually approved before they're sold and distributed.

And yesterday, Facebook updated an October 2nd blog post about disclosing Russian-bought election interference ads to Congress to note that “Of the more than 3,000 ads that we have shared with Congress, 5% appeared on Instagram. About $6,700 was spent on these ads,” implicating Facebook's photo-sharing acquisition in the scandal for the first time.

Stamos' tweetstorm was set off by Lawfare associate editor and Washington Post contributor Quinta Jurecic, who commented that Facebook's shift toward human editors implies that saying “the algorithm is bad, now we're going to have people do this” actually just entrenches “The Algorithm” as a mythic entity beyond understanding, rather than something that was designed poorly and irresponsibly and could have been designed better.

Here's my tweet-by-tweet interpretation of Stamos' perspective:

He starts by saying journalists and academics don't know what it's like to actually implement solutions to hard problems, yet clearly no one has the right answers yet.

Facebook's team has supposedly been pigeonholed as either naive about real-life consequences or too technical to see the human impact of its platform, but the outcomes speak for themselves about the team's failure to proactively protect against election abuse.

Facebook understands that people code their biases into algorithms, and it works to stop that. But censorship resulting from overzealous algorithms hasn't been the real problem. Algorithmic negligence of worst-case scenarios for malicious use of Facebook's products is.

Understanding the risks of algorithms is what's kept Facebook from implementing them so aggressively that they could have led to censorship. That's responsible, but it doesn't solve the urgent problem of abuse at hand.

Now Facebook's CSO is calling journalists' demands for better algorithms fake news, because these algorithms are hard to build without creating a dragnet that attacks innocent content too.

Content that is totally false might be relatively easy to spot, but the polarizing, exaggerated, opinionated content many see as fake is tough to train AI to detect because of the nuance separating it from legitimate news, which is a valid point.
