Facebook security chief rants about misguided “algorithm” backlash

“I am seeing a ton of coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech cos,” wrote Facebook Chief Security Officer Alex Stamos on Saturday in a lengthy tweetstorm. He claims journalists misunderstand the complexity of attacking fake news, that they deride Facebook for thinking algorithms are neutral when the company knows they aren’t, and he encourages reporters to talk to the engineers who actually deal with these problems and their consequences.

Nobody of substance at the big companies thinks of algorithms as neutral. Nobody is not aware of the risks.

— Alex Stamos (@alexstamos) October 7, 2017

Yet this argument minimizes many of Facebook’s troubles. The issue isn’t that Facebook doesn’t know algorithms can be biased, or that people don’t know these are tough problems. It’s that the company didn’t anticipate abuses of the platform and work harder to build algorithms or human moderation processes that could have blocked fake news and fraudulent ad buys before they impacted the 2016 U.S. presidential election, rather than only addressing them now. And his tweetstorm completely glosses over the fact that Facebook will fire employees who talk to the press without authorization.

[Update, 3:30pm PT: I commend Stamos for speaking so candidly to the public about an issue where more transparency is appreciated. But at the same time, Facebook holds the information and context he says journalists, and by extension the public, lack, and the company is free to bring in reporters for the necessary briefings. I’d certainly attend a “Whiteboard” session like those Facebook has often held for reporters in the past on topics like News Feed sorting and privacy controls.]

Stamos’ comments hold weight because he’s leading Facebook’s investigation into Russian election tampering. He was the Chief Information Security Officer at Yahoo before taking the CSO role at Facebook in mid-2015.

The sprawling response to recent backlash comes right as Facebook starts making the changes it should have implemented before the election. Today, Axios reports that Facebook just emailed advertisers to inform them that ads targeted by “politics, religion, ethnicity or social issues” will have to be manually approved before they’re sold and distributed.

And yesterday, Facebook updated an October 2nd blog post about disclosing Russian-bought election interference ads to Congress to note that “Of the more than 3,000 ads that we have shared with Congress, 5% appeared on Instagram. About $6,700 was spent on these ads,” implicating Facebook’s photo-sharing acquisition in the scandal for the first time.

Stamos’ tweetstorm was set off by Lawfare associate editor and Washington Post contributor Quinta Jurecic, who commented that Facebook’s shift towards human editors implies that saying “the algorithm is bad now, we’re going to have people do this” actually “just entrenches The Algorithm as a mythic entity beyond understanding rather than something that was designed poorly and irresponsibly and which could have been designed better.”

Here’s my tweet-by-tweet interpretation of Stamos’ perspective:

I appreciate Quinta’s work (especially on Rational Security) but this thread demonstrates a real gap between academics/journalists and SV. t.co/CWulZrFaso

— Alex Stamos (@alexstamos) October 7, 2017

He starts by saying journalists and academics don’t get what it’s like to actually implement solutions to hard problems, even though clearly no one has the right answers yet.

I am seeing a ton of coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech cos.

— Alex Stamos (@alexstamos) October 7, 2017

Facebook’s team has supposedly been pigeonholed as either naive about real-life consequences or too technical to see the human impact of its platform, but the outcomes speak for themselves: the team failed to proactively protect against election abuse.

Nobody of substance at the big companies thinks of algorithms as neutral. Nobody is not aware of the risks.

— Alex Stamos (@alexstamos) October 7, 2017

Facebook gets that people code their biases into algorithms, and works to stop that. But censorship that results from overzealous algorithms hasn’t been the real problem. Algorithmic negligence of worst-case scenarios for malicious usage of Facebook products is.

In fact, an understanding of the risks of machine learning (ML) drives small-c conservatism in solving some issues.

— Alex Stamos (@alexstamos) October 7, 2017

An understanding of the risks of algorithms is what’s kept Facebook from implementing them so aggressively that they could have led to censorship. That’s responsible, but it doesn’t solve the urgent problem of abuse at hand.

For example, lots of journalists have celebrated academics who have made wild claims of how easy it is to spot fake news and propaganda.

— Alex Stamos (@alexstamos) October 7, 2017

Now Facebook’s CSO is calling journalists’ demands for better algorithms fake news, because these algorithms are hard to build without creating a dragnet that catches innocent content too.

Without considering the downside of training ML systems to classify something as fake based upon ideologically biased training data.

— Alex Stamos (@alexstamos) October 7, 2017

Content that is outright false might be somewhat easy to spot, but the polarizing, exaggerated, opinionated content many see as “fake” is tough to train AI to catch, because only nuance separates it from legitimate news. That’s a valid point.
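To make that training-data point concrete, here is a minimal sketch, not anything Facebook has described, of how a naive text classifier inherits its labelers’ politics. Everything in it is an invented assumption for illustration: the headlines, the labels, “Policy X,” and scikit-learn as the toolkit. If the people labeling the training set treat one viewpoint as “fake,” the model learns the viewpoint, not the falsehood.

```python
# Hypothetical illustration only -- not Facebook's system.
# If labelers mark one side of an issue "fake" regardless of accuracy,
# a classifier learns to flag the viewpoint, not the misinformation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training set: pro-"Policy X" stories were labeled "fake"
# and anti-"Policy X" stories "real", based on the labeler's bias.
headlines = [
    "Policy X will create thousands of jobs, supporters say",
    "Economists back Policy X in new report",
    "Policy X praised by local business owners",
    "Policy X slammed as costly boondoggle",
    "Critics warn Policy X threatens small towns",
    "Opposition calls Policy X a disaster in the making",
]
labels = ["fake", "fake", "fake", "real", "real", "real"]  # the labeler's bias, not the truth

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

# A perfectly legitimate, accurate story about Policy X is likely to be
# flagged "fake" simply because it shares vocabulary with the "fake" class.
print(model.predict(["New report finds Policy X created jobs"]))
```

Scaled up to billions of posts, that same dynamic is the “Ministry of Truth” risk Stamos describes in the tweets that follow.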

A bunch of the public research really comes down to the feedback loop of “we believe this viewpoint is being pushed by bots” -> ML

— Alex Stamos (@alexstamos) October 7, 2017

Stamos says it’s not as simple as fighting bots with algorithms because…

So if you don’t worry about becoming the Ministry of Truth with ML systems trained on your personal biases, then it’s easy!

— Alex Stamos (@alexstamos) October 7, 2017

…Facebook would end up becoming the truth police. That might lead to criticism from conservatives if their content is targeted for removal, which is why Facebook outsourced fact-checking to third-party organizations and reportedly delayed News Feed changes to address clickbait before the election.
