Facebook closed 583m fake accounts in first three months of 2018

Katie Ramirez
May 16, 2018

Facebook also reported that views of graphic violence accounted for 0.22% of total content views. This does not mean that 0.22% of the content posted on Facebook contained graphic violence; rather, the graphic content posted accounted for 0.22% of total views.

According to the numbers, which cover the six-month period from October 2017 to March 2018, Facebook's automated systems quickly remove millions of pieces of spam, pornography, graphic violence and fake accounts, but hate-speech content, including terrorist propaganda, still requires extensive manual review to identify.

The figures are contained in an updated transparency report published by the company, which for the first time includes data on content that breaches Facebook's community standards. Its systems spotted almost 100 percent of spam and terrorist propaganda, almost 99 percent of fake accounts and around 96 percent of posts with adult nudity and sexual activity.

Responses to rule violations include removing content, adding warnings to content that may be disturbing to some users but does not violate Facebook's standards, and notifying law enforcement in the case of a "specific, imminent and credible threat to human life".

The number of pieces of content depicting graphic violence that Facebook took action on during the first quarter of this year was up 183% on the previous quarter.

The social network estimates that it found and flagged 85% of that content before users saw and reported it, a higher rate than in previous quarters, which it attributed to technological advances.

Meanwhile, Facebook in Q1 2018 flagged 1.9 million terrorism-related posts linked to ISIS, al-Qaeda and affiliated groups, up from 1.1 million in Q4 2017. The company found and flagged 95.8% of such content before users reported it.


The company said most of the increase was the result of improvements in detection technology.

As Facebook continues to grapple with spam, hate speech, and other undesirable content, the company is shedding more light on just how much content it is taking down or flagging each day.

Now, however, artificial intelligence technology does much of the work of finding and removing that content.

The new disclosures from Facebook are part of its larger effort to rebuild trust among users - and advertisers - after widespread concern from lawmakers and regulators about its content-management practices.

The company took down 837 million pieces of spam in Q1 2018, almost all of which was flagged before any users reported it. But a recent report from the Washington Post found that Facebook's facial recognition technology may be limited in how effectively it can catch fake accounts, as the tool doesn't yet scan a photo against all of the images posted by all 2.2 billion of the site's users.

Facebook shares slid as much as 2% Tuesday morning after it announced it had disabled 583 million fake accounts over the last three months. "Our metrics can vary widely for fake accounts acted on", the report notes, "driven by new cyberattacks and the variability of our detection technology's ability to find and flag them".
