Facebook labelled 167 million user posts for Covid misinformation

Ruben Fields
November 22, 2020

On a call with reporters, Facebook's head of safety and integrity, Guy Rosen, said the audit would be completed "over the course of 2021". "Prevalence is like an air quality test to measure pollution", he explained. That data matters, he said, because "there are many forms of hate speech that are not being removed, even after they're flagged".
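Rosen's air-quality analogy describes prevalence as a measure of exposure rather than takedowns: what share of content views put violating material in front of users. A minimal sketch of that idea, with an invented `prevalence` helper and made-up sample data (Facebook's actual methodology is not public in this article):

```python
# Hypothetical illustration of a "prevalence"-style metric: estimate what
# fraction of sampled content *views* exposed users to violating material,
# instead of counting how many posts were removed.

def prevalence(sampled_views, is_violating):
    """Estimate prevalence from a random sample of content views.

    sampled_views: list of content IDs, one entry per sampled view
    is_violating:  dict mapping content ID -> True if it violates policy
    """
    if not sampled_views:
        return 0.0
    bad = sum(1 for cid in sampled_views if is_violating.get(cid, False))
    return bad / len(sampled_views)

# Six sampled views; only the single view of post "b" was violating.
views = ["a", "b", "a", "c", "d", "a"]
labels = {"a": False, "b": True, "c": False, "d": False}
print(round(prevalence(views, labels), 3))  # → 0.167
```

The point of the analogy: like an air-quality sensor, the metric samples what people actually breathe (see), so widely seen violating posts count far more than obscure ones.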

Mike Schroepfer, the firm's chief technology officer, said AI now detects 94.7 per cent of the hate speech that is removed, up from just 24 per cent in 2017. That figure, however, only covers content Facebook ultimately removes; it says nothing about hate speech that goes undetected altogether.

"After months of allowing content moderators to work from home, faced with intense pressure to keep Facebook free of hate and disinformation, you have forced us back to the office", the letter said.

Facebook has announced that it labelled 180 million pieces of misinformation related to the US election on its platform.

The group, which included 62 named or partly-named signatories and 171 who chose to remain anonymous, made a series of demands, including calls for Facebook to "maximise home-working", for moderators to receive "hazard pay" of 1.5 times their wage, and for employees who are at high risk, or who live with someone at high risk, to work from home indefinitely.

Enforcing standards and guidelines starts with detecting violations. The figures come from the latest edition of the Community Standards Enforcement Report, which the company began issuing quarterly as of August.

User reports and human moderators can only do so much on a platform with more than 2.7 billion monthly active users, so AI has to do most of the work to provide enforcement at scale.
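One common way such AI-first enforcement works, at scale, is to let a classifier score gate most decisions and queue only borderline items for human review. This is a hypothetical sketch, not Facebook's actual pipeline; the function name and thresholds are invented for illustration:

```python
# Hypothetical AI-first moderation routing: high-confidence scores are
# acted on automatically, and only borderline posts reach human reviewers.
# Both thresholds are assumptions, not Facebook's real values.

REMOVE_THRESHOLD = 0.95   # assumed: auto-remove above this classifier score
REVIEW_THRESHOLD = 0.60   # assumed: send to human reviewers above this score

def route(post_id, hate_score):
    if hate_score >= REMOVE_THRESHOLD:
        return ("remove", post_id)        # AI acts alone, no human involved
    if hate_score >= REVIEW_THRESHOLD:
        return ("human_review", post_id)  # borderline: a moderator decides
    return ("keep", post_id)              # low score: leave the post up

decisions = [route(pid, score) for pid, score in [(1, 0.99), (2, 0.70), (3, 0.10)]]
print(decisions)  # → [('remove', 1), ('human_review', 2), ('keep', 3)]
```

A design like this explains both sides of the article: AI handles the bulk of removals, while the borderline queue is exactly the work that still falls to human moderators.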

But that same luxury was apparently not extended to the thousands-strong fleet of contract workers Facebook employs to moderate harmful content on the platform, and on Wednesday, 200 of those workers sent an open letter to top executives at the company objecting to the way they've been treated. Facebook has been accused of forcing them back into the office despite the alarming spread of the coronavirus pandemic.


As the world was hit by the coronavirus pandemic at the beginning of this year, a majority of companies shifted to working from home to safeguard their employees from the virus.

The AI wasn't up to the job. "Important speech got swept into the maw of the Facebook filter - and risky content, like self-harm, stayed up", they wrote.

For the moderators, the lesson is clear.

In the letter, the moderators argue that Facebook's algorithms are nowhere near where they need to be to moderate content successfully. "They may never get there", they wrote.

This raises a stark question: if the AI may never get there, can the platform be moderated without humans at all?

Though the company didn't mention this summer's ad boycott, which was organized by civil rights leaders in response to the company's hate speech policies, the new "prevalence" metric seemed created to push back on the narrative that hate speech is rampant on the platform.

The crackdown on content that violates its policies has been fuelled by improvements in its artificial intelligence systems, Facebook said. On Facebook-owned Instagram, meanwhile, content actioned for hate speech more than doubled between Q2 and Q3, reflecting the company's expansion into Arabic- and Indonesian-language moderation. In the letter, the moderators said that the giant social media company "wanted them to risk their lives", and that their family members were also at risk.
