Facebook said on Tuesday that it removed 7 million posts in the second quarter of 2020 for containing misinformation about the novel coronavirus (Covid-19), including content promoting fake preventive measures and exaggerated cures.
Facebook released the data as part of its sixth Community Standards Enforcement Report, which it first introduced in 2018 alongside stricter rules, in response to backlash over its lax approach to policing content on its platforms.
Facebook said it would invite external experts to independently review the metrics used in the report, starting in 2021.
Facebook, which operates the world’s largest social network, removed about 22.5 million hate speech posts on its main app in the second quarter, up from 9.6 million in the first quarter. It also deleted 8.7 million posts linked to extremist organizations, up from 6.3 million in the previous period.
Facebook said its reliance on automated technology to review content increased during April, May, and June, as fewer of its reviewers were able to work from offices due to the coronavirus pandemic.
Facebook said in a blog post that this resulted in the company taking less action against suicide, self-harm, child nudity, and sexual exploitation content on its platforms.
The company said it was expanding its hate speech policy to cover content depicting blackface or stereotypes about Jewish people controlling the world.
Some politicians and public figures in the United States have stirred controversy by appearing in blackface, a practice that dates back to the nineteenth century and has long been used to demean African Americans.