December 23, 2020
It’s a tradition at Facebook for departing employees to write farewell posts on internal message boards. One such recent message lifted the lid on how deep Facebook’s hate speech problem runs -- and that’s a problem with consequences for all of us.
Buzzfeed reported that on December 9, a departing data scientist wrote that some 5 billion posts go up on Facebook each day. Referring to the prevalence of hate speech violations reported in the company’s November 2020 transparency report, they calculated that 1 in every 1,000 pieces of content viewed violates the company’s hate-speech rules, which works out to roughly 5 million of those daily posts. According to the data scientist, Facebook removes less than 5% of the posts that violate those rules, leaving millions of posts on the platform unactioned. Guy Rosen, the company’s Vice President of Integrity, disputed this percentage, saying it incorrectly compared pieces of content with numbers of actual views. (Note: Mr. Rosen is on the Technical Advisory Board for ADL’s Center for Technology and Society.)
Despite Mr. Rosen’s correction, it’s clear that Facebook’s understanding of hate speech differs from that of civil society, and that Facebook leaves a great deal of hate speech on its platform unremoved. Prevalence is an important metric, but, as Mr. Rosen highlights, it does not readily lend itself to calculating how much hate speech the platform fails to remove. The prevalence of hate speech on Facebook is the likelihood that a user has viewed a piece of content that includes hate speech. It does not account for the number of unique pieces of content that contain hate speech; instead, it averages the total views of hateful content across the platform’s user base.
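To make the distinction concrete, here is a minimal sketch, using invented numbers rather than Facebook’s actual data or methodology, of how a views-based prevalence figure is computed and what it leaves out:

```python
# A minimal illustration, using invented numbers (not Facebook's actual data
# or methodology), of how a views-based prevalence figure is computed.

total_views = 10_000_000            # hypothetical: all content views in a period
views_of_hateful_content = 11_000   # hypothetical: views that landed on hateful posts
unique_hateful_posts = 40           # hypothetical: distinct hateful posts behind those views

# Prevalence is a share of views, not a count of posts.
prevalence = views_of_hateful_content / total_views
print(f"Prevalence: {prevalence:.4%}")  # 0.1100%, roughly 11 in 10,000 views

# The same prevalence could come from 40 posts or 40,000 posts, depending on
# how widely each one is viewed, and it says nothing about removals.
print(f"Distinct hateful posts in this example: {unique_hateful_posts}")
```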
Facebook also reports the total number of unique pieces of content containing hate speech that it acted on, covered by the November 2020 transparency report metric “How much content did we take action on?” This, too, is an important metric for understanding how much hate speech Facebook believes is on its platform. Outside of it, however, Facebook shares no data about the hate speech it does not take action on, nor does it seem possible to calculate that figure from the public data it publishes.
Regardless, the scale of Facebook and the reach of hate speech on the platform are breathtaking.
What Facebook’s Reports Are Missing
The former employee’s post, as reported by Buzzfeed, offers a rare inside account of Facebook’s efforts and inaction concerning hate speech. The insights are alarming. Although the company started publishing transparency reports in 2017, its reports are not very transparent.
According to the November 2020 transparency report, the prevalence of hate speech is approximately 11 in 10,000 viewed posts. Separately, the company notes that it took action on 22.1 million pieces of hate speech content in its third quarter. But Facebook’s report offers an incomplete picture. Vast numbers of people can view a single piece of content. If one piece of hate content goes viral, thousands or millions of people could have viewed it, an important indicator of the reach and potential impact of hate speech on Facebook.
In short, providing a single prevalence metric and a total number of posts removed does not fully capture the impact that an especially hateful post can have on Facebook when it goes viral. Nor does it speak to the multitude of hate speech that Facebook refuses to take action against.
Because prevalence is averaged over all pieces of hateful content and all users on the platform, the metric obscures the impact that a single piece of hateful content might have when viewed by thousands of people.
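A toy calculation, again with invented numbers, shows how platform-wide averaging can hide the fact that most hateful views come from a single viral post:

```python
# A toy illustration, with invented numbers, of how a platform-wide average
# can mask one viral piece of hate content.

# Hypothetical daily view counts: one viral hateful post plus 200 obscure ones.
hateful_post_views = [1_500_000] + [50] * 200
total_platform_views = 1_000_000_000  # hypothetical total views that day

prevalence = sum(hateful_post_views) / total_platform_views
viral_share = hateful_post_views[0] / sum(hateful_post_views)

print(f"Prevalence across the platform: {prevalence:.3%}")   # about 0.151%, looks small
print(f"Share of hateful views from the one viral post: {viral_share:.1%}")  # about 99.3%
```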
Furthermore, removing hate speech after the fact does not solve Facebook’s fundamental problem. Its algorithms play a central role: they feed the platform’s 2.3 billion users tailored content based on individual identity and behavioral data (e.g., user browsing activity) and can increase exposure to hate speech for users who have already seen hateful content or are searching for it. This is a particular concern for users who may be susceptible to hateful disinformation and conspiracy theories. Radicalization is often the result.
Algorithmic amplification is at the heart of the criticism of Facebook’s controversial News Feed, which has been accused of increasing polarization in the United States by filtering the news that different users see and amplifying news sources that spread misinformation and incendiary content. The damage it can inflict is profound.
How Algorithms Impact Facebook’s Hate Speech Problem
The former employee’s post and the company’s latest transparency report also provide a troubling glimpse into the issues surrounding Facebook’s plan to automate solutions to its hate speech problem. We take a deeper look into how Facebook’s algorithms feed us the content we see on the platform.
Despite all of the trenchant critiques from Congress, outside experts, those targeted by hate on Facebook, and his own employees, Mr. Zuckerberg continues to vastly understate the nature and severity of Facebook’s hate speech problem, and the role the company’s algorithms play in amplifying it. For example, in the case of Holocaust deniers, Mr. Zuckerberg stated in an interview: “I don’t think that they’re intentionally getting it wrong.”
This comment became the pretext that allowed Facebook to turn a blind eye to Holocaust denialism on its platform for years. His view ignored the long history of deliberate campaigns designed to spread antisemitism, and how Holocaust deniers accessed, shared and weaponized such content. Under pressure from Stop Hate For Profit and other campaigns, Mr. Zuckerberg ultimately reversed course.
Whether intentional or not, that denial of the enormous scope and impact of hate speech on its platform has to change, and the company must enforce its policies far more vigorously. As the Holocaust denial episode shows, Facebook, which in these high-profile and highly contentious incidents is effectively synonymous with one man, Mr. Zuckerberg, can choose whether to acknowledge that a certain kind of hate speech is a problem in the first place.
The departing data scientist also revealed the existence of a “Hate Bait” dashboard used internally by employees to rank which content, from which groups and pages, receives the most hateful interactions each day. The data scientist wrote, “[The top-ranked purveyors of hate] all create dozens or hundreds of posts like this [a] day, each eliciting endless volumes of hateful, vile comments—and we reward them fantastically for it.” (The ex-employee added the emphasis.)
The reward is amplification. Controversial, hateful and polarizing information and misinformation are often considered more engaging than other types of content and receive wide circulation. A study from MIT showed false news spreads more rapidly than the truth on social media. The study’s authors wrote, “Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information.”
Another study, published earlier this year in Nature, concluded, “Based on engagement, Facebook’s Feed drives clicks and views, but also privileges incendiary content, setting up a stimulus–response loop where outrage expression becomes easier and even normalized.”
As exposed by the Wall Street Journal, Facebook’s own research found that the algorithms it employs to determine which content users see on their News Feed maximize engagement by serving up more and more divisive content, content that Facebook’s employees have acknowledged sows political polarization. Facebook benefits from the existence and spread of incendiary content because it motivates users to spend more time on the platform, thereby increasing advertising revenue. In short, Facebook’s fundamental business model exploits people’s penchant (some would say addiction) for clicking on incendiary content and sharing misinformation and divisive material.
When critics say that the existence and viral amplification of hate content and disinformation is a feature, not a bug, of social media platforms, this is what they mean.
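To illustrate the dynamic critics describe, here is a deliberately simplified, hypothetical sketch of engagement-based ranking; the scoring weights and posts are invented, and this is not Facebook’s actual News Feed code:

```python
# A deliberately simplified, hypothetical sketch of engagement-based feed
# ranking. The weights and posts are invented; this is not Facebook's code.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    comments: int
    shares: int
    reactions: int

def engagement_score(post: Post) -> float:
    # Rank purely by engagement signals: posts that provoke the most comments,
    # shares and reactions rise to the top, regardless of whether that
    # engagement is outrage, misinformation or hate.
    return 3.0 * post.comments + 2.0 * post.shares + 1.0 * post.reactions

feed = [
    Post("Local bake sale this weekend", comments=4, shares=2, reactions=30),
    Post("Incendiary conspiracy post", comments=900, shares=400, reactions=2_000),
    Post("Neighborhood road closure notice", comments=10, shares=5, reactions=50),
]

# Sorting by engagement alone surfaces the incendiary post first.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>8.0f}  {post.text}")
```

Nothing in a ranking like this penalizes hateful content; the incentive runs the other way.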
“AI Will Not Save Us”
More and more, we are hearing from Facebook insiders about their misgivings on these issues. The departing data scientist was not alone in believing that Facebook’s reckless optimism around AI blinds the company to its significant shortcomings. In a different article, Buzzfeed quoted another Facebook employee who quit their job.
“AI will not save us,” wrote Nick Inzucchi, who had worked as a civic integrity product designer. “The implicit vision guiding most of our integrity work today is one where all human discourse is overseen by perfect, fair, omniscient robots owned by [CEO] Mark Zuckerberg. This is clearly a dystopia, but one so deeply ingrained we hardly notice it any more.”
Another shortcoming of machine learning stems from the fact that models need a high volume of examples of hate speech to learn from; thus, the algorithms Facebook uses to remove hate speech work best in widely spoken languages such as English, Spanish and Mandarin Chinese. Detecting hate speech is more challenging in languages with far fewer speakers, which yield less data on which to train the company’s AI-based detection models.
For example, in 2019, the company told TIME that its algorithms recognize only 4 of India’s 22 official languages. Tellingly, Facebook refuses to reveal the success rate of its individual language algorithms in flagging hate speech. As a result, we have no idea in which languages the 22.1 million pieces of hate speech content Facebook actioned were written, or how many posts in less widely spoken languages go undetected by moderators and AI not trained in them.
The company’s community guidelines reflect similar shortcomings. As of April 2019, they had been translated into only about 41 of the more than 111 languages allowed on the platform. If users cannot understand the guidelines because they are not available in a language they speak, how can they follow the rules?
Without changing its algorithms, Facebook will never tackle the root of its hate speech problem. Currently, Facebook is waging a losing battle against posts telling people not to get the Covid-19 vaccine. After last month’s national elections, the company rolled back a “nicer” News Feed that some of its employees wanted to retain, one that had increased traffic to higher-quality news sources and lowered it for sites with inflammatory, false coverage. The tech giant’s hate speech problem has real-life consequences: people fail to protect themselves against a deadly virus during a pandemic, and toxic users are given space to target marginalized groups such as Jews, African-Americans, immigrants, Muslims and LGBTQ+ users with divisive rhetoric and outright lies.
We want the company to do better, but Facebook’s current efforts at transparency around hate speech look less like a clear window and more like a black box. Much remains unknown about the prevalence of hate speech on Facebook. However, as the company’s former employees have revealed, we can be sure that the problem is much, much worse than we have been led to believe.