Despite purported changes to its policy enforcement and the stated intent of its management, Twitter continues to host antisemitic threats and tropes.
In a tweet from January 29, 2023, Ella Irwin, the company’s vice president of trust and safety, outlined Twitter’s enforcement priorities. Irwin emphasized that threats or incitement to violence based on individual or group identities would not be tolerated. Yet the ADL Center for Technology and Society (CTS) has found that tweets fitting this description remain on the platform long after we reported them. Antisemitic tropes also persist. On average, only 28% of antisemitic tweets that ADL reported as a trusted flagger (an organizational partner that can report content and get it prioritized) were removed or sanctioned in some way. At a time when antisemitic attitudes in America have surged, we are concerned about the persistence of this kind of prejudice on Twitter.
In November 2022, shortly after acquiring the company, incoming CEO Elon Musk stated that hateful tweets would be “deboosted and demonetized.” This was a welcome promise. Unfortunately, Twitter does not currently provide data on how it recommends or monetizes tweets, so independent researchers cannot evaluate this claim. The company also recently announced that it will end free access to its API, the interface that allows third-party developers to interact with the platform. This policy change will curtail the ability of nonprofits and researchers to study Twitter.
Since December 2022, CTS has reported 292 antisemitic tweets that violate Twitter’s hateful conduct policy, drawn from a random 1% sample of all tweets (see Methodology). Any Twitter user can still retweet, like, or comment on these tweets without restriction. To the best of our knowledge, these tweets violated Twitter’s policies at the time we reported them, including its prohibition against violent threats and incitement.
Twitter did remove several threats of violence and content that explicitly celebrated the Holocaust. However, the company did not take action on other content that appeared to call for violence against Jews. These are just a few examples:
Antisemitic tropes also remain prevalent on Twitter, and it is unclear how the platform is moderating antisemitic content under its hateful conduct policy. During our previous investigation in July 2022, Twitter told us that some of the content we reported “was purposely not actioned because it did not meet a particular threshold of hateful content on the platform.” According to Twitter, a tweet with a single hateful comment was not subject to content moderation, but a tweet with repeated hateful comments was. While this line may seem arbitrary to those targeted, it at least indicated the platform’s threshold for removing hateful content.
In the examples below, Twitter no longer appears to enforce even this arbitrary line. Each example clearly features “repeated hateful comments,” yet Twitter again took no apparent action.
Twitter claims its current approach to content moderation prioritizes de-amplification over removal. Its management, echoing social media researchers, has described this model as “freedom of speech, but not freedom of reach.” While we have no way to verify whether the company is actually de-amplifying antisemitic content, we have found that Twitter is failing to take down tweets that clearly violate its hateful conduct policy.
CTS reported batches of tweets through our trusted flagger account weekly from Dec. 16, 2022 through Jan. 19, 2023. We gathered tweets using Twitter’s open API, which provides access to a random 1% sample of all tweets, and ran them through our Online Hate Index antisemitism classifier, which assigns each tweet a score. We then manually reviewed 800 tweets to which the classifier assigned a high antisemitism score, and reported those tweets, or the accounts that posted them, that violated Twitter’s hateful conduct policy (292 in total).
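The scoring-and-review step can be sketched as follows. This is a minimal illustration only: the `score_antisemitism` function and the `0.8` threshold are hypothetical stand-ins, since the actual Online Hate Index classifier is a trained machine-learning model whose internals and cutoff are not described here.

```python
# Sketch of the review pipeline: score each tweet from the 1% sample,
# then queue high-scoring tweets for manual review.
# NOTE: the scorer and threshold below are illustrative placeholders,
# not the actual Online Hate Index classifier.

from dataclasses import dataclass


@dataclass
class Tweet:
    tweet_id: str
    text: str


def score_antisemitism(tweet: Tweet) -> float:
    """Placeholder scorer; the real classifier is a trained ML model."""
    # Illustrative only: assign a high score if the text contains a
    # watchlisted phrase, otherwise a low score.
    watchlist = ("trope_a", "trope_b")
    return 1.0 if any(w in tweet.text for w in watchlist) else 0.0


def select_for_review(tweets: list[Tweet], threshold: float = 0.8) -> list[Tweet]:
    """Return tweets whose classifier score meets the review threshold."""
    return [t for t in tweets if score_antisemitism(t) >= threshold]


sample = [Tweet("1", "benign text"), Tweet("2", "contains trope_a")]
queue = select_for_review(sample)
# Only high-scoring tweets reach the manual-review queue; reviewers then
# check each against Twitter's hateful conduct policy before reporting.
```

In the actual workflow, the review queue feeds human analysts; only tweets (or accounts) they judge to violate Twitter’s hateful conduct policy are reported via the trusted flagger channel.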