Amid a spate of violent antisemitic attacks in recent weeks, too many online platforms are failing to enforce existing policies meant to keep users safe, or are rolling back rules to the point where they no longer adequately address hate, extremism, and antisemitism. Speech that just a few years ago was judged too violent, hateful, or extreme for major platforms is now, far too often, part of the public discourse.
This was evident following the murder of a young couple at a Jewish event in Washington, D.C., last month and the June 1, 2025, firebombing attack on Jewish community members at a Boulder, Colorado, rally for Israeli hostages, in which at least 15 people were injured.
Two major platforms, Meta and X, have slid the furthest.
Meta's intentional steps to discontinue the automated removal of hateful content amount to a trillion-dollar company standing down in the fight against antisemitism online. Following the shooting attack at the Capital Jewish Museum in D.C. last month, celebrations of the murders were rampant on online platforms, where extreme anti-Zionist groups and users flocked to issue justifications, endorsements, and further calls for violence. On Instagram, extreme anti-Zionist groups shared the news with messages such as, “May all Zionists burn.” One group leader shared a screenshot of one of the victims of the attack overlaid with the text “Death to Nazis.” This follows Meta’s testimony before the UK Parliament in February that the phrase “Jews are flat out greedier than Christians” was not considered violative under its new policies.
In line with Meta, X has become a haven for antisemitic rhetoric and conspiracy theories. The manifesto allegedly written by the murder suspect in the Washington, D.C., attack was posted to X shortly after the fatal shootings and remained publicly available as of June 9, more than two weeks later. Anti-Zionists and other extremists also justified or downplayed both the D.C. and Boulder attacks, dismissing them as a “hoax” or a false flag. For example, the platform has allowed far-right conspiracy theorist and prolific antisemite Stew Peters to keep up a post that reads, “The only thing inciting violence is jewish [sic] behavior.”
YouTube also changed its moderation policies in December 2024, raising the threshold for removing violative content when videos are considered “in the public interest.” Videos qualify as being in the public interest if they discuss or debate elections, ideologies, movements, race, gender, sexuality, abortion, immigration, censorship, or other issues; such videos can feature violative content for up to half their length and still remain on the site. This move essentially doubles the amount of hateful or extremist messaging a creator can share while skirting the rules, something many creators are already quite adept at.
Following the Hamas-led attack on Israel on October 7, 2023, ADL released recommendations to platforms on what they could do better to fight misinformation and antisemitism in times of heightened conflict.
In response to the current threat environment for American Jews, highlighted by a June 5 FBI and DHS public service announcement indicating an elevated threat level, ADL urges platforms to recognize the danger by adopting the following recommendations:
- Invest in trust and safety teams and resources to confront incitement, glorification of violence and antisemitism. Tech companies should recognize that we are in the midst of a period of profound importance to the safety of American Jews and other communities and should react accordingly by prioritizing tools—both automated and human—to confront and mitigate antisemitic threats. Real-time tools should also be mobilized rapidly when violent incidents occur.
- Promote reliable news sources and prevent dangerous conspiracy theories. In times of crisis, tech companies should be directing their audiences towards reliable news sources, increasing vigilance around abuse of community moderation efforts and maintaining a strict policy against platforming dangerous conspiracy theorists that could provide inspiration for more violence.
- Consider this a “break glass” moment. Tech companies should recognize this time for what it is – a period of increased violence against Jews and Jewish institutions. ADL has observed trending content online with a particular focus on dehumanizing and justifying violence against Israelis and Zionists in the US. That content will continue to spread if current enforcement practices are deemed “good enough.”
- Coordinate best practices. Tech companies should continue to leverage convening bodies, such as the Global Internet Forum to Counter Terrorism (GIFCT) and the Christchurch Call Advisory Network, as venues for sharing information and coordinating responses to terrorism threats.
- Build friction into livestreaming. While the recent attacks in the United States have not been livestreamed, platforms should build friction and resilience into their livestreaming products to ensure that the next attack does not reach millions – or billions – of viewers in real time, potentially radicalizing others and inspiring further violence.
- Do not monetize antisemitism or antisemites. Platforms should thoroughly evaluate their monetization policies, including by using third-party auditors, to ensure that they are not providing incentives to users espousing antisemitic conspiracy theories or beliefs.
- LLM companies should provide extra guardrails. ADL research shows LLMs like ChatGPT and Llama often display troubling anti-Jewish and anti-Israel bias. Companies behind these powerful tools should be putting extra guardrails in place around antisemitism in this frightening moment for Jews in America.