Press Release

Hate and Harassment Drive One-in-Four Moderator Actions on Minecraft Servers, Study Finds

ADL and Partners Call for More Data, Stronger Community Guidelines

New York, NY, July 27, 2022 – One in four moderation actions across three private servers of the popular video game Minecraft were in response to online hate and harassment, according to a study published today by the ADL (Anti-Defamation League) Center for Technology and Society, in collaboration with Take This, GamerSafer, and the Middlebury Institute's Center on Terrorism, Extremism, and Counterterrorism.

“As with many online games, we’ve found that large numbers of Minecraft users experience hateful speech and harassment while using the platform,” said Jonathan Greenblatt, ADL CEO. “From this snapshot, it is clear that Minecraft and the gaming industry broadly must do more to ensure their online spaces have robust community guidelines, and that they provide researchers access to more data and information on their servers.”

The study found that of all content that elicited a moderator response (a ban, warning, mute, or kick from the server), 16 percent resulted from harassment and 10 percent from identity-based hate.

Through the course of this study, ADL found that:

  • Many in-game offenders are repeat offenders. Almost one-fifth of offending users had multiple actions taken against them during the data collection period.
  • Hateful messages are more likely in public chats than in private chats. Messages with identity-based hate were 21 percent more common in public chats.
  • Servers with in-depth community guidelines were associated with more positive social spaces. Of the three servers reviewed, the server with the most extensive community guidelines and the highest ratio of moderators to players had the lowest frequency of sexually explicit, hateful, and severely toxic behavior between users, suggesting the positive impact of robust guidelines.
  • Temporary bans proved to be an effective tool for curbing bad behavior. Early evidence shows temporary bans to be more effective than muting at reducing the rate of offending behavior by the moderated player.
  • Hateful rhetoric is common in gaming spaces. The presence of slurs previously associated only with white nationalism and hate groups suggests the normalization of extremist language in gaming spaces.

A survey published by ADL last year revealed that extremist messages continue to be a concern in online games: one in ten young gamers and 8 percent of adult gamers were exposed to white supremacist ideologies in online multiplayer games.

To better address hate and harassment across its platform, ADL recommends that Minecraft take the following actions:

  • Invest in content moderation efforts and robust community guidelines. Active, effective human moderation and community guidelines are critical to reducing sexually explicit, hateful, and severely toxic behavior in gaming spaces: the server with the most staff and the most extensive guidelines had the fewest incidents of these kinds of behavior. Industry leaders need to continue investing in moderator training to better understand and respond to toxic behaviors.
  • Increase researcher access to data. Without providing researchers access to unfiltered data, the games industry cannot identify or address the challenges of hateful, harassing, and toxic behavior.
  • Conduct additional research on content moderation and complementary tools and techniques. Moderator intervention seems to reduce harmful behavior in the short term and at the individual level, but it remains unclear whether this effect holds over time and across the server. Future research should focus on determining the long-term and aggregate effects of moderator intervention.
  • Standardize reporting categories. To better understand the frequency and nature of hate in online spaces, we recommend an industry-wide standardization of moderation reporting, including defined categories and violating offenses with clear descriptions. This would help facilitate future research, particularly with regard to documenting how moderation actions change user behavior over time. ADL's Disruption and Harms in Online Gaming Framework could serve as the foundation for this effort.

Drawing on ADL’s century of experience building a world without hate, the Center for Technology and Society (CTS) serves as a resource to tech platforms and develops proactive solutions to fight hate both online and offline. CTS works at the intersection of technology and civil rights through education, research, and advocacy.

ADL is the leading anti-hate organization in the world. Founded in 1913, its timeless mission is “to stop the defamation of the Jewish people and to secure justice and fair treatment to all.” Today, ADL continues to fight all forms of antisemitism and bias, using innovation and partnerships to drive impact. A global leader in combating antisemitism, countering extremism and battling bigotry wherever and whenever it happens, ADL works to protect democracy and ensure a just and inclusive society for all.

Take This is a mental health advocacy organization with a focus on the game industry and community. We provide resources, training, and support for individuals and companies that help the gaming community improve its mental well-being and resilience. The organization addresses the underlying conditions that can create and perpetuate mental health challenges: stigma, harmful studio culture, harassment and toxicity, lack of diversity and accessibility, and problematic game and community design.

GamerSafer helps multiplayer games and Esports organizations scale safe, positive, and fair play experiences to millions of players. Based in San Jose and serving global clients, our goal is to increase lifetime value, support regulatory compliance, and reduce platform misuse.

The Middlebury Institute's Center on Terrorism, Extremism, and Counterterrorism (CTEC) conducts in-depth research on terrorism and other forms of extremism. Our research informs private, government, and multilateral institutional understanding of and responses to terrorism threats. CTEC is a mixed-methods research center, meaning that our experts and students use analytic tradecraft, data science, and linguistics. We mentor our students and the wider MIIS community in skills that are in demand from government agencies, international organizations, technology companies, and financial institutions.