Resources

Report

Very Fine People

This report is part of the ADL Belfer Fellowship series. What can machine learning tell us about white supremacist speech online? ADL Belfer Fellow Libby Hemphill shows how platforms miss discussions of conspiracy theories about white genocide and Jewish power, as well as malicious grievances against Jews and people of color, and how platforms can do better.
Report

Exposure to Alternative & Extremist Content on YouTube

This report is part of the ADL Belfer Fellowship series. Using a comprehensive collection of behavioral data, the authors study YouTube's role in recommending and promoting harmful videos on its platform.
Report

How Platforms Rate on Hate: Measuring Antisemitism and Adequacy of Enforcement Across Reddit and Twitter

The ADL Center for Technology and Society is building the Online Hate Index (OHI), a machine learning system that detects hate targeting marginalized groups on online platforms. This report presents the inaugural findings of the OHI’s antisemitism classifier.

Social Pattern Library

Our resource for social platform interaction designers and developers provides design prototypes for features that mitigate and prevent hate, improve abuse reporting, and strengthen support for targets.

Belfer Fellows

In 2018, the ADL Center for Technology and Society, with the support of the Robert Belfer Family, launched the Belfer Fellowship program. The fellowship advances ADL’s work by promoting awareness of online hate and digital citizenship and by supporting fellows’ projects that serve the wider social good.

What We Do

The ADL Center for Technology & Society brings together policy experts, developers, and researchers to combat hate in the digital world. We recommend policy and product interventions to elected officials and technology companies to mitigate online hate and harassment. We drive advocacy efforts, hold online platforms accountable, and educate their staff on current threats and challenges. We produce applied research through analysts and a network of fellows, generating new insights into the nature and impact of hate and harassment on marginalized communities. We create technical tools and products that provide much-needed data measurement and analysis to track identity-based online hate and harassment. We empower targets of harassment by responding to online incidents of hate, and we work with platforms to create safer online spaces for all.

Tech Advisory Board