I Testified at a Congressional Hearing on White Nationalism. Here's Some of What I Wish We Had Discussed.

April 25, 2019

By Eileen Hershenov, Senior Vice President for Policy, ADL

Earlier this month, I was honored to appear before the House Judiciary Committee to discuss the increase in hate crimes and the rise of white nationalism. ADL (the Anti-Defamation League) has been tracking the resurgence of white supremacy, offline and online, for decades. Over the last year, we have been alarmed to see a disturbing and precipitous rise in hate in America.

The four-hour hearing covered many important points made by the House panel and by other witnesses, including Dr. Mohammad Abu-Salha, a Muslim doctor who spoke poignantly about his two daughters and son-in-law, who were murdered in their home state of North Carolina in 2015; representatives from the civil rights community, including the eloquent Eva Paterson of the Equal Justice Society and Kristen Clarke of the Lawyers’ Committee for Civil Rights Under Law; and representatives from Google and Facebook. In part because the hearing was eventually encumbered by partisan tactics, an important component of the conversation went largely missing: a discussion of the role technology companies must play in helping combat the rising incidence and impact of online hate. The untold impacts of technological advances, and often of the business models of major platforms, are affecting society in ways we are only beginning to understand. This is our new reality, but it should not be the new normal.

One thing is abundantly clear: technology is accelerating the speed and scale of hate online. The scale at which social media platforms operate, and the unique ability they provide for people to instantaneously connect, discuss, and spread ideas, has created a new, different kind of public square. Humanity has never been this connected. That connectedness brings about collaboration, innovation, and community, but it has also come at the cost of rampant hate. As I said in my testimony, extremists and those vulnerable to radicalization can now join what are effectively round-the-clock extremist rallies and toxic communities online without even having to leave their rooms. This enables potentially violent extremists to find the support, validation, and reach they seek.

So, how do we bring society’s concerns to bear on the unbridled momentum of digital technology? First, by asking important questions and demanding real answers. The bottom line is that there is a lot we simply do not know and must understand in order to fashion effective responses. For legislators trying to address online hate and harassment, here are a few areas worthy of exploration:

Scale: The proliferation of social media platforms over the last 15 years has brought about a dramatic transformation in human interaction: billions of people across the globe suddenly gained the ability to communicate instantly in public and private digital spaces, and billions more are poised to join those spaces. This scale has been weaponized by those who want to spread hateful ideologies like white nationalism, attack opponents in droves, and spread misinformation to create deep social divisions. This is a feature unique to social media. For most of human history, we connected, gathered, and organized in relatively small, manageable groups of physical encounters, and we had centuries to build norms of conduct governing behavior and community interaction. How do we contain exposure to hate when the fluidity of communication, often but not always cloaked in anonymity, operates at a scale and speed previously unimagined?

Problematic Speech: Most of the conversation around online hate focuses on hateful comments such as slurs or vicious attacks on a targeted group. But not all speech that aims to spread hate and radicalize people is itself clearly hate speech. For example, in order to attack opponents and evade content moderation, hate groups and extremists often use dog whistles, a messaging strategy that employs coded language with additional or specific resonance for an intended subgroup audience. Similarly, white nationalist groups are particularly adept at developing new terms or memes in order to avoid automated content moderation. Can companies solve the problem of moderating content that does not fall neatly into the category of hate speech but is no less intentional or potent, and that is constantly morphing? And how do we address problematic speech that is not just text, but includes images, videos, podcasts, and other forms of content?

Offline Burden: Exposure to hateful content online can have real, traumatic, life-changing consequences offline. A recent ADL survey found that 53% of American adults have experienced online hate and harassment, and the impact is significant. Of those who have been harassed online, 38% have stopped, reduced, or changed their online activity as a result, and 15% have taken steps to increase their physical safety, such as altering their commute. It is not just users: content moderators who spend all day reviewing hateful and disturbing content have reportedly developed PTSD-like symptoms after leaving their jobs. What emotional support and services can companies offer their users, as well as their content moderation staff, to help mitigate the immediate consequences and the long-term impact of hateful content?

Private Groups: Some online hate is out in the open, and because it is posted publicly, offended and concerned individuals can flag the content to platforms. At the same time, a lot of hate is not accessible to the public, and instead exists in echo chambers such as private Facebook groups or Discord servers. Members of these groups are unlikely to report hate since they have chosen to be part of them, and these groups are effective tools for radicalizing, recruiting, and organizing around hate. What will companies do to address hate in private groups and on private servers? Does de-platforming extremists from one platform merely lead them to another, less publicly visible platform? How can we track extremists across platforms?

Chilling Speech: ADL’s work with victims of online hate and harassment has revealed that such experiences have a chilling effect on victims’ civic engagement. Thus, speech rights exist on all sides of this equation: they are not just the concern of the loudest or most empowered speakers, and our legitimate concerns over free speech need to take into consideration who is silenced and sidelined. How do we balance protecting freedom of speech with preventing vulnerable groups from being further disenfranchised and silenced online?

While it is important that victims and bystanders report hate and harassment on social media platforms, we must resist enabling a reality where the overwhelming burden of safety is placed at the doorsteps of vulnerable groups. Each of us must play a role. For Congress, that means using its oversight and legislative tools to ask the right questions; to demand from tech platforms significantly more, and more significant, ongoing, verifiable information and measurements of the proliferation and types of hateful content and of the impact of different interventions; and to do more in areas where it can effect change by passing stronger legislation to directly address these issues, particularly in terms of holding perpetrators accountable for the effects of online hate. Legislation requiring tech companies to provide certain independently verifiable information about their content moderation efforts is also an area Congress should explore.

This may seem like asking more questions than providing answers, but there are no magic bullets here. If left unchecked, the challenges we’re up against have the power to undermine our sense of community and democracy. Few, if any, can afford to stay on the sidelines.