September 06, 2016
The following are prepared remarks by Christopher Wolf, Chair of the Anti-Defamation League's Task Force on Cyberhate, a nationally renowned cyberhate expert and longtime ADL leader.
Panel on Anti-Semitism and Hate Speech on the Internet and in Social Media: Solutions for the Digital Age
United Nations Headquarters
September 7, 2016
My name is Christopher Wolf and I thank you for the privilege of participating today in this very important Forum on Global Anti-Semitism. I am especially pleased to be able to share my thoughts on this panel on Anti-Semitism and Hate Speech on the Internet and in Social Media: Solutions for the Digital Age.
In my capacity as an Internet lawyer and senior lay leader with the Anti-Defamation League, I have been addressing the issue of anti-Semitism and hate speech on the Internet for more than twenty years. I chair the ADL Task Force on Cyberhate and have served in that role since 1995. In 2013, I co-authored (with former ADL National Director Abraham Foxman) a book entitled Viral Hate: Containing Its Spread on the Internet, a book that surveyed the problem of online hate and proposed solutions consistent with principles of free expression. I have chaired the International Network Against Cyberhate (INACH) and co-chaired (with Israel’s Speaker of the Knesset Yuli Edelstein) the Interparliamentary Coalition for Combatting Anti-Semitism Task Force on Internet Hate, which issued a comprehensive report in 2013 proposing approaches to be taken by government, industry, civil society and individuals. Through my work combatting online hate and terrorism, I have worked closely with Facebook, Google, Twitter, Microsoft and other companies and I have had the opportunity to work with leading academics and engineers.
Today, I have been asked to focus my remarks on the effectiveness of online codes of conduct enforced by social media companies to eliminate hate and terrorist content and to examine whether there are practical alternatives to the enforcement regimes, such as legal requirements for filtering out unwanted content.
I start by observing that the problem of hate and terrorist content on the Internet is growing. It is a particular problem on social media platforms. As Internet usage expands globally, as mobile devices accessing the Internet proliferate and as new variants of social media emerge through Apps, the scope and scale of online anti-Semitism and other forms of hate are expanding greatly.
My experience working with the Internet industry confirms that major social media companies take seriously the problem of hate and terrorist content online. Eliminating such content is a business imperative for these companies. None of the major companies benefit from hosting it, and its appearance on their platforms harms their business. Thus, this is not a question of having to convince the companies of the societal need to eliminate online hate and terrorism.
For many years, the major social media companies have cooperated with ADL to analyze and address the problem of online hate and terrorist content. ADL has convened working groups in Silicon Valley for many years, and ADL’s channels of communication with online companies are robust. Major social media companies have endorsed ADL’s 2014 “Best Practices” for responding to cyberhate and they have developed standards governing the use of their platforms that forbid online hate and terrorist content.
Social media companies principally address hate and terrorist content through a reporting regime, under which users report content prohibited by the companies’ standards, which triggers review by company personnel. Anti-Semitic, terroristic and other forms of hate speech are banned by standards set by the online platforms, although definitional differences exist in some cases. Given the massive volume of content posted to the Internet every minute, and the many different contexts in which such information appears, the only practical way for companies to address content violative of their standards is to rely on individuals bringing such content to their attention. Once the offending content is reported, the companies regularly remove the content and suspend users posting such content. They have teams of personnel worldwide trained to analyze and address online hate and terrorist speech.
It is virtually impossible for online companies to employ filters or automated blocking tools because the context within which content appears greatly affects whether such content runs afoul of company standards. An anti-Semite might publish hateful imagery and statements prompting removal under a company’s code of conduct while at the same time a history professor might publish the very same imagery and content in the context of teaching about the prevalence and threats of anti-Semitism. The latter content, because it appears in the educational context, would not be subject to removal. Thus, laws requiring filters or automated blocking are impractical and in the United States likely would be found unconstitutional as overbroad.
Moreover, my research has found that the law is a limited tool to address hate and terrorist content online and has not proven to be effective outside the United States. In many cases, prosecutions have prompted glorification and martyrdom for online haters, and their content frequently reappears through other channels, published by like-minded haters. There has been a failure of cross-border law enforcement and civil actions to produce any meaningful change in the amount and intensity of cyberhate. Witness the limited effect the hate speech protocol to the 2001 Convention on Cybercrime has had. The law simply is not an effective tool to deal with the scale of online hate. New laws attempting to regulate online hate would be difficult to implement and difficult to enforce given the global nature of the Internet, the physical impossibility of monitoring content in real time, fundamental differences in legal systems, and ever-changing technology.
Additional legal regulation of social media companies will almost certainly have a chilling effect on free expression and likely would not be permitted constitutionally in the United States. In repressive regimes, legal regulation of Internet content, ostensibly for the purpose of restricting hate and terrorist speech, leads to suppression of legitimate speech because hate and terrorist content is defined in a way that suppresses legitimate dissent. Even when undertaken for the legitimate purpose of combatting online hate and terrorism, legal regulation is likely to be overbroad because of the importance of context, and thus will suppress legitimate speech and interfere with the free expression of ideas.
As a practical matter, given that the United States remains the hub of global Internet communication, given that the US Constitution forbids most legal regulation of Internet content, and since the law largely is ineffective in addressing online hate and terrorism, the global community would be well-served in continuing to work with social media companies to address online hate and terrorism.
As the ADL said in its report to the Global Forum on Anti-Semitism in 2015:
To confront cyberhate effectively, the greatest needs today are closer industry cooperation, improved voluntary enforcement of terms of service and community guidelines, greater transparency, simplified mechanisms for users to flag offensive content, and more direct interaction with stakeholders. In addition, more attention must be given to counter-speech strategies, teaching critical thinking, developing educational materials on cyberhate and raising awareness of the problem. All of these principles are reflected in the ADL Best Practices [for responding to Cyberhate].
The recommended Best Practices for the Internet industry providers include the following:
- Providers should take reports about cyberhate seriously, mindful of the fundamental principles of free expression, human dignity, personal safety and respect for the rule of law.
- Providers that feature user-generated content should offer users a clear explanation of their approach to evaluating and resolving reports of hateful content, highlighting their relevant terms of service.
- Providers should offer user-friendly mechanisms and procedures for reporting hateful content.
- Providers should consider creating a specific new terrorism category for users seeking to flag terrorism-related content.
- Providers should respond to user reports in a timely manner.
- Providers should enforce whatever sanctions their terms of service contemplate in a consistent and fair manner.
- Providers should give priority attention to how their platforms are being used by terrorists and terrorist groups to promote terrorism, to recruit potential new terrorists, and to foster self-radicalization.
- Providers should make their expertise available to those looking to generate and promote counter-narratives.
- Providers should work with interested stakeholders to analyze the impact of counter-narratives in terms of their reach, scope, and effectiveness.
- Providers should use their corporate voices to condemn terrorist use of their platforms and to explain why terrorist activity and advocacy is inconsistent with their goals of connecting the world.
Underlying all of these recommendations is the understanding that rules on hate speech may be written and applied too broadly, encumbering free expression. Accordingly, a core principle of the recommendations is that online companies should take care to respect free expression and not to encumber legitimate debate and free speech.
Let me conclude by saying that while not every company is in full compliance with every one of the just-mentioned recommendations for best practices in addressing hate and terrorist content online, I can report that the major online platforms are taking great strides to improve their notice-and-takedown activities in accordance with these recommendations. The ADL’s 2015 Report presented to the Global Forum itemizes the progress and is available online. Given the absence of viable alternatives, including especially the limited role legal regulation can play, governments and civil society should applaud the efforts of the major online companies and should undertake to work more closely with those companies to achieve the mutual goal of eliminating online hate and the use of the Internet for terrorism.