There is an epidemic of Internet hate. With each advance in technology over the last two decades, from the appearance of websites and the interactivity of America Online through the blossoming of social media sites like Facebook, Twitter, and YouTube and the recent explosion in mobile computing and texting, we have seen how anti-Semites, racists, anti-Islamists, homophobes, misogynists, anti-immigrant bigots and other kinds of haters have embraced the new technologies to spread their lies, to recruit and to mislead.
From the earliest years of the Internet era it became clear that the haters would be among the first not only to recognize the incredible potential of this technology, but also to exploit it successfully for their own ends.
Today, while it is a marvelous medium for education, communication, entertainment and commerce, the ways in which the Internet is being used to disseminate and promote hateful and violent beliefs and attitudes are astounding, varied and continually multiplying.
Social media is the next frontier in the battle against online hatred.
We all understand that the incredible volume of content posted on Facebook, on YouTube, on Twitter and on other social media platforms makes real-time monitoring impossible. Every minute, approximately 350,000 tweets are sent via Twitter and 100 hours of video are uploaded to YouTube. The volume of posts on Facebook is similarly enormous.
So it is understandable that social media companies rely on their users to flag objectionable content.
The critical questions include whether users know how to flag objectionable content when they encounter it, whether they are motivated to do so, and what the companies do when such content is flagged. The good news is that the major Internet companies, including Facebook, Google/YouTube, and Twitter, are thinking more now about these questions than they ever have before, taking them more seriously, and finding new ways to address them.
In 2012, at the behest of the Inter-Parliamentary Coalition for Combating Anti-Semitism, the Anti-Defamation League – an American civil rights and human relations organization dedicated to combating anti-Semitism and all forms of bigotry – convened a working group to address the challenge of cyberhate. That working group, whose membership included representatives of several major Internet companies, met several times in the Silicon Valley to discuss these issues.
As an outgrowth of those conversations, in September ADL publicly released a set of Best Practices intended to provide guideposts for addressing cyberhate. These Best Practices, which were welcomed by the industry, complement a previously published ADL Cyber-Safety Action Guide which brought together for users’ convenience the relevant terms of service of the major companies.
The Best Practices initiative represents an important step forward for the Internet industry, and especially the major U.S.-based social media companies. The 10 Best Practices are divided into two parts: guideposts specifically for providers, and guideposts for the broader Internet community.
Providers, according to the Best Practices, should:
- take reports about cyberhate seriously, mindful of the fundamental principles of free expression, human dignity, personal safety and respect for the rule of law;
- offer users a clear explanation of their approach to evaluating and resolving reports of hateful content, highlighting their relevant terms of service;
- offer user-friendly mechanisms and procedures for reporting hateful content;
- respond to user reports in a timely manner;
- enforce whatever sanctions their terms of service contemplate in a consistent and fair manner.
At the same time, the Best Practices urge the broader Internet community – those who use the platforms as well as those who provide them – to:
- work together to address the harmful consequences of online hatred;
- identify, implement and/or encourage effective strategies of counter-speech – including direct response; comedy and satire when appropriate; or simply setting the record straight;
- share knowledge and help develop educational materials and programmes that encourage critical thinking in both proactive and reactive online activity;
- encourage other interested parties to help raise awareness of the problem of cyberhate and the urgent need to address it;
- welcome new thinking and new initiatives to promote a civil online environment.
It is encouraging that Facebook, Google/YouTube, Microsoft, Twitter and Yahoo are among those who have already expressed support for these Best Practices. Their comments are worth noting briefly, not so much because they are consistent with existing Terms of Service or Community Guidelines, which have been in place for a long time, but rather because of their publicly expressed concern about cyberhate and their commitment to responding.
When the Best Practices were issued, Facebook expressed support for ADL’s effort, adding that “the best practices outlined today provide valuable ways for all members of the Internet community to engage on this issue…we are committed to creating a safe and respectful platform for everyone who uses Facebook.” Similarly, Google – speaking about its various platforms including YouTube, Blogger, and Google+ – reiterated in a statement supportive of ADL’s efforts that they “offer tools to report hateful content, and act quickly to remove content that violates our policies.” Similar statements of support for the initiative came from Microsoft, Twitter and Yahoo!.
Of course, just because these companies appear more willing now to express concern about cyberhate than they have in the past does not mean that such hate and anti-Semitism are likely to disappear from the Internet overnight. However, it does represent an acknowledgement of the problem and a greater openness to addressing it, and that is an important step forward.
One recent example of the spread of hate speech on social media serves as a model of how the industry is dealing with the issue. Until about a year ago, the leading Hungarian purveyor of anti-Semitic and anti-Roma hate speech, Kuruc.info, found a home on Facebook. Using its Facebook page, it spewed hate, denied the Holocaust, conveyed threats, and incited violence, all in violation of Hungarian law. And because it was using an American-based service, it frustrated the Hungarian government’s efforts to enforce the country’s laws. Ultimately, its page was removed from Facebook – not because it violated Hungarian law, but because users complained to Facebook and Facebook concluded that the page violated its own guidelines. Kuruc.info is no longer active on Facebook, but the challenge of how to address anti-Semitic and other hate speech on social media is as important and as complex as ever. Years spent monitoring online anti-Semitism reveal that it is widespread and manifests itself in many different ways.
There are anti-Semitic websites and anti-Semitic blogs; there is anti-Semitism in chat rooms, comment sections of news organizations, and gaming; and there is a distressingly large amount of anti-Semitism in social media.
The prevalence of anti-Semitism and hate online unfortunately diminishes the Internet’s incredible value as a global forum for free expression by creating a hostile environment for Jews and other minorities. Furthermore, such hate, often spewed anonymously online, can lead to real-world violence.
One of the important lessons of the Kuruc.info experience is that given the global nature of the Internet and how much the laws on hate speech vary from country to country, the best opportunity to respond to cyberhate may be to work through the companies themselves.
Users concerned about such hate have the opportunity to promote counter-speech and, when appropriate, to urge the companies to enforce their existing Terms of Service and their Community Guidelines, many of which reject online hate.
The only practical solution to hate speech online, we believe, is a collaborative approach: one that brings a wide array of stakeholders to the table to discuss the challenges and opportunities openly and to strive to craft solutions that everyone can live with.
Such collaboration is absolutely fundamental, not only to creating a more civil society, but also to empowering kids to protect themselves online, so that when they grow up they will understand that certain words and behaviours are not acceptable.