The communications revolution, which has brought us all the wonders of the iPad, Google Earth, smartphones and social media, has also been a tremendous boon for anti-Semites and racists.
The tools that have enhanced our lives have also enabled bigots to spread their hatred, often under the cover of anonymity, faster and more broadly than ever before. In the span of 20 years, haters who once handed out fliers on street corners have gained the ability to disseminate massive amounts of content globally in nanoseconds.
Hate groups were quick to adapt to new online methods. One early adopter, Stormfront, a South Florida-based forum for anti-Semites and white supremacists, still exists. The First Amendment gives wide latitude to the kinds of hate speech such sites profess, provided they do not make specific threats. Moreover, efforts to remove hate sites from the Internet are rarely effective, as anti-Semitic or racist sites can simply migrate to another digital space.
Despite these challenges, educating the public about the dangers of hate sites remains critical. In addition, there are other tangible steps that can be taken to reduce the amount of hate speech online while not running afoul of First Amendment protections.
First, let’s take a closer look at the problem from within the Internet’s anti-Semitism echo chamber.
When Janet Yellen was named as the next chair of the Federal Reserve, we expected the usual outpouring of anti-Semitic commentary from anti-Semites who have long viewed the government’s institutions with suspicion. The appointment of a Jewish woman to head the Fed would only fuel conspiracy theories. And the anti-Semites responded as we expected, expressing their views in predictable viral rants on their blogs and extremist websites.
Unlike other recent notorious anti-Semitic conspiracy theories regarding the financial crisis or the September 11 terrorist attacks, this one did not enter the mainstream. But many anti-Semitic comments were left on the forums of popular websites, where ordinary Americans would be much more likely to find them.
On the popular news website Politico, one commenter asked, “Why only Jews are allowed to run our bank?” On Bloomberg News, another poster remarked, “Here’s another suggestion, ‘Klein,’ how about enough of Jews running things?”
And on the comment section of a Huffington Post article another person observed, “Only one thing to say Greenspan, Bernanke & Yellen, Religion Jewish wins. Rothschild family and Goldman Sachs would approve.”
Last July, in the aftermath of the suspension of Ryan Braun from Major League Baseball for using performance-enhancing drugs, social media networks witnessed a similar outpouring of impassioned commentary, including many anti-Semitic remarks directed at him because of his highly publicized Jewish heritage.
So what’s the big deal, some may ask? Why should we care if a few bigots use social media as a platform for hate?
Make no mistake: Internet hate is dangerous. Over time, the relentless drumbeat of hateful content can desensitize Internet readers. Long and sustained exposure to online hate speech, like exposure to any propaganda, can warp opinions and in some cases inspire people to act violently on their hatred.
If only we could flip a switch to make online hate disappear, or ask the companies that provide platforms to do so. Unfortunately, it’s not so easy. Recent statistics show that Facebook has 1.11 billion monthly users; YouTube has 72 hours of video uploaded to its servers every minute; Twitter has approximately 400 million tweets every day.
Simply put, it’s unrealistic to expect all online communication to be devoid of any hate or bigotry.
The good news is that we need not be passive — something can be done. Social media companies and non-governmental organizations can work together to combat cyber-hate. In fact, the Anti-Defamation League is in constant contact with the service providers about these issues.
Users can be empowered to act by flagging offensive content. ADL has produced a Cyber-Safety Action Guide that helps Internet users to register concerns without having to face the daunting task of navigating a website’s maze of links.
Users should also speak out in a smart and careful way, and be prepared to challenge hateful messages with positive ones. And they should promote counter-speech, applauding positive messages and sharing them with others.
Government can also play a role. We recently wrote members of Congress in support of the Hate Crimes Reporting Act of 2014. The legislation would update a 1993 report on the role of telecommunications in hate crimes with current data and information that would raise awareness about the depth and impact of hate speech across all media platforms. We hope the legislation will be passed without delay.
Finally, companies that provide online outlets for hate speech — social media companies, news outlets with comments sections, etc. — have a serious role to play as well. They need to create and enforce clear policies on hate speech, and dedicate the appropriate corporate resources to addressing the exploitation of their services.
Social media sites should also actively encourage counter-speech and digital literacy to address hate speech, and perhaps require those who post comments to do so under their real names.
For nothing stops a bigot like the threat of being unmasked.