I am once again honored to be participating in the ICCA conference, building on the work that began in London in early 2009. My focus is on the evil of online anti-Semitism and the steps to address it, an issue I have been studying since 1995.
As a way of framing the discussion, I ask you to suppose that Osama Bin Laden wanted to hold a world-wide rally here in Ottawa, outside on Parliament Hill, with the purpose of condemning Jews, promoting his extremist ideology and recruiting a new generation of terrorists. First of all, it is unlikely he would venture out of his cave to make the trip, given his guaranteed arrest or assassination. And it is also unlikely the rally attendees from abroad would make it across the border, given their purpose in visiting Canada and immigration rules. Even assuming that municipal authorities would issue a rally permit -- a fairly wild assumption -- no right-minded vendor would help with the logistics of video screens, microphones, speakers and the like. A global rally to promote anti-Semitism and terrorism, fortunately, is a far-fetched notion.
Or is it?
The Yemeni terrorist mentioned in connection with the recent attempted cargo bombings, Anwar al-Awlaki, has been holding such hate-inspired rallies every single day, for some time now. Not in person, of course, but online via the Internet. In the week the Chicago-bound cargo bomb plot was foiled, hundreds of al-Awlaki videos were available on YouTube with a combined total of 3.5 million views. Al-Awlaki, who has in fact been called “the Bin Laden of the Internet,” was not thwarted in his desire to hold a world-wide rally to condemn Jews and promote acts of terrorism. All he needed was a video camera, an Internet connection and a free online broadcast platform. YouTube, better known for its assortment of videos depicting cute kittens and dancing babies, served as al-Awlaki’s rally venue.
Only after weeks of pressure from the US Congress and the UK Parliament did YouTube finally take down the al-Awlaki videos.
Al-Awlaki is far from alone in his use of the Internet to foment hate and violence. The virus of anti-Semitism infected the Internet right from the start. No longer relegated to clandestine meetings down dark alleys or to exchanging propaganda in plain brown wrappers, haters embraced the worldwide publishing and communications capability of the Internet wholeheartedly. Today, as a result, Internet users continue to be misled by hate-filled lies about Jewish people posted online. There are Facebook groups denying the Holocaust with scores of followers (whose viewers are vilely misinformed that the Holocaust is a historical fiction perpetrated by Jews). There are web sites hosted by neo-Nazis, some even selling music and t-shirts with “Kill the Jews” as their theme. And even in the online comment sections of stories published by mainstream news organizations, there are proclamations of a worldwide Jewish conspiracy and worse.
One recent example of how hate is infiltrating the Internet occurred on this year’s Fourth of July holiday in the United States. As Americans were celebrating that event, a new “Event” was announced on Facebook, entitled “Kill a Jew Day.” The Facebook “host” for the Event wrote “You know the drill guys,” and he urged followers to engage in violence “anywhere you see a Jew” between July 4th and July 22nd. A Nazi swastika adorned the Event page.
The posting of that sickening Event prompted a wave of anti-Semitic rants and copycat Events on Facebook in support of the targeting of Jewish people. But it also prompted a counter-event on Facebook entitled “One Million Strong Against Kill a Jew Day” (whose supporters actually numbered, more modestly, in the thousands). And, pursuant to the Facebook Terms of Service, complaints about the “Kill a Jew Day” Event to Facebook administrators resulted in the company disabling the Event page.
The outrage over the Facebook Event was justified, not just because of the vile anti-Semitism behind it or its glorifying display of a swastika. People also objected to the Event because they know that Internet messages can and do inspire violence. Online anti-Semitic hate speech has been implicated in real-world acts of violence, such as an attack on Nobel Laureate and Holocaust survivor Elie Wiesel by a Holocaust denier in 2007.
And there is the case of James Von Brunn, the self-proclaimed white supremacist, anti-Semite and Holocaust denier who murdered Stephen Johns, a guard at the United States Holocaust Memorial Museum. Von Brunn maintained a hate Web site and was a frequent participant in chat rooms, on bulletin boards and through links to other haters’ Web sites. Von Brunn had a virtual fan club that cheered him on and legitimized his vicious thinking. Von Brunn found validation for the murderous rage he harbored on the Internet, with tragic consequences.
The prevalence of online anti-Semitism in its various forms has created an online culture where hatred of Jewish people, to some, is made to seem common, normal and acceptable. That, of course, was the goal of Hitler’s propaganda machine leading up to the Holocaust. Today, mercifully, we are not facing the threat of government-sanctioned genocide against Jewish people, but acts of violence against Jews, and terrorist acts, most certainly have been inspired and facilitated by the Internet.
So the question facing this Inter-Parliamentary Coalition for Combating Anti-Semitism is: What should governments’ and society’s response be to the anti-Semitic perversion of the Internet?
One visceral response to the proliferation of online hate is "There ought to be a law." Legal rules are the way a society decrees what is right and what is wrong. Since hate speech is wrong, it seems logical that law would be employed to police it. A legal ban on hate speech and the criminalization of its publication indeed is an alternative in some jurisdictions. But, of course, it is not an option in the United States where the First Amendment gives broad latitude to virtually all speech, even the most repugnant. (Only direct threats against identifiable targets are criminalized).
Legislatures around the world have heeded the call for laws encompassing Internet hate. The hate speech protocol to the Cybercrime Treaty is a prime example of a heralded legal solution to the problem. It was designed to eliminate racist sites from the Internet through criminal penalties.
From Brazil to Canada, and from South Africa to Great Britain, there are legal restrictions on hate speech, online and offline. In much of Europe, denial of the Holocaust (online or offline) is forbidden. In Germany, even displaying the swastika is a crime. In those countries – given their bitterly sad history of genocide – the enforcement of laws against Holocaust deniers sends a message to all citizens (especially impressionable children) that denying the Holocaust is literally unspeakable.
Still, there are many who believe that prosecutions, such as that of Holocaust denier David Irving in Austria, promoted his visibility and stirred up his benighted supporters, rather than quelling future hate speech and enlightening the public.
Moreover, laws against hate speech have not demonstrably reduced hate speech or deterred haters. The hate speech protocol to the Cybercrime Treaty, for example, has not brought down online hate. The shield of Internet anonymity and the viral nature of online hate make legal policing an unrealistic challenge, except in cases where authorities want to “set an example.” Further, the U.S., with our broad First Amendment rights, essentially functions as a safe haven for virtually all Web content. Shutting down a Web site in Europe or Canada through legal channels is far from a guarantee that the troublesome contents have been censored for all time. The borderless nature of the Internet makes censoring speech like chasing cockroaches: squashing one does not solve the problem when there are many more waiting behind the walls – or across the border.
Many see prosecution of Internet speech in one country as a futile gesture when the speech can re-appear on the Internet almost instantaneously, hosted by an ISP or social networking site in the United States. Moreover, in the social networking era, people are able to upload far faster than police can track and pursue offending speech.
Like the prosecution in Austria of David Irving, the German prosecutions of notorious Holocaust deniers and hate site publishers Ernst Zundel and Frederick Töben sent a strong message of deterrence, showing people who make it their life's work to spread hate around the world that they may well go to jail. And, again, such prosecutions expressed society's outrage at the messages. But if you search Google for Zundel, Töben and others like them, you will find Web sites of supporters paying homage to them as martyrs and republishing their messages.
Even some free speech advocates around the world applaud the use of the law to censor speech when it is hate speech, because of the pernicious effects of hate speech on minorities and children, and because of its potential to incite violence. But many of those same people object to the use of the law by repressive regimes like China, which censors speech it deems objectionable by branding criticism of the Chinese government as hate. It is not easy to draw the line between state use of censorship that is good and state use of censorship that is bad, because defining what is hate speech can be quite subjective. Giving the state the power to censor is problematic given the potential for abuse.
This is not to say that law has no role to play in fighting online hate speech – far from it. But countries with speech codes to protect minorities should make sure that the proper discretion is employed in using those laws against Internet hate speech, lest the enforcement be seen as ineffectual and result in diminished respect for the law. And, again, the realities of the Internet are such that shutting down a Web site through legal means in one country is far from a guarantee that the site will be shuttered for all time.
Thus, the law is but one tool in the fight against online hate. And as I’ve discussed, the law has its limits.
Counter-speech – which entails exposing hate speech for its deceitful and false content, setting the record straight, and promoting the values of tolerance and diversity – has an important role to play in the policing of online hate speech. That is the thrust of the First Amendment – just as haters are free to express themselves, so, too, are those opposed to hate empowered to respond. To paraphrase U.S. Supreme Court Justice Louis Brandeis, sunlight is still the best disinfectant. It is always better to expose hate to the light of day than to let it fester in the darkness. One answer to bad speech is more speech. The Facebook Event “One Million Strong Against Kill a Jew Day,” even if far short of a million, is a vivid example of the power of counter-speech as a vehicle for society to stand up to hate speech. Unfortunately, those who might post counter-speech to the Internet are overshadowed by the proliferation of hate-filled content from those who have more time and resources to devote to their Internet campaigns.
Education from an early age on Internet civility and tolerance would go far to stem the next generation of online haters. Sadly, Internet education is not universally a part of school curricula, with the notable exceptions of the well-done programs here in Canada and the work of the Anti-Defamation League through its World of Difference training and anti-cyberbullying education. Education about tolerance and diversity also would go a long way towards inoculating young people from the effects of online hate. Such education, of course, begins in the home but should be fostered and continued in schools.
An equally important and powerful tool against hate speech is the voluntary cooperation of the Internet community – ISPs, social networking companies and others. When Facebook enforced its Terms of Service and disabled the “Kill a Jew Day” Event page, it was a powerful example of an Internet company exercising its own First Amendment rights to ensure that it remained an online service with standards of decency. That voluntary act was quick and effective. A legal action against Facebook for hosting the Event – impossible in the US but viable elsewhere around the world – would have been expensive and time-consuming, and, ultimately, no more effective. Worse, the chilling effect of such a legal action might have resulted in undue restrictions by Facebook on future user postings.
Voluntary enforcement by Internet companies of self-established standards against hate speech is effective. If more Internet companies in the U.S. block content that violates their Terms of Service, it will at least be more difficult for haters to promote their hate through respectable hosts. The challenge, of course, is how to deal with social media sites where postings occur constantly and rapidly. Social media companies normally wait for a user complaint before they investigate hate speech posted on their service, but the proliferation of hate-filled postings outpaces the effectiveness of such a “notice and take down” arrangement. New monitoring techniques to identify hate speech as it is posted may be in order.
The prevalence of the al-Awlaki videos on YouTube prompted U.S. Congressman Anthony Weiner to write a letter to YouTube asking that the service take down the posted videos and prevent further postings. The YouTube response was to quote from its Community Standards prohibiting hate speech and terrorist-supporting content, but apparently many al-Awlaki videos remained online despite those Community Standards. Greater vigilance by online companies is called for, and additional inquiries from Parliamentarians such as that from Congressman Weiner should have an effect. There is nothing to prevent online companies from refusing to host hate speech – not even the First Amendment in the United States – so they should take the same position that the hypothetical sound and video vendor at the Bin Laden Ottawa rally would take: refuse to do business with haters and terrorists.
With search engines serving as the primary portals for Internet users, cooperation from the Googles of the world is an increasingly important goal. A good example is the Anti-Defamation League and Google working together to address “Jew Watch,” a virulently anti-Semitic Web site. The site's high ranking in response to a search for “Jew” was not due to a conscious choice by Google, but was solely the result of an automated system of ranking. In response, Google placed text on its site explaining how search results are obtained, to refute the impression that Jew Watch was a reliable source of information, and linked to the Anti-Defamation League site for counter-speech.
In short, vigilance and voluntary standards are more effective than the law in dealing with the increasing scourge of online hate speech. Hate speech can be “policed” in a borderless world, but not principally by using traditional law enforcement mechanisms. The Internet community must continue to serve as a “neighborhood watch” against hate speech online, “saying something when it sees something,” and working with the online providers to enforce community standards.