In late 2018, Pew Research Center reported that social media sites had surpassed print newspapers as a news source for Americans, with one in five U.S. adults reporting that they often got their news via social media.i By the following year, that figure had increased to 28 percent, and the trend is still risingii. Combine that with a deeply divided polity headed into a bitterly divisive 2020 U.S. presidential election season, and it becomes crucial to understand the information that Americans are exposed to online about political candidates and the topics they are discussing. It is equally important to explore how online discourse might be used to intentionally distort information and to create and exploit misgivings about particular identity groups based on religion, race or other characteristics.
In this report, we bring together the topic of online attempts to sow divisiveness and misinformation around elections on the one hand, and antisemitism on the other, in order to examine the antisemitic tropes and misinformation used to attack incumbent Jewish members of the U.S. Congress who are running for re-election. This analysis was aided by the Online Hate Index (OHI), a tool currently in development within the Anti-Defamation League (ADL) Center for Technology and Society (CTS) that is being designed to automate the process of detecting hate speech on online platforms. Applied to Twitter in this case study, OHI provided a score for each tweet denoting its confidence (in percentage terms) that the tweet is antisemitic.
This study presents a snapshot in time of “problematic” content, which for the purposes of this report we have defined as including both antisemitic tweets as well as tweets that include antisemitic tropes but require more context to be definitively categorized as antisemitic. The findings of this report are based on a review of 5,954 tweets directed at all 30 Jewish incumbents up for re-election on November 3, 2020. The tweets in our sample were all posted between July 23, 2020, and August 22, 2020.
The findings of this report, while limited, are alarming and, unfortunately, not surprising. We find distinct antisemitic tropes targeting Jewish incumbents, many of which have been recently amplified by QAnoniii, as well as other conspiracy theorists. For a full list of incumbents and details on OHI and methodology, please refer to the Methodology section.
A large number of tweets questioned the loyalty, honesty, ideology, and faith of Jewish incumbents, making up 48 percent of all tweets labeled as problematic. There appears to be a concerted effort to portray Jewish incumbents as less patriotic and more dishonest, due in part to their Jewish background. Many of these tweets also claimed that Jewish incumbents are Communists and Marxists in hiding, claimed dual loyalty of lawmakers, or questioned their Jewish faith if they were seen photographed next to Muslims.
Misinformation related to Jewish Hungarian-American financier and philanthropist George Soros, whose Open Society Foundations operates in the United States and abroad, constitutes an astounding 39 percent of all tweets targeting Jewish incumbents that were labeled Problematic in the sample set. These tweets push a series of debunked antisemitic conspiracy theories tied to Soros -- namely, that Soros is funding and organizing the political careers of Jewish incumbents, the media, and BLM and Antifa protests in order to assert a Communist or “Jewish supremacy” agenda in the United States.
Fifteen percent of tweets analyzed also included tropes related to the broad conspiracy theory that Jews control key political, financial, and media systems and exploit them for their advantage to the detriment of others. These tweets allege that Jewish incumbents are part of the “Deep State” or claim the American political and financial systems are controlled by the Rothschild family to benefit the Jewish community.
Outside of antisemitic conspiracy theories, tropes, and misinformation, this sample set also includes content that employed Explicit Antisemitic Language. While these tweets accounted for only 7 percent of all Problematic tweets, Twitter has yet to remove them -- despite the fact that they contain explicit forms of antisemitism, including slurs like “kike,” that violate the platform’s Rules and Policiesiv.
The report also includes recommendations for both social media companies as well as lawmakers and candidates:
Recommendations for Social Media Companies:
Develop strong policies and create distinct rubrics for different forms of hate targeting marginalized and minority groups: social media companies must develop decision-making rubrics for their content reviewers and AI tools that are tailored to the needs of different identity-based groups. These rubrics should cover a comprehensive set of tropes and phrases that are used to target different identity groups.
Collect and share data on identity-based hate: Developing ways to counter online hate requires that we know which groups are targeted, the extent to which they are targeted, and the nature of the attacks. Without this information, it is impossible for platforms, researchers, and civil society to address these problems in a way that is informed by empirical evidence.
Improve both manual and automated processes for classifying hate: In addition to creating better rubrics for specific forms of hate speech, social media platforms should assume greater responsibility to enforce their policies and to do so accurately at scale.
Run Informational Interventions on the platform: Companies should experiment with a new set of features that help users navigate the world of disinformation. They can do this through interventions that provide accurate information on candidates and identity-based groups to safeguard the democratic system.
Expand tools and services for targets of hate: At present, platforms are doing little to nothing for targets of hate. Platforms should offer far more services and tools for individuals facing or fearing online attacks, including assisting with tracking and capturing information, providing resources, and creating better customization options to mitigate harm.
Design to reduce the influence and impact of hateful content: social media companies should redesign their platforms and adjust their algorithms to reduce the prevalence and influence of hateful content and harassing behavior.
Recommendations for Lawmakers and Candidates
Dedicate resources to studying the impacts of online hate: Congress should commission a report to study how the online hate ecosystem impacts the election process, how misinformation sways voters, and how aspiring political candidates at every level are impacted by content that targets them based on their identity.
Incorporate informational interventions in election campaigns: At the outset of a campaign, candidates should use their reach to counter disinformation and hate speech in real-time on social media. Additionally, candidates should amplify accurate information and educate the electorate on the impact of hate speech disguised as political speech and how it reverberates in different identity-based groups.
Use the bully pulpit to encourage greater transparency from social media companies: Congress should work closely with companies to ensure they have updated and accurate information on the state of misinformation campaigns targeting different identity-based groups in America. Candidates should make clear that they do not support the way social media platforms are handling hate speech and disinformation.
On the face of it, this study provides a view into the online experience of a Jewish incumbent in the federal government heading into a presidential election season. Considered independently, our findings are informative; however, they also raise a number of questions that call for further study. Some of these questions include: (i) Why are so many Problematic antisemitic tweets easily found on Twitter, and why are Twitter’s Rules not enforced against a large number of blatantly antisemitic tropes? (ii) Holding political office in today’s environment of surging online hate can mean an incredibly challenging work environment. Moreover, the fact that tweets targeting Jewish incumbents are public means anyone can view them, including others who may then be incited to post their own attacks. How will exposure to vitriolic tweets impact young Jews aspiring to political office? (iii) Often, targets of online hate who are also members of marginalized communities withdraw and self-censor out of fear of being further harassed online. Would candidates or lawmakers who are targets of online hate be at a disadvantage because their social media is flooded with harassment? (iv) Given the increased reliance by Americans on social media as a source of news, to what extent is antisemitic misinformation shaping Americans’ views of and opinions about Jews? And finally, (v) Given the specific ways in which Jewish candidates are targeted online, which may also be illustrative of other individuals associated with a marginalized identity, why are social media platforms not collecting identity-based data? Without this data, they are unable to methodically track and take action against campaigns of harassment aimed at identity-based groups including Jewish, Black, Muslim, Latinx, Asian-American, LGBTQ+ and other marginalized groups.
The United States presidential and midterm elections of 2016 and 2018 saw an unprecedented rise in the use of social media for online political canvassing and discussion. Subsequent studies explored the nature of the disinformation and toxicity that engulfed political discourse on the internet during those election cycles, finding that many of the efforts to divide Americans were driven by bot activityvi as well as by non-state actorsvii. Over the same period, ADL also tracked a significant rise in harassing conduct and antisemitic content targeting Jewish social media usersviii,ix. This research is all the more necessary because social media platforms do not provide any transparency around the scope, reach, and nature of content used to attack identity-based groups, like the Jewish, Black, Muslim, Hispanic, LGBTQ+ and other communities on their respective platforms.
The Anti-Defamation League (ADL) Center for Technology and Society (CTS) is beginning to look at how present-day online antisemitic attacks and fear-mongering recycle historical tropes and conspiracy theories. Many of these tropes and conspiracy theories trace their origins to longstanding antisemitic myths and stereotypes. Moreover, as ADL research has long shown, antisemitism typically surges in times of political or economic uncertainty as well as rapid social, economic, political, and demographic changes, and is often used as a tool of political manipulation or populist anger, a description that certainly fits our current environmentx. Finally, ADL looks at how Americans can demand greater accountability, responsiveness, and transparency from the social media platforms that set the rules governing speech on the internet. Collectively, these platforms have an outsized influence on the election process.
A subset of 5,954 tweets was analyzed out of a total dataset of 337,689 tweets, which is the total number of tweets directed at Jewish incumbents over a one-month period between July 23 and August 22, 2020. Of the 5,954 tweets, 10 percent were labeled as Problematic, which for the purposes of this study we have defined as including both antisemitic tweets as well as tweets that include antisemitic tropes but require more context to be categorized as antisemitic. Further analysis revealed distinct groups of antisemitic content: (i) George Soros-related conspiracy theories, (ii) Explicit Antisemitic Language, (iii) tropes related to Jewish Power & Control, and (iv) content Questioning the Loyalty, Honesty, Ideology, and Faith of Jewish incumbents.
Incumbents were targeted disproportionately. The above chart shows both the distribution of all tweets labeled Problematic across incumbents (black bars), as well as the distribution of the total number of tweets directed at each incumbent (grey bars). An incumbent with a larger black bar than grey bar is being targeted with a disproportionate amount of Problematic content. For example, Representative Nadler (NY-10) accounts for 31 percent of all Problematic tweets, but only 18 percent of all tweets directed at an incumbent. Similarly, Representatives Cicilline (RI-1) and Cohen (TN-9) each account for 6 percent of all Problematic tweets, but only 2 percent and 1 percent, respectively, of all tweets directed at an incumbent. This is similar for Senator Ron Wyden (OR), and Representatives Raskin (MD-8) and Zeldin (NY-1).
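The disproportionality comparison above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the study’s actual code: the per-tweet records and counts below are made up, and only the @RepJerryNadler and @SenSchumer handles appear in the report itself (the third handle is a hypothetical stand-in).

```python
from collections import Counter

# Hypothetical per-tweet records of (incumbent handle, is_problematic);
# the counts are illustrative, not the study's data.
tweets = (
    [("@RepJerryNadler", True)] * 31 + [("@RepJerryNadler", False)] * 149
    + [("@RepCicilline", True)] * 6 + [("@RepCicilline", False)] * 14
    + [("@SenSchumer", True)] * 10 + [("@SenSchumer", False)] * 290
)

total = Counter(handle for handle, _ in tweets)
problematic = Counter(handle for handle, is_prob in tweets if is_prob)
n_total = sum(total.values())
n_prob = sum(problematic.values())

# An incumbent is disproportionately targeted when their share of
# Problematic tweets exceeds their share of all tweets -- the "black
# bar taller than grey bar" condition in the chart.
shares = {
    handle: (problematic[handle] / n_prob, total[handle] / n_total)
    for handle in total
}
for handle, (prob_share, total_share) in shares.items():
    flag = "disproportionate" if prob_share > total_share else "proportionate"
    print(f"{handle}: {prob_share:.0%} of Problematic vs {total_share:.0%} of all ({flag})")
```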
Table 1 shows the top 5 targeted incumbents for each category of Problematic tweets.
Table 1: Top 5 Targeted Incumbents by Category of Content
While most users in the subset of Problematic tweets had authored only one piece of content, others posted toxic and antisemitic content multiple times. User A had 27 Problematic tweetsxi, User B had 12 tweets, User C had 6 tweets, and several users had 4 or fewer Problematic tweets. Users with multiple Problematic tweets tended to focus the majority of their attacks on one Jewish incumbent. For example, 89 percent of Problematic tweets made by User A are aimed at Senator Schumer (NY), while 100 percent of User B and User C tweets are aimed at Representative Nadler (NY-10). This provides some insight into the singularly focused nature of antisemitic and toxic users who attack elected officeholders. Users A, B, and C are not verified users. User A has the highest follower count with 1,010 users following his account, followed by User B with 157 followers, and User C with 49 followers.
George Soros-related Conspiracy Theories
The most prominent conspiracy theory targeting Jewish incumbents relates to George Soros, the Hungarian-American Jewish billionaire financier, philanthropist and Holocaust survivor. Soros, 89, got his start in philanthropy supporting the fall of Communism and transition to (more) open societies in the former Soviet Union and communist bloc countries. Beginning in the mid-1990s, he expanded his philanthropic focus to the United States and has since donated billions of dollars of his personal wealth to liberal and anti-authoritarian causes around the world, making him a favored target among many on the extreme-rightxii. Conspiracy theorists often insert Soros into a series of repackaged historical antisemitic myths, particularly the notion that rich and powerful Jews work behind the scenes, plotting to control countries and manipulate global eventsxiii.
In our analysis, we found several tweets targeting Jewish incumbents contained negative content about George Soros. This content made up 39 percent of all Problematic tweets found in the sample set analyzed in this study. In addition to Soros-related conspiracy theories, many of these tweets included additional conspiracy theories and antisemitic tropes.
The most prominent and frequent conspiracy theory in tweets referencing George Soros is the false allegation that he funded and organized the recent protests, including all BLM and Antifa activity, following the murders of George Floyd, Breonna Taylor, and Ahmaud Arbery. These allegations made up 52 percent of the Soros-related posts. In fact, the use of Soros in conspiracy theories related to the recent protests has gained significant traction in 2020, due in part to QAnon, an online trolling and disinformation conspiracy theory whose followers believe that a cabal of Satan-worshipping Democrats, Hollywood celebrities and billionaires run the world while engaging in pedophilia and human trafficking. In a sample assessment of tweets conducted in May 2020, ADL’s Center on Extremism (COE) found that negative tweets about Soros rose from 20,000 per day on May 26, the day after the murder of George Floyd, to more than 500,000 per day on May 30xiv.
A little more than 7 percent of tweets referencing Soros included allegations that the Jewish incumbent in question was a Communist or Marxist or had accepted Soros’ political funding to spread those ideologies. Some 6 percent of Soros tweets included allegations that the incumbent was engaged in treason or was a traitor by virtue of accepting funds from Soros. Other assertions included the idea that incumbents are colluding with George Soros as part of a Deep State or a New World Order that is engineering a “globalist” agenda.
In terms of amplification, one Soros-related video tweet from a right-wing Twitter handle received a significant amount of views and retweets. The tweet and contents of the video allege that BLM has been co-opted by George Soros and “the left” and can no longer be trusted. This video has been viewed by 234,000 users, and made up 20 percent of all Soros-related tweets in our analysis sample.
Representative Nadler (NY-10) received the highest volume of negative Soros-related tweets, with 55 percent of all Soros tweets directed at “@RepJerryNadler”. Many of these tweets are responses to a video where Representative Nadler was asked to disavow the violence from Antifa in Portland, to which he responded, “That’s a myth.”xv Tweets responding to the video alleged that Nadler only made these statements because of funding he received from George Soros, and they made up 21 percent of all Soros-related tweets directed at Representative Nadler.
Senator Schumer (NY) also received a large number of negative Soros tweets directed at “@SenSchumer”, which constituted 35 percent of all Soros-related tweets in this sample set. It is possible that the high volume of tweets directed at Senator Schumer owes not only to his being Jewish but also to his position as Senate Minority Leader, a position he has held since 2017. Senators Ron Wyden (OR), Bernie Sanders (VT), Jacky Rosen (NV), and Dianne Feinstein (CA) also received Soros-related tweets, but to a lesser degree. Similarly, House Representatives David Cicilline (RI-1), Steve Cohen (TN-9), David Kustoff (TN-8), Lee Zeldin (NY-1), Max Rose (NY-11), and Jamie Raskin (MD-8) all received conspiracy-focused, negative tweets about Soros.
While most users attacking incumbents with negative Soros tweets engaged in only one-off instances, some repeatedly targeted the candidates. User A made 27 tweets, aimed almost entirely at Senator Schumer (NY) and using a host of different Soros-related conspiracy theories; User B made 12 tweets targeting Representative Nadler (NY-10); User C had 6 tweets; and the remaining users ranged from 4 tweets down to 1 tweet each.
Explicit Antisemitic Language
The OHI-ranked sample dataset also revealed instances of obvious antisemitic language. These are relatively low in volume (only 7 percent of the Problematic tweets set), and included slurs, the use of the echo ((( ))) symbol, and one instance of Holocaust denial. Such obvious violations of Twitter’s Rules are more easily tracked and removed by Twitter’s Trust & Safety systems, and might explain the low volume of such tweets in this sample set.
The majority of tweets in this category make blanket statements about Jews, comment on physical traits, and combine dehumanizing and insulting language next to the word “Jew”. The tweets in this set referenced the Torah, made offensive comments about the Jewish identity of the incumbent, espoused the belief that Jews are trying to dismantle America, and repeated conspiracy theories about how Jews are controlling the newsfeed algorithms of social media platforms.
The use of the word “rat” to symbolize Jews has a long history that can be traced back to Nazi Germanyxvi. Some tweets referred to Jewish Democratic incumbents as “rats”, usually in the form of “DemoRATS” or some similar variation. While not every instance can be categorized as antisemitic (some users might simply have misspelled the word Democrat), the pattern of lower-case and upper-case letters and the repeated use of the word helped filter out accidental uses.
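The casing heuristic described above can be sketched with a simple regular expression. This is a minimal illustration of the idea, not the study’s actual filter, and the pattern is an assumption: it flags spellings where the RAT substring is deliberately capitalized while leaving ordinary casing, and ambiguous all-caps shouting, unflagged.

```python
import re

# Flags "Democrat" variants where the RAT substring is deliberately
# capitalized ("DemoRATS", "DemocRATs"). Normal casing ("Democrats")
# and all-caps shouting ("DEMOCRATS") do not match, since neither
# singles out "RAT" through mixed case.
DEMORAT = re.compile(r"\bDemo[a-z]*RAT")

def is_deliberate_rat(text: str) -> bool:
    """Return True if the text contains a mixed-case 'DemoRAT' variant."""
    return bool(DEMORAT.search(text))
```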
Representative Nadler (NY-10) was the most targeted Jewish incumbent with tweets that contained Explicit Antisemitic Language, making up 28 percent of the Explicit Antisemitic tweets. Nadler was followed by Senator Schumer (NY) who was targeted with 23 percent of the obvious antisemitic tweets set. This was followed to lesser degrees by Representative Raskin (MD-8), Senator Feinstein, Senator Wyden (OR), Representatives Kustoff (TN-8), Cohen (TN-9), Eliot Engel (NY-16), and John Yarmuth (KY-3).
Other than one user having two Explicit Antisemitic tweets, all other users only accounted for one explicit tweet each.
Outside of this set of Explicit Antisemitic tweets, the OHI-ranked sample set also included tweets that compared Jewish incumbents to Nazis or Hitler. These are not categorized as “Explicit Antisemitic Language” as it is unclear if the users were intentionally trying to provoke the targeted incumbent knowing that they are Jewish, or if they were making a comparison as a shorthand for fascism. In some cases, however, it was clear that this stemmed from an antisemitic sentiment as illustrated below.
Power & Control
The sample dataset of tweets also includes several instances of antisemitic tropes related to the idea that Jews control key political, financial, and media systems, and exploit these systems for their advantage and to the detriment of others. According to ADL’s “Antisemitism Uncovered” report, “Jews who pursue or occupy leadership roles in elected office or other stations of public life too often are deemed conspiratorial...The myth of excessive Jewish power also relies on historical claims of Jews controlling global affairs. Jews are cast as manipulative and conniving schemers who work in the shadows to advance an evil agenda.”xvii
Fifteen percent (15 percent) of all Problematic tweets in this sample dataset directed at Jewish incumbents focused on Power & Control. Many of these tweets contained obvious forms of antisemitism, but some are subtler, while others leveraged historical antisemitic tropes yet, for reasons explored below, were difficult to categorize as antisemitic.
Twenty-seven percent of the Power & Control subset of tweets leveraged obvious antisemitic tropes: invoking the Rothschild family (a common dog whistle asserting Jewish control of the financial system), claiming Jewish control of the media industry, referencing the New World Order, and claiming that Jews spread the coronavirus. Tweets about Power & Control often contained more than one category of problematic content.
Thirty-seven percent (37 percent) of the Power & Control subset of tweets targeted Jewish incumbents with terms like “globalist” and “Deep State”. These terms illustrate some of the challenges in labeling antisemitism. The term “globalist” is often used as an ethnic smear, invoking a long-running conspiracy theory that Jewish populations are not loyal to the countries they live in and cooperate through secret international alliances.xviii It is also used by right-wing media and pundits as a way to cast aspersions toward a political ideology that promotes the benefits of globalization and generally supports greater interconnectedness between countries and free-market trade policiesxix. Similarly, the term Deep State has been used by some, like QAnon supporters, in antisemitic ways to assert that a cabal of Jews controls not just world governments, but the banking system, the media and entertainment industry to keep people poor, ignorant and enslavedxx. Finally, sometimes these terms are directed at both Jewish and non-Jewish politicians, making it difficult to determine the exact nature of their use.
The use of antisemitic tropes around Power & Control has also evolved to include newer conspiracy theories such as “Pizzagate,” a right-wing conspiracy theory that claims a pedophile sex-slave ring connected to prominent Democrats was being operated from a Washington, D.C., pizza parlor. This conspiracy theory led to a man firing a military-style assault rifle inside a Washington pizzeria in 2016, believing he was saving children trapped in a sex-slave ringxxi. The Jeffrey Epstein pedophilia case appears to have further fueled this conspiracy theory, as several Jewish incumbents were subjected to tweets referring to them as pedophiles, with no evidence to back up these claims. In all, almost half (49 percent) of the Power & Control tweets targeted incumbents with claims of pedophilia.
Senator Schumer (NY) received the highest number of Power & Control tweets directed at a single incumbent, which accounted for 31 percent of the distribution of these tweets. He was followed by Representative Nadler (NY-10) at 20 percent of the distribution, and Representative Cohen at 12 percent of the distribution.
Seventy-two percent (72 percent) of all Power & Control tweets are one-time tweets made by individual users. The most frequent repeat offender was User F, who had 4 tweets in the Power & Control set targeting Senators Schumer (NY), Blumenthal (CT), and Rosen (NV).
Questioning Loyalty, Honesty, Ideology, and Faith
Forty-eight percent of all Problematic tweets questioned Jewish incumbents’ loyalty to the United States (labeling them traitors or treasonous), their honesty and trustworthiness, their political ideology, or their faith as individuals and elected officeholders identifying as Jews.
Fifty-three percent (53 percent) of the Questioning Loyalty, Honesty, Ideology, and Faith subset (which will be referred to as the Questioning subset) are tweets that called Jewish incumbents Communists or Marxists. This again illustrates the challenges involved in automating the labeling of certain kinds of content as antisemitic -- attacks centered around Communism and Marxism can be expressions of fairly benign but exaggerated political differences between users who identify as right-wing and politicians in the Democratic party, or they could invoke a historical antisemitic conspiracy theory, referred to as Jewish Bolshevism or Jewish Communism, that dates back to the Russian Revolutions of 1917xxii. According to this canard, Jews were responsible for the Russian Revolution and have dominated key Communist movements around the world.
Although some users are less subtle about their use of Communism, such as User G who claims that the Bolsheviks who committed genocide were mostly Schwartz’s tribesmen, most tweets in this subset are either dog whistles, or indistinguishable from political disagreement. Seventeen of these tweets are combined with Soros-related conspiracy theories, with other tweets combining accusations of treason, lying, and explicit antisemitic language.
Again, the majority of tweets in this subset are directed at Representative Nadler (NY-10), who accounted for 22 percent of the subset. This was again followed by Senators Schumer (NY) and Feinstein (CA), who each accounted for 18 percent, and Representative Cicilline (RI-1) and Senator Sanders (VT), with 10.5 percent and 10 percent respectively.
This set also includes several tweets which called the Jewish incumbents traitors, claimed they were or are engaged in treasonous activity or questioned their Jewish faith. These accounted for around 21 percent of the Questioning subset of tweets, and are frequently paired with accusations of being funded by Soros (23 percent).
Many of the tweets questioning incumbents’ Jewish faith focused on their interactions with Muslims. One particular image that had wide reach was a picture of Senator Sanders (VT) sitting next to Linda Sarsour. The original post made by Tarek Fatah was retweeted a total of 901 times, received 3.6k likes, and its retweets appeared in our sample set 17 times.
There are also several tweets that questioned the honesty of Jewish incumbents and at least one case evoked the dual-loyalty trope. The accusation that Jews have a “dual loyalty”, that they cannot be trusted because their true allegiance is to their religion or Israel, rather than to the country in which they live, dates back thousands of years. It animated the Nazis in 1930s Germany, when they accused Jewish people of being traitorsxxiii,xxiv. Thirty-three percent (33 percent) of all Questioning tweets called Jewish incumbents liars or questioned their ability to be honest to the American public. The vast majority of these tweets cannot be categorized as antisemitic without additional context, but they should be cataloged and tracked, since most antisemitic tropes are built on top of the idea that Jews are inherently untrustworthy.
Twenty-one percent (21 percent) of all Questioning tweets were directed at Senator Schumer (NY), followed by 18 percent for Representative Nadler (NY-10), and 11 percent each for Representative Cicilline (RI-1), Senator Feinstein (CA), and Senator Sanders (VT). The user with the most tweets in this category was User A with 11 tweets that entirely focused on Senator Schumer, while User D had 4 tweets all directed at Senator Schumer as well. User E meanwhile only had 3 tweets which all focused on Representative Nadler.
The goal of this study was to begin measuring, cataloguing and understanding the nature of hateful content targeting Jewish incumbents running for re-election to political office during the lead-up to a politically divisive general election, with a special focus on antisemitic content. We collected all tweets that contained a mention of any of the 30 incumbent Jewish members of the U.S. House of Representatives and Senate from the 116th Congress (Table 2)xxv. The tweets represent one month of data and covered the period from July 23, 2020 to August 22, 2020.
A total of 5,954 tweets were labeled and analyzed out of a total dataset of 337,689 tweets, which is the total number of tweets directed at Jewish incumbents over the one-month period. The Online Hate Index (OHI) classifier was applied to all tweets, outputting a score (ranging from 0% to 99%) representing the OHI’s confidence that a given tweet is antisemitic. The OHI is a tool currently in development that aims to automate the process of detecting hate speech on online platforms. The top 200 scored tweets were shortlisted for each of the 30 elected officeholders for manual review and analysis. Overall, three reviewers labeled and analyzed a total of 5,954 tweets, as some tweets were directed at multiple incumbents. The purpose of manually reviewing tweets was to understand the nature of the antisemitic tweets and related trends beyond the binary categories of “antisemitic” and “not antisemitic”, as well as to verify the accuracy of the OHI model.
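The shortlisting step described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the study’s actual pipeline: the field names (`score`, `mentions`) are invented for the example.

```python
from collections import defaultdict

def shortlist(tweets, top_n=200):
    """Keep the top_n highest-scoring tweets per mentioned incumbent.

    `tweets` is an iterable of dicts with a 'score' (the OHI's 0-99
    confidence that the tweet is antisemitic) and 'mentions' (handles
    of the incumbents the tweet is directed at); both field names are
    assumptions for illustration. A tweet mentioning several
    incumbents lands on several shortlists, which is why the reviewed
    total need not equal a simple 30 x 200.
    """
    by_incumbent = defaultdict(list)
    for tweet in tweets:
        for handle in tweet["mentions"]:
            by_incumbent[handle].append(tweet)
    return {
        handle: sorted(group, key=lambda t: t["score"], reverse=True)[:top_n]
        for handle, group in by_incumbent.items()
    }
```

The shortlists, rather than the full 337,689-tweet dataset, are then what the three reviewers label by hand.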
Labels were applied to content that related to antisemitic George Soros conspiracy theories; contained Explicit Antisemitic Language; referenced control of financial or political systems, the Deep State, or a Globalist agenda; included Holocaust denial or use of the Echo symbol; made comparisons to rats, Nazis, Communists, or Marxists; referenced the Jewish Bolshevism conspiracy theory; or alleged untrustworthiness and dual loyalty.
As with any analysis of text, not all pieces of content fit exclusively into a given label, and there are instances where it is impossible to determine whether a tweet is simply offensive or whether it intentionally evokes a historical antisemitic trope. For example, while most tweets related to George Soros referred to an antisemitic conspiracy theory alleging his control and manipulation of U.S. politicians, some tweets seemed more concerned that a few donors constitute a large proportion of funding for political candidates, and still others contained aspects of both. Where context wasn’t clear, tweets were viewed on Twitter’s webpage to assess content in relation to the thread they were part of. This step improved the accuracy of labeling and analysis.
Another important caveat is that the analysis in this report provides a snapshot of the problematic and antisemitic content targeting Jewish incumbents and should not be interpreted as representative of the incumbents’ entire online experience. Nor does it account for the offline attacks they might experience.
This non-random sample of tweets had OHI scores ranging from 99 percent (high confidence of antisemitism) to 0 percent (low confidence of antisemitism), with an average score of 24 percent and a median of 0.5 percent. In this sample, 10 percent of tweets were labeled as Problematic, a category that includes both explicitly antisemitic content and content that is problematic but difficult to label as antisemitic without more information. This subset of problematic tweets received an average OHI score of 69 percent, with a median of 93 percent. In other words, while the OHI did not have high confidence in the antisemitic nature of the majority of the 5,954 tweets, it had markedly higher confidence in the tweets that human reviewers labeled as problematic. This is important because it shows that the model scores are positively correlated with the manual labels.
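The comparison above — summary statistics for the full sample versus the manually labeled problematic subset — amounts to a simple computation, sketched below. The function name and field names are illustrative assumptions, not the report’s actual code.

```python
from statistics import mean, median

def summarize_scores(tweets):
    """Compare OHI confidence scores for the full sample against the
    manually labeled 'problematic' subset (hypothetical sketch).

    tweets: list of dicts like {"score": 0.0-0.99, "problematic": bool}
    """
    all_scores = [t["score"] for t in tweets]
    prob_scores = [t["score"] for t in tweets if t["problematic"]]
    return {
        "all_mean": mean(all_scores),
        "all_median": median(all_scores),
        "problematic_mean": mean(prob_scores) if prob_scores else None,
        "problematic_median": median(prob_scores) if prob_scores else None,
    }
```

A problematic-subset mean and median well above the full-sample values, as reported here (69% vs. 24%, and 93% vs. 0.5%), is what indicates the positive correlation between model scores and manual labels.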
This study also provided an opportunity to assess the performance of the OHI itself, which aims to measure the spread of hate speech on online platforms ranging from social media to image boards. The classifier powering the OHI was trained on a sample of tweets labeled for antisemitism, and appears to be tuned toward more obvious forms of antisemitism and toxicity rather than subtler forms, likely a function of the data it was trained on. For instance, the average OHI score for Power & Control tweets is 85 percent confidence, with a median of 99 percent. This is significantly higher than the Soros mean of 69 percent and median of 77 percent. While more analysis needs to be conducted to unpack the significance of these scores, they highlight one simple fact: the OHI tended to be in closer alignment with human labelers on antisemitic tropes related to Jewish power than on George Soros-related tweets. The scores for Explicit Antisemitic Language also reveal an interesting distinction between antisemitic language and other problematic language. While the mean score is only 67 percent, the median is 97 percent: the OHI aligns with human labelers on the top half of the labeled tweets, but its confidence drops dramatically further down the list. This highlights the challenge of making clear-cut determinations of antisemitism, which is discussed further later in this report.
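The per-trope comparison above (e.g., Power & Control vs. Soros, and the mean–median gap for Explicit Antisemitic Language) can be reproduced with a small grouping computation. This is a hypothetical sketch: the label strings and data layout are assumptions for illustration.

```python
from collections import defaultdict
from statistics import mean, median

def scores_by_label(labeled_tweets):
    """Mean and median OHI confidence per trope label
    (hypothetical sketch; a tweet may carry several labels).

    labeled_tweets: list of dicts like
        {"score": 0.0-0.99, "labels": ["Power & Control", ...]}
    """
    groups = defaultdict(list)
    for t in labeled_tweets:
        for label in t["labels"]:
            groups[label].append(t["score"])
    return {
        label: {"mean": mean(s), "median": median(s)}
        for label, s in groups.items()
    }
```

A label whose median far exceeds its mean — as with Explicit Antisemitic Language (median 97%, mean 67%) — signals a skewed distribution: the model is very confident about the clearest cases and much less confident about the rest.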
For Social Media Companies
Develop strong policies and create distinct rubrics for different forms of hate targeting marginalized and minority groups
Social media companies must develop decision-making rubrics for their content reviewers and AI tools that are tailored to the needs of different identity-based groups. These rubrics should cover a comprehensive set of tropes and phrases that are used to target different identity groups. Social media platforms should also develop stronger community guidelines or standards that address hateful content targeting marginalized and minority groups. Platforms should work regularly with civil rights organizations and civil society to map out the hate ecosystem for all identity groups and create rules and guidelines tailored to those forms of hate. For example, the antisemitic tropes related to George Soros mentioned in this report appear not to violate Twitter’s Rules, either because Twitter lacks policies covering these topics or because it does not have the correct rubrics in place to make these decisions.
Collect and share data on identity-based hate
Developing ways to counter online hate requires that we know who is being targeted, the extent to which they are targeted, and the nature of the attacks. Without this information, it is impossible for platforms, researchers, and civil society to create solutions to these problems in a way that is informed by empirical evidence. Platforms should adopt robust accountability and transparency standards. This should include regularly scheduled external, independent audits so that the public knows the extent of hate and harassment on a given platform, with a focus on quantifying violative content that targets each group with a breakdown on how those groups are being targeted. Audits would also allow the public to verify that the company followed through on its stated actions and to assess the effectiveness of company efforts across time. Companies should provide information from the audit and elsewhere through more robust transparency reports.
Improve both manual and automated processes for classifying hate
This study and many others like it demonstrate the ease of finding hateful content on social media platforms. In addition to defining specific forms of hate speech, social media platforms should assume greater responsibility for enforcing their policies, and for doing so accurately at scale. They can do this by working with civil rights organizations to train their content reviewers on the history, impacts, and distinct sub-components of hate speech for different identity groups, such as the Jewish, Muslim, LGBTQ+ and Black communities. This will help move the decision-making process beyond generic decision trees meant to apply uniformly to all groups. Social media companies should also work on improving their automated hate detection tools so that those same learnings transfer over.
Run Informational Interventions
This study shines a light on misinformation targeting Jewish elected office holders. But we know that candidates from all minority and marginalized groups are targeted with hateful and vitriolic content. Social media companies must conduct informational interventions that counter campaigns that aim to misinform the public under the guise of “political speech,” using hateful stereotypes, tropes, and conspiracy theories. The features of social media platforms facilitate the wide reach of these hate campaigns. These companies now need to experiment with a new set of features that help users navigate the world of disinformation, through interventions that provide accurate information on candidates and identity groups to safeguard the democratic system. Social media companies should also regularly share reports with Congress about physical threats and other hateful and toxic discourse targeting candidates.
Expand tools and services for targets of hate
Given the prevalence of online hate and harassment, platforms should offer far more services and tools for individuals facing or fearing online attack. Platforms should develop an interstitial notification, shown between the moment a user submits content and the moment it appears publicly, for content that AI detects as obvious hate speech. Users should also be allowed to flag multiple pieces of content within one report, instead of having to create a new report for each piece of content being flagged.
Design to reduce influence and impact of hateful content
Social media companies should redesign their platforms and adjust their algorithms to reduce the influence of hateful content and harassing behavior. Borderline content shared by users who have been flagged multiple times should not appear on news feeds and home pages.
For Congressional Candidates and Elected Officials
Dedicate resources towards studying the impacts of online hate
Candidates for political office should advocate for Congress to commission a report to study the ways in which the online hate ecosystem impacts the election process, how misinformation sways voters, and how aspiring political candidates at every level are impacted by content that targets them based on their identity. Incumbents should commission further study into these research topics and should work with civil rights organizations and technology companies to collectively scope out this research.
Incorporate informational interventions in election campaigns
This report illustrates how hate speech targets not a single candidate but an entire group of people. The negative externalities of this hate spill over to non-political actors across different communities. Candidates should actively work to correct misinformation that targets entire groups of people and has the potential to impact key democratic processes. Candidates should use their reach to amplify accurate information and educate the electorate on hate speech disguised as “facts” about different identity-based groups.
Use the bully pulpit to encourage greater transparency from social media companies
Candidates should advocate for stronger accountability and transparency standards from technology companies. Congress should work closely with companies to ensure they have updated and accurate information on the state of misinformation campaigns targeting different identity-based groups in America.
[iii] QAnon is a baseless internet conspiracy theory whose followers believe that a cabal of Satan-worshipping Democrats, Hollywood celebrities and billionaires runs the world while engaging in pedophilia and human trafficking. For more, read: https://www.adl.org/qanon
[xi] Twitter usernames have been anonymized and replaced with pseudonyms as a safety protocol
[xxiv] Tweets collected only include posts that explicitly mention the incumbents’ Twitter handle using the @ mention function, and do not include replies to tweets that are not explicitly directed at an incumbent.
The Robert Belfer Family
Dr. Georgette Bennett
Craig Newmark Philanthropies
The David Tepper Charitable Foundation Inc.
The Grove Foundation
Joyce and Irving Goldman Family Foundation
Horace W. Goldsmith Foundation
Walter & Elise Haas Fund
John Pritzker Family Fund
Righteous Persons Foundation
Alan B. Slifka Foundation
Amy and Robert Stavis
Zegar Family Foundation