In Pittsburgh this week, the Jewish community has been burying its dead, the innocent men and women massacred by an anti-Semite while they were praying at the Tree of Life synagogue.
For many of us, social media platforms — such as Twitter, Facebook and Instagram — have been vital ways to connect with people near and far to share our grief, anger and determination as well as ways to take action, from organizing vigils to raising money.
At the same time, these social media platforms are part of the problem. They are often conduits for anti-Semitism and hate, spreading malicious lies and images. Where once an anti-Semite could fester in his or her own hate alone and know that voicing it would be met with near-universal condemnation, now bigots can find a whole community of like-minded individuals online.
Over time, one can easily tailor an online world in which it seems that everyone thinks and talks like you do; indeed, social media platforms often encourage it by algorithmically recommending more hateful friends, tweets and videos. This can lead people to believe that these hateful ideas are normal, emboldening them both online and offline.
That is why what happened in Pittsburgh was shocking but not entirely surprising.
Anti-Semitic incidents — from neo-Nazis marching in Charlottesville, Va., chanting “Jews will not replace us” to physical assaults, vandalism and attacks on Jewish institutions — are on the rise. Last year, the Anti-Defamation League’s audit of such incidents recorded the largest single-year overall increase, 57%, since ADL began this annual audit in 1979.
While many factors are emboldening anti-Semites and bigots, the poisonous swamp of hatred online is one of the most significant. According to a report that ADL released just a day before the Pittsburgh attack, far-right extremists and the so-called "alt-right" have stepped up their efforts on social media to attack and intimidate Jews, and especially Jewish journalists, in the run-up to the midterm elections.
These companies have all taken some measures over the years to mitigate the problem. But those steps are far from sufficient, and hateful stereotypes still circulate widely on their platforms in a way unmatched by any other form of media.
While perfection in fighting hate may be impossible, major improvement from the current state is both achievable and urgent.
It starts with policy. Every platform must have clear terms of service that address hateful content and lay out the consequences of contributing hateful content to their platform. At the very least, they should make hateful content less findable in search and algorithmic recommendations; remove advertisements from hateful content; and not allow hateful content to be monetized for profit. Platforms should never recommend hatemongers' tweets, suggest them as friends, or auto-play their videos.
And they should clearly label content suspected to be from automated "bot" accounts, given that over 30% of anti-Semitic tweets seem to come from bots.
Policy must be coupled with meaningful enforcement. Social media platforms must stop relying primarily on complaints from individual users. Instead, they must proactively, swiftly and continuously address hateful content using a mix of artificial intelligence, machine learning and human labor.
Platforms should focus not only on the speakers of hate, but also on their targets. They must offer a suite of services, from filtering options to easy reporting, to those who want to go about their day without being bombarded with slurs or coordinated attacks.
If there was any excuse before Pittsburgh, there is simply none now.