What If Social Media Was Designed to Help the Average User Rather Than the Average Troll?

  • May 7, 2019

By Miranda Meyerson | Head of Outreach and Partnerships, ADL's Center for Technology and Society

If you belong to the one-third of Americans who have experienced severe online harassment, the following experience might resonate with you: After you upload a photo of yourself to Instagram, your post is immediately flooded with hateful comments about your appearance. The harassment quickly morphs into photoshopped memes that edit your face onto Hitler’s body and spread across social media platforms. You begin receiving frequent death threats that reference your family and disclose your home address. Ultimately, someone files a false report of a serious law enforcement emergency and your home is swatted. Not knowing what else to do, you contact the social media platforms for help. The process of reporting hateful content forces you to relive your experience in detail, and the platforms are slow to respond, if they respond at all, which produces more fear and anxiety. You end up becoming your own personal content moderator, deleting as many hateful comments as you possibly can. But after all of this, you can’t even breathe a sigh of relief: you know it will happen all over again in a matter of time.

The above experience is not a hypothetical narrative meant to illustrate a point; it is the real-life, harrowing story of Brandon Farbstein who, born with a rare form of dwarfism, created an Instagram account at the age of 14 to educate people about his condition. Targets of harassment like Brandon are often forced to ignore attacks rather than subject themselves to the arduous process of flagging content. Social media platforms have established an online reality in which it is easier to harass someone than it is to seek safety. This is unacceptable, and it is just one example of the online abuse pervading our social media spaces. It’s why we need a radical rethinking of the way we treat targets and vulnerable populations online.

Through our work at the Anti-Defamation League’s (ADL) Center for Technology and Society (CTS), we have been in touch with countless targets of online hate. Much of the conversation to date has focused on the speaker or bad actor and the thorny issues we are forced to confront within the context of free speech values. Of course, when targets are forced offline or into self-censorship, that too is a free speech and access-to-information issue; freedom of expression values are not limited to only one side of the debate. But there are also significant problems, and a need for solutions, outside of the free speech context, and seeing these issues from the viewpoint of those targeted by hate can empower us to take action in a way that doesn’t implicate the rights of speakers. Low-hanging fruit that can be far more easily addressed includes improved content reporting processes and user-friendly options that let individuals filter the content they see on their accounts.

Let’s consider content reporting processes. Before targets of online harassment can successfully get platforms to remove hate-filled and threatening messages from their email or social media accounts, they need to screen-capture them, then organize and save the onslaught of unwanted messages as evidence. Next, the target of the harassment usually must mark each message as “harassment” by working through a series of checkboxes. Each platform has its own reporting process, but unfortunately none is worth celebrating. For example, on Twitter a user can report up to five tweets at once. If, like Brandon Farbstein, you’ve experienced severe harassment, you know that does little to relieve a very onerous reporting burden. When a person has received hundreds of threatening messages, or thousands if a harasser engages their network or uses bots to carry out the attacks, being able to report five messages at a time makes little difference. The process of sifting through the countless messages is invariably both traumatizing and time-consuming.

The arduous process of reporting online harassment and the lack of real solutions have had a chilling effect on users’ participation in online forums. This skews online conversations and engagement in favor of those who believe harassment is an acceptable way to behave. It also has serious effects offline and can lead to depression, anxiety, and decreased productivity, to name a few. A recent study found that young targets of cyberbullying are more than twice as likely as their peers to attempt suicide or to self-harm. In a survey of school students, researchers at the Cyberbullying Research Center found that 20 percent of targets of cyberbullying had seriously contemplated suicide, which prompted the authors to coin the term “cyberbullicide.”

Online harassment doesn’t stop after school. In fact, it can have major effects on a person’s desirability as a job candidate, their ability to stay employed, and their ability to excel in the workplace. People have lost their jobs, had offers rescinded, or been fired because employers are unable, or unwilling, to understand the scourge of online harassment or to help the target reclaim their online space and move on with their lives. The story of “Anna Mayer” speaks to this issue. Anna ran a blog where she posted about body acceptance and her struggles with weight, and she soon became the target of a sustained cyber harassment campaign. Years later, she worried about finding permanent employment because much of the harassment was explicitly designed to make her unemployable, with entire websites created to dissuade companies from hiring her. Ultimately, Mayer was forced to include a disclaimer on her resume, warning potential employers about what they might find if they searched her name online.

Though mainstream social media platforms are overlooking this problem, smaller groups of technologists are actively working on solutions. ADL was thrilled to partner with Cornell Tech to create a prototype app to support targets and help them take back their digital spaces, and their lives.

The idea of the app was to give users a tool that monitors their Twitter accounts and splits incoming messages into positive and negative content, using a publicly available AI tool that assesses the likelihood that a message constitutes harassment or bullying. The app would take the negative messages and generate a comprehensive report that could be shared with Twitter, law enforcement agencies, employers, or schools. This report would save people who have been targets of harassment the painstaking hours they might otherwise spend combing through their social media accounts, taking screenshots of hateful messages, and storing them systematically.
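To make that workflow concrete, here is a minimal sketch of the triage loop in Python. The names (`Message`, `triage`, `build_report`), the 0-to-1 toxicity score, and the report format are illustrative assumptions, not the actual ADL and Cornell Tech prototype; the post does not name the underlying classifier, so it appears here as a pluggable `score_toxicity` function.

```python
# triage_sketch.py -- illustrative sketch only, not the actual ADL / Cornell Tech prototype.
from dataclasses import dataclass
from typing import Callable, Iterable, List, Tuple

@dataclass
class Message:
    author: str   # handle of the account that sent the message
    text: str     # message body
    url: str      # permalink, useful as evidence in a report

def triage(messages: Iterable[Message],
           score_toxicity: Callable[[str], float],
           threshold: float = 0.8) -> Tuple[List[Message], List[Message]]:
    """Split messages into 'positive' and 'negative' buckets.

    `score_toxicity` stands in for whatever hosted classifier the app relies on;
    it is assumed to return the likelihood (0.0 to 1.0) that a text is
    harassment or bullying.
    """
    positive, negative = [], []
    for msg in messages:
        (negative if score_toxicity(msg.text) >= threshold else positive).append(msg)
    return positive, negative

def build_report(negative: List[Message]) -> str:
    """Assemble flagged messages into a plain-text report a target could hand
    to a platform, employer, school, or law enforcement agency."""
    lines = [f"{len(negative)} messages flagged as likely harassment", ""]
    lines += [f"- @{m.author}: {m.text!r} ({m.url})" for m in negative]
    return "\n".join(lines)
```

A real implementation would pull messages from the Twitter API and back `score_toxicity` with a production classifier; the sketch only shows the shape of the workflow the post describes: classify, bucket, and generate evidence automatically instead of by hand.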

A similar service, called Squadbox, was developed by the Haystack Group at MIT to clean out an email inbox after an attack. Instead of using AI to separate the messages, however, it allows the user to share their inbox with trusted friends and family, who help catalogue and clean up the hateful messages.
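As a rough illustration of that friend-sourced approach (an assumption about how such a queue might be structured, not Squadbox's actual design), incoming mail can be held in a moderation queue that trusted helpers clear on the target's behalf:

```python
# moderation_queue_sketch.py -- hypothetical sketch of friend-sourced inbox moderation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Email:
    sender: str
    subject: str
    body: str

@dataclass
class ModerationQueue:
    """Holds incoming mail until a trusted moderator reviews it, so the
    target never has to read abusive messages directly."""
    pending: List[Email] = field(default_factory=list)
    delivered: List[Email] = field(default_factory=list)    # safe mail, passed along to the target
    quarantined: List[Email] = field(default_factory=list)  # abusive mail, kept as evidence

    def submit(self, email: Email) -> None:
        self.pending.append(email)

    def review(self, email: Email, keep: bool) -> None:
        self.pending.remove(email)
        (self.delivered if keep else self.quarantined).append(email)
```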

Block Together is another service aimed at helping targets of cyberhate and harassment. It allows Twitter users to share a list of blocked accounts with their contacts and, if a contact agrees, blocks every account on the original list for them as well. The service was designed to reduce the burden of blocking when someone is being harassed by a large number of accounts, or when a small number of accounts are harassing multiple people in a given community. More narrowly, the Cyber Civil Rights Initiative runs a crisis helpline for people who are targets of non-consensual intimate imagery attacks (i.e., revenge porn).
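The shared block list idea can be sketched in a few lines as well. The function below is hypothetical (Block Together's real implementation sits on top of the Twitter API); the platform call that actually performs each block is injected as `block_account` so the sketch stays platform-agnostic:

```python
# shared_blocklist_sketch.py -- hypothetical sketch of subscribing to a shared block list.
from typing import Callable, Iterable, Set

def subscribe_to_blocklist(shared_list: Iterable[str],
                           already_blocked: Set[str],
                           block_account: Callable[[str], None]) -> Set[str]:
    """Apply a contact's shared block list to your own account and return the
    updated set of blocked account IDs."""
    blocked = set(already_blocked)
    for account_id in shared_list:
        if account_id not in blocked:
            block_account(account_id)  # platform-specific call, supplied by the caller
            blocked.add(account_id)
    return blocked
```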

If you’re not one of the countless individuals who experience severe online harassment, you might ask why anyone should need these services, especially when mainstream platforms could build much of what these tools do directly into their products. That is a perfectly reasonable question. Unfortunately, as of this writing, platforms have not taken serious action to address these issues. Until they do, targets of online harassment and internet abuse are forced to choose between leaving the platform, enduring the harassment with the existing minimal tools, or finding external means to manage the health of their online presence, such as the tools and services listed above. And these are only a small sliver of the many tools and services that targets of harassment need. Companies should offer a suite of services to support users who are subjected to hate and harassment, such as filtering options and more streamlined reporting.

Users who want to go about their day without being bombarded by slurs or coordinated attacks should be able to do so. Social media platforms need to dedicate more time and resources to protect the average user instead of emboldening and amplifying the average troll.

For those seeking immediate help, ADL has published a best practices guide for responding to cyber hate, as well as a cyber-safety action guide.
