
January 02, 2019
By Ahmad Sultan | Associate Director for Research, Advocacy, and Technology Policy, ADL's Center on Technology and Society
Imagine you just moved into a new town. Your neighbor pays you a visit to let you know she’s been worried about the rise in nearby hate crimes targeting people who share your background. Wanting to understand the scope and nature of the problem, you reach out to your local police department to review the town’s hate crime statistics.
The reports provide only two broad metrics: the number of hate crimes overall and the number reported specifically by the police – with no additional information on who is being targeted, how they are being targeted, or how these problems are being addressed. Moreover, your requests for further information are ignored. How would you respond? Hold that thought for a moment.
During testimony before the Senate Intelligence Committee in September, Twitter CEO Jack Dorsey said Twitter was looking into releasing an abuse transparency report that would detail the company’s progress in removing abusive and hateful content. Instead of an actual abuse report, Twitter released a rules enforcement report that includes only two metrics on abuse and hateful conduct. This isn’t surprising to those of us who follow transparency reports published by tech companies. Even when social media companies release metrics on moderation of hateful content, they aren’t terribly informative. We call this “opaque transparency” to highlight the paradoxical nature of these reports.
But real transparency reporting by social media companies is important. In this blog post, I’ll explain why, and what a social media company should do to provide real transparency.
What is a useful transparency report, anyway?
Glad you asked. To have any utility, a company’s transparency report should let users make informed decisions about their continued engagement with a platform and its community. Think of it as a hate crime report for your town, where the section on abuse provides summary stats of hate crimes in your surroundings. In the physical world, such a report, while far from perfect, would contain metrics on the location and type of hate crime, the individual or community targeted, and how the crime was addressed by local law enforcement.
Similarly, for a tech company’s transparency report to be useful, it should provide detail on how the company defines and identifies hate speech, how it moderates hateful content, and the efficacy of the techniques used by the platform’s moderation team. And while not all incidents of hate are crimes, and tech companies are not legally obligated to report these online incidents, doing so is not only the right thing to do; it would also be responsive to concerns voiced by Congress and by organizations working to combat hate.
But in the world of “opaque transparency,” less is more. Less data on hate speech means more user retention. It’s no wonder a 2018 Pew poll gauging public attitudes towards tech companies revealed that only 25 percent of Americans trust that tech companies will “do the right thing” most of the time. It’s not in a company’s best interest for users to be informed: if users knew how pervasive online hate is today and decided to disengage, social networking platforms would suffer.
Opaque transparency reports fuel this lack-of-trust sentiment, a view shared by Patrisse Cullors, co-founder of Black Lives Matter. After the revelation that the suspect in the Tree of Life synagogue shooting in Pittsburgh had allegedly posted racist and threatening comments on the Gab platform before the attack, Cullors authored an opinion piece rightfully calling for greater transparency about the types of hateful content being removed online. She pointed out that such transparency would not only help protect marginalized voices on the internet but would also begin to restore public trust in tech companies’ ability to deal with sensitive issues.
But in the absence of transparency, we’re unwittingly entering potentially toxic spaces that can cause lasting harm. And the truth is we need to do a better job of asking social networking companies hard questions. Otherwise, how can we as users make informed decisions about how to engage on various platforms?
Where are we now?
In its 2018 index evaluating company disclosures, Ranking Digital Rights concluded that technology companies have not provided sufficient information about what hateful content they have restricted or how effective their enforcement processes are. The report highlighted a persistent and widespread lack of transparency on terms-of-service enforcement.
In its latest community standards enforcement report, Facebook provided only two metrics for hate speech moderation: the amount of hateful content its moderation team has acted on, and the percentage of hateful content that Facebook found and addressed on its own, without it being flagged by a user. The only metric for hateful content in YouTube’s quarterly transparency report is the percentage (and total number) of community-flagged content that contained hate or abuse. And while Reddit releases an annual transparency report, it has yet to report any metric specific to hateful content.
The bottom line is that today we are being forced to make ill-informed decisions about the digital social spaces we inhabit, with even less publicly available information than we have for our physical spaces.
Now let’s return to the thought experiment – if you learned that hate crimes were increasing in your neighborhood and in the school your children attended, you would want to know what the police were doing to address the problem. And if the police were being evasive and providing vague responses, chances are you might consider moving to a new neighborhood with less crime.
Similarly, a transparency report from a social media platform can give parents and teachers the information they need to regulate children’s online activity and keep them away from toxic spaces where they could be targeted or radicalized. Detailed reports also help academics and civil society groups research and combat online hate speech.
What should a transparency report include?
In the long run, the lack-of-trust sentiment will harm technology companies, many of which are already dealing with a backlash of regulatory efforts and community outrage. We need to demand accountability in the form of better metrics that provide real transparency while still respecting user privacy; a rough sketch of what such reporting could look like in practice follows the lists below.
Companies should:
- Define what they consider to be hateful content that should be taken down under their terms of use (or the law).
- Report the total amount of hateful content flagged, how and by whom it was flagged, and the amount of hateful content removed.
- Link to a description of how to flag hateful content.
- Accompany each removal with an exact reason that links to the platform’s terms of service.
- State the type of content flagged and removed (image, video, post, comment).
- Further categorize hateful content by the intended target (for example, one or more of gay, trans, Black, Latino, Jewish, Muslim, immigrant, or women users) and by whether it includes profanity or threats.
- Report the number of times that flagged posts and removed posts were shared and viewed.
For targeted communities, companies should:
- Report the numbers and percentages of hateful content targeting each identity group.
- Allow users to filter flagged and removed content by targeted community.
On processes, companies should:
- Report the percentage split among the methods used to flag hateful content. (Currently, content is initially flagged by a user, by the moderation team, or by an automated artificial intelligence system.)
- Allow users to filter these results by identity group, which may reveal that certain techniques work better for certain types of hateful content than others.
- Timestamp removed content, including the dates it was posted, flagged, and removed.
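To make these recommendations concrete, here is a minimal sketch, in Python, of what a machine-readable entry in such a report and its roll-up could look like. Everything here is a hypothetical illustration of the metrics listed above – the field names, categories, helper function, and example values are assumptions, not any platform’s actual schema or API.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Dict, List, Optional


class ContentType(Enum):
    """Type of content flagged and removed (image, video, post, comment)."""
    IMAGE = "image"
    VIDEO = "video"
    POST = "post"
    COMMENT = "comment"


class FlagMethod(Enum):
    """How a piece of content was initially flagged."""
    USER_REPORT = "user_report"
    MODERATION_TEAM = "moderation_team"
    AUTOMATED_SYSTEM = "automated_system"


@dataclass
class RemovalRecord:
    """One flagged piece of content, carrying no account identifiers."""
    content_type: ContentType
    flag_method: FlagMethod
    targeted_groups: List[str]      # e.g. ["Jewish", "immigrants"]
    contains_threat: bool
    contains_profanity: bool
    policy_reason_url: str          # link to the exact terms-of-service clause
    posted_at: datetime
    flagged_at: datetime
    removed_at: Optional[datetime]  # None if flagged but not removed
    views_before_action: int
    shares_before_action: int


def summarize(records: List[RemovalRecord]) -> Dict[str, object]:
    """Roll per-item records up into the report-level numbers proposed above."""
    total_flagged = len(records)
    total_removed = sum(1 for r in records if r.removed_at is not None)
    by_flag_method: Dict[str, int] = {}
    by_targeted_group: Dict[str, int] = {}
    for r in records:
        by_flag_method[r.flag_method.value] = by_flag_method.get(r.flag_method.value, 0) + 1
        for group in r.targeted_groups:
            by_targeted_group[group] = by_targeted_group.get(group, 0) + 1
    return {
        "total_flagged": total_flagged,
        "total_removed": total_removed,
        # Percentage split between flagging methods (user, moderators, automation).
        "flag_method_split": {
            method: count / total_flagged for method, count in by_flag_method.items()
        } if total_flagged else {},
        # Numbers of hateful content targeting each identity group.
        "targeted_group_counts": by_targeted_group,
    }


if __name__ == "__main__":
    # Entirely made-up example entry; the URL and values are placeholders.
    example = RemovalRecord(
        content_type=ContentType.COMMENT,
        flag_method=FlagMethod.USER_REPORT,
        targeted_groups=["Jewish"],
        contains_threat=True,
        contains_profanity=False,
        policy_reason_url="https://example.com/terms#hateful-conduct",
        posted_at=datetime(2018, 10, 1, 9, 30),
        flagged_at=datetime(2018, 10, 1, 11, 0),
        removed_at=datetime(2018, 10, 2, 8, 15),
        views_before_action=4200,
        shares_before_action=310,
    )
    print(summarize([example]))
```

Because entries like these carry no account identifiers, aggregates of this kind could in principle be broken out by identity group, flagging method, and time period without compromising user privacy.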
Don’t get me wrong – I understand there are technical limitations and privacy concerns that can complicate this. It is not simple to define and locate hate speech, or to distinguish it from speech that does not target individuals or vulnerable groups. The problem of identifying and removing such content at scale is a real one – but multi-billion-dollar platforms have the resources to keep pairing machine learning with human expertise to define, interpret, and contextualize hate speech.
Avoidance is not acceptable anymore. Tech companies are realizing that if they want to stop being used as powerful vehicles for spreading hate and extremism, avoid solutions being imposed upon them by law, and otherwise stop the erosion of user trust, they need to put the same resources into self-governance that they put into making profits.
To resolve these challenges, we need to have an open discussion on the merits of specific metrics that should be reported and how to apply them across platforms. If the goal of using social networking is to enrich our lives, then we need to make an earnest attempt to get technology to serve our needs.
Understanding the dynamics of our online communities will bring us one step closer to combating hate, scaling effective responses, and beginning to repair the deep divisions that corrode society.