
August 13, 2019
Social media platforms are the new public squares, and their impact on society, for good and for ill, will only continue to grow. At the same time, the way social media companies operate, especially in how they govern hateful, harassing and extreme content, remains alarmingly opaque. Efforts toward public transparency by large social media platforms such as Facebook, Twitter and YouTube, while welcome, have been insufficient to date. For this reason, the first transparency report from the online chat platform Discord, released yesterday, is a welcome addition to the initiatives making tech companies' efforts to combat hate and harassment more transparent, though it has some significant shortcomings.
Discord's transparency report is more granular than those of other companies, offering greater detail about content and behavior. For example, it includes the number of times users flagged doxxing on the platform and the number of times Discord took action as a result. The report also provides helpful descriptions of each type of reported content, making it easier for readers to understand, rather than simply linking back to platform policies and expecting readers to decipher the citations.
At the same time, the report has definite shortcomings. As with transparency reports from other tech companies, it includes no statistics on which communities are being targeted on the platform. It is crucial, especially for vulnerable and marginalized communities, that tech companies report on the degree to which each community is supported and/or targeted on their platforms, so that users can make informed decisions about the online communities in which they participate. Additionally, unlike other transparency reports, it offers little information on the degree to which Discord takes proactive measures, such as using automated tools or other means to detect content that violates platform rules before users report it. To expand on this important effort at transparency, we recommend Discord include these measures in future reports.
A recent ADL survey found that 37 percent of Americans experienced severe online hate and harassment in 2018, including sexual harassment, stalking, physical threats or sustained harassment. The same survey showed that 36 percent of daily users experienced harassment on Discord specifically.
Discord's first transparency report is a meaningful step toward real tech platform transparency, one from which other platforms can learn. We look forward to collaborating with Discord to further expand its transparency efforts, so that the public, government and civil society can better understand the nature and workings of the online platforms that shape, and will continue to shape, our society.