You Purged Racists From Your Website? Great, Now Get to Work

The Covid-19 infodemic taught social media giants like YouTube and Reddit an important lesson: They can—and must—take action to control the content on their sites.
Photograph: Jorg Greuel/Getty Images

For those who follow the politics of platforms, Monday’s great expulsion of malicious content creators was better late than never. For far too long, a very small contingent of extremely hateful content creators has used Silicon Valley’s love of the First Amendment to control the narrative on commercial content moderation. By labeling every effort to control their speech as “censorship,” these individuals and groups managed to create cover for their use of death threats, harassment, and other incitements to violence to silence opposition. For a long time, it worked. Until now. In what looks like a coordinated purge by Twitch, Reddit, and YouTube, the reckoning is here for those who use racism and misogyny to gain attention and make money on social media.

For the past five years, I have been researching white supremacists online and how they capitalize on tech’s willful ignorance of the damage they are causing in the real world. At Harvard Kennedy School’s Shorenstein Center, I lead a team of researchers who look into the fraught politics of online life and how platforms connect the wires to the weeds. Too often, what happens online does not stay online. Relying on media manipulation techniques to hide their identities and motives, a mass of racists came out in public in the lead-up to Trump’s election, including the rise of the so-called alt-right. Because of social media, we are all witnesses to white supremacist violence, including the murder of Heather Heyer in Charlottesville and the attack on Muslims in Christchurch. Researchers, journalists, and activists have fought to expose these networks and provide the basic research needed to detect, document, and debunk disinformation campaigns.

With Monday's expulsion, it feels like researchers, journalists, and activists are finally being heard.

It’s no coincidence that this newfound boldness on the part of social media companies is coming in the middle of a global pandemic. The past few months of work dealing with medical misinformation surrounding Covid-19 has taught these companies an important lesson: They must—and they can—take decisive action to control who and what is on their sites. It’s about time, and it had better be just the beginning.

What exactly happened? First, Twitch removed streamers who had been accused of sexual abuse and even suspended Trump’s campaign account for violating its policy on hateful conduct. The gaming platform has been struggling to contain a Gamergate-like outbreak of misogyny toward female streamers and trans women on its platform. Earlier this year it appointed an advisory council to create policies for community safety, which in turn sparked a backlash from disgruntled users who saw this as foreshadowing censorship. While niche platforms like Twitch get little attention compared with the avalanche of coverage about Facebook and Twitter, Twitch is a central space for young folks who spend countless hours chatting while running around virtual worlds. Like the punk scene in the 1990s, gaming has become prime territory for recruitment of wayward youth who are still forming their political opinions.

Reddit, despite its dark past, has a recent history of quarantining harmful communities containing hate and misinformation. On Monday, it removed more than 2,000 communities for terms-of-service violations, including The_Donald, an infamous subreddit known for building up Trump’s meme army in 2016. Moderators for The_Donald have long planned for this event and are trying to create their own infrastructure elsewhere. Since 2015 a dedicated group of users has organized on r/againsthatesubreddits, working tirelessly to expose the growth of hate on the platform and to keep the community safe for all users. Even after Reddit's recent actions, this group continues its work to ensure that even the smallest communities of racists do not survive the expulsion.

YouTube also took down accounts for several well-known racists, misogynists, and far-right influencers who had been using the platform to discuss their racist political positions and solicit donations. Many wondered why it took so long to remove these accounts, especially since YouTube changed its hate speech policy last year in response to white supremacist content.

Losing access to YouTube, Twitch, and Reddit is not trivial. It means hate-mongers cannot use these mainstream forms of social media to reach new audiences and harass their targets into silence. Because many of the men who were expelled rely on their own names to draw attention and donations, it will be difficult for them to sneak back onto these platforms and develop the same prominence they had before. Already this week, other racist and sexist communities and accounts that produce “borderline content” have been deleting old videos and increasing moderation. Do not mistake this for a “chilling effect,” though; it’s more like an acknowledgement that hateful instigation and harassment of women, people of color, and LGBTQ users will no longer be hidden under the guise of free speech.

Though Twitter and Facebook were not part of Monday’s anti-hate measures, the purge built on weeks of smaller actions they had taken. On Tuesday, Facebook followed up with bans and account removals for individuals associated with the Boogaloo faction, an anti-government group that has been showing up to Black Lives Matter rallies heavily armed and dressed in Hawaiian shirts. The purge also comes amid two public health crises: Covid-19 and systemic racism. Covid-19 appears to have pushed all the tech companies to move more aggressively, and in concert, in recent months; it also laid the groundwork for stronger action against racist content as the nation wakes up to the urgency and ubiquity of white supremacy.

Not so long ago, before the pandemic hit, each platform tended only to its specific user base, keeping up with a triple bottom line by balancing profits with social and environmental impact. Now, having witnessed the terrifying results of unchecked medical misinformation, the same companies understand the importance of ensuring access to timely, local, and relevant facts. Having accepted that truth about medical misinformation, they cannot ignore that unchecked racist and misogynist content is terrifying, too, when it’s left out there for anyone to discover at any time. Sadly, we know the violence it brings in its wake. Sadly, we know that the twin crises of racism and Covid-19 are deeply intertwined.

We have seen purges before from YouTube, Twitter, and Facebook. To maximize the benefits of this kind of action, though, we need a plan for what comes next. We can’t let the gains from this great expulsion dissipate as political pressure mounts on tech companies to enforce a false neutrality about racist and misogynist content.

In April 2018, Zuckerberg addressed Congress and said he would soon have 20,000 people working in security and content moderation. But without a strategy for how to curate content, tech companies will always be one step behind media manipulators, disinformers, and purveyors of hate and fear. Moderation is a plan to remove what is harmful; curation actively finds what is helpful, contextual, and, most importantly, truthful.

Truth needs an advocate and it should come in the form of an enormous flock of librarians descending on Silicon Valley to create the internet we deserve, an information ecosystem that serves the people. The blessing and curse of social media is that it must remain open so we can reap the most benefits; but openness must be tempered with the strong and consistent curation and moderation that these librarians could provide, so that everyone’s voice is protected and amplified.

It is the duty of platform companies to curate content on contentious topics so that their systems do not amplify hate or make it profitable. Tech companies that refuse to adapt for the culture will become obsolete.


WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinions here. Submit an op-ed at opinion@wired.com.