This post first appeared on Items.
In the wake of the January 6 attack on the US Capitol, the role of social media in propagating extremism once again came under scrutiny. However, as Deana Rohlinger’s research demonstrates, stronger moderation policies alone cannot address the many ways that users express political beliefs through online forums. Instead, she argues that additional direct interventions, such as political bias training, are necessary both to protect against extremism and to encourage democratic participation.
Deana A. Rohlinger
On January 6, 2021, the world watched in shock as Donald Trump’s supporters stormed the US Capitol. While some of the mob roamed the halls carrying Trump and Confederate flags and snapping selfies with statues, others seemed poised to escalate the violence. Some of the rioters swept through the building with bats and zip ties, calling for the “traitors,” then-Vice President Mike Pence and Speaker of the House Nancy Pelosi, to show themselves. Outside, gallows were erected near the Capitol Reflecting Pool.
When the siege ended, the finger-pointing began. Social media platforms were among the first to shoulder a portion of the blame. Los Angeles Times columnist Erika Smith opined that Twitter, Facebook, Instagram, YouTube, and Google were responsible for allowing divisive speech and conspiracy theories “to fester and spread online” virtually unchecked. Evidence quickly emerged that users did more than spread conspiracy theories on mainstream social media. They used Facebook to promote the January 6 event, spread memes, organize bus transportation, plot routes to the Capitol building, and circulate rumors about its potential occupation. Unmoderated calls for violence were more prominent on other platforms, such as r/TheDonald and Parler.
A core assumption underlying the conversation following the January 6 event is that extremism can be moderated away.
Over the last year, I have been working with an excellent team of graduate and undergraduate students at Florida State University to systematically assess the characteristics of political expression online and whether moderation affects how individuals express their political identities and views. The first phase of the project analyzes comments posted to news stories about Brett Kavanaugh’s US Supreme Court nomination and the accusations, which surfaced in 2018, that Kavanaugh had sexually assaulted three women. We examined nearly 3,000 comments posted in moderated comment sections on stories about the Kavanaugh nomination in right-leaning outlets (Fox News, Breitbart, Daily Caller, and Gateway Pundit), left-leaning outlets (MSNBC News, HuffPost, Daily Kos, and Raw Story), and more mainstream outlets (USA Today, New York Times, Washington Post, and Washington Times). We found that political polarization and extremism in the United States are not being moderated away.1 Here, I argue that we haven’t thought critically about individual agency and how it affects the expression of everyday extremism on moderated forums.
Individual agency and moderation
Moderation is nebulous territory, in part because it involves censoring thoughts and ideas that are regarded as bad for a community. The problem is that good and bad are not objective categories. Outlets and forum users negotiate their meanings, mutually constituting what is acceptable and unacceptable within an online community.2 The drive to create an authentic, participatory community that represents its users is one reason there is so much angst over moderation strategies, as well as why we see such diverse moderation strategies across forums. Daily Kos, for instance, provides some general guidance on appropriate comments but ultimately relies on community moderation to determine which ideas are desirable and which are unacceptable. Breitbart, in contrast, uses Disqus to enforce its community standards, which state that forum users are not to provide content that “is false, misleading, libelous, slanderous, defamatory, obscene, abusive, hateful, or sexually-explicit.”3 But on a forum that is also committed to promoting freedom through the inclusion of “more voices, not fewer,” what constitutes obscene, abusive, or hateful falls into a vast gray area where the lines between appropriate and inappropriate are unclear. It is in this gray area that individuals find interesting and sometimes creative ways to express themselves.
In my research on individual political expression over the last several years, around issues ranging from the removal of Terri Schiavo’s hydration and nutrition tubes in 2005 to the recent debate over gun control in the wake of the Parkland shooting, I have learned that while some individuals flout the norms of communication in a given forum, others play within the gray area. If researchers want to understand the full range of ways in which political polarization and extremism might be expressed online, we need to think more deeply about how moderation policies and practices create gray areas, as well as how individuals might exploit them in their political expression.
What’s in a name?
My ongoing research on the Kavanaugh nomination suggests that one indicator of user polarization and extremism is commenters’ profiles, which include their usernames and user-selected profile pictures. Even in forums that are fairly well moderated, commenters find ways to express their political points of view. In the majority of forums, usernames are an easy way for individuals to signal their political identities and priorities to others. The New York Times, for example, has a well-moderated comment section and requires individuals to register in advance of posting their first comment. The site gives potential commenters clear guidelines about the kinds of comments it is interested in, but it does not require users to provide their real names. The main suggestion regarding names is that users generally indicate where they live so that their comments may be promoted more effectively. There is no mention of profile pictures on the page.
While this may not seem like much wiggle room when it comes to political expression, individuals use their usernames, and sometimes their photos, to punctuate their political opinions. Names such as “Illinois Moderate,” “DemocratPatr8,” and “Jesse the Conservative” are all intended to make the user’s political orientation clear. Some profiles even seem designed to underscore the political dissatisfaction and anger expressed in their comments. Users with names such as “Tired of hypocrisy” and “Son of liberty,” whose profile also included a portion of the Betsy Ross flag (a symbol that has been associated with the extreme right), criticized Democrats for impugning Brett Kavanaugh’s reputation, called Christine Blasey Ford’s character into question, and suggested that the investigation was a “sordid delaying tactic” and a “sham” that harmed “the reputations of all women who have actually been sexually assaulted.” Another user, “The fix is in,” criticized Kavanaugh’s high school friend Mark Judge, who said he was shocked at the behaviors young men got away with. The commenter noted, “No GOPer, 0.1 percenter or other fraud or puppet pretending to be a genuine ‘Conservative’ instead of a grand scale pain inflicter [sic] and democracy and planet destroyer finds himself shocked anymore at the stuff he gets away with.”

User profiles, and names in particular, seem to take on increased importance in forums that fall outside the mainstream. On Daily Kos, where community standards determine appropriateness, and on Breitbart, where moderation appears fairly lax, usernames become an easy and highly visible way for users to express their commitment to politicians, political points of view, and, potentially, the online community in which they see themselves as members. This appears to be particularly true in right-leaning forums, where users incorporate variants of deplorable (e.g., “AB Deplorable” and “El Gato Deplorable”), conservative (e.g., “CapeConservative” and “Ultracon”), and attacks on liberals (e.g., “libsrnazi,” “Run, snowflakes, run!” “Laughing at Libtards,” and “Libsareclowns”) into their usernames.
Everyday extremism
While most forums note that they will not tolerate name-calling, a fair amount of it happens in comments on all of the forums. More important, there is a fair amount of language that casts political opponents as problematic others who need to be dealt with in some fashion. I call this “everyday extremism” because it blurs the line between political polarization and extremism, and it persists because the language users employ falls into the gray area of moderation. In this gray area, users negotiate not only what it means to be a member of a forum but also how a community can talk about, and presumably think about, its political opponents. More troubling, we find that everyday extremism provides a rationale for the harsh treatment, punishment, or, in some cases, the death of one’s political opponents. Here, I briefly discuss three types of everyday extremism.
Criminalizing Opponents. In all of the news forums, commenters routinely characterized those with whom they disagreed politically as engaging in deceptive behavior that likely violated state or federal law. In discourse surrounding the Kavanaugh nomination, commenters characterized Democrats and Blasey Ford as criminals for everything from promoting a “libelous narrative” about Kavanaugh and committing “perjury” to illegally disrupting the confirmation hearings in a “terrorist effort” designed to reclaim the Supreme Court for themselves.
Pathologizing Opponents’ Behaviors. Another type of everyday extremism fairly common across all of the forums was pathologizing the behavior of one’s opponents. Here, commenters typically cited the negative emotions of their opponents as the source of irrational behavior, which, over time, manifests as mental illness. Commenters most often pointed to “dislike,” “hatred,” and “denial” as the motivation for the “chaotic” and “unreasonable” choices of their opponents and, eventually, the “sociopathic” and “psychotic” actions they take.
Dehumanization. In partisan news forums, commenters stripped their opponents of their humanity entirely and then, more often than not, called for their injury or death. A HuffPost commenter, responding to Republican Senator Jeff Flake’s affirmative vote on Kavanaugh after Flake had called for an FBI investigation into the sexual assault allegations made by Blasey Ford, called Flake “a dog” and said he “should be put down.” On Breitbart, a commenter compared Democrats to “rats” and argued that they “should be exterminated.”
What do we do about extremism?
Globally, social scientists are doing an excellent job identifying the sources of extremism, tracing how it spreads across media systems, and unpacking some of the meanings associated with seemingly benign images and phrases.4 While this research is critically important, it can obscure more commonplace expressions of polarization and everyday extremism hiding in plain sight on mainstream forums. I do not doubt that most news outlets have admirable intentions when they vet moderation services and create moderation practices. The point here is that individuals will find ways to express their political beliefs, and potentially create extremist communities, despite outlets’ best moderation efforts.
This does not mean that we should quit putting time and energy into improving our moderation policies and practices. I support academic calls for algorithmic accountability, which would make the automated decisions of platforms more transparent and hold platforms responsible for the online cultures they help create.5 I would also point to the social science research showing just how important moderation is in the battle against violent extremism. Maura Conway and her colleagues, for instance, find that aggressive account and content takedowns can effectively disrupt extremist communities online and make radicalization, recruitment, and organization harder.6 Likewise, Bharath Ganesh and Jonathan Bright point out that countermessaging and other strategic communication techniques can help curb extremism online.7 However, we cannot focus our energy only on ideologically charged platforms or violent groups. We need to recognize that extremism has become a widespread problem that requires intervention. One way to disrupt the everyday extremism described here is to integrate political bias training into our workplaces. Many occupations already require safety, racial bias, and sexual harassment training; it seems we should also begin to discuss how our deeply held political identities8 affect our professional lives. While this alone is unlikely to solve our political woes, it would represent a clear step toward recognizing a growing problem.
I would like to acknowledge the Institute of Politics at Florida State University and my fantastic research team for their assistance. The team includes Allison Bloomer, Pierce Dignam, Shawn Gaulden, Alex Cubas, Alejandro Garcia, Jade Harris, Emily Ortiz, and Lauren Torres.

Dr. Rohlinger is a professor of sociology and codirector of Research for the Institute of Politics at Florida State University. Rohlinger’s current research explores incivility, polarization, and extremism in individual claimsmaking around political controversies, including Supreme Court hearings and school shootings.