Deplatforming: Why It May Not Be the Solution We Need

Sanjana Samudrala

March 13, 2023

Media has always played a large role in society, which is why the fight for representation in media is so fierce. Whether it is diverse casting in the entertainment industry or a diversity of ideas in journalism and news, the common thread is representation that is accurate and non-offensive, particularly for marginalized groups. Social media has always been a platform for people to express themselves, their opinions, and their ideas on their own terms. In today’s polarizing climate, however, these ideas often clash, and the lines between information and opinion become blurred.

Whether it be Twitter, TikTok, or Instagram, social platforms now play a large role in how information spreads, and especially in how it is consumed. According to the Pew Research Center, about 48% of U.S. adults say they get news from social media “often” or “sometimes”[1]. With social media becoming everyone’s favorite source of news, we run the risk of harmful rhetoric and misinformation spreading unchecked.

A prime example of this is QAnon, “a decentralized, far-right political movement” built around conspiracy theories. It originated in 2017, during the presidency of Donald Trump, a figure known for harmful and violence-inciting posts on social media, “as illustrated in the tweets about ISIS, the Mexico Wall, and terrorism”[14]. QAnon’s popularity grew in 2020, during the Covid pandemic, when it became a safe space for anti-vax conspiracy theories, among others. The general climate of anxiety and uncertainty during this period only allowed QAnon to thrive, and its ideas and posts skyrocketed across social media platforms: QAnon-related posts increased “nearly 175 percent on Facebook, 77.1 percent on Instagram, and 63.7 percent on Twitter”[7]. QAnon has since become a hub for misinformation, spreading lies and conspiracy theories that hurt people all across the world.

As a result, social media platforms use a process called deplatforming as a way to control this. Deplatforming is the removal or banning of a registered user from a mass communication medium, such as a social networking site[2]. Although the process has a simple definition, it has become heavily politicized and generates considerable discourse whenever an influential figure has a social media account revoked. Given all the division deplatforming causes, it’s only fair to ask: how viable a solution is it?

To understand deplatforming, it’s important to frame it in the context of how social media platforms operate. Platforms are run like businesses, meaning they are profit-driven. The social media business model relies on leveraging individual users’ data to push highly personalized content in order to maximize scroll time, which can result in increasingly extreme content being pushed as users continue to consume related media[3]. Take YouTube, for example. The YouTube algorithm works by identifying the type of content a user is watching, then matching it with other videos and channels that post similar content[12]. Consistent with this, studies show that “YouTube’s recommendation algorithm leads people to extreme content”[13]. Although this kind of individualized content has become extremely radicalized, and, frankly, borderline harmful, major tech companies have sought to regulate it by deplatforming individuals and removing accounts. However, this regulation is definitely not a result of tech companies acting out of the goodness of their hearts.
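To make that feedback loop concrete, here is a minimal, hypothetical sketch in Python of how an engagement-driven recommender of this kind could work. It is not YouTube’s actual system: the topic tags, engagement scores, and function names are all illustrative assumptions. The structural point is that each recommendation a user accepts feeds back into their profile, narrowing what gets surfaced next.

```python
# Hypothetical sketch of an engagement-driven recommender loop.
# This is NOT YouTube's actual algorithm; the names and scoring are
# illustrative assumptions meant to show how "watch -> recommend
# similar -> watch" can narrow a feed toward ever more similar
# (and potentially more extreme) content.

from collections import Counter

def topic_profile(watch_history):
    """Build a simple interest profile: how often each topic was watched."""
    return Counter(topic for video in watch_history for topic in video["topics"])

def score(video, profile):
    """Score a candidate by overlap with the user's profile, weighted by
    an engagement signal (e.g., average watch time)."""
    relevance = sum(profile.get(topic, 0) for topic in video["topics"])
    return relevance * video["engagement"]

def recommend(candidates, watch_history, k=3):
    """Return the k candidates most similar to what the user already watches."""
    profile = topic_profile(watch_history)
    return sorted(candidates, key=lambda v: score(v, profile), reverse=True)[:k]

if __name__ == "__main__":
    # Each accepted recommendation would be appended to the history,
    # so the profile keeps reinforcing itself -- the feedback loop
    # described above.
    history = [{"topics": ["politics", "commentary"], "engagement": 0.9}]
    candidates = [
        {"topics": ["politics", "outrage"], "engagement": 0.95},
        {"topics": ["cooking"], "engagement": 0.4},
        {"topics": ["politics", "commentary", "outrage"], "engagement": 0.99},
    ]
    for video in recommend(candidates, history):
        print(video["topics"], video["engagement"])
```

Because the ranking multiplies similarity to past viewing by engagement, the most engaging content inside an already-narrow interest cluster keeps winning out, which is the dynamic critics describe when they talk about algorithmic rabbit holes.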

Like individuals in any society, social media networks have rules to follow. Each country has its own laws and regulations, but certain material, such as child sexual abuse content and terrorist content, must be removed everywhere. Beyond that, media companies have a right to portray themselves however they want through their business model: they may organize their platform to present the content they feel best matches their mission and values[4]. Twitter’s mission statement, for example, is to give everyone the power to create and share ideas and information instantly, without barriers[5]. As a result, Twitter has become one of the largest platforms for sharing information and posting freely.

Nevertheless, no law requires social media networks to deplatform accounts that post polarizing and hateful content. It is, however, a common business practice to act against hateful and harmful behavior in order to appear more inclusive and, in turn, continue generating revenue[4]. This is one of the reasons we see individuals like Kanye West, Alex Jones, and Steven Crowder repeatedly banned from various social media platforms. This form of regulation has caused a lot of controversy, the most prominent claim being that it infringes on First Amendment rights.

Many people argue that deplatforming is a form of censorship, pointing to the type of content being removed and the individuals being banned, which is a focal point of the controversy. But the First Amendment has always applied, and still applies, exclusively to the government: it is meant to prevent governmental censorship. Social media platforms are not bound by it, as they are private companies with their own rules and regulations. There remains, however, the question of political bias and how it plays into deplatforming.

It’s no surprise to most people that Republican agitprop is more likely to be deplatformed than liberal or left-wing rhetoric. Some argue this reflects liberal bias; others argue that right-wing rhetoric simply tends to be more extreme. In fact, there are many instances showing that right-wing accounts actually receive favoritism on social media.

During a House Oversight Committee hearing on February 8, 2023, Representative Alexandria Ocasio-Cortez pointed out Twitter’s favoritism toward radical, and harmful, right-wing propaganda. Twitter was under fire for not taking down Libs of TikTok, an account known for anti-LGBT+ rhetoric, including threats of violence, most recently bomb threats against children’s hospitals[8]. Former Twitter employee Anika Navaroli also testified that Twitter changed its terms of service so that it would not have to delete a racist tweet by Donald Trump telling the Squad to “go back to where they came from”[9]. Anti-immigrant rhetoric has detrimental consequences for immigrants, especially people of color in America, and has often incited violence against marginalized communities. Elon Musk has also reinstated the accounts of right-wing influencers Jordan Peterson and the Babylon Bee, both known for harmful rhetoric against the LGBT community, especially trans people[10]. In each of these cases, a large social media platform refused to remove right-wing agitprop despite its clearly harmful, violence-inciting nature.

Right-wing media has always been favored, so for the majority of political bans to fall on conservatives can only mean that the rhetoric being spouted is extremely vile and dangerous. The solution tech companies have settled on for this disinformation is deplatforming, which more and more platforms are adopting. But this raises the question: is deplatforming all that useful?

Many studies show that deplatforming doesn’t really solve anything. One study “found that being banned on Reddit or Twitter led those users to join alternate platforms such as Gab or Parler where the content moderation is more lax”[6]. The idea behind deplatforming is that it helps mitigate the spread of misinformation, but if platforms like Gab allow harmful rhetoric and misinformation to spread freely, how effective can it be? Right-wing movements like QAnon have already shown us how susceptible society can be to misinformation, so is deplatforming really the solution?

The issue at hand is inherently a lack of education. With education come more diverse voices, and more diverse voices contribute to less harmful rhetoric. Much misinformation can be handled by educating the public about the topic at hand. Deplatforming doesn’t stop the spread of ideas; it just stops people from seeing them on a certain platform. The real solution to misinformation and hate speech is educating people about these topics from a young age. That’s how you combat ideas that hurt others.

The study “Correcting Misinformation — A Challenge for Education and Cognitive Science” discusses how education can repair the damage done by misinformation: presenting a misconception alongside its correction is what underlies “the effectiveness of using refutational materials in a classroom setting,” an approach that “fosters critical thinking”[11]. Critical thinking, especially from a young age, encourages people to research and learn about the issues around them. That, coupled with better education (increased funding for public schools, up-to-date textbooks, diverse reading lists, lessons about marginalized communities, and so on), can cultivate a brighter future and be a major answer to the misinformation and harmful rhetoric plaguing us today. Deplatforming and banning people doesn’t stop the spread of misinformation, because the ideas are still there; it is not a viable long-term solution because it doesn’t address the underlying beliefs that produce misinformation and hate speech.


Image via Pexels Free Photos.

  1. Atske, Sara. “News Consumption across Social Media in 2021.” Pew Research Center’s Journalism Project. Pew Research Center, September 20, 2021. https://www.pewresearch.org/journalism/2021/09/20/news-consumption-across-social-media-in-2021/.
  2. “Deplatform Definition & Meaning.” Merriam-Webster. Accessed February 24, 2023. https://www.merriam-webster.com/dictionary/deplatform.
  3. “Are We Entering a New Era of Social Media Regulation?” Harvard Business Review, December 13, 2021. https://hbr.org/2021/01/are-we-entering-a-new-era-of-social-media-regulation.
  4. Rosenberg, Scott. “Why Social Media Companies Moderate Users’ Posts.” Axios, October 17, 2022. https://www.axios.com/2022/10/17/twitter-kanye-trump-elon-musk-bans-content.
  5. “FAQ.” Twitter, Inc. – Contact – FAQ. Accessed February 24, 2023. https://investor.twitterinc.com/contact/faq/default.aspx.
  6. Kocher, Chris. “Users Banned from Social Platforms Go Elsewhere with Increased Toxicity – Binghamton News.” News – Binghamton University, July 20, 2021. https://www.binghamton.edu/news/story/3178/study-shows-users-banned-from-social-platforms-go-elsewhere-with-increased-toxicity.
  7. “QAnon.” ADL, September 24, 2020. https://www.adl.org/resources/backgrounder/qanon.
  8. “AOC Condemns Anti-Trans ‘Incitement of Violence’ during GOP-Led Hearing on Twitter.” The Independent. Independent Digital News and Media, February 8, 2023. https://www.independent.co.uk/news/world/americas/us-politics/aoc-twitter-libsoftiktok-republicans-b2278478.html.
  9. “AOC Just Proved Twitter Has Changed Its Rules | Voices.” The Independent. Independent Digital News and Media, February 9, 2023. https://www.independent.co.uk/voices/aoc-twitter-hearing-trump-tweet-b2279170.html.
  10. Fingas, Jon. “Elon Musk Begins Unbanning Some High-Profile Twitter Accounts, Starting with Jordan Peterson and Kathy Griffin.” Engadget, November 18, 2022. https://www.engadget.com/twitter-unbans-jordan-peterson-babylon-bee-kathy-griffin-204407440.html.
  11. Rapp, David, and Jason L. G. Braasch. “Correcting Misinformation — A Challenge for Education and Cognitive Science.” In Processing Inaccurate Information: Theoretical and Applied Perspectives from Cognitive Science and the Educational Sciences. Cambridge, MA: The MIT Press, 2014.
  12. Arboleda, Lars. “How Does the YouTube Algorithm Work in 2023?” NapoleonCat, December 21, 2022. https://napoleoncat.com/blog/youtube-algorithm/.
  13. Brown, Megan A., Jonathan Nagler, James Bisbee, Angela Lai, and Joshua A. Tucker. “Echo Chambers, Rabbit Holes, and Ideological Bias: How YouTube Recommends Content to Real Users.” Brookings, December 2, 2022. https://www.brookings.edu/research/echo-chambers-rabbit-holes-and-ideological-bias-how-youtube-recommends-content-to-real-users/.
  14. Gounari, Panayota. “Authoritarianism, Discourse and Social Media: Trump as the ‘American Agitator.’” In Critical Theory and Authoritarian Populism, edited by Jeremiah Morelock, 9:207–28. University of Westminster Press, 2018. https://doi.org/10.2307/j.ctv9hvtcf.13.
