Recently, right-wing internet personality Andrew Tate has made headlines for his near-universal ban from social media. A former producer and kickboxer known for his social media presence and his 2016 appearance on the British reality show Big Brother, Tate has gained notoriety for his extreme misogyny. He has claimed that women “belong to the man” and that rape victims must “bear responsibility” for their attacks; in April 2022, an investigation into rape and human trafficking allegations also found two women held captive in his Romanian home.
Tate has since been removed from Facebook, Instagram (where he had 4.7 million followers), TikTok (where his videos had gathered over 11.6 billion views), and YouTube. His removal has sparked conversations around free speech, online radicalization, and the ethics of corporate censorship of controversial figures.
Indeed, Tate’s ban has incited controversy, with some, such as Jake Paul, claiming that while Tate’s views are unacceptable, his TikTok ban amounts to censorship. Others, such as Andrea Simon, director of the End Violence Against Women Coalition, and NSPCC (National Society for the Prevention of Cruelty to Children) policy officer Hannah Ruschen, have raised concerns over Tate’s influence on young men, warning that his continued presence on these sites may lead to the radicalization of young, primarily male, viewers, a hallmark of a process known as the “alt-right pipeline.”
Yet while Tate may be the most recent example of an extremist figure stripped of his social media platforms, he is by no means a singular phenomenon. His removal is representative of a larger issue: online extremism and the role social media sites play in perpetuating such ideologies. Tate is neither the first nor the last internet misogynist to be barred from social media; still, the size of his audience and the platforms’ former algorithmic promotion of his content have greatly fueled the uproar around his removal.
It may be easy to praise social media companies for taking action and banning an individual who enacted large-scale harm. Yet before these platforms faced calls to remove Tate from their sites, many dramatically boosted his reach through their algorithms. TikTok’s algorithm, in particular, is notorious for promoting content on the basis of engagement: the more likes, comments, and shares a video receives, the more it is recommended through the app’s “For You” page. This process effectively rewards controversial content, where each angry comment boosts a clip’s engagement and reach. Prior to his removal, Tate enjoyed the popularity this engagement afforded him.
This outrage-based attention economy lends itself to polarization and the formation of insular, extremist online echo chambers. In early August 2022, reporters at The Guardian conducted an experiment in which they created a blank TikTok account posing as a teenage boy. After they viewed just two of Tate’s videos, including clips in which he espoused misogynistic beliefs, the app recommended a slew of similar content. The pattern repeated for the next four accounts the reporters tested. If platforms like TikTok were truly concerned with keeping their communities “inclusive and supportive,” why did they not only neglect to address Tate’s account but also aid his rise to fame until faced with public outcry?
Tate’s case serves as a stark reminder of the overarching problems that plague our society today, whether political extremism, online radicalization, or the spread of misinformation. Divisive content is not only attention-grabbing but profitable: the longer we spend writing an angry comment or rewatching a video in shock, the longer we spend scrolling TikTok’s endless hallways or languishing in YouTube’s ad-covered walls. Andrew Tate’s virulent misogyny, rise to fame, and swift removal are symptomatic of the broader ways social media platforms foster extremism, promote dangerous ideologies, and decline to act until confronted with overwhelming public uproar.
As members of this increasingly digital age, we must keep a watchful eye on the landscape of our internet. Structural approaches to media literacy education are deeply needed, as are individual conversations about what role we want the internet to play in our efforts toward a more just and equitable society. At Andover, we may begin by incorporating media literacy into EBI curricula and increasing campus-wide discussion of online radicalization and extremism, such as through ASMs or awareness campaigns. Only by promoting critical thought and consumption can we adequately prepare ourselves to address the nuances of our many-faced internet—this tangled world-wide web.
This editorial represents the views of The Phillipian, Vol. CXLV.