
Phillipian Commentary: Algorithms of Extremism

The “rabbit hole effect” is a phenomenon on social media platforms in which users are pushed toward ever more extreme content to drive up engagement. Six months ago, YouTube Chief Product Officer Neal Mohan denied its existence in an interview with the “New York Times.” Yet since then, according to the Gun Violence Archive, there have been 302 mass shootings in the United States. Several were committed by known white supremacists, many of whom, such as the El Paso shooter who killed 20 people in early August, have credited social media platforms with shaping their beliefs and motivation. Mohan’s denial of the rabbit hole effect was filled with buzzwords and logical inconsistencies. His case rested on the claim that it is up to the user whether to follow an extreme path of content, an argument that ignores YouTube’s ‘up next’ and autoplay features, which steer users toward algorithmically selected content. Twitter’s defense of its algorithms isn’t much better: according to C.E.O. Jack Dorsey, Twitter won’t eliminate all white supremacist posts because the algorithms that punish racist content would also punish some Republican politicians.

Denial of the rabbit hole is becoming deadly. To prevent further atrocities, we must examine this effect: why does it exist, whom does it help, and how can we fix it? It isn’t realistic to reject these AI algorithms entirely; they are already integral to our economy and our daily lives. Instead, we should focus on scrutinizing these tools and changing how they are implemented.

Let us first understand how powerful these algorithms are. Three years ago, Cambridge Analytica acquired the personal profile data of 50 million Facebook users. These users (their demographics, connections, and likes) became the ‘training set’ for Cambridge Analytica’s artificial intelligence. This AI profiled users with an ensemble model: an amalgamation of many different machine learning techniques. Ensemble models allow hundreds of different algorithms to ‘vote’ on which data points are relevant, which correlations are true or important, and which demographics are susceptible to different types of persuasion. Cambridge Analytica’s model could predict a user’s political leaning with up to 85 percent accuracy, and models that haven’t been made public are likely even better. These tools aren’t specific to Cambridge Analytica; every social media company relies on an AI system like this one to profile and target users.
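To make the ‘voting’ idea concrete, here is a minimal sketch of an ensemble classifier in Python using scikit-learn. Every feature, label, and model choice below is a hypothetical illustration of how ensemble voting works in general, not a reconstruction of Cambridge Analytica’s actual system.

```python
# A minimal sketch of ensemble "voting," with invented data. Nothing here
# reflects Cambridge Analytica's real pipeline; the features and labels are
# purely hypothetical.
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

# Hypothetical training set: each row is one user's profile
# (age, number of connections, counts of liked pages in a few categories),
# and the label is a self-reported political leaning (0 or 1).
X = np.array([
    [34, 210, 5, 1, 0],
    [19, 890, 0, 7, 2],
    [52,  95, 9, 0, 1],
    [27, 440, 2, 4, 6],
])
y = np.array([0, 1, 0, 1])

# Several different algorithms each learn their own model; "soft" voting
# averages their predicted probabilities, so the ensemble's answer is a
# compromise among many weaker opinions rather than one model's guess.
ensemble = VotingClassifier(
    estimators=[
        ("logistic", LogisticRegression()),
        ("forest", RandomForestClassifier(n_estimators=100)),
        ("bayes", GaussianNB()),
    ],
    voting="soft",
)
ensemble.fit(X, y)

# For a new user, the ensemble reports how confident it is in each leaning.
new_user = np.array([[23, 610, 1, 6, 3]])
print(ensemble.predict_proba(new_user))
```

With ‘soft’ voting, each constituent model contributes a probability rather than a single verdict, so the final prediction blends many imperfect signals into one confident profile of the user.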

Social media platforms commodify their users’ time. Any moment you spend on a platform can be sold, so every tool is used to keep you there longer. Machine learning targets users as individuals; other design choices target universal human behaviors. Pull-to-refresh capitalizes on our love of unpredictability. Like casinos, social media platforms remove stopping cues: click on anything and you are given an endless stream of content without ever moving to a new screen. How that new content is chosen brings us back to the extremism rabbit hole. Most platforms have stumbled upon the same insight: people always want the next craziest thing. Someone who searches for an innocuous political video won’t want to follow it with another just like it; that would be too predictable. They want something a little louder and a little more opinionated. They will click on something more extreme.
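To see why this drift happens, consider a toy sketch, in Python, of an ‘up next’ slot filled purely by a predicted engagement score. The titles and numbers are invented, and no platform’s real recommender is being described; the point is only that if louder content reliably earns longer predicted watch times, an engagement-maximizing ranker favors it without any explicit notion of extremism.

```python
# A toy model of the incentive described above, not any platform's actual
# recommender. Assume each candidate video carries a predicted watch time
# for this user, produced by some engagement model.
from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    predicted_watch_minutes: float  # hypothetical output of an engagement model

def pick_up_next(candidates):
    # The objective is pure engagement: whichever video the model expects
    # to keep the user watching longest wins the "up next" slot.
    return max(candidates, key=lambda c: c.predicted_watch_minutes)

candidates = [
    Candidate("Measured policy explainer", 3.2),
    Candidate("Heated debate highlights", 6.8),
    Candidate("Outrage compilation", 9.5),
]

# If louder content reliably earns longer predicted watch times, this loop
# drifts toward it even though "extremism" appears nowhere in the code.
print(pick_up_next(candidates).title)  # prints "Outrage compilation"
```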

Anyone who has been on YouTube knows viscerally how easy it is to be tempted by these features. The algorithms are simply too good: 70 percent of viewers on any given video arrive there from the recommendation sidebar. Executives from major social media companies, including Facebook, have confirmed that their platforms are purposefully addictive, and a quick search for ‘how to quit social media’ yields 182 million results. With a system this effective at pushing you to consume content, it is no wonder that young men searching for a belief system find extremism on social media.

This is a huge problem. Over the past few months, several perpetrators of racially motivated mass shootings have specifically stated that they got their views from social media, and the number of hate crimes committed in the U.S. has been rising for four years. We have a problem with extremism in the United States, and the rabbit hole is clearly one of its causes. Yet the leaders of these social media companies refuse to acknowledge it. Every social media giant has largely eliminated ISIS content from its platform; not a single one has taken similar steps against white supremacist content. The main reason is that any ban severe enough to solve the problem would also restrict freedom of expression by sweeping up many users who are simply discussing politics. Such a ban is against everything that social media stands for, and it’s unrealistic to expect the industry to institute it.

Thankfully, a ban is not the only option. We don’t have to get rid of the content; we just have to change the way people get to it. Altering the rabbit hole algorithms could have an enormous impact on stopping people from being swept away by extremism. If users are nudged in a different direction, rather than toward something more shocking and extreme, they would have the chance to form beliefs they have more control over, beliefs that push them toward peace rather than violence. These companies would lose enormous amounts of money, but if the change stops even one person from developing extremist beliefs, it is undoubtedly worth it.
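As a rough sketch of what ‘changing the way people get to it’ could look like, the toy ranker from above can be given a second, entirely hypothetical signal: a sensationalism score that is subtracted from the engagement prediction. This is not how any platform actually works; it only illustrates that recommendations can be redirected without deleting any content.

```python
# The same toy "up next" choice as before, now with a hypothetical penalty on
# a sensationalism score. A sketch of the kind of change proposed above, not
# a description of any real platform's ranking.
from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    predicted_watch_minutes: float
    sensationalism: float  # hypothetical 0-1 score from a content classifier

def pick_up_next(candidates, penalty_weight=8.0):
    # Instead of maximizing watch time alone, subtract a cost for content the
    # classifier flags as escalating; the loudest video no longer wins
    # automatically, even though nothing has been removed from the platform.
    return max(
        candidates,
        key=lambda c: c.predicted_watch_minutes - penalty_weight * c.sensationalism,
    )

candidates = [
    Candidate("Measured policy explainer", 3.2, 0.1),
    Candidate("Heated debate highlights", 6.8, 0.5),
    Candidate("Outrage compilation", 9.5, 0.9),
]

print(pick_up_next(candidates).title)  # now prints "Heated debate highlights"
```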