The emergence of AI has delivered a cultural shock so forceful that it has reverberated into the realm of politics. Conflict between political parties is nothing new; however, it has become increasingly harmful and unproductive. A Pew Research Center study shows that the purple middle ground between red and blue belief systems has shrunk by more than half since 1994. More members of both the Republican and Democratic parties have come to see the opposing side as an outright threat to the nation, and this swelling antipathy foreshadows a precarious democracy. Political polarization stems from many civic factors; AI, however, has been no help.
In today’s world, AI chatbots hold substantial power over the beliefs of their users. Especially as the 2024 presidential election approaches, the political leanings embedded in AI widen an already growing political schism by either amplifying or eroding the convictions of chatbot users. As an enthusiast of dystopian worlds such as those of “Brave New World” and “1984,” I find the idea of tenacious echo chambers and extreme political polarization both intriguing and frightening. The connection between AI and extremism has begun to crystallize into a clearer thread, particularly in AI chatbots.
The first question to ask is how AI chatbots came to exhibit political preference in the first place, given that the programs are intended to remain neutral. The core problem with expecting “neutrality” is that it must, by definition, lie in the middle of the spectrum of political beliefs the AI has absorbed. Yet that spectrum is not evenly weighted on both sides. In February, ChatGPT was publicly criticized for being willing to write a poem admiring President Biden but not one admiring Donald Trump. Since then, AI chatbots have repeatedly been accused of, and shown to exhibit, liberal bias. This has driven conservatives away from the most recognized AI chatbots and led them to create programs of their own. As chatbots lose more and more input from conservative users, their spectrum of information tilts toward liberal beliefs, filling with assumptions and stereotypes that are ultimately left leaning.
Nonetheless, this is not to say that the political preferences of AI chatbots are completely out of their manufacturers’ hands. Just a month ago, David Rozado, a social science researcher, conducted a study of large language models (LLMs) that assessed their political preferences using a battery of political orientation tests. The study spanned twenty-four LLMs, including OpenAI’s GPT-3.5 and GPT-4 and Google’s Gemini. The results revealed that most conversational LLMs generated left-leaning responses, and that these leanings appeared only after the models were steered by supervised fine-tuning (SFT), a process in which a model is trained to associate certain inputs with desired answers and labels. Rozado was thus able to argue that SFT has the power to etch political preferences into LLMs.
Regardless of their manufacturers’ intentions, it is undeniable that these chatbots have developed political preferences. When an AI system leans one way, people either adopt the views of its outputs more readily or reject them outright. And as the middle ground of indifference dwindles, polarization rises. The result is grim. Political polarization enervates respect for democratic norms and erodes the judiciary’s nonpartisan character. Most obviously, it hinders the intrinsic human ability to understand one another, and consequently limits the fruitful communication that powers change. This extremism and lack of cross-party understanding causes America’s democratic ideals to wane. Individuals become focused on supporting their own leaders at any cost and stop caring about those with opposing beliefs. To buttress a democratic country grounded in values of equality, liberty, and civility, fighting political polarization is imperative.
AI’s contribution to political polarization matters because AI is not going away. Since we cannot expect AI itself to carry the responsibility of assuaging such polarization, we must look to ourselves. Solutions such as mandatory algorithmic transparency for AI chatbot manufacturers may succeed at spreading awareness, but it remains up to us to understand and respect our own political belief systems. Granting AI chatbots the power to either uplift or degrade our convictions can unconsciously push citizens toward political polarization, a trap that threatens not just ourselves but our nation.