News

CaMD Scholar Sarah Pan ’24 Examines Social Consequences of Artificial Intelligence

Kicking off the series of CaMD Scholar presentations, Sarah Pan ’24 spoke about the risks of artificial intelligence.

In front of a packed Kemper Auditorium, Sarah Pan ’24 delved into the evolution of artificial intelligence (AI), from its first conception to its modern-day societal impacts. Her presentation, “Now Approaching Dystopia: The Social Consequences of Artificial Intelligence,” was the culmination of her research as an Office of Community and Multicultural Development (CaMD) Scholar. 

Tracing the narrative surrounding artificial intelligence, Pan introduced audience members to the technology's origins and how it works. She offered examples of modern-day applications powered by AI and explained the field's explosive progress in recent years.

“Intelligence, as I’m sure Andover has taught us all, is much more than storing and recalling information. As a result, artificial intelligence is about recreating intelligence in ways that are useful for us, whatever that may be… If we look at today, our computers have gotten exponentially faster and the modern neural network, which consists of millions to hundreds of billions of neurons all strung together, is responsible for pretty much everything we know. So if you are familiar with ChatGPT and are familiar with computer vision, which is responsible for things like self-driving cars, these are whole examples of whole neural networks,” said Pan. 

However, according to Pan, the way artificial intelligence functions differs drastically from the way human intelligence does. She explained that systems like neural networks can become “black boxes,” where even experts struggle to understand how the models reach their conclusions, an opacity that can have dangerous ethical implications.

“It is really important to note that neural networks today don’t work anything like the human brain. Individual artificial neurons are modeled after human neurons, but when they are strung together in a deep neural network, they don’t function anything like the human brain does,” said Pan.

Attending a CaMD Scholar presentation for the first time, Kai Wang ’27 commented that he left with a much deeper understanding of artificial intelligence. He highlighted how Pan’s presentation introduced him to ideas and perspectives he hadn’t previously considered.

“Her speech really helped me understand the ethical implications associated with AI development and helped me consider issues like bias in algorithms… I always thought that AI would just eventually bring about the apocalypse on humanity but Sarah, although she did say that it’s always a possibility, I thought [that] she really has a progressive approach to all that AI can do for us… I walked out feeling like I really knew a lot more about the roles AI could serve in human society,” said Wang. 

Suhaila Cotton ’24 described how the presentation introduced her to new ideas and terms in the field of artificial intelligence. She noted how the presentation served as a reminder of the potential that AI has.

“I loved learning about the terms like AI bias and AI complacency, I’ve felt those but I didn’t know what the term for it was. So learning about how people overly trust AI is something I definitely took away from this presentation… Being mindful of our AI usage like ChatGPT [is important], these are real systems that can make our world more efficient and better but they can also make it worse, and it really depends on how we use it and our mindset, so definitely using AI more responsibly [was a takeaway],” said Cotton. 

Pan concluded her presentation with science fiction writer Isaac Asimov’s “Three Laws of Robotics,” a set of rules dictating how robots and artificial intelligence can ethically serve as part of human society. She encouraged attendees to feel a sense of empowerment as individuals through the opportunities created by artificial intelligence.

“‘A robot may not injure a human being or, through inaction, allow a human being to come to harm.’ Ideas that last us long aren’t ones we can solve with simple proof and a new technological breakthrough, but rather, ideas like these persist for a reason. Only while wrestling with them can we create meaningful progress and that is progress both technologically and in terms of understanding ourselves as a sort of species or human. When we seek to imbue intelligence, something we know barely through ourselves, within machinery powerful enough to change the world, we are responsible for confronting the past in our goal to create a future,” said Pan.