Students and faculty packed the Tang Institute to listen to the newest installment of the speaker series on the impacts of artificial intelligence (AI). While previous presentations focused on the intersection of AI and education, last Wednesday’s discussion turned to the topic of “moral guardrails” in generative AI with a talk by Kevin Mills, a postdoctoral associate in the Social and Ethical Responsibilities of Computing (SERC) program at the Massachusetts Institute of Technology (MIT).
Reflecting on his own experience growing up and experimenting with technology, Mills highlighted how inaccessible it is for ordinary individuals to develop generative AI models, a barrier that sets these models apart from previous technologies that were open for users to understand and build themselves. He explained that, because of the immense computing power and training data required, creating a technology like ChatGPT is nearly impossible for smaller companies or individuals.
“I grew up in the ’90s, and I’ve always been a tech-y guy. The technical platforms that I interacted with were always open, and I think it’s a good thing that they were. I could poke around under the hood. I could learn how to do things on my computer. And what struck me about AI, at least in its current iteration and in foreseeable iterations, is that it’s not open in the same way because the comput[ing] power that’s involved and the data required to train everything is no longer something people have access to,” said Mills.
Mills continued, “It’s a technology that potentially is only ever going to be usable on the terms of the big companies who corner the markets, and I was just thinking about the implications of that and how it differed from all the technologies I was used to that were very open. And, it just struck me that there are potentially problems here.”
The presentation dove into the ethics of “moral guardrails,” features implemented by developers that make it difficult or impossible to use a technology in ways the developers consider immoral; the content restrictions built into ChatGPT are one example. While there are many reasons to deploy moral guardrails, Mills examined the possibility that such guardrails prevent people from using technologies in ways they morally should be able to.
“In AI, it’s a very small number of companies [that control it], and by the virtue of the technology itself, we’re going to be using it on [those companies’] terms. I’m worried that given this broader push that’s going on in CS [computer science] education and culture more broadly where we are asking developers to take responsibility, and again sometimes with good reason, that we are more likely to see similar guardrails emerge for these platforms where these companies are dictating the terms on which this transformative technology can be used,” Mills said.
Jaylen Daley ’25 attended Mills’ presentation hoping to gain another perspective on the ethics of generative AI. While his initial interactions with generative AI reinforced many of his concerns and fears, Daley said that he found the discussion on moral guardrails informative, helping him reflect on his own position on the development of such technologies.
“There’s a lot of AI and technology innovation going on, and the ethics of it is something that is really important and interesting to me. I’ve seen [posts] online of artists being particularly concerned about its uses, and at the same time, I think it’s very difficult to stop the development of technology. I was hoping to get some clarity and see ways forward that this technology could be used right,” said Daley.
Daley continued, “After ChatGPT came out, that’s sort of like, ‘I can actually look at this, even if there are grammatical errors or it gives me the wrong information, can this still be useful? Is this still something that we can develop and make good with?’ So in regards to the ethical capabilities, I think it really depends on the AI being developed. It’s really hard to make guidelines for how to appropriately use that and give the rights back to the people for whatever they [want to do] within those systems, but at the same time I think there can be a lot that can be streamlined by the development of these AIs.”
Arden Berg ’24, an attendee who took the Ethics and Technology elective offered at Andover this past fall, highlighted the importance of learning more about generative AI and becoming familiar with using it in appropriate contexts. He encouraged other students to take advantage of the special opportunity offered by the Tang Institute speaker series to learn from experts in the field.
“Most students who are using [generative AIs] are just using them. It would be good for them to understand [them] more so that they can see their values, their uses, their pitfalls, and also the value of them in conjunction with learning. Not overusing it and still getting the value of your Andover education while using these learning models when appropriate… People should go to these kinds of talks because this is one of the greatest things that Andover does for us. It’s incredible and not an opportunity that a lot of high schoolers get,” said Berg.