Commentary

The Ethics of Self-Driving Cars

I was on my way home from the San Francisco airport, cool wind whizzing past my ears as I stared at the tall, cramped houses ranging from damp gray to pale pink to the occasional teal or orange. Stuck in traffic alongside me were other cars — normal, for the most part. But one vehicle stood out, not because of the sensor discs spinning all around it, or its particular bulkiness, but rather for its emptiness. Self-driving cars are not common in San Francisco, yet the city has more of them than just about any other in the United States. Chances are, if you live in the area, you’ve gaped at one of the white, camera-covered cars, muttering something good or bad about the advance of technology. But this time, as I stared at the empty driver’s seat, I wondered not about mechanics, but about ethics, and I believe this wondering is vital for the future of safe and ethical roads.

When I was introduced to the trolley problem in middle school, I was stumped. The basic premise, first posed by the philosopher Philippa Foot in 1967, describes a runaway trolley headed for five people tied down to a railway track, unable to move. You, a bystander, can pull a lever and save the five from death — but there’s a catch. On the other track, a single person struggles against their restraints. What do you do?

Around 90 percent of people choose to pull the lever, sacrificing one person to save five. But results vary when the situation is changed slightly. What if, instead of pulling a lever, you had to push someone onto the tracks? What if you had to harvest the organs of one person to save five others? These kinds of questions might seem silly or unrealistic, but with the dawn of artificial intelligence (AI) and self-driving cars, they are becoming more and more relevant.

Instead of a trolley, imagine a self-driving car, and instead of five people tied to a train track, imagine them walking across the street right in front of you, absentmindedly scrolling on their phones. On both sides of you are thick brick walls. If the car malfunctions, who should it kill? The five, or you? Studies show most people agree that cars should prioritize causing the least amount of harm possible, in line with utilitarianism. But those same people would outright refuse to buy self-driving cars that might “sacrifice” them to save others. There are thousands more scenarios — in Patrick Lin’s variation, you are driving in front of a huge pickup truck that threatens to topple onto you. To your left is a motorcyclist with a helmet, whom you would probably seriously injure or kill, and to your right is a motorcyclist without the protection of a helmet, whom you would almost surely kill. Does the car take its chances with the safer motorcyclist, punishing the rider for following the rules, or run over the other one, enacting its own form of street justice? Unlike real-life accidents, where any swerve made in the heat of a malfunction or crash can be chalked up to human error, self-driving cars will be programmed in advance with instructions to turn right into the brick wall or continue forward toward the pedestrians; the deaths don’t feel accidental but deliberate and premeditated.
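To see why those deaths read as premeditated, it helps to picture what the programmed choice might literally look like. The sketch below is purely hypothetical (the function, the scenarios, and every number in it are my own inventions, not any manufacturer’s actual code), but it shows how a crash response, once written down, stops being a reflex and becomes a policy.

```python
# Purely hypothetical sketch: no real vehicle runs this code. The names,
# probabilities, and harm weights below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_deaths: float    # expected deaths inside the car
    pedestrian_deaths: float  # expected deaths outside the car

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the maneuver that minimizes total expected deaths (utilitarian)."""
    # This one line is a moral stance written as arithmetic. Weight the
    # occupant's life more heavily and the same car becomes self-protective.
    return min(options, key=lambda m: m.occupant_deaths + m.pedestrian_deaths)

# The brick-wall scenario from above, with made-up numbers:
options = [
    Maneuver("continue toward the pedestrians", 0.05, 2.5),
    Maneuver("swerve into the brick wall", 0.7, 0.0),
]
print(choose_maneuver(options).name)  # prints: swerve into the brick wall
```

Change one weight in that function and the car’s ethics change with it; that single line is exactly where the philosophers belong.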

Every day, thousands of people around the world die in car accidents, and thousands more are seriously injured. On the streets of San Francisco, any ordinary car is far more likely to kill me than a self-driving one. In the U.S., future self-driving cars are estimated to save around 300,000 lives every decade by reducing fatal traffic accidents by 90 percent. In a way, not investing in self-driving research is its own trolley problem: leaving millions of people tied to the track and refusing to pull the lever. But these ethical questions must be tackled; the trolley problem, once a silly hypothetical, now must have an “answer.”

One thing is clear: we can’t leave this modern-day trolley problem in the hands of companies motivated by the promise of profit in a seemingly lucrative, futuristic industry. Without ethicists working alongside CEOs, our streets could turn into a dystopian horror, where the rich pay for increasingly protective cars and the poor are left in the crosswalk.

This may all seem daunting at first, as it certainly was to me, and I’m not here to propose a simple solution to this complicated problem, as many philosophers before me have tried to do. Yes, these problems are tricky, but they give me hope. If we intertwine ethics with technology, we can create a far more moral world — a world where all drivers enjoy safe roads, in cars whose difficult choices were designed by diverse, complicated humans who just want what’s best for others. This applies not just to passengers of self-driving cars but also to users of social media or ChatGPT. New technology is exciting and promising, but what’s even more remarkable are the people behind it — both the programmers and the philosophers, searching for the impossible answers that will make the world a better place.