
The Modern Ethical Dilemma Posed by Driverless Cars


Source: Getty Images

Pedestrians and drivers share a strange, unspoken trust, upheld by the belief that the right-of-way belongs first and foremost to the pedestrian, a dynamic not unlike the omnipresent “customer is always right” approach of the retail and restaurant world. At times, this relationship even lets a pedestrian break the law and get away with it: even when jaywalking against a red light, a moment of eye contact with the driver, signalling the intent to slow down, is treated as assurance enough to risk one’s life rather than walk to the crosswalk. The now-evolving landscape of contemporary automobiles, however, raises the question: would this faith remain if there were no driver in the car at all?

With advancing artificial intelligence technology and the growing presence of increasingly autonomous vehicles, manufacturers now face the challenge of designing vehicles able to make split-second moral decisions in life-or-death scenarios. For example, if a pregnant woman were still crossing the street as the traffic light turned red against her and an oncoming autonomous car’s brakes failed, should the car strike the woman or swerve into a man standing on the sidewalk, dutifully waiting for the green light? What if striking the pregnant woman were the only way for the car’s passengers to survive? What if striking her would spare an entire crowd of law-abiding pedestrians?

In other words, should the car take a utilitarian approach, choosing to save as many lives as possible, or rely on another school of ethics, such as agent-based theories, judging the decision to swerve by the intent to avoid the pregnant woman, regardless of whether anyone else is harmed by swerving? Lacking access to the personal information of those involved in a collision, such as their age, gender, occupation, and social status, the programmers of driverless cars are left “playing god” in a sense, deciding in advance whose life is more valuable: the passenger’s or the pedestrian’s, and, more specifically, how much weight factors such as gender and age should carry.
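Purely as an illustration (a hypothetical sketch in Python, with invented names such as Outcome, utilitarian_choice, and agent_based_choice, and not any automaker’s real logic), the two schools of thought could be contrasted roughly as follows:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_fatalities: int   # how many people this manoeuvre is likely to kill
    intent_is_avoidance: bool  # is the manoeuvre motivated by avoiding harm?

def utilitarian_choice(options):
    # Utilitarian rule: minimise the total number of lives lost,
    # no matter whose lives they are.
    return min(options, key=lambda o: o.expected_fatalities)

def agent_based_choice(options):
    # Agent-based rule: judge each option by the intention behind it,
    # preferring the manoeuvre motivated by avoiding harm, regardless
    # of how many people are hurt as a side effect.
    motivated = [o for o in options if o.intent_is_avoidance]
    return motivated[0] if motivated else options[0]

options = [
    Outcome("hold course and strike the pregnant woman", 1, intent_is_avoidance=False),
    Outcome("swerve away from her toward the crowded sidewalk", 3, intent_is_avoidance=True),
]

print(utilitarian_choice(options).description)  # hold course and strike the pregnant woman
print(agent_based_choice(options).description)  # swerve away from her toward the crowded sidewalk
```

The point of the toy example is only that the two rules can disagree on the very same facts, which is exactly the disagreement automakers would have to settle in code.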

As a G1 apprentice and Theory of Knowledge student, I recognize this as a variation on the classic trolley problem, which many of my classmates have studied: five people lying on train tracks, about to be struck by an oncoming train, can only be saved if a bystander diverts the train onto a track where it will kill a single person, leaving the bystander with the guilt of responsibility for his actions.

This dilemma especially applies to cars that are only partially autonomous, including current models by Tesla, a company that has faced backlash for choosing not to “use its Automatic Emergency Braking (AEB) system when a human driver is pressing on the accelerator pedal,” according to a Forbes article. Simply put, the car will not automatically brake or swerve to avoid a pedestrian if an inattentive (or texting) driver does not do so himself. This decision, founded mainly on the principle that “by allowing the driver to continue forward, the automaker might fail to prevent that big accident, but it at least has no responsibility for creating an accident,” has resulted in a lawsuit recently filed against Tesla.
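As a rough, purely hypothetical sketch (not Tesla’s actual control code, and with an invented name, should_auto_brake), the behaviour described in the Forbes article amounts to a driver-override condition like this:

```python
def should_auto_brake(obstacle_detected: bool, accelerator_pressed: bool) -> bool:
    # Hypothetical sketch of the behaviour described above: automatic
    # emergency braking is suppressed whenever the human driver is
    # actively pressing the accelerator, so the system defers to the
    # driver's input rather than intervening on its own.
    if accelerator_pressed:
        return False
    return obstacle_detected

# A pedestrian is detected, but the driver keeps a foot on the pedal:
print(should_auto_brake(obstacle_detected=True, accelerator_pressed=True))  # False
```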

Tesla, contesting its proposed “legal duty to design a failsafe car,” takes a non-interventionist approach comparable to the bystander in the trolley problem choosing not to pull the lever, thereby avoiding accountability for the consequences of his decision. However, as in many ethical debates, there is a counterargument, based on the understanding that doing nothing, or not pulling the lever, is a decision in itself.

In these lose-lose situations where harm cannot be prevented, the programming of completely autonomous vehicles becomes even more complicated, as there is no driver left to take responsibility. Interested in “gathering a human perspective on moral decisions made by machine intelligence” in life-threatening scenarios, the Massachusetts Institute of Technology has launched the Moral Machine, an application that lets users select a course of action on behalf of a self-driving car, such as choosing between colliding with a litter of puppies or a pair of twin sisters.

Source: Mercedes-Benz.com

As car companies increasingly advertise “hands-off” driving features and vehicles grow ever more computerized, these once-hypothetical discussions of a futuristic concept have become relevant debates, drawing philosophers and automakers (and TOK students) alike into conversations that may shape the humanity of artificial intelligence. They may also determine whether the irrational faith in cars that defines our “street smart” decisions will persist in the years ahead.

