The Limits of Deontology and Utilitarianism in the Trolley Problem
Introduction
The trolley problem is an old moral quandary that essentially has no right or wrong answer. It is a kind of worst-case scenario in which one must choose the lesser of two evils. For example, a runaway trolley is set to crash and kill five people, but by throwing a lever one might spare those five while taking the life of one innocent man crossing a connecting set of tracks. Is there a morally right or wrong answer to the question? And how does it apply in the case of self-driving cars? How should an engineer program an autonomous vehicle to respond to such a worst-case scenario? Should the machine be programmed to swerve and take the life of an innocent man on the sidewalk so as to avoid taking the lives of five people dead ahead who have stopped for some reason? Or is such a scenario even worth thinking about? The reality is that the trolley problem is more useful as a philosophical tool for identifying the differences between various ethical perspectives, such as utilitarianism and deontology (Carter). Outside of that exercise, it has little merit.
At the end of the day, the engineer of the self-driving car will have to decide which ethical perspective guides him and then program the machine accordingly. As Nyholm and Smids point out, apart from the legal ramifications of how an engineer programs a self-driving car, the morality of solving the trolley problem is too elusive to settle: it is obviously important to take ethical problems seriously, but "reasoning about probabilities, uncertainties and risk-management vs. intuitive judgments about what are stipulated to be known and fully certain facts" is simply not something that can be effectively left up to a machine guided by pre-programmed data (1). People make the mistake of thinking logic and reason can be applied to machine learning, forgetting that long before the deontological and utilitarian frameworks there existed the classical ethical theory of virtue ethics, i.e., character ethics. It is the argument of this paper that character ethics is the best approach to solving moral dilemmas for humans. Leaving morality up to machines is no way for anyone to have to live.
The Self-Driving Car Is Worse than a Trolley Problem
The trolley problem is an ethical puzzle. The self-driving car is a dangerous reality already unfolding in today's world. As Himmelreich points out, the trolley problem pales in comparison to the ethics of autonomous driving. Essentially, the self-driving car is a deadly object hurtling forward through space and time, whose safety depends upon the strength of the programmer's skills and the technology's efficacy. Accidents happen all the time. Teslas are notorious for crashing in Autopilot mode. They are sold as being fully self-driving, yet a German court recently found that Teslas are not self-driving and that marketing them as such is false advertising (Ewing). Does the world need more false assurances and a false sense of safety? Should people really put so much trust in machines? The usual argument is that "planes basically fly themselves these days." But the argument is disingenuous: planes may be largely flown on autopilot, but there are always real pilots in the cockpit who are trained to fly the plane should they actually need to. It is when the autopilot function cannot be overridden by the real, live human pilots that bad things happen. See Boeing's share price and reputation for evidence of that.
People look at the self-driving car and say that it makes life easier: they can sleep on the way to work or read a novel. The reality is that self-driving is just a novel technological development that has not yet been perfected (and likely never will be, which is why pilots are still required for air travel). Trusting one's life and the lives of others to a machine programmed by a programmer from some other part of the world, a programmer who will never be held accountable should an accident occur, is the height of absurdity. Human beings are capable of reason and have free will, but they often act irrationally and seem desperate at times to give up the use of their free will and make themselves into slaves, whether of passions, of other men, or of machines. From a virtue ethics standpoint, men should be reluctant to give up their own autonomy to a robot; yet with self-driving machines, they are asked to do just that. Thus, the self-driving car is a greater moral problem than the theoretical trolley problem. But even the trolley problem is problematic.
The Problem with the Trolley Problem
The trolley problem only begins to explore the differences among a few ethical theories. It can help one to juxtapose, for instance, utilitarianism with deontology. Deontology is an ethics of duty: it posits that the most moral course of action is the one that accords with one's duty. What is one's duty in the trolley problem? Should one save the innocent life or let the greater number of people die? Conversely, utilitarianism posits that the greatest moral good is that which benefits the greatest number of people. Utilitarians typically argue that the trolley problem is solved by saving the greatest number of people; deontologists argue that one has a duty to preserve innocent life and thus that the one individual cannot be sacrificed to save the others. Approached in this manner, the trolley problem is played out and serves its hypothetical purpose. The problem is that it does not really allow one to look at the issue from the standpoint of virtue ethics, which is concerned with character formation and the ideal good. Virtue ethics asks what makes a person better; its standards are deemed universal and eternal, and its inherent assumption is that man is more than matter. This matters because the trolley problem attempts to box one into a way of thinking that does not align with virtue ethics. Virtue ethicists would argue that the trolley problem is irrelevant: either course of action could be justifiable based on one's perceptions at the time, and neither could be seen as besmirching one's character. Why then debate it?
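To make the contrast concrete, the two frameworks can be written out as competing decision rules. The short Python sketch below is purely illustrative and works only under the problem's stipulated, fully certain facts: the Outcome class, the two choice functions, and the death counts are hypothetical constructions for the lever case, not a claim about how any real vehicle software works.

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        """One available action and its stipulated, fully certain consequences."""
        action: str
        deaths: int                # lives lost if this action is taken
        sacrifices_innocent: bool  # does the action actively kill a bystander?

    def utilitarian_choice(outcomes):
        # Utilitarian rule: the greatest good for the greatest number,
        # reduced here to minimizing total deaths.
        return min(outcomes, key=lambda o: o.deaths)

    def deontological_choice(outcomes):
        # Deontological rule: the duty to preserve innocent life forbids
        # actively sacrificing the one, even to save the five.
        permissible = [o for o in outcomes if not o.sacrifices_innocent]
        candidates = permissible if permissible else outcomes
        return min(candidates, key=lambda o: o.deaths)

    trolley = [
        Outcome("do nothing", deaths=5, sacrifices_innocent=False),
        Outcome("throw the lever", deaths=1, sacrifices_innocent=True),
    ]

    print(utilitarian_choice(trolley).action)    # -> throw the lever
    print(deontological_choice(trolley).action)  # -> do nothing

Notably, the sketch settles nothing: each function simply hard-codes whichever framework its programmer chose in advance, which is precisely the point this paper makes about leaving morality to machines.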
Self-driving cars, or vehicles with an automated driving function, have been promoted in the media ever since the rise of Elon Musk and Tesla began dominating news cycles. Numerous businesses have gotten on board with the idea of driverless cars: companies like Uber, Kroger, and others have all signaled that they are exploring the technology to see how they can make it work for them. Autonomous vehicles are viewed as the next technological evolution, as inevitable as sunshine after a downpour. And yet what is the real contribution of autonomous driving to society? Does it make the roads safer? A quick glance at the history of Tesla accidents (and fatalities) when the vehicle was in what has been called "autonomous" mode might reveal the extent to which the technology simply is not safe. But is the technology even moral? From what ethical perspective should it be judged? It could be said to have utility if it benefits the common good, but what about its impact on the human character? From a virtue ethics standpoint, the self-driving car could be said to be dehumanizing, removing from man's control the ability to manage a vehicle on his own and making him reliant and dependent upon a machine for what he could do for himself. The trolley problem applies to the issue of self-driving cars in the sense that one has a duty to protect society from anything that might be harmful to it. The issue is in explaining why self-driving cars are an inherent problem for society.
Objection
The objection to this paper's argument is that machines can be programmed to respond to such dilemmas because they can be designed to think and learn (i.e., machine learning). Therefore, settling the trolley problem is not a waste of time; it can actually help to save lives.
This objection is faulty for one reason: it rests on the assumption that self-driving cars are a good thing for society. It assumes that these cars, carefully programmed, can make roads safer and life better all around. Inherent in this assumption is the idea that man is nothing more than a machine himself, so there is no danger in replacing man with a robot.
Virtue ethics, once again, shows why this assumption is dangerous. Virtue ethics posits that man is a creature whose happiness depends upon his pursuit of virtue, which shapes his character and enables him to become the best version of himself that he can become in this life. The virtues can be known because they are universal and unchanging; they are transcendental in nature. Aristotle and Plato both touched upon them in their philosophical musings (Snow). They are not the only ones to arrive at the concept of virtue ethics, however. Even in the ancient East, it was part of the wisdom handed down to Confucius, who passed it on to succeeding generations. It is an idea as old as humanity itself because it goes beyond material nature and stretches into the spiritual realm. Today's machine-obsessed, materialistic, consumeristic culture often fails to consider the importance of the spiritual element; to its own neglect, it focuses wholly on the machines. But man is more than machine and more than animal, and virtue ethics is the sole reminder of this.
Yet the objector would respond once more that self-driving cars can be safer, regardless of what the nature of man may be. Still, the objection rings hollow. It is like supposing that utopia could be achieved if only the right conditions were met. Besides, even developers say the trolley problem is not very helpful for what they are trying to do (Marshall).
Yes, perhaps with the right lidar equipment and the right programming, autonomous vehicles can be made safer than anything currently driven by man. But history shows that nothing is ever perfect. Boeings sometimes fall from the sky. Teslas sometimes rear-end parked police cars. This is the reality of technology in the modern world. One must assume the responsibility of taking control of one's own life and manning the machines that barrel down the roads. Unless one is running on a controlled loop (and even those are imperfect, as the trolley problem shows), one is never wholly safe. The best course of action, therefore, is to take ownership of one's life and work on developing one's character for the best (Pojman and Fieser). Putting all one's effort into trying to create machines that think, react, and respond like humans, or like the most ideal or perfect human, is simply naïve (Nyholm). Frankenstein tried to create the perfect man and instead created a monster, not because the creation was inherently corrupt but because the creator himself was imperfect and could not give to his creation that which he himself did not possess. Thus, every programmed machine is only as good as its programmer, and no programmer is beyond reproach because no programmer is perfect.
Conclusion
This paper has explained why self-driving technology is a negative contribution to humanity at large: it takes away from man the ability to manage his own destiny and instead puts his life in the hands of a machine. Once this is admitted, the trolley problem essentially fades away. It is no longer a problem that engineers have to consider, because the autonomous car itself is simply an unrealistic solution to the dangers and risks inherent in driving. Death while driving is an accepted risk that humans take each time they get on the road. In life, as on the road, the unexpected can happen. Attempting to program a machine to respond morally to the unexpected is like asking Victor Frankenstein to create a beautiful (in appearance) human being from the body parts of old cadavers. It is not realistic. Human beings make moral or immoral decisions, and in worst-case scenarios they must live with the fact that sometimes it is impossible to decide or even to know what to do in any given situation. This is why virtue ethics is the best approach to living. Instead of trying to address moral problems from the standpoint of how an action impacts the greatest common good or whether it falls within the realm of one's duty, one need merely look at how the action impacts one's own character. Does a particular action make one a better human being, in line with the virtues that define ideal goodness? If so, then it is an action worth taking. Men who are more interested in handing over their sovereignty to machines and hoping that a programmer programmed them correctly are men lacking in character and the noble qualities. Too many Tesla wrecks while drivers assumed they were safely driving on Autopilot should serve as a testament to the risk of putting one's life in the hands of a machine on the road.
Works Cited
Carter, Stacy M. "Overdiagnosis, Ethics, and Trolley Problems: Why Factors Other than Outcomes Matter—An Essay by Stacy Carter." BMJ 358 (2017): j3872.
Ewing, Jack. "German Court Says Tesla Self-Driving Claims Are Misleading." New York Times, 14 July 2020. https://www.nytimes.com/2020/07/14/business/tesla-autopilot-germany.html
Himmelreich, Johannes. "Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations." Ethical Theory and Moral Practice 21.3 (2018): 669-684.
Marshall, Aarian. "What Can the Trolley Problem Teach Self-Driving Car Engineers?" Wired, 2018. https://www.wired.com/story/trolley-problem-teach-self-driving-car-engineers/
Nyholm, Sven. "The ethics of crashes with self?driving cars: A roadmap, I." Philosophy Compass 13.7 (2018): e12507.
Nyholm, Sven, and Jilles Smids. "The Ethics of Accident-Algorithms for Self-Driving Cars: An Applied Trolley Problem?" Ethical Theory and Moral Practice 19.5 (2016): 1275-1289.
Pojman, Louis P., and James Fieser. Ethics: Discovering Right and Wrong. Cengage, 2012.
Snow, Nancy E. "Neo-Aristotelian Virtue Ethics." The Oxford Handbook of Virtue. Oxford University Press, 2018. 321.
