Thursday 27 February 2020

Does the Trolley Problem Present a Reductio Ad Absurdum Argument Against Utilitarianism?


I have been thinking about the Trolley Problem and the Teselecta from Doctor Who. Typically, trolley problems are viewed as thought experiments that highlight the competing ways the predominant Utilitarian and Deontological theoretical traditions go about deriving answers to moral conundrums. In other words, they are meant to provide a focal point for debate between these two traditions. But what I think they actually do is highlight the deficiencies of Utilitarianism.

What trolley problems reveal about Utilitarianism is that factors other than happiness/utility are absolutely necessary for moral decision-making. In the problem's simplest form, choosing between the deaths of five people and switching the track to kill one, a Utilitarian analysis, and I would argue any kind of proper moral analysis, should have one try to save the five. The truly interesting debates arise once one begins to add other factors, such as relationships with the other parties involved. In the empirical studies using trolley scenarios, people substantially change their answers when such factors enter in. And this makes sense if other things can also be considered good besides the simple "happiness-experiencing potential" of the sentient units possibly influenced by one's choices.

A more interesting variation of the trolley problem (this is where the Teselecta part comes in) might be to ask what one should do if each track held only one individual: Hitler on one and Gandhi on the other, both appearing through time vortexes immediately before their actual historical deaths. Your act of killing them would make no significant historical difference. Both will be killed and then swept back to their own time as dead bodies, leaving the subsequent timeline intact. No further lives will be saved or influenced by the act. The train is hurtling towards Gandhi. Should you switch the track towards Hitler? How should you make choices about such, as my son called them, "micro utilities"?

One might argue that the scenario is null: one's choice will make no significant moral difference. Leave the train to run over Gandhi, and then let him be swept back in time to have the assassin's bullet hit his lifeless body and subsequent events unfold. Or switch the track and play a role in "giving one" to Hitler just before he would have given one to himself with his pistol in the bunker. Since there is no significant difference in consequences for history, Utilitarianism might say there is no real moral issue at all. But how could one reach such a conclusion?

If one could reduce suffering even by a tiny amount more, then the maximization of utility in the universe might only be achieved by putting Hitler out of his misery a few moments earlier. Whereas Gandhi, who was conceivably feeling fine before the assassin's bullet felled him, might not be much affected by losing a few seconds of his general equanimity. How could we weigh those few seconds against each other? Is eliminating a few seconds of Hitler's undoubted anguish in his last moments weightier than cutting short a few seconds of Gandhi's good feeling as he basks in the praise of his supporters? What if he happened to be worrying about something? Which tittle of utility should decide the matter?

And how does my own feeling factor in? If I might feel good at giving one to Hitler, would this tip the balance? Clearly it would need to be added to the scale. But what if I experienced discomfort at being forced to make such a decision? Having to think these potentially weighty matters through, especially under time pressure, might have its own displeasure attached to it. I should certainly not let that displeasure rise to a level at which I would tip the balance of the universe's utility lower than it might otherwise be. So perhaps I really should simply make an arbitrary decision to avoid such a possibility? But what of possible regret? That shouldn't be a problem. Just try to make a reasonable estimate of the utility in the time allowed and choose (don't be a Chidi Anagonye).

When dealing with such minuscule amounts of possible utility/happiness, we are faced with a complex decision involving a possible return on the investment of our moral unease and earnest concern. Mere seconds are at issue for the subjects of our decision, weighed against our own discomfort at having to take our basic moral responsibility under Utilitarianism seriously. If the situation were real, such reflections would likely be moot. Aristotle might be right that ethics in real life requires well-established habits rather than a complex accounting of consequences. It is only when the scenario is hypothetical that we can lavish time on trying to think such matters completely through.

But one thing we cannot avoid is at least risking some uncompensated discomfort in undertaking our duties under Utilitarianism, in order to provide the beginnings of an assessment of what the possible return on our initial moral unease might be versus the utility that could be obtained from a possible moral decision. In other words, Utilitarianism demands such an initial investment of our own moral discomfort. We might, after having made that investment, discern, as I suggest above, that it is simply not worth our discomfort to engage intensively with a specific moral problem; but we cannot be guaranteed that we will not overshoot before being able to make that judgement, because we cannot know what our assessment will turn up before we have undertaken it. And if we discover that we have suffered more than any possible suffering we could have prevented (when weighed against pleasure obtained), and then decide to cut our losses, this does not change the fact that Utilitarianism demands of us that we undertake the initial assessment. We must always make such an initial investment of our possible moral discomfort REGARDLESS of the utility that might result.

In other words, seeking to maximize potential happiness cannot be the only criterion worth considering when it comes to fulfilling our moral responsibilities. We would otherwise end up in an infinite regress. Should I make the initial investment of moral discomfort to discern whether I should make the initial investment of moral discomfort? Where could such processes of thought end? Clearly, I need a principle: one has a prima facie duty to risk reducing the world's overall utility, relative to what it otherwise might be, in order to undertake a basic moral assessment of any situation possibly requiring moral assessment. And I should discern whether Utilitarianism is the right way to frame such questions, which will require undertaking even further risks of possibly unrecoverable moral unease, unless I can simply assert that it is impossible in principle for Utilitarianism to be wrong. In other words, we have a prima facie duty to engage in theoretical inquiry of a certain sort, regardless of any possible maximization of utility. As Krishna says, you have a right to your labours, but not to the fruits of your labours.