---
title: "Autism and Trollies - Against Utilitarianism"
date: 2018-02-15T22:57:44Z
tags: []
topics: []
image: /img/jeremys-head.jpg
draft: false
---
In recent years, it has been speculated that Jeremy Bentham was an autist. This speculation arises out of Bentham’s extreme attempts at systematizing human interactions in his formulation of Utilitarianism. Though I realize modern forms of Utilitarianism are much more sophisticated (in their various sociological and econometric guises), I think they all still suffer from the fundamental assumptions laid down by Bentham. In this essay, I will show how one of those basic tenets leads to absurd conclusions, and hides value assumptions imported from other forms of ethics. What better way to do this than with Philippa Foot’s trolley problem, a common modern tool of the Utilitarian?
## Initial Assumptions
- I’m working with traditional Utilitarianism, not any of the more modern econometric notions of Utility. The more sophisticated versions of Utilitarianism would pretend to have an answer to this problem, but I don’t have the space to deal with that here.
- I’m assuming “aggregate” pleasure is what we’re after, and not individual pleasure, since neither Bentham nor Mill was willing to concede to pure individualistic hedonism.
- I’m assuming all the passive participants in the trolley scenario are “blank slates”, and of absolutely equal “value” in some objective sense, in order to force the dilemma (i.e., it wouldn’t be much of a dilemma if the five were orphans and the one was Hitler).
## The Groundwork
Now, Bentham had this idea that we might be able to parse pleasure and pain into measurable quanta. In keeping with the mindset of the time, and in an attempt to take Bentham’s idea to its logical limits (something he often did impulsively), let’s call these quanta “hedons” and “dolors”: hedons are the finite quanta of pleasure (from “hedonism”), and dolors (from the Latin for “pain”) are the finite quanta of pain. For each individual, then, imagine a one-dimensional scale in which the zero line runs through the horizontal center. Zero is equivalent to “indifferent”, anything above zero is equivalent to “pleasurable”, and anything below is equivalent to “painful” (like a barometer that can go into the negative). For example:
{{< figure src="/img/hedonic_scale.jpg" title="Hedonic Scale" >}}
Here, +10 would be something like an orgasm whilst simultaneously eating a custard eclair in a warm Jacuzzi bath, and -10 would be something like having your Johnson burned off with an acetylene torch whilst rabid dogs gnaw your fingers off in an ice storm.
Since we’re assuming “blank slate” participants, everyone starts out at zero (absolute indifference), and everyone has an equal capacity for either +10 or -10. Also, since we’re dealing with aggregates rather than individuals, we need to sum this across all six passive participants, which gives a maximum potential of +60 or -60 for the group (6 people × 10). Lastly, since you can feel neither pleasure nor pain when you’re dead, you cease to count toward the aggregate once you are dead.
In the trolley case, we are assuming that the trolley is going to kill whichever passive participants it strikes, not just seriously maim them. That means whomever it hits is effectively removed from the aggregate of total hedons and dolors available for our “greatest good” calculation. Next, I think it’s safe to assume a reasonably sympathetic disposition in most people, so witnessing a horrible tragedy is going to cause some serious distress. We therefore have to decide how many dolors that amounts to. I am also willing to concede the possibility that the relief at realizing it’s not me that got hit by the trolley will result in the addition of some hedons. Let’s say witnessing the tragedy is equivalent to 2 dolors, and the self-interested relief is equivalent to 1 hedon.
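For concreteness, here is a minimal sketch of this bookkeeping in Python. The constants and the `aggregate_score` function are my own illustrative inventions, not anything Bentham supplies; they merely encode the assumptions above.

```python
# A minimal sketch of the hedonic bookkeeping described above.
# The names and numbers are illustrative assumptions, nothing more.

WITNESS_DOLORS = 2   # dolors for witnessing the tragedy (assumed above)
RELIEF_HEDONS = 1    # hedons of self-interested relief (assumed above)
CAPACITY = 10        # each blank-slate participant ranges from -10 to +10
PARTICIPANTS = 6     # passive participants on the tracks
MAX_POTENTIAL = PARTICIPANTS * CAPACITY   # +/-60 for the whole group

def aggregate_score(survivors: int) -> int:
    """Aggregate hedons minus dolors across the surviving witnesses.

    The dead are removed from the aggregate entirely: they contribute
    neither pleasure nor pain to the "greatest good" calculation.
    """
    return survivors * RELIEF_HEDONS - survivors * WITNESS_DOLORS
```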
## The Experiment
The trolley scenario I face today is as follows:
- (a) If I pull the lever to the left, I drive the trolley over the five passive participants.
- (b) If I pull the lever to the right, I drive the trolley over the one passive participant.
In situation (a), 5 individuals are removed from the aggregate total of hedons and dolors. So, we are left with only one person on the opposite track. He experiences 2 dolors witnessing the tragedy, and 1 hedon of relief, for a total aggregate score of -1 on the “greatest good” scale.
In situation (b), 1 individual is removed from the aggregate total of hedons and dolors. This leaves us with a total aggregate potential of +50/-50 (the five people on the other track). Each of the five experiences 2 dolors at witnessing the tragedy on the other track, for a total of 10 dolors, and 1 hedon at being relieved they weren’t the victim, for a total of 5 hedons. So the basic number-line calculation is: -10 + 5 = -5. In other words, we’re left with an aggregate “greatest good” score of -5.
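To recap the arithmetic in one place (a sketch only, using the same assumed numbers as above):

```python
# Scenario (a): pull left, kill the five -> one survivor witnesses the tragedy.
score_a = 1 * 1 - 1 * 2    # 1 hedon of relief, 2 dolors of distress
print(f"(a) run over the five: aggregate score {score_a}")   # -1

# Scenario (b): pull right, kill the one -> five survivors witness it.
score_b = 5 * 1 - 5 * 2    # 5 hedons of relief, 10 dolors of distress
print(f"(b) run over the one:  aggregate score {score_b}")   # -5
```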
So you see: since an aggregate of one dolor is better than an aggregate of five dolors, it is therefore better, on this calculus, to run over five people than it is to run over one (all other things being equal).
## Interpreting The Results
Now, outside of the framework of Utilitarianism as I have described it here, do I subscribe to this as a reasonable moral theory? Would I actually be willing to run over five people instead of one? In real life, this is a choice I’m unlikely ever to face. But if I did, my response would be driven by psychological and emotional causes, not Utilitarian calculations, which are far too speculative and complex to aid anyone in a moment of extreme stress. Of course, Mill would tell you that constant practice and study would leave you with something like a “second nature” that would respond to such situations. But this begs the question. In any case, I am inclined to refuse to answer trolley scenarios at all.
Firstly, the natural impulse to run over one instead of five has more to do with the contrived nature of the trolley experiment than it does with proving Utilitarianism. Why should we assume “blank slates” are standing on the tracks? What if the five are a euthanasia club awaiting their prize? If you steered the trolley away from them, you would cause great distress, because their wishes would go unfulfilled. On the other hand, what if the one man on the other track is a Nobel-winning agricultural scientist on the verge of solving the world hunger problem? Seems to me, killing five to save him is well worth the cost.
Secondly, these trolley scenarios, and Utilitarianism more generally, pass off individual prejudices as objective values. Who am I to decide which people must die, and which must live? Why is my calculation of what’s more pleasurable in any sense synonymous with the objective discovery of what’s good? Aristotle, for one, would have scoffed at such an equivocation.
Thirdly, the whole scenario implicitly adopts life itself as a value above and beyond Utilitarian considerations of pain and pleasure. In other words, it would be better to be alive and suffering from the loss of a limb due to a trolley accident than to be dead and suffer no pain at all. This value cannot be coherently established within Utilitarianism, and some philosophers have therefore committed themselves to denying it. David Benatar comes to mind: arguing more or less from the same Utilitarian presuppositions I have laid out in this essay, he concludes that the whole of the human race should be rendered impotent, so as to prevent any more human beings from coming into existence, because the accumulated dolors versus hedons (my terms) of existence outweigh the net null of not existing at all.
## The Conclusion
Clearly, any framework for ethical calculus that can lead us to the conclusion that death is preferable to life is fundamentally flawed. Even David Benatar himself asserts that the presently living have some sort of “interest” in remaining alive (confusingly, despite still insisting that their suffering far outweighs any interest that might promote being alive). Worse yet, any ethical system that implicitly requires elevating some individual, or some small group of individuals, to the role of arbiter of an imaginary objective “greater good” is demonstrably a bad thing. The late 19th century, and all of the 20th, is a wasteland of Utilitarian utopianism: giant state bureaucracies filled with officious autistics, and political systems overrun by narcissistic do-gooders, all hell-bent on “making society compassionate”, at all costs.
The trolley scenario I have laid out here is a metaphorical demonstration of just this problem. Utilitarianism, as an ethical system, is at best a decision-making tool to be used in very specific, very short-term situations, after we’ve already established a set of moral presuppositions from which to frame the calculations. The Utilitarianism of this trolley scenario relies on the presupposition of life as a value; specifically, human life. But Utilitarianism as a doctrine need not presuppose any such value. This is why many philosophers criticize Utilitarianism for failing to properly protect rights: they are intuitively recognizing the fact that Utilitarianism is anti-life. When human lives themselves become an expendable means to some other, greater abstract goal, the ethical system that led us there is highly suspect at best. There are all sorts of other problems with Utilitarianism, but this one is enough by itself to suggest that we ought not adopt it with any degree of confidence.