Imagine you’re a surgeon specialising in heart transplants. Your colleague specialises in transplanting lungs. Two patients are brought in: one with a healthy heart and damaged lungs, the other with healthy lungs and a damaged heart. Each urgently needs a transplant, and the only way to save one of your patients is to kill the other and take the organ the first one needs. If you do nothing, both will die. Is it acceptable to sacrifice one so that at least somebody gets to live? You and your colleague need to agree, but this is a case where your principles could easily differ.
Most of us will never face a moral question this harrowing. However, we’ll probably all encounter situations where other people’s fundamental values differ sharply from our own. These situations will only become more common as our societies grow more pluralistic and more value systems have to coexist within them. That raises a difficult challenge: how do we decide what the right thing to do is when we don’t agree on what’s right?
The traditional approach to moral reasoning assumes that if you can just find the right moral principle, you’ll be able to do the right thing. This assumes not only that there is an objective moral truth, but that it is knowable. Its feasibility depends on people either already agreeing on what is morally right in a given situation (allowing them to quickly apply the correct principle) or on their being able to reach agreement on the best principle from an initial position of disagreement. It goes without saying that pluralism increases the chances that people start off disagreeing on moral principles.
Furthermore, psychology suggests that coming to an agreement can be very difficult. We perceive attacks on our core values as attacks on ourselves, which makes us defensive and less likely to listen. Meanwhile, our confirmation bias means that we give more weight to information that supports our preconceptions while ignoring information that contradicts them. Even if we were able to think perfectly rationally, we would still have trouble reaching a consensus on some moral judgements. If there is anything that thousands of years of philosophical debate have taught us, it’s that there are often sound arguments for opposing positions. For example, we could justify using either a utilitarian (the greatest good for the greatest number) or a liberal (personal autonomy is a priority) approach in the moral dilemma presented at the beginning of this article.
This difficulty in reaching a consensus can make it difficult to act. In the dilemma presented earlier, both doctors need to cooperate to maximise the chances of performing the surgeries correctly; if they can’t agree on whether to apply a liberal or a utilitarian approach, they probably won’t be able to. In real life, disagreement on political and moral issues can fuel everything from civil unrest to criminal behaviour and outright disregard for the law. An infamous example of the latter is Prohibition in the US, a constitutional ban on alcohol brought about by feminist and religious mobilisation. A large proportion of the population rejected the ban and the values of its proponents, and so many people ignored the new law that alcohol consumption may even have increased. If we want to take lasting action on moral and political issues, we need consensus, and adversarial attempts to decide whose values are correct are unlikely to get us there.
So what can we do instead? Traditionally, we would assume that there is a correct rule to apply and that it is possible to know what it is. We need to relax this assumption without sinking into moral nihilism (the idea that, objectively, there is no such thing as morality). To do so, we can assume that there is a correct moral rule we need to apply, but that we have no way of conclusively knowing which one it is. In this situation, the best we can do is find a solution that would be consistent with every moral principle we could apply. Where this isn’t possible, we must be content with aligning our actions with as many moral principles as we can.
Let’s return to the thought experiment we introduced at the beginning. Two doctors need to decide whether they’re going to sacrifice one dying patient to save the life of another. Say one of the doctors is a utilitarian: they argue that sacrificing one patient for the other maximises the number of lives saved. The other doctor is a liberal, who believes killing one person to save another is the ultimate infringement of personal autonomy and is therefore unacceptable. There is an obvious solution that could be morally justified under both doctors’ principles: if one of the patients agrees to sacrifice themselves to save the other, we can save the highest number of lives without infringing on anyone’s autonomy.
Readers might be wondering what happens if neither patient agrees to die. Suppose the doctors’ first attempt involves simply asking each patient if they want to sacrifice themselves for the other. There’s a high likelihood that both patients will refuse. Each might be pleased if the other chose to make the sacrifice, as this would allow them to live, but neither wants to guarantee their own death by choosing to die for a stranger. Knowing this, the doctors might offer a different choice: a coin flip, where the winner gets to survive and the loser must sacrifice themselves. As this raises each patient’s chance of survival from 0% to 50%, both are likely to agree to the deal. Whichever of utilitarianism or liberalism is correct, we know we addressed the situation ethically, because our solution is acceptable under both value systems.
What happens if the patients continue to refuse? What if we face a different moral dilemma where we can’t find an acceptable compromise? This is entirely possible. However, in many situations, finding a compromise may be more likely than convincing people to change their moral values. Besides, failing to find a creative solution simply returns us to our original dilemma, no worse off than before. In an increasingly pluralist and increasingly divided society, our first reaction to a clash of values should be to try to understand each other’s positions and reach a mutually acceptable solution. Only once this has failed should we debate whose values are better.