In this (last) post in my series on the is/ought problem (part 1, part 2, neither strictly necessary, but read them anyway for definitions and background on the problem), I want to approach the problem from a different angle.
Thus, in this post I shall not venture into epistemology, but instead present what I currently see as the best possible defense of holding moral beliefs which are not mere opinions, but potentially binding for others too. (so you can say to another, “don’t steal”, and that has a different feel to it than “don’t make funny faces at strangers” or “don’t order the sushi, order the fried chicken”)
Now, it’s hardly news in the 21st century that without religion and god, the common picture of the world one gets from modern western culture has a significant nihilistic vibe to it. We might speak of morals, but everybody disagrees about them. And the people whose moral convictions seem deepest are precisely the religious people whom western culture has long since proclaimed wrong and out of touch. And if religious morals are shaky, what shall we say about secular ones? A single look at the diversity of the world’s cultures is enough to indicate that there are hardly any sacred moral laws humans all independently agree on.
So, is there anything at all one could say about morality? Is it all just arbitrary social convention predicated on power relations and nothing more? Could we really object, in moral terms, to even the seemingly most horrific of acts? Or is it all just emotional biases upon emotional biases that make us feel like certain actions are genuinely evil? Could one person ever be justified in saying another person’s actions are morally wrong? Or is everything a matter of preference and mere disagreement, essentially no different from supporting your favorite sports team?
It’s not like any of this is counterintuitive. Counter-cultures have always existed, often espousing a morality radically opposed to that of mainstream society. Whether you dislike bourgeois morality, think drug policies are too strict, or think the mainstream is too morally decadent, there is a group of people who think just like you and believe everyone else’s morals are totally wrong. The one thing that unites such disparate groups is this: they all agree that mainstream morality is wrong, and only really supported by convention, tradition and coercion.
To be honest, I have struggled with similar moral questions for years. For a secular person, it’s frankly hard not to be a nihilist, at least in the sense of objective morality. The world just doesn’t ultimately make any sense!
Of course, realizing this is hard on humans, so many people are interested in combating nihilism. I am too. But most arguments against nihilism I have read are just sophisticated ways of saying “do something you like and find meaning through it”. But I already know I can find my own meaning through my hobbies! The question has always been whether there is anything more than that — some grand purpose or end to life itself, something profound that tells us which hobbies to pursue in the first place.
Now, wrestling with nihilism has not made me socially dysfunctional. I clearly know how society expects me to behave and the morality it considers appropriate. I also know what morality it considers inappropriate, but socially acceptable nonetheless (drugs, promiscuity, etc.). I know the rules and I can follow the rules. And yet, I could never quite see what exactly (except a gun or two pointed at the dissenters) made the rules binding for me, you, or everyone else…
Albert Camus said some decades ago:
There is only one really serious philosophical question, and that is suicide. Deciding whether or not life is worth living is to answer the fundamental question in philosophy. All other questions follow from that.
I have always looked at this quote and seen it as one of those provocative statements people make without really meaning them. But lately, I’ve been thinking that Camus might have meant it, and that he might have been right.
What I mean is this: maybe there are moral consequences of choosing not to kill yourself.
Of course, in the abstract, maybe there is no real way of bridging the is/ought divide. However, in practice, the people who are still around to ask this question all have something important in common — they all subjectively chose and continue to choose life over death.
The upshot of this realization is this: one could say that all morality is subjective and up to the individual. And one would be right to some extent (hence, we find other people’s morals subjective and arbitrary). Nonetheless, even subjective morality must be consistent with itself and the choices that enable it. In other words, if you haven’t killed yourself, you have already expressed a subjective preference for life, and that might well be enough to ground some common morality among humans. (choosing to live is to implicitly embrace life as a value; it’s an example of a voluntary choice of an ought not forced on you by anything that is, a choice from which many other oughts could potentially be derived)
Now, such morality is certainly weaker than traditional moral systems. It’s hardly obvious how such morality of life could say much about what kind of sex is allowed, for example. Or when one should fast and when one should eat. Or how often (if ever) one should pray, or anything else like that.
Moreover, such morality certainly requires a slight leap of faith and a basic dose of respect for others. It’s certainly possible to imagine someone choosing life for themselves and yet denying it to others. So a sufficiently committed solipsist or a stubborn nihilist could still claim that murder was pronounced “wrong” merely by fiat. For everyone else, however, the shared commitment to life all humans exhibit might well be enough to talk morals again.
Of course, all this has consequences for the is/ought problem. If life is what we choose and value, then knowing what enhances, improves and preserves life is something we not only want to do, but also ought to do. So, in this view, the work of medics is highly moral. And, presumably, so is exercising regularly and taking care of one’s body (at least as long as one still clings to and values life; see the discussion of depression below).
Naturally, there are delicate questions that arise concerning depression and other mental health problems. If someone becomes depressed and desires to end their life, could others justifiably say to them that suicide in that case would be immoral? The whole premise behind the shared morality was that it is based on a de facto agreement by virtue of everybody having chosen to live. But what if someone changes their mind? Shall we simply let them go?
This is where one pays the price of not having established a fully objective morality, but only a kind of weak communal agreement based on the shared subjective morals of every individual out there. There is nothing one could truly say, in moral terms anyway, to someone in depression who wants to jump off a bridge. The best one could do is try to change their mind through persuasion and love.
So, Camus’s question might not really have an objective moral answer, after all. No one can tell you whether you should depart from this world or not. Others can try to persuade you to stay alive, but the decision is ultimately wholly yours to make. However, if you choose to stay, i.e. choose life and thus indicate that you consider it worthy as a value, then you’ve subjectively bridged your own is/ought gap.
By introducing life (/consciousness?) as a value, you’ve pronounced that your actions can be judged on the merit of how much they respect and preserve life. And since most people alive today are evidently all choosing life simultaneously, then it makes sense to allow everybody to morally judge everybody else’s actions. (at least as far as they pertain to preserving life)
Naturally, there are many questions left unanswered. What actions are moral and what actions are immoral in this view? And how does one deal with the problem of multiple is-es (or, for those with a more common epistemology, the problem of uncertainty)? What if there are facts which, interpreted one way, are good for life, and, interpreted another way, are bad? (e.g. freezing your body when you are 30 in the hope of eventual immortality) And what if a treatment saves 50% of the patients that use it and kills the other 50%, but nobody yet knows why? What’s the moral thing to do when your ill mother asks you to provide her with that treatment?
These questions are certainly important and highly interesting. I might well explore some of them later on in other posts. But I think that even with the potential problems/dilemmas of this moral framework, it still satisfies a thirst for grounding morality in something deeper than mere arbitrary power and coercion.
Moreover, this moral framework has the added benefit of integrating nicely with other moral intuitions we have. For example, ask yourself: would taking away somebody else’s property be wrong in the context of a commune where everyone is explicitly fine with it? One could say theft is theft and the action is immoral regardless of what the victim or the perpetrator thinks of it.
But one could also say that life or consciousness was never harmed in the process and moreover, the same shared preference that justified a prohibition of murder also justifies taking others’ property in a commune too.
If people ever came to share more than a preference for being over non-being, then maybe one could say morality has expanded beyond the mere valuing of life and consciousness. Yet, looking at all of humanity’s conflicts, it can admittedly be hard to imagine any other universal value which (1) all humans could come to share and which (2) the next generation won’t end up rebelling against. And yet, there is something to be said about that cool thing called love. It certainly seems almost as universal a value as any. (and yet, some people would rather be left alone to live all by themselves in nature, away from all other humans; maybe even love is not good enough, after all)
UPDATE: I’ve been thinking about this last point more and it seems reasonable to add wellbeing/pain avoidance to the list. In this way, our moral duties involve not only a concern for the preservation of life, but also a concern for the individual’s greater wellbeing (e.g. lack of unnecessary suffering, meaninglessness and pain).
In conclusion, I’d like to have some fun and explore the connection the ideas above have with Artificial Intelligence. Clearly, the moral framework I discussed above is based on a certain level of self-consciousness and the deliberate choice to stay alive. So if really smart machines are not conscious, then morality doesn’t really apply to them.
But what if the machines were conscious? What if machines come to share goals not limited to simply their being alive, but also contrary to human wellbeing? Whose morals are to take precedence then? Do we revert back to full-scale war? (or do we simply ban research in artificial intelligence, or do we reprogram the machines to have different goals — the machine equivalent of genetic engineering which many people today would likely oppose if it were imposed on them from the outside — etc.)
The thing is, up until this point, we have always been the lone self-conscious creature around. We haven’t had to question what happens when there are more. In his podcasts on the topic of consciousness, Sam Harris has expressed arguments to the tune of there being a moral duty to uphold the interests of the highest form of consciousness. Presumably this means that when highly enlightened robots come around, we should voluntarily sacrifice ourselves to them if the need arises.
I am not convinced. It’s unclear how different conscious forms should interact with each other when their moralities (as instantiated above) clash. Hell, it’s unclear if it makes sense to speak of human morality and robot morality any more than it makes sense to speak of an English morality and a French one. Maybe morality is to be negotiated across all self-conscious agents in contact with each other who choose to stay alive. Which probably sucks for the machines, because humans agree on hardly anything, so the resulting morality won’t ask too much of anyone…
In any case, I shall stop here. I need more knowledge about artificial intelligence before I can discuss it further. But it’s undoubtedly a fascinating topic and one that is clearly of interest to people interested in morality and moral frameworks. Till next time 🙂