I used to be a utilitarian. Now I have become mostly uncertain. I’ve pondered my uncertainty so much that I decided it was the only constant, the only thing that could give me a kind of stability. Uncertainty was the only basis I could use to get somewhere. Epistemological consequentialism (EC) is the philosophy I’m currently trying to shape on that basis. But let’s first consider utilitarianism and then move on to why I started having doubts.
## Utilitarianism

Utilitarianism is an altruistic consequentialist normative ethical philosophy that aims to optimize utility, defined as some kind or set of non-negative mental state(s). Since there are many different kinds and sets of non-negative mental states, there are different strands of utilitarianism – see below. My definition here is a bit of a monster, so let’s break it down:
- Altruistic means that people should care not only about their own personal selves, but also about other sentient entities, and include their mental states in their ethical considerations
- Consequentialist means that what counts are the consequences of actions, and nothing else – at least everything else being the same
- Normative means that utilitarianism is not so much a theory about human behaviour, but a set of statements about how humans should behave
- Ethical means it’s about ideal moral behaviour, which is usually behaviour in a social context
- Philosophy means that it’s really abstract and intellectual stuff that we don’t yet know how to approach with the scientific method
- Optimization of utility can mean maximizing positive utility or minimizing negative utility (which corresponds to negative utilitarianism) or both at the same time (which corresponds to classical utilitarianism)
- Utility is usually thought of as something that can be quantified, or at least treated with some kind of mathematical formalism – at the very least we want to be able to say, at least sometimes, whether something has more utility than something else
- Non-negative mental states can be any of several more specific concepts, like
  - Happiness (-> Hedonistic Utilitarianism)
  - Well-being (-> Eudaimonic Utilitarianism)
  - Satisfaction of subjective preferences (-> Preference Utilitarianism)
  - Satisfaction of subjective desires (-> Desire Utilitarianism)
  - Subjective qualia with positive valence (-> Valence Utilitarianism)
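The talk of “optimizing utility” above can be made slightly more concrete with a toy formalism. This is just a sketch under the (itself contentious) assumption that each sentient being *i* can be assigned a single valence number *v&#8342;*:

```latex
% Classical utilitarianism: maximize the total sum of valences,
% where positive and negative experiences trade off against each other.
U_{\text{classical}} = \sum_i v_i \quad \to \quad \max

% Negative utilitarianism: minimize aggregate suffering only,
% i.e. the sum of the negative parts of the valences.
U_{\text{negative}} = \sum_i \max(0, -v_i) \quad \to \quad \min
```

Optimizing both at the same time, as classical utilitarianism does, means a large enough gain in positive valence can in principle outweigh some added suffering; negative utilitarianism denies exactly that trade-off.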
Using this terminology, I classified myself as a classical valence utilitarian. Qualia are subjective perceptual impressions, the “what it feels like” aspects of perceiving something – for example the colour blue, or being hungry, or having a seemingly clever idea. The valence of a quale is its “how good it feels” aspect. Does it feel good? Do I want more of it? Does it feel bad? Do I want to avoid it? Someone aware of recent findings in neuroscience and psychology might notice that these questions touch on the distinction between the “liking” and “wanting” systems of the human mind. Making this distinction complicates matters even more, and I don’t see a good reason to digress in that direction here.
In very simplistic and raw terms, utilitarianism is about “making everyone feel really good”. Making sentient beings feel better (all other things being equal) is the ethically correct way to act, according to utilitarianism, and doing the opposite is bad. This line of thinking does seem to make a lot of sense and has intuitive and emotional appeal. That alone doesn’t make it objectively correct, however. So, let’s get to my issues with utilitarianism.
## My issues with utilitarianism

First of all, I'm not saying that utilitarianism is wrong or anything like that. I still believe that, if anything, it is probably, on a very abstract level, very close to being an aspect of ethical truth. My thinking goes more in the direction that utilitarianism might be a premature philosophy that could be refined into something better, though I'm not sure what this better philosophy or theory might look like.
### Not everyone is a utilitarian

A very general argument against utilitarianism (which actually applies to every other ethical philosophy) is that not everyone accepts utilitarianism as the ethical philosophy we all should adhere to. In the past I assumed that this rejection stemmed from those people not fully understanding or "getting" utilitarianism. After all, utilitarianism really is an abstract and highly complex philosophy, so it's not surprising that not everyone grasps it easily. What made me stop and reconsider my position was the fact that there are very intelligent and rational people who seem to understand utilitarianism and still don't accept it as their ethical philosophy of choice. They could have chosen to refine utilitarianism in those areas where they disagreed with it, but they prefer other ethical philosophies instead. But why? Of course, I could assume that they are all deluded in some way or have ulterior motives for rejecting utilitarianism, but that would be a rather extreme position to take. Instead, they might actually have good reasons for their choice. And that possibility causes at least some concern for me, even if I happen to disagree with all the reasons they actually express.
### Utilitarianism is kinda complex

Next comes the issue that utilitarianism is actually a class of different philosophies, as we have already seen above. Which one is the correct one? And why? The small number of versions I've mentioned above is only the tip of the iceberg. There are many different attributes any single version of utilitarianism can possess. So, we would have to pick the right option for each of those dozens, hundreds, or even more attributes to get the "correct" version of utilitarianism. The fact that utilitarianism is a philosophy and not a scientific theory makes this observation even worse: how would we be able to agree on the right version, even if we all agreed that utilitarianism was the best ethical theory?
### Utilitarianism struggles with "big cosmologies"

What is a "big cosmology"? This is my own terminology, and it basically means any view of the world that implies that the world is essentially infinite and contains at least almost all combinations of anything you can imagine. The physicist Max Tegmark presents different levels of big cosmologies, which he calls [different multiverse levels](http://space.mit.edu/home/tegmark/crazy.html). Also note that there is a philosophical theory called [modal realism](https://en.wikipedia.org/wiki/Modal_realism) which posits that everything that can possibly exist actually exists somewhere. In that sense, modal realism is pretty much a maximal cosmology. Now, it happens that I have reasons for believing modal realism to be true.
Anyway, what does this have to do with utilitarianism? There’s a Less Wrong article about how big cosmologies, or “big worlds” as they are called there, clash with moral sentiments. It points to some issues that big cosmologies cause for your view of subjectivity and for what you should expect to happen in the future. But it doesn’t do full justice to this immensely difficult subject. Neither can I in the scope of this post, so let’s just mention some problems:
- In a big cosmology you are too small and insignificant to optimize utility over the whole multiverse
- In a big cosmology you can’t know exactly where you are in the huge multiverse, which makes it difficult to say over which “area” of the multiverse you want to optimize utility – so even if you just want to optimize utility locally, it’s hard to define your “local area” in a meaningful sense
- Big cosmologies might be so big that they might even blow up the concept of probabilities – and doing utilitarianism without being able to refer to probabilities is pretty much hopeless
- If about anything that can happen actually happens somewhere, why even bother, if you can’t really change that fact in any meaningful sense?
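The probability problem in the third bullet can be illustrated with a hedged back-of-the-envelope observation – a sketch that assumes, for the sake of argument, that utilities can be summed over an infinite multiverse at all:

```latex
% With infinitely many value-bearing locations, the total utility of the
% world before and after your action is typically divergent:
U_{\text{before}} = \sum_{i=1}^{\infty} v_i = \infty, \qquad
U_{\text{after}} = \sum_{i=1}^{\infty} v'_i = \infty

% so the difference your action makes is of the indeterminate form
\Delta U = U_{\text{after}} - U_{\text{before}} = \infty - \infty
```

However the sums are arranged, comparing infinite aggregates of value leaves the utilitarian calculus without a well-defined answer to “did my action make things better?”.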
For many years I tried to find a way to make utilitarianism work with modal realism in some meaningful sense, but I can’t claim to have arrived at a really satisfying solution.
### The Reparator Paradox

The reparator paradox is a problem that arises for classical utilitarianism (but not negative utilitarianism) when we consider the possibility of powerful beings creating simulations that contain sentient beings. This philosophical problem overlaps with the theological problem of theodicy. If you are interested in such thought experiments, you should read my post:
The reparator paradox made me doubt that “conventional utilitarianism” is an actually robust philosophy that provides clear meaning and orientation.
### Utilitarianism is about mental states, which we don't really understand

We don't really understand how our brains create the thoughts and feelings that we experience. Even worse: we don't actually know, on a solid level, what thoughts and feelings actually *are*. So, how can we try to optimize something when we don't even know what that thing is? At this point, we don't have much more than appeals to intuition.
It would be great if science could solve those mysteries, but at the moment we are not at that stage, so it’s a bit dubious to base our current ethical reasoning on something we don’t understand yet. If anything, this means that utilitarianism should be an ethical system for the future, but not the present.
### Humans are bad at predicting the future, especially future mental states

Humans aren't great at predicting the consequences of their actions, especially when those consequences concern the distant future via indirect effects. And humans are even worse when it comes to estimating the impact of about anything on their emotional states. So, even if utilitarianism were good in theory, it's questionable whether letting humans loose on something as intellectually demanding as utilitarianism would work fine in practice. Utilitarianism didn't even work fine for me on a personal level. Ironically, I generally seem to do better when I actually seek suffering rather than happiness, but that might just be me.
## Meta-ethics

So, if utilitarianism isn't the best ethical philosophy there is, what is? Is there even something like a "right" ethical philosophy? Are there "ethical truths" at all? This is where we enter the realm of meta-ethics, which deals with such meta-questions about ethical reasoning:
Meta-ethics has its own complex terminology, which might be enlightening, but also confusing. Instead, I want to approach the issue from the following question: what are values?
### A dynamic ontology of values

A value is basically a system of thoughts. Such thought systems in their full generality are called [memes](https://en.wikipedia.org/wiki/Meme), a term coined by Richard Dawkins. Let's just say that values are a special kind of meme.
In that framework we can ask how values evolve, especially ethical values. Perhaps applying systems theory would enlighten us about how values interact and change. At the moment, it is apparent that different people have quite different values, but occasionally people agree with one another on certain values for one reason or another. Those reasons could be interpreted as attractive forces that draw the value sets of different people together. Of course, there could also be repelling forces that make people reject certain values they don’t hold themselves.
The question is whether, in the long run, the attractive forces triumph over the repelling forces, so that in the end everyone arrives at the same set of ethical values: a terminal ethical philosophy (TEP). In the language of systems theory, such a TEP would be an attractor: it attracts the value systems of people to itself. The process by which people would arrive at a TEP might be quite complex and might even take aeons, but it would work.
Now, there are deeper questions: Is there a TEP at all? And if yes, is there a single TEP, or are there multiple TEPs, in which case the TEP people end up at would depend on the starting conditions? Well, at the moment we don’t know which is the case. And we don’t know whether we can even know.
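To make the attractor metaphor a bit more tangible, here is a toy simulation – purely illustrative, not a claim about how real value systems evolve. Each “agent” holds a numeric value vector, and a purely attractive force pulls everyone towards the population mean, which then plays the role of the attractor (the TEP stand-in). All names and parameters here are invented assumptions for the sketch:

```python
import random

def step(values, attraction=0.1):
    """One round of value dynamics: every agent moves a small fraction
    of the way towards the population mean - a stand-in for the
    'attractive forces' between value systems described above."""
    n = len(values)
    dim = len(values[0])
    mean = [sum(v[d] for v in values) / n for d in range(dim)]
    return [[v[d] + attraction * (mean[d] - v[d]) for d in range(dim)]
            for v in values]

def spread(values):
    """Maximum Euclidean distance of any agent from the mean -
    a simple measure of how far the population is from consensus."""
    n = len(values)
    dim = len(values[0])
    mean = [sum(v[d] for v in values) / n for d in range(dim)]
    return max(sum((v[d] - mean[d]) ** 2 for d in range(dim)) ** 0.5
               for v in values)

random.seed(0)
# Five agents, each with a random 3-dimensional "value vector".
values = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(5)]
before = spread(values)
for _ in range(100):
    values = step(values)
after = spread(values)
# With purely attractive forces, the spread shrinks towards zero:
# the population converges on a single point, the toy "TEP".
```

With repelling forces added (e.g. agents moving *away* from value vectors beyond some distance), convergence to a single point is no longer guaranteed – which is exactly the open question about multiple TEPs.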
### Meta-ethical optimism

Anyway, I subscribe to a position that I call "meta-ethical optimism": I hope that there is a *single TEP* and that we will arrive at it eventually. Note that I've written "I hope", not "I believe". It might very well be the case that there are multiple TEPs, or no TEP at all, or that it would take an infinite time to actually arrive at a TEP. Those are possibilities that I don't like, which makes me hope they are actually wrong.
So, what follows from meta-ethical optimism? At this stage, not very much. But on the basis of meta-ethical optimism I propose an ethical philosophy called epistemological consequentialism.
## Epistemological Consequentialism

Epistemology is the study of knowledge and justified belief:
In the context of what I’ve written so far, the idea is to find out how to study the “value space” in which we might find a TEP. If we find out how to do that, we could use this kind of epistemological theory to bring people closer to a TEP, if it exists.
Epistemological consequentialism (EC) is a kind of provisional ethical philosophy that states that we should actively seek a TEP, in the hope that it’s unique and that it doesn’t take an eternity to get close to it.
Note that my suggested ethical philosophy of epistemological consequentialism should not be confused with the epistemological philosophy of epistemic consequentialism!
In epistemological consequentialism, what has value is first and foremost improving our ability to probe value space, in the hope of obtaining reliable knowledge about “actually ideal” ethics.
### Prescriptions from epistemological consequentialism

Epistemological consequentialism can be seen as a normative ethical philosophy in its own right. In that case, it's natural to ask what it prescribes us to do. What should we do if we accept EC as our ethical guideline, even if just as a provisional ethical system?
That’s a pretty much unexplored question, because I’ve only come up with EC recently. Its name implies that it’s a form of consequentialism, but it’s not a form of utilitarianism (at least not obviously so)! Instead of seeking non-negative mental states, we seek knowledge, or even wisdom. One might be inclined to classify those as non-negative mental states, but it’s not about experiencing knowledge and wisdom, it’s about possessing them.
Anyway, there are many open questions that we need to clarify before we can arrive at any definite prescriptions of EC. Let’s simplify the terminology by calling that which EC wants to optimize “wisdom”. Then a natural question is: whose wisdom should we optimize? In an egoist version of EC it would be one’s own wisdom. An altruist version would instead be about the “general wisdom” of society (perhaps seen as some kind of global meta-mind). Apparently, egoistic and altruistic versions of EC would come to quite different conclusions about which actions to prescribe.
In any case, a natural prescription would be to become more knowledgeable and learn more about epistemology, and especially about values. If possible and useful, one should also try to become more intelligent and wise. That would be a possible justification for transhumanism – or at least for the intelligence-enhancing aspect of that philosophy.
### Issues with epistemological consequentialism

A big problem with EC is that it doesn't provide much orientation at first. It's not clear how it relates to other ethical philosophies that are less "meta". EC is the philosophy that might lead us to **the one TEP**, but before we have reached that goal, our only orientation is to move along a path towards a TEP.
From where do we start that journey? It might seem reasonable to start from the values we’ve held so far. But would we have reason to reject those along the way, as we gained more wisdom? Probably. Also, what if one decides to be an adherent of EC and nothing else, rejecting all non-EC-derived values from the start, if that’s even possible? Increasing wisdom would be the only maxim such a person would subject themselves to. To what extent would such a philosophy be compatible with conventional moral reasoning and action? Would such a “pure” version of EC have to be moderated with elements of other ethical philosophies?
At the moment I’m quite uncertain about how to answer those questions, but at least I realize that they are important questions that deserve some attention.
### Epistemological consequentialism might be a terminal ethical philosophy in its own right

If there is no single TEP that we can arrive at in a finite amount of time, then things might turn out a bit bizarre. We might be on an eternal journey towards a truth that doesn't exist, manoeuvring value space without finding any safe harbour – other than EC itself! In fact, in that case it might turn out that EC is some kind of strange attractor that could be classified as a terminal ethical philosophy in its own right!
### Why even bother?

One could simply decide that all of this is way too "meta" and that this emphasis on our lack of agreement and our uncertainty about ethical truth is taken much too seriously. Why not simply choose a value system and stick with it, regardless of whether it's "true", or whether the "truthfulness" of a value system is even a meaningful concept? While there are of course problems with such a position, I can see its appeal. There are deep issues surrounding this question, which will need to be addressed sooner or later.
- I probably have raised more questions than I could answer. Well, that’s philosophy for you
- I am not sure whether I want to classify myself as utilitarian or epistemological consequentialist, especially at this stage where I don’t know what EC actually implies
- My current knowledge about epistemology is not very comprehensive – I probably need to learn more about that subject
- I have written this out of a subjective sense of frustration, confusion, and even depression – I have written on this post for almost 4 hours straight and past my regular bed-time
- I apparently can’t simply stop being a philosopher, even if there are other important things I should (perhaps) do