Epistemological Consequentialism

I used to be a utilitarian. Now I have become mostly uncertain. I've pondered my uncertainty so much that I decided it was the only thing that was constant and that could give me a kind of stability. Uncertainty was the only basis I could use to get somewhere. Epistemological consequentialism (EC) is the philosophy I'm currently trying to shape on that basis. But let's first consider utilitarianism and then move on to why I started getting doubts.

Utilitarianism

Utilitarianism is an altruistic consequentialist normative ethical philosophy that aims to optimize utility, as defined by some kind or set of non-negative mental state(s). Since there are many different kinds and sets of non-negative mental states, there are different strands of utilitarianism – see below. My definition here is a bit of a monster, so we need to break it down a bit:
  • Altruistic means that people should care not only about their own personal self, but also about other sentient entities, and include their mental states in their ethical considerations
  • Consequentialist means that what counts are the consequences of actions, and nothing else – at least with everything else being equal
  • Normative means that utilitarianism is not so much a theory about human behaviour, but a set of statements about how humans should behave
  • Ethical means it’s about ideal moral behaviour, which is usually behaviour in a social context
  • Philosophy means that it’s some real abstract and intellectual stuff that we don’t really know how to approach with the scientific method
  • Optimization of utility can mean maximizing positive utility or minimizing negative utility (which corresponds to negative utilitarianism) or both at the same time (which corresponds to classical utilitarianism)
  • Utility is usually thought of as something that can be quantified, or at least treated with some kind of mathematical formalism – at the very least, we want to be able to say, at least sometimes, whether something has more utility than something else (a rough formalization follows this list)
  • Non-negative mental states can be any of a number of more specific concepts, such as:
      • Happiness (-> Hedonistic Utilitarianism)
      • Well-being (-> Eudaimonic Utilitarianism)
      • Satisfaction of subjective preferences (-> Preference Utilitarianism)
      • Satisfaction of subjective desires (-> Desire Utilitarianism)
      • Subjective qualia with positive valence (-> Valence Utilitarianism)
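
One common way to make the "optimization of utility" part precise – purely illustrative notation, not tied to any particular author's formalism – is to say that the ethically best action is the one that maximizes aggregate utility over all affected sentient beings:

$$a^{*} = \arg\max_{a \in A} \sum_{i \in S} U_i(a)$$

where $A$ is the set of available actions, $S$ is the set of affected sentient beings, and $U_i(a)$ is the utility that being $i$ derives from the consequences of $a$. Negative utilitarianism instead minimizes aggregate suffering, $a^{*} = \arg\min_{a \in A} \sum_{i \in S} D_i(a)$ for some measure of suffering $D_i$, and classical utilitarianism folds both into a single signed quantity that gets maximized.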

Using this terminology, I classified myself as a classical valence utilitarian. Qualia are subjective perceptual impressions, the "what it feels like" aspects of perceiving something – for example the colour blue, or being hungry, or having a seemingly clever idea. The valence of a quale is its "how good it feels" aspect. Does it feel good? Do I want more of it? Does it feel bad? Do I want to avoid it? Someone aware of recent findings in neuroscience and psychology might notice that these questions touch on the distinction between the "liking" and "wanting" systems of the human mind. Making this distinction complicates matters even more, and I don't see a good reason to digress in that direction here.

In very simplistic and raw terms, utilitarianism is about "making everyone feel really good". Making sentient beings feel better (all other things being equal) is the ethically correct way to act, according to utilitarianism, and doing the opposite is bad. This line of thinking does seem to make a lot of sense and has intuitive and emotional appeal. That alone doesn't make it objectively correct, however. So, let's come to my issues with utilitarianism:

My issues with utilitarianism

First of all, I'm not saying that utilitarianism is wrong or anything like that. I still believe that, if anything, it is probably, on a very abstract level, very close to being an aspect of ethical truth. My thinking goes more in the direction that utilitarianism might be a premature philosophy that could be refined into something better, though I'm not sure what this better philosophy or theory might look like.

Not everyone is a utilitarian

A very general argument against utilitarianism (which actually applies to every other ethical philosophy, too) is that not everyone accepts utilitarianism as the ethical philosophy we all should adhere to. In the past I assumed that this rejection stemmed from those people not fully understanding or "getting" utilitarianism. After all, utilitarianism really is an abstract and highly complex philosophy, so it's not surprising that not everyone grasps it easily. What really made me stop and reconsider my position was the fact that there are very intelligent and rational people who seem to understand utilitarianism and still don't accept it as their ethical philosophy of choice. They could have chosen to refine utilitarianism in those areas where they disagreed with it, but they prefer other ethical philosophies instead. But why? Of course, I could assume that they were all deluded in some way or had ulterior motives for rejecting utilitarianism, but that would be a rather extreme position to take. Instead, they might actually have good reasons for their choice. And that possibility causes at least some concern for me, even if I happen to disagree with all the reasons they actually express.

Utilitarianism is kinda complex

Next comes the issue that utilitarianism is actually a class of different philosophies, as we have already seen above. Which one is the correct one? And why? The small number of possible versions I've mentioned above is only the tip of the iceberg. There are many different attributes any single version of utilitarianism can possess. So, we would have to pick the right option for each of those dozens, hundreds, or even more attributes to get the "correct" version of utilitarianism. The fact that utilitarianism is a philosophy and not a scientific theory makes this observation even worse: How would we be able to agree on the right version, even if we all agreed that utilitarianism was the best ethical theory?

Utilitarianism struggles with "big cosmologies"

What is a "big cosmology"? This is my own terminology, and it basically means any view of the world that implies that the world is essentially infinite and contains at least almost all combinations of anything you can imagine. The physicist Max Tegmark presents different levels of big cosmologies, which he calls [different multiverse levels](http://space.mit.edu/home/tegmark/crazy.html). Also note that there is a philosophical theory called [Modal realism](https://en.wikipedia.org/wiki/Modal_realism) that posits that everything that can possibly exist actually exists somewhere. In that sense, modal realism is pretty much a maximal cosmology. Now it happens that I have reasons for believing modal realism to be true.

Anyway, what does this have to do with utilitarianism? There's a Less Wrong article about how big cosmologies, or "big worlds" as they are called there, clash with moral sentiments. It points to some issues that big cosmologies cause for your view of subjectivity and for what you should expect to happen in the future. But it doesn't do full justice to this immensely difficult subject. Neither can I within the scope of this post, so let's just mention some problems:

  • You are too small and insignificant to optimize utility over the whole multiverse in a big cosmology
  • In a big cosmology you can't know exactly where you are in the huge multiverse, which makes it difficult to say over which "area" of the multiverse you want to optimize utility – so even if you just want to optimize utility locally, it's hard to define your "local area" in a meaningful sense
  • Big cosmologies might be so big that they might even blow up the concept of probabilities – and doing utilitarianism without being able to refer to probabilities is pretty much hopeless (a crude illustration follows this list)
  • If just about anything that can happen actually happens somewhere, why even bother, if you can't really change that fact in any meaningful sense?
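
To illustrate the aggregation problem in the crudest possible way – this is just a sketch of the familiar "infinite ethics" worry, not a result from any particular source – suppose the multiverse contains infinitely many sentient beings whose utilities are positive and don't shrink towards zero. Then the aggregate utility before and after any action you could take is

$$\sum_{i=1}^{\infty} U_i = \infty \qquad\text{and}\qquad \Delta + \sum_{i=1}^{\infty} U_i = \infty,$$

so comparing the two totals amounts to evaluating $\infty - \infty$, which is undefined. Aggregate utility can no longer rank actions, and any ranking has to be rebuilt from local differences, densities, or measures over the multiverse – each of which brings problems of its own.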

For many years I tried to find a way to make utilitarianism work with modal realism in some meaningful sense, but I can't claim to have arrived at a really satisfying solution.

The Reparator Paradox

The reparator paradox is a problem that arises for classical utilitarianism (but not for negative utilitarianism) when we consider the possibility of powerful beings creating simulations that contain sentient beings. This philosophical problem overlaps with the theological problem of theodicy. If you are interested in this kind of thought experiment, you should read my post:

The reparator paradox made me doubt that “conventional utilitarianism” is an actually robust philosophy that provides clear meaning and orientation.

Utilitarianism is about mental states, which we don't really understand

We don't really understand how our brains create the thoughts and feelings that we experience. Even worse: We don't actually know on a solid level what thoughts and feelings actually *are*. So, how can we try to optimize something when we don't even know what that thing is that we are trying to optimize? At this point, we don't have much more than appeals to intuition.

It would be great if science could solve those mysteries, but at the moment we are not at that stage, so it's a bit dubious to base our current ethical reasoning on something that we don't understand yet. If anything, this means that utilitarianism should be an ethical system for the future, but not the present.

Humans are bad at predicting the future, especially future mental states

Humans aren't great at predicting the consequences of their actions, especially when those concern the distant future via indirect effects. And humans are even worse when it comes to estimating the impact of just about anything on their emotional states. So, even if utilitarianism were a good thing in theory, it is questionable whether letting humans get in contact with something as intellectually demanding as utilitarianism would work fine in practice. Utilitarianism didn't even work fine for me on a personal level. Ironically, I generally seem to do better when I actually try to seek suffering rather than happiness, but that might just be me.

Meta-ethics

So, if utilitarianism isn't the best ethical philosophy there is, what is? Is there even something like a "right" ethical philosophy? Are there even "ethical truths" at all? This is where we enter the realm of meta-ethics, which deals with such meta-questions about ethical reasoning:

Meta-ethics has its own complex terminologies and stuff, which might be enlightening, but also confusing. Instead, I want to approach the issue from the following question: What are values?

A dynamic ontology of values

A value is basically a system of thoughts. Such thought systems in their full generality are called [memes](https://en.wikipedia.org/wiki/Meme), a term that was coined by Richard Dawkins. Let's just say that values are a special kind of meme.

In that framework we can ask how values evolve, especially ethical values. Perhaps applying systems theory would enlighten us about how values interact and change. At the moment, it is apparent that different people have quite different values, but occasionally people agree with one another on certain values for one reason or another. Those reasons could be interpreted as attractive forces that draw the value sets of different people together. Of course, there could also be repelling forces that make people reject certain values they don’t hold themselves.

The question is whether, in the long run, the attractive forces triumph over the repelling forces, so that in the end everyone arrives at the same set of ethical values: a terminal ethical philosophy (TEP). In the language of systems theory, such a TEP would be an attractor: it attracts the value systems of people to itself. The process by which people would arrive at a TEP might be quite complex and might even take aeons, but it would work. (A toy sketch of this dynamic follows below.)
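
To make the systems-theory picture a bit more concrete, here is a deliberately crude toy model in Python – every number, rule, and function name below is invented purely for illustration, not a claim about how value dynamics actually work. Each agent holds a small "value vector"; in each round two agents interact and drift slightly towards each other if their values are already similar (an attractive force) and slightly apart if they are very different (a repelling force). Depending on which force dominates, the population either collapses towards a shared attractor – a TEP, in the terminology above – or stays fragmented.

```python
import random

def evolve_values(n_agents=50, n_dims=3, steps=20000,
                  attract=0.02, repel=0.005, threshold=1.0, seed=0):
    """Toy model: agents repeatedly nudge their 'value vectors' towards
    agents they already roughly agree with (attractive force) and away
    from agents they strongly disagree with (repelling force)."""
    rng = random.Random(seed)
    values = [[rng.uniform(-1.0, 1.0) for _ in range(n_dims)]
              for _ in range(n_agents)]
    for _ in range(steps):
        i, j = rng.sample(range(n_agents), 2)
        diff = [values[j][d] - values[i][d] for d in range(n_dims)]
        dist = sum(x * x for x in diff) ** 0.5
        # Similar agents drift together; very different agents push apart.
        rate = attract if dist < threshold else -repel
        for d in range(n_dims):
            values[i][d] += rate * diff[d]
            values[j][d] -= rate * diff[d]
    return values

def spread(values):
    """Average distance from the population mean: a crude indicator of
    whether the value systems have collapsed onto a single attractor."""
    n_dims = len(values[0])
    mean = [sum(v[d] for v in values) / len(values) for d in range(n_dims)]
    return sum(sum((v[d] - mean[d]) ** 2 for d in range(n_dims)) ** 0.5
               for v in values) / len(values)

if __name__ == "__main__":
    print("spread with both forces:", round(spread(evolve_values()), 3))
    print("spread, repulsion only :", round(spread(evolve_values(attract=0.0)), 3))
```

Of course, real value dynamics are nothing like this; the point is only that "attractor" and "TEP" can be given a concrete, if cartoonish, meaning.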

Now, there are deeper questions, like: Is there a TEP at all? And if yes, is there a single TEP, or are there multiple TEPs, in which case the TEP people end up at would depend on the starting conditions? Well, at the moment we don't know which is the case. And we don't know whether we even can know which is the case.

Meta-ethical optimism

Anyway, I am subscribing to a position that I call "meta-ethical optimism". I hope that there is a *single TEP* and that we will arrive at it eventually. Note that I've written "I hope", not "I believe". It might very well be the case that there are multiple TEPs, or no TEPs at all, or that it would take an infinite time to actually arrive at a TEP. Those are possibilities that I don't like, which makes me hope that they are actually wrong.

So, what follows from meta-ethical optimism? At this stage, not very much. But on the basis of meta-ethical optimism I propose an ethical philosophy called epistemological consequentialism.

Epistemological Consequentialism

Epistemology is the study of knowledge and justified belief:

In the context of what I've written so far, the idea is to find out how to study the "value space" in which we might find a TEP. If we find out how to do that, we could use this kind of epistemological theory to bring people closer to a TEP, if it exists.

Epistemological consequentialism (EC) is a kind of provisional ethical philosophy that states that we should actively seek a TEP, ideally in the hope that it’s unique and that it doesn’t take an eternity to get close to it.

Note that my suggested ethical philosophy of epistemological consequentialism should not be confused with the epistemological philosophy of epistemic consequentialism!

In epistemological consequentialism, what has value is first and foremost improving our ability to probe value space, in the hope of obtaining reliable knowledge about "actually ideal" ethics.

Prescriptions from epistemological consequentialism

Epistemological consequentialism can be seen as a normative ethical philosophy in its own right. In that case, it's natural to ask what it prescribes us to do. What should we do if we accept EC as our ethical guideline, even if just as a provisional ethical system?

That's pretty much an unexplored question, because I've only come up with EC recently. Its name implies that it's a form of consequentialism, but it's not a form of utilitarianism (at least not obviously so)! Instead of seeking non-negative mental states, we seek knowledge, or even wisdom. One might be inclined to classify those as non-negative mental states, but it's not about experiencing knowledge and wisdom, it's about possessing them.

Anyway, there are many open questions we need to get clarity about before arriving at any definite prescriptions of EC. Let's simplify the terminology and call that which EC wants to optimize "wisdom". Then a natural question is: Whose wisdom should we optimize? In an egoist version of EC it would be one's own wisdom. An altruist version would instead be about the "general wisdom" of society (perhaps seen as some kind of global meta-mind). Obviously, egoistic and altruistic versions of EC would come to quite different conclusions about what actions to prescribe.

In any case, a natural prescription would be to become more knowledgeable and to learn more about epistemology, and especially about values. If possible and useful, one should also try to become more intelligent and wise. That would be a possible justification for transhumanism – or at least for the intelligence-enhancing aspect of that philosophy.

Issues with epistemological consequentialism

A big problem with EC is that it doesn't provide much of an orientation at first. It's not clear how it relates to other ethical philosophies that are less "meta". EC is the philosophy that might lead us to **the one TEP**, but before we have reached that goal, our only orientation is to move along a path towards a TEP.

From where do we start that journey? It might seem reasonable to start from the values we've held so far. But would we have reason to reject those along the way, as we gained more wisdom? Probably. Also, what if one decides to be an adherent of EC and nothing else, rejecting all non-EC-derived values from the start, if that's even possible? Increasing wisdom would be the only maxim such a person would subject themselves to. To what extent would such a philosophy be compatible with conventional moral reasoning and actions? Would such a "pure" version of EC have to be moderated with elements of other ethical philosophies?

At the moment I'm quite uncertain about how to answer those questions, but I do realize that they are important questions that deserve at least some attention.

Epistemological consequentialism might be a terminal ethical philosophy in its own right

If it is not the case that there is a single TEP that we can arrive at in a finite amount of time, then things might turn out a bit bizarre. We might be on an eternal journey towards a truth that doesn't exist, manoeuvring value space without finding any safe harbour – other than EC itself! In fact, if that is the case, it might turn out that EC is some kind of strange attractor that could be classified as a terminal ethical philosophy in its own right!

Why even bother?

One could simply decide that all of this is way too "meta" and that this emphasis on our lack of agreement and our uncertainty about ethical truth is taken much too seriously. Why not simply choose a value system and stick with it, regardless of whether it's "true" or whether the "truthfulness" of a value system is even a meaningful concept? While there are of course problems with such a position, I can see its appeal. There are deep issues surrounding this question, which will need to be addressed sooner or later.

Final remarks

  • I probably have raised more questions than I could answer. Well, that’s philosophy for you :smiley:
  • I am not sure whether I want to classify myself as utilitarian or epistemological consequentialist, especially at this stage where I don’t know what EC actually implies
  • My current knowledge about epistemology is not very comprehensive – I probably need to learn more about that subject
  • I have written this out of a subjective sense of frustration, confusion, and even depression – I have been writing this post for almost 4 hours straight and past my regular bed-time
  • I apparently can’t simply stop being a philosopher, even if there are other important things I should (perhaps) do

Ok, this is a difficult topic to write about, but generally speaking, how well people are able to predict their own mental state depends a lot on how well they know themselves. It’s actually not accurate to speak of predicting the mental state. Someone who knows him/herself well doesn’t predict their mental state. They create it for the needs of the moment.

Without someone whose example to follow, it tends to take quite some suffering before a person realizes they can actually make up their own mind. Literally. However, that tends to be difficult to communicate effectively because there are all kinds of mental blocks present in the cultural programming we’re infused with since birth. The thought will most likely seem beyond scary if those blocks aren’t worked on first.

I personally believe that a lot of the repelling effect is due more to the mode of communication used to express the values than to any actual difference. I sometimes jokingly call it violent agreement (although it looks and feels like violent disagreement, because that's what they think it is). Sometimes I see people arguing in a dirty way about something where the arguers think they disagree, but to me it seems they actually agree and are talking right past each other. It's often difficult to decide whether to laugh or cry seeing that.

I guess that shows how much some people care about actually understanding what others are saying.

At the very least, one core principle in such a TEP needs to be to never assume you perfectly understand what other people are talking about, and also to never assume they understand you perfectly. You can only ever talk about your interpretation of things, after all. It's rather easy to get that wrong, especially when you don't know someone, but the biggest danger is when you think you know them and stop assuming imperfect understanding.


I have thought about whether to see myself as an epistemological consequentialist or a utilitarian, and have come to the conclusion that being an epistemological consequentialist and a utilitarian at the same time is the answer that makes the most sense at this time. My personal preference is for an ethical system that provides as much guidance as possible, even if that guidance comes in the form of guidelines or heuristics. It's really the intersection of EC with utilitarianism that is most interesting:

More boundary conditions

On one hand, EC provides additional boundary conditions for utilitarianism: People shouldn't assume that their current form of utilitarianism and its derived prescriptions is the best way to go for all eternity. We need to move towards being able to understand things on a deeper level and to make better predictions about the future. On the other hand, utilitarianism provides additional boundary conditions for EC: Experiments on sentient beings shouldn't be done if they cause too much suffering, for example. And we also shouldn't try to create a philosophical elite that is busy finding out what is really ethical while everyone else suffers from poverty and oppression, even if such a scenario actually made sense.

Epistemological Consequentialism is weak on its own

I don't see pure EC making profound and satisfying ethical prescriptions anytime soon. It's even hard to argue for any kind of deep altruism on the basis of EC alone. Rather, pure EC might favour enlightened egoism. Killing, or making suffer, those who can't defend themselves might be seen as justified in such a philosophy, if it somehow served the absolute goal of obtaining knowledge and wisdom. Would we want philosopher kings who bathed in the blood of freshly sacrificed virgins (or did something equivalent), because it optimized their intelligence and wisdom? :smiling_imp: It's not too hard for me to imagine people absolutely loving such ruthlessness pursued in the name of a higher purpose, but why not seek morally acceptable ways to reach the same goal?

More robust conclusions from the union of Epistemological Consequentialism and Utilitarianism

If we consider the common themes of EC and utilitarianism, we get a broader basis for ethical conclusions. The basic commonality between EC and utilitarianism is that both ethical philosophies are a form of consequentialism. If we want more positive consequences to happen, we need to find out how to generate those positive consequences. Therefore we need more intelligence, foresight, knowledge, and wisdom.

On the basis of that conclusion we would advise people to focus on improving their own intellectual and rational capacities. Creating a better educational system (at least for ourselves) would be a big priority, as would be upgrading our intelligence and wisdom in general.

All of this follows naturally from merely considering the importance of consequences.

It is interesting to note that from a purely consequentialist perspective, intelligence and wisdom are requirements for ethics, but at the same time, EC argues that intelligence and wisdom should also be goals of ethics.

Is that enough to form a complete ethical framework? Not really, but it’s a good start. To move further along this line, we can use utilitarian reasoning to reach further conclusions.

So, I’m back to calling myself a utilitarian again. Well, that might be reassuring for some.

After reading your latest update, I felt like I wanted to point out that there actually exists a very selfish motive for altruism. Basically, when you act, your actions change the world directly but also indirectly. Part of the indirect change is the change of motivations of other people. Humans naturally try to see the motives behind other people’s actions. This is a mostly unconscious process, but we are affected by what we see others around us doing, especially what we perceive as why. It very slowly changes us towards some kind of a median of what we see. At least until we become consciously aware of that process.

Anyway, every time you act from altruistic motives, you change the world to be a little more altruistic. The effect of a single choice is, of course, minuscule, but when you count the choices a person makes in their lifetime, the effect stops being minuscule. Conversely, every time you act from purely selfish motives, you change the world to be a little more selfish. This effect is the biggest in your immediate vicinity, so you yourself are the person to see the most of the change you yourself are making, even if the change tends to happen in waves…

There are also factors that amplify the effect. For example the tendency for people to surround themselves with others who act from similar motives.

Basically, the old saying "you reap what you sow" refers to this, I believe. This is also likely the thinking behind "treat others as you'd like to be treated yourself."

The effect is somewhat of a long-term one, so it could take years before you see the results, but it does make for a convincing self-interested motive to be altruistic.


You and Dana are dragging me screaming into spending time on Consequentialism.

OK. I’ve put your Epistemological Consequentialism on my to do list.

Wow, your post was the best explanation of the benefits of altruistic behaviour through the lens of selfishness that I've ever read. Concise, plausible, and free from "feel good woo". :clap: :clap: :clap:

It definitely strengthens the position that enlightened egoism would be quite similar to actual altruism.

Personally, I use a different approach to bridge egoism and altruism. My starting point is the question of “who / what am I”? If you think deeply enough about this question, you hopefully realize that the “self” is more or less an arbitrary (though socially and evolutionary enforced) construct.

  • On a microcosmic perspective I am a colony of trillions of cells, most of which are not even human cells, but the bacteria of “my” microbiome.
  • On a mesocosmic perspective I am a person among many, though the separation between different persons might be partially bridged by different forms of deep and broad communication.
  • On a macrocosmic perspective I am a part of an integrated holistic adaptive complex system, a collective of minds and agents, a neuron in a "world mind".
  • On a phenomenological perspective I perceive. As such my identity is not distinguishable from other "perceivers". Consciousnesses are universal in the sense that they are all basic perceivers. Thus, in some sense, I am all who perceive.

In the light of those different perspectives, existence as an individual, separate human becomes just one more or less arbitrary model among many. So, egoism doesn't have a strong basis, because its main object, the ego, is not sharply defined.

I'm pondering a rather devious philosophical problem. For a short while I had assumed that any harm that could be done due to a lack of wisdom would exceed any harm caused in the process of acquiring wisdom. But such an assumption leads to a philosophy in which the goal of increasing wisdom justifies any means. It would become a blank cheque for doing any exploratory action that might increase the general level of wisdom, even if it caused tremendous involuntary suffering.

So, of course this assumption needs to be questioned. Perhaps we shouldn't be completely ruthless in our pursuit of wisdom. But the difficulty lies in finding the line between "appropriate" harm-causing, wisdom-increasing activities and those which cross the line of appropriateness. The problem is, of course, that we lack the wisdom to pinpoint the exact location of this line (or even to say whether there actually is such a line). What can we rely on, then? Is this a personal choice that everyone has to make, or are there relatively objective criteria that we can use as guidance?

Can it really be said that gods are morally allowed to condemn whole universes to hellish suffering just for a tiny chance of gaining one quantum of wisdom? Well, what if that tiny quantum has a huge positive effect over an astronomical period of time and zillions of simulated universes? You never know in advance. We can't know that in advance, because our wisdom is limited. And there can always be the hope that the wisdom you gain from highly questionable actions eventually compensates for the horribleness of those actions. Only if we somehow managed to obtain ultimate wisdom (a highly unlikely scenario) would this line of reasoning break down.

So, is the pursuit of wisdom the ultimate justification for any means useful towards that purpose? Is this the final solution to the theodicy problem? That gods (= universe simulators) are imperfect in their wisdom, so that they create worlds filled with suffering in order to somehow complete their wisdom? And if that was true, what basis would we have to condemn them? Wouldn’t we do the same in their stead? Wouldn’t we actually be morally compelled to do the same?

Those are extremely disturbing thoughts, but as a hardcore philosopher I can’t avoid facing them. And I feel compelled to share these thoughts with the world, even if the world may not be ready for them, yet. Perhaps we just need to mature and face the reality of inconvenient (potential) truths like this one and accept them.

Who are we to demand that the truth be beautiful and convenient and fulfilling? What if the truth is actually devastatingly disturbing and leaves us with feelings of horror, pointlessness, meaninglessness, desperation, and powerlessness? Should we rather stop our pursuit of truth and wisdom? Well, no! If we have to go through all these negative feelings to do the right thing in the end, it's our obligation to do so! Not doing that would mean that we accept a system of convenient lies that make us do worse things over and over again.

Truth may be a horribly bitter pill, but it’s still better than the alternative.

Does this mean that I am a monster for accepting any price for increasing the wisdom in this world? :japanese_ogre: Or am I simply becoming philosophically mature, and that comes with the price of feeling like a monster? :crying_cat_face: Or am I simply being horribly deluded here? :ghost:


I think this is a clear example of why we can't take an ethical philosophy to its extreme.

Although you may be right in your dark logic, following this line of thought would almost certainly prove to be extremely harmful for human society and human relationships.

We could probably acquire a lot of wisdom right now just by legalizing human experimentation, but that would certainly cause massive societal unrest and make many people resent science, philosophy and the pursuit of wisdom in general. And, frankly, could we really condemn the families and friends of those sacrificed in the experiments for being angry at the mad scientists who caused so much suffering just in the name of the abstract notion of wisdom?

It's true that the more we know about the universe, the greater is our power to avoid suffering, so I think that it's all right to take risks and cause some suffering in order to get closer to the truth. But we also need to remember that we are human beings (not creatures of pure and cold logic – if that were the case, I don't even know whether avoiding suffering would be valid as a goal) who have to live with their human minds and among their human companions.

If we forget basic human values, like kindness and compassion, we will have no reason and no way to look for the truth.

Your reasoning does make some sense, but this is, in my opinion, the kind of self-destructive philosophy that will only get us further away from wisdom.


This is naturally limited by the fact that the more suffering you cause, the more likely it is that you'll end up permanently removed and will never gain the wisdom you were seeking. So, overly aggressive wisdom-seeking in ways that cause suffering will lead to an inability to gain more wisdom, and is thus clearly a far less beneficial choice than avoiding seeking wisdom that particular way.

It's probably tempting to think that simulators might allow you to sidestep this issue. To a degree, they probably do. However, it'd be a mistake to think they entirely remove the danger. It's hard to do things perfectly.


Yes, that's an extremely important point. Consequentialist reasoning has to take indirect effects into account, instead of naively optimizing for the target value while disregarding all other values. That naive optimization is often suboptimal, and you've provided pretty good reasons why: it disregards how humans actually react to extreme measures. That's why it's often the rational choice to use more moderate means, instead of pursuing goals in an extreme and direct way.

I’m not sure I get your line of reasoning or what you want to express with that sentence.

It would probably be the best strategy to paint the pursuit of wisdom in a positive and benign light. Things that cause “negative publicity” and aversions against that pursuit should be avoided, in order to prevent backlashes. A rather safe option would be to advocate for more education, critical rational thinking, and philosophy.


Yes, absolutely. This is a matter of real instrumental rationality.

Yeah. Sounds true enough. Do you have any specific failure modes in mind?

Yeah, I guess that one sentence is not enough to explain what I meant.

Let me elaborate on it a little.

I think that everyone agrees that reason should be our main tool for seeking the truth, and reason is all about explaining certain laws and principles by extrapolating them from more basic and incremental laws and principles. We can do this for almost every law and principle in science and philosophy, but, if we go deep enough, we’ll find a few principles, the most elementary of all, that we simply cannot prove, but that we (and by “we” I mean every human being) instinctively accept as being true.

In mathematics, for example, that principle is 1+1=2. We can’t mathematically prove this operation simply because it’s the most elementary of all. It’s the thing that ultimately makes all other operations true, and mathematics would stop making sense if we rejected it for some reason.

In ethics, the most elementary principles (equivalent to 1+1=2) are the basic human values, things like kindness, compassion and the pursuit of happiness. All rational ethical philosophies are based on these things.

Utilitarianism, specifically, is based on the premise that it's rational for human beings to pursue happiness in life and that we, being compassionate creatures, should strive to make as many people as possible as happy as possible.

Denying a basic human value like compassion to prove a high-level ethical postulate would be akin to denying that 1+1=2 in order to solve a complex mathematical equation.

That's why we should use basic human values as moderators for our ethical reasoning, so that we can delineate a humane, although effective and pragmatic, route to wisdom and happiness.


One example I can think of that gives a hint of the sort of thing that's possible are the so-called side-channel attacks in cryptography. As an example, researchers have found ways to passively listen to computers and decipher private keys. I've read about this being done with a microphone as well as with a radio receiver.

It's the kind of thing that seems insignificant at first glance, but when you scale up, who knows what else the simulation as a whole might learn to do with the hardware – things the simulator wasn't designed to do, things that are completely unexpected.
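
To make the general flavour of such attacks a bit more concrete, here's a minimal Python sketch of a much simpler member of the same family: a timing side channel in a naive secret comparison. Everything in it (the secret, the function names, the trial counts) is invented purely for illustration; the acoustic and radio attacks mentioned above are far more sophisticated, but the underlying idea is the same – information leaks through a channel nobody designed.

```python
import time
import secrets

SECRET = b"hunter2password"  # hypothetical secret, for illustration only

def naive_compare(guess: bytes, secret: bytes = SECRET) -> bool:
    """Byte-by-byte comparison that returns early on the first mismatch.
    The early return is the side channel: the runtime depends on how many
    leading bytes of the guess are correct."""
    if len(guess) != len(secret):
        return False
    for g, s in zip(guess, secret):
        if g != s:
            return False
    return True

def average_runtime(guess: bytes, trials: int = 200_000) -> float:
    """Average runtime of naive_compare over many trials, in seconds."""
    start = time.perf_counter()
    for _ in range(trials):
        naive_compare(guess)
    return (time.perf_counter() - start) / trials

if __name__ == "__main__":
    all_wrong = b"X" * len(SECRET)
    first_byte_right = b"h" + b"X" * (len(SECRET) - 1)
    # The second guess tends to take a whisker longer on average, because one
    # extra loop iteration runs before the mismatch. The signal is tiny and
    # noisy in Python, but with enough samples and statistics it can be used.
    print("all bytes wrong :", average_runtime(all_wrong))
    print("first byte right:", average_runtime(first_byte_right))
    # The usual defence is a constant-time comparison:
    print("constant-time   :", secrets.compare_digest(first_byte_right, SECRET))
```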

Yeah, the story Crystal Nights by Greg Egan seems to be pretty relevant in this context. It's well worth reading.

Thank you for that. That was worth reading.
