Four posthuman ethical frameworks

I have been thinking about p-factions in the context of my “Exaltation” sci-fi universe. It seems that a very important difference between many of those factions is their ethical framework. So, I’ve identified four frameworks that make sense (are coherent and not based on silly moral dogmas) in a posthuman context (the names of these frameworks might be improved later on):

  1. Opportunism: You can do whatever you want, if you can get away with it
  2. Pacifism / Nonviolence: You can do whatever you want, as long as you don’t do any harm
  3. Eu-consequentialism: You can do whatever you want, as long as you do more good than bad
  4. Compensationism: You can do whatever you want, as long as you compensate everyone for all the harm you caused them
    Note (2015-09-17): I’ve renamed “compensationism” to “reparationism”.

Edit (2015-06-02): On Facebook, I’ve been criticized for leaving out utilitarianism from this list. This is of course a big omission. Utilitarianism certainly is one of the best frameworks for a posthuman world, but at the same time it’s also very difficult to interpret and apply correctly. These difficulties call for simplified frameworks, which are the frameworks presented in this post. :blush:
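To make the differences between these four simplified frameworks more tangible, here is a minimal illustrative sketch in Python. Everything in it is hypothetical shorthand of my own (the `Action` fields and the function names are not part of any formal theory): each framework becomes an admissibility test over the per-individual harms and benefits of an action.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """Hypothetical summary of an action's effects; all field names are illustrative."""
    harm: dict = field(default_factory=dict)          # harm caused, per affected individual
    benefit: dict = field(default_factory=dict)       # benefit created, per affected individual
    compensation: dict = field(default_factory=dict)  # compensation paid, per harmed individual
    detected: bool = True                             # does the Leviathan catch the actor?

def opportunism(a: Action) -> bool:
    # 1. Admissible if the actor gets away with it.
    return not a.detected

def pacifism(a: Action) -> bool:
    # 2. Admissible only if nobody is harmed at all.
    return all(h == 0 for h in a.harm.values())

def eu_consequentialism(a: Action) -> bool:
    # 3. Admissible if total benefit outweighs total harm (an aggregate balance).
    return sum(a.benefit.values()) > sum(a.harm.values())

def compensationism(a: Action) -> bool:
    # 4. Admissible only if *every* harmed individual is fully compensated.
    return all(a.compensation.get(person, 0) >= h for person, h in a.harm.items())
```

For example, an action that harms A by 10 units while benefiting B by 15 passes the eu-consequentialist test (net +5) but fails the compensationist one unless A personally receives at least 10 units of compensation. This aggregate-versus-per-individual difference is exactly what separates frameworks 3 and 4, as discussed further below.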

Now let’s analyse each of those frameworks separately:

Opportunism

This "machiavellian" framework sounds rather unpleasant, but at least it could be seen as brutally honest: The universe does not have any built-in moral framework, so everyone is free to try to make the best out of his chances without needing to respect the interests of other contenders. A society made up of opportunist needs mechanisms that make it viable, so that it doesn't end up in a brutish fight of everyone against everyone. Essentially, some kind of a [Hobbesian Leviathan][2] that enforces a social contract (respectively laws) is needed. Laws can be broken, but if the Leviathan is strong enough, perpetrator is punished. This punishment is simply bad luck for those who have succumbed to the Leviathan. The better, or luckier, opportunists break laws and get away with it.

That is a reality that might be seen as unacceptable. If it’s really unacceptable for those in power, a future society might create a kind of super-Leviathan that actually prevents everyone from breaking any law in the first place. For example, everyone could get an inhibitor implant that paralyses him if he is about to break a law. Opportunists would want to escape the grip of this super-Leviathan, but whether they would succeed is another question. In fact, such a super-Leviathan might be implemented even in a society in which opportunists are rare, just in order to be safe from them.

Pacifism / Nonviolence

For simplicity’s sake I call adherents of the principle of nonviolence pacifists, even though pacifism is generally used in the context of the rejection of war.

Negative utilitarianism, which focuses on the minimization of pain and suffering, could be seen as a version of pacifism. Radical utilitarians like David Pearce want to eliminate involuntary suffering through technological means, a position that is called (bioethical) abolitionism.

Also, the abolition of involuntary suffering is framed as a desirable goal in the original Transhumanist Declaration, but it doesn’t seem to have the status of an ethical imperative there:

  1. Humanity stands to be profoundly affected by science and technology in the future. We envision the possibility of broadening human potential by overcoming aging, cognitive shortcomings, involuntary suffering, and our confinement to planet Earth.

The same goes for Dirk Bruere’s updated version of the Transhumanist Declaration:

Humanity stands to be profoundly affected by science and technology in the future. We assert the desirability of transcending human limitations by overcoming aging, enhancing cognition, abolishing involuntary suffering, and expanding beyond Earth. We intend to become more than Human.

This seems to suggest that it’s not actually necessary for transhumanists to be bioethical abolitionists, even though it is seen as a desirable philosophy to follow.

In a posthuman pacifist context, bioethical abolitionism can be justified by the technical feasibility of eliminating involuntary suffering, together with the view that not preventing harm is morally equivalent to actively doing harm. Of course, the latter view can be contested, so posthuman pacifism only implies bioethical abolitionism if that view is actually accepted.

Pacifism seems to be an intuitively sound and noble ethical philosophy, but its practical feasibility and even its desirability can be questioned. Most important is the question of what does and what does not constitute actual harm.

  • Do I harm the environment by emitting or simply breathing out CO2?
  • Do I harm someone else by standing in his way?
  • Does free speech harm others?
  • Do surgeons necessarily do harm even though they have the intention to reduce greater harm with an operation?
  • Is owning private property a form of harm if I’m not willing to share it with others?
  • Is my immune system doing harm to the microorganisms trying to live inside of me?

These questions teeter on the brink of the absurd, but they demonstrate that a clear delineation between harm and non-harm is very difficult.

But there is a significant reason why true pacifism may be undesirable in an advanced posthuman world:

Simulation Ethics

Simulation ethics refers to the ethical considerations related to simulating a world with sentient beings living within it.

Why would anyone do such a thing? Well, there are multiple plausible motivations:

  • Simulating a game world with realistic “natural” inhabitants that behave authentically, because they are authentically simulated sentient beings. Such games could be hugely popular, even though they might seem ethically questionable.
  • Scientific curiosity: Is there actually interesting life in certain simulated universes?
  • Creating an ancestor simulation to find out how our ancestors lived and possibly to bring them back to life by creating new bodies for them and downloading their simulated minds into those new bodies.
  • For playing through possible scenarios of the future, in order to make better decisions in the present.
  • Because a sufficiently superintelligent mind might necessarily become so vast that it effectively contains small simulated worlds with models of sentient minds that are so complex that they are sentient in their own right!

All of these points make it seem plausible that any posthuman civilization will have a hard time resisting the temptation to create simulations that contain sentient beings.

While it may be possible to create harm-free simulations, the vast majority of interesting simulations will necessarily contain some involuntary suffering. The following frameworks deal with that issue by allowing such simulations, while still maintaining some noble ethical guidelines.

Eu-consequentialism

The solution of eu-consequentialism to simulation ethics is that you may cause harm to the simulated entities, but at the same time, you have to compensate for that suffering by creating so much positive emotion or well-being that it outweighs the created harm.

In a classical utilitarian posthuman society, the question is whether some simulations containing suffering sentient beings might be necessary to maximize overall happiness, because the gain for the simulators may not only outweigh the suffering of the simulated entities, but could actually be impossible to achieve with suffering-free methods!

In that case, eu-consequentialism could be seen as a minimal acceptable ethical standard for simulating worlds with sentient beings.

But of course, higher standards might be preferable:

Compensationism

**Note (2015-09-17):** I've renamed "compensationism" to "**reparationism**".

There is a serious problem with the eu-consequentialist framework: It’s subjectively terrible for the simulated entities who suffer a lot. While their suffering may be globally compensated by mood-enhanced simulated sentient beings living in heavenly simulations, this doesn’t help the suffering beings in the less pleasant simulations in any way.

In this situation, compensationism comes to the rescue: All suffering is compensated, for every individual! If you are a being in a rather unpleasant simulated world, and the simulators are compensationists, you are not so unlucky after all: Your suffering will be compensated by being relocated into a much more pleasant world, or by becoming radically mood-enhanced, so that you will experience levels of bliss that really outweigh all the suffering you have ever experienced!

The idea of compensationism might seem fairly weird, but it makes a lot of sense: You simply make up for the damage that you have caused. It could be seen as a form of karmic logic. Or it could simply be called responsibility.

One can make the case that compensationism is preferable to pacifism for various reasons:

  • A posthuman compensationist civilization may have higher overall well-being than a pacifist one, if the “problematic” simulations do create benefits that cannot be achieved in any other way
  • It can be argued that it’s preferable to live in a simulation that allows for severe suffering but compensates for that suffering in the end, rather than never to have existed at all.
  • Posthuman pacifist civilizations may be comparatively weak, so compensationism might be seen as the least evil that is realistically achievable, when considering the alternatives of eu-consequentialism or even opportunism
  • If one really values resurrecting the dead, then compensationism seems like the best framework that makes that act legitimate
  • If one believes in modal realism, then all worlds are simulated anyway by all the different posthuman civilizations. If you are simulated in a world created by compensationists, at least your life will have a positive continuation. The compensationists have the task of making up for the subjective suffering that is caused by themselves and other simulating civilizations.

Discussion

  1. Do my concepts make sense to you?
  2. Are there other comparable coherent ethical frameworks?
  3. What framework do you find most preferable?

you want to avoid moral dogma. i think this is difficult when you make use of terms like “doing good”, “doing bad”, “law” and “punishment”.
one basic problematic assumption i always found when people want to better society with ethical considerations is that human beings would want to cause problems for others. in addition to that, important knowledge is often excluded in those deliberations: problems of a society are due to structural violence in many cases. if you keep the majority of people in a state of poverty, hopelessness, coercion and dependence, then theft, robbery and murder will be considered expedients to end this condition. if you ask those delinquents whether they wanted to cause problems and harm, in most cases they would deny that. they just felt desperate, without any alternatives to act otherwise.

my approach to this problem is to distinguish the impressions of the individual according to intentions and motives. i illustrate this approach with an example: it feels different for me when somebody attacks me in rage and steps on my toe purposely than when somebody steps on my toe by mistake in a crowd. although the consequence is the same (my toe hurts), i would not want to report the one in the crowd to the police. and i am glad that our law respects motives and intentions as well as consequences. but you can go a step deeper and take all the feelings into account. in the aforementioned example, the violent attack creates a feeling of fear in me, and i feel hurt in my dignity, as additional consequences beyond the physical pain. this way you can explain every moral value of an action with consequentialism, because it already includes the consequences of intention. but a consequentialism that defines itself by excluding motives and intentions is flawed.

and this is how i distinguish between “violence” and “necessary/inevitable harm”. the example of the surgeon is a good one. nobody will experience a helpful therapy and a painful healing process as a violent attack by the doctor. even pets are able to recognize that a vet means “help” for them, although the treatment might be painful; not all animals and not the first time, but they are capable of learning.
BUT: if it turns out that a treatment was not helpful or did further damage, the question becomes crucial whether this was part of an inevitable risk and not intentional, or whether the doctor could be considered unethical. so patients will have the impression of violence against them if:

  • the doctor refused to give them the best treatment possible because of economic reasons, or other reasons. (it is the feeling of being reduced, and could be considered a violation of the first formulation of the categorical imperative. the doctor would want the best treatment possible for himself and his relatives, and the maxim “give patients the best treatment possible” could be recognized as universal)
  • the doctor applied an unnecessary method of therapy to maximize his profit (it is also the feeling of being reduced and instrumentalized. this could be considered a violation of the second formulation of the categorical imperative, because the patient is reduced to a means to an end)
  • the doctor caused damage because he was careless. (the feeling of being treated disrespectfully. the pain of the damage, and the pain that this damage was unnecessary. also a violation of the first and second formulations.)
    …and there are many more examples of treatments that would be considered violence against the patient because of the motives, and not because of the pain of a therapy.

it is the same with my example of someone stepping on my toe. if it is your intention to block someone’s way to cause harm, and it would be no harm to you to step aside, then yes, you do harm. but if you have your own motive to stand there, and that motive has nothing to do with the person who feels blocked because of you, then it is up to the other person to decide what to feel (even dogs understand when somebody hurts them accidentally, and would not attack back). there might be cases where someone reports a person who steps on his toe to the police although it was an accident. from an ethical point of view, this person causes harm if she cannot distinguish between intentional and accidental.

information can cause harm. but i think it is obvious that it was low-level thinking, and unethical, to punish or kill the messenger for bringing bad news. and if free speech has the same effect, then punishing it has to be considered just as unethical and harmful to the speaker.
“free speech” is a kind of ideal, and it is a difficult discussion to determine what is meant by it.
if somebody has the intention to purposely hurt others with what he says, and it doesn’t matter to him whether he tells the truth or lies because all that counts for him is to inflict pain, and he could easily say nothing when there is no chance for him to do harm with words, then i would not consider this “free speech”. people who have a longing for “free speech” are not motivated to cause harm (but quite the opposite!), although it is often a consequence that they cause harm.

if your belongings make you happy, it would be painful for you if others expropriated you. but the motives concerning belongings are very complex, and many of them are unconscious.

one part of ethics is to me like a crime story: you will accomplish nothing if you don’t understand the motives of the agents.

the other part of ethics is much more complicated, because many human motives are unconscious and a secret even to the people who have them. what could be the best ethical framework for a better society?
the ideas of immanuel kant are not so bad but they should be developed further.
a good start for new ethics might be to recognize the tragedy: most of the people… nearly everybody wants to be a good person when asked.

You are thinking about intentions to do harm. I don’t think that such intentions actually exist. It’s a straw man! People do things for their own advantage (when they aren’t misdirected by memes or stupidity). The right distinction to make between different intentions is not whether they have harm as their goal – they never actually have that as their final goal – but rather how much harm to others a person sees as justified for his own good.

So, the question of intentions becomes more a question of degree than of black and white thinking. How much other-harm are you willing to “pay” to get what you want? There is no absolutely clear line here. One could argue that if total benefit outweighs total harm for all involved, then an action might be admissible in first approximation. This basically would be the eu-consequentialist position.

From an individual perspective it’s easy to see that the reparationist (compensationist) position is a safer bet for all involved. Do you think that there’s anything wrong about it?

if people were always rational enough to do things only for their own advantage, they would also be rational enough not to “pay” for it. (aside from the fact that it is completely impossible to “pay” for an action with the harm of another one. not on this “rational” level. on a karmic level, maybe.) and if they are conscious of the harm they inflict, how could you ever be sure that the harm itself is not intended?

i don’t believe that. what is the “justified own good” a group of aggressive young men gains when it beats a stranger to death just because of his appearance?

this is a highly arrogant position, and just a belief with no proof. if you are not omnipotent, you could never know the total harm and the total benefit, not even rudimentarily, because you have to consider time as well. if you sacrifice one person for the benefit of three today, you could have killed the person who would have found a cure against cancer two years later, or who would have become, ten years later, the parent of the person who would end all wars in the world thirty years later… and so on.
aside from the fact that it is impossible to measure individual pain: it is imaginable that one person suffers much more because of a decision made in favor of the benefit of three than the three together will be happy about it.
a god might consider that logic, but for an entity less than omnipotent to follow this framework is pure arrogance.
and in the case of the aforementioned example, the group that kills a stranger could claim to experience a good thrill. and if a god-like entity could measure that the thrill of the 5 or more killers in the group outweighs the suffering of the victim, because he fell into a coma within the first minutes of the attack, then killing just for the thrill of it would have to be considered admissible.

yes.

  1. it is a religion. a belief-system.
  2. everybody who is convinced of this belief has no responsibility anymore for his own actions. (it would be easy to give up self-reflection and to cause harm whenever it felt easier to do so than to search for a better solution without harm at all. and if you know that the person you inflict pain on will be rewarded for it in sto’vo’kor, what should be your incentive to avoid harm and be ethical at all? what should be your motive to care and to avoid harm for yourself? you could let yourself go, waste your life, shorten it with drugs and violence, because there is no need to contribute in any positive way to the world; and the suffering of your wasteful lifestyle is just a side effect for you that increases the thrill of anticipation for your next life)
  3. to delegate ethics to higher entities is fatal for ethics.
    the idea of reparation and compensation is a good one when it is limited to this life. we have no knowledge about afterlives, whether they are always a new start with no connection and memory to any other life, or whether there is reincarnation at all. when people have the responsibility to repair the damage they caused, it is a highly ethical idea. but not when everybody can delegate this responsibility to other entities.

Most actions have costs in one form or another: costs in terms of energy, effort, time, money, pain, and discomfort or even suffering caused to others. The latter is maybe a more subtle form of a cost, but it’s still real and frequent. It can be understood as moral cost, or karmic cost, but also as cost to one’s own safety and comfort, because others may retaliate against you for the cost (discomfort / suffering) you have imposed on them.

This is because causing harm is primarily not a goal in itself, but only a side effect of a certain action that is intended to reach a certain primary (or secondary, …) goal.

Of course people are prone to confuse primary and secondary goals, which is why they can frame secondary goals as primary goals. This is a kind of heuristic that’s not exactly rational in the strictest sense of pure rationality, but can be useful sometimes. This reframing of secondary goals as primary goals is what happens when people actually intend to do harm to others. Strictly speaking, when people do that, they aren’t “immoral”, they are just applying a mental / emotional heuristic. The deeper truth is that this would be totally irrational, if humans had the capacities required to make heuristic thinking and feeling unnecessary. In reality, the mental capacities of humans are limited, which is why they actually need to use “bounded rationality” methods, e.g. heuristics.

Stress relief, distraction from one’s own negative emotions, conformity to group norms in the hope of increasing one’s standing within the (doubtlessly very primitive) group. Granted, these goods aren’t very “noble”, but they are nonetheless real. Ruthless violence is just a means to reach those goods. This doesn’t justify such acts morally, but it makes them understandable.

If you accept that line of reasoning, you are treading a very corrosive path. After all, if you value uncertainty of consequences so highly, what kind of orientation could you rely on? You can justify any kind of action with the argument that it might produce great good in the end. And, at the same time, you can condemn any kind of action with the argument that it might produce great harm in the end. What you will end up with is complete and utter nihilism. This is reason enough for me to reject your statement.

Now going on with the more reasonable argument:

From a consequentialist point of view that would be totally valid reasoning, though it’s frightening to admit that much. What is left out, of course, are the ramifications of such actions being potentially admissible. Living in a world in which such actions were often regarded as admissible would cause a lot of fear, which would have to be included as a negative component in truly complete consequentialist computations. The plausible result is that it would have quite a high utility to have rules and mechanisms in place that prevent more or less random acts of violence.

That is an excellent question! I don’t claim to have a totally satisfying answer to that at the moment. Anyway, just for clarity: We are now talking about reparationism as seen from entities who believe they are in a simulation that is run by a reparator. That belief might obviously be wrong for various reasons:

  1. They might not live in a simulation
  2. The simulator might not follow reparator ethics

But given the case that they indeed live in a simulation created by a reparator, your question becomes actually very challenging to answer.

Another excellent question!

Yeah, that’s a valid line of reasoning. But on the other hand, you might just do anything with your life. And that anything might be subjectively more meaningful viewed from the framework of values within the simulated world.

True. That is a very big problem. I do not know how to resolve that problem. One solution is to rely on your own reasoning, no matter how well-meaning higher entities appear to be. You might be mistaken about their benevolence (or competence or wisdom) after all – unless you actually possess godlike knowledge and wisdom, which is about the only protection you have.

Thanks for admitting that much. Yes, thinking about the possibility of world simulations complicates things significantly. But in a posthuman world in which we have the ability and power to repair the damage we have done to others, it would actually be a good idea to do so!

this might concern more reflective people, but the average wrangler just looks for his immediate “win”. if the killers i mentioned calculated, before they attack, that they risk imprisonment or the bloody revenge of a relative of that stranger, they would never attack in the first place.

the thrill of destruction would not work with a thing that can’t be broken or damaged, and the thrill of violence would not work with a person that could not feel pain.

i think there is only one goal at the bottom of every action: the supposedly easier way. and this is often far away from the rationality of “advantage-thinking”, and instead creates disadvantages for all.

yes. and because they are not aware of their “negative emotions” (otherwise they wouldn’t need distraction), such actions could not be labeled “rational” and could not be considered an advantage.

the immediate empathy with the beings i influence with my actions.
this is difficult enough, because:

  1. of all animals, the human species has the highest deficiency in empathy.
  2. it is most likely in most situations the hardest way.

if i followed the utilitarian approach. but i don’t. i think utilitarianism is arrogant after all.
because i know that i could never measure “the total good”, i have a more humble kind of ethics. the idea is that it would work if everybody followed it. a systemic approach. or one might call it “fractal”. imagine one person of the group (in my killer-example) decides not to hurt the stranger, not even threaten him, and to do the right thing in that moment: turn away from the group, try to convince the “weakest” (= ethically strongest) member of the group to join, and then run and call for help. that would be the hardest way in that situation, instead of “the supposed easiest way” of just following the group, because you will not only lose your alleged “friends”, you will make them your enemies, you lose your social contacts and the life you were familiar with, and probably you can’t save the stranger. so… maybe nothing for utilitarians. but if everybody decided the right thing in the moment of “heuristic” decision-making, nobody would be killed.

you shouldn’t reject a statement unless you have overcome nihilism yourself. but your arguments are good.
although the question cannot be whether you like it or not. you once admitted that the butterfly effect is real. and would the path be less corrosive if you convinced yourself that you are able to measure “the total good” for others? i think humans are much too often arrogant in that way, believing themselves wise enough to know what would be best for others. the history of christianization is one example.

the reparators would be wise to let the entities believe, and be totally convinced, that their life is precious and will be over with death. that puts pressure on them to create the best world possible within those parameters in which the reparators granted them power over their lives. it would be no problem for a reparator to surprise the dead entities with a blissful afterlife and compensation for the harm they could not prevent.

this sounds a little bit too thin for my taste.

this is the only solution! ask kant… this is also your responsibility, and your freedom as well! no problem to be surprised after death. but in the meantime, the here and now is our responsibility. and wise entities would sort out what power they have to make the world better. they would want to understand what problems they caused themselves, apart from the given situation the simulators are responsible for, because to sort that out is the only chance to optimize life for all entities. and it would be the most interesting simulation if i imagine being a simulator: “when will my entities wake up, realize their power and create the best world within the givens we provide them with?”

not only in a posthuman world. it would be one of the best ethical approaches for us now.

Are you implying that the actual primary goal of everyone is to minimize discomfort?

Why would being aware of negative emotions make any difference? If you are aware of them, you seek to eliminate them directly or distract yourself from them. If you aren’t aware of them, you still seek to improve your emotional state and distract yourself from the negative aspects of your current experience.

Immediate empathy is of no use on its own, just like the qualia of emotions are of no use on their own. Perception alone does not instruct you what to do or how to do it. What needs to be added to perception is learned behaviour or instinct to trigger an actual reaction to the perception. Otherwise it’s just a perception you are unable to translate into any kind of (appropriate) action.

What does it mean that “it would work”? If everyone had the moral imperative to be mean to everyone else, that would probably work as advised. Or do you have a more specific definition of “work”? Is there even a possibility of ethics “not working” in your thinking?

You mean I should accept your statements which throw us deep into nihilism and try to work with nihilism instead of around nihilism?

That depends on the variable correlation between human intentions and the consequences of their intentions. This correlation does depend on wisdom and knowledge, but it’s not clear what this dependence actually looks like.

In the worst case (which I call the “super perverse case” for clarity), the correlation is negative and becomes even more negative with more knowledge: So, if I want to do good, I actually do bad, and vice versa. The more I learn, the more bad I do. So, if I actually valued doing good and had insight into this correlation, then I would simply reverse my intentions: In order to do good, I would intend to do bad, which would have the consequence of me doing good. But gaining that insight would itself be an increase in wisdom, which by assumption pushes the correlation to even more negative values, so a simple “change of sign” is not allowed. Ethical thinking could have dramatically negative consequences in the super perverse case.
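To pin this scenario down a little, here is one possible formalization (a sketch in notation of my own choosing; nothing here is standard terminology): write $i$ for the good an agent intends, $g$ for the good actually realized, $w$ for the agent’s wisdom, and $\rho(i, g)$ for the correlation between intention and outcome.

```latex
% Sketch of the "super perverse case" (illustrative notation only):
% (1) intention and outcome are anti-correlated:
\rho(i, g) < 0
% (2) gaining wisdom strengthens the anti-correlation:
\frac{\partial \rho(i, g)}{\partial w} < 0
% The naive "change of sign" strategy (intend bad in order to do good)
% fails because the insight behind it is itself an increase in w,
% which by (2) pushes \rho(i, g) even further below zero instead of
% simply flipping its sign.
```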

It’s not plausible to assume that the super perverse case is actually true, but on the other hand, we can’t rule it out with certainty! This is one reason why I hope that there is some kind of ultimate wisdom that will dissolve those kinds of uncertainties.

That would be a form of active manipulation that may not be appropriate in all cases. For example, if you wanted to create an authentic simulation of the past, you couldn’t apply that kind of manipulation without destroying the authenticity of the simulation. And obviously we don’t live in a world in which that kind of manipulation happens, because some people do believe that there is an afterlife.

How do you define responsibility?

Yes, very true. I’ve come to the same conclusion.


this is a simplification, but not wrong. if i simplify your statement, it would be that every action follows the mere rationality of one’s own advantage. maybe a fusion is possible with less simplification.

let’s have a look at the details. you said: “Stress relief, distraction from one’s own negative emotions, conformity to group norms in the hope of increasing one’s standing within the (doubtlessly very primitive) group.” ---->
this is more conscious than “i can’t stand how this guy looks, i want to punch him in his face”, which is often enough motivation, and enough of the “thinking-work”, to let an action of provocation follow. if a person were able to think “my wish to punch this stranger in his face is just my hope of stress relief, distraction from my negative emotions and my wish to gain the respect of this primitive group” –> this awareness-level could already be “dangerous” for the action and prevent it. a further level could be: “i feel completely hopeless and i am angry that they fucked up my life. the job center let me down, my girlfriend let me down, my parents understand nothing, and these desperate, primitive guys are the only chance for me not to feel like a loser, because we share questionable values that give us the illusion of orientation and strength. my desperation has nothing to do with this stranger i initially wanted to thrash, and if i did that, it would not solve my problems. on the contrary, it could make my situation worse.” —> this awareness level would most likely prevent the action. to be aware of one’s negative emotions means to know who caused them, what they are in detail and who is to blame, instead of using a scapegoat. but there is a further level.

you asked me what kind of orientation i have, and the orientation for my actions could be empathy. so i did not have the intention to tell you that the mere qualia are enough without an action. although kant thought that the will to do the right thing is enough, even if you are not able to act. but let’s drop that for now.
i don’t believe you, and you contradict yourself in that case. you said:

you refer to nothing less than qualia as the final goal. so your statement can’t be right that qualia are of no use. and we talked about qualia for quite a while. “stress relief”, “distraction”, “discomfort”… eudaimonia… the qualia of emotions are the final goal for all. nobody wants to become rich to feel lonely, bored and anxious. most people anticipate that they would feel free, powerful and happy if they managed to become rich.

maybe.

i will take that as a philosophical question. what functions for what, what works for what goal?
you yourself imply that every action follows a kind of ethics, and a kind of rationality. when you say that people consider their own advantage and the possible harm they might cause, but feel able to “pay”, you imply ethical deliberations for every action. even if it turns out that somebody is willing to begin a war for his anticipated advantages and is willing to “pay” the harm, this is an ethical deliberation, although not one that could be considered “good”. but i don’t believe that people really do this. not all the time. i think that in most cases, human actions are not the result of ethical deliberations. partly this is a consequence of our law. the most intense thought before an action would be whether the person could get away with it, and not whether it is ethically good or bad.

no. i admit it was confusing. i meant that if you are a nihilist, you can reject everything. or better: you should reject everything… if you don’t want to contradict yourself! but if you come to the conclusion that nihilism is illogical and you don’t want to be thrown into nihilism, you should do a little bit more with statements than just reject them. consider them, for example. try to find out if they are plausible or only partly true… no matter if they appear to make things more complicated at first. complexity is not nihilism, and neither is losing your orientation. and i think that you just have the impression of nihilism because you cling to utilitarianism and consequentialism. but we have many good thinkers apart from those concepts.

your “super perverse case” sounds funny. but is not so far away from truth. many people intend to do the best they could and end up with the worst for all involved. but not because they increased their wisdom. they just took the easy way. so the real “super perverse case” is not so perverse at all, because it would really help the world to seek wisdom.

i have a problem with religion when i feel forced to believe it, too. some people believe in the flying spaghetti monster (initially a nice idea), some in the flat earth; some religious beliefs might be more plausible than others, but that is no orientation for me. it might be that this is a simulation, no problem. and it might be that our simulators live in a simulation as well. and this could be endless. but this doesn’t change a thing. if this is a simulation-game, i want to find out the rules. i want to know what power i have and how i could play this game in the best way. how does it function, how does it work? so my way of questioning, researching and exploring concentrates on comprehending what works for what. and that is what i meant with “work”. the idea of acting ethically good has the purpose of managing life for all entities in the best way possible.

and i can’t imagine that the utilitarian method in its current shape could do that. if i imagine that a collective applies your idea of consequentialism and utilitarianism consistently, everybody would be terribly frightened of being sacrificed one day for the seemingly “greater good” this collective believes in. in addition to that, people would completely lose the idea of having dignity, and would understand that they are only an instrument for many others. self-respect and self-worth would depend only on one’s utility/benefit for others, in mere quantity. has my life value in itself? do i deserve to live, even if i could not contribute? the answers to both questions seem to be “no”. nobody will have the right to live, because this would depend on the “greater good” defined by the collective. this idea is enough to produce great harm and fear. it will also be difficult to find the orientation of what “the good for the many” might be, even if you apply the idea of eudaimonia. people with addictions, for example, might claim to be happier with their addictive substance than without. and in the worst case, if you want to rule a collective with most of the participants being addicts in one way or the other, you will not be able to heal them, because the healing process would produce great unhappiness.

i could imagine that a society “ruled” by immediate empathy would “work” to improve the well-being of all entities involved (the utilitarian goal!). partly because i have observed that whenever people produce harm, a lack of empathy was the cause, because it is easier to live without empathy. you might be right that a collective ruled by immediate empathy might at first suffer from disorientation about what to do and how to do it, but if you consider my example of the violent group, it would be worthwhile to just let it be. and this is what everybody could do: stop being violent.

this is great, because then it is up to us to be wise, apply occam’s razor (for example) and realize that we have no clue about what comes after this, and that we are left alone with no guidance from higher entities in our attempt to improve our life.

Before I actually reply to what you’ve written, @zanthia, let me tell you that your post made me feel angry. It made me feel angry, because I felt completely misunderstood and misrepresented, and because of my thought that you shouldn’t do this. It would be easy for me to claim that you have done that intentionally, but I doubt that you have such motives. It would also be easy for me to claim that you are just not competent enough to really get what I am writing about. But that would be unfair, since I can’t expect you to have thought about my thoughts as deeply and thoroughly as I have – especially long before this conversation actually started – and vice versa.

This argument between us seems to be a typical problem that arises between supporters of consequentialism and supporters of deontology. It seems that both sides don’t really “get” the position of the other side, and instead have a distorted straw-man image of it that doesn’t make actual sense. The conclusion is that both sides attack supposed positions that the other side doesn’t actually hold! That’s totally unproductive and leads to a situation in which each side starts believing that the other side is malicious, stupid, deluded, or otherwise deficient. I don’t want to get stuck in such a situation here. However, I don’t see how we could avoid such an outcome.

Why? This is how I see this discussion continuing, in abstract terms:

I: "That is totally not what I had thought. You were misrepresenting me."
You: “Then what about [another misrepresentation of my line of reasoning].“
I: “No, that’s also totally not what I had in mind.” (also silently thinking “WTF? %”§/%! #”§%$ “%”%###1!111!!!”)
… and so on, interspersed with me accusing you of having an ethical framework that is based on nothing and that can’t possibly work…

It would be good if we could actually come to a real understanding of how we actually thought, but I don’t see that happening before either of us gives up in utter frustration.

So, is there an alternative? Sure, we could avoid the central conflict between consequentialism and other approaches to ethics. However, that would be off-topic, since the ethical frameworks I’ve mentioned in the opening post are framed in consequentialist terms. That is apparently due to my bias that reasonable beings should follow consequentialist reasoning – which is actually something I see apparent “non-consequentialists” do (secretly, or unconsciously, or between the lines). I am simply unable to imagine a “well-functioning” ethical system that does away with consequentialist thinking. Maybe that’s a deficit of my imagination, or maybe I am right, I don’t know.

Perhaps it would therefore make sense to analyse this further. Does valid and acceptable ethical reasoning necessarily include at least aspects of consequentialism, or can ethics actually “work” without considering consequences at all?

I see a general problem with this line of inquiry. The consequentialist in me would denounce any such alternative ethical system by analysing edge cases in which the alternative system would produce massively undesirable consequences. If there were no such cases, I would be positively baffled! Anyway, that’s a consequentialist argument against a non-consequentialist system, and therefore doesn’t fly, because it doesn’t attack the inherent logical consistency of the alternative system, but an external consequentialist metric of it. Nevertheless, that would be a valid intuitive approach, because intuitive ethics does have a lot in common with considering consequences.

Nevertheless, there may be many coherent non-consequentialist ethical frameworks that, for example, just pose rules that never contradict one another. So, they may be logically valid, but their meta-ethical value remains very unclear without using the tool of analysing the consequences of such frameworks. We would need to make this discussion even more abstract and ask what general criteria we should apply to ethical frameworks, besides logical consistency, conformity with intuition, or the quality of their consequences. Perhaps simplicity might be one such criterion, but it would be questionable whether that would be a good one. Why should ethics be simple, if reality seems to be highly complex?

We could avoid this grasping for straws, if you simply admitted that considering the consequences of one’s actions is a necessary component of ethical reasoning, even if that doesn’t necessarily imply that doing so represents the basis of any ethical framework. Actually, from my point of view this seems to be something that you are getting at:

You seem to claim that a non-consequentialist ethical framework could produce results that are superior to the results of a consequentialist ethics – as measured by consequentialist standards. Given the infinite stupidity of humans, that is actually a plausible claim! But that is not an argument against consequentialist reasoning in itself, but only an argument against applying consequentialist reasoning in a direct way. Such arguments are fairly old, and have been the reason why, for example, rule utilitarianism has been developed.

In a sense, you are actually using the quality of consequences as meta-ethical criterion for comparing ethical frameworks. Taking that as agreed upon criterion would make this discussion probably much more productive.

What’s your take on that? Please answer that question.

In the beginning I may have indicated that I wanted to actually reply to your post in detail. While I could do that, what I have written in the introduction would actually advise against it. However, I can give a piece of meta-information that sounds unreasonably harsh, but represents my frustration with this kind of discussion. This meta-information is not really correct either, but a useful approximation of the truth: “I disagree with everything that you have written in your post, either because I see it as wrong, or because I see it as useless.” If you want more details, you are still free to ask. Just bear in mind that I won’t try being anywhere near nice in my criticism.

i am sorry for this development. philosophy is very important for me, and especially ethics, but i don’t want to discuss more details in this thread for now.

agreed.

Excellent! It’s great that we can be civilized about our differences and actually are able to agree on at least something. :smiley:

Anyway, I see a general problem with this thread. I was writing about ethical frameworks for posthumans, not for humans. This obviously includes wild speculation, because I can hardly know how posthumans can think, although it’s plausible to assume that they will at least think along certain lines that I can think about, among many other lines that I’m currently unable to think about. :flushed:

This is important, because we don’t know how much better posthumans will be at predicting the consequences of their actions than humans are. Humans are arguably rather bad at that, especially when emotions are involved. I am quite optimistic that posthumans will be rather good at predicting a lot of the consequences of their actions, but even they will have their limits. That could be an argument in favour of indirect methods for optimizing consequences. Consequentialists will of course say that these methods are merely heuristics, but sometimes (if not always) it’s actually the best strategy to apply heuristics.

These indirect methods usually come either in the form of rules, or in the form of virtues. It’s actually just a question of how you frame things. Both of these approaches seem to be equivalent for the following reasons:

  • Following ethical rules could be framed as prime virtue
  • Being virtuous could be framed as ethical rule

While I’m at it, I could add consequentialist deliberation to the mix in order to demonstrate that all forms of ethics are almost equivalent:

  • Considering the consequences of your actions could be set as ethical rule
  • Optimizing the consequences of your actions could be defined as prime virtue
  • Being virtuous usually has the best consequences
  • Following ethical rules typically leads to very good consequences

The equivalence is incomplete. Direct consequentialist deliberation would always lead to the best consequences if it were infallible, which is not the case. It cannot be said in turn that ethical rules and virtues would always lead to the best consequences, unless the rule states that you have to apply perfect consequentialist deliberation, or must apply the virtue of predicting the consequences of your actions perfectly, which is not possible.

Either way, we have to deal with ethical imperfection. Doing that is hard, but unavoidable. If anything, this kind of ethical imperfection should imply that we need to become better at doing ethics. From my point of view, that is actually the strongest argument in favour of becoming posthuman!