How to make decisions

This week I pondered how to make decisions. Only now, while writing this, do I realize that this problem is what characterizes my approach to philosophy, which starts with the question “what should I do?”. Decision making is a deeply philosophical problem, even though it seems like a rather mundane issue. In fact, the way you make decisions is an expression of your philosophy, whether you are aware of that or not.

What is a decision?

But let’s start with some basic ideas. What is a decision? A decision is a choice between multiple options. If you only have one option, you take that one option – there is no decision to make. Decisions usually become harder as the number of options increases, because more options mean more evaluations. The problem of decision making can thus be reduced to the problem of evaluating options.

How to evaluate options?

The problem of making a decision is effectively solved if you manage to assign a real number to each option, the value v(O) of that option O. Then you simply pick the option with the highest value, and you have made your decision. So, you need to figure out how to produce that number v(O). It would be convenient if there were a universal algorithm that could evaluate each option meaningfully (of course, it’s possible to evaluate options meaninglessly, for example by assigning each option the value 0). Since we don’t have such an algorithm, we need to think about something else.
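Stated as code, this is just an argmax over options. Here is a minimal sketch, assuming some value function v is already available; the option names and values are invented purely for illustration:

```python
# Minimal sketch: pick the option with the highest value v(O).
# Option names and values are made up for illustration.

def decide(options, v):
    """Return the option with the highest value v(O)."""
    return max(options, key=v)

# Hypothetical example with three options and a subjective value function.
example_values = {"stay": 0.2, "move": 0.7, "wait": 0.4}
best = decide(example_values, lambda o: example_values[o])
print(best)  # -> "move"
```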

Subjective values

We all have different subjective values, many of which can be made up on demand. But let’s assume that we need to evaluate n options: O1, …, On, and that we have m subjective values: v1, …, vm. To make a decision we can evaluate
v1(O1), …, vm(O1)
v1(O2), …, vm(O2)
⋮
v1(On), …, vm(On).
Those are n times m evaluations to make, so the more values (and options) we have, the more difficult it becomes to make decisions. In the easiest case, there is an option Oi for which all values are maximal. That is a clearly superior option, because it’s the best for each value. But usually we aren’t so lucky: generally, there will be options that are maximal for one value, but not for another. Those are the hard choices. The following TED talk, “How to make hard choices” by Ruth Chang, is a philosophical consideration of such hard decisions that’s worth watching:

Unfortunately, the video doesn’t really answer the question of how to make hard choices.
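To make the “clearly superior option” case concrete: an option is easy to pick if it is at least as good as every alternative on every value. The following sketch, with hypothetical options and scores, checks whether such a dominant option exists; when it returns nothing, we are in hard-choice territory:

```python
# Sketch: look for a "clearly superior" option, i.e. one that is at least as good
# as every other option on every value. Options and scores are invented.

def dominant_option(scores):
    """scores: dict mapping option -> [v1(O), ..., vm(O)].
    Returns an option that is >= every alternative on every value, or None."""
    for candidate, cand_scores in scores.items():
        if all(
            all(c >= o for c, o in zip(cand_scores, other_scores))
            for other, other_scores in scores.items()
            if other != candidate
        ):
            return candidate
    return None

scores = {
    "job A": [0.9, 0.2],  # great for value v1, poor for v2
    "job B": [0.3, 0.8],  # the reverse
}
print(dominant_option(scores))  # -> None: a "hard choice" in Chang's sense
```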

Reduction to a global utility function

A possible approach, staying in the mathematical framework described above, is to somehow combine the different values v1, …, vm into a single utility function u that also spits out real numbers. The utility function reduces a multi-dimensional evaluation to a one-dimensional one. A simple candidate would be the product v1 * … * vm of all individual value functions, but that leads to nonsensical conclusions, for example that two negative values make the total utility positive again. The general issue is that we don’t know how to reduce multiple values to one global utility in the right way. It’s just another subjective approach that isn’t clearly superior to picking one single value and taking its result as the answer to the decision question at hand.
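As a toy illustration of why the naive product misbehaves (and of the common alternative of a weighted sum, whose weights are just as subjective), here is a sketch with made-up numbers; neither rule is claimed to be the right reduction:

```python
from math import prod

# Toy illustration: combining per-value scores into one global utility.
# Scores and weights are invented; neither rule is "the right" reduction.

scores = [-0.5, -0.4, 0.9]   # v1(O), v2(O), v3(O) for some option O

product_utility = prod(scores)          # (-0.5) * (-0.4) * 0.9 = 0.18
# Two strongly negative values yield a *positive* product - the nonsensical case.

weights = [0.5, 0.3, 0.2]               # subjective importance weights, summing to 1
weighted_sum = sum(w * s for w, s in zip(weights, scores))   # -0.19
# The weighted sum keeps the sign sensible, but the weights remain subjective.
```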

But it gets worse with time

In reality, our values are not only multiple, they also change over time, as we change our opinions and hopefully become wiser. So, time becomes another factor in our considerations. If you could somehow transmit your current knowledge and values to your past self, you would probably make different decisions, often better ones. Our decision making process is dynamic. Our values are dynamic. Even our options are dynamic, since with more knowledge we often see more options to consider. It would be nice if we could predict our own future values, so that we could make temporally robust decisions – decisions we won’t be tempted to undo in the future. We can’t reliably predict our future values, but we can try to make them more robust by making them as rational and grounded in reality as possible.

Dealing with uncertainty

As if dealing with values that change over time were not enough, we also need to deal with uncertainty. In reality, we usually don’t have all the relevant data and information to get a clear value for even a single option. The values we can compute usually come with different degrees and types of uncertainty. Getting more and better information may help, but it doesn’t fix the problem in general. Also, getting more and better information increases the cost of making any decision, and makes it very much dependent on the information you are able to acquire.
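One way to picture this is to treat each value estimate as a range rather than a single number. The sketch below uses invented ranges, and comparing by worst case is just one possible attitude toward uncertainty, not something the argument above prescribes; it mainly shows how the decision can hinge on how much information you have:

```python
# Sketch: value estimates with uncertainty, represented as (low, high) ranges.
# Numbers are invented; comparing by worst case is only one possible attitude.

estimates = {
    "option A": (0.2, 0.9),   # wide range: little information
    "option B": (0.4, 0.6),   # narrow range: better information
}

worst_case_choice = max(estimates, key=lambda o: estimates[o][0])
best_case_choice = max(estimates, key=lambda o: estimates[o][1])

print(worst_case_choice)  # -> "option B"
print(best_case_choice)   # -> "option A": more data could easily flip the decision
```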

Robustness to information uncertainty

Of course, we could base our day-to-day decisions on numeric data, but that would force us to revise our decisions each time new data suggests we made the wrong decision in the past. Backpedaling might be possible, but it bears its own costs, which can be overwhelmingly high. So, it’s sometimes preferable to have a decision making strategy that doesn’t rely so much on data and information as primary variables. Such a strategy should be more robust under information uncertainty. But what would that look like? Well, we could base our decisions more on principles.

Principled decision making

With principles, we don’t base our decisions on data, but on the degree to which the options in question are compatible with our principles. An additional advantage of this approach is that it reduces the number of options we need to consider: we filter out those options that are simply incompatible with our principles. Even if it were true that we could get what we want via some form of ethically abhorrent criminal activity, we wouldn’t consider such an option if we took our ethical principles seriously. Principles make the complex problem of making decisions more manageable. In the extreme case, we could choose the option that is most in alignment with our principles – though finding out what that “most” means is again quite similar to the “n times m” decision making approach above.
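A sketch of that filtering step, with hypothetical options and with principles expressed as simple predicates; picking the “most aligned” survivor would then reuse the same kind of scoring as before:

```python
# Sketch: principles as hard filters over options. Options and predicates are hypothetical.

def compatible(option, principles):
    """An option survives only if it violates none of the principles."""
    return all(principle(option) for principle in principles)

options = [
    {"name": "honest offer", "legal": True, "honest": True},
    {"name": "shady shortcut", "legal": False, "honest": False},
]

principles = [
    lambda o: o["legal"],    # never consider illegal options
    lambda o: o["honest"],   # never consider dishonest options
]

admissible = [o for o in options if compatible(o, principles)]
print([o["name"] for o in admissible])  # -> ["honest offer"]
```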

What principles should we have?

Now we are in deeply philosophical terrain, and we have arrived at the conclusion that the question of how to make good decisions is a genuinely philosophical issue. Where do our principles come from? How do we create principles? How do we update them?

At this point, I take the easy way and point to my previous thread:

Interestingly, I am basing my current philosophy and my principles on uncertainty about values. This approach may sound paradoxical, but it actually makes sense: If the value of decreasing value uncertainty trumps every other consideration, then increasing wisdom becomes an overriding concern and a guiding principle.

That alone doesn’t suffice to make concrete decisions, but it’s a viable basis for deriving further principles that eventually help a lot in the decision making process.

Wisdom transhumanism

Let’s start with an argument for a kind of transhumanism that I call “wisdom transhumanism”. Since humans are apparently not capable enough to agree on a set of philosophical values, or even on any value at all, it may be that we need to increase our capability to arrive at clear philosophical values in a way that can convince basically anyone. If humans are not up to that task (at least not yet), we can ask ourselves why that is. Is there some quality that we lack? If so, we might solve the problem by increasing that quality – whether it’s intelligence, rationality, empathy, willpower, or something we aren’t even aware of yet. We should become better humans, or even more than human. So, this is a very brief argument in favour of transhumanism: we need to become transhuman (or posthuman), so that we can achieve a higher level of wisdom.

How to become transhuman?

There are certainly many different approaches to increasing the chance that we become transhuman. Here are some examples:

  • Develop artificial general intelligence and somehow merge with it
  • Develop technologies that augment human capabilities directly
  • Develop longevity treatments, so that we have more time for becoming wiser and don’t lose our mental acuity
  • Promote transhumanism, in order to facilitate the approaches above
  • Improve the performance of our societal systems in order to facilitate the approaches above

The approaches should be synergistic rather than conflicting. Still, the decision remains which approach one should focus on. This probably depends more on individual talents, skills, and predispositions than anything else. I don’t claim that the list above is in any way comprehensive, but at least it shows that there are a number of different strategies aiming towards the same overall goal.

In order to make the best decisions, you also need to know yourself.

More concrete principles are needed

Even if the overall goal and strategy are fixed and clear, there are still a lot of tactical and everyday decisions to make. For those, we need further principles to narrow down the range of appropriate options, so that we don’t get overwhelmed by the sheer number of options. However, this is a more subjective concern and would exceed the scope of this thread, so let’s not discuss more concrete principles here.

Conclusions

Making decisions works by evaluating different options with values and principles. Principles can be used to filter out unsuitable options. The way you make decisions is a reflection of your values and principles.

Further questions

  1. What is your general decision making strategy?
  2. How do you make decisions in reality (as judged by observing your actual thoughts and behaviours when making decisions)?

Ok, I’ve come up with a decision making procedure that seems to work quite nicely and doesn’t take too much time; a small code sketch follows the list below.

  • Step 1: Identify options
  • Step 2: Reduce to most important binary decision (do most critical option O or don’t do most critical option O)
  • Step 3: Generate relevant criteria to evaluate the options
  • Step 4: Evaluate both options according to the criteria
  • Step 5: Count for how many criteria each option is superior to the alternative
  • Step 6 (optional): Sleep on the decision
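Here is a minimal sketch of steps 2–5; the criteria names and scores are made up for illustration, and step 1 (identifying options) and step 6 (sleeping on it) stay with the human:

```python
# Sketch of steps 2-5: a binary "do O / don't do O" decision scored against criteria.
# Criteria and scores (higher = better for that option) are invented for illustration.

def count_wins(do_scores, dont_scores):
    """Step 5: count for how many criteria each option beats the alternative."""
    do_wins = sum(d > n for d, n in zip(do_scores, dont_scores))
    dont_wins = sum(n > d for d, n in zip(do_scores, dont_scores))
    return do_wins, dont_wins

# Step 3: relevant criteria; Step 4: evaluate both options per criterion (0-10 scale).
criteria = ["cost", "long-term benefit", "alignment with principles"]
do_option = [3, 8, 9]     # scores for "do O"
dont_option = [7, 2, 5]   # scores for "don't do O"

do_wins, dont_wins = count_wins(do_option, dont_option)
print("do O" if do_wins > dont_wins else "don't do O")  # -> "do O" (wins 2 criteria to 1)
```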