Moral Machine

With the upcoming technology of self-driving vehicles, a discussion started a while ago about how an AI should decide in case of a moral dilemma.
Above I have linked a test where you can judge for yourself which outcomes you find acceptable in the given dilemmas. I bet you all have your own thoughts on ethics, and I'm curious about the results.

(Don't worry, Steven, there are just pictures and no English text ;D )

My results:

Moral Machine (link doesn’t work, it seems…)

So take a screenshot, maybe.

To be honest, the results are kind of worthless, because 13 questions are not enough for a differentiated result. For example, you can end up with a gender preference even if gender isn't important to you at all: the program simply counts how many men and women you kill.
Second example: I took the test twice. After the first run my "fitness preference" mark was exactly in the middle; after the second run it was all the way at the "fit people" end …
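As a rough illustration of why such a naive tally misleads (this is just my guess at how the scoring works; the dilemma generator below is made up), here is a small Python sketch: a decider who only ever saves the larger group, never looking at gender, will still usually end up with an uneven men/women kill count after only 13 questions:

```python
import random

# Hypothetical sketch: the score likely just tallies attributes of the
# people you "killed" across 13 dilemmas, so with so few samples even a
# decider that ignores gender entirely can show a gender "preference".
random.seed(1)

men_killed = women_killed = 0
for _ in range(13):
    # Each side of the dilemma is a random mix of men and women.
    side_a = [random.choice(["man", "woman"]) for _ in range(random.randint(1, 5))]
    side_b = [random.choice(["man", "woman"]) for _ in range(random.randint(1, 5))]
    # Our decider only cares about group size, never gender.
    killed = side_a if len(side_a) < len(side_b) else side_b
    men_killed += killed.count("man")
    women_killed += killed.count("woman")

print(f"men killed: {men_killed}, women killed: {women_killed}")
# Any imbalance here would be reported as a gender preference,
# even though gender never entered the decision.
```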

But you can still compare your results with those of other participants. The "others" mark is always based on the exact 13 questions you had.


Interesting "game". When you consider that the results of how you play might influence how self-driving cars act in moral dilemmas, it gains a really interesting quality.

When I tried the site, it was terribly slow to load the graphics and results. In fact, so far it has only shown that I saved female executives most and dogs least (would this have been different for wolves, I wonder? ;)).

But yes, the results may not be very telling. It might be interesting to try to extrapolate the "values" of different types of individuals from such tests. How many criminals are worth one executive? What if the executive is a criminal, too? :smiley: But yeah, you would need a much longer test for that.
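For what it's worth, here is a hedged sketch of how such an extrapolation could work, assuming (my assumption, not anything the site documents) that each answer saves the group with the higher total value. The character types and "true" values are invented for illustration; a Bradley–Terry-style logistic fit then recovers the relative values from many simulated answers:

```python
import math
import random

# Invented types and "true" values, purely for illustration.
random.seed(0)
types = ["executive", "criminal", "dog"]
true_value = {"executive": 2.0, "criminal": 0.5, "dog": 0.2}

def make_dilemma():
    a = [random.choice(types) for _ in range(random.randint(1, 4))]
    b = [random.choice(types) for _ in range(random.randint(1, 4))]
    # Simulated participant saves the group with the higher total value.
    saved_a = sum(true_value[t] for t in a) > sum(true_value[t] for t in b)
    return a, b, saved_a

data = [make_dilemma() for _ in range(2000)]  # far more than 13 questions

w = {t: 0.0 for t in types}  # learned per-type weights
for _ in range(200):  # plain gradient ascent on the logistic likelihood
    for a, b, saved_a in data:
        diff = sum(w[t] for t in a) - sum(w[t] for t in b)
        p = 1 / (1 + math.exp(-diff))  # P(participant saves group A)
        grad = (1.0 if saved_a else 0.0) - p
        for t in a:
            w[t] += 0.01 * grad
        for t in b:
            w[t] -= 0.01 * grad

print({t: round(v, 2) for t, v in w.items()})
# The weights only give a relative scale, e.g. roughly how many criminals
# "equal" one executive: w["executive"] / w["criminal"].
```

With a few thousand answers the recovered ratios settle near the true ones, which is exactly why 13 questions can't do the job.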