With the upcoming technology of self-driving vehicles, a discussion started (a while ago, actually) about how an AI should decide in the case of a moral dilemma.
Above I have linked a test where you can judge for yourself what outcome you find acceptable in the given dilemmas. I bet you all have your own thoughts on ethics, and I’m curious about the results.
(Don’t worry Steven, there are just pictures and no English text ;D )
My results:
Moral Machine (link doesn’t work, it seems…)
So take a screenshot, maybe.
To be honest, the results are kind of worthless, because 13 questions are not enough for a differentiated result. For example, you can end up with a gender preference even if gender isn’t important to you at all. The program simply counts how many men and women you kill.
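Just to illustrate what I mean by “simply counts” (this is only a made-up sketch in Python, not the actual Moral Machine code, and the numbers are invented): with a raw tally over 13 scenarios, a “preference” shows up even if gender never played a role in your choices.

```python
from collections import Counter

# Hypothetical data: who died in the outcome you picked for each scenario.
decisions = [
    {"men": 2, "women": 1},
    {"men": 0, "women": 3},
    {"men": 1, "women": 1},
    # ... 13 scenarios in total
]

totals = Counter()
for outcome in decisions:
    totals.update(outcome)

# A ratio like this gets displayed as a "gender preference", even though
# the scenarios may have forced these outcomes for unrelated reasons.
print(totals["men"], "men vs.", totals["women"], "women killed")
```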
Second example: I took the test twice. After the first run my “fitness preference” marker was exactly in the middle; after the second run it was all the way at the “fit people” end …
But you can still compare your results with other participants. The “others” marker is always based on the exact same 13 questions you had.