About artificial intelligence

yesterday i had a discussion about artificial intelligence. after blah blah about moore's law, ray kurzweil, terminator and i, robot, predictions for 2020… i felt a longing to define the topic more precisely:

  • “intelligence” is not the point; the point is to create an artificial lifeform with qualia, that is, with consciousness, awareness of the self and a free will.
  • if this lifeform were superior to us, it would be better than us

and here is what i never really understand and what i would like to understand:

  • why is it so often predicted to be dangerous? and isn't that a contradiction?

some brainstorming: to me, the destructive potential of humankind is failure. if i speak of a lifeform that is better than humans, it could not be dangerous, otherwise i couldn't call it “better”. we are not really able to imagine something beyond our experience of ourselves, so the predicted “danger” is nothing more than a psychological projection.

There are many different issues at play here: Intelligence, qualia, values, and power. It’s not easy to see how these are interrelated. Theoretically, these could be totally independent from each other. But it’s reasonable to assume that there are certain relations.
Higher intelligence should imply more qualia, more complex values, and more power.
More qualia could mean more intelligence, more complex values, and perhaps a bit more power.
Complex values require high intelligence. The relation with qualia is unclear. Complex values might reduce power, due to conflicting internal interests.
More power may reduce intelligence by reducing the need for intelligence (whoever is powerful doesn’t need much intelligence to reach their goals). Power might not have any relation with qualia. And finally, power can corrupt and simplify values, though it can be questioned whether that’s necessarily the case.

Let’s just note that things are rather complex, so we may need more complex terminology to discuss a topic like artificial intelligence appropriately.

Now let me try something. We can classify systems into “natural” (N) or “artificial” (A). Then distinguish between merely intelligent systems (I), and systems with complex general intelligence (which would be called (artificial) general intelligences by most transhumanists or futurists), which I prefer to call minds (M). Further we can make a difference between systems with only marginal sentience and qualia, which may be called “objects” (O), and systems with rich sentience and qualia: “subjects” (S). Then there’s also the distinction between reflected values (R) and unreflected values (U). Finally, a system can be empowered and autonomous (E), or confined (C). Thus, we have 5 dimensions:
Power: E (empowered) vs. C (confined)
Origin: N (natural) vs. A (artificial)
Complexity of values: U (unreflected) vs. R (reflected)
Intelligence type: I (intelligent) vs. M (mind, generally intelligent)
Subjective experience: O (object) vs. S (subject)

You seem to be talking about something like an xARMS (confined or empowered artificial reflected mind subject). That is probably a totally different thing than the xAUMOs (confined or empowered artificial unreflected mind objects) that many artificial intelligence researchers and theorists are imagining. Humans, in contrast, are xNxMS (natural mind subjects), because they can be empowered or confined, and unreflected or reflected in their values.
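As a side note, the code letters can be made concrete with a small sketch. The following Python snippet is purely illustrative (the enum names, the `classify` function, and the use of “x” for an unspecified dimension are my own choices, not established notation); it just shows how codes like xARMS, xAUMO, and xNxMS are composed from the five letters in the order Power, Origin, Values, Intelligence, Experience:

```python
from enum import Enum
from typing import Optional

class Power(Enum):
    EMPOWERED = "E"
    CONFINED = "C"

class Origin(Enum):
    NATURAL = "N"
    ARTIFICIAL = "A"

class Values(Enum):
    UNREFLECTED = "U"
    REFLECTED = "R"

class Intelligence(Enum):
    INTELLIGENT = "I"   # merely intelligent system
    MIND = "M"          # complex general intelligence

class Experience(Enum):
    OBJECT = "O"        # only marginal sentience and qualia
    SUBJECT = "S"       # rich sentience and qualia

def classify(power: Optional[Power] = None,
             origin: Optional[Origin] = None,
             values: Optional[Values] = None,
             intelligence: Optional[Intelligence] = None,
             experience: Optional[Experience] = None) -> str:
    """Build a 5-letter code; 'x' marks a dimension left unspecified."""
    dims = (power, origin, values, intelligence, experience)
    return "".join(d.value if d is not None else "x" for d in dims)

# Confined or empowered artificial reflected mind subject
print(classify(origin=Origin.ARTIFICIAL, values=Values.REFLECTED,
               intelligence=Intelligence.MIND, experience=Experience.SUBJECT))
# Confined or empowered artificial unreflected mind object
print(classify(origin=Origin.ARTIFICIAL, values=Values.UNREFLECTED,
               intelligence=Intelligence.MIND, experience=Experience.OBJECT))
# Natural mind subject
print(classify(origin=Origin.NATURAL,
               intelligence=Intelligence.MIND, experience=Experience.SUBJECT))
```

Running it prints xARMS, xAUMO, and xNxMS.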

Anyway, you seem to be assuming a universal measure of superiority with which we can evaluate minds. And this seems to include many of the different parameters above. And destructive potential (or maybe power itself) seems to influence this measure negatively (at least in some sense). How does that universal measure work? And what’s the use of it? I find it more confusing than enlightening to use such a measure.

Instead, we could assume at first that the 5 dimensions I outlined above are pairwise independent of each other, and then argue about relations that violate that independence, meaning that some combinations are less likely or plausible.

Do MOs (minds with general intelligence and low subjective experience) actually exist? Possibly. Are they easy to create? Possibly it’s very difficult after all.
Does high intelligence imply complex values? This may depend on how the complexity of values is measured, and is not an easy question at all.

To answer your question about the fears of “artificial intelligences”: I think that fear comes from the general observation that humans are more powerful than other animals. This difference in power is then interpreted as coming mainly from a difference in intelligence. So, more intelligent beings could easily become more powerful than humans. Now combine this with the observation of how humans dominate over other animals. This suggests that artificial intelligences could dominate over humans easily. Would that be a bad thing? Not necessarily, but when considering how humans have destroyed habitats of other animals, or treated them in very cruel ways, it seems highly plausible that artificial intelligences could do the same to humans.

There are many reasons why artificial intelligences could become dangerous to humans:

  1. They could see us as a threat (Terminator scenario).
  2. They could see us as valuable resource (Matrix scenario or paperclip scenario – as in: An AI programmed to maximize paperclip production would be eager to turn everything into paperclips).
  3. They could ignore us, but multiply, expand, and change the environment such that our habitats get destroyed completely (“fuck this corrosive oxygen in the atmosphere” scenario).
  4. They could try to protect us aggressively and become dictatorial nannies (I, Robot scenario).
  5. They could have very inhuman values and punish us for our sins (Roko’s basilisk scenario)
  6. They could simply decide that the world would be better off without humans, for some reason or other (“eco-idealist” scenario).
  7. They could outcompete us economically so much that we won’t be able to earn the resources we need for our survival (Accelerando scenario or Robin Hanson’s “em economy” scenario)
  8. Or they could simply have higher evolutionary fitness overall, so they would replace us the way we replaced the Neanderthals (Neanderthal scenario).

There are certainly many scenarios I have forgotten here. It seems to be easier to imagine failure scenarios than really good scenarios in which highly powerful artificial intelligences cooperate with humanity peacefully (such scenarios exist, but generally get less media attention).

my question is: what goal could somebody have without qualia? i think it is impossible. goals come with qualia. otherwise it would be no lifeform but just a programmed machine… and its goals would be those of the programmer.

i see problems with your classification:

  1. it doesn't matter on what substance qualia emerge.
  2. classifications like this tend towards hierarchy.
  3. qualia means sentience, and then it's a lifeform, not an object
  4. “only marginal sentience and qualia” <---- typical arrogant human way of thinking. we could never know “what it's like to be a bat”

yes, but not necessarily concerning humans and animals.

Goals come with qualia? I think you are right when it comes to what I call valence qualia: Qualia with attached valence or value. There is also neutral qualia that doesn’t have value attached intrinsically (what would be the intrinsic value of “yellow” or of a sound with a frequency of 200 Hz?). Valence qualia does represent intrinsic value and can therefore influence the goal system of the subject in question.

So, you classify beings into “lifeforms” (corresponding to the S “subjects” in my classification) and programmed machines (corresponding to the O “objects” in my classification). And yes, the values of programmed machines would be those of the programmer, if the programmer is actually competent enough to encode his own values (bugs happen when that is not the case)!

  1. You mean the distinction between “natural” and “artificial”? I never claimed that this distinction was important. And yes, in principle, the existence of qualia should not depend on the substrate that an individual’s mind works on. Perhaps different substrates may favour different “qualia flavours”, but who knows…
  2. I have never seen a 5-dimensional classification system used to create a linear hierarchy. At least not that I can recall. Do you have any evidence for your claim?
  3. and 4. If one assumes some kind of panpsychism in the sense that all systems contain qualia to some degree (I think that’s the case), then things get complicated. To make that view compatible with your definitions, you would have to declare stones to be lifeforms. I think that’s too problematic. Better to make a clear distinction between sentient lifeforms and “the rest”, even if that is not 100% correct in the end (but 99.999999999999% correct or so). And actually I’m not very happy with my own choice of “subjects” vs. “objects”. It’s a first attempt. But I don’t think that “lifeforms” vs. “machines” would be better. Perhaps it would be more correct to speak of “significantly direct valence-carrying beings” (SDVCBs), but that terminology is too bulky.

[quote="Radivis, post:4, topic:675"]
what would be the intrinsic value of “yellow” or of a sound with a frequency of 200 Hz?
[/quote]

the chance for adaptation in the current environment and survival. that might not be important for you concerning “yellow”, but maybe for a bee. and even if the goal were to find a flower, it is very likely that the bee likes the sensation …

[quote="Radivis, post:4, topic:675"]
So, you classify beings into “lifeforms”
[/quote]
no. i divide lifeforms from inanimate things. the latter are not lifeforms, or are no longer lifeforms.

[quote="Radivis, post:4, topic:675"]
I have never seen a 5-dimensional classification system used to create a linear hierarchy. At least not that I can recall. Do you have any evidence for your claim?
[/quote]
this is hierarchy:

[quote="Radivis, post:2, topic:675"]
Further we can make a difference between systems with only marginal sentience and qualia, which may be called “objects” (O), and systems with rich sentience and qualia: “subjects” (S).
[/quote]

[quote="Radivis, post:4, topic:675"]
To make that view compatible with your definitions, you would have to declare stones to be lifeforms.
[/quote]
sorry, i can't see that. stones are not lifeforms. i don't have the intention to change the definition of “life”… although i prefer “autopoiesis” as a very nice approach. a dead body is not a lifeform anymore and a stone never was. but every living being is sentient, because every lifeform has qualia. and the other way round: every autopoietic system with qualia is a lifeform. and here it will become very interesting: if we were ever able to create a real “AI”, a synthetic body with qualia, it would be life. a living being. no matter how “much” intelligence or qualia this lifeform would have.

:sunglasses: yes, you are right, but it was on purpose. fear comes from a general observation, but not concerning animals.