There are many different issues at play here: intelligence, qualia, values, and power. It’s not easy to see how these are interrelated. In theory, they could be totally independent of each other, but it’s reasonable to assume that there are certain relations.
Higher intelligence should imply more qualia, more complex values, and more power.
More qualia could mean more intelligence, more complex values, and perhaps a bit more power.
Complex values require high intelligence. The relation with qualia is unclear. Complex values might reduce power because of conflicting internal interests.
More power may reduce intelligence by reducing the need for it (whoever is powerful doesn’t need much intelligence to reach their goals). Power might not have any relation with qualia. And finally, power can corrupt and simplify values, though it is questionable whether that’s necessarily the case.
Let’s just note that things are rather complex, so we may need more complex terminology to discuss a topic like artificial intelligence appropriately.
Now let me try something. We can classify systems into “natural” (N) or “artificial” (A). Then we can distinguish between merely intelligent systems (I) and systems with complex general intelligence (which most transhumanists or futurists would call (artificial) general intelligences), which I prefer to call minds (M). Further, we can distinguish between systems with only marginal sentience and qualia, which may be called “objects” (O), and systems with rich sentience and qualia: “subjects” (S). Then there’s also the distinction between unreflected values (U) and reflected values (R). Finally, a system can be empowered and autonomous (E), or confined (C). Thus, we have 5 dimensions:
Power: E (empowered) vs. C (confined)
Origin: N (natural) vs. A (artificial)
Complexity of values: U (unreflected) vs. R (reflected)
Intelligence type: I (intelligent) vs. M (mind, generally intelligent)
Subjective experience: O (object) vs. S (subject)
You seem to be talking about something like an xARMS (confined or empowered artificial reflected mind subject). That is probably a totally different thing than the xAUMOs (confined or empowered artificial unreflected mind objects) that many artificial intelligence researchers and theorists are imagining. Humans, in contrast, are xNxMS (natural mind subjects), because they can be empowered or confined, and unreflected or reflected in their values.
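To make the classification scheme a bit more concrete, here is a minimal sketch in Python. The names used (`SystemProfile`, `DIMENSIONS`, `code`) are my own and purely illustrative; the sketch only shows how the five binary dimensions can be composed into codes like xARMS or xNxMS, with “x” standing in for a dimension that is left unspecified.

```python
# Illustrative sketch of the five-dimension classification described above.
# The SystemProfile class and dimension names are hypothetical; they only
# demonstrate how codes such as "xARMS" or "xNxMS" can be composed.

from dataclasses import dataclass
from typing import Optional

# The five dimensions, in the order used by the codes: Power, Origin,
# Values, Intelligence, Experience. An unspecified dimension (None) is
# rendered as the wildcard "x".
DIMENSIONS = [
    ("power",        {"E": "empowered",   "C": "confined"}),
    ("origin",       {"N": "natural",     "A": "artificial"}),
    ("values",       {"U": "unreflected", "R": "reflected"}),
    ("intelligence", {"I": "intelligent", "M": "mind"}),
    ("experience",   {"O": "object",      "S": "subject"}),
]

@dataclass
class SystemProfile:
    power: Optional[str] = None         # "E" or "C"
    origin: Optional[str] = None        # "N" or "A"
    values: Optional[str] = None        # "U" or "R"
    intelligence: Optional[str] = None  # "I" or "M"
    experience: Optional[str] = None    # "O" or "S"

    def code(self) -> str:
        """Compose the five-letter code, using 'x' for unspecified dimensions."""
        letters = []
        for name, options in DIMENSIONS:
            letter = getattr(self, name)
            if letter is None:
                letters.append("x")
            elif letter in options:
                letters.append(letter)
            else:
                raise ValueError(f"invalid letter {letter!r} for dimension {name}")
        return "".join(letters)

# Examples from the discussion:
reflected_agi = SystemProfile(origin="A", values="R", intelligence="M", experience="S")
tool_ai       = SystemProfile(origin="A", values="U", intelligence="M", experience="O")
human         = SystemProfile(origin="N", intelligence="M", experience="S")

print(reflected_agi.code())  # xARMS
print(tool_ai.code())        # xAUMO
print(human.code())          # xNxMS
```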
Anyway, you seem to be assuming a universal measure of superiority with which we can evaluate minds. And this seems to include many of the different parameters above. And destructive potential (or maybe power itself) seems to influence this measure negatively (at least in some sense). How does that universal measure work? And what’s the use of it? I find it more confusing than enlightening to use such a measure.
Instead, we could start by assuming that the 5 dimensions I outlined above are pairwise independent of each other, and then argue about relations that violate that independence, meaning that some combinations are less likely or plausible.
Do MOs (minds with general intelligence and little subjective experience) actually exist? Possibly. Are they easy to create? Perhaps it turns out to be very difficult after all.
Does high intelligence imply complex values? This may depend on how we measure the complexity of values, and it is not an easy question at all.
To answer your question about the fears of “artificial intelligences”: I think that fear comes from the general observation that humans are more powerful than other animals. This difference in power is then interpreted as stemming mainly from a difference in intelligence. So, more intelligent beings could easily become more powerful than humans. Now combine this with the observation of how humans dominate other animals. This suggests that artificial intelligences could easily dominate humans. Would that be a bad thing? Not necessarily, but considering how humans have destroyed the habitats of other animals, or treated them in very cruel ways, it seems highly plausible that artificial intelligences could do the same to humans.
There are many reasons why artificial intelligences could become dangerous to humans:
- They could see us as a threat (Terminator scenario).
- They could see us as a valuable resource (Matrix scenario or paperclip scenario – as in: an AI programmed to maximize paperclip production would be eager to turn everything into paperclips).
- They could ignore us, but multiply, expand, and change the environment such that our habitats get destroyed completely (“fuck this corrosive oxygen in the atmosphere” scenario)
- They could try to protect us aggressively and become dictatorial nannies (I, Robot scenario)
- They could have very inhuman values and punish us for our sins (Roko’s basilisk scenario)
- They could simply decide that the world would be better off without humans, for some reason or other (“eco-idealist” scenario)
- They could outcompete us economically so much that we won’t be able to earn the resources we need for our survival (Accelerando scenario or Robin Hanson’s “em economy” scenario)
- Or they could simply have a generally higher evolutionary fitness, so they would replace us the way we replaced the Neanderthals (Neanderthal scenario)
There are certainly many scenarios I have forgotten here. It seems to be easier to imagine failure scenarios than really good scenarios in which highly powerful artificial intelligences cooperate with humanity peacefully (they exist, but generally get less media attention).