TR Amat "Need More Robots"

I signed up to the previous version of FFF after I met a rather horsey avatar at a transhumanist event in Second Life (SL). 2009/10? Yes, you can find the avatar “TR Amat” in SL, rezzed in Feb 2007. Since its origin in SL as a tool for AI research, TR has turned up in a number of places, including Google+, Facebook (though not actively), and probably places I’ve forgotten.

TR grows out of several decades of interest in robots as a tool for making the world a better place for everyone involved (including the robots). There are hints about this philosophy in the SL Group “Robot Live!” - any future which involves robot overlords or robot serfs is considered to be pretty broken. Imagine instead a future where your home is a robot that looks after you, as you in turn look after it. It looks like a future where bits of you happen to be robots might not be the worst thing that could happen…

Yes, you can talk about AIs, but one view is that consciousness requires a viewpoint, such as a ‘robot’ (virtual or otherwise).

BTW, the name TR Amat comes from “Test Robot (built from) Available MATerials”.


Interesting. I think this could end up in a discussion of diversity versus egalitarianism. What about people following a robot overlord voluntarily, because they think that’s a really good idea? How would you prevent huge differentials in power and influence in a world in which a vast ecology of different (robot) minds and bodies exists? Or would you rather try to limit the extent of mind and body variability?

Does every robot need the same rights, or would a set of different rights for different kinds of entities be preferable?

When I hear quite a few of those questions asked, I tend to think about the “Turing Police” of several science fiction stories.

And, yes, humans as a group don’t tend to handle diversity well…

Then, I can imagine a few people who’d cheerfully murder in response to their homes being harmed…

More seriously, I think you get somewhere by thinking about the balance between “rights and responsibilities”.

That would be sad, as the two are neither in opposition nor mutually exclusive.

Egalitarianism between humans is not so difficult, in theory. But egalitarianism applied to vastly different minds, from the smart toaster to the world spirit (AI), is quite a challenge. Should we distribute influence (voting power) equally, or rather in proportion to synapse equivalents (I think that would be a reasonable initial approach), or by another measure? It is this diversity in size, not the diversity in flavour, that poses the tough questions.
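To make the size problem concrete, here is a minimal sketch of synapse-proportional voting. The synapse-equivalent counts are entirely made-up, illustrative assumptions for the three kinds of minds mentioned above, not figures from this thread:

```python
# A minimal sketch of synapse-proportional voting weight.
# All synapse-equivalent counts below are illustrative assumptions.

entities = {
    "smart toaster": 1e6,       # hypothetical small appliance mind
    "human": 1e14,              # rough order of magnitude for a human brain
    "world spirit (AI)": 1e20,  # hypothetical planetary-scale mind
}

total = sum(entities.values())

for name, synapses in entities.items():
    print(f"{name}: {synapses / total:.10%} of the vote")
```

Under those assumed numbers the world spirit holds essentially the entire vote, a human’s share is about a millionth of a percent, and the toaster’s share rounds to zero - exactly the kind of power differential the question is worrying about.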

Heinlein said, through ‘Lazarus Long’, that “There’s no such thing as rights, there’s only opportunities”. Rights are a (human) social construct, and typically only apply to beings within an agreed ‘moral universe’. They depend on such things as ‘Agreement Zero’, and there being enough resources to implement them. If you’re curious about (one version of) Agreement Zero:

http://romsys.co.uk/whatwedo/publish/agreements.htm

As a whole, loads of this comes back to considerations of power relationships, and agreements. Then there’s that ‘moral universe’ thingy. 🙂

Power relationships determine the outcome when no agreement is reached, and in that outcome the more powerful factions have an obvious advantage. Therefore, the more powerful a faction is, the less incentive it has to subscribe to an agreement. For that reason, power relations necessarily influence whatever agreement is made, if one is made at all. Ignoring power relations is not a realistic option.

How do you define “moral universe”?

Where there are unequal power relationships, or power relationships that are used inappropriately, agreements are not possible. The situation is made more complex by the multiple power relationships which apply at any particular time, and further complexity comes from groups being involved, rather than only individuals. Making sense of things is harder still because much of human behaviour involves “mental short-cuts”, which can lead to “throwing the baby out with the bath water” scenarios when the situation is felt to have become unacceptable, perhaps due to intractable opposition or perceived complexity.

So, beware the International Union of Smart Toasters! 🙂

On the ‘moral universe’ front, this is about whether entities are considered to have a moral dimension: whether they enjoy any protection under law, or, at worst, are treated as a (common) resource. Classically, “those who live on the other side of the mountain (who worship demons, and eat babies)” are not within the moral universe - ‘us’ and ‘them’ logic, where ‘them’ are outside moral considerations.

Again, ‘mental short-cuts’ are generally used in defining the moral universe.

Very few humans consider robots to fall within their moral universe - the most protection robots get under law comes from being property. They certainly aren’t considered to have an identity, even as ‘minors’.