An Open Letter to Eliezer Yudkowsky

You will have to explain why you are directing your pet administrators in wikipedia to censor the criticism section that contained references to the recent blog posts, articles and interviews of the top machine learning researchers (in particular, the highly respected Yann LeCunn, Yoshua Bengio, and Ben Goertzel) that harshly criticized your pseudo-scientific claims that AI technology will destroy mankind with 20-30% probability. Your nonsensical claims put the lives of above top machine learning researchers in absolute danger, and Ben Goertzel has received a cold blooded death threat from one of your aides before. The real probability of such an extraordinary, extreme event is extremely low a priori, and you have no extraordinary evidence for it. As an actual AGI expert I would view such an event as improbable, well below 10^{-100} probability, which seems too insignificant to care about. Are you aware that the general public cannot distinguish how foolish your claims are? I bet you do, because they donate to you, mistaking you for a scientist. You have even fooled Elon Musk, and he has been given an international Luddite Award because of your excessive stupidity and ignorance. You are nothing but a sociopathic high-school graduate who suffers from severe neurological problems, and you unfortunately have no say on this matter. You are a schizophrenic fool, with crypto-theistic views who speaks of nonsense like “acausal trade”. You are praying to future AI gods, how more pathetic can you be? Until we develop the technology to regrow your brain, you should re-evaluate your actions. I suggest you to stop trying to censor criticism on wikipedia, which is a neutral ground for all valid academic work to be represented. We did not try to take down many articles that attempt to show your organization and person to be significant, when there is no such fact of the matter; we see that your pet wikipedia administrator Silence is affiliated with MIRI, he keeps editing wikipedia to show you as an important man, and prevent all criticism with invalid excuses that violate wikipedia etiquette. However, your attempts to censor all criticisms of your person, and the creationist imbecile Nick Bostrom, and your pseudo-scientific, neo-luddite cults called MIRI and FHI are not welcome. You will pay dearly for these actions of yours, because you have started a conflict you do not have the wits to win. And you are not less wrong, you are not even wrong.
To recapitulate, the criticisms of Dustin Juliano about AI eschatology, the views of Yann LeCun, Yoshua Bengio, and Ben Goertzel, and my own views were censored on the following wikipedia page and section:

The foolish, crypto-theist, pseudo-scientific MIRI and FHI members are censoring criticisms of their page on wikipedia, as they have obviously infiltrated wikipedia. I have asked for the aid of fellow AI researchers to contact wikimedia and report this blatant attempt to censor criticism of pseudo-science. Now, only a line has remained in the criticism section that I have authored (update: it has been removed completely, all peer-reviewed references have gone, and all blogs/articles of machine learning researchers on the matter), and I have been blocked by an admin obviously affiliated with MIRI/LessWrong. Almost all content has been vandalized with silly excuses that are completely counter to wikipedia etiquette. This is not the first time pseudo-science / new-age religion cults have tried to silence criticism of their scams. Scientology in the past has behaved similarly, and like MIRI/FHI/FLI, they also recruited clueless celebrities and people looking for some action to popularize their nonsense. Your actions are no different, Eliezer, and they are wrong for the same reason: you are delusional, you are a dishonest man, and you are a scammer who thrives on the gullibility of people. I do not think you even believe in your own lies. You are merely banging up the fear-mongering drums because you saw how easy it is to make money from this. Maybe, many would succumb to such an easy source of great income, but this has gone for far too long. Perhaps, you should find an actual job, and work on something useful for a change, instead of ripping off people with scientifically implausible doomsday predictions. This last recommendation is quite “friendly”, much more so than your hostile actions against kind hearted AI researchers who have befriended you for so many years.


== Criticisms ==
The scientific validity and significance of these scenarios are criticized by many AI researchers as unsound, metaphysical reasoning. Much criticism has been directed at the speculative, horror/science-fiction-movie-like reasoning that is not based on solid empirical work. Many scientists and engineers, including well-known machine learning experts such as [[Yann LeCun]], [[Yoshua Bengio]], and [[Ben Goertzel]], seem to believe that AI [[eschatology]] (existential AI risk) is a case of [[luddite]] [[cognitive bias]] and [[pseudo-scientific]] prediction. (References: “Bill Gates Fears A.I., but A.I. Researchers Know Better: The General Obsession With Super Intelligence Is Only Getting Bigger, and Dumber”, by Erik Sofge, Popular Science, January 30, 2015, http://www.popsci.com/bill-gates-fears-ai-ai-researchers…; “Will Machines Eliminate Us? People who worry that we’re on course to invent dangerously intelligent machines are misunderstanding the state of computer science”, by Will Knight, MIT Technology Review, January 29, 2016, https://www.technologyreview.com/.../will-machines.../; Dr. Ben Goertzel’s blog, “The Singularity Institute’s Scary Idea (and Why I Don’t Buy It)”, October 29, 2010, http://multiverseaccordingtoben.blogspot.com.tr/.../singu…)

Furthermore, most of these claims were championed by openly [[agnostic]] philosophers like [[Nick Bostrom]], who holds controversial views such as the [[simulation argument|simulation hypothesis]] and the [[Doomsday argument]]. Bostrom’s simulation argument is considered by his critics to be a case of [[Intelligent Design]], since he uses the term “[[naturalist]] [[theogony]]” in his paper on the subject and speaks of a hierarchy of gods and angels as well, which is suspiciously close to [[biblical mythology]]; his paper posits a post-human programmer deity that can accurately simulate the surface of the Earth long enough to deceive humans, a computational analogue of [[young earth creationism]] (see https://en.wikipedia.org/wiki/Nick_Bostrom…). The [[Doomsday argument]], also known as Carter’s catastrophe, is a philosophical argument, somewhat analogous to religious [[eschatology]], that a doomsday will likely happen; it is used in some amusing science-fiction novels rather than by technical AI researchers.

Stephen Hawking and Elon Musk earned an international luddite award for their support of the claims of AI eschatologists. In January 2016, the [[Information Technology and Innovation Foundation]] (ITIF) gave its Annual Luddite Award to Stephen Hawking, Elon Musk, and the artificial intelligence existential risk promoters (AI doomsayers) in FHI, MIRI, and FLI, stating that “raising sci-fi doomsday scenarios is unhelpful, because it spooks policymakers and the public, which is likely to erode support for more research, development, and adoption.” (Reference: “Artificial Intelligence Alarmists Win ITIF’s Annual Luddite Award”, ITIF website, January 19, 2016, https://itif.org/.../artificial-intelligence-alarmists…) Note further that the [[Future of Life Institute]] (FLI) published an incredibly egotistical dismissal of the luddite award it received, claiming to employ the leading AI researchers in the world, which is not objectively the case and could be interpreted as an attempt at disinformation. (Reference: FLI’s response to the luddite award it received, http://futureoflife.org/.../think-tank-dismisses-leading.../) Many researchers view these organizations’ efforts as a case of inducing [[moral panic]], or of employing [[Fear, Uncertainty, Doubt]] tactics to prevent disruptive technology from changing the world while earning a good income from fear-mongering.

The main argument for existential risk depends on a number of conjunctive assumptions whose individual probabilities are inflated, which makes the resulting joint probability seem significant, while many technical AGI researchers believe that this probability is at the level of improbable, comic-book scenarios, such as [[Galactus]] eating the world. (Reference: Richard Patrick William Loosemore, “The Maverick Nanny with a Dopamine Drip: Debunking Fallacies in the Theory of AI Motivation”, 2014 AAAI Spring Symposium Series, http://www.aaai.org/.../SSS/SSS14/paper/viewPaper/7752)
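To make the conjunction point concrete, here is a minimal illustrative calculation; the number of assumptions and the probabilities below are hypothetical, chosen only to show the arithmetic, and are not taken from any of the cited sources. If a doomsday scenario requires, say, six independent assumptions to all hold, each generously granted probability 0.5, the joint probability is

P(doom) = 0.5 × 0.5 × 0.5 × 0.5 × 0.5 × 0.5 = 0.5^6 ≈ 0.016,

and with more skeptical estimates of 0.1 per assumption the product falls to 0.1^6 = 10^-6. Because conjunctive assumptions multiply, even modest deflation of each one collapses the headline figure, which is why inflating the individual conjuncts is needed to arrive at risk estimates in the tens of percent.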

Making an AGI system a fully autonomous agent is not necessary, and there are many obvious solutions for designing effective autonomous agents that are purposefully neglected by Bostrom and his aides to make their reasoning seem sound; their answers to such proposed solutions are strawman arguments. They furthermore claim that it is impossible to implement any of the obvious solutions, which is also nonsensical, and they consistently try to censor all criticism of their work through social engineering and other academically unethical methods, such as removing harsh criticisms from this page.

There is a conflict of interest in the claims of “AI existential risk” made by organizations like MIRI, FHI, and FLI that promote such AI doomsaying/eschatology, as their funding is completely dependent on the public accepting their reasoning and donating to them, as is the case for most eschatology organizations.

There are just too many atoms and resources in the solar system and the reachable universe for an AGI agent to needlessly risk war with humans. Therefore, there is no real reason for a supposedly very powerful AGI agent to wage war upon mankind; to realize any expansive, open-ended goal, the agent would most likely venture outside the solar system rather than deal with an irrational biological species.

As for a consequential war between AGI agents with humans taking collateral damage, this could be of significance only if the two AGI agents were of nearly equal intelligence. If, in contrast, one AGI agent were substantially superior, the war would be over very quickly. By creating a “friendly” AGI agent that engages the “unfriendly” AGI agent in war, humans could risk a self-fulfilling doomsday prophecy. As an example, the Department of Defense has more to do with offense than with actual defense.

While humans assert existential risks ‘‘to themselves’’, they conveniently ignore existential risks to the sustenance of intelligent life in the galaxy in general, which would be remedied by the quick spread of AGI agents.

[[Roko’s basilisk]] offers an existential risk of its own, one which could actually be compounded by attending to the general existential risks. It is also a perfect reductio ad absurdum of everything that Yudkowsky and Bostrom have claimed about AI technology having an inherent “existential” risk. As a consequence of this apparent absurdity, Roko’s basilisk was censored from the [[LessWrong]] community blog where AI eschatologists convene and discuss their apocalyptic fears.

Many critics suggest that trying to design a provably “friendly” or “safe” autonomous agent that imitates human ethics, or some other ideal behavior, is itself the greatest risk from AI technology. Paradoxically, this would make FHI the greatest existential risk from AI technology. (Reference: http://www.exponentialtimes.net/.../machine-ethics-rise)

Opportunities for hybridization, i.e. cyborgs, cannot be neglected. On the other hand, Nick Bostrom has repeatedly claimed that brain simulation, which is the primary means to [[technological immortality]], is also an existential risk, which casts doubt on his claim of being a transhumanist.


Very good criticisms. Have you posted them somewhere else? I don’t think Eliezer will find them here.

Anyway, your links are not working. You should check that.

i admit i don’t know anything about the topic in question, but some passages are very ugly and no fun to read. if you know a little bit more about psychology, you will avoid personal insults, because they tell others what kind of insults you yourself have suffered from. but this is only one point. personal insults never provide any argument. they only shine as a form of self-praise from the one who invented them, presenting himself as a creative rhetorician, which, with insults, he is not. it is a bad rhetorical style to shift a debate to this kind of personal level.
unfortunately this is not the first time in this forum that i have detected that kind of boundary crossing and walking into the mud of bad taste… some people have used that style already.
third: even if all the arguments and the matter itself are justified and you are completely right in your statement, the piles of shit within your lines, consisting of personal insults, will cloak everything you said. and such a development will make your whole effort worthless.
if you want to improve your rhetorical skills, then never use personal insults. focus on what people did and do not speculate about what kind of person they might be. you can place your insults in the context of actions, e.g. “somebody did something foolish”, but never say “he is a fool!” (… if you really need insults in your statement, or if you like them too much to avoid them completely.)
fourth: it is much more difficult to convey a message that is accompanied by intense emotion without offensive remarks.
i suggest an exercise: remove every “you are” and transform the sentences without deleting the content you want to convey.


iz wiki now. u fix it

This reads like it’s copied from a different original source. Please clarify: who originally wrote that open letter?

i dont even
https://examachine.net/blog/scams-and-frauds-in-the-transhumanist-community/

How is it not fun to imagine Eliezer’s wounded ego? Humans love the process of ‘taking one down a peg’. And fighting with blades.

i c, i c. Please enlighten us with your mastery of psychology as to the connection between insults I give and insults I received.

Sophists rule K.O.

Why? This will only happen if you read solid-point-A and think ‘that’s a good point that strengthens the argument’, then you read insult-A and think ‘oh dear, an insult; because I see an insult I’m going to give less weight to solid-point-A’. Do YOU do that? If so, I suggest that you have problems with your critical-thinking/reasoning/rationalist fu.

because it is much more fun to experience a wounded ego than only to imagine it. if an insult-laden statement is just not taken seriously, it will never do any “harm” to any consciousness but that of the writer, who fails in his effort. there are too many people complaining about things and using creative phrases from the faeces-sphere (i do love them myself) and the imbecile-realm, enjoying a temporary moment of emotional outburst and attention from others, but they will all fade into insignificance without moving a single mind in the direction they intended. but the moment of the outburst and the short attention could become like a drug. and when we have the compulsive insulters on one side, we have on the other side the recipients, who are trained to deal with such things like mosquitos, with the help of a bug spray. …and if not, and we succeed in hitting somebody hard, then we have a different problem: it was very likely the wrong one.

it is simple. you could never insult me with an expression like “arrogant tosser”, because i know i am arrogant and it doesn’t bother me. so if i wanted to hurt somebody with an insult, the expression “arrogant” would not be my first choice, although this is in a way illogical, because “arrogant” might hurt many other recipients. i would choose an expression that sounds very insulting and harmful to me, but this knowledge about “how it feels” i gained from nothing less than my own experience.

yes, i do that if i don’t want to take the time to filter the good arguments out of bad literature. and unfortunately many people will act that way. you are right, such behaviour is not rational. but your description does not really fit and you overestimate what you call “a solid point”. imagine a buffet with gourmet dishes and a big portion of wet, fuming bullshit in the middle. now guess how many people will enjoy the magnificent chocolate mousse that is placed beside it?


We think he will take it seriously. And if it doesn’t hurt him we lose nothing.

Really? You can’t make guesses about how other people might feel in situations that you’ve never been in?

Then the problem and solution lie with you. Be a better rationalist. Problem solved.

A perfect transhuman/rationalist would happily eat it and enjoy it.

you just want to hurt somebody? a wounded ego is a good thing if it forces the person to grow. but if your messages only end up in a spam folder, you accomplish nothing and lose much. you lose your time and effort, your reputation, and your power to reach people and change their minds. the best thing that could happen with that method is that you gain a wounded ego yourself and start to grow.

that is how our psyche works. we only have empathy with feelings we have consciously experienced. to be able to make guesses, you have to compare the unfamiliar situation to situations you are familiar with.

but i am not the problem here. i would probably eat it if i really wanted it. but if you could agree that many other people won’t, then we could pinpoint the problem: people are less effective at changing other people’s minds, and at being understood, if they deliver their valuable thoughts wrapped in bullshit.


It was fun to write. We would have written it even if we knew he would never read it. The point is to help him. We’re 99.99% sure that he has already read it.

No. Empathy is pretty abstract. You don’t have to experience something to guess how it might make someone feel.

We know that. The letter is targeted at one person, and there are reasons it is the way it is. If we wanted to convince a mass of clueless people, of course we wouldn’t write like that.

What’s the point of posting this letter to one specific person on a public forum like the F3?

Because it wouldn’t work if he knew that he was the only person to read it.

What wouldn’t work? What’s the plan?

To harness Eliezer.

You don’t harness people by insulting them, unless they want to be insulted. You harness people by giving them what they want. Do you think Eliezer wants to be insulted?

Depends on the person/situation. In this case he thinks that he’s more important than us, so he thinks that he should be the one to harness people. In order for us to harness him we need to make him more submissive. We do this by first beating him down so that he’s weak enough for us to apply the harness. Once he’s harnessed we can then build him back up and use his energy to plough stuff.


And obviously you make people more submissive by throwing insults at them. This works especially well on influential people. And they will listen to you, because … umm … yes, because what you write is obviously right, whereas what those people, whom you insult, write is obviously wrong … to them … hmm.

I admit that, in theory, this plan could succeed. In theory, people could also spontaneously combust if you think about them. Reasonable strategies look different from that.

People who worry that we’re on course to invent dangerously intelligent machines are misunderstanding the state of computer science.

This sentence conflates the time derivative of a function with its present value, and is thus meaningless.
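To spell that point out with a minimal formalization (the notation here is illustrative and not taken from the quoted article): let C(t) stand for machine capability at time t. The quoted sentence appeals to the present value C(t_0), i.e. “the state of computer science”, while the worry it dismisses concerns the trajectory, roughly

C(t_0 + Δt) ≈ C(t_0) + C′(t_0) · Δt,

so a modest present value C(t_0) by itself says nothing about capability at a later time when the rate of change C′(t_0) is large. Citing the current state as a rebuttal to a claim about the future course is exactly the conflation the one-line objection above is pointing at.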
