Thank you for your response - I believe, however, that you have misunderstood many of my statements.
No, we were not hiding what we really wanted to do. We told people exactly what we were going to do with the funding obtained: we were, and are, going to create a platform to develop artificial intelligence through crowd teaching. Let me reiterate that the so-called 'singularity' is only a theoretical possibility - it is impossible to say whether our AI would ever reach the point where it is more intelligent than humans. If you truly believe that our AI would be this successful, and hence that we are 'hiding' the true nature of our AI, you are far more optimistic about it than we are.
Equally, note my comment that 'we don't anticipate our platform to demonstrate proper AGI for a considerable period of time'. Claiming that we would definitely develop AGI and then failing to do so would be deception; what we have done is the opposite. I think most people are NOT familiar with the concept of weak vs. strong AI, and we tried to explain it as best we could - we did not intend to deceive, though you may perceive it that way. Think of it as better to under-promise and over-deliver than to over-promise and under-deliver. I think backers would be much happier to be "deceived" (and I use inverted commas because we really don't know what is going to happen) by us achieving far more than they expect than by us delivering none of the grandiose promises we could have made.
Yes, we are optimistic; if we weren't optimistic we wouldn't try in the first place. I fully anticipate and welcome failure, because without failure there can be no success. We believe it can be done, and that is good enough for us. Also, let me make it very clear that we are not promising anything! We are merely saying: fund us and let's see what happens - what we're attempting is entirely novel in its conception. Honestly, I don't care if we 'fail', because we are compelled to try to make our project a reality and we will forever regret it if we don't at least try. We can succeed where Google can't because we are not a commercial enterprise - in the same way that Wikipedia is more successful than Britannica, we hope to harness the power of crowd teaching to accelerate the learning process.
It is always a pleasure to be able to share our aspirations with others and I am humbled that you have shown such interest in our project.
OK, we are not coming at this from a philosophical perspective but rather a technical one. We're not interested in discussing the philosophical ramifications of the singularity, but rather in how to solve day-to-day problems more efficiently through a free, open-source AI platform. We aim to use biomimetic principles to simulate the brain as a not-for-profit online service. I cannot explain what we are trying to do any more simply than that. Deep learning alone is not the answer to AGI - the back end of what we are developing is entirely novel in its design. Allow me to answer these questions by showing you a flow chart of our design. Model.pdf (194.1 KB)
I have to admit I had a real laugh from your comments on whether our AGI should, could or would demonstrate 'consciousness'. Maybe it is speciesist to say that - whether that is or isn't the case is irrelevant to us. We don't know that humans are conscious and other entities are not. We can't tell that other humans are conscious. Personally, I don't know what the basis of consciousness is and I don't care - I cannot speak for the others. Personally, I don't believe this will hinder our success, and I don't believe that it is a key ingredient - our AGI will, of course, have a sense of self and others, but that doesn't mean that it is conscious. If you can figure out how to make a machine conscious, I strongly suggest you patent it and I'll be the first to buy it.
Yes, feelings have an important role in human cognition. I never said that our AI wouldn't reason about emotion - I merely said that it doesn't have feelings. These are two very different things, the difference being that the second implies consciousness - but now we are into semantics. Computers do not have motivation - they are programmed to perform a task. Human motivation evolved to allow us to focus on things that help us survive, like food, sex, shelter, etc. The AGI we are making is designed as an input/output system - it is programmed to give a response to data. To label it as a docile robot or oracle is an unnecessary anthropomorphism.
Gee, thanks! Good to know at least one person believes in my sanity.
No, I don't see a problem here - but if we've got a problem, we fix it. Coming back to our original plan: we are going to relaunch using the lessons from our failed Kickstarter project. We will have better multimedia, better social media, a marketing strategy and, of course, a prototype. If the crowd is not interested in our project, then it obviously won't work. The key to making this project a success is to generate social momentum, which is something we truly believe we can do. If you read our Kickstarter page carefully, you'll see we have already thought about this. Interacting with, and thus teaching, the AI will be like interacting with a baby - it will be play, not work.
Once again, it is fallacious to argue that saying bad people *can* interact with the AGI implies that "bad people are likely to interact with the AGI". I welcome "bad people" to come and try to mess up our AGI, because our AGI will eventually learn who the "bad people" are and will develop a more sophisticated approach to knowledge evaluation. In fact, having "bad people" interact with the AI is the only way to make our AGI safe. Like a naive child learning about the 'big bad world', the AGI will quickly learn that not all people are out there to tell it the truth. I think it is very funny that Cleverbot learnt curses - that's just how humans are, and whether you like it or not we're all humans here on this forum, unless you are an alien which, for better or for worse, I can't rule out.
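To make the "learning who the bad people are" idea concrete, here is a minimal sketch - purely illustrative, not our actual back end; the source names, starting weights and learning rate are all invented for the example. Each source carries a trust score that is nudged toward 1 when its claims end up agreeing with the consensus, and toward 0 when they don't:

```python
def update_trust(trust, source, agreed, rate=0.1):
    """Nudge a source's trust toward 1 when its claim matched the eventual
    consensus, and toward 0 when it did not (an exponential moving average)."""
    old = trust.get(source, 0.5)          # unknown sources start out neutral
    target = 1.0 if agreed else 0.0
    trust[source] = old + rate * (target - old)
    return trust[source]

trust = {}
for _ in range(20):                       # a persistent liar...
    update_trust(trust, "troll", agreed=False)
for _ in range(20):                       # ...and a reliable teacher
    update_trust(trust, "teacher", agreed=True)
print(round(trust["troll"], 2), round(trust["teacher"], 2))  # -> 0.06 0.94
```

The point of the moving average is exactly the "naive child" analogy above: no single lie destroys a source's standing, but a pattern of lies steadily does.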
Once again, it's on the Kickstarter page - simple text first, then moving on to audio and visual. At some point we might consider creating a virtual reality for interaction with the AGI, but by then people will probably already be using the AGI to create games, run game logic, etc.
How do humans deal with conflicting and contradictory inputs from different people? We're trying to create an AGI, which means it will have to deal with conflicting and contradictory inputs from different people. The most often cited claims from the most trustworthy individuals will yield the most likely truth, and hence the output of the AGI. Humans guess and hypothesise; so will the AGI. After all, it is often the case that no one knows the truth, or the truth is unprovable. The AGI is programmed to seek the truth - if the truth it is seeking is logically consistent it will seek logical consistency; otherwise, if the truth is logically inconsistent (say, due to subjective realities or conflicting axioms - cf. Gödel's incompleteness theorems), it will accept logical inconsistency.
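As a toy illustration of the "most-cited claims from the most-trusted sources" rule - again just a sketch, with invented sources, weights and answers - each answer's score is the sum of the trust of everyone asserting it, and the highest-scoring answer becomes the AGI's output:

```python
from collections import defaultdict

def consensus(claims, trust):
    """Pick the answer with the highest total trust-weighted support.

    claims: list of (source, answer) pairs
    trust:  dict mapping source -> weight in [0, 1]
    """
    scores = defaultdict(float)
    for source, answer in claims:
        scores[answer] += trust.get(source, 0.5)  # unknown sources count as neutral
    # The most often cited answer from the most trustworthy sources wins.
    return max(scores, key=scores.get)

claims = [("alice", "Paris"), ("bob", "Paris"), ("troll", "Lyon"),
          ("troll2", "Lyon"), ("carol", "Paris")]
trust = {"alice": 0.9, "bob": 0.8, "carol": 0.7, "troll": 0.1, "troll2": 0.1}
print(consensus(claims, trust))  # -> Paris
```

Note that citation count and trustworthiness trade off against each other here: many low-trust sources repeating a claim can still be outvoted by a few high-trust ones, which is the behaviour described above.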
10 - We have backgrounds in the sciences, arts, computing and health, and we met through word of mouth. Our aim is to expedite solutions to the most pressing and complex problems facing humanity today (global warming, energy, education, health, democracy, clean water, the rich-poor gap) and to the greatest outstanding theoretical problems (a unified model in physics, P vs. NP, accurate supramolecular chemical modelling, etc.).