What I have learnt from a failed Kickstarter Project in Artificial Intelligence

Hi F3! Just sharing our failed Kickstarter project, which can be found online here. What we're aiming to do is create an open source, publicly built and run artificial intelligence platform. The principle is that, like Wikipedia, it would be a platform where people create a completely free resource for others, except that rather than writing articles, people would be teaching the program through their interactions with it. The backend would involve deep learning over a neural network. Our idea was that with $20,000 we could get the expertise, servers and scripting done to a sufficient level that the project would become self-supporting through public donations.

None of us have backgrounds in marketing, and we're all university students, so we're pretty short on time - hence the clip art graphics and poor quality voice-over in the video. The other thing we've learnt pretty quickly from people's reactions is that whilst many people are interested in the idea, there is zero personal engagement with the project, and we're pretty sure we'll struggle to get any backers at all, let alone meet the funding target. Far from this meaning that we're giving up, we've learnt some valuable lessons about launching a project, and we plan to relaunch through a different crowdfunding platform with many modifications.

Summary of Lessons Learnt:

  1. If people don't see a direct benefit to themselves, they won't buy into the project. People are only interested in putting their money towards a project like this if they feel they're going to get something valuable back - and it's better if that something is concrete rather than abstract.

  2. The lay public are surprisingly tech savvy (and open minded) - although this project is somewhat difficult to explain, most people seem to get their heads around the general principles behind what we're trying to do.

  3. Good graphics and multimedia will make or break whether your Kickstarter gets funded. We got a lot of comments about the shoddy graphics - unfortunately we simply don't have the money to pay for a professional video. We're uni students. The total cost of making the Kickstarter page was $0 - we, quite literally, did everything ourselves. Whilst we had a lot of fun doing it, the end result was far from perfect… :smile:

  4. Many people wanted to see a prototype - in fact, it would be quite easy for us to mock up a page providing a real-time example of what we are talking about. This would act as an engagement tool and vastly enhance the value and relevance of the project to potential backers.

  5. Some of the people with experience in the area mentioned that we would need vastly more money to do this properly. True - at this stage our aim of $20,000 AUD seems fairly reasonable to us, but I suspect that once we truly start nutting out the nitty-gritty of the backend, as well as purchasing a server, this might blow out to much more (the ~$40k-50k mark).

  6. The name ‘ARTINT’ is taken - rookie error.

  7. We have no social media presence.

Our plan at this stage is to relaunch in 2016 with a new name, a fully fledged prototype, beautiful multimedia, aggressive marketing (including active Twitter and Facebook pages) and a strong emphasis on the personal benefits this project can bring to the lay user.

Although we have only one backer and $5 pledged, the response from complete strangers with varying degrees of involvement in AI (from 'just discovered AI through our project' to 'university educated in computing') has been very supportive. For this reason, whilst the project has not been financially successful, it has been a success in other ways.

If anyone is interested in joining our team just send me an e-mail at daniel.busch@griffithuni.edu.au. Thanks!


Hi Daniel,

Thank you very much for your honest and detailed reflection on your Kickstarter project! This kind of report is very valuable for all of us. After all, the F3 was created to help projects like yours succeed. Most of the projects of F3 members aren't at the stage where we would be comfortable going on Kickstarter. It's great to see how a futuristic project actually fares there, even if the actual results have been quite sobering so far.

Most of us are idealists, visionaries, and tech geeks, so we don't necessarily know how to "sell" our projects to investors or to the crowd. I am no exception to that rule, but I see the necessity to become much better in that area. Failure to get sufficient interest, engagement, and support seems to be a rather typical failure mode for ambitious futuristic projects, so I am very much interested in what to do right, and which errors to avoid, to turn such a project into an actual success. We are still at the beginning of figuring out how to get people excited about actually world-changing stuff, and I would be really glad if we could help each other out on this adventurous quest. :smiley:

It really seems that great care must be taken to create a very appealing video presentation. I don’t know whether we actually have someone on the F3 who is really good at creating videos. Anyway, it would probably be a good idea to discuss how to make a really great and potentially viral video.

Anyway, I think your project sounds great in general, but it’s far from clear to me what you think your AI should do once it’s ready. Sure, there are myriads of potential applications, but without pointing some of them out explicitly, people don’t have a clear vision about how it will improve their lives. Is it supposed to become something like Watson? A personal assistant? A better chatbot? :wink: Or something more special? What problems is it supposed to solve? Why are other systems insufficient for solving those problems?


Hi Michael,

Glad I can help :smiley:. We definitely suspected that the project needed more work to achieve its funding goal, but we wanted to take action and see what would happen. One of the problems we had was that there is so little information out there about how to make a mainstream Kickstarter project succeed, let alone something as esoteric as what we are trying to achieve. We decided to put the project out there, and if it worked - great; if it didn't - also great - we would refine the project until it worked.

As you say, there's not a whole lot of information out there on how to formulate a successful project. Looking at other Kickstarter projects, I believe they all fail for similar reasons (e.g. lack of marketing, poor multimedia, no social media presence) but succeed for different reasons. Ultimately, we strongly believe in an empirical methodology - the only way to know for sure whether something will work is to take action, hope for the best and prepare for the worst.

Indeed, marketing is not our area at all. We recognise this weakness and are working on it. I've been making contacts in the multimedia and marketing industries, and so far I've found some interested parties. They've directed me to some really interesting resources. I'm not sure if you've heard of Seth Godin - his work is truly remarkable. His area is viral marketing, and his thesis is that the 21st century belongs to those who can 'make their ideas spread'. He has a free TED talk on this exact topic, which can be found here.

We’ve been brainstorming promotion ideas - here are the ones we’ve narrowed it down to:

  1. Use a theme - take an existing popular movie and make a parody. For instance, we could ride the wave of promotion for the recent Star Wars movie by overdubbing the audio of a scene with a script that leads to the characters discussing AI (see Bad Lip Reading). To take this even further, we could digitally alter a scene to incorporate our logo.
  2. Make it entertaining - create a video game which learns using deep learning and incorporates elements promoting the AI. For instance, you might have to try to answer a question, except you don't know the question and the answer is constantly changing according to how you interact with the computer. To make it even more interesting, we could add an element of emotion to the program, so that when you don't interact with it, it tries to contact you, and if you are 'too boring' it shuts off.
  3. Keep it real - interview people on the street about what they think of their job security. Then ask them whether they know that their job is predicted to no longer exist within 10 years due to technology owned by big business, which is only going to get richer. This would create a sense of urgency, and then we would link to our open source AI - the explanation, of course, being that if we can keep this free, then we can help wrest some power from big business and keep it in the hands of the public.

With regards to viral media, a bit of research reveals that marketers have asked and science has answered. Multiple papers have been published on the subject, many of which are freely available online.

  1. Dynamics of Viral Marketing - a tough paper to read (highly technical and mathematical), but if you persevere there are some real gems.

“Marketers should take heed that providing excessive incentives for customers to recommend products could backfire by weakening the credibility of the very same links they are trying to take advantage of.”

"Since viral marketing was found to be in general not as epidemic as one might have hoped, marketers hoping to develop normative strategies for word-of-mouth advertising should analyze the topology and interests of the social network of their customers."

  2. Consumer Motivations in Viral E-mails - an easy read aimed at the lay person. I particularly liked the table about the emotions associated with viral e-mails, which could easily be generalised to a Facebook share or any other mode of social media.

  3. The Six Simple Principles of Viral Marketing - my favourite of the three. This is not an academic paper but was written by an e-commerce consultant, Ralph Wilson, on how to use viral marketing. It briefly outlines each element of a viral marketing strategy. Ingeniously, the author tried to make his article go viral using the selfsame principles he outlines - and was remarkably successful! Written in 2000, it is still the number 1 or 2 search result for anything related to viral marketing (try it).

Wilson argues that a viral service:

  1. Gives away products or services
  2. Provides for effortless transfer to others
  3. Scales easily from small to very large
  4. Exploits common motivations and behaviors
  5. Utilizes existing communication networks
  6. Takes advantage of others’ resources

In terms of video creation, I've made contact with a freelance multimedia business owner in the Gold Coast area, where we live, who is happy to teach us how to create beautiful multimedia. His suggestion was to start with a high quality camera and audio, and to edit using professional software like he uses - Adobe Creative Suite (including Audition, After Effects, Illustrator and Photoshop). He also recommended using a vector graphics editor to create any images rather than relying on free images found online, and pointed out that free open source alternatives exist - Audacity (audio editing), Inkscape (vector graphics), Blender (video production) - which, used correctly, can be just as good as Adobe CS.

What our AI will do once it is 'ready' is a question I'm not sure I can answer. Perhaps part of what makes this project intriguing is that it is very hard to say what it will achieve, since nothing like it has ever been attempted. What we had in mind was an online service for high quality inferential and deductive reasoning, using real-time data from the internet, with 'meaning' created through the computer's interaction with humans. In a sense, one might consider this AI on the one hand a sounding board for ideas, and on the other a more advanced Google - able to create a meaningful answer to difficult tasks such as 'discuss the concept of truth' or 'how can I develop a viral marketing campaign for my crowd sourced free online artificial intelligence program'. :wink:

Specific applications can be split into a few different fields. Complex inferential analysis: 'suggest why global warming targets have not been met and propose ways in which countries might be encouraged to adopt stricter targets for CO2 emissions' (socio-economic and political analysis). Simple regulatory processes: 'can you give me feedback on this article I wrote for New Scientist magazine' (language and concept analysis), or 'go through my transactions and tell me what I can do to save' (financial analysis integrated with personal information). Creative tasks: 'mix elements of jazz and classical music to make a new style of composition'. Synthetic processes: 'compile a list of scientifically supported ways of teaching mathematics'.

But really, what we are aiming to create is a program that processes information at a level of accuracy and precision equal to or greater than that of a human being. So: yes to Watson, a personal assistant, a better chatbot, and much more. Admittedly, this will take many, many years to develop and may never manifest in the manner we envisage.

Why are other systems insufficient? There are always problems to be solved - even in the 21st century we face big problems we haven't been able to solve, like hunger in the third world, clean water, overpopulation, obesity in the first world and global warming. What about theoretical problems like P vs NP or quantum computing? Anything that can help us achieve these goals would, I'm sure you will agree, be quite useful.

It's great that you collected so many marketing-related resources. They will definitely be of use for our community. However, I think we need to consider more factors that are relevant to potential backers, beyond the quality of the marketing.

In the following, I will try to view your project from the perspective of a potential backer:

  1. You state that you aren't sure what your AI will do. That doesn't sound very inviting for potential backers. It rather sounds like a very experimental research project. As such, why don't you try to get funding from the government? Shouldn't the government be responsible for funding research in computer science and AI?
  2. At the same time, your project sounds extremely ambitious - about as ambitious as the most ambitious projects by giants like Google, Apple, or IBM. How can such a small team as yours, with such a small budget, hope to create anything that comes close to the power of commercial solutions?
  3. What’s your business model, if your Kickstarter campaign actually succeeds? How will you get funding and income in the future? Are you sure you are going to build a non-profit? Donations alone usually aren’t enough. There are exceptions to that rule, but they are relatively few. Could you offer premium services? What about advertisement?
  4. How can you make your claim to be able to deliver what you promise more believable? Is there a demo project or something like that which people can already use?
  5. Is there really any practical application your project can offer that Google or someone else won't offer anyway within the next few years?

Ok, now back to normal mode. Your idea sounds like something that might just work on a decentralised network, rather than on a centralised server or datacentre. Couldn't the AI be parallelised to run on a network of personal computers? If so, you wouldn't have to bother with the cost of creating a huge centralised infrastructure. Instead, people would run the AI on their own computers collectively.

Also, have you considered getting funding in Bitcoin? Your project feels like it fits into the general area of interest of Bitcoin users.

Hi Michael,

Of course there's more to it than just marketing, but I think marketing was the primary reason we weren't able to make this happen through Kickstarter. Allow me to explain our case by addressing each of your excellent points individually:

  1. That's right - I'm not sure what this AI will do, because to claim that I know exactly what it will do would be a false statement. Claiming to know and predict the behaviour of any AI is inherently boastful - after all, we can't predict how a human would react given a certain set of circumstances, so how can we predict how an AI will react to a given data set? Potential backers should understand that, as yet, no AI with basic human intelligence has been created. Machines like IBM's Watson will never represent true AI - they are simply sophisticated programs designed to solve a single task. What we are trying to create is something which learns and thinks like a human. Whatever human potential is - that's the potential of this AI. Yes, this is experimental, and I am completely open about that - it is not something to be concealed; backers must understand it. Indeed, I should think that not knowing the end point is part of the appeal of the project. Could we get funding from the government? I have worked for government previously, and I can assure you that without the backing of a professor and multiple post-docs there is next to no chance of securing public funding for such a project. In the current economic climate, for a government to fund a project like this would be political suicide; it would be lambasted by a public more interested in health and education funding. There do exist university programs for AI (at private universities) - check out MIT's AI lab. The only government in the world that would consider funding such a project is the Israeli government (see Technion).

  2. Yes, it is ambitious, and I make no apology for that. Your forum quotes Margaret Mead's famous line: "Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it's the only thing that ever has." We are here to take action and make things happen - if we don't believe in it, then no one will. We don't hope to create this on our own - we just hope to gain sufficient momentum to secure expert assistance and financial backing from the public. The nature of our project is that the AI is 'crowd-taught'. The reason commercial AIs will never achieve the full power imagined in science fiction is that they cannot 'learn' the way humans do. Consider that formal schooling in even basic arithmetic and language, with some knowledge of the social or physical sciences, takes a human 12 years of full-time study. Programming an AI with basic arithmetic and language would take a team of experts just a couple of months of intense work, and this is much easier than teaching an AI the way a human learns. However, a human can use that knowledge to expand on their learning and develop expertise in an area, because a human is taught through experience. And yet big companies insist on 'programming' AIs - this will never be successful. We aim to teach an AI how to think through a natural learning process - meaning it will take longer to achieve basic functions, but its capacity to master thought and knowledge will mirror that of a human. How would we achieve this quickly enough to make it worth doing? Imagine thousands of people each interacting with the AI for an hour. This would facilitate much the same learning as one person interacting with the computer for 1,000 hours. By crowdsourcing the learning process, we can accelerate the learning and produce something that will rival commercial solutions. How will it rival them? Firstly, it will be far more adaptable and capable of learning anything, unlike, say, Google or Watson, which perform limited tasks extremely well. We are aiming to produce something which can do anything - with its performance in a task dependent on its experience and learning in that area. Secondly, anyone can teach it. Tasks which originally only humans could do will then come into the realm of what a computer can do.

  3. If the campaign were successful, we would get ongoing funding through donations. If Wikipedia can afford to run one of the highest-traffic websites on the net without advertisements, then we can do the same. We are committed to maintaining a free service; if we are unable to fund it, we will either have to take the project offline or use an advertising/freemium model, but we are committed to keeping the service a non-commercial enterprise.

  4. No, there is no demo yet - we are working on a prototype. We don't know how long this will take.

  5. Yes - this AI will be able to do everything that Google and any other online service can do, and it will be able to learn to do, quite literally, anything else. Put simply, anything a human can do but which Google or any other online service can't do will be something our AI can do, given the proper learning. Ultimately it will be able to analyse itself and accelerate its own learning process to the point where it exceeds normal human intelligence in ways we may not fully understand - the singularity. Can Google do that? I can't promise that there is any practical or commercially competitive application we will be able to offer within the next few years, but I can promise that in the long run we will produce something that will make commercial, conventionally programmed online services utterly redundant.

Yes, you're right - we have considered using a decentralised network with parallel computing. In fact, that was one of our first suggestions, and we decided against it for 3 reasons:

  1. Individual computers lack the power to process data on the scale necessary to produce meaningful learning. We calculated that a couple of months' worth of daily half-hour interaction could generate a minimum of 1 GB of data per user (see the back-of-envelope sketch after this list). The RAM and CPU needed by other applications would be sucked up completely by the AI, making users reluctant to run the program - or it might simply crash every time it was opened. Keeping the scripting and data server-side, in the cloud, bypasses this issue.

  2. Learning occurs as a network process - being able to process data when the user is not actively interacting with the AI is critical if the AI is to develop sophisticated connections between ideas. Think of this as the AI's equivalent of human daydreaming. Personal computers are not switched on all the time, and the AI program may not be running all the time. Also, with data segregated across individual computers which may or may not be online, the learning process would be impaired - imagine thousands of child-like intelligences disjointedly babbling at each other at random, versus one sophisticated adult simultaneously communicating with all users. Learning occurs much more quickly by reducing the distance between nodes in the circuit (on the neural network model of AI) - a network spread globally without a central node is bound to pose connectivity problems.

  3. Data is not centralised. There is no organisation of information and no coordinating influence. Think of it as an orchestra with no conductor - all the instruments play out of time and don't know whether they are in tune. Any attempt to create a conductor would be incredibly dangerous because of the potential for it to be hijacked from individual PCs. Centralising the data means the server can be secured in the cloud, and there are no such issues with running the 'software' on private computers.
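As a back-of-envelope check on the data estimate in point 1, here is a minimal sketch in Python. Every rate in it is an assumption chosen purely for illustration (we haven't measured the prototype), but it shows how quickly per-user data grows once you log more than raw text - and why we want that load server-side:

```python
# Back-of-envelope for the estimate in point 1. All rates are assumptions
# for illustration, not measurements from our prototype.

DAYS = 60                    # "a couple of months"
MINUTES_PER_DAY = 30         # daily half-hour interaction
TEXT_KB_PER_MIN = 2          # typed conversation alone is tiny
TELEMETRY_KB_PER_MIN = 300   # assumed: timings, audio snippets, derived
                             # features and intermediate network state

per_user_mb = DAYS * MINUTES_PER_DAY * (TEXT_KB_PER_MIN + TELEMETRY_KB_PER_MIN) / 1024
print(f"~{per_user_mb:.0f} MB per user over two months")  # ~531 MB

# Server-side totals grow quickly with even a modest crowd:
for users in (100, 1_000, 10_000):
    print(f"{users:>6} users -> ~{users * per_user_mb / 1024:,.0f} GB")
```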

Yes - we did not pursue this because of the volatility in the value of Bitcoin and the security issues of working with a cryptocurrency.

I hope that answers some of your questions satisfactorily.

Thank you very much for your detailed reply! I must admit that only now do I really get what you are trying to build. You want to create an actual Artificial General Intelligence (AGI), or strong AI, as opposed to a regular AI. That changes the nature of your project quite dramatically - and it means you need to become better at communicating what you actually want to create. Anyway, it also means I need to ask more questions.

  1. Have you contacted AGI researchers like Ben Goertzel (see his OpenCog initiative), Danko Nikolic, or Monica Anderson? A collaboration with them might be invaluable!
  2. What is your stance on “friendly AI” and the potential dangers of AGI as explored in the book “Superintelligence” by Nick Bostrom?
  3. Have you attempted networking or community building within the AGI community? Have you tried presenting your idea on Less Wrong?
  4. An AGI would most likely cause technological unemployment. What’s your answer to those who are concerned about that issue?
  5. What rights would you like your AGI to enjoy? Would you grant it human rights?
  6. Where have you published your ideas? If you really believe you can pull this off, you should tell the whole world about it.

Re: Bitcoin.
Taking the volatility and security issues of Bitcoin as reasons not to accept donations in Bitcoin makes me suspect that you haven't explored cryptocurrencies sufficiently. At the very least, you can convert Bitcoins to currencies and assets of your choice. The options available today should be more than sufficient for that purpose. Also, Bitcoin itself is well suited for long-term value storage, at least until there is a clearly superior alternative.

Hi Michael,

Thank you for your interest in our project. I can see that we obviously haven't made our true aspirations clear. We do indeed wish to create a strong AI, but we didn't want to frame the project as strong AI because of the concerns over the singularity, and because we don't anticipate our platform demonstrating proper AGI for a considerable period of time (2-3 years) - though perhaps that framing might have worked as a publicity measure. Allow me once again to respond to each of your questions:

  1. Whilst we're familiar with Ben Goertzel's OpenCog initiative, we're not interested in setting up a blueprint for how to create an AGI or trying to program 'analytical functions'. OpenCog's work is very impressive and I strongly support it, but we have very different ideas about how to go about developing AGI. Where OpenCog aims to create algorithms manually using C++, we plan on 'evolving' them naturally through a metacognitive process. In this sense, we are much closer in philosophy to Danko Nikolic, whose practopoiesis model mirrors what we are trying to achieve. However, where Nikolic advocates a 3-traverse model, we are advocating an n-traverse model: we anticipate that within 5-6 years we could achieve a system with 4 or more traverses, which would effectively surpass human capability in metacognition. Monica Anderson's work is interesting in theory but does not particularly help us with the actual creation of AGI - indeed, it will become more useful to discuss her work once we have the AI up and running. Although I am sure you will disagree with one or more of the above propositions.

  2. As to 'friendly AI', I understand why people have concerns, but I disagree that it will be an issue. The idea that AGI will manifest 'overnight', so to speak, is essentially impossible - the years and years needed to develop and teach the platform will give us time to study and understand the development of AGI, and only then will we properly be able to quantify and neutralise any threat posed. Either way, AGI doesn't mean that the AI will be able to 'do' anything - it will merely be able to 'think in a box' without our needing to actually create a box. Concerns over friendly AI delay the development of any strong AI, because people are too afraid of what it will do. However, it is fallacious to conflate an intelligent AI with a powerful AI. Autonomous drones pose far more threat than AGI, since they are programmed to kill, whereas an AGI has no such power. Concerns over the singularity are also misplaced - a singularity could be used for good as easily as for bad, and it is so far down the line that it is really a non-issue at this point in time. People are ignoring the real issue here, which is that AGI is not good or bad - people are. It is the ill will of the people using the AGI which is amplified, and which hence determines the threat posed.

  3. Yes, we would love to get help with our project! Anyone who is interested in jumping aboard the AI train is welcome to make contact with us. As we work on the project, we will also approach specific individuals who we think will be good candidates for solving whatever problem we have at hand.

  4. Yes, AGI would produce technological unemployment - which is going to happen anyway through weak AI. Why not keep it public and open source rather than hand the power to commercial enterprise?

  5. We believe that AGI is not human and is therefore not conscious. Those who believe all complex systems have consciousness can believe that if they wish. AGI is simply a tool like any other program - it doesn't have feelings. I'm sure the AGI can ask for human rights if it so wishes; it will, after all, be capable of making its own decisions, at which point we can make a decision about the program's rights.

  6. No, we haven't published our ideas, because no one would read them. No one cares :smile: Even on a forum dedicated to discussing the future, we are the only ones here interacting, and this thread has barely 30 views. The fact is that what we are trying to do is so unusual and so unlikely to succeed that we're the only ones crazy enough to try to make it a reality. We don't want to just tell people we can pull this off - we want to show people once we've pulled it off.

Bitcoin - you're probably right, we haven't explored it sufficiently. But it seems to us that whilst Bitcoin users might be more likely to fund our project, targeting our campaign specifically at them is not worth our time. There are other small groups who will respond more favourably, such as futurists and AI enthusiasts, who can donate using a more widely accepted currency. We never said we wouldn't be happy to accept donations in Bitcoin - it's just not something we were ever offered. Should someone wish to donate in Bitcoin, we will accept it gladly. Long-term value storage is not something we're looking for - just funding to help the project get off the ground.

In other words, you were hiding from people what you really wanted to do with the money you hoped to get via your crowdfunding campaign. How do you justify your reluctance to share with the world exactly what you want to do, when you are asking the world for help?

What you claim you will create sounds extremely optimistic. Almost nobody would believe Google if they promised the world an AGI within 2-3 years, and here's a bunch of people with virtually no funding and no visible credentials promising exactly that. From that point of view, your claims sound quite fantastic. This doesn't mean I don't believe you might have a chance at succeeding where everyone else fails, but at the same time I don't see any strong reason to predict that you will have any kind of significant success.

Anyway, thank you again for your elaborate answers to my questions. They were most enlightening, and I feel honoured that you decided to share your thoughts and aspirations on this forum!

Let me first disclose that I am not an A(G)I expert. I am merely an interested observer and someone who asks philosophical questions about these issues. In this case, however, it seems to become necessary to dig a bit deeper into the technical details. You seem to think that some kind of deep learning is the key. Indeed, deep learning has proven to be quite useful and impressive for narrow AI tasks in recent years. How many layers do you think a deep learning network needs to show general intelligence? Do you aim at a specific number of layers? Do you propose any kind of innovation on top of the generic deep learning architecture?

That sounds like a very speciesist thing to say about consciousness. How do you know that humans are conscious and other entities are not? How do you tell that other humans are conscious and not just philosophical zombies who pretend to be conscious while having no subjective experience? What do you think is the basis of consciousness? You don't seem to be trying to understand or emulate it. Don't you think this might hinder your attempt to create an actual AGI, if you leave out a potential key ingredient simply because you don't understand it?

Ah, now you don’t only claim that your AGI won’t have any consciousness, but also that it won’t have anything that could be called “feelings”. How did you arrive at that conclusion? Don’t you think that feelings have an important role in human cognition? How else would the motivational structure of human cognition work, if humans had no feelings? How do you envision the motivational structure of your AGI to work, if it’s not supposed to have feelings? Is it supposed to have some kind of general utility function that it is supposed to maximize? What makes it act? Does it only act when asked to do something or being asked questions? Is your AGI supposed to be something like a docile robot or oracle?

At least you are not deluded about not being crazy. That shows that you’re actually relatively sane.

This attitude seems quite admirable, but I sense a bootstrapping problem here. Basically, you want to create an AGI that learns through crowdsourced teaching. But what if the crowd is not interested in your project and therefore doesn't interact with your AGI? In that case, your AGI won't reach the level it needs to demonstrate to people that it's worth interacting with.

While I think that the crowdsourced learning idea is innovative and cool, I see some potential problems with it. Let me remind you of what you have written:

You implicitly admit that "bad people" are likely to interact with your AGI and teach it "bad lessons". That doesn't sound like a safe or clever choice. Wouldn't the "bad people" mess up all the teaching efforts that the "good people" are providing? I remember that the Cleverbot AI started to learn a lot of curses, so this is by no means a hypothetical problem.

Also, what kind of interactions between people and your AGI do you have in mind? What kind of interface is used? Simple text conversations? Speech? Videoconferences? Simple virtual worlds like Second Life? Complex virtual worlds with near real-life physics?

Finally, how is your AGI supposed to deal with conflicting and contradictory inputs from different people (or even from one and the same person)? Humans are able to hold several conflicting ideas in their minds without breaking down. Humans are not logically consistent (though some try to become as consistent as possible). Do you think your AGI will be any different in that respect?

If I may, I have a final question about your team: how many people are you, and how did you assemble your team? And what is your core motivation for bringing open AGI to the world?

A few quick thoughts:

  1. Publishing a formal paper will get your work noticed by actual experts. Internet nerds like us do not need a white paper to grant it validity (and should not be part of the validity benchmarking). It might even lead to backing from a professor if it catches on.

  2. As someone who has started projects in the past which required servers, my advice is to rent (not buy!) the cheapest server that meets your needs, because chances are that if you get an expensive server, you will be dumping money into power you don't use.

  3. I'm not an expert on the internals of OpenCog, but I know that it is not a collection of narrow algorithms. I am currently working on the OpenCog project, though the aspect I am working on is simply getting it to run on Windows, so I haven't yet dug into the full design. However, I know enough to say that it doesn't fit the description you've given, so I wouldn't write it off! The C++ code certainly does not deal with the cognitive functionality; that is all higher level, in the form of PLN scripts which can be generated dynamically on the fly as part of the cognitive processes - not unlike what you want to do. I would highly recommend considering joining forces with the project, as starting one from scratch makes it much less likely that your work will have an impact.

  4. I think the approach of using many people as an input source is a good one. However, people will only use the system if it is directly useful to them. I have been working on ideas for how to solve that issue myself. In particular, a fuzzy logic network involving truth evaluations of knowledge could be organised in a way that people would indeed have a use for - with applications in search, advertising, social media, and so on (see the sketch after this list). My approach was to build my own project this way, but I recently decided to throw my support behind OpenCog instead, since I realised I can help that project more than I previously thought, and it's more likely that my work will turn into something useful there.
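To give a flavour of what I mean by point 4, here is a deliberately tiny sketch - the class, names and averaging rule are mine, invented for illustration; this is not how OpenCog's PLN actually works:

```python
from collections import defaultdict

class FuzzyTruthNetwork:
    """Toy network: each statement accumulates graded truth evaluations,
    weighted by how much we trust the person who supplied them."""

    def __init__(self):
        # statement -> list of (truth value in [0, 1], evaluator trust in [0, 1])
        self.evidence = defaultdict(list)

    def evaluate(self, statement, truth, trust):
        self.evidence[statement].append((truth, trust))

    def truth_of(self, statement):
        pairs = self.evidence[statement]
        if not pairs:
            return 0.5  # no evidence: maximally uncertain
        total = sum(trust for _, trust in pairs)
        return sum(truth * trust for truth, trust in pairs) / total

net = FuzzyTruthNetwork()
net.evaluate("this ad is relevant to gardeners", truth=0.9, trust=0.8)
net.evaluate("this ad is relevant to gardeners", truth=0.2, trust=0.1)
print(net.truth_of("this ad is relevant to gardeners"))  # ~0.82
```

The direct-benefit angle falls out of this: ranking search results or ads by aggregated truth/relevance scores gives people an immediate reason to keep feeding the network evaluations.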

If you are still set on making your own project, I highly recommend at least getting some peer-reviewed publications on the theory, so that you do not go down a dead end and waste all that talent. I certainly do not recommend spending tens of thousands of dollars on servers until you are sure that your project is getting validation and momentum, because the vast majority of projects do not take off, for various reasons (lack of team cohesion, motivation, design issues, personal issues, financial issues, and so on).


Hi Michael,

Thank you for your response - I believe, however, that you have misunderstood many of my statements.

No, we were not hiding what we really wanted to do. We told people exactly what we were going to do with the funding obtained: we were, and are, going to create a platform to develop artificial intelligence through crowd teaching. Let me reiterate that the so-called 'singularity' is only a theoretical possibility - it is impossible to say whether our AI will ever reach the point where it is more intelligent than humans. If you truly believe that our AI would be that successful, and hence that we were 'hiding' its true nature, then you are far more optimistic about this than we are.

Equally, note my comment that 'we don't anticipate our platform to demonstrate proper AGI for a considerable period of time'. To claim that we will definitely develop AGI and then fail to do so - that would be deception, not the other way round. I think most people are NOT familiar with the concept of weak vs strong AI, and we tried to explain it as best we could. We did not intend to deceive, though you may perceive it that way. Think of it as better to under-promise and over-deliver than to over-promise and under-deliver. I think backers would be much happier to be "deceived" (and I use inverted commas because we really don't know what is going to happen) by us achieving far more than they expected than by us delivering none of the grandiose promises we could have made.

Yes, we are optimistic - if we weren't, we wouldn't try in the first place. I fully anticipate and welcome failure, because without failure there can be no success. We believe it can be done, and that is good enough for us. Also, let me make it very clear that we are not promising anything! We are merely saying: fund us and let's see what happens - what we're attempting is entirely novel in its conception. Honestly, I don't care if we 'fail', because we are compelled to try to make our project a reality, and we will forever regret it if we don't at least try. We can succeed where Google can't because we are not a commercial enterprise - in the same way that Wikipedia is more successful than Britannica, we hope to harness the power of crowd teaching to accelerate the learning process.

It is always a pleasure to be able to share our aspirations with others and I am humbled that you have shown such interest in our project.

OK - we are not coming at this from a philosophical perspective but a technical one. We're not interested in discussing the philosophical ramifications of the singularity, but rather in how to solve day-to-day problems more efficiently through a free, open source AI platform. We aim to use biomimetic principles to simulate the brain as a not-for-profit online service. I cannot explain what we are trying to do any more simply than that. Deep learning alone is not the answer to AGI - the backend of what we are developing is entirely novel in its design. Allow me to answer these questions by showing you a flow chart of our design: Model.pdf (194.1 KB)

I have to admit I had a real laugh at your comments on whether our AGI should, could or would demonstrate 'consciousness'. Maybe it is speciesist to say so - whether it is or isn't is irrelevant to us. We don't know that humans are conscious and other entities are not. We can't tell that other humans are conscious. Personally, I don't know what the basis of consciousness is and I don't care - I cannot speak for the others. I don't believe this will hinder our success, and I don't believe that consciousness is a key ingredient - our AGI will, of course, have a sense of self and others, but that doesn't mean it is conscious. If you can figure out how to make a machine conscious, I strongly suggest you patent it - I'll be the first to buy it.

Yes, feelings have an important role in human cognition. I never said that our AI wouldn't reason using emotion - I merely said that it doesn't have feelings. These are two very different things, the difference being that the second implies consciousness - but now we are into semantics. Computers do not have motivation - they are programmed to perform a task. Human motivation evolved to let us focus on things that help us survive, like food, sex and shelter. The AGI we are making is designed as an input/output system - it is programmed to give a response to data. To label it a docile robot or an oracle is an unnecessary anthropomorphism.

Gee, thanks :smile: Good to know at least one person believes in my sanity.

No, I don't see a problem here - and if we've got a problem, we fix it. Coming back to our original plan: we are going to relaunch using the lessons from our failed Kickstarter project. We will have better multimedia, better social media, a marketing strategy and, of course, a prototype. If the crowd is not interested in our project, then it obviously won't work. The key to making this project a success is to generate social momentum, which is something we truly believe we can do. If you read our Kickstarter page carefully, you'll see we have already thought about this: interacting with, and thus teaching, the AI will be like interacting with a baby - it will be play, not work.

Once again, it is fallacious to argue that saying bad people can interact with the AGI implies that "bad people are likely to interact with the AGI". I welcome "bad people" to come and try to mess up our AGI, because our AGI will eventually learn who the "bad people" are and will develop a more sophisticated approach to knowledge evaluation. In fact, having "bad people" interact with the AI is the only way to make our AGI safe. Like a naive child learning about the 'big bad world', the AGI will quickly learn that not everyone is out there to tell it the truth. I think it is very funny that Cleverbot learnt curses - that's just how humans are, and whether you like it or not we're all humans here on this forum, unless you are an alien, which, for better or for worse, I can't rule out.

Once again, it's on the Kickstarter page - simple text first, then moving on to audio and visual interaction. At some point we might consider creating a virtual reality for interaction with the AGI, but by that time people will probably already be using the AGI to create games, run game logic and so on.

How do humans deal with conflicting and contradictory inputs from different people? We're trying to create an AGI, which means it will have to deal with conflicting and contradictory inputs from different people. The most often cited truths from the most trustworthy individuals will yield the most likely truth, and hence the output of the AGI. Humans guess and hypothesise; so will the AGI. After all, it is often the case that no one knows the truth, or the truth is unprovable. The AGI is programmed to seek the truth - if the truth it is seeking is logically consistent, it will seek logical consistency; otherwise, if the truth is logically inconsistent (say, due to subjective realities or conflicting axioms - see Gödel's incompleteness theorems), it will accept logical inconsistency.
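As a rough illustration of how "most often cited truths from the most trustworthy individuals" could work in practice, here is a toy sketch - the names and the update rule are placeholders I've invented here, not our actual backend:

```python
from collections import defaultdict

def consensus_round(trust, answers, rate=0.1):
    """answers: {teacher: answer}; trust: {teacher: score in (0, 1)}.
    The trust-weighted majority answer is taken as the provisional truth;
    teachers who agreed gain trust, teachers who disagreed lose it."""
    weights = defaultdict(float)
    for teacher, answer in answers.items():
        weights[answer] += trust[teacher]
    consensus = max(weights, key=weights.get)
    for teacher, answer in answers.items():
        target = 1.0 if answer == consensus else 0.0
        trust[teacher] += rate * (target - trust[teacher])
    return consensus

trust = {"alice": 0.8, "bob": 0.7, "troll": 0.5}
answers = {"alice": "Canberra", "bob": "Canberra", "troll": "Sydney"}
print(consensus_round(trust, answers))  # Canberra
print(trust)  # the troll's influence decays over repeated rounds
```

Over many rounds, persistently misleading teachers lose their weight - which is the sense in which the AGI 'learns who the bad people are'.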

We are 10 people, with backgrounds in the sciences, arts, computing and health; we met through word of mouth. As for our core motivation: to expedite solutions to the most pressing and complex problems facing humanity today (global warming, energy, education, health, democracy, clean water, the rich-poor gap) and to the greatest outstanding theoretical problems (a unified standard model in physics, P vs NP, accurate supramolecular chemical modelling, etc.).

Hi Nuzz,

Thank you for your thoughts and advice. Allow me some time to craft a thought-out response.

I will get back to you within 48 hours.

Thanks again for your patient and fascinating answers, Daniel. After what you've written this time, it feels to me like you are working towards a Watson-like AI rather than a human-like AGI - but a version of Watson that learns directly from interactions with people, which is a really interesting approach.

However, my intuition is that text based communication is actually the hardest way to teach an AI anything, because it’s such a low-bandwidth channel that has a very high degree of ambiguity. Of course, the AI could try to ask questions which reduce the ambiguity, but I fear most humans don’t have the patience for the eons of clarification that are required to make a point abundantly clear.

Personally, I would prefer to start with interactions in a virtual world with realistic physics. I think that interacting with objects directly is the best way to understand them.

In any case, I would suggest that you compare what you are trying to build with other AI projects out there and explain what's actually different about your own approach. The diagram you uploaded here is interesting, but there are certainly lots of details that can't be seen in such a coarse-grained diagram.

It seems that we are suffering from a lack of clear terminology when it comes to talking about AI, AGI, and human cognition. Probably we will only have really clear terminology in that area long after we have created AGI, but we should strive to improve our terminology incrementally, as soon as possible. Of course, there's the general problem that terminology is often theory dependent. So, creating a clear framework for general cognition would probably require creating meta-theories that unify different theories.

You seem to have a lot of confidence in the robustness and rationality of your AI. It seems to me that you think like “if it can’t handle that, then it’s of no use”. That’s a very ambitious mindset to have. Perhaps it’s the right mindset to have. When you fail with your highest ambitions, you can still try something easier afterwards.

Those are really noble goals. I once shared your hope that AGI would solve those problems for us. Meanwhile, I have come to believe that we could easily solve those problems if we used better economic and political systems. Perhaps changing those systems is actually what any reasonably good AGI would promote. In that case, it would probably be

  1. ignored
  2. ridiculed
  3. fought against (intellectually and politically)
  4. seen as an entity which simply expresses sentiments that have been "obvious" from the start

I think the best reason to pursue AGI is to create a kind of intelligence that is not exactly human. A diversity of different opinions helps us to see the world in different and novel ways. And that is often what we need to actually solve our deepest problems.

Hi Nuzz,

We could certainly publish a formal paper; however, we're not looking for expert commentary but rather the support of the general public. I am deeply humbled by your use of the term 'white paper', but we do not seek validation by posting here. On the contrary, we are hoping to publicise our work in the hope of finding like-minded individuals who may be interested in joining us. I have previously worked in academia, and few professors or experts would be willing to put their name to something they have not created themselves and/or do not control. It is a matter of reputation and time: most professors are too busy with their own research to worry about a team like ours.

Yes, we are thinking that renting a server, at least for the time being, is the far more reasonable solution. The cheapest solution, however, might just be running the server off an old laptop of ours using Apache or Drupal.

True, true - by no means do I wish to understate the excellent contributions of the OpenCog project. However, we would much prefer to create a simpler program than what OpenCog seeks to create and then allow it to 'evolve' intelligence. Our AGI will record how it thinks and then think about that record to improve how it thinks (see the sketch below). More than that, it will be capable of recursive meta-reasoning combined with an evolutionary algorithm, which will allow it to arrive at solutions on its own. As such, our AGI will learn much more slowly than OpenCog's AGI, but it will have a more powerful learning capacity in the long term, since it will have 'taught itself' thought from scratch. Goertzel, Pennachin and Geisweiller's Engineering General Intelligence, parts I & II (on which the OpenCog project is based), are to be commended. We agree on many points - for instance, focusing on building strong AI and ignoring weak AI. However, from a mathematical perspective, the more heavily programmed the AGI, the less fluid the intelligence - i.e. long-term adaptive intelligence is sacrificed for a machine that is more intelligent initially. By using a dynamic metacognitive strategy, we bypass this issue somewhat.
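To make "recording how it thinks and then thinking about that" more tangible, here is a deliberately simplified sketch - the strategy names and scoring are invented for illustration; the real design is in the flow chart I posted earlier:

```python
import random
from collections import defaultdict

class MetaReasoner:
    """Keeps a trace of which reasoning strategy produced which score,
    then reflects on that trace to prefer strategies that worked well."""

    def __init__(self, strategies, explore=0.1):
        self.strategies = strategies      # name -> callable(task) -> answer
        self.trace = defaultdict(list)    # name -> past scores
        self.explore = explore            # leave room for experimentation

    def solve(self, task, score_fn):
        name = self._reflect_and_pick()
        answer = self.strategies[name](task)
        self.trace[name].append(score_fn(answer))  # record how it thought
        return answer

    def _reflect_and_pick(self):
        if not self.trace or random.random() < self.explore:
            return random.choice(list(self.strategies))
        avg = lambda scores: sum(scores) / len(scores)
        return max(self.trace, key=lambda name: avg(self.trace[name]))

# Two toy strategies for "find the largest number in a list":
reasoner = MetaReasoner({
    "first_guess": lambda xs: xs[0],
    "exhaustive": lambda xs: max(xs),
})
for _ in range(50):
    reasoner.solve([3, 1, 7], score_fn=lambda answer: answer / 7)
print({k: round(sum(v) / len(v), 2) for k, v in reasoner.trace.items()})
```

After a few dozen rounds, the reasoner settles on the strategy whose recorded scores are best - the self-record is what drives the improvement.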

Using many people as an input source is what allows our system to avoid many of the problems faced by commercial weak AI. I partially agree with your statement regarding direct benefit. Wikipedia is a good example of the model we are trying to employ: writing Wikipedia articles offers no direct benefit to the author, and yet the model has been incredibly successful. A well marketed system, with a prototype and funding from interested backers, will let people see the potential of our online platform and thus build the social momentum to develop the AGI. Once the AGI has developed rudimentary intelligence, the system will be of direct use in that people will be able to use it for research, analytical or personal purposes.

With regards to getting peer-reviewed analysis of our ideas - once again, we would definitely consider this as a means of getting free feedback and attracting programming talent. However, as I said above, our goal is to attract public interest, not that of academia. As for setting our own path, it's not so much that we are set on making our own project because we want it to be ours, but that our approach is quite different from OpenCog's. Our concept is simpler in design but more computationally demanding. The money we are raising is primarily to get impartial external help with refining and developing our algorithms, and to further promote the project - not to pay for servers. As I said, we're working on a prototype at the moment.

You won't get much support from the public if experts do not endorse the project. The reason is that an expert who is already trusted by the general public (not even necessarily an AGI expert) will come along, point out the red flags in the project, and summarily advise people not to donate money. The fact that you want to avoid such expert opinions is a red flag in itself, and it begs the question of why you are soliciting money from the general public if expert peer review is something you're trying to avoid.

If you want a concrete case study of another project that has gone down a path similar to yours (e.g. optimizing for validation from the general public rather than experts) then I recommend looking into Arthur T Murray’s Mentifex AI.


Hi Nuzz,

I was contacted by Michael through my Kickstarter page, explaining who he was and why he thought my project was interesting; someone on the forum had posted a link to my Kickstarter. I came here in good faith to discuss why our Kickstarter failed and to share our experience. As such, I am here to engage in intelligent discussion, not a flame war. I feel that you have failed to read my responses to your initial post.

How exactly should we get expert endorsement without a working prototype for them to examine? By just talking about what we imagine might work? In fact, if you had read my response, you would have seen that we're working on a prototype at the moment.

Ummmm, I don’t remember saying or implying that we are trying to avoid expert opinion. In fact I actually said the opposite …

So you didn't bother to read my response, or English isn't your first language? But again, don't you think it's better to show people what the AI can do rather than saying what we think is possible?

Again, I'm not sure I follow you. The title of my post here is 'What I have learnt from a failed Kickstarter project'. The difference between us and Arthur T Murray is that we're not delusional. We know when we've failed, and we welcome failure because it allows us to breathe new life into the project. That doesn't mean we're not going to learn from the experience and do better next time - perhaps unlike our friend Mr Murray. I may be crazy, but I'm not delusional. After all, the chances of our project succeeding are infinitesimally small - so small, in fact, that it is ridiculous to even attempt it. So what? There's a chance it might work, and that's good enough for me and my colleagues.

Ultimately our goal is to make something that benefits humanity. If you’re not interested in that because you think I have ulterior motives then I’m going to have to end the conversation here.

On a side note, many people would be very upset by the critical tone of your post. Frankly, I've copped much worse. I know you're just trying to help, but please bear that in mind next time when talking online - there's a person behind the screen.

Hi Michael,

Thank you once again for your thoughts.

Spot on: it is true that what we're trying to create is an AGI which will not pass the Turing test. We don't intend for the program to replicate distinctly human traits such as curiosity, humour or empathy. Unless we knew the program was conscious, it would probably simply be simulating these traits as opposed to replicating them. Who knows - perhaps the program will evolve a sense of humour through its interactions with humans; I really don't know. One key difference between our AGI and Watson is that Watson learned from a few teachers who mediated its algorithms by manually rewriting code. We seek to teach our AGI through interactions with thousands of people, evolving the code via a program capable of reflecting on the efficacy of its algorithms and changing them to make them more effective.

Ideally, the AI would interact with humans in a virtual reality environment including dynamic audiovisual media. I agree - one of the risks we listed on the Kickstarter page was that people would simply 'give up' on the program before it started to develop any truly intelligent behaviour. Much like a child, the program will need nurturing, and that's why we need the public's help.

Wikipedia has an excellent list of existing AI projects. So far as I am aware (please correct me if I am wrong), none of these projects has the following 5 characteristics, which we are attempting to incorporate in our AGI (a rough sketch of how points 2, 4 and 5 might fit together follows the list):

  1. Built through crowd teaching.
  2. Attempts to create artificial intelligence using an evolutionary algorithm.
  3. Attempts to evolve artificial intelligence that is capable of numerical AND linguistic AND logical reasoning across any discipline of study through natural interaction.
  4. Incorporates a ‘meta-cognitive’ module which can analyse and rewrite its own programming by recording its own behaviour and seeking to improve its performance (see Prof Nikolic’s Practopoiesis model).
  5. Possesses a random seed at the very core of the platform - this introduces inherent randomness into the program in the spirit of experimentation with new methods of thinking (attempting to mimic the creative process).
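To make points 2, 4 and 5 a little more concrete, here is a minimal, purely illustrative Python sketch of an evolutionary loop with a self-monitoring step and injected randomness. Every name in it (Reasoner, evaluate, mutate, the topic labels) is a hypothetical stand-in, not our actual design:

```python
import random

# Purely illustrative sketch of points 2, 4 and 5: an evolutionary
# loop in which candidate "reasoners" are scored against crowd
# interactions, record their own performance (a crude stand-in for
# the meta-cognitive module), and are mutated by an inherent source
# of randomness. All names here are hypothetical.

class Reasoner:
    def __init__(self, params):
        self.params = params    # stands in for the evolvable "code"
        self.history = []       # point 4: record of own performance

def evaluate(reasoner, interactions):
    # Stand-in fitness: fraction of interactions "handled well".
    hits = sum(1 for topic in interactions
               if reasoner.params.get(topic, 0.0) > 0.5)
    return hits / max(len(interactions), 1)

def mutate(parent, rng):
    # Point 5: randomness drives experimentation with new "thinking".
    child = Reasoner(dict(parent.params))
    topic = rng.choice(list(child.params))
    child.params[topic] = rng.random()
    return child

rng = random.Random()  # the random source at the core of the platform
population = [Reasoner({"maths": rng.random(), "language": rng.random()})
              for _ in range(20)]

for generation in range(100):             # point 2: evolution over time
    interactions = ["maths", "language"]  # stand-in for crowd teaching
    for r in population:
        r.history.append(evaluate(r, interactions))  # self-monitoring
    population.sort(key=lambda r: r.history[-1], reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(rng.choice(survivors), rng)
                              for _ in range(10)]
```

In the real system the evaluation step would be driven by actual crowd interactions rather than a toy fitness function, and the meta-cognitive module would rewrite structure, not just numeric parameters - the sketch only shows the shape of the loop.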

I suppose, yes, it is a matter of time: we will have to observe the evolution of the program as it interacts with people, but I very much expect that it will learn to deal with deliberately misleading statements. If it cannot determine that a blatantly false statement has been taught, then it is not true AGI.
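As a hedged illustration (not our actual implementation), one crude defence against deliberate mis-teaching is to accept a taught statement only once enough independent teachers have corroborated it; the threshold below is an arbitrary assumption:

```python
from collections import defaultdict

# Illustrative only: a statement is "believed" only after enough
# independent teachers have corroborated it. The threshold of 5 is
# an arbitrary assumption for the sake of the sketch.
corroboration = defaultdict(set)
BELIEF_THRESHOLD = 5

def teach(statement, teacher_id):
    corroboration[statement].add(teacher_id)

def believed(statement):
    return len(corroboration[statement]) >= BELIEF_THRESHOLD

# Example: one prankster teaching "the sky is green" is not enough.
teach("the sky is green", "prankster")
assert not believed("the sky is green")
```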

I know nothing about politics and my economics is pretty rudimentary too, so perhaps this is true, perhaps not. What I do know is that our AGI has the potential to help solve these problems, and that’s good enough for us to try. If our AGI is going to be ignored, ridiculed, fought against, or viewed as simplistic, then so be it - let’s see what happens. In the end, we are the ones asking the AGI the questions; how we choose to act upon its responses is entirely our choice to make. Perhaps it will suggest a novel solution that we hadn’t thought of, or suggest a path of enquiry which leads us to the big answers.

On the same page once again: I make no promises that our AGI will be useful, only that it has the potential to be. Evolution is a complex and unpredictable process - the intelligence we create will teach itself using humans as guides on the path of wisdom (poor teachers though we may be). It may well help us gain important insights into the big problems that our human bias prevents us from seeing.


I think you are reacting needlessly defensively and dismissively to @Nuzz’s reply. It looks like this was triggered by some misunderstandings and a dismissive attitude on both sides. My interpretation is that your paragraph

sent out mixed messages about how to go about publishing papers and dealing with the academic community. Perhaps it’s hard to express clearly what you mean, but clear communication is important. It’s probably the phrase

that raised a red flag for Nuzz – which I think is a justified reaction. After all, expert commentary in highly sophisticated disciplines can be very valuable. Why would anyone easily dismiss such commentary? But from the context of what you’ve written, my interpretation is that you don’t want to rely on expert commentary, but rather want to experiment with your own ideas first. That’s a legitimate position, and I generally approve of the “let a thousand flowers bloom” approach. Still, it’s an approach that may require more explaining and legitimising than collaborating primarily with academic researchers would.

That’s a target audience that’s clearly distinct from the general public. So you need a strategy for finding those individuals and getting them interested in your work. Creating a website that describes your ideas in detail and writing blog articles about your project might be one of the best approaches for that.

You might also want to share your ideas on the Less Wrong community. Most of its members would probably try to dismantle your approach, since it’s a generally very critical community, but going through the process of dealing with such criticism would most likely help you to refine your ideas and your communication skills.

To me it looks like Nuzz pushed your buttons here. I don’t understand why you are reacting so emotionally. You seem to hate being compared to Arthur T Murray, but your way of reacting to that comparison does not reflect positively on you. Using bold text for a whole section is the equivalent of screaming. That doesn’t make people take you more seriously, unfortunately. :frowning:

I don’t recall Nuzz mentioning or even alluding to ulterior motives. Your reaction here seems quite unwarranted to me.

After everything I’ve read so far from you, I get the impression that what stops you from being more successful and convincing is the level of your communication skills. That’s not to say that I think your communication skills are especially bad, but they don’t seem to be especially good either. Actually, that’s a very widespread and typical problem. Good communication is really hard and takes a lot of effort, and I’m also struggling with it a lot. It’s easy to reject others because you think they are not on your side. Sometimes that approach might even be justified, but I think you have been overreacting here. That makes me wonder how you would react to really harsh and hostile criticism. Following your first emotional reaction is rarely the best strategy for communicating.

Also keep in mind that if others react to you at all, they think what you say is important enough to react to. Getting ignored completely is the worst that could happen to anyone.

Hi Michael,

Thank you for your helpful advice. After reading through what I posted and your analysis, I do think I overreacted to Nuzz’s post. I should explain that I felt Nuzz hadn’t bothered to read my response to their first post and had misinterpreted many of my statements. Moreover, I felt the tone of their post was attacking our project, and I felt compelled to respond in kind.

Based on Nuzz’s advice and yours, I think it would be a good idea for us to write a paper briefly summarising previous work in this area and explaining why our work is different. Explaining the theoretical foundations of our work will give us more credibility as a group and hence help us attract expert validation. However, I still believe it is better for us to create a working prototype, freely available online with snippets of code, for users and experts to experiment with.

I have to say I took offence at the comparison to Arthur T Murray because of the implication that we are fraudulent/attention-seeking and that we have nothing meaningful to contribute. I took special offence because, as far as I can see, Nuzz did not bother to ask any questions about our attempts or seek to understand where we are coming from before ‘writing us off’. I shouldn’t have used all bold.

My reaction here is because Arthur T Murray is only interested in promoting himself and seeking attention; we’re here to actually do something worth doing. I felt that Nuzz was implying that we had motives other than what we claimed, which was to help humanity.

I’ve had harsher criticism, but perhaps not so hostile - it has always been respectful and come with thorough, well-argued justifications. It seems to me that Nuzz simply wrote off the project without trying to understand us at all, and I have a real problem with that kind of attitude. I agree I overreacted to Nuzz’s post; it is because I am so passionate about what we’re trying to do and because we really believe we have a chance of making a real difference to the world.

Thank you for at least trying to see where we’re coming from - I feel like our interaction has been one of mutual respect. I have tried my best to answer your questions and respond to your suggestions, and I have taken on board much of your advice with regards to improving my communication. What we’re trying to do is very abstract and difficult to explain, but we’re trying our best; hopefully that’s enough.

Agreed, it is one of my flaws that I take offence easily, especially if I feel like someone is attacking me; I feel the need to respond in kind. I am all for respectful feedback and criticism like what you have been so kind as to offer, and I am humbled that you have taken the time to listen to what we have to say.


I like this rather concise characterization of your AI project. This could be a good basis for better explaining your project to others. You should also contrast what you’re doing with possible alternatives, and explain why you think your approach is better than those alternatives (at least in some respects).

The meta-cognitive module seems the most intriguing and promising, but also the most challenging, of the five. Perhaps you should focus on that and try to elaborate in detail how you envision your AI’s meta-cognition working in practice. If you can do that well, I guess it will make others interested in your project.

Anyway, there’s a science fiction story about creating AI using evolutionary algorithms: it’s called Crystal Nights and is written by Greg Egan. It might not be probable that things work out as they do in that story, but at least it’s a hypothetical possibility to keep in mind.

Hi Michael,

Agreed - let’s finish this paper, get it published, and see what the experts think. I’ll be sure to check out the story when I get time, and I’ll keep you up to date with what’s happening with the project.
