Decentralized Applications

Evolutionary methods for problem solving and artificial development
From Dana Edwards' Dark AI Blog
http://darkai.org/

“One of the principles I follow for problem solving is that many of the best solutions can be found in nature. The basic axiom that all knowledge is self-knowledge applies to the study of computer science and artificial intelligence.”

“By studying nature we are studying ourselves and what we learn from nature can give us initial designs for DApps (decentralized applications).”

“The SAFE Network example”

“SAFE Network, for example, is following these principles by utilizing biomimicry (the ant colony algorithm) for its initial design. If SAFE Network is designed appropriately then it will have an evolutionary method so that, over time, our participation with it can fine-tune it. There should be both a symbiosis between human and AI and a way to make sure changes are always made according to the preferences of mankind. In essence SAFE Network should be able to optimize its design going into the future to meet human-defined “fitness” criteria. How they will go about achieving this is unknown at this time but my opinion is that it will require a democratization or collaborative filtering layer. A possible result of SAFE Network’s evolutionary process could be a sort of artificial neural network.”
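
The “ant colony algorithm” referred to above works by reinforcing choices that succeed while letting unused options fade. A minimal sketch of that generic mechanism, assuming made-up routes, evaporation rate, and deposit amount (this is not SAFE Network’s actual code):

```python
import random

# Generic ant-colony-style pheromone update (illustrative only; not SAFE Network code).
# Nodes repeatedly pick among candidate routes; routes that "work" get reinforced,
# and all pheromone slowly evaporates so stale choices fade out.

EVAPORATION = 0.1   # fraction of pheromone lost each round (assumed value)
DEPOSIT = 1.0       # reinforcement added to a successful route (assumed value)

def choose_route(pheromone):
    """Pick a route with probability proportional to its pheromone level."""
    total = sum(pheromone.values())
    r = random.uniform(0, total)
    acc = 0.0
    for route, level in pheromone.items():
        acc += level
        if r <= acc:
            return route
    return route  # fallback for floating-point edge cases

def update(pheromone, route, succeeded):
    """Evaporate everywhere, then reinforce the route if it succeeded."""
    for k in pheromone:
        pheromone[k] *= (1.0 - EVAPORATION)
    if succeeded:
        pheromone[route] += DEPOSIT

# toy usage: three candidate routes, route "b" succeeds most often
pheromone = {"a": 1.0, "b": 1.0, "c": 1.0}
for _ in range(1000):
    r = choose_route(pheromone)
    update(pheromone, r, succeeded=(r == "b" and random.random() < 0.9) or random.random() < 0.3)
print(pheromone)  # "b" should end up with the most pheromone
```

The interplay of reinforcement and evaporation is what gives such a system its adaptability: good options come to dominate, but options that stop succeeding are gradually forgotten.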

“The Wikipedia example”

"Wikipedia is an example of an evolving knowledge resource. It uses an evolutionary method (human based genetic algorithm) to curate, structure and maintain human knowledge. "

“One of the main problems with Wikipedia is that it is centralized and that it does not generate any profits. This may be partially due to the fact that the ideal situation is that knowledge should be free to access, but it does not factor in that knowledge isn’t free to generate. It also doesn’t factor in that knowledge has to be stored somewhere, and that if Wikipedia is centralized then it can be taken down just as the Library of Alexandria once was. A decentralized Wikipedia could begin its life by mirroring Wikipedia and then use evolutionary methods to create a Wikipedia which does not carry the same risk profile or model.”

“Benefits of applying the evolutionary methods to Wikipedia style DApps”

“One of the benefits is that there could be many different DApps competing in a marketplace, so that successful design features create an incentive to continue to innovate. We can think of the market in this instance as the human-based genetic algorithm, where all DApps are candidate solutions to the problem of optimizing knowledge diffusion. The human beings would be the innovators, the selectors, and the initializers. The token system would represent the incentive layer but also serve for signalling, so that humans can give an information signal which indicates their preferences to the market.”
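
To make the “market as a human-based genetic algorithm” idea concrete, here is a minimal sketch in which the program only recombines and mutates candidate designs while the fitness signal comes from humans (votes, micropayments, etc.). The feature names and the stand-in scoring function are hypothetical:

```python
import random

# Minimal human-based-GA loop (illustrative sketch, not any DApp's actual code).
# Candidate "designs" are just feature sets; fitness comes from human signals
# (e.g. token-weighted votes or micropayments), not from an automatic function.

FEATURES = ["offline-first", "micropayments", "expert-review", "mirroring", "forking"]

def random_design():
    return frozenset(f for f in FEATURES if random.random() < 0.5)

def crossover(a, b):
    """Recombine two designs: each feature is inherited from one parent at random."""
    return frozenset(f for f in FEATURES if f in (a if random.random() < 0.5 else b))

def mutate(design, rate=0.1):
    flipped = {f for f in FEATURES if random.random() < rate}
    return design ^ flipped  # toggle a few features

def evolve(population, human_fitness, generations=10):
    for _ in range(generations):
        # human_fitness maps a design to the signal humans gave it (votes, payments...)
        scored = sorted(population, key=human_fitness, reverse=True)
        parents = scored[: len(scored) // 2]          # humans act as the selectors
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(len(population) - len(parents))]
        population = parents + children
    return max(population, key=human_fitness)

# toy usage with a stand-in "human" signal that happens to like micropayments + mirroring
toy_signal = lambda d: len(d & {"micropayments", "mirroring"}) + 0.1 * len(d)
best = evolve([random_design() for _ in range(20)], toy_signal)
print(sorted(best))
```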

“Wikipedia is not currently based on nature and does not evolve its design to adapt to its environment. Wikipedia “eats” when humans donate money to a centralized foundation which directs the development of Wikipedia. A decentralized evolutionary model would not have a centralized foundation; Wikipedia would instead adapt its survival strategy to its environment. This would mean a Wikipedia following the evolutionary model would seek to profit in competition with other Wikipedias until the best (most fit) adaptation to the environment is evolved. Users would be able to use micropayments to signal, through their participation and usage, which Wikipedia pages are preferred over others, and at the same time you could have pseudo-anonymous academic experts with good reputations rate the accuracy.”

“In order for the human-based genetic algorithm to work, and for the collaborative filtering to work, the participants should not know the scores of different pages in real time, because this could bias the results. Participants also should not know what different experts scored different pages, because personality cults could skew the results and influence the rating behavior of other experts. Finally, it would have to be global and decentralized so that experts cannot easily coordinate and conspire. These problems would not be easy to solve, but Wikipedia currently has similar problems in centralized form.”
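
One standard way to keep scores hidden while a rating round is open is a commit-reveal scheme: each rater publishes only a hash of their score plus a private salt, and reveals both once the round closes. A minimal sketch, assuming SHA-256 commitments (this is illustrative, not a specification for Wikipedia or any particular DApp):

```python
import hashlib
import secrets

# Minimal commit-reveal sketch for blind ratings (illustrative only).
# While the round is open, only commitments (hashes) are public; scores are
# revealed and checked against the commitments after the round closes.

def commit(score: int, salt: bytes) -> str:
    """Hash the score together with a private salt so the score stays hidden."""
    return hashlib.sha256(salt + str(score).encode()).hexdigest()

def verify(score: int, salt: bytes, commitment: str) -> bool:
    """After the round, anyone can check the revealed score against the commitment."""
    return commit(score, salt) == commitment

# toy usage: an expert rates a page 7/10 without revealing the score yet
salt = secrets.token_bytes(16)
c = commit(7, salt)            # published while the round is open
# ... round closes, score and salt are revealed ...
assert verify(7, salt, c)      # revealed score checks out
assert not verify(9, salt, c)  # a changed score would be rejected
```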

“Artificial development as a design process”

“Quote from artificial development:”

“Human designs are often limited by their ability to scale, and adapt to changing needs. Our rigid design processes often constrain the design to solving the immediate problem, with only limited scope for change. Organisms, on the other hand, appear to be able to maintain functionality through all stages of development, despite a vast change in the number of cells from the embryo to a mature individual. It would be advantageous to empower human designs with this on-line adaptability through scaling, whereby a system can change complexity depending on conditions.”

“The quote above summarizes one of the main differences between an evolutionary design model and a human design model. Human designs have limited adaptability to the environment because human beings are not good at predicting and accounting for the possible disruptive environmental changes which can take place in the future. Businesses which take on these static, inflexible human designs are easily disrupted by technological change because human beings have great difficulty making a design which is “future proof”. It is my own conclusion that Wikipedia in its current design iteration suffers from this, even though it does have a limited evolutionary design. The limitation of Wikipedia is that the foundation is centralized and it’s built on top of a network which isn’t as resilient to political change as it could be. In order for the designs of DApps to be future proof they have to utilize evolutionary design models. Additionally it would be good if DApps were forced to compete against each other for fitness so that the best evolutionary design models rise to the top of the heap.”

I’ve asked Dana to join us here at SFF.

At the moment, this is his position:
“If I see a thread of importance maybe I’ll post. My preference is to just post on my blog, though, and communicate through the channels I have.”

What causes the most problems are the trolls (and infiltrators). With no certain method to unveil the intentions (and alignments) of each individual, Wikipedia got into “editorial wars”, and those were certainly political, given that the two most flammable topics were “Obama” and “Bush” the last time I checked.

Link to a related Facebook conversation:
https://m.facebook.com/groups/207354862623720?view=permalink&id=1096731987019332

An excerpt:

Ian D. Mclean:

A rationally representative democracy will have a number of representatives proportional to the demographic they represent. Contrary to the way this language is used in the US political system, this has almost nothing to do with elections, popularity, or “ability to serve”. The US political system is not representative of US demographics; this can be seen plainly in the number of poor and homeless congresspeople. Congress skews strongly towards rich, white, Christian, older, cisheterosexual men, a demographic minority representing 1% or less of the population.

This could be remedied by statistical sampling methods. Congressional representation would not be done by election; instead, double-blind, controlled random sampling would be used, in which congressional representatives are appointed to office by mandate of civil service upon selection. Poor people, homeless people, disabled people, young people, old people, sick people, people of color, queer people, trans people, intersex people, asexual people, pansexual people, polysexual people, polyamorous people, Buddhists, Hindus, Muslims, atheists, anarchists, feminists, etc. would not be exempt from serving terms as congressional representatives.
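
Mechanically, selecting a body whose composition tracks the population is stratified random sampling: seats are allocated to each demographic group in proportion to its share, then filled by lottery within the group. A minimal sketch with made-up group names, shares, and rosters:

```python
import random

# Stratified random sampling sketch: seats are allocated to each group in
# proportion to its population share, then filled by lottery within the group.
# The groups and shares below are placeholders, not real census figures.

population_share = {"group A": 0.52, "group B": 0.30, "group C": 0.18}
eligible = {g: [f"{g} citizen {i}" for i in range(1000)] for g in population_share}

def sample_congress(seats: int):
    chosen = []
    for group, share in population_share.items():
        quota = round(seats * share)                 # proportional seat count
        chosen += random.sample(eligible[group], quota)
    return chosen

congress = sample_congress(435)
print(len(congress))  # ~435, with composition mirroring the assumed shares
```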

As long as the present Congress’ authority is accepted, this will never come to pass, because the existing hierarchy of power would not survive in that environment and could not compete advantageously, so they would never vote to amend the constitution and undermine their present power. As such, the only way to institute this change would be mass revolt. Ferguson and the Occupy movement were tests of US military-police preparedness for eventualities like this. Given what we’ve seen in those protests and the US corporatocratic response, lethal force would likely be authorized, so the revolt would likely entail significant fatalities and casualties. The protesters needn’t use force or return violent hostilities, though agitators, either hired or voluntary, would likely give the corporate media what they need to depict the uprising as violent and the military-police forces as heroic defenders of US democracy.

The solution to this impasse is to institute the system independent of the US congress. Either as a world congress (ideal) or in a country of major population such as India or China. Under that scenario, the ideal outcome is the systematic shifting of economic power away from the US empire to a coalition of pro-social progressive countries such as New Zealand, Finland, Netherlands, Iceland, and India. Best case scenario is the collapse of the US and Chinese capitalist empires. Under the ideal formation of the world congress, gaining membership as a country would be by voluntary but binding international legal agreement.

Unlike US congressional service, the appointment of new congressional members would not be by discrete years but as necessary in a continuing process as seats become vacant or are allocated. The congress would meet and operate according to modern standards of communication, networking, and principles of universal accessibility. Decision making processes would be formalized according to domain specific logical languages with acknowledgement of the formal limitations of the decision making process, and it would be automated by computer algorithm whenever and wherever possible though the decision making would be checked by human or transhuman audits. Technical specialist decision making would be relegated by representation of the fields involved in a peer review process that would be open to public

Dana Edwards:

Ian D. Mclean Why have democracy at all? Just as we can get rid of capitalism once technology reaches a certain point we can also get rid of democracy for the same reasons. So why not just let the computer manage the entire society including the laws?

Why only have the computer manage the resources such as energy, food, water, etc but not everything else? Wouldn’t the computer managing the resources of the earth be in the position to know what is best for the earth?

The computer would have the God’s eye view and also potentially could be smarter than us all. So if it’s the smartest person in the room why shouldn’t we let it guide us completely?

Ian D. Mclean:

Because no matter how powerful the computer we construct there will always remain propositions expressible in the calculus of the computer which are not only undecidable for the computer but essentially undecidable. There will always be propositions indefinable in the calculus which are true.

Computers are not magic, and the Gödelian theorems, the Tarski theorems, Arrow’s impossibility theorem, and the halting problem are some of the least understood logical results in the Zeitgeist movement and amongst Transhumanists or Singularitarians.
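
For reference, the two results leaned on most in this exchange can be paraphrased as follows (informal statements, not the original formulations):

```latex
% Informal paraphrases of the results named above (not the original formulations):
%
% Tarski's undefinability of truth: no formula $\mathrm{True}(x)$ of arithmetic satisfies
%   $\mathbb{N} \models \mathrm{True}(\ulcorner \varphi \urcorner) \leftrightarrow \varphi$
% for every sentence $\varphi$; arithmetical truth is not definable within arithmetic itself.
%
% Goedel's first incompleteness theorem: any consistent, effectively axiomatized theory $T$
% that interprets basic arithmetic has a sentence $G_T$ with $T \nvdash G_T$ and $T \nvdash \lnot G_T$.
```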

Thus, there are propositions and decision procedures which will have to be analytically deliberated by humans and the society in general. There must always be a metasystem to watch the watchers.

The question of whether democracy is the best metasystem is something worth deliberating, but I was asked only to “at least suggest a few starting steps”, rather than a comprehensive future-proof solution.

Mark Larkento:

Yes, I did and you did. Ty.

Mark Larkento:

Then, to move forward these are the key steps (what order?) :
(1) To avoid violent ineffective mass revolt, institute a system independent of the US congress.
(2) Form a world congress (ideal) in a country of major population
(3) Systematically shift economic power away from the US to a coalition of pro-social progressive countries
(6) Add membership by country on a voluntary basis.
(4) Members are bound by international legal agreement.
(7) Appointment of new congressional members would be made as necessary
(8) Decision making processes would be formalized and it would be automated by computer algorithm

Dana Edwards:

Ian D. Mclean Either a computer will manage the earth or humans will. If humans do it even with the help of computers then you still have all the risks and dangers you have now under capitalism. It doesn’t matter if the humans do it through something we call a “state” or something we call a corporation because ultimately it’s humans managing the resources.

On the other hand if a computer does it then you could have true economic justice and a resource based economy. I don’t see how you can have a resource based economy without the computer making the important decisions on managing resources.

So an RBE means putting the machines in charge of the earth. Otherwise humans will be in charge and it won’t make much of a difference if it’s under capitalism or communism because those are just systems humans use to determine where r

Mark Larkento:

Much of that, Ian, is compatible with Dana’s ideas.

Dana Edwards:

Mark Larkento Without step (8) I don’t see how we could get away from human error in resource management. If we are going to accept that resource management is never going to be perfect then anarcho-capitalism makes sense because at least it’s decentralized to the highest degree.

If it’s a state determining what each of us can have then we will never be able to agree. It’s never going to be completely fair when humans are in charge. Only with machines in charge can it be completely free of human bias, because humans tend to favor themselves, their friends, their families, or the human species itself.

Mark Larkento:

That’s why we start small and experiment, as you suggested, Dana.

Ian D. Mclean:

(1) is partially automated where provably decidable. Otherwise, not automated. Figuring out how to divvy up 10 apples among 10 people is a computable problem and an effectively computable problem. It lends itself to the logical powers of a computer. Defining notions of truth (Tarski’s indefinability of), justice, beauty, and similar concepts are not amenable to comp

Dana Edwards:

Computers are pretty good at justice. But you have a point when it comes to beauty which is why humans would have a purpose. The machines need us for our preferences basically.

Collaborative filtering is an example of this. The algorithm can evolve beauty based on our collective likes but it still needs our collective likes. The algorithm could generate the most beautiful partner for us based on our likes and dislikes only after it has collected them.
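
Collaborative filtering in its simplest user-based form scores unseen items by how similar users rated them. A toy sketch with made-up users, items, and ratings:

```python
import math

# Minimal user-based collaborative filtering sketch (toy data, illustrative only).
# Predict how much a user would like an item from the ratings of similar users.

ratings = {  # user -> {item: like score}
    "alice": {"x": 5, "y": 3, "z": 1},
    "bob":   {"x": 4, "y": 3},
    "carol": {"y": 1, "z": 5},
}

def cosine(u, v):
    """Cosine similarity between two users' rating vectors."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

def predict(user, item):
    """Similarity-weighted average of other users' ratings for the item."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine(ratings[user], r)
        num += s * r[item]
        den += abs(s)
    return num / den if den else None

print(predict("bob", "z"))  # bob never rated "z"; estimate it from alice and carol
```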

To me justice is like mathematics though. What would keep us safe? Not just “what would make us feel better”. The death penalty and revenge make people feel better but they don’t keep us safe.

Ian D. Mclean:

Dana, justice is a concept which relies on or extends truth. Truth is indefinable within the calculus of a computer (Tarski’s indefinability of truth). As such the notion of truth within the calculus is not only undecidable but essentially undecidable. Essentially undecidable means precisely that no extension of the concept or formal property can be decided either.

So you have no idea what you’re talking about when you claim that “computers are pretty good at justice.” But I suppose you are conflating following and implementing rules without question or contradiction to be equivalent to “pretty good at justice”.

Dana Edwards:

If humans cannot define “truth” how can a computer? Define the truth.

1+1=2? Computers can do pattern matching. The more abstract you get the less humans can agree on truth. If you’re talking about facts computers can deal with facts better than any human.

So a computer judge would be better than a human. A computer doctor would be better than a human. It is because they can handle more facts and interpret knowledge in ways people can’t.

But if you define truth as something mystical which cannot be calculated or which isn’t a pattern then we cannot easily have a computer detect the pattern or run the calculation. At the same time you won’t have humans all agree either because we don’t agree on what is truth when it’s not quantifiable.

The question isn’t whether or not computers can do perfect justice but merely can they do it better than humans? I think so. I also think computers could be better at surgery, medicine, flying planes, driving cars.

Ian D. Mclean:

You make an immediate error of reasoning: equating human reasoning with computer reasoning. The truth of a mechanical system cannot be directly defined with the same mechanical system of logic. Analytically and non-constructively, we can define truth outside a formal logic and language.

Humans can and do define truth. Formal systems can not define their own notion of truth in their own syntax.

You should not confuse validation or verification with truth. 1+1=2 is a truth in some systems of arithmetic. But if the ones represent cats and the + represents reproduction then the answer might be 6. Computers can only deal with a certain subset of facts in finite time. That is to say in the universe of all possible things, a computer can only effectively compute a strict subset of some possible things. That subset is the set of polynomial expressions or equivalently the set of algebraic reals or equivalently the set of polygons.

There are classes of problems which are computable in principle but would take a countably infinite amount of steps to compute. The class of problems which take a strictly finite number of steps to compute is roughly the class of “effective computability”. Of that class, some problems are solvable linearly, some problems are solvable quadratically, some problems are only effective computable for very small algorithmic complexity inputs but generally take exponentially greater number of steps to compute for each unit of algorithmic complexity added to the input. The class of strictly decidable problems are linear or possibly quadratic.

It is important to understand how small these classes really are. For that, we need to discuss notions of countability and uncountability. The set of integers, the set of rational numbers, and the set of rational polynomials are all countable and countably infinite. You and I recognize that the sequence {0, 1, …} for n+1 is a sequence which goes on without end. We go from counting individual elements to generalizing properties about the sequence intuitively, so we go “one, two, three, …, infinity!” Computers can’t do that; they always count by each and every element; if they don’t then they may get lost in infinitely branching or recurring logic which is what the halting problem is fundamentally about. When you take the power set of a set (the space of all possible combinations and subsets of the elements of a set), the cardinality of the resulting set will always be strictly greater than its source; thus, taking the power set of the set of rational polynomials produces a set with strictly greater than countable infinity cardinality; the amount of things in the power set of the algebraic reals is strictly more than countably infinite. Uncountably more infinite. That is the set of real numbers.
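
The power-set claim above is Cantor's theorem; for reference, the standard statement and diagonal argument:

```latex
% Cantor's theorem (standard statement): for every set $S$, $|\mathcal{P}(S)| > |S|$.
% Diagonal argument: suppose $f : S \to \mathcal{P}(S)$ were onto, and let
%   $D = \{\, x \in S \mid x \notin f(x) \,\}$.
% If $D = f(d)$ for some $d \in S$, then $d \in D \iff d \notin f(d) = D$,
% a contradiction; hence no onto $f$ exists and $\mathcal{P}(S)$ is strictly larger than $S$.
```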

When we ask analytically how much bigger the set of real numbers is compared to the algebraic real numbers, the answer is the real number set is so much bigger than the set of the algebraic real numbers that the algebraic reals can be said to be negligible with respect to the real numbers. That is at that scale, we can basically treat the entire set of rational polynomials as a zero. Almost no real problem is computable. Hence, no computer has yet spit out the Theory of Everything for physics because physics depends critically upon the real and complex numbers.

The indefinability of truth isn’t mystical. It is logical and demonstrable. Truth is something each and every person brings to a given set of syntax; the meaning of a sentence is not contained within the sentence itself; the meaning is generated within your physical embodiment and your physical environment when you are stimulated by perceiving the sentence. Not all sentences are meaningful to you even though many such sentences are in fact true in some language. I don’t speak Chinese, but they can express truths in their language which I can’t and likewise.

Mark Larkento:

It would be more clear to say axiom, assumption, or principle rather than truth.

Ian D. Mclean:

Axioms are not necessarily true, and their individual truth-values are not necessarily “true”. And notions of assumption are not equivalent to notions of truth. So no, it would not be clearer to say axiom, assumption, or principle as a substitution for truth.

Dana Edwards:

Center For Applied Rationality - Overview

Dana Edwards:

Ian D. Mclean If you can’t define truth then how do you know it’s something which exists outside of your own head?

If it exists it should be quantifiable: how true or how false? You can then have a number representing exactly how far between true and untrue something is.

So you can quantify it with fuzzy logic, which means a computer can discriminate between true and false quite easily as a function of Boolean or fuzzy logic.

I don’t know how you got the idea that computers cannot distinguish between true and false when that is what computers do best. Please define truth for me so I know what you’re talking about.
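
What is being described here is the standard fuzzy-logic move of replacing {true, false} with a degree of truth in [0, 1]. A minimal sketch using the usual Zadeh operators (the example propositions and their degrees are made up):

```python
# Minimal fuzzy-logic sketch: truth is a degree in [0, 1] instead of a boolean.
# Uses the standard Zadeh operators: AND = min, OR = max, NOT = 1 - x.

def f_and(a: float, b: float) -> float:
    return min(a, b)

def f_or(a: float, b: float) -> float:
    return max(a, b)

def f_not(a: float) -> float:
    return 1.0 - a

# toy propositions with assumed truth degrees
p = 0.8   # "the page is accurate"
q = 0.4   # "the page is complete"

print(f_and(p, q))        # 0.4 -> "accurate AND complete" is only weakly true
print(f_or(p, f_not(q)))  # 0.8 -> "accurate OR not complete"
```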

Dana Edwards:

Ian D. Mclean Computers can do propositional logic. You can have a truth table or truth map. Using this you can determine if a statement is true or false and then, using this logic, determine if an entire argument or set of arguments is logically true or false.

Computers can calculate risk, probability, use Bayesian inference, and they do it better than any human being. So why would you say computers aren’t better at figuring out the truth when it’s the one thing they calculate?

Now if you cannot define the truth to me then of course we cannot input it into a computer. So you’re right some stuff cannot be defined which is why we cannot calculate it. So beauty for example cannot be defined because it’s subjective.

Truth is not subjective. Truth is objective. Something is objectively true or objectively false. If you’re trying to figure out how true or how false something is then you use fuzzy logic.
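
The truth-table point can be made concrete: a propositional formula over finitely many variables can be checked mechanically by enumerating every assignment. A small sketch in which formulas are ordinary functions of booleans (the example formulas are arbitrary):

```python
from itertools import product

# Brute-force truth-table evaluation of propositional formulas.
# A formula is represented as a Python function of boolean arguments.

def truth_table(formula, names):
    """Return (assignment, value) for every combination of truth values."""
    rows = []
    for values in product([False, True], repeat=len(names)):
        env = dict(zip(names, values))
        rows.append((env, formula(**env)))
    return rows

def is_tautology(formula, names):
    return all(value for _, value in truth_table(formula, names))

implies = lambda p, q: (not p) or q

# modus ponens as a formula: (p and (p -> q)) -> q
modus_ponens = lambda p, q: implies(p and implies(p, q), q)

for env, value in truth_table(implies, ["p", "q"]):
    print(env, value)                          # the truth table of "p implies q"
print(is_tautology(modus_ponens, ["p", "q"]))  # True: the argument form is valid
```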

Dana Edwards:

“Truth can not be directly defined with a mechanical system of logic. Analytically and non-constructively, we can define truth outside a formal logic and language.”

This makes no sense. It’s like saying we define reality without mathematics. Reality is described by mathematics and logic better than anything else.

So what exactly is truth if it’s not reality? If it’s mathematics then computers most definitely will evolve to become smart enough to generate theories. Computers can already generate formal proofs and algorithms using genetic algorithms.

This isn’t to say everything is computable, but the question is: if something is not computable by a machine, would it be computable by the human brain either? It may be that some stuff is not computable, but if the brain is just a sort of digital signal processor itself, then what makes the brain special compared to an AI, for example?

Qualia? Qualia is what generates the preferences. So maybe you’re saying qualia is not computable and I agree with that. Qualia isn’t the same as “truth” as you used it though.

Ian D. Mclean:

Dana, you once again fail to grok what I am saying. It seems likely to me, given our now extensive dialogue, that you probably experience a form of metacognitive blindness. It isn’t uncommon in our species. Metacognitive blindnesses of various kinds affect upwards of one third of the US population. So it is possible that you simply fail to perceive the metalogical structure which I am describing. In which case much of this will be logically flat to you.

Truth is definable analytically with appeals to empirical observation. Or said another way, truth can be defined in domain specific ways for some applications. It would be more accurate to view it as “a truth” than “the truth”. There may be “the Truth”, but we haven’t found a way to describe that in common, and we have formal proofs that it is impossible to do in formal systems without contradiction with respect to themselves. We can develop truth theories for formal logics by using other formal logics, but this appeals to boundless chains of truth theories and formal systems each defining the truth for another.

Computers don’t actually deal directly with truth. A mechanical computer doesn’t understand the instructions which are input by voltage differentials or force differentials. A mechanical computer is a cascading switching board. Changes to the voltage to a transistor result in the transistor transforming into roughly discrete states which we interpret semantically as representing true or false values. The states actually have equal claim to being called true, but we define by convention one to represent “true” arbitrarily and the other to represent “not true” also arbitrarily. When wind blows through a wind turbine, the turbine doesn’t know that wind is blowing through it, and it wouldn’t recognize the truth of the statement that “wind is blowing through the turbine”. It does what it does due to physical causality.

We can use computers and fuzzy processors to model truth-that-we-recognize in certain conditions, but the computer used to model and aid in the measurement of “truth” does not itself have any awareness of what it is modeling or measuring. It simply is a model. It simply measures. It is and it does; it doesn’t know why or how or what it means to be or do.

Dana Edwards:

Why should we search for a theory of everything if we are limited by our hardware as much as the computer you mention is limited by its hardware?

I think we can only reach an approximation of the truth rather than an absolute. I think a computer, if it’s smarter than us, can reach a closer approximation of the truth than we could, provided we can describe the question in the right way.

But this is now getting into the very theoretical realm. Unless either of us are experts in the field of AI we probably don’t know. I would say theory would suggest computers are limited but our biology limits us too.

Mark Larkento:

You’re both getting closer.

Ian D. Mclean:

The mathematical basis of reality is a conjecture and controversial hypothesis in theoretical physics with a long history of untestable propositions which can by no means be admitted as scientific. Tegmark has somewhat famously addressed this question with “The Mathematical Universe”, which attempts to put the hypothesis on more empirical grounds, but we have no resolution on it at present.

As far as I can tell from extensive review of the literature, a human (conscious) mind is not equivalent to a Turing machine. I have a couple of proof sketches which indicate that Turing machines will never reach consciousness, or if they do it will be severely impaired and inefficient compared to human beings. In formal and metamathematical discourse a distinction with a difference is drawn between what is “computable” and what is “analytical”. Some formal proofs can be found by consistent machinery, but almost all proofs can not be found or recognized by consistent computers.

There are outstanding and fairly recent hypotheses about the quantum mechanical and quantum computational nature of the human mind, and recent developments around the hypothesis that human minds derive their power from contradiction tolerance and paraconsistent or paracomplete properties, which are deliberately engineered out of Von Neumann architectures due to unpredictable faults and errors in the architecture.

And anyone who has played pretend or enjoyed a movie should recognize that we don’t define everything in reality with classical mathematics, or logic, or mathematics in the standard, canonical, or common sense of the word, “mathematics”.

Ian D. Mclean:

http://radicomp.blogspot.com/2012/05/hypotheses-basic-overview.html

Here’s a brief summary I wrote of the situation so far.

Radical Computing: Hypotheses: a Basic Overview

radicomp.blogspot.com

The text in this post is somewhat illegible (at least for me: http://s14.postimage.org/twlfo25g1/Blog_shot.png). Can you upload a pdf version of it?

Mark Larkento:

So far, these are the elements I see from Dana’s DAVS research plugging into Ian’s conditions as a test:

(1) Zero State ~= an independent system
(2) Virtual Space ~= a region of major population
(3) ZS Custom DApp ~= algorithm to shift economic power
(4) WAVE pro-social principles ~= Truths

This conversation between Ian D. Mclean and Dana Edwards is interesting. Ian seems to have a background in mathematics, especially in computation theory and related fields, and therefore has a good understanding of the limitations of mathematics and computers. Personally, I have seen myself mainly as a philosopher, but for various reasons I studied mathematics. This has helped me to understand the value that mathematics can have for various fields, especially philosophy, but it has also shown me that it’s unreasonable to expect mathematics or computers to definitively solve most of the really challenging problems.

Dana seems to have the hope that computers will be able to do most things better than humans and should therefore be employed in various jobs which hold a lot of power, like judges or politicians. I think this wish is naive to some degree. But how naive it is depends on what it is that we see as “computers”. The most advanced AIs don’t have too much in common with manually coded programs, but use deep learning techniques which work in relatively similar ways to how the human brain works (though the similarities are rather basic). Anyway, I think the way to go is to amplify human cognition with AI assistants, at least in the near to middle term future. The long term future should belong to upgraded minds which will use the best components of human-like and computer-like cognition.

In the meanwhile we are still dealing with humans and their strengths and weaknesses. Bypassing them by turning over power to computers seems to be highly problematic to me. Computers may be quite good at many forms of highly complicated computation, but they work with models which have been provided by humans. Computers are bad at creating models spontaneously, though that may change in the future. If you feed computers with the wrong models or wrong data, they will output garbage. The notion that this garbage might be more “correct” garbage in a sense is not really reassuring to me.

So, I think humans should stay in control until there are clearly superior alternatives. Computers are not superior to humans in general.
