
A Wiki with fact checking?


(Michael Hrenka) #1

Continuing the discussion from What’s the ideal discussion platform?:

That’s interesting. So, you basically want a version of Wikipedia that presents information in a truthful way with verified facts? And what’s an “easy deliberative tool”? What is it supposed to do?

How will the fact-checking mechanism work? How can it be protected from manipulation?

You want more details about DokuWiki to see whether that would be a good basis for your project? There are dozens of different wiki systems out there. And there is even a site which can compare the features of all of them: WikiMatrix.

Also, you need to consider that most wikis can be extended considerably with plugins. There is probably a combination of wiki and plugins that will bring you pretty close to what you want to have. WikiMatrix considers the existence of plugins to some degree. Here’s a comparison between three excellent wikis:

http://www.wikimatrix.org/compare/DokuWiki+Foswiki+MediaWiki

I’ve decided to use DokuWiki, because it made the most overall “advanced” impression. It’s hard for me to come up with a definite list of all possible features for DokuWiki, because that would mean that I’d need to search through all of the available plugins for DokuWiki – which are very numerous.

So, let me just list some interesting features of DokuWiki:

  • There are Access Control Lists which can give different users and user groups different rights for viewing and editing different parts of the wiki

  • There is no database, only text files. This should make it very easy to save copies of the whole Wiki. Extracting information out of databases can be quite complicated – especially without dedicated tools for that purpose.

  • The default theme uses HTML 5 and is responsive and mobile friendly

  • It supports namespaces for pages. Effectively, this is more or less a directory structure for pages.
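
For illustration, DokuWiki's access rules live in a plain-text file (conf/acl.auth.php). The pages, groups, and permissions below are made-up examples of the format, not a recommended setup:

```
# conf/acl.auth.php - hypothetical example rules
# permission levels: 0 = none, 1 = read, 2 = edit, 4 = create, 8 = upload, 16 = delete
*               @ALL    1
*               @user   8
private:*       @ALL    0
private:*       @staff  16
```

Each line grants a user or @group a maximum permission level on a page or namespace, so different parts of the wiki can be opened or locked down independently.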


(Maximo Ramallo) #2

Yes. Actually, Wikipedia doesn’t accept original content created in its own right, so we cannot check the basis of its assertions.


This is not the only problem I need to solve; there is also the constant misuse of words, and even the corruption of words themselves. I have seen cases where actual “newspeak” is used in such a way that some people become victims of misunderstanding, and it only gets worse.

If it is easy to customize, or at least the individual can understand how it operates and can automate their deliberation if they so decide, without being tricked, then we have succeeded not only in the final process, but in developing a tool that illuminates how deliberation occurs: not a shadowy way of getting things done, but a clear, open process.

It should perform primitive operations of reasoning and deliberation, building from the bottom up a process on which many other platforms can operate. Although I like this approach, and it might be what it takes to get what I need, I must admit it might also be the reason I am not being understood.

Most ideas develop from underlying ideas; some are created at the intersection of other ideas. Then there are facts which rely on external sources, those which depend on physical mechanisms (counting statistics and probability as “physical mechanisms” too). These two kinds of things are what need to be checked.

Manipulation comes from individuals. It is key to map the relations of these individuals and their interests in the topics discussed: a mechanism to check for lobbies. It is therefore also important to keep anonymity away from sources, though not necessarily away from deliberation. It is also important to check what individuals base their opinions on, or more precisely, whom. There has to be a logical relation between like-minded individuals, or it is not an honest relation.

Now, DokuWiki seems interesting. I don’t know if it offers all the functionality I seek, but then again I may be asking for too much.
Many of the features you list are indeed nice. The text-file approach caught my attention, as did the plugin system and the mobile friendliness.

I have decided on Java and XML to keep language entanglement to a minimum. Java can do a lot, and XML is a good database-like format for cross-platform information exchange, while Java itself can operate on it. But again, I am no professional, even though this is based on months of investigation.
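
As a rough illustration of the XML-as-exchange-format idea (sketched here in Python for brevity rather than Java; the record structure and field names are entirely hypothetical):

```python
# Hypothetical article record exchanged between systems as XML.
import xml.etree.ElementTree as ET

record = """
<article id="theft-001">
  <claim author="A">the thief was female</claim>
  <claim author="B">the thief was male</claim>
  <source url="http://example.org/attorney" reputation="0.9"/>
</article>
"""

root = ET.fromstring(record)
# collect each editor's claim and the source's reputation score
claims = {c.get("author"): c.text for c in root.findall("claim")}
source_rep = float(root.find("source").get("reputation"))
print(claims["A"], source_rep)  # → the thief was female 0.9
```

Any platform that can parse XML can consume such a record, which is the cross-platform property being aimed for here.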


(Michael Hrenka) #3

It seems you are expressing yourself on a quite abstract and general level. I tend to do that too, especially because I like thinking in abstract realms (I studied mathematics and have long been interested in philosophy). Unfortunately, this kind of writing makes it hard for others to understand what one means. The solution to that problem was taught to me by my ethics teacher at school: Make examples! More examples! :smiley:

So, I still don’t really understand what you mean by your “easy deliberative tool” and what it should do if it works right. What about giving a few examples describing possible actions it could perform? How would people use it concretely? What real tasks would they accomplish with it?

Now, when it comes to your fact-checking mechanism you probably want an approach that combines automatic artificial intelligence based fact-checking with people who go out into the world and collect data which they can feed into the system. This would be quite complicated, but doable in principle. But you would need much more than a simple wiki. You’d need a whole organisation (perhaps crowd-based) with at least dozens of people to aggregate the necessary data and program and improve the AI.

About your preference for Java and XML: OK, Java is a decent programming language, and XML is, I guess, also OK as a universal data storage format. I just don’t think these choices are terribly important. It’s more important that your concepts make sense.


(Maximo Ramallo) #4

I guess one of the most effective things (for me, right now) is debunking a lie piece by piece. The tagging system would show which fallacies are committed; it would be a real display of alarms if so.

In the draft I mention a test case of three subjects from a business. Even if two of them coerce the situation into framing the third, in the end it will be shown that the two liars have arguments incompatible with their sources, or their sources will be flagged for fallacy.

For the system to work, it needs agreement on specific terms, and it builds the rest of the system upon that. When someone doesn’t agree on fundamental, basic terminology, like saying “love means hate”, the fallacy alarms scream.

One can construct a lot of ideology using simple reasoning; it just takes time. But that only works if the individual is not very self-aware of their own usage of terminology. I certainly see this every day in my country, with words like “militancy” taken to mean “activism” and “humanitarian aid” by one side, and “personal army” by the other.

From what I’ve seen, this system is a precursor to artificial-intelligence programming for ideas. It seems clear that this kind of wiki could actually feed an AI with the kind of advanced, constant training implied by the premise of “ideas based on more primitive ideas”, like teaching the AI that “love is not hate, but has more to do with sweetness”. What can be pre-programmed is the automation of customization, which can turn cumbersome sets of ideas into an automatic reasoning process known to the individual.

Thus, another function is the making of an entire body of “laws”, and I mean an entire legal system, if so wished. Or, in its more modest function, it can at least be used to build drafts of new laws.

The part about getting people to go out into the world and collect data is partly solved as far as I am concerned, meaning for my specific intended use. And it’s true, one needs “an army of people” for this, or one has to rely on what are usually called “external sources”. The veracity of external sources relies on personal identification, on knowledgeable reputation. This shows we can treat external sources somewhat like individuals: just like individuals, we check their relations with other individuals, check the logic, and so on.

To answer this in a broad sense: we can only hope to check something we have access to. Either the thing we are checking is available to us, or we must accept our ignorance about it. Magic doesn’t happen just “because”; I accept that. Therefore, I consider it realistic to start with simple matters.

What I can hope is that every effort I put in will one day consolidate into reality. But this is all relative to the current socio-political events I live through. Therefore, I might prepare a document to open-source this work, or the thing might just die.

EDIT: I have to say that when I worked with Java, the object-oriented programming model struck me as very similar to my concept of “ideas based on other ideas”.


(Michael Hrenka) #5

So, you seem to want to let fact checking emerge out of the rules of the system which have to be followed by the users of the system, rather than relying on automatic artificial intelligence. Sort of like Wikipedia is based on the rules for editing its articles.

This approach might work. Or it might fail, because the users don’t bother following the rules precisely enough. In any case, one would have to launch this project to see how it pans out.

Have you presented your idea to the Less Wrong community? If you can formulate it well enough that they can understand what you mean (yeah, I guess you still need to work on that), then they will at least provide you with very valuable feedback, or might even be willing to support your efforts.

P.S.: I’ve just stumbled over this article that is relevant to the topic: http://motherboard.vice.com/read/an-algorithm-for-fact-checking?utm_source=mbfb


(Maximo Ramallo) #6

If the system is the collective entity of humankind (plus the phenomenological sources it interprets, a.k.a. the universe), then yes, because every rule ultimately comes from the system: even artificial intelligence was written by agents of the system.

Yes.

Oh yes, rules are always broken, and new rules come from that. But precisely this is key to working correctly: change is key to maintaining the system against adversity. If “everything is about control”, then the specifics of how to cope are flexible, not the goal of keeping the system.
One way to do this is to present a way of tolerating anything that is unalike, while avoiding the mutual destruction of ideas when they contradict each other but neither fully explains the flaws of the other. Paradoxes exist and remain until an explanation comes (I am thinking of Zeno’s paradoxes).
The wiki (and I am still wondering whether “wiki” is the appropriate name) has a mechanism for letting groups of thoughts split without conflict, each developing on its own until one is debunked.

I tremble.
Yes, I could present a draft. I was aiming to present this to the various groups developing collaborative tools and deliberation platforms as a means of getting it developed quickly.

I dare not present this in a completely formulated manner, because I lack the scientific and argumentative tools needed to explain it to the scientific community. I am seriously working on those skills. Even though I acknowledge my lack of relevant skills, I feel confident about the model.

EDIT: Yes, I saw that article. I am collecting relevant articles, by the way; why not start a list of them in the wiki?


(Michael Hrenka) #7

This sounds like “if this rule fails, then we create more rules to cover up the failure”. That approach sooner or later leads to really byzantine rule systems that people neither want to follow nor even learn about. Complex rules should be avoided where possible. If the only way to make a system work is a complex rule system, then the system is somewhat broken and should be reconsidered.

About using the wiki for article collection: Yes, you can go ahead with that. To create a new page you search for it. Then a page appears that says the searched page does not exist, but you can then create it.

But you should choose a good name for your page. Also, a project name would be useful. If you have a project name and a page name, you should search for the page

projects:project_name:page_name

The wiki is a bit complicated, but it’s good. :smile:


(Maximo Ramallo) #8

To keep terminology more accurate, let’s think of models instead of rules. Many models, mostly in the scientific community, are guided by rules. We know how many models get debunked by new models, and the knowledge base indeed grows. This is an overall gain.
The terminology of rules applied to legal systems is perhaps not a good example; maybe we can leave that aside for a moment.

Ok, give me time. I need to read the other thread in the forum and catch up.


(Michael Hrenka) #9

So, you want to place something like “scientific” quality standards on a public knowledge wiki? How would these standards actually be enforced? Perhaps by reputation scores?


(Maximo Ramallo) #10

If usage of models requires either voluntary adherence or the use of violence, we surely need to work on adherence.
Whether people adhere through their own understanding, through trust, or through trust in an expert is a different situation for each individual. We could get close to individual preferences and meet the individual expectations under which people would adhere.
Of course, other demands may exist, but so far we get adherence:

  • By one’s own understanding.
  • By blind adherence to established customs.
  • By trust in experts.

We can certainly compute reputation scores for each. Of course, you can see the second one is a bit of a problem. To fix it, we need to divert pure trust toward either personal understanding or at least trust in an expert. Mechanisms for this do exist; the tags themselves are one way of turning established customs toward deeper understanding, so that individuals gain understanding of their own customs.
We may continue to have blind adherence, though, but we must always try to approach the optimal situation, not seek some unknown perfection.
Both blind adherence and adherence by understanding may cause voluntary migration, like today’s intentional communities. This migration can happen in virtual spaces, as on the internet, or in physical ones, as in intentional communities.
It is worth mentioning because it is another way models get enforced in cases of dissent, but one which appears without being programmed. Measures must be taken to ensure it stays at dissent and does not become strife.
This is a work in progress, and I would like to see what options there are for keeping conflict to a minimum while migration takes place, or even for keeping individuals with dissenting opinions inside a community with a different way of thinking, because we don’t know whether some models could coexist.


(Michael Hrenka) #11

Unfortunately, I have a really hard time understanding what you mean. There seem to be several reasons for that:

  • You are being very abstract
  • You seem to suggest complex solutions to problems that are not clearly enough specified. What’s the context and extent of both the problems and your suggested solutions?
  • You don’t give concrete examples of how your proposed solutions would work out, or even of why the abstract problems you mention are really bad.

These facts create huge potential for misunderstandings. I may get a rough idea about what you are talking about, but I could also get it totally wrong.

Perhaps, let’s start again from the beginning please? :disappointed_relieved:

Could you tell me an example of a person using your proposed system to find out the truth about some subject? How would that person interact with the system? How is the truth of the information checked? Which persons and which mechanisms are involved in that?


(Maximo Ramallo) #12

You are right, I need to give more examples. Here it goes:

We have two people editing the same article about a police report on a theft. If person A says the thief was female and person B says the thief was male, we must refer to a physical source to know who is right. But before anything else, the article must be tagged with its sources.

For sources, we have to see whether their reputation is acceptable and consistently so; let’s suppose the source is the web page of the attorney’s office itself. Then that source has a high score, consistent with the attorney’s reputation and the accuracy of past cases.

Now, if for example the thief was female, person A’s reputation gets lifted and person B’s gets lowered. Each time an individual edits an article, their reputation is linked to the edit, so the edit itself gets linked to a reputation score. It’s karma. Such karma also works with sources linked to multiple articles, and in time we can have a lot linked.
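
The karma mechanic described above could be sketched minimally like this (the editor names, step size, and flat-score model are my own assumptions, not part of any existing system):

```python
# Hypothetical sketch of the "karma" idea: when a claim is resolved
# against a trusted source, each editor's reputation moves up or down.
reputation = {"person_a": 10.0, "person_b": 10.0}

def resolve_claim(claims, verified_fact, step=1.0):
    """claims: {editor: asserted_value}. Adjust each editor's karma
    depending on whether the verified source confirms their edit."""
    for editor, asserted in claims.items():
        if asserted == verified_fact:
            reputation[editor] += step   # edit confirmed by the source
        else:
            reputation[editor] -= step   # edit contradicted by the source

resolve_claim({"person_a": "female", "person_b": "male"}, "female")
print(reputation)  # → {'person_a': 11.0, 'person_b': 9.0}
```

A real system would presumably weight the step by the source's own reputation score, so that a highly trusted source moves karma more than a marginal one.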

If the article is about ideas or concepts, we tag each phrase, each term, and each word to a root article on the concept the first article is based on. Having everything tagged this way forces us to have primordial roots, “axioms” on which we either all agree or we are speaking about different things, in which case we split the article into two or more distinguishable axioms.

Knowing that each tag hyperlinks to root articles, we can measure the accuracy of the edits made by individuals, which were right or wrong, and build reputation upon that accuracy, keeping in mind that being right or wrong depends on how consistent the edits are with the entire system of tags.

To know more about the consistency of edits, a record of all edits is kept, using a method of comparison between the most-voted edit and the last edit made by the individual, where we compare word by word whether they are similar or not, either conceptually or merely grammatically.

As another option, for an even more automatic system of editing, we can use what I presented in the draft: the idea of “versions”, where edits are made in private versions of the wiki, one per name, edited by that individual. The more alike the edits across the private wikis, the higher the probability that those edits appear in the universal wiki.

In this case, the comparison is made automatically between the universal wiki under that name and the different individual wikis, so comparison and editing happen almost immediately. Plus, we have a great way of keeping different opinions at hand by displaying the top-ranked versions under the universal wiki, so people can know which version is the most universal, but also see what the other opinions are and how they dissent, giving people the chance to change their own opinion according to other points of view if they see fit.

Having more words than the universal wiki should not be a problem for ranking, because if we link to reputation, those scores are taken into account when looking through the different versions.
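
The versions mechanism could be sketched like this; the sentence-level granularity and the majority quorum are my own simplifying assumptions about how "edits alike between the private wikis" might be counted:

```python
# Hypothetical sketch of the "versions" idea: each user keeps a private
# copy of a page, and sentences shared by enough private copies are
# promoted into the universal version.
from collections import Counter

def universal_version(private_pages, quorum=0.5):
    """private_pages: list of pages, each a list of sentences.
    Promote sentences appearing in more than `quorum` of the pages."""
    counts = Counter(s for page in private_pages for s in set(page))
    threshold = quorum * len(private_pages)
    promoted, seen = [], set()
    for page in private_pages:          # keep order of first appearance
        for s in page:
            if counts[s] > threshold and s not in seen:
                seen.add(s)
                promoted.append(s)
    return promoted

pages = [
    ["The thief was female.", "She fled north."],
    ["The thief was female.", "She fled south."],
    ["The thief was female.", "She fled north."],
]
print(universal_version(pages))
# → ['The thief was female.', 'She fled north.']
```

The non-promoted sentences would remain visible in the private versions, which is exactly the "different opinions at hand" property described above.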

Another way to auto-edit is to designate antitheses of concepts, that is, what a concept is not. Then, if anyone dares to insert conflicting concepts, the system would downgrade their edit.

In the end, we could develop a meritocracy of experts and a plurality of voices, all in one.

Tell me if I missed anything in the explanation.


(Michael Hrenka) #13

Thanks a lot for explaining your ideas. My impression is that you are trying to do a lot of things at once, which makes the whole system extremely complicated. Let me try to identify the different aspects of your system:

  1. Edits are voted on by the users. Those edits that win the votes get into the official wiki.
  2. Edits are rated according to accuracy.
  3. There are reputation scores for users based on the accuracy ratings of their edits.
  4. Wiki pages about concepts are structured in some kind of ontological network with root elements called “axioms”.
  5. There are different sets of axioms, meaning there are different ontological networks. So, there would effectively be different wikis, one for each set of axioms.
  6. Concept wiki pages have tags that imply their position in the ontological network.
  7. Edits somehow have to be consistent with the ontological network structure. There’s a system that can somehow measure that consistency.
  8. The consistency measure is also somehow linked to the reputation score of the users.
  9. All edits ever made are stored in the database. Nothing is ever lost.
  10. There are different versions of any wiki page. There are private versions for each user who bothers to create a wiki page, and there are universal versions somehow emerging out of agreeing private versions (statistically?).
  11. People can see all versions of any wiki page.
  12. Private wiki page versions are compared to the universal version with some kind of artificial-intelligence algorithm, so that the degree of agreement is measured.
  13. Private versions of a wiki page are ranked according to their degree of agreement with the universal version.

Ok, that’s really a lot, and the complexity of this system may approach the complexity of the human brain. What you are trying to do is not to create a better wiki, but a global brain for humanity emerging out of the interactions of people with the system and the actions of the artificial intelligence components of the system.

Even so, the complexity of the system is so high that individual users would probably have to be trained (and paid) in order to use it properly! This won’t work out, so you have to make the system easier. Much easier. Otherwise people won’t use it.

A more incremental approach is needed in order to see how much deviation from the original wiki concept people can accept and cope with. There are several options that seem interesting:

  • A wiki with reputation scores based on user ratings of edits
  • Or a wiki with personal versions for each wiki page
  • Or an ontologically structured wiki

Combining more than one of these ideas would make the system much more complicated, so I suggest you choose the single most promising modification to the wiki concept and implement that first. Once you have done that, you may add the other components to see whether they work nicely with the rest.


(Maximo Ramallo) #14

Truth is, this “wiki” is not the original project. I knew it was cumbersome and tried to make it less cumbersome by transforming it into a wiki.
But now I must step back and recognize my failure. You are right about the system being too complicated.

But I am confident I can combine at least two points, acknowledging that the “structured wiki” is a fundamental part of the whole system and that I have already managed to model a graphical interface for it.
Reputation scores are a work in progress anyway, so I’ll use the “versions” system as the system of edits and probably bootstrap from there. But of course, I will not use the versions model right away; I’ll build the structured wiki first and see from there.
Now, about the point that users would have to be trained: I definitely would like to avoid that, but as I haven’t modeled the proper interfaces for everything yet, the better approach is to cut away much of the original “wiki” idea.

I appreciate your honesty; I will now direct my efforts toward a simpler model. For now, it will probably be a mix between something like Google Drive and the structured wiki. It will be less of a fact-checker, but less cumbersome too.
The part about deep ontological properties leading toward a global brain, in the manner of a noosphere, I will work on as a side project; it will be strictly scientific research, realistically planned to take years. It is a good idea, but better worked on at the appropriate time and in the appropriate form.


(Michael Hrenka) #15

Now that I have thought about it again, you don’t actually need to include an internal reputation system for the wiki. It would probably be sufficient to just use the Quantified Prestige system I am developing. In that case, you can focus on the structured wiki project (with or without versions) and then just use Quantified Prestige as plug-in.

I think there’s more value in the wiki with personal (or other) versions than in an ontologically structured wiki. When each wiki page has different versions, people can easily see different points of view (for example regarding the Hikikomori topic), and there may be fewer edit wars, because each person’s contribution would at least remain in their private versions of the page.

Why do you think the ontologically structured wiki would be such a good idea? It feels more like an exercise in old-fashioned artificial-intelligence research than something that would be hugely beneficial for mankind. More specifically, the approach seems quite rigid, even though you admit there can be several parallel ontologies (though that would hugely increase the complexity of the project). Anyway, there is a strong connection to philosophical ontology and the philosophy of language: please read the books of the philosopher Wittgenstein (ideally all of them) if you are really serious about this.


(Maximo Ramallo) #16

Yes, I will focus on that right now. I am starting with very basic tools like Gjots. I may dedicate another post to that.

Yes, totally.

“Know thyself”. I could manage to build a very basic structure, so basic it could fit in an app, but with enough consistency to serve clarification and to avoid misunderstandings and deceit. “Avoid misunderstandings and deceit”: remember this goal; it is important for the future of mankind.

Oh yes, I have one on my “must read” list (“Philosophical Investigations”). Thanks for the recommendation.


(Michael Hrenka) #17

As it turns out, there is already a project for the “versioned” wiki, and it’s called Smallest Federated Wiki.

Just google “Federated Wiki” to find other resources about it. Unfortunately, this idea hasn’t seemed to really catch on.

Anyway, this seems to be a natural combination of the ideas behind the wiki and GitHub. I know there are some other wikis that try to implement this kind of combination, for example git-wiki, but they aren’t very popular either.

Seems like these projects failed to get sufficient support and traction. This points to the need for good strategies for making innovative software ideas popular!


(Maximo Ramallo) #18

I may be misunderstanding what he set out to do, but if versions are kept without a clear way to compare them and check what the general consensus is, it is like having a seat without legs and calling it a chair. A part of a bigger tool may not be completely functional, and thus not appealing.
My intuition is that this is also why many of the deliberative tools in development today are not completely appealing: they are simply unfinished work.


(Michael Hrenka) #19

The background is collaborative software development on GitHub. The “general consensus” is the version that gets copied and worked on most often. Alternative forks are usually less popular; in other words, they are cloned less often.

It’s actually a very elegant system that creates emergent order through the actions of the individual users.


#20

Why is that so important to you?