Algorithm Rates Trustworthiness of Wikipedia Pages
paleshadows writes "Researchers at UCSC developed a tool that measures the trustworthiness of each Wikipedia page. Roughly speaking, the algorithm analyzes the entire 7-year user-editing history and utilizes the longevity of the content to learn which contributors are the most reliable: If your contribution lasts, you gain 'reputation,' whereas if it's edited out, your reputation falls. The trustworthiness of newly inserted text is a function of the reputation of all its authors, a heuristic that turned out to be successful in identifying poor content. The interested reader can take a look at this demonstration (random page with white/orange background marking trusted/untrusted text, respectively; note the "random page" link at the left for more demo pages), this
presentation (pdf), and this paper (pdf)."
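The summary's description of the heuristic can be sketched in a few lines. This is a toy model of the idea only, not the UCSC researchers' actual algorithm; the class name, the scoring constants, and the use of word counts are all my own invented assumptions.

```python
class ReputationTracker:
    """Toy sketch of a longevity-based reputation heuristic.

    Assumption: each revision is already diffed into fragments of
    (author, word_count) that either survived or were removed.
    """

    def __init__(self):
        self.reputation = {}  # author -> score

    def record_revision(self, surviving, removed):
        for author, words in surviving:
            # Content that outlives an edit earns its author reputation.
            self.reputation[author] = self.reputation.get(author, 0.0) + 0.1 * words
        for author, words in removed:
            # Content that gets edited out costs its author reputation.
            self.reputation[author] = self.reputation.get(author, 0.0) - 0.2 * words

    def trust(self, authors):
        """Trust of a passage as the mean reputation of its authors."""
        if not authors:
            return 0.0
        return sum(self.reputation.get(a, 0.0) for a in authors) / len(authors)
```

Under this sketch, text by authors whose past contributions survived would render white (trusted), and text by frequently reverted authors would render orange (untrusted).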
Seems a bit dangerous (Score:5, Insightful)
And the editor wars start
Godwin's Second Law (Score:3, Insightful)
(Godwin didn't publish this, but I might get around to editing his Wikipedia entry to say that he did).
Re:Seems a bit dangerous (Score:4, Insightful)
#REDIRECT (Score:5, Insightful)
I dunno about this system. (Score:5, Insightful)
And, of course, there is the potential for abuse. If the software could intelligently track reversions and somehow ascribe to those events a neutral sort of rep, that would probably help the system out.
As it stands, they're essentially trying to objectively judge "correctness" of facts without knowing the actual facts to check. That's somewhat like polling a college class for answers and assigning grades based on how many other people DON'T say that they disagree with a certain person in any way.
I suspect this heuristic measures.... (Score:5, Insightful)
If I edit a history page of a small rural village near where I live, I can guarantee that it will remain unaltered. None of the five people who have any knowledge or interest in this subject have a computer.
If I edit an item on Microsoft's attitude to standards, or the US occupation of Iraq, I'm going to be flamed the minute the page is saved, unless I say something so banal that no one can find anything interesting in it.
But my Microsoft page might be accurate, and my village history a tissue of lies....
Tuned for Subject Matter (Score:5, Insightful)
After all, just because someone is a reliable expert at editing the Wikipedia entries on Professional Wrestling [wikipedia.org] or Superheroes [wikipedia.org] doesn't necessarily mean we should trust their edits on, for instance, the sensitive issues of Tibetan sovereignty [wikipedia.org].
Tyranny of the majority (Score:5, Insightful)
That said, I can't help but believe that this tool is a net positive because it makes points of debate more visible. One could even argue that it literally highlights the frontiers of human knowledge. That is, high-trust (white) text is well known material and highlighted (orange) text represents contentious or uncertain conclusions.
Don't Care. (Score:2, Insightful)
It doesn't have to be perfect (Score:5, Insightful)
No algorithm, except maybe personally checking every single article yourself, will ever be perfect. I suspect that the stuff you talk about will be very rare exceptions, not the rule. In fact, one of the reasons that it is so rare is because people who know what the actual truth of a matter is can post it, cite it, and show it for all to see that some common misconception is, in fact, a misconception. This is much better than, say, a dead tree encyclopedia where, if something incorrect gets printed, it will likely stay that way forever in almost every copy that's out there. (And, incidentally, no such algorithm can exist, since dead tree encyclopedias generally don't include citations and/or articles' editing histories.)
The goal wasn't to create a 100% perfect algorithm, it was to create an algorithm that provides a relatively accurate model and that works in the vast majority of cases. I don't see any reason this shouldn't fit the bill just fine.
Re:It doesn't have to be perfect (Score:2, Insightful)
This will promote one thing (Score:2, Insightful)
Re:Tyranny of the majority (Score:5, Insightful)
Yes, this system demonstrates the correlation between the content and the majority opinion, not between the content and the correct information (assuming such objectively exists).
Of course, if you take as an axiom that the majority opinion will, in general, be more reliable than the latest random change by a serial mis-editor, then the correlation with majority opinion is a useful guideline.
Something that might be rather more effective, though perhaps less practical, is for Wikipedia to bootstrap the process much as Slashdot once did: start with a small number of designated "experts", hand-picked, and give them disproportionate reputation. Then consider secondary effects when adjusting reputation: not just whether something was later edited, but the reputation of the editor, and the size of the edit.
This doesn't avoid the underlying theoretical flaw of the whole idea, though, which is simply that in a community-written site like a wiki, edits are not necessarily bad things. Someone might simply be replacing the phrase "(an example would be useful here)" with a suitable example. This would be supporting content that was already worthwhile and correct, not indicating that the previous version was "untrustworthy".
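The parent's proposal, weighting a reputation penalty by the reputation of the editor and by the size of the edit, could be sketched as a single update rule. This is my own hypothetical formula for illustrating the commenter's idea, not anything from the paper; the function name, the `rate` constant, and the use of an edit fraction are all assumptions.

```python
def adjust_reputation(author_rep, editor_rep, edit_fraction, rate=0.1):
    """Hypothetical update rule (a sketch, not from the paper).

    An edit hurts the original author's reputation more when the
    editor is highly reputed and when the edit touches a larger
    fraction of the author's text.
    """
    penalty = rate * editor_rep * edit_fraction
    return author_rep - penalty
```

With a rule like this, a drive-by change from a zero-reputation account would barely dent an established author's score, while a large rewrite by a hand-picked "expert" would cost them significantly.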
Re:Seems a bit dangerous (Score:3, Insightful)
but they get a whole new meaning when it makes sense to find all edits by an editor, delete them, and then rewrite them as your own...
AfD: nn (Score:3, Insightful)
Should be called "stability" (Score:3, Insightful)
Re:Seems a bit dangerous (Score:3, Insightful)
No, it won't gain a better reputation in the eyes of professors (at least decent professors) for two reasons:
1) It's an inherently flawed algorithm and easily gameable. It's useful as a very vague unreliable data-point, and not much else.
2) Wikipedia is not a source for academic research, and never will be. If it's anything to academics, it's a place to go to get some clues on how to proceed with their real research - for example finding links to reliable sources, or related terms and concepts. It's like Google: a great tool for research, not a source.
Wikipedia is not, and has never claimed to be, an authoritative source on anything, and until people stop referring to it as though it is (or could be, or claims to be), we'll never get over this wanking about "Don't trust wikipedia, it's not reliable - anyone can change it, omg!"
Re:Light Bulb Moment (Score:3, Insightful)
Can you explain yourself a little more? I don't see how Tor would improve the quality of information being searched for. (Not arguing, just interested in your ideas)
Never confuse popularity with factual truth (Score:2, Insightful)
many cases, but there are many individual cases and times when the currently popular view is wrong and the lone-wolf opinions are later proven to have been correct. This algorithm would seem to be more of a popularity contest than a truth finder. I think we have to be very wary of the truth-by-mass-agreement theory.
Hint: Remember the "weapons of mass delusion"? I bet someone commenting that the US government was lying through its teeth about it would have been re-edited pretty quickly.
Re:Hmmmmmmm (Score:2, Insightful)
Wikipedia does an extraordinary job of drawing from a wide variety of peer resources, both professional and layman alike. So-called "experts" in academia are just as political in their research and analysis, specifically in the social sciences. Peer review never really amounts to much more than a consensus, and not necessarily an accurate one. Objectivity is a holy grail which I don't think will ever be achieved, whether in an encyclopedia, Wikipedia, or a newspaper for that matter. The objectivity is best left to the reader, as is the research, imho.
What you're asking for is really nothing more than some sort of certification, which most use as nothing more than back patting for their particular opinion. I say, take an Encyclopedia or Wikipedia for what it is, and just move on to the next.
Spelling Mistakes? (Score:3, Insightful)
In particular, I'm worried that the system will undervalue information from people whose edits are frequently cleaned up by others, even when the content itself is left unchanged.
Re:Light Bulb Moment (Score:4, Insightful)
Nope. If you post one misdeed and it gets edited out, such is life; it shouldn't affect your credibility that much, because everyone gets edited out a few times in the long run.
However, if you edit hundreds or thousands of different articles and people leave you alone, o great guru, you're good.
Wikipedia's ultimate strength depends on the community's desire for good information, readiness to stomp on crap, and will to contribute. Conversely, Wikipedia would decay if people didn't give a rat's ass about Wikipedia and let it go to ruin like an unweeded garden. This mechanism of quality control needs to be applied down the hierarchy of categories, subcategories, and articles. It's understandable that certain areas will have more pristine content overall while other areas will be populated with childish and wanton ideas. Thus, a contributor evaluation program can be tested.