Friday, November 25, 2016

A Wikipedian Explains How Wikipedia Stays Reliable in the Fake News Era

According to Wikipedia, "Fake news websites publish hoaxes and fraudulent misinformation to drive web traffic inflamed by social media." In the aftermath of an election season dominated by hyperbole and sometimes outright lies, the press has been more focused than ever on how falsehoods spread. But somehow, Wikipedia itself remains mostly free of utter nonsense.

Even though it's a free internet encyclopedia that anyone in the world can edit, you can usually trust a quick scan of Wikipedia to find out a lot about the world. It's certainly not accurate enough to cite in a PhD dissertation, some pages are more reliable than others, and it's always good to check its sources. Still, it's really impressive how correct it is. Do bumblebees sting? Yes, but they're not normally aggressive toward humans. How wide is the Mariana Trench? Forty-three miles. Did Andre the Giant have any kids? Yep! A daughter.

Wikipedia won't often steer you wrong, even on controversial subjects. You should obviously approach politics on Wikipedia with skepticism, but sections on touchy topics like Donald Trump's White House transition or Hillary Clinton's health are refreshingly devoid of conspiracy theories and hasty conclusions. If nothing else, they're good starting points for people interested in learning about those subjects.

That's remarkable when you consider that, according to BuzzFeed's analysis of Facebook engagement data, top fake news stories often attracted more engagement than top stories from real news outlets during the election. The story that Donald Trump had died of a heart attack, for instance, spread like wildfire on August 13, but digging through the history of the Donald Trump Wikipedia article suggests that the hoax never showed up there at all.
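
That kind of digging is something anyone can replicate: every article's full revision history is public, both through the "View history" tab and through the MediaWiki API. As a rough sketch, here's how you might pull the revisions of the "Donald Trump" article around August 13, 2016 in Python and scan the edit summaries. Grepping summaries for "heart attack" is just an illustrative shortcut; a thorough check would diff the revision text itself.

```python
# Minimal sketch: list revisions of a Wikipedia article in a date window
# via the public MediaWiki API, flagging edit summaries that mention the
# hoax. The keyword check is an illustrative assumption, not a rigorous
# way to audit an article's history.
import requests

API = "https://en.wikipedia.org/w/api.php"

params = {
    "action": "query",
    "prop": "revisions",
    "titles": "Donald Trump",
    "rvprop": "timestamp|user|comment",
    "rvstart": "2016-08-14T00:00:00Z",  # newest revision to include
    "rvend": "2016-08-12T00:00:00Z",    # oldest revision to include
    "rvlimit": 100,
    "format": "json",
}

resp = requests.get(API, params=params, timeout=10)
resp.raise_for_status()

# The API keys results by internal page ID, so grab the single page.
page = next(iter(resp.json()["query"]["pages"].values()))
for rev in page.get("revisions", []):
    summary = rev.get("comment", "")
    flag = "  <-- check this one" if "heart attack" in summary.lower() else ""
    print(f'{rev["timestamp"]} {rev.get("user", "?")}: {summary}{flag}')
```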

To find out how Wikipedia inoculates itself against fake news, I got in touch with Victor Grigas, who works as a video producer for the Wikimedia Foundation, which runs Wikipedia. In his spare time, Grigas writes and edits Wikipedia articles, and even created that article about fake news I quoted above. (Grigas asked me to clarify that he spoke to me in his capacity as a volunteer editor, not as a Wikimedia employee. Our conversation has been edited for length and clarity.)

VICE: How'd you get into writing about fake news?
Victor Grigas: Chicago stuff is what I write about, and when Trump got elected, I had all these friends who were like, "This is bullshit, man!" So I got involved with the article about the protests. In the process of researching it, if you type in "Trump protests," you'll find these fake news articles that say there were people paid, and it's crazy! If you actually read the fake news articles, they'll cite this one YouTube video of a dashcam driving in Chicago past a bunch of buses. So it's like, "Oh, because these buses are here, they've bused in protesters from everywhere!"

Is that claim backed up by any sources Wikipedia considers reliable?
It's total nonsense with no basis whatsoever! But they're writing this to feed whatever beast. I don't know if they're writing it just to make money, or if there's a political incentive. I have no fucking clue, but it's obviously not reliable. But for some reason it's coming up near the top of my Google searches, which is really infuriating. So I want to make sure that when people read about these things, they know they're not there.

"When you get started with Wikipedia, it's a crash course in library science and intellectual property law."

Does the existence of this fake news merit its own inclusion in well-sourced articles?
At the bottom of the page about the protests, there's one or two lines about the fake claims that protesters were bused in. And I got into a little bit of an editing conflict about that, because I tried using the fake news site as a source about the fake news. They deleted what I wrote, and I think the edit summary was "awful reference!" It got deleted right away, without anyone reading it or trying to understand what I was trying to do.

So when veteran Wikipedia editors aren't around, what happens when an article shows up based on fake news?
There's a lot of policing that happens on Wikipedia, which people see as a real barrier to entry, because there's a huge learning curve. One aspect of that learning curve is learning what you're allowed to write, basically. And it takes a little bit of patience to figure out how to make it work. So one of the things that happens is you start editing and stuff gets deleted, just like that.

What kind of stuff do you mean?
If you cite something like a blog, or a personal site, or something like that, it's gonna bite the dust real fast. People are gonna take it out, and they're usually gonna point you to the reason why they took it out.

How do you spot fake news in time to prevent widespread hoaxes?
There's a whole bunch of people who just stare at the recent changes feed. If you look at any Wikipedia page, on the side it says "Recent changes." And you can see everything that happened in the last minute: what was edited, how many characters were added, and whatever else. And if it looks like something bogus, there's a whole ton of people who love to correct this stuff. As you edit for a while, you start to run into people who want to be very, very strict, and who delete anything that looks anywhere close to any kind of violation you can imagine.
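
That feed is also machine-readable: the same MediaWiki API exposes it, so you can watch for suspicious edits programmatically. Here's a minimal Python sketch along those lines; the 2,000-character threshold for flagging an edit is an arbitrary illustration, not a rule patrollers actually use.

```python
# Minimal sketch: poll Wikipedia's recent-changes feed through the
# public MediaWiki API and surface unusually large edits for a human
# to eyeball. The size threshold is an assumption for illustration.
import requests

API = "https://en.wikipedia.org/w/api.php"

def recent_changes(limit=25):
    """Fetch the most recent edits, with titles, users, and size deltas."""
    params = {
        "action": "query",
        "list": "recentchanges",
        "rcprop": "title|user|timestamp|sizes",
        "rctype": "edit",
        "rclimit": limit,
        "format": "json",
    }
    resp = requests.get(API, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()["query"]["recentchanges"]

for change in recent_changes():
    delta = change["newlen"] - change["oldlen"]  # characters added or removed
    marker = "  <-- unusually large edit" if abs(delta) > 2000 else ""
    print(f'{change["timestamp"]} {change["title"]} ({delta:+d} chars){marker}')
```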

But how do you answer the criticism that hoaxes do persist on Wikipedia?
If you look at the list of known hoaxes, it's maybe 100-something entries. When you realize there are about 4 million articles just on English Wikipedia, that ratio is really good.

One of them, about a fictional store in Sweden, stayed up for almost 11 years. How do hoaxes survive on Wikipedia for that long?
I have no idea, but my guess would be that it survives if it's not a contentious issue and there aren't enough eyeballs on it. It has to be obscure, like a hipster album from the 70s that nobody gives a shit about. If you can create that, that's probably your best bet, but you should never do that, because that's stupid.

"If Wikipedia's rules were applied to Facebook? Oh my God! They'd lose 99 percent of their content."

What can editing Wikipedia teach people about spotting fake news?
When you get started with Wikipedia, it's a crash course in library science and intellectual property law. Once you're past that bar, your bullshit detector is at 100 percent. When you see stuff on Facebook, by contrast, people are basically responding emotionally.

What do you think that difference in "motivation" between Facebook and Wikipedia means for reliability?
I think it's fine if you have a certain motivation, and you want to design things a certain way to get people to use it. But how many people on Facebook just read a headline and share it, and don't do any research whatsoever? Or they don't even read a headline, and it's just a .gif or a .jpg? Facebook is not a place for news, but it functionally is, and that is bad. That's my opinion about it. It's not designed for that, but that's what it has become. And the things people are sharing back and forth are not going through the bullshit detector. If Wikipedia's rules were applied to Facebook? Oh my God! They'd lose 99 percent of their content.

But is it safe to say there's fake stuff somewhere on Wikipedia?
Given the size of Wikipedia, I am sure there are cases where stuff that shouldn't be there is there. But on the whole, I think it's mostly not there, because there are lots of people like me who, if they see something on a page they're reading, aren't thinking in a read-only manner. The Wikipedia crowd, the people who write it, think in a read-write way. If I see something that's wrong, I flag it, or I find my own way to refute it. That's one of the things you see on Wikipedia: "citation needed." Coming to something with critical thinking is important. So the people who are editing usually have a critical thinking hat on, and are looking for everything that needs to be deleted.

Follow Mike Pearl on Twitter.



from VICE http://ift.tt/2gGMoe0
