Wisdom of Crowds | TechnoTaste

Wisdom of Crowds


The NY Times is reporting that the English-language Wikipedia will soon move to a "flagged revisions" system, by which edits to articles about living people will have to be approved by a more experienced editor before they appear on the live site. The system has been tested for about a year on the German-language Wikipedia, where an "experienced editor" is someone who has crossed a threshold number of successful edits. There were about 7,500 such editors in the German case, and there are likely to be an order of magnitude more on the English Wikipedia.

The NY Times article notes that:

Although Wikipedia has prevented anonymous users from creating new articles for several years now, the new flagging system crosses a psychological Rubicon. It will divide Wikipedia’s contributors into two classes — experienced, trusted editors, and everyone else — altering Wikipedia’s implicit notion that everyone has an equal right to edit entries.

In reality, those classes have been present for some time now. As part of my dissertation research I've been interviewing less experienced Wikipedians about their perceptions of the site. One constant theme has been the perception of a class system in Wikipedia. Casual editors worry that their edits aren't good enough, and that they'll be rebuked by Wikipedia's upper classes. They perceive a mystical group of higher-order contributors who make Wikipedia work. They believe that the barrier to entry is high and that they don't know enough about how the system works even to make small edits. Partly I think this is a function of the increasing complexity of the Wikipedia system. Partly it's because of Wikipedia's increasing stature – less experienced users feel the weight of their actions when so many millions read the site each day.

I also think classism is something that Wikipedia's heavy-editor community actively cultivates. The NY Times notes the work of Ed Chi at PARC. Ed and his colleagues have done some really interesting work. Among other things, they've noticed a trend towards resistance to new content. In a recent paper presented at GROUP, Tony Lam and his colleagues found that the rate of article deletions is growing, and that most articles are deleted shortly after they are created. Wikipedia has a core of frequent editors who zealously guard their territory, sometimes actively discouraging newcomers, and enforcing complicated and arcane policies in ways that can reduce new participation. The ideology of Wikipedia is a level playing field in which everyone has a voice, but the practice of it is often far from that ideal.

This latest move is troubling in that it seems to represent a lack of faith in crowdsourcing and the wisdom of crowds, in the model that made Wikipedia what it is today. This change will also remove another of the important social-psychological incentives that draw new people into the Wikipedia fold: the instant gratification that comes from seeing your work reflected on a Wikipedia page. There will certainly be many papers written on the before-after comparison, and I suspect we'll see significant changes in the dynamics of the site, at least for the pages that will see this change.

A little old now in Internet time (May 29th!), but The Register is covering the Wikipedia Arbitration Committee's decision to ban edits from all IP addresses that are known to be associated with Scientology. Apparently there were systematic efforts by those nutty Scientologists to propagandize Wikipedia pages, paper over criticism, etc.

Now, I'm no fan of Scientology, though I admit I think the whole thing is more laughable than anything else. But for Wikipedia this is a bad decision that leads down a bad road. There are two big issues here.

First, if Wikipedia starts to ban whole organizations rather than policing malicious individuals (who Register writer Cade Metz calls "Wikifiddlers" – love it!), how does it draw a reasonable line between protecting Wikipedia and social engineering? Wikipedia is already a horribly slanted body of knowledge, mostly as a function of the types of knowledge that its user communities value highly – natural sciences, computer science, engineering, popular culture. Picking and choosing organizations to ban will make this bias worse. Does Wikipedia only ban organizations that are easy to hate – Scientologists, neo-nazis, etc.? If this is about the policy, and an attempt to thwart coordinated propaganda, then shouldn't we also be banning IP ranges for, say, baseball teams, celebrities, and Congressmen, all of whom engage in organized propaganda attacks to gussy up their Wikipedia pages?

There's also a more fundamental problem with this – it breaks the model of "Wisdom of the Crowds." The whole point of WotC in the Smith / Surowiecki sense is that a person is dumb but people are smart. When people are diverse, their biases cancel each other out. Guessing the number of jelly beans in a jar isn't that different from making Wikipedia. We need all manner of biases. We need people to be wrong in all ways, and to coordinate propaganda in all ways. That doesn't mean we should allow all kinds of malicious activity – going after individual Wikifiddlers makes sense to me. But banning whole groups is a slippery slope that could hurt Wikipedia's reputation and quality in the long term.
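That cancellation argument is easy to sketch numerically. In the toy simulation below (every number is invented for illustration), a diverse crowd of individually terrible guessers nails the jelly-bean count on average, while a crowd whose errors all lean the same way – the shared-bias case – misses no matter how many guessers you add:

```python
import random

random.seed(0)
TRUE_COUNT = 850  # hypothetical number of jelly beans in the jar

# Diverse crowd: each guess is off by as much as +/-50%, but the
# errors are independent and centered on the truth.
diverse = [TRUE_COUNT * random.uniform(0.5, 1.5) for _ in range(10_000)]

# Biased crowd: everyone's error leans in the same direction.
biased = [TRUE_COUNT * random.uniform(1.0, 2.0) for _ in range(10_000)]

diverse_estimate = sum(diverse) / len(diverse)
biased_estimate = sum(biased) / len(biased)

print(round(diverse_estimate))  # lands close to 850
print(round(biased_estimate))   # roughly 50% too high
```

Averaging only helps with the first crowd; no amount of aggregation rescues the second. That's the sense in which pruning whole groups (rather than individuals) tampers with the diversity the model depends on.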

Tamar and I were just in Santa Fe for the Society for Applied Anthropology conference. The conference was meh, but Santa Fe is fairly great. In particular, we both love the food there. New Mexico cuisine is heavily influenced by Tex-Mex, or straight-up Mexican, but it's all about the New Mexico chiles. Green, red, or Christmas (that's both). Put that sauce on cardboard and it would taste great.

But I digress. 5 days at a conference means a lot of restaurants, which means a lot of searching on Yelp, TripAdvisor, ChowHound, etc. As anyone who's looked for a restaurant without local tips knows, it can be a chore. In the past, it's been argued that a big problem with online reviews is that they attract the extremes: people who love a place, and people who hate it. So, the meta-rating inevitably consists of a raft of 1's and a raft of 5's that average out to a solid, mediocre 3. When every place is a 3, online reviews aren't much more useful than a phone book – which, granted, is a little useful.
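The averaging problem is easy to see with a toy example (the ratings here are made up): a polarized distribution of 1's and 5's and a genuinely mediocre place produce the exact same mean, and only the spread tells them apart.

```python
from statistics import mean, stdev

# Hypothetical rating distributions for two restaurants.
polarized = [1, 1, 1, 1, 5, 5, 5, 5]  # the rants-and-raves pattern
moderate = [3, 3, 3, 3, 3, 3, 3, 3]   # an honestly middling place

print(mean(polarized), mean(moderate))    # identical averages
print(stdev(polarized), stdev(moderate))  # very different spreads
```

A meta-review that showed the distribution, not just the mean, would at least let you tell the love-it-or-hate-it places from the merely okay ones.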

But in the process of using a variety of online review sites in Santa Fe, I noticed that the nature of the reviews could be changing. For one thing, it didn't appear to me that everyone was so extreme. I saw lots of 2's, 3.5's, 4's, etc. Lots of thoughtful reviews, people weighing their priorities, making substantive comments. This made me wonder if the extremes problem was at least partly an issue of growing pains. When online review sites were for early adopters and tech-savvy folks, they were primarily used as venues for rants and raves. Now that they're much more mainstream, it could be that moderate, balanced opinions are becoming the norm.

Once every meta-review isn't a 3, those scores can start to be really useful. But that points out another key weakness of online review sites. Review aggregators are all about the wisdom of crowds – that's the grand, cantankerous idea that a person is dumb, but people are smart. That individuals can be biased and wrong, but given a diverse enough group, all the biases cancel each other out and what's left is the good stuff. The wisdom.

But the dirty secret about the wisdom is that it's only as good as the crowd. As Tamar wisely pointed out, one good thing about experts is that we can find one that we agree with. We seek out someone who we feel shares our taste in food, wine, restaurant experience, and we trust them. With a meta-review, we know that the biases should cancel each other out to reveal the truth about the restaurant. But there is no truth (spoon). There's only the preference of the population. And without knowing anything about those preferences, the meta-review loses most of its value.

So, realizing the trouble, what does your average review-reader do then? Of course, we turn from the meta-review to the reviews themselves. It's a reasonable course of action, a logical next step, and it feels good. But there's no wisdom in it. Once we start reading individual stories and experiences, the wisdom of crowds is gone. Now we're just getting idiosyncratic little snippets of experience that are probably not representative of the restaurant. Our search will inevitably be biased by the way that the reviews are sorted, or by the happenstance chronology of whether a bad, mediocre, or good review was the last one posted. Worse than that, our search will be subject to all kinds of social psychological biases that are interesting and appealing, but useless if we're looking for a good restaurant. We'll do things like give more weight to the first and last reviews we read, and specifically (but unconsciously) seek out reviews that validate things we already think.

Put these two issues together, and we've got a big problem for online review sites. The meta-review is of limited use because it lies: it purports to represent wisdom, but without knowing the crowd we don't know how much. The individual reviews feel good, but the wisdom doesn't lie there. (See what I did with the title? I hate myself.) The latter is a big problem for the Yelps out there, because part of what makes us so ready to devote time to reviews is knowing that our story is out there, that our words will be read, and that what we think matters.

There are good solutions to both of these problems – solutions that I think will drastically improve online review sites. First, meta-reviews will be more and more useful the more we know about the underlying population. Review sites should start surveying their users to find out their priorities about whatever is being rated. This sounds boring, but there are lots of creative ways to get this type of info. Jane cares a lot about the food, and isn't bothered by slow service because she doesn't mind sitting and chatting. Billy Bob isn't picky about food, he'll enjoy almost anything you serve him, but he thinks what he's really paying for is the service, so it'd better be Johnny on the spot. Peter won't go to a restaurant that doesn't allow corkage and have good stemware, no matter how good the food and service are. With this kind of information, I'll be able to filter reviews based on my own preferences.
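As a sketch of what that filtering might look like – everything below, the names, profiles, and weights included, is invented for illustration – each reviewer carries a priority profile, and the aggregate score weights each rating by how closely the reviewer's profile matches the reader's:

```python
def similarity(a: dict, b: dict) -> float:
    """Dot-product similarity over shared priority keys (0..1 weights)."""
    keys = a.keys() & b.keys()
    return sum(a[k] * b[k] for k in keys)

def personalized_score(reviews, reader_priorities):
    """Average star ratings, weighted by reviewer/reader similarity."""
    weighted = [(similarity(r["priorities"], reader_priorities), r["stars"])
                for r in reviews]
    total = sum(w for w, _ in weighted)
    if total == 0:
        return None  # no reviewer shares any of the reader's priorities
    return sum(w * s for w, s in weighted) / total

reviews = [
    {"stars": 5, "priorities": {"food": 1.0, "service": 0.2}},  # a "Jane"
    {"stars": 2, "priorities": {"food": 0.3, "service": 1.0}},  # a "Billy Bob"
]

# A reader who, like Jane, cares mostly about the food: the score
# tilts toward Jane's 5, away from the service-minded 2.
print(personalized_score(reviews, {"food": 1.0, "service": 0.1}))
```

The design point is simply that the same set of reviews yields a different aggregate for different readers – a food-first reader and a service-first reader would see different scores for the same restaurant.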

Individual reviews have their place too, but primarily as expert-finding mechanisms. Tamar was saying that when she reads reviews, she looks for certain adjectives, certain things about the ways that people write that give her confidence. These are, essentially, things that help her find experts. Once she's found them, if they're regular contributors she can subscribe. You can already do this sort of thing on many review sites, but it's secondary. Individual reviews need to be abstracted from meta-reviews somehow. Not hidden, but divorced from the search-flow in which reading the reviews inevitably follows looking at the meta-review. Doing these things would make review sites 10x better.