Tamar and I were just in Santa Fe for the Society for Applied Anthropology conference. The conference was meh, but Santa Fe is fairly great. In particular, we both love the food there. New Mexico cuisine is heavily influenced by Tex-Mex, or straight-up Mexican, but it's all about the New Mexico chiles. Green, red, or Christmas (that's both). Put that sauce on cardboard and it would taste great.

But I digress. Five days at a conference means a lot of restaurants, which means a lot of searching on Yelp, TripAdvisor, Chowhound, etc. As anyone who's looked for a restaurant without local tips knows, it can be a chore. In the past, it's been argued that a big problem with online reviews is that they attract the extremes: people who love a place, and people who hate it. So, the meta-rating inevitably consists of a raft of 1's and a raft of 5's that average out to a solid, mediocre 3. When every place is a 3, online reviews aren't much more useful than a phone book – which, granted, is a little useful.
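To put numbers on that (invented numbers, to be clear), here's a quick sketch of why the average hides the shape of the ratings:

```python
# Two very different rating distributions can share the same mean, so the
# meta-rating alone can't tell a polarizing place from a mediocre one.
from statistics import mean, stdev

polarizing = [1, 1, 1, 1, 5, 5, 5, 5]  # lovers and haters
mediocre   = [3, 3, 3, 3, 3, 3, 3, 3]  # genuinely so-so

print(mean(polarizing), mean(mediocre))    # both average to 3
print(stdev(polarizing), stdev(mediocre))  # ~2.14 vs 0.0 -- very different
```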

But in the process of using a variety of online review sites in Santa Fe, I noticed that the nature of the reviews could be changing. For one thing, it didn't appear to me that everyone was so extreme. I saw lots of 2's, 3.5's, 4's, etc. Lots of thoughtful reviews, people weighing their priorities, making substantive comments. This made me wonder if the extremes problem was at least partly an issue of growing pains. When online review sites were for early adopters and tech-savvy folks, they were primarily used as venues for rants and raves. Now that they're much more mainstream, it could be that moderate, balanced opinions are becoming the norm.

Once every meta-review isn't a 3, those scores can start to be really useful. But that points out another key weakness of online review sites. Review aggregators are all about the wisdom of crowds – that's the grand, cantankerous idea that a person is dumb, but people are smart. That individuals can be biased and wrong, but given a diverse enough group, all the biases cancel each other out and what's left is the good stuff. The wisdom.

But the dirty secret about the wisdom is that it's only as good as the crowd. As Tamar wisely pointed out, one good thing about experts is that we can find one we agree with. We seek out someone who we feel shares our taste in food, wine, and restaurant experience, and we trust them. With a meta-review, the promise is that the biases cancel each other out to reveal the truth about the restaurant. But there is no truth (spoon). There's only the preference of the population. And without knowing anything about those preferences, the meta-review loses most of its value.
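Here's a toy simulation of that point, with all the numbers made up: averaging a big crowd really does wash out random noise, just like the wisdom-of-crowds story says, but it converges on the crowd's taste, not mine.

```python
# Each reviewer's rating = the crowd's true preference plus random noise.
# Averaging cancels the noise just fine -- but if the crowd's preferences
# systematically differ from mine, the average converges on *their* taste.
import random

random.seed(42)

my_true_rating = 4.5    # how much *I* would like this place (hypothetical)
crowd_preference = 3.0  # how much the reviewing population likes it

reviews = [crowd_preference + random.gauss(0, 1.0) for _ in range(10_000)]
meta_review = sum(reviews) / len(reviews)

print(f"meta-review: {meta_review:.2f}")  # ~3.0: the noise cancelled out...
print(f"my rating:   {my_true_rating}")   # ...but the preference gap didn't
```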

So, realizing the trouble, what does your average review-reader do then? Of course, we turn from the meta-review to the reviews themselves. It's a reasonable course of action, a logical next step, and it feels good. But there's no wisdom in it. Once we start reading individual stories and experiences, the wisdom of crowds is gone. Now we're just getting idiosyncratic little snippets of experience that are probably not representative of the restaurant. Our search will inevitably be biased by the way the reviews are sorted, or by the happenstance chronology of whether a bad, mediocre, or good review was the last one posted. Worse than that, our search will be subject to all kinds of social psychological biases that are interesting and appealing, but useless if we're looking for a good restaurant. We'll do things like give more weight to the first and last reviews we read (primacy and recency at work), and specifically (but unconsciously) seek out reviews that validate things we already think (good old confirmation bias).

Put these two issues together, and we've got a big problem for online review sites. The meta-review is of limited use because it lies: it purports to represent wisdom, but without knowing the crowd we don't know how much wisdom is actually there. The individual reviews feel good, but the wisdom doesn't lie there. (See what I did with the title? I hate myself.) The latter is a big problem for the Yelps out there, because part of what makes us so ready to devote time to reviews is knowing that our story is out there, that our words will be read, and that what we think matters.

There are good solutions to both of these problems – solutions that I think will drastically improve online review sites. First, meta-reviews will be more and more useful the more we know about the underlying population. Review sites should start surveying their users to find out their priorities about whatever is being rated. This sounds boring, but there are lots of creative ways to get this type of info. Jane cares a lot about the food and isn't bothered by slow service, because she doesn't mind sitting and chatting. Billy Bob isn't picky about food – he'll enjoy almost anything you serve him – but he thinks what he's really paying for is the service, so it'd better be johnny-on-the-spot. Peter won't go to a restaurant that doesn't allow corkage and stock good stemware, no matter how good the food and service are. With this kind of information, I'll be able to filter reviews based on my own preferences.
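As a sketch of what that could look like under the hood (the profiles, names, and numbers here are all invented), a site could weight each review by how closely the reviewer's surveyed priorities match mine:

```python
# Weight the meta-review toward reviewers whose priorities look like mine.
from math import sqrt

# Priority profiles over (food, service, wine_program), summing to 1.
priorities = {
    "Jane":      (0.8, 0.1, 0.1),
    "Billy Bob": (0.2, 0.7, 0.1),
    "Peter":     (0.3, 0.1, 0.6),
}
ratings = {"Jane": 4.5, "Billy Bob": 2.0, "Peter": 3.0}  # overall scores

me = (0.7, 0.2, 0.1)  # I mostly care about the food, like Jane

def similarity(a, b):
    """Cosine similarity between two priority vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

weights = {name: similarity(me, p) for name, p in priorities.items()}
personalized = (sum(weights[n] * ratings[n] for n in ratings)
                / sum(weights.values()))

print(f"personalized meta-review: {personalized:.2f}")
# ~3.4, versus a plain average of ~3.2: pulled toward Jane, who shares
# my priorities, and away from Billy Bob, who doesn't.
```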

Individual reviews have their place too, but primarily as expert-finding mechanisms. Tamar was saying that when she reads reviews, she looks for certain adjectives, certain things about the way people write that give her confidence. These are, essentially, things that help her find experts. Once she's found them, if they're regular contributors, she can subscribe. You can already do this sort of thing on many review sites, but it's secondary. Individual reviews need to be abstracted from meta-reviews somehow. Not hidden, but divorced from the search-flow in which reading the reviews inevitably follows looking at the meta-review. Doing these things would make review sites 10x better.
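In code terms, the subscription flow is almost trivially simple – which is part of why it's frustrating that it's still secondary. A sketch, with made-up reviewers and data structures:

```python
# Once I've flagged reviewers whose taste I trust, the site could surface
# their reviews first (or exclusively) instead of the anonymous pile.
reviews = [
    {"author": "chile_head_505", "stars": 4.5, "text": "Get it Christmas."},
    {"author": "drive_thru_dan", "stars": 2.0, "text": "Took 40 minutes!"},
    {"author": "corkage_pete",   "stars": 3.0, "text": "Stemware was meh."},
]

subscriptions = {"chile_head_505"}  # my hand-picked "experts"

trusted = [r for r in reviews if r["author"] in subscriptions]
for r in trusted:
    print(f'{r["author"]} ({r["stars"]}): {r["text"]}')
```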