
Online Collective Action


Tamar and I were just in Santa Fe for the Society for Applied Anthropology conference. The conference was meh, but Santa Fe is fairly great. In particular, we both love the food there. New Mexico cuisine is heavily influenced by Tex-Mex, or straight-up Mexican, but it's all about the New Mexico chiles. Green, red, or Christmas (that's both). Put that sauce on cardboard and it would taste great.

But I digress. 5 days at a conference means a lot of restaurants, which means a lot of searching on Yelp, TripAdvisor, ChowHound, etc. As anyone who's looked for a restaurant without local tips knows, it can be a chore. In the past, it's been argued that a big problem with online reviews is that they attract the extremes: people who love a place, and people who hate it. So, the meta-rating inevitably consists of a raft of 1's and a raft of 5's that average out to a solid, mediocre 3. When every place is a 3, online reviews aren't much more useful than a phone book – which, granted, is a little useful.
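
To make that concrete, here's a toy illustration (plain Python with made-up ratings, not data from any real site): a love-it-or-hate-it place and a genuinely so-so place can land on exactly the same average, and only the shape of the ratings tells them apart.

```python
# Toy illustration of the "extremes" problem: two very different
# rating distributions that average out to the same mediocre 3.
from collections import Counter
from statistics import mean, stdev

polarized = [1, 1, 1, 1, 5, 5, 5, 5]   # ranters and ravers
mediocre = [3, 3, 3, 2, 3, 4, 3, 3]    # genuinely so-so reviews

for name, ratings in [("polarized", polarized), ("mediocre", mediocre)]:
    print(f"{name:>9}: mean={mean(ratings):.1f}  "
          f"spread={stdev(ratings):.2f}  "
          f"histogram={dict(sorted(Counter(ratings).items()))}")

# Both print a mean of 3.0; only the spread and histogram reveal which
# place polarizes people and which is just plain mediocre.
```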

But in the process of using a variety of online review sites in Santa Fe, I noticed that the nature of the reviews could be changing. For one thing, it didn't appear to me that everyone was so extreme. I saw lots of 2s, 3.5s, 4s, etc. Lots of thoughtful reviews, people weighing their priorities, making substantive comments. This made me wonder if the extremes problem was at least partly an issue of growing pains. When online review sites were for early adopters and tech-savvy folks, they were primarily used as venues for rants and raves. Now that they're much more mainstream, it could be that moderate, balanced opinions are becoming the norm.

Once every meta-review isn't a 3, those scores can start to be really useful. But that points out another key weakness of online review sites. Review aggregators are all about the wisdom of crowds – that's the grand, cantankerous idea that a person is dumb, but people are smart. That individuals can be biased and wrong, but given a diverse enough group, all the biases cancel each other out and what's left is the good stuff. The wisdom.

But the dirty secret about the wisdom is that it's only as good as the crowd. As Tamar wisely pointed out, one good thing about experts is that we can find one that we agree with. We seek out someone who we feel shares our taste in food, wine, restaurant experience, and we trust them. With a meta-review, we know that the biases should cancel each other out to reveal the truth about the restaurant. But there is no truth (spoon). There's only the preference of the population. And without knowing anything about those preferences, the meta-review loses most of its value.

So, realizing the trouble, what does your average review-reader do then? Of course, we turn from the meta-review to the reviews themselves. It's a reasonable course of action, a logical next step, and it feels good. But there's no wisdom in it. Once we start reading individual stories and experiences, the wisdom of crowds is gone. Now we're just getting idiosyncratic little snippets of experience that are probably not representative of the restaurant. Our search will inevitably be biased by the way the reviews are sorted, or by the happenstance chronology of whether a bad, mediocre, or good review was the last one posted. Worse than that, our search will be subject to all kinds of social psychological biases that are interesting and appealing, but useless if we're looking for a good restaurant. We'll do things like give more weight to the first and last reviews we read, and specifically (but unconsciously) seek out reviews that validate things we already think.

Put these two issues together, and we've got a big problem for online review sites. The meta-review is of limited use because it lies: it purports to represent wisdom, but without knowing the crowd we don't know how much. The individual reviews feel good, but the wisdom doesn't lie there. (See what I did with the title? I hate myself.) The latter is a big problem for the Yelps of the world, because part of what makes us so ready to devote time to reviews is knowing that our story is out there, that our words will be read, and that what we think matters.

There are good solutions to both of these problems – solutions that I think will drastically improve online review sites. First, meta-reviews will be more and more useful the more we know about the underlying population. Review sites should start surveying their users to find out their priorities about whatever is being rated. This sounds boring, but there are lots of creative ways to get this type of info. Jane cares a lot about the food and isn't bothered by slow service because she doesn't mind sitting and chatting. Billy Bob isn't picky about food and will enjoy almost anything you serve him, but he thinks what he's really paying for is the service, so it had better be johnny-on-the-spot. Peter won't go to a restaurant that doesn't allow corkage and have good stemware, no matter how good the food and service are. With this kind of information, I'll be able to filter reviews based on my own preferences.
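
As a rough sketch of what that filtering could look like (plain Python, with invented reviewer profiles and ratings rather than any real site's data), a site could weight each review by how closely the reviewer's stated priorities match your own:

```python
# Minimal sketch of preference-weighted review aggregation.
# All profiles and ratings below are invented for illustration.

def similarity(a: dict, b: dict) -> float:
    """Cosine similarity between two priority profiles (food, service, ...)."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    norm = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
    return dot / norm if norm else 0.0

def personalized_score(my_priorities: dict, reviews: list) -> float:
    """Average the overall ratings, weighting reviewers who share my priorities."""
    weights = [similarity(my_priorities, r["priorities"]) for r in reviews]
    total = sum(weights)
    if not total:  # nobody looks like me; fall back to the plain mean
        return sum(r["rating"] for r in reviews) / len(reviews)
    return sum(w * r["rating"] for w, r in zip(weights, reviews)) / total

me = {"food": 0.8, "service": 0.1, "wine": 0.1}   # I mostly care about the food
reviews = [
    {"rating": 5, "priorities": {"food": 0.9, "service": 0.1}},   # a "Jane"
    {"rating": 2, "priorities": {"service": 0.9, "food": 0.1}},   # a "Billy Bob"
    {"rating": 1, "priorities": {"wine": 0.8, "service": 0.2}},   # a "Peter"
]

print(round(personalized_score(me, reviews), 2))  # ~4.1, leaning toward Jane's 5
# A plain average of the same three reviews would be a misleading 2.67.
```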

Individual reviews have their place too, but primarily as expert-finding mechanisms. Tamar was saying that when she reads reviews, she looks for certain adjectives, certain things about the ways that people write that give her confidence. These are, essentially, things that help her find experts. Once she's found them, if they're regular contributors she can subscribe. You can already do this sort of thing on many review sites, but it's secondary. Individual reviews need to be abstracted from meta-reviews somehow. Not hidden, but divorced from the search-flow in which reading the reviews inevitably follows looking at the meta-review. Doing these things would make review sites 10x better.

The latest issue of the journal Episteme hits all the current buzzwords. It's about Web 2.0, Wikipedia, and prediction markets. And epistemology. Big time.

But don't bother. This journal is quite pretentious, and the articles are not particularly well written. Worse, there doesn't seem to be much in the way of new thought there. Rather, it seems like a handful of scholars who have just caught on to what's been going on and are talking about it as though it's the latest and greatest. It reminds me of going to a session at the American Anthropological Association meetings in 2003 and listening to a sincere anthropologist give a six-point talk on how to use PowerPoint.

The only gem seems to be a long-ish article by Larry Sanger (Wikipedia co-founder) on expertise on Wikipedia:

Sanger, Lawrence M. 2009. "The Fate of Expertise after WIKIPEDIA." Episteme 6:52-73.

I'm not sure I agree with most of what he has to say – I need to stew on it a bit more. But regardless I think it's an interesting piece of opinion and history from someone who was there at the beginning.

I'm not sure you'd find the words 'Esther Dyson' and 'ignorant' in the same sentence very often, but she certainly embarrassed herself in a recent interview with Internet Evolution. On the subject of anonymity on the internet, she has this to say:

First, I was a much bigger fan of anonymity then than I am now. I thought it was cool. And it is, but it turns out anonymity really encourages bad behavior. I’m not in favor of the government tracking everybody and so forth, [but] at least persistent pseudonyms and communities and stuff like that makes everything a nicer place.

It’s like a lot of things. I’m pro choice, but I think abortion is an unfortunate thing. I think the same thing about anonymity: Everybody should have the right to it, but it’s not something one wants to encourage. And that’s not weasel words, that’s the reality of it.

[Anonymity] should be allowed. People should be able to make that choice, and there are many reasons to make that choice. If you live in an oppressive regime, you may well want people to be able to remain anonymous or have secret communications. But at the same time, it should not be encouraged, and it should be acknowledged that it’s a response to a bad situation.

So, apparently anonymity, like abortion, is a necessary evil. I think this reflects an extremely dated notion of anonymity on the internet. Freedom of speech under oppressive regimes isn't the only legitimate reason to be anonymous on the internet. Sure, some people use the cloak of anonymity to say nasty things and behave badly. But anonymity also allows people to free themselves from prejudices, stigmas, and social pressures. I'm not saying anything new here.

I would be willing to bet that the freeing applications of anonymity far, far outweigh the nasty ones. Meanwhile, I think Dyson's views reflect a stark dichotomy that doesn't really exist. The line between anonymous and not is not nearly so clear. Sure, either you have a persistent screen name or you don't. Either your online identity is formally attached to your offline one, or it isn't. But in reality, our identities are more fluid. Even without a persistent screen name, others may guess who I am from context and content. As a poster, I may be completely aware of this, but even a sheer blanket is enough to overcome the pressures that would silence me. In the other direction, what about sockpuppets? Bottom line, online identity isn't so cut and dried.

I'm delving deeply into this topic in my dissertation, too, so it's obviously close to my heart. In my research, though, I look at anonymity of content rather than anonymity of the individual, particularly in online collective action situations (think user-generated content). I'm exploring the ways in which the popular notion that everything on the internet should be stamped with an identity is wrong – where the fact that your content can be identified is actually a disincentive for providing it.

Anyway, I think the analogy between abortion and anonymity on the internet is crass and dated. Suggesting that anonymity is 'a response to a bad situation' is only fair if you consider the reality of the world a 'bad situation.' Otherwise, it's just our situation. Even then, I think it's important to start looking at anonymity through a more positive lens, and at the same time to try to shake off the all-too-common idea that everything you do on the web, anywhere, should be stamped with an ID.

Ryan pointed me to a recent post called 'The Trouble with Free Riding' on the Freedom to Tinker blog at Princeton. I'm going to try to summarize its arguments in a neat list and respond to each one, and in the process I'll miss some of the detail. But, wow, there's a lot there, and it's a thought-provoking post, even though I disagree with most of it. It's pretty imprudent to do away with 100 years of theory on public goods, collective action, and social dilemmas.

  1. The notion of free-riding is 'nonsensical' on Wikipedia. It doesn't even make sense to talk about it in that context.

    This seems like a rhetorical tactic to me. Wikipedia is a public good. It's produced by a process of generalized exchange, which means that one's rewards aren't contingent on what they give. Therefore, it introduces a social dilemma and the possibility of free riding. It's a non-starter to say that somehow online public goods are so different that we have to abandon previous notions, though this is a disturbingly popular point of view. Instead, let's talk about the ways that public goods that consist of digital information on the internet have some unique and special properties.

  2. People who free-ride on Wikipedia provide an audience, and the presence of that audience is a motivator for some contributors.

    This is a great point. I don't know of any experimental research that looks at an audience-as-motivator effect in public goods. Actually, I'd love to do that research. An interesting thing about Wikipedia is that estimating the size of the audience may be very hard, and may depend on how much you know about the system. So, we wouldn't expect the audience effect to work the same for my grandma who fixes a typo as it does for the Barnstar-waving expert contributor.

  3. Being an audience benefits free-riders because they value the content, and the presence of an audience benefits contributors because they appreciate that someone is out there reading. Therefore free-riding is not a relevant issue.

    Erp. Now we've gone too far. The possibility of an audience effect doesn't make free riding irrelevant. First of all, other contributors are just as much an audience (maybe more so!) as anyone. So, there's no necessary relationship between free riding and an audience. Also – it's kind of a snitty point – but the author makes an analogy to softball games, community orchestras, and poetry readings. These are certainly non-rival in the sense that one person's partaking doesn't prevent others from doing the same. But they're all completely excludable, so they're not really public goods, and free-riding doesn't mean the same thing.

  4. The scale and function of the Internet changes the nature of collective action.

    That's for sure. Part of what's so interesting about online public goods is that the internet reduces coordination and distribution costs so much that 10,000 people contributing a little can be more or less the same as 10 people contributing a lot.

  9. "The concept of "free riding" emphasizes the fact that traditional offline institutions expect and require reciprocation from the majority of their members for their continued existence. A church in which only, say, one percent of members contributed financially wouldn't last long. Neither would an airline in which only one percent of the customers paid for their tickets."
  10. Well, this really doesn't have much to do with how free riding is defined. Moreover, I would argue that most churches get the vast majority of their money from 1% of contributors or less. These are what Oliver and Marwell called 'privileged groups' who have the resources to provide the good on everyone's behalf. They also said that the chances of the group including these people go up as the size of the group goes up.

  6. As long as there are enough contributors to sustain the public good, the number of free-riders doesn't matter because of jointness of supply.

    Well, this completely depends on what your goal is. On the one hand, sure, once you reach critical mass, the marginal cost of providing the good is zero (or near-zero), so who cares how many free-riders there are. On the other hand, there are lots of benefits to adding to the group of contributors. Wikipedia isn't perfect – not even close. It's wrong on a lot of topics. It's poorly written in many places. It's skewed heavily towards CS and popular culture, and away from things like history and literature. There's a lot to be gained for Wikipedia by converting free-riders to contributors. And let's not forget about the many, many systems that never get to critical mass.

  13. "Every project would like to have more of its users become contributors."
  14. I understand that this may have been an afterthought, but I doubt this is true. I'd rather the group that writes the Linux kernel stay very, very small. And, it could be that if contribution rates went from 5% to 50%, we'd have a much worse encyclopedia on our hands. This is part of the reason why I think Knol is such a bad idea. Money appeals to everyone, but you don't want everyone contributing. Maybe one reason why Wikipedia is as good as it is, is because the social psychological incentives that are at work there (status, reputation, feeling smart, unique, like your knowledge is valued) work much better for the kind of people we want to contribute. Of course, there are lots of ways to contribute, so what we really want is to turn some free-riders into content contributors, others into editors, still others into fact checkers, and lots of them into proof-readers.

  15. "We've never before had goods that could be replicated infinitely and distributed at close to zero cost."
  16. Really? No goods before that were non-rival and non-excludable? What about national defense or clean air? Classic examples of public goods. Here's the fundamental problem. Yes, online public goods are interesting and unique in lots of ways. But, they don't require us to rethink 100 years of theory. It all still applies, though maybe in new and interesting ways. So, let's get over the hypoerbole and talk about those.

First, I'm perfectly aware that I'm using blogging as a way of avoiding the stress of preparing for quals, thank you.

Now, I'm hopping back on-board the Andrew Keen is an Idiot bus. His latest narrow-minded commentary argues that the economic downturn will mean the end of Web 2.0's 'free labor' movement. Knol will thrive, Wikipedia will fail, etc., etc., etc.

Here are a few reasons why this will absolutely not happen, Wikipedia will be fine, and we'll all keep participating on the internet:

  1. Keen's commentary is narrow-minded in the same way that Wired editor Chris Anderson's is (see this recent Wired article), but in the opposite direction. Anderson said the future of the web is FREE. Keen says the future of the web is $$. The truth is: both AND neither. People who participate in Wikipedia or Flickr or blogs may not reap their rewards in cash (some of them do, of course) but they do reap rewards. They connect with other people. They come to identify with social groups. They feel smart, they feel like their knowledge and opinions are valued and unique. They get reputational benefits and status rewards. Yes, I know – the economists are chomping at the bit to tell you that reputational benefits are just future cash rewards when your smart blog helps get you a better job. Sure, that's part of it, but not all of it. The point is, both Anderson and Keen are zeroed in on cash at the expense of all the other rewards (in econospeak: externalities) out there. Maybe that's because they're trying to cultivate a readership. But it's also a common point of view – reduce everything to numbers because that's what fits in my spreadsheet.
  2. Other rewards – let's call them social psychological rewards – are insulated from economic crisis. Sure, if you're about to starve or the police are knocking at your door to evict you, you might be spending less time on the internet. Social psychological rewards are great, though, because you can keep right on getting them when you lose your job and can't pay your bills. You can take solace in the constancy of community, and the fact that you still make a difference somewhere. Do people stop watching TV when they lose their jobs? Do they stop eating, smoking, knitting, running, hanging out with friends, or whatever it is they do to feel normal, even good? Nope. In fact, they often bury themselves in those things. Keen admits this himself, but for him it's because people who are out of a job will have nothing better to do. For me it's because of the rewards.
  3. Furthermore, most people don't have the expectation of monetary reward. This is the fundamental economic fallacy – people don't go around trying to convert their lives into economic gain. Sure, most models will assume that, but it's only part of the picture. There are so many other preferences, values, and social interactions that figure into the choices we make. We're not all marching around with dollar signs in our eyes. I just think it wouldn't occur to most people to think 'Now that I'm out of a job, I'd better get paid for blogging!' because that's not what it's about for them. Keen seems to be the only one screaming that these people should be paid, and that's because (as I said above) he's so stubbornly ignorant of the other benefits they receive.

You can imagine that, in an election that's as enthusiastically watched as this one, there's a lot of attention to predicting the outcome. Sites like fivethirtyeight.com use sophisticated statistical models to make predictions based on polling data. I've written about this previously.

Another mechanism getting a lot of attention, however, is the prediction market. Prediction markets are just like stock markets, except that instead of investing in shares of a public company you invest in a future outcome, such as Barack Obama winning the election. When you buy a stock on the NYSE or NASDAQ, what you're really doing is making a prediction that a particular company will do well in the future, or at least that its stock price will. Prediction markets take that basic mechanism and turn it into a way to predict. In a prediction market, the rise and fall of an outcome's price reflects the collective intelligence about how likely that outcome is.

At least that's the idea. The New York Times has an interesting article today about recent problems with prediction markets around the upcoming presidential elections. Apparently, among three top prediction markets (InTrade, BetFair, and the Iowa Electronic Markets), there's been a great deal of variation in predictions at any given time. In other words, the predictions don't agree. This is not supposed to happen.

Prediction markets work on the principle that if you give enough rational, self-interested, maximizing people the chance for tangible benefit from a future outcome, all those people acting in their own interest will give you a very good idea of what's going to happen. It's in their interest to predict as best they can, and when lots of people do that, the invisible hand works to set the right price (and therefore the right prediction). Meanwhile, I can bet on a prediction at any one of the prediction markets, and theoretically I'll put my money in the one that gives me the best chance for a gain. If InTrade is pricing a John McCain win higher than the other markets are, then Obama shares are cheap there, and (as a Democrat who expects Obama to win) I'm buying them as quick as I can. And vice versa. The effect should be to reduce the variation in price across the markets.
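
To see that arbitrage logic in miniature, here's a back-of-the-envelope sketch (plain Python with hypothetical prices, not real InTrade, BetFair, or IEM quotes): when the same pays-$1-if-Obama-wins contract trades at very different prices across markets, a bettor can lock in a risk-free gain, and chasing that gain is what should pull the prices back together.

```python
# Back-of-the-envelope cross-market arbitrage check.
# Prices are hypothetical. A contract pays $1 if the outcome happens,
# so its price is (roughly) the market's implied probability.

prices = {            # price of the "Obama wins" contract, in dollars
    "MarketA": 0.52,  # e.g. a market skewed by one big McCain bettor
    "MarketB": 0.64,
    "MarketC": 0.62,
}

cheap = min(prices, key=prices.get)
rich = max(prices, key=prices.get)

# Strategy: buy "Obama wins" where it's cheapest and "Obama loses"
# (priced at 1 - p) where "Obama wins" is most expensive. Exactly one
# of the two contracts pays out $1, whatever happens on election night.
cost = prices[cheap] + (1 - prices[rich])
profit = 1.0 - cost

print("implied probabilities:", prices)
print(f"buy YES on {cheap} at {prices[cheap]:.2f}, NO on {rich} at {1 - prices[rich]:.2f}")
print(f"guaranteed $1.00 payout costs ${cost:.2f} -> ${profit:.2f} profit per pair (before fees)")

# Traders chasing that spread bid up the cheap market and sell down the
# expensive one, which is why prices across markets "shouldn't" diverge.
```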

But apparently that isn't happening. The NY Times article notes that InTrade saw a giant swing towards John McCain recently that wasn't matched in any of the other markets. Apparently, there was a single 'institutional investor' who was betting big on InTrade for 'political reasons' and skewing the results. At least that's the story they tell. But that's not supposed to happen either. In a big market like InTrade, outliers should not be able to push the price around that much. The bigger the market, the more buying or selling it takes to artificially change the result.

It raises the question: how big does the market have to get before it's insulated against outliers? Or can it ever really be insulated at all? We're starting to put a lot of stock in prediction markets. For sure, some of it is justified – the Iowa Electronic Markets have successfully predicted the winner of the presidential election many, many times.

But I don't think we know enough about how incentives work in prediction markets yet, especially on issues like presidential elections. I'm not going to fight hundreds of years of economists who have relied on the assumption that people seek profit for themselves. No doubt that's a powerful motive. But it's not the only one. Knowing that people are paying attention to prediction markets, and that those markets have the power to sway perceptions and votes, maybe my incentive to invest in John McCain isn't only based on wanting to walk away with a new flat screen TV; it's also based on my Republican ideology. I'm willing to bet my $ on the chance that swaying the market could sway the election.

Is that so far-fetched? I don't think so. The NY Times notes that big-time political sites like RealClearPolitics list the prediction market prices right there next to the polls. I'm not claiming that the prediction markets are broken. I'm just claiming we don't know. The assumption that profit is the only incentive at work is unfounded in my view. In high-stakes situations like this, where other powerful incentives exist, we really need a deeper understanding of what drives people when they bet.

Yahoo!'s new reputation design patterns got me thinking – what makes a reputation? When I browsed through the 9 design patterns lumped under the title of 'reputation', my first thought was that these are interesting and valuable, but they are not reputation elements.

But then, step back. A reputation system is a substitute for personal experience. It provides you with the information you need to make a determination about someone (something?) else without having to go to all the trouble of getting to know them. Traditionally, that determination has been about interpersonal trust. eBay's reputation system is the best example. I don't know anyone who's selling blidgets on eBay, so I don't know how likely they are to cheat me. eBay has found a formal way to represent the likelihood that I'm dealing with a seller who will meet my expectations.

So, the reputation design patterns aren't like that. They're not about trust, at least not directly. But, they are signifiers that help me know someone better. Just like my score on eBay represents something about my behavior, so do achievement badges, rankings, etc. They encapsulate information about the type or volume of my participation in different ways. And this information, in turn, may help me figure out something more about trust. Certainly, in an indirect way at least, these things act as elements of a reputation because they substitute for my personal experience with someone else's contributions over time. If I'd been there to see what they did myself, I wouldn't need the badge or the level information.

Still, if we take this view, is my age reputation information? My address? After all, that information saves you the trouble of having to be around to count the years, or having to travel to my town to check where I live. (I'm going off the deep end now!)

More importantly, why does any of this matter? Who cares whether it counts as a reputation? Well, there's a bit of truth in that – maybe reputation is in the eye of the beholder, at least in practice. But only sort of. When it comes to design patterns, I think the important thing is to realize what badges or points are good for. So, certainly my badges and levels help others figure out how to assess my contributions when they don't know any better. But they also work as incentives that make me feel valued, like my contributions count, like I'm making progress, and like people think of me as an expert. They give me a goal or a quota to shoot for, or a status marker that tempers my insecurity.

I think Yahoo! gets this. But the patterns kind of mix up the part that's for you (the reputation) and the part that's for me (the incentive). If the point is to understand your users deeply and design incentive mechanisms with them in mind, then breaking those two apart is essential, and there's a lot of good work to be done there. Anyway, as I said, I'm somewhat conflicted. Comments welcome (as always)!

Through its Developer Network, Yahoo! has just released a nice set of design patterns for reputation systems. I have some issues with some of the language and patterns, but overall, I think they put together a really great typology.

Yahoo! Reputation Design Patterns

My biggest beef is with the most 'meta' pattern that they call the 'Competitive Spectrum'. I understand the desire to simplify, but in my view, these 5 things are not really on the same spectrum at all. I think the 'combative' type is off in a corner of its own – a corner that really doesn't exist much on the web. As for the other four, I can't make out what the axis is that they vary on – overall level of competition doesn't make sense to me. Yahoo! seems to realize the confusion themselves, as they include a variety of caveats in their description of the spectrum.

Competitive Spectrum

I really agree with Bryce Glass (one of the patterns' creators), who points out that these patterns are pretty ubiquitous now, and so simply pointing them out isn't enough. It's how they're used – or more specifically how intelligently they're used – that will make them powerful. I think Yahoo! still has some work to do to provide best practices for implementing these patterns intelligently. Obviously, given my interests, I'd like to see them look at some of the underlying social psychological processes, and use them to make some informed recommendations. Also, I think designers really need an accessible way to understand the 'corruption effect of extrinsic motivation' (or, as economists call it, 'crowding out'). I would argue that in many contexts when incentives like the ones Yahoo! lays out don't work as expected, the corruption effect is a big reason why. But, all in all, it's a great start. (Note that my opinion is in no way influenced by the fact that I'm an intern at Yahoo! this summer… heh)

(Thanks to Ben for the tip!)

Duncan Riley's recent post at TechCrunch is a little contradictory, on its face, about the fate of the new Digg-clone Ximmy. Ximmy is Digg-style social voting with the twist that they pay people for making pro-social (or at least pro-Ximmy) contributions. First, Duncan says "Will Ximmy steal away top Digg and Reddit users looking for pocket money? probably not" but then he ultimately concludes:

I’d bet that sites such as Ximmy (although perhaps not Ximmy itself) will win the hearts and minds of a decent portion of the market, after all, if we’re going to spend time building value for these sorts of sites, it’s not much to ask in return that we should be compensated for our time, even in a small way.

Now, there certainly is a subtle distinction there between what could happen to extremely dedicated Digg users and what could happen to the rest of the rabble. I'm just not sure that Duncan is making it. I'm going to make a slightly different prediction that sort of splits Duncan's down the middle. First, I agree – Ximmy will not steal away top Digg-ers, any more than Knol will steal away top Wikipedians (see my previous post on this subject). If we were to rank the incentives that motivate these people, monetary incentives would be far down the list, and much less powerful than robust social psychological incentives like rational zealotry (i.e. fierce belief in the cause), reputation, status, and group belonging. Greenbacks just can't compete.

However, Ximmy and sites like it will never be a large part of the market in the long term for at least two reasons.

  1. Cash recruits the wrong kind of content, the wrong kind of users. Paying users to submit and promote stories may indeed promote a certain amount of contribution – but what kind of contribution? Ximmy chooses to offer a comparatively large payoff when a submitted story gets promoted to the front page – undoubtedly a move that's intended to encourage high-quality postings. However, experience with Digg has shown that quality often has little to do with what gets promoted to the front page. Some have argued that a small cabal of powerful users can effectively get any story they want promoted. Other analyses have shown that stories in certain categories are much more likely to be promoted overall. Case in point – stories with the word 'Linux' in the title are almost 10 times more likely to be promoted.

    So, gaming the system is possible – that's not news. The story here is that when you pay people, you make it about the money – social norms, ideology, and community-orientation can all take a back seat. Of all the behaviors you could be encouraging, you end up encouraging gaming the system. So, though a caring, thoughtful, internally motivated user might view the payment as an incentive for quality, most users will just see it as a way to make money. Cash will recruit the wrong kind of users and the wrong kind of contributions. Quality will suffer; promoted stories will be old news, garbage, or confined to a few narrow, stereotyped categories like Linux and lolcats. Critical mass will never arrive for the Ximmy community, and it will fold right quick.

  2. The economics won't work out. This is a corollary of reason 1, in a way. (Keep in mind, I haven't done the research necessary to back up this claim, but one could speculate, for example, about what would happen if Digg were paying users Ximmy-style, if the right stats were available.) So, now Ximmy is paying plenty of people, but the content is garbage. They're drawing the wrong kind of users, the ones motivated by the money. It becomes something of a closed system – a relatively small number of people who submit and promote each other's stories. Since Ximmy doesn't provide anything new content-wise over Digg or Reddit, and its postings are lower quality and its community is smaller, it will have a hard time getting pageviews. How does this business model work out? It would be a simple equation if we had the right inputs. How much is each of Ximmy's 'points' worth in dollar value? How many do they give out each day? Take that amount, plus overhead, plus at least 40% to make it a sustainable, profitable business, and that's the ad revenue they'd need to bring in (a back-of-the-envelope sketch follows this list). Covering it seems very unlikely to me. Is this a funded start-up? I hope not. I assume the guys at Ximmy tried to work this out themselves, and if they got funding, convinced some VCs that it would work. But if they did, they were operating on a false assumption – that cash would stand in for other motivations.
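
To make that 'simple equation' concrete, here's a back-of-the-envelope version. Every number below is invented, since Ximmy doesn't publish the relevant stats; the point is the shape of the calculation, not the totals.

```python
# Hypothetical Ximmy-style payout economics. All inputs are made up.

point_value_usd = 0.01          # assumed redemption value of one 'point'
points_paid_per_day = 250_000   # assumed points handed out for submitting, voting, promoting
overhead_per_day = 500.0        # assumed hosting, staff, payment processing, etc.
target_margin = 0.40            # the "at least 40%" to make it a real business
rpm_usd = 2.00                  # assumed ad revenue per 1,000 pageviews

daily_cost = points_paid_per_day * point_value_usd + overhead_per_day
required_revenue = daily_cost * (1 + target_margin)
required_pageviews = required_revenue / rpm_usd * 1000

print(f"daily payouts + overhead: ${daily_cost:,.0f}")
print(f"revenue needed with a 40% margin: ${required_revenue:,.0f}/day")
print(f"pageviews needed at ${rpm_usd:.2f} RPM: {required_pageviews:,.0f}/day")
# With these made-up inputs: $3,000/day in costs -> $4,200/day in revenue
# -> about 2.1 million pageviews a day, every day, from a site whose
# content (per reason 1) is mostly recycled Digg.
```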

So, anyway, I expect that Ximmy will fail, and soon. I would guess that most of the user-generated content site-clones that are popping up will fail in the same spectacular way when they try cash-based models. I'm sure I sound like a broken record by now, but Duncan got the most important part of online participation wrong. He thinks that 'it's not much to ask in return that we should be compensated for our time.' But people are compensated. It just turns out that the narrow-minded view of online participation is that anything we can't fit on a spreadsheet or assign a dollar value to can't count. But I think cash is a weak and fickle tool when compared to the powerful social and psychological incentives that drive people on the web, and I think the next year will prove it.

Blogs have been buzzing lately about the introduction of a new(-ish) platform from Google called Knol. Check out the announcement about Knol on Google's blog. The idea is that Knol is a cross between Wikipedia and one of a few systems (e.g. Squidoo, Mahalo, Hubpages) that give users tools for creating portal pages on specific topics. The hope is that people will take ownership of particular topics, style themselves as experts, summarize and edit all the content that's out there in the cloud for everyone's benefit. Most of these sites have thus far tried to motivate users through a combination of reputation and community involvement. And let's not forget the value of the knowledge base itself.

Google is tackling issues of motivation and quality with reputational and social networking tools, just like everyone else. But with Knol, they also seem to be stepping in with the increasingly classic Google move: add cash. How else is a deep-pocketed latecomer supposed to make a dent in the market? The strategy is no doubt driven by Google's bevy of economists, who argue: when a rational person has the choice between doing something just for the warm-fuzzies, or for warm-fuzzies and cash, that person will go for the cash.

I'm surprised, however, that for all the talent they have on staff, no one around there has told them how dangerous this idea is. It turns out there's all sorts of evidence that when you add monetary payments (or, more generally extrinsic incentives), all kinds of unexpected things can happen. Motivation can be reduced, quality can dip, resentment brewed. I recommend the good folks at Google get started by reading Not Just for the Money by Bruno Frey (an economist!) and The Hidden Costs of Reward edited by Lepper and Greene.

Of course, all of this will depend on just how carefully Google designs their system. One of the most fascinating areas of my research is understanding how the minutiae of user interaction and design elements can influence social psychological motivations. Crowding out (when extrinsic incentives push out intrinsic ones instead of adding to them) can in some cases be crowding in when the context is right. We know so very little about this stuff right now, at least in a scientific sense.

Ultimately, there's a pretty fundamental divide here. Wikipedia is the 10-ton gorilla of knowledge sharing, and they've gotten this far without paying people a cent. Google is betting that Knol will be able to leech away contributors from Wikipedia. Michael Arrington over at TechCrunch seems to agree. And they may be right. But, I worry, to Google's own peril. Who are those users who will abandon Wikipedia to feed from the Google cash trough? Are they the invested, high-quality, knowledgeable contributors that Google would need to build a respectable knowledge repository? Doubtful. But it may be presumptuous of me to assume that Google cares about the quality of their Knol content. Maybe sheer volume is enough. It's their own property, so they can promote it in their search results all they want, and if the eyeballs and ad revenues are there, maybe Google is happy. But then let's not fool ourselves by calling it a 'knowledge repository' when it's really just another ad vehicle.
