July 2009


I have been re-reading Steve Weber's great book:

Weber, Steven. 2004. The Success of Open Source. Harvard University Press.

…and I wanted to share this wonderful passage:

…theorizing about collective action is not a matter of trying to decide whether behavior can be labeled as 'rational.' It is a matter of understanding first and foremost under what conditions individuals find that the benefits of participation exceed the costs. This means, of course, understanding a great deal about how people assess costs and benefits.

Weber nicely captures something that I have been thinking about for a long time: the notion of 'rational' can sometimes be a cop-out, a too-convenient shortcut that sidesteps the messy (but necessary) work of understanding the dispositions, attitudes, and social conditions that influence decision-making in specific contexts.

Everyone is a-flutter this morning about Google's announcement: the rumors are true, and they are developing a new operating system after all. It's called the Google Chrome OS, and it's going to be targeted at netbooks. Netbooks are those cheap, light laptops designed to connect to the internet and not much else.

When Google makes any kind of big announcement, the tech world sits at attention and listens carefully. TechCrunch says Google just dropped a nuclear bomb on Microsoft. This one, however, is a big fat yawn. Google will release an OS, a small group of dedicated Google fanatics will use it, and then it will die a slow and steady death. Here's why:

  1. Google is good at search. Easy, one-step search. Fast and easy search has become so important to everyone, across every demographic and level of expertise, that Google has become the verb to describe the action. But operating systems are not like search. Operating systems for netbooks are even less like search. More importantly, all those people who understand fast and easy search are not interested in OSs, and they've never heard of netbooks.
  2. "Hey, wait!" you say. "Google isn't stupid. They know a netbook OS isn't for the masses. This is about Google promoting its brand and its products. It's the same strategy as Android." Well, I say, it may be the same strategy, but mobile OS and netbook OS are very different. There are only a handful of mobile OSs. There are dozens of OS distributions (flavors of Linux), many of which are targeted to netbooks. Google can't compete with that. No matter how much $$ and development time they throw at their OS, they'll need a dedicated community of developers and testers. And they'll need to steal them from another open source project. The Google carries a lot of weight, but it can't carry that load.
  3. An operating system doesn't take advantage of Google's core competencies. They have two: first, search; second, efficient use of massive (and massively distributed) servers. GMail, Docs, etc. make sense because they integrate with search and capitalize on Google's massive server capacity. A netbook OS does neither.

From my POV, this is just another bit of Google casting around in search of more footholds. Eventually they'll find one, but it won't be operating systems, and it won't be browsers. (Depending on who you ask, Google Chrome is languishing at between 1 and 3% market share.) It may be that these efforts, even if they're only incrementally beneficial, are still useful enough for Google to push. Convert a few developers, get some buzz, develop technology with multiple uses. That's fine. But let's not call it a nuclear bomb. A world with Google Chrome OS will look almost exactly like a world without it.

Two interesting-looking papers in the latest issue of JASIST:

Lim, Sook. 2009. “How and why do college students use Wikipedia?” Journal of the American Society for Information Science and Technology 9999:1-14.

and

Jansen, Bernard J., Mimi Zhang, Kate Sobel, and Abdur Chowdury. 2009. “Twitter power: Tweets as electronic word of mouth.” Journal of the American Society for Information Science and Technology 9999:1-20.

My dissertation research is all about motivation – why do people participate in online collective action? What are they getting out of it? As I've written about before, just because people aren't working for money doesn't mean they're working for free.

Anyway, when you study motivation, you think a lot about how to measure it. Recently I've been seeing some studies that measure motivation with survey questions. In these studies, the researchers come up with a list of potential motivations and ask participants to respond, usually in the form of Likert-style agreement statements. Several interesting papers use variations of this method, for example:

Oreg, Shaul, and Oded Nov. 2008. “Exploring motivations for contributing to open source initiatives: The roles of contribution context and personal values.” Computers in Human Behavior 24:2055-2073.

Kuznetsov, Stacey. 2006. “Motivations of contributors to Wikipedia.” SIGCAS Computers and Society 36:1.

Using surveys to assess motivation has many benefits. Principal among them, I'd say, is that surveys allow us to collect a large amount of data quickly. And studies such as the ones I cite above certainly provide valuable insights. However, surveys are limited in their ability to tell us about motivation. From my point of view there are at least two big challenges to measuring motivations (or anything, for that matter) with surveys:

  1. For many people – even most – motivations are soft or implicit attitudes. When you ask someone to respond to a direct question about something they are not used to thinking directly about, they may be forced to take a position on an issue they do not feel strongly about (their attitude is 'soft') or, more problematically, they may act on an attitude that they are unaware of or cannot express (their attitude is implicit). In either case, surveys that tick off motivations and ask participants to agree / disagree can sometimes come up with noisy data at best, and downright wrong data at worst.
  2. When it comes to motivation, social desirability is a big problem. When you ask people why they contribute to Wikipedia there are norms and expectations operating that mean fewer people will probably say they contribute to gain fame, and more people will say they contribute to do something good for the world. In the grand scheme of surveys, asking about motivation for online participation may be less susceptible to social desirability than other controversial issues – say, gay marriage. But still, it means survey data can only take us so far.

As I have been reading about the recent dust-up around LinkedIn's plan to crowdsource its site translation, I've been thinking about what other methods could supplement and complement survey data. Here's how LinkedIn asked its members what they would get out of the translation project:

[Screenshot: LinkedIn's survey question about translation incentives]

Hmmm. Interesting and useful, but only to a point. How else should we be investigating this?

One great alternative is interviews, which help to mitigate both of the issues above. When you get people talking about their lives, rigorous analysis often turns up consistent themes and patterns in their narratives that the participants themselves may not even be aware of. While social desirability is still an issue, part of the theory of qualitative work is that when you develop rapport with a participant, when you're non-judgmental and open, they will tell you all sorts of things. Here's a great example of interview-based research on the incentives for online participation:

Ames, Morgan, and Mor Naaman. 2007. “Why we tag: Motivations for annotation in mobile and online media.” Pp. 971-980 in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. San Jose, California, USA: ACM.

In social psychology, the gold standard in motivation research is a truly behavioral measure. This is why experiments are so great. If we manipulate one factor and then look at some measure of contributions (or their characteristics) as evidence of motivations, we get a view of motivation that's largely free of the problems I outlined above. This type of research isn't always practical, but it isn't all that uncommon in industry either. To find out what motivates users, companies often try out a new incentive program with a subset of users, then compare the results against the old program. I'm not sure LinkedIn had the luxury to do this, though, and it can get tedious, especially in cases where there are many incentives and motivations at work, as is the case when we talk about online participation.
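To make that comparison concrete, here's a minimal sketch of such an incentive experiment in Python. Everything in it is hypothetical: the contribution counts are fabricated, and I'm just using a difference in means plus Welch's t-test as one reasonable way to check whether the new program's effect is bigger than chance.

```python
# Hypothetical A/B test of an incentive program. Users are randomly assigned
# to the new program (treatment) or the old one (control); we compare their
# contribution counts. All numbers below are fabricated for illustration.
from scipy import stats

control = [3, 0, 5, 2, 1, 4, 2, 0, 3, 1]    # contributions under the old program
treatment = [5, 2, 6, 4, 3, 7, 4, 1, 5, 3]  # contributions under the new program

mean_c = sum(control) / len(control)
mean_t = sum(treatment) / len(treatment)
print(f"old program mean: {mean_c:.2f}, new program mean: {mean_t:.2f}")
print(f"estimated effect of the new program: {mean_t - mean_c:+.2f} contributions")

# Welch's t-test: is the difference larger than random assignment alone
# would plausibly produce?
t, p = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```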

Sticking with surveys, there is some really interesting methodological innovation aimed at combating the social desirability problem. It's called the list experiment. (For an overview of the method, check out this paper.) The basic idea is this: we distribute a survey that has 3 statements about issues that people may or may not agree with. The question is not which statements you agree with, but how many. For a random subset of our sample, we also add a 4th statement about the controversial issue that we expect has a social desirability problem. The difference in the average number of statements agreed with between the group that got 3 items and the group that got 4 estimates the percentage of people who agree with that 4th statement. And because we never asked anyone to say which statements they agree with, we mitigate the problem of social pressure.
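To make the arithmetic concrete, here's a minimal sketch of the list-experiment estimate in Python. The responses are fabricated for illustration; the only thing taken from the method itself is the difference-in-means logic.

```python
# Minimal sketch of the list-experiment estimate. Each respondent reported
# only HOW MANY statements they agreed with, never which ones. All responses
# below are fabricated for illustration.
baseline = [1, 2, 1, 3, 0, 2, 1, 2, 2, 1]   # group shown 3 innocuous statements
treatment = [2, 2, 1, 3, 1, 3, 1, 2, 2, 1]  # same 3 statements + the sensitive item

mean_base = sum(baseline) / len(baseline)     # 1.50
mean_treat = sum(treatment) / len(treatment)  # 1.80

# The difference in mean counts estimates the proportion of people who agree
# with the sensitive 4th statement, even though no individual response
# reveals anyone's answer to it.
estimate = mean_treat - mean_base
print(f"estimated share agreeing with the sensitive item: {estimate:.0%}")  # 30%
```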

This is not yet a widely used method, but in studies about hot-button issues like racism, list experiments have turned up very different results than the traditional direct-question method. In the paper linked above, for example, the author investigated attitudes about immigration and found a difference of more than 20 percentage points between the traditional method and the list experiment. Far more people supported cutting off immigration to the US than we thought.

Anyway, this has now turned into a very long post, and I have no blockbuster conclusion. In summary, I think assessing motivation with Likert-style questions is interesting, valuable, and important. However, it's subject to some important limitations – just like any method is. The best solution is a mixed-methods approach. Interviews, surveys, experiments. I'm sure I'll be thinking about this issue for a long time, and I think there is an opportunity here for some real methodological creativity.