My dissertation research is all about motivation – why do people participate in online collective action? What are they getting out of it? As I've written about before, just because people aren't working for money doesn't mean they're working for free.

Anyway, when you study motivation, you think a lot about how to measure it. Recently I've been seeing some studies that measure motivation with survey questions. In these studies, the researchers come up with a list of potential motivations and ask participants to respond, usually in the form of Likert-style agreement statements. There are several excellent examples of interesting papers that use variations of this method, for example:

Oreg, Shaul, and Oded Nov. 2008. “Exploring motivations for contributing to open source initiatives: The roles of contribution context and personal values.” Comput. Hum. Behav. 24:2055-2073.

Kuznetsov, Stacey. 2006. “Motivations of contributors to Wikipedia.” SIGCAS Comput. Soc. 36:1.

Using surveys to assess motivation has many benefits. Principal among them, I'd say, is that surveys allow us to collect a large amount of data quickly. And studies such as the ones I cite above certainly provide valuable insights. However, surveys are limited in their ability to tell us about motivation. From my point of view there are at least two big challenges to measuring motivations (or anything, for that matter) with surveys:

  1. For many people – even most – motivations are soft or implicit attitudes. When you ask someone to respond to a direct question about something they are not used to thinking directly about, they may be forced to take a position on an issue they do not feel strongly about (their attitude is 'soft') or, more problematically, they may act on an attitude that they are unaware of or cannot express (their attitude is implicit). In either case, surveys that tick off motivations and ask participants to agree or disagree can produce noisy data at best and downright wrong data at worst.
  2. When it comes to motivation, social desirability is a big problem. When you ask people why they contribute to Wikipedia, norms and expectations are operating: fewer people will probably say they contribute to gain fame, and more will say they contribute to do something good for the world. In the grand scheme of surveys, asking about motivation for online participation may be less susceptible to social desirability than more controversial issues – say, gay marriage. But it still means survey data can only take us so far.

As I have been reading about the recent dust-up around LinkedIn's plan to crowdsource its site translation, I've been thinking about what other methods could supplement and complement survey data. Here's how LinkedIn asked its members about what they would get out of the translation project:

Hmmm. Interesting and useful, but only to a point. How else should we be investigating this?

One great alternative is interviews, which help mitigate both of the issues above. When you get people talking about their lives, rigorous analysis often reveals consistent themes and patterns in the narratives that participants themselves may not even be aware of. While social desirability is still an issue, part of the theory of qualitative work is that when you develop rapport with a participant – when you're non-judgmental and open – you find that they will tell you all sorts of things. Here's a great example of interview-based research on the incentives behind online participation:

Ames, Morgan, and Mor Naaman. 2007. “Why we tag: motivations for annotation in mobile and online media.” Pp. 971-980 in Proceedings of the SIGCHI conference on Human factors in computing systems. San Jose, California, USA: ACM.

In social psychology, the gold standard in motivation research is a truly behavioral measure. This is why experiments are so great. If we manipulate one factor and then look at some measure of contributions (or their characteristics) as evidence of motivations, we get a view of motivation that's largely free of the problems I outlined above. This type of research isn't always practical, but it isn't all that uncommon in industry either. To find out what motivates users, companies often try out a new incentive program with a subset of users, then compare the results against the old way. I'm not sure LinkedIn had the luxury to do this, though, and it can get tedious, especially when many incentives and motivations are at work at once, as is the case when we talk about online participation.
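To make that kind of comparison concrete, here's a minimal sketch of what the analysis might look like. Everything in it is invented for illustration – the group sizes, the Poisson-distributed contribution counts, and the assumption that users were randomly assigned to the new or old incentive scheme – so treat it as a toy, not anyone's actual study design.

```python
# Hypothetical A/B-style check of an incentive program: users are randomly
# assigned to the new incentive (treatment) or the old way (control), and we
# compare a behavioral outcome -- contributions per user -- across groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Invented data: weekly contribution counts per user in each condition.
control = rng.poisson(lam=3.0, size=500)    # old incentive scheme
treatment = rng.poisson(lam=3.6, size=500)  # new incentive scheme

diff = treatment.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"Mean contributions (control):   {control.mean():.2f}")
print(f"Mean contributions (treatment): {treatment.mean():.2f}")
print(f"Difference in means: {diff:.2f} (t = {t_stat:.2f}, p = {p_value:.3f})")
```

The appeal is that the outcome is behavior people actually performed, not behavior they reported, which sidesteps both the soft-attitude and the social desirability problems above.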

Sticking with surveys, there is some really interesting methodological innovation aimed at combating the social desirability problem. It's called the list experiment. (For an overview of the method, check out this paper.) The basic idea is this: we distribute a survey that has 3 statements about issues that people may or may not agree with. The question is not which statements you agree with, but how many. For a random subset of our sample, we also add a 4th statement about the controversial issue that we expect has a social desirability problem. Because the groups are randomly assigned, the difference in the average number of statements agreed with between the people who got 4 items and the people who got 3 estimates the proportion of people who agree with that 4th statement. And because we never asked anyone to say which statements they agree with, we mitigate the problem of social pressure.
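To show the arithmetic, here's a toy simulation of that estimator. The sample size, the 50% agreement rate on the innocuous items, and the 30% "true" support for the sensitive item are all made up; the point is just that respondents only ever report a count, and the difference in mean counts between the two groups recovers the hidden proportion.

```python
# Toy list-experiment estimate: respondents report only HOW MANY statements
# they agree with, never which ones. All numbers below are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Control group sees 3 innocuous statements; simulate agreement counts 0-3.
control_counts = rng.binomial(n=3, p=0.5, size=n)

# Treatment group sees the same 3 statements plus the sensitive 4th item.
# Suppose 30% of people privately agree with the sensitive item.
agrees_sensitive = rng.random(n) < 0.30
treatment_counts = rng.binomial(n=3, p=0.5, size=n) + agrees_sensitive

# The difference in mean counts estimates the proportion agreeing with item 4.
estimate = treatment_counts.mean() - control_counts.mean()
print(f"Estimated support for the sensitive item: {estimate:.1%}")
```

Run it a few times with different seeds and the estimate bounces around 30%, which is also a reminder that the method trades away some precision for the privacy it buys.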

This is not yet a widely used method, but in studies about hot-button issues like racism, list experiments have turned up very different results than the traditional direct-question approach. In the paper linked above, for example, the author investigated attitudes about immigration and found a difference of more than 20 percentage points between the traditional method and the list experiment. Far more people supported cutting off immigration to the US than we thought.

Anyway, this has now turned into a very long post, and I have no blockbuster conclusion. In summary, I think assessing motivation with Likert-style questions is interesting, valuable, and important. However, it's subject to some important limitations – just like any method is. The best solution is a mixed-methods approach. Interviews, surveys, experiments. I'm sure I'll be thinking about this issue for a long time, and I think there is an opportunity here for some real methodological creativity.