
Online Collective Action


Well, yesterday was the day, and in less than 9 hours DARPA crowned a winner: the MIT Red Balloon Challenge Team. I spent a good part of the day glued to Twitter, watching the contest develop and trying to read between the lines (mostly unsuccessfully). So after all the build-up and a day of searching, what have we learned from the DARPA Network Challenge?

  1. DARPA Made it Easy. Take a look at the map of the balloon locations. Notice anything? Huge swathes of the country with no balloons? Yup.

    [Image: map of the balloon locations]

    Remember when I flippantly dismissed the Twitter-only strategy by pointing to the NY Times infographic showing how few Tweets there are in so much of the country? Well, apparently DARPA knew that too, and decided to make the challenge very, very easy. The balloons were in major parks in major cities, almost entirely on the Atlantic and Pacific seaboards. I mean, for Pete's sake, they put a balloon in Union Square, San Francisco!

  2. How did MIT win? Well, first let me tell you how they did not win. MIT's victory had absolutely nothing to do with the ridiculous reverse pyramid scheme they were using to hand out money to tipsters. I am willing to stake a fortune on that one. Here's the graphic they used to try to explain it:

    [Image: MIT's reverse pyramid incentive scheme]

    I'm hoping the folks over at MIT aren't patting themselves on the back for their brilliant use of monetary incentives.

    So how did MIT win? Well, you can find the answer to that question on Slashdot, Techcrunch, CNet News, the Washington Post, and… Get the picture? Shocking. Well-known educational institution gets a huge amount of publicity, attracts more tipsters, wins scavenger hunt.

  3. Where was the Innovation? Unclear. I didn't see any interesting innovation in incentives. I didn't see many creative uses of technology. Most teams set up websites with simple web forms for submitting tips. Army of Eyes had an iPhone app for that same purpose, and they did quite well, I think. But it still boiled down to a simple form for submitting tips. I mean, the FBI has been doing the same thing with a phone number and a few operators for years. This is a nice reminder that, although the internet and social media provide some fascinating new contexts for interaction, what we actually do with them – how we interact and organize – is fundamentally the same as before.

    MIT's team says they were using some algorithm to verify balloon tips. I'm not sure what that means. I know that they were keeping track of all DARPA-related posts during the challenge. They may have been looking for posts such as these, which I found earlier in the day…

    [Image: screenshot of tweets mentioning red balloons]

    Of course, none of those actually referred to a balloon in the challenge (as best as I can tell), but looking for those types of messages was more the sort of strategy I was expecting.

  4. Fin. I can't help but feel that the whole thing was a bit of a disappointment. The way DARPA decided to place the balloons meant that teams could win without any secret sauce. I was excited to see what people would do to find the balloons on a random stretch of I-80 in Nebraska or whatever.

    I'm hoping that this is all part of DARPA's strategy, and that the next thing they'll do is give people a chance to really organize and make the challenge really hard. I think they've got the right idea that there's innovation to be done in this space. But Twitter is such a blip on the radar. To really tackle nationwide emergencies, and to effectively harness the power of networked media and the internet, we need to learn to integrate new technologies with old organizational tools. We need to look at lessons from MoveOn and Obama – people who arguably did networked organizing, combining the power of new and old media, better than anyone ever has – and see how we can apply those lessons toward a more directed goal.

    I'd love to work on that problem. DARPA – bring it!

DARPA posted a poll on their Facebook page. Here are the results as of about 10:30AM PDT on Dec. 4th:

[Image: screenshot of the Facebook poll results]

Not sure how many people have voted. My vote was 'Never.' Although it occurs to me now that I don't quite know their definition of 'solved.' If they mean when someone will find at least 5 balloons and get the prize, well, that should happen in less than 24 hours. Maybe even less than 6. But I'm betting no one will find all 10.

Well, the day of reckoning is almost upon us – Dec 5th, DARPA Network Challenge Day! I've previously blogged about this here, and make sure to take a look at the comments because there's some good stuff there.

Anyway, I was looking again at the rules recently, and noticed some things I'd missed before. The prize goes to the first team that submits the most correct balloon locations. The balloons will only be aloft during the daylight hours of Dec. 5th, unless there are weather difficulties, in which case the balloons will go up on the 6th or later. It's unclear whether they'll delay all the balloons, or whether, depending on the weather, there could be balloons up on different days in different parts of the country (more bad news for teams that are thinking of driving around looking!). But teams have until Dec. 14th to submit winning entries. And they've revised the required accuracy of the location to within 1 mile (huge!). To me, this suggests that DARPA thinks the challenge will be won algorithmically. I think they might be right.

On Dec. 5th, I'm guessing we'll hear about the locations of 3-6 of the balloons. There may be a team here or there that has some private information, but I suspect most of the locations will be known by all teams who are paying attention. (Of course, only one of them will be first to submit…) But then the real fun begins. Starting at nightfall, the balloons are gone, but we can still find them. How? Well, I don't know. How much media is uploaded to the cloud each day? How many pictures and videos might show a balloon in the background, even if the photographer never noticed it? Of course, that media would have to be geotagged. It should be possible to infer the location of balloons after the fact with some accuracy, especially now that we only need to be right within 1 mile. After all, teams will have more than a week to look, and then even to go out to these locations and pace off distances if they choose.
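
Just to make the idea concrete, here's a rough sketch (in Python) of the kind of after-the-fact analysis I'm imagining. Everything in it is hypothetical – I'm assuming we already have a pile of geotagged photos that someone (or some computer-vision filter) has flagged as possibly showing a red balloon, and the only remaining job is to turn those noisy coordinates into guesses that land within the 1-mile tolerance.

    import math
    from collections import namedtuple

    # Hypothetical record: a geotagged photo flagged as possibly showing a balloon.
    Sighting = namedtuple("Sighting", ["lat", "lon", "timestamp"])

    def haversine_miles(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in miles."""
        r = 3958.8  # Earth radius in miles
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def cluster_sightings(sightings, radius_miles=1.0):
        """Greedy grouping: attach each sighting to the first cluster whose seed point
        is within the 1-mile tolerance; otherwise start a new cluster."""
        clusters = []
        for s in sightings:
            for cluster in clusters:
                if haversine_miles(s.lat, s.lon, cluster[0].lat, cluster[0].lon) <= radius_miles:
                    cluster.append(s)
                    break
            else:
                clusters.append([s])
        return clusters

    def estimate_location(cluster):
        """Average the coordinates in a cluster to get one balloon guess."""
        lat = sum(s.lat for s in cluster) / len(cluster)
        lon = sum(s.lon for s in cluster) / len(cluster)
        return lat, lon, len(cluster)

    if __name__ == "__main__":
        # Fabricated coordinates near a city park, purely for illustration.
        sightings = [
            Sighting(37.7880, -122.4074, "2009-12-05T10:02"),
            Sighting(37.7875, -122.4080, "2009-12-05T10:15"),
            Sighting(37.7890, -122.4069, "2009-12-05T11:40"),
        ]
        for cluster in cluster_sightings(sightings):
            lat, lon, n = estimate_location(cluster)
            print(f"Candidate balloon at ({lat:.4f}, {lon:.4f}) from {n} sightings")

The hard part, of course, is the flagging step I assumed away – but with a week to work and a 1-mile tolerance, even a noisy pile of candidate photos starts to look useful.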

So, what do you think? Is this how the challenge will be solved?

As most people who read the iSchool's mailing lists know, I got pretty excited about the DARPA Network Challenge. If you haven't heard, it's a competition in the spirit of the DARPA Grand Challenge. The idea is that on a particular day in December, for a few hours only, DARPA will fly 10 large red weather balloons in 10 different locations, somewhere near a road, somewhere in the US. The challenge is to find them all. Sounds easy, right?

I like what DARPA is thinking here. We know that internet-based tools have helped people coordinate massive jobs on the fly. The best recent examples of collective efforts enabled by the internet were the searches for Jim Gray's lost sailboat and Steve Fossett's downed plane. Unfortunately, neither search turned up anything. Both of those examples were a carpet-bombing approach: we all had one job, search satellite photos for clues. This challenge is a little different, but much more like the kind of thing we might want to mobilize for nationally in response to a crisis. Those balloons could be anywhere. They'd be hard to spot. And depending on how hard DARPA wants to make this thing, a good portion of them are likely to be stationed in rural America. Since they announced the challenge a few weeks back, DARPA has updated the rules to say that if no one finds all 10, they'll give the prize to the first team to find at least 5. This tells me it's going to be hard. Awesome!

Dumb Ideas

The web is full of commentary on this, and I'm repeating a lot of it. Here's a quick review of the very dumb ideas, ranked in order of dumbness.

  1. Send people out looking. Some people think the answer to this is to get a big group of people to drive out looking for these balloons. It seems rational on the face of it, like DARPA planned a big game of hide and seek. But this is very dumb. No group will be large enough to cover the necessary ground in time. Ryan's back-of-the-envelope calculation (which I reproduce in a quick sketch after this list):

    To aid with calculations, according to the CIA World Factbook there are 2,615,870 miles of paved roads in the U.S. If we assume that DARPA only considers 80% of those roads eligible for this challenge, and an average speed of 45MPH, it would take 46,500 hours to travel those roads, which is just under 2000 (person-)days. So how many people do you need in order to scout out all this terrain?

  2. Offer a cash bounty and watch the tips roll in. Many teams think this will be won by offering to split the winnings amongst people who submit the locations of balloons. Nuh uh. First of all, we have a repeat of problem one above: how are people going to find these things, by driving around looking? Second, who is going to go driving around on the basis of a cut of the $40,000 prize? The problem is that most people would view it like a lottery: you'll give me $3,000 for submitting a balloon location if I find it, but the chances that your team will win are so small anyway. Offer cash prizes and a very small group of committed people might be motivated to go driving around, but you won't get very far with that.
  3. Look at satellite photos. Again, this sounds smart on the face of it, but it's not. This blog post on the subject puts it nicely:

    The very best resolution you’ll realistically get is 1.6 ft, meaning an 8ft red balloon will take up about 19 pixels. That’s not bad, but that’s only under ideal conditions, so if you’re trying to automate the process of finding those 19 pixels with computer vision, you’re going to get a lot of false positives (see below).

    To that add the issues of weather (clouds make balloons hard to see from above) and cost (it would be way more than $40k). You might imagine a collective effort like the Jim Gray search, but good luck getting enough people to care about this.

  4. Data Mining. Armies of computer scientists read about this problem and started to design their Twitter crawlers to look for the inevitable flood of 'OMG WTF Red Balloon?' messages. This is the least dumb of the dumb ideas, because they will actually find some balloons this way. Only a tiny fraction of people know about this DARPA challenge, but there are plenty of people who might be curious about a giant red balloon in their neighborhood. But let's not get too carried away. This is still dumb. Reason: there are only like 12 people in the US who use Twitter. Ok, I'm exaggerating (a lot) for effect. But the point is, it's a small fraction of Americans who use Twitter and, more importantly, they mostly live in about 3 cities. Check out this beautiful NY Times visualization of Twitter usage during the Super Bowl in Feb. Now try to estimate the fraction of the country in which there were no Tweets at all… Roops.
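
For anyone who wants to check the arithmetic behind Ryan's estimate (and the satellite-imagery numbers from idea #3), here it is as a little Python script. The 80% eligibility figure and the 45 MPH average speed are Ryan's assumptions, and the 1.6 ft resolution and 8 ft balloon come from the post quoted above; the rest is just arithmetic.

    # Back-of-the-envelope numbers behind "dumb ideas" 1 and 3.
    import math

    # Idea 1: drive every eligible road looking for balloons.
    paved_road_miles = 2_615_870      # CIA World Factbook figure quoted above
    eligible_fraction = 0.80          # Ryan's assumption
    average_speed_mph = 45            # Ryan's assumption

    hours_to_cover = paved_road_miles * eligible_fraction / average_speed_mph
    person_days = hours_to_cover / 24
    print(f"Hours of driving: {hours_to_cover:,.0f}")   # ~46,500 hours
    print(f"Person-days:      {person_days:,.0f}")      # just under 2,000

    # Idea 3: how big is an 8-ft balloon in the best commercial satellite imagery?
    resolution_ft_per_pixel = 1.6     # best realistic resolution, per the quoted post
    balloon_diameter_ft = 8.0
    diameter_px = balloon_diameter_ft / resolution_ft_per_pixel
    area_px = math.pi * (diameter_px / 2) ** 2
    print(f"Balloon footprint: ~{area_px:.0f} pixels")  # roughly 19-20 pixels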

How to Win

So, as much as I like to critique other people's ideas, I'd like to offer my own opinion on how to win this challenge.

  1. Forget cash. Splitting up the prize money will do no good. If you're going to motivate people to help with the challenge, it's going to be based on non-monetary, social-psychological incentives. In other words, they're going to help because they're interested, because they think it's fun. So…
  2. Focus on the hard-to-find balloons. I'm guessing somewhere between 3-6 of the balloons will be stationed in social-media-dense areas, so a lot of people will find them in the first few hours, and there'll be a lot of easy-to-spot chatter about it on Twitter and Facebook. So figure on someone else finding those, and worry about the balloons in that vast swath called middle America.
  3. Harness the people who are out anyway. We can develop as many sophisticated motivational schemes as we want. We can donate the money to charity or get a major celebrity to mobilize people. We can design games with prizes and achievements and badges. Yay! The problem is that it's still a lot to ask of people to go driving around. So, why not get the people who are out driving anyway? Partner with trucker organizations. Get road trippers and cops. These are people who are driving anyway, so all the incentives have to do is get them to report in. Reporting and verification will still be an issue. So is communication. Truckers use CB radio, so design an automated CB messaging interface. Road trippers are bored out of their minds, so make the balloons part of a big game of I SPY. Design an iPhone app and give away instant music downloads – road trippers need new music! I'm not saying that's easy. Just easier.

Prediction

My prediction is simple: no one will find all 10 balloons. The winning team will find 5-6 at most. Most people will drastically underestimate the magnitude of this challenge. If DARPA wants to make this hard (and I think they do), they can make it VERY hard. The problem is that the only people who could reasonably plan this in time are the teams of technologists who think they can solve this through data mining alone. But they can't. Strategies with a real chance of winning would take too long to develop. DARPA will re-issue the challenge and up the ante.

Update: A few new ideas and predictions here. I'm so excited! Tomorrow's the day!

Recent discussion about the future of Twitter got me thinking: is Twitter a public good?

[Image: Twitter logo]

First, let's make sure we're on the same page about public goods. Public goods have two properties: when one person takes advantage of the good, it doesn't reduce the amount available to anyone else (that's called non-rivalry), and the good is provided to everyone or to no one; you can't selectively exclude some people from taking advantage of it (that's called non-excludability). Traditional examples are things like clean air and national defense. My breathing easier or being safer doesn't take away from your breathing easier or being safer. And we all breathe easy; we all benefit from safety.

So, is Twitter a public good? Well, yes and no. Let's start with the 'no.' Depending on how it's used, Twitter is a point-to-point or broadcast communication tool. In that capacity, we could argue that it does not constitute a public good – no more than email or letters do, anyway. If I post a tweet about my breakfast, where is the public value there? Where is the 'good' in the sense of something that can benefit many?

But then again, there is a 'good' there. It's a derivative good, but it's important. When people make their tweets public, they are doing at least two things: first, they are communicating with friends and family (or strangers). This is probably what they were trying to do. But second, they are contributing a bit of information to a collective body of real-time information about what people are doing and thinking about.

This is the power of Twitter trends and Twitter analytics. (See Twitter Search, Twitalyzer, Tinker, Brizzly, or Trendistic, just to name a few…) By aggregating all those tiny bites, we get a public body of information that can tell us a lot about what's going on. If I want to know what people are thinking about and paying attention to today (or at least what the tiny fraction of Americans who Tweet are thinking about), I can use Twitter to find out. And just like any public good, there's a social dilemma there. I read my Twitter stream all the time, but I almost never Tweet myself. I search the stream and look at trends, but I don't add my bites to the stream. I'm a taker, but not a giver. So technically I'm a free-rider.
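
To make that 'derivative good' concrete, here's a toy sketch of the kind of aggregation I mean. To be clear, this has nothing to do with how Twitter actually computes its trends – it just shows how a pile of individual messages, none written for this purpose, adds up to a picture of what people are paying attention to. The tweets below are made up.

    from collections import Counter

    # A handful of made-up public tweets; each was written for friends,
    # not for us, but together they form a body of real-time information.
    tweets = [
        "heading to the farmers market, beautiful morning",
        "anyone else watching the red balloon thing today?",
        "red balloon spotted downtown?? weird",
        "coffee, email, more coffee",
        "DARPA red balloon hunt is on, good luck everyone",
    ]

    STOPWORDS = {"the", "to", "a", "is", "of", "and", "else", "anyone", "more", "on"}

    def trending_terms(tweets, top_n=5):
        """Count word frequencies across all tweets to surface what people are talking about."""
        counts = Counter()
        for tweet in tweets:
            words = {w.strip("?,.!").lower() for w in tweet.split()}
            counts.update(w for w in words if w and w not in STOPWORDS)
        return counts.most_common(top_n)

    print(trending_terms(tweets))
    # [('red', 3), ('balloon', 3), ...] -- the aggregate says more than any single tweet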

But there's another interesting dimension here. If the Twitter public good is a derivative by-product, that means that many or most people are contributing to it without intending to. Like I said, they're just sharing some info about their day. Many of them may not be aware that their info is helping to make Twitter Trends more powerful. So how can we call them free-riders? It's like saying the people who don't edit Wikipedia because they don't know they can are free-riders. From one point of view they are, but the definition of a free-rider requires an informed choice to take advantage of others. So, we'd better look more closely.

Here's another example: Netflix's movie rating system. If I never rate movies, I still benefit from the algorithms that suggest movies I might like, but I'm not putting in my little bites. This may be because I don't even know that this is how the movie recommendation algorithm works. Am I a free-rider, even though I never knew about the derivative product of my ratings? On the other hand, I might rate movies all the time, but only because I like to have a record of how I felt about the movies I've seen. I don't know or care that those ratings get used for anything else.

Anyway, I think there are some interesting issues here. But getting back to the original question – Is Twitter a public good? – I'm going to come down on the side of a definitive 'Yes!'. But there's a lot of interesting thinking and research to do on the question of exactly how it's a public good.

Via Marc Smith's Connected Action blog, I learned about Jenny Preece and Ben Shneiderman's paper in a new journal:

Preece, Jennifer and Shneiderman, Ben (2009). The Reader-to-Leader Framework: Motivating Technology-Mediated Social Participation, AIS Transactions on Human-Computer Interaction (1) 1, pp. 13-32. (link)

Drawing on a huge amount of prior research, the paper develops an interesting model of the progression of participation in online collective action (although they don't call it that). Actually, I would say the references in this paper are almost entirely must-reads for anyone interested in online participation, and the manner in which Preece and Shneiderman go through them is almost like the syllabus for a good course on understanding online participation.

[Image: the Reader-to-Leader framework]

The figure above highlights the paper's main model. I like that the authors include all those arrows to indicate that it's not a step-wise progression from one stage to the next. I think this is a key point. Preece and Shneiderman talk a bit about Lave and Wenger's notion of Legitimate Peripheral Participation. One of the key misconceptions of that work is that it suggests a linear path from periphery to center. But Lave and Wenger go out of their way to argue that although there are some activities that are peripheral (yet legitimate), there are many paths from them toward other types of participation. They also argue that 'central' is not the right idea, since communities are constantly in flux, and suggest 'full participation' as a better term. I think Preece and Shneiderman are on board with all of this.

I am thinking about another way to conceive of this model, one which highlights another key point: progressing in participation usually means supplementing participation with new knowledge, activities, and social interactions, not supplanting the previous forms. A 'leader' on Wikipedia is certainly still a 'reader,' and though she may spend less time fixing typographical errors (as a 'contributor' might) and more time arbitrating disputes, the progression is often about growth rather than substitution. An alternative way of visualizing this progression is below.

[Image: alternative, Venn-style reader-to-leader model]

This isn't a perfect way of looking at it either – more of a straw man. You gain some things and lose some things. One thing lost in this new visualization is the progression of thick green arrows that indicate the path Preece and Shneiderman argue many users follow.

I don't think this alternative way of looking at the progression of participation fundamentally alters Shneiderman and Preece's argument. From one point of view, this is just a quibble about visualization. But actually I think the Venn-style view highlights that reading is a starting point, and that the progression from there goes in many directions. At the same time, deeper forms of participation each share much in common with the others, while adding some new activities as well. For me, even though the more linear style is common for visualizations of conceptual models, it's important that the model not imply separations that might not exist, and that it emphasize that increasing participation is often a process of learning and growth, one which allows deeply embedded participants to experience more and share more with a diverse array of others.

The NY Times is reporting that the English-language Wikipedia will soon move to a "flagged revisions" system, by which edits to articles about living people will have to be approved by a more experienced editor before they appear on the live site. This system has been tested for about a year on the German-language Wikipedia. On that site, an "experienced editor" is someone who has crossed a threshold number of successful edits. There were about 7,500 of them in the German case, and there are likely to be an order of magnitude more in the English Wikipedia.
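
For those who haven't followed the German experiment, the mechanic is roughly this (my own schematic sketch – the threshold and all the names are invented for illustration, not Wikipedia's actual code or policy): an edit to one of the protected articles from an inexperienced editor sits in a pending queue until someone over the experience threshold signs off on it.

    # Schematic sketch of a flagged-revisions workflow. The threshold and field
    # names are made up for illustration; only the general mechanic is real.
    from dataclasses import dataclass, field

    EXPERIENCED_EDIT_THRESHOLD = 300  # hypothetical number of prior accepted edits

    @dataclass
    class Editor:
        name: str
        accepted_edits: int

        def is_experienced(self) -> bool:
            return self.accepted_edits >= EXPERIENCED_EDIT_THRESHOLD

    @dataclass
    class Article:
        title: str
        about_living_person: bool
        live_text: str
        pending: list = field(default_factory=list)

        def submit_edit(self, editor: Editor, new_text: str) -> str:
            # Edits to articles about living people from inexperienced editors are queued.
            if self.about_living_person and not editor.is_experienced():
                self.pending.append((editor.name, new_text))
                return "queued for review"
            self.live_text = new_text
            return "published immediately"

        def approve_pending(self, reviewer: Editor) -> None:
            if not reviewer.is_experienced():
                raise PermissionError("only experienced editors can flag revisions")
            while self.pending:
                _, text = self.pending.pop(0)
                self.live_text = text

    article = Article("Some Living Person", True, "original text")
    newbie = Editor("newbie", accepted_edits=4)
    veteran = Editor("veteran", accepted_edits=5000)
    print(article.submit_edit(newbie, "a well-meaning correction"))  # queued for review
    article.approve_pending(veteran)
    print(article.live_text)  # the correction, now live

The point of the sketch is the delay: the newcomer's contribution is invisible until a member of the experienced class acts on it.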

The NY Times article notes that:

Although Wikipedia has prevented anonymous users from creating new articles for several years now, the new flagging system crosses a psychological Rubicon. It will divide Wikipedia’s contributors into two classes — experienced, trusted editors, and everyone else — altering Wikipedia’s implicit notion that everyone has an equal right to edit entries.

In reality, those classes have been present for some time now. As part of my dissertation research, I've been interviewing less experienced Wikipedians about their perceptions of the site. One constant theme has been the perception of a class system in Wikipedia. Casual editors worry that their edits aren't good enough, and that they'll be rebuked by Wikipedia's upper classes. They perceive a mystical group of higher-order contributors who make Wikipedia work. They believe that the barrier to entry is high and that they don't know enough about how the system works even to make small edits. Partly I think this is a function of the increasing complexity of the Wikipedia system. Partly it's because of Wikipedia's increasing stature – less experienced users feel the consequences of their actions more keenly when so many millions read the site each day.

I also think classism is something that Wikipedia's heavy-editor community actively cultivates. The NY Times notes the work of Ed Chi at PARC. Ed and his colleagues have done some really interesting work. Among other things, they've noticed a trend toward resistance to new content. In a recent paper presented at GROUP, Tony Lam and his colleagues found that the rate of article deletions is growing, and that most articles are deleted shortly after they are created. Wikipedia has a core of frequent editors who zealously guard their territory, sometimes actively discouraging newcomers and enforcing complicated and arcane policies in ways that can reduce new participation. The ideology of Wikipedia is a level playing field in which everyone has a voice, but the practice is often far from that ideal.

This latest move is troubling in that it seems to represent a lack of faith in crowdsourcing and the wisdom of crowds, in the model that made Wikipedia what it is today. This change will also remove another of the important social-psychological incentives that draw new people into the Wikipedia fold: the instant gratification that comes from seeing your work reflected on a Wikipedia page. There will certainly be many papers written on the before-after comparison, and I suspect we'll see significant changes in the dynamics of the site, at least for the pages that will see this change.

I have been re-reading Steve Weber's great book:

Weber, Steven. 2004. The Success of Open Source. Harvard University Press.

…and I wanted to share this wonderful passage:

…theorizing about collective action is not a matter of trying to decide whether behavior can be labeled as 'rational.' It is a matter of understanding first and foremost under what conditions individuals find that the benefits of participation exceed the costs. This means, of course, understanding a great deal about how people assess costs and benefits.

Weber nicely captures something that I have been thinking about for a long time: the notion of 'rational' can sometimes be a cop-out, a too-convenient shortcut that sidesteps the messy (but necessary) work of understanding the dispositions, attitudes, and social conditions that influence decision-making in specific contexts.

My dissertation research is all about motivation – why do people participate in online collective action? What are they getting out of it? As I've written about before, just because people aren't working for money doesn't mean they're working for free.

Anyway, when you study motivation, you think a lot about how to measure it. Recently I've been seeing some studies that measure motivation with survey questions. In these studies, the researchers come up with a list of potential motivations and ask participants to respond, usually in the form of Likert-style agreement statements. There are several excellent examples of interesting papers that use variations of this method, for example:

Oreg, Shaul, and Oded Nov. 2008. “Exploring motivations for contributing to open source initiatives: The roles of contribution context and personal values.” Comput. Hum. Behav. 24:2055-2073.

Kuznetsov, Stacey. 2006. “Motivations of contributors to Wikipedia.” SIGCAS Comput. Soc. 36:1.

Using surveys to assess motivation has many benefits. Principal among them, I'd say, is that surveys allow us to collect a large amount of data quickly. And studies such as the ones I cite above certainly provide valuable insights. However, surveys are limited in their ability to tell us about motivation. From my point of view there are at least two big challenges to measuring motivations (or anything, for that matter) with surveys:

  1. For many people – even most – motivations are soft or implicit attitudes. When you ask someone to respond to a direct question about something they are not used to thinking directly about, they may be forced to take a position on an issue they do not feel strongly about (their attitude is 'soft') or, more problematically, they may act on an attitude that they are unaware of or cannot express (their attitude is implicit). In either case, surveys that tick off motivations and ask participants to agree / disagree can sometimes come up with noisy data at best, and downright wrong data at worst.
  2. When it comes to motivation, social desirability is a big problem. When you ask people why they contribute to Wikipedia there are norms and expectations operating that mean fewer people will probably say they contribute to gain fame, and more people will say they contribute to do something good for the world. In the grand scheme of surveys, asking about motivation for online participation may be less susceptible to social desirability than other controversial issues – say, gay marriage. But still, it means survey data can only take us so far.

As I have been reading about the recent dust-up around LinkedIn's plan to crowdsource its site translation, I've been thinking about what other methods could supplement and complement survey data. Here's how LinkedIn asked its members what they would get out of the translation project:

Hmmm. Interesting and useful, but only to a point. How else should we be investigating this?

One great alternative is interviews, which help to mitigate both of the issues above. When you get people talking about their lives, you often find through rigorous analysis that there are consistent themes and patterns in the narratives that participants may not even be aware of. While social desirability is still an issue, part of the theory of qualitative work is that when you develop rapport with a participant, when you're non-judgmental and open, you find that they will tell you all sorts of things. Here's a great example of interview-based research on incentives of online participation:

Ames, Morgan, and Mor Naaman. 2007. “Why we tag: motivations for annotation in mobile and online media.” Pp. 971-980 in Proceedings of the SIGCHI conference on Human factors in computing systems. San Jose, California, USA: ACM.

In social psychology, the gold standard in motivation research is a truly behavioral measure. This is why experiments are so great. If we manipulate one factor and then look at some measure of contributions or their characteristics as evidence of motivations, we get a view of motivation that's largely free of the problems I outlined above. This type of research isn't always practical, but it isn't all that uncommon in industry either. To find out what motivates users, companies often try out a new incentive program with a subset of users, then compare the results to the old way. I'm not sure LinkedIn had the luxury to do this, though, and it can get tedious, especially in cases where there are many incentives and motivations at work, as is the case when we talk about online participation.

Sticking with surveys, there is some really interesting methodological innovation aimed at combating the social desirability problem. It's called the List Experiment. (For an overview of the method, check out this paper.) The basic idea is this: we distribute a survey that has 3 statements about issues that people may or may not agree with. The question is not which statements you agree with, but how many. For a random subset of our sample, we also add a 4th statement about the controversial issue that we expect has a social desirability problem. The difference in the average number of statements agreed with between the people who got 3 items and the people who got 4 estimates the proportion of people who agree with that 4th statement. And because we never asked anyone to say which statements they agree with, we mitigate the problem of social pressure.
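
Here's a quick simulated sketch of the estimator, since the logic is easier to see in code than in prose. The numbers are invented; the point is just that the difference in mean counts between the two randomized groups recovers the prevalence of the sensitive item without anyone ever saying which statements they agreed with.

    import random

    random.seed(7)

    TRUE_AGREEMENT = 0.30          # invented: 30% of respondents actually agree with the sensitive item
    P_BASELINE = [0.6, 0.2, 0.5]   # invented probabilities of agreeing with each baseline statement

    def respondent(treatment: bool) -> int:
        """Return the COUNT of statements this respondent agrees with (never which ones)."""
        count = sum(random.random() < p for p in P_BASELINE)
        if treatment and random.random() < TRUE_AGREEMENT:
            count += 1  # the sensitive 4th statement
        return count

    control   = [respondent(False) for _ in range(5000)]  # got 3 statements
    treatment = [respondent(True)  for _ in range(5000)]  # got 3 + the sensitive one

    estimate = sum(treatment) / len(treatment) - sum(control) / len(control)
    print(f"Estimated agreement with the sensitive item: {estimate:.2f}")  # roughly 0.30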

This is not yet a widely used method, but in studies about hot-button issues like racism, list experiments have turned up very different results than the traditional direct-question method. In the paper linked above, for example, the author investigated attitudes about immigration and found a difference of more than 20 percentage points between the traditional method and the list experiment. Far more people supported cutting off immigration to the US than we thought.

Anyway, this has now turned into a very long post, and I have no blockbuster conclusion. In summary, I think assessing motivation with Likert-style questions is interesting, valuable, and important. However, it's subject to some important limitations – just like any method is. The best solution is a mixed-methods approach. Interviews, surveys, experiments. I'm sure I'll be thinking about this issue for a long time, and I think there is an opportunity here for some real methodological creativity.

I just read through Oded Nov's paper from Communications of the ACM:

Nov, O. (2007). What motivates Wikipedians? Commun. ACM, 50(11), 60-64. (link)

Two things occur to me. First, Nov explains away the potential influence of social desirability in about two sentences, but I'm not buying it. When you ask people why they do something, there's a huge number of social factors that come into play. In the case of Wikipedia, I also think there are likely to be lots of soft and implicit attitudes. Soft attitudes are expressions that don't reflect beliefs, but rather answers to questions someone might not have thought about previously. For example, if I asked you "How do you feel about Kobe Bryant elbowing Ron Artest in the neck last night?", you might respond by saying it's abhorrent. If I took that at face value, I'd be ignoring the fact that many people don't follow basketball, don't know who Bryant or Artest are, don't know the context, or don't care. Implicit attitudes, on the other hand, are attitudes that we hold and act on but can't express. To me, neither of these things makes survey research of this type invalid – I do similar surveys myself! But they're important issues, too often left out of discussions.

The second issue, maybe more important, is about scope. There are a fair number of studies now about motivations for contributing to various online collective actions, but they almost always focus on people who contribute a lot. These papers, like Nov's, usually don't make that distinction; they make claims about the motivations of all contributors. In reality, the motivations of casual or infrequent contributors are likely to be very, very different. Harder to study, though! By studying the heavy contributors we capture the motivations behind the majority of the work that gets done, but we do so at the expense of attention to the vast majority of people who contribute.

In sum: Social desirability, soft attitudes, etc. need more consideration when we talk about motivation. Studies that focus on heavy contributors should say as much, and more studies should look at casual contributors' motivations.
