Thu 13 Nov 2008
I just got back from CSCW 2008 in San Diego, which was wonderful, relevant, and thought provoking. Among the papers that most piqued my interest was one by Aniket Kittur, Bongwon Suh, and Ed Chi called 'Can You Ever Trust a Wiki? Impacting Trustworthiness in Wikipedia'. The authors conduct a really interesting study to examine how much a visualization that presents factors related to 'trust' can push around a participant's perception of the 'trustworthiness' of a Wikipedia article. Ultimately they find that the visualization can, in fact, push perceptions in both directions.
Now, this is a really neat paper, and it won the best note award this year. But you might be able to tell from the title and the scare quotes above what my problem with it is. This is a paper that's all about trust, but its treatment of trust is completely insufficient and uncritical, and that confounds what would otherwise be a neat study. We can break the problem down into two big issues:
- The authors do a poor job of defining what trust and trustworthiness mean.
- We have no way to interpret what participants thought 'trust' and 'trustworthiness' meant.
For a paper that has the word trust in the title, the authors spend remarkably few words discussing what it means. And it's not as if there's been some magical clarity about its definition that allows us to take the idea of trust for granted. If you look at the huge body of literature on trust in HCI (which I did while I was at Yahoo! this past summer), you find enormous variety in definitions of trust. As evidence, note how many papers about trust begin with long sections of literature review and analysis just to make sense of what trust means. This paper needed that. As it is, the authors use the following definition from Fogg and Tseng (1999):
"…[trust is] a positive belief about the perceived reliability of, dependability of, and confidence in a person, object, or process."
Worst. Definition. Evah. The only part I'm on board with is that trust is positive. Otherwise, this definition legitimizes the colloquial usage of trust by lumping many, many distinct ideas together: interpersonal trust, credibility, reliability, security, privacy, and confidence about people, things, and systems. Wow. That's not a definition, it's more like 12. Ignoring the definition of trust isn't unique to this paper though. Many in the HCI community seem blissfully ignorant of the problems around trust, and completely unwilling to unpack it. But we have to unpack it if we want to say anything meaningful. Otherwise, we don't even know what we're talking about. Otherwise my feeling about the factual accuracy of a sentence on Wikipedia is the same as my feeling about whether the seller on eBay is going to come through with the genuine article. But even worse:
Whatever definition Kittur et al. chose, it would have been fine if they'd communicated it to participants and asked them to respond to it. As it is, they just asked participants to make judgments about 'trustworthiness' without providing any clarity, so we have no idea what that construct means, how participants interpreted it, or how its interpretation differed across participants (I'd bet it differed a lot!). Now, you might say (and I'd agree) that people tend to define trust in their own way. That's great, true IMHO, and something we need to investigate and deconstruct soon.
But if you're going to make a scale, you'd better be clear about what that scale represents, or else your reliability and validity are completely shot. As it is, the best we can say about these results is that providing visual feedback on Wikipedia articles was correlated with changes in participants' responses to a scale measuring unknown perceptions related to interpersonal trust, credibility, reliability, and who knows what else. In that respect, the note is a great first step toward showing that some forms of feedback can influence perceptions. But the authors missed a huge opportunity to say much more. Hopefully this is on tap for future work.