The Tragedy of the Comments – Part II: In search of solutions

"The same equations have the same solutions." – Richard Feynman (Physics Lectures Vol. 2, Chapter 12)

"In theory, there’s no difference between theory and practice. In practice, there is." – Yogi Berra

Bad behavior has been around in digital commons for 40 years. The fact that Anil Dash and Nick Denton talked about it this year at South by Southwest demonstrates that, despite everyone’s efforts, the problem remains unsolved. Comments (and digital commons) still suffer from trolls, snarky behavior, bozos, and a general lack of quality at scale.

In a previous post, I showed how one can create a solid analogy between the classic Tragedy of the Commons and what happens in online discussions (or in any digital commons). With this groundwork, the natural question to ask is whether we can leverage real-world solutions to help find ones in the digital world.

Real-world commons can work

It’s probably not surprising that a huge body of work has been devoted to studying this problem, with many people severely criticizing Garrett Hardin’s original work on the Tragedy of the Commons and the solutions he put forward (privatization, and centralized government control and regulation).

Subsequent Nobel-prize-winning work by Elinor Ostrom showed that the commons problem may not be as difficult to combat as Hardin implied, and that the proposed solutions of privatization and centralized control can often backfire. Seminal work by Ostrom and Yochai Benkler showed that real-world commons-based systems can actually work well with a simple set of conditions. The conditions they outlined are:

  1. Mutually visible action among participants
  2. Credible commitment to shared goals
  3. Group members’ ability to punish infractions

Let’s see if these conditions can apply in the world of digital commons.

1. Mutually visible action among participants

Mutually visible action in real-world commons establishes a level of accountability, which allows for punishment of infractions (Condition 3 above). In other words, if you act like a jerk, and everyone knows, you pay the price. If you behave badly, but no one knows it was you, punishment is more difficult.

Unfortunately, there’s an attribution problem in digital commons, given the slippery nature of identity discussed in the previous post. In situations where identity is masked, accountability gets attached to avatars, not people, which takes us away from the whole point of this condition (the ability to punish or praise based on identity).

There’s a seemingly simple solution to this problem: eliminate anonymity. The most visible recent example of this approach is the use of Facebook comments, where someone has to log in via Facebook to leave a comment. Facebook identities are much harder to fake, so forcing people to use their FB identity is a good way to pull away the mask of the avatar. TechCrunch was one of the early adopters of this approach. As one might expect, the switch to FB comments led to a significant reduction in comment volume (one analysis showed a 42% reduction in comments across all posts). The whole topic of Facebook comments, identity and comment streams has been discussed at length (see this Quora thread on TechCrunch and FB comments for an example).

So removing anonymity is the solution, right? Maybe not. There are a few big problems:

  • Lack of anonymity limits discussion: Some people feel more comfortable speaking freely when they can be anonymous. In fact, this is critical in areas where identity disclosure can threaten the person expressing their opinion (e.g., activists in repressive countries, victims of domestic violence). Take away anonymity, and you lose the power of online discussion forums that thrive precisely because people can be anonymous.
  • Fluid digital identity enables good things: In the online world, there is a fluidity to identity that allows for a much richer set of interactions (related to the previous point). Three levels of digital identity exist that make this possible: real-world identity, experientially known identity (i.e., in certain online contexts or communities), and what one might call "nonce pseudonymity" (where people can leave one-off anonymous comments). Each of these types of identity has a place in the world of online discussion, and to eliminate the latter two would wipe out some of those good things (see Chris Poole’s insights about identity for more).
  • No universal solution for digital identity exists: Facebook is just one system, and they use their own form of identification and authentication. Other services (like Twitter) use their own. There’s no universally accepted system of federated identity (yet) and no set of standards that allow for the connection between a real human being and a digital avatar. This problem only gets worse when one thinks of trying to establish globally valid digital identity systems (since we want people anywhere to be able to participate in a commons, in principle).
  • Most systems can be gamed: It’s harder to set up multiple accounts on Facebook than it is elsewhere, but a determined enough troll can do it. This is also true of just about any online system with registered users, because there’s no corresponding universally accepted biometric control to connect a real person with an online account.
  • Not everyone uses the same services: Lots of people don’t use Facebook (or whatever other social network you might try to use for authentication and identity). By definition, then, you’re closing the commons off to people who don’t use that system. A truly open commons would have to be based on an as-yet-nonexistent universal system of digital identity.
  • Not all systems are allowed in the workplace: Some workplaces ban the use of Facebook, which means that Facebook comments aren’t necessarily visible to everyone. The same would be true of other social networks that are banned. While this may seem a minor problem, it does impact the use of digital commons.
  • Identity is no guarantee: Some people will say stupid, thoughtless and hateful things whether or not anyone knows who they are. It happens in the real world, and it will happen online. The only way to stop this kind of behavior is through other means (e.g., punishment).

2. Credible commitment to shared goals

While commitment to shared goals makes a lot of sense in real-world commons, in the digital world, it’s a pipe dream at best, and a joke at worst (especially at scale). There’s just no way to create a shared set of goals for millions of people who might access a popular Web site or discussion forum. Imagine trying to get everyone reading TMZ.com to agree to a set of shared goals for civil discourse on the site. Not gonna happen.

3. Group members’ ability to punish infractions

Punishment is the most commonly used way to stop bad behavior in the real world, and this is the approach taken by many commenting systems and discussion forums. In a digital commons, punishment could take a few forms, each with its own benefits and drawbacks:

  • Expulsion: Behave badly, get banned. Pretty simple, right? Maybe not. The problem is, who makes the call as to who gets banned? Would this be the job of a moderator, or could group members actually do it? Once banned, what’s to stop someone from adopting a new identity and joining again? People have been banning trolls for years, but they keep popping up. It’s the digital analog of whack-a-mole. Expulsion is expedient, but it doesn’t work well.
  • Community rating systems: It’s pretty easy to imagine a "karma" system where everyone starts with a neutral rating in a digital commons, and participants can vote to adjust someone’s rating up or down (a minimal sketch of this idea follows this list). Bad ratings could then potentially limit the ability of someone to participate in the forum (or lead to expulsion as a worst-case scenario), whereas good ratings might lead to an upside. Two examples of this type of rating approach are eBay (with its internal seller reputation system) and Amazon (with its system to promote top reviewers). These systems seem promising, and work in many cases, but there are a few potential issues. First, there’s no way to regulate mob behavior (i.e., people ganging up on someone whose opinions they may not like). Second, without standards, these ratings wouldn’t be portable across commons. Finally, ratings systems are subject to being gamed.
  • Gamification: This approach is similar in spirit to community rating systems: incentivize people to behave well, but do it with explicit virtual rewards. Unfortunately, it’s not clear that it works. Denton described Gawker’s efforts to do precisely this at SXSW and called it a total failure, claiming that the game-oriented incentives weren’t enough to deter bad behavior (or encourage good).
  • Moderation and exclusion: The simplest form of punishment would be to have moderators exclude bad comments on a case-by-case basis. This again raises the question: Who makes the decision about which comments get included, and which don’t? And how could this kind of moderation possibly work at scale?
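
To make the karma idea above concrete, here is a minimal sketch in Python. Everything in it (the KarmaTracker class, the vote and can_participate methods, and the thresholds) is a hypothetical illustration, not how eBay, Amazon, or any real commenting system actually works.

```python
# Minimal sketch of a community karma system: everyone starts neutral,
# votes from other participants move a score up or down, and low scores
# limit participation or trigger expulsion. Names and thresholds are
# hypothetical illustrations.
from collections import defaultdict

class KarmaTracker:
    def __init__(self, ban_threshold=-10, limit_threshold=-3):
        self.scores = defaultdict(int)          # user_id -> karma; default 0 is "neutral"
        self.ban_threshold = ban_threshold      # at or below this, expulsion (worst case)
        self.limit_threshold = limit_threshold  # at or below this, participation is limited

    def vote(self, voter_id, target_id, delta):
        """Record a single up or down vote against another participant."""
        if voter_id == target_id:
            raise ValueError("participants cannot vote on themselves")
        if delta not in (-1, 1):
            raise ValueError("a vote adjusts a rating by exactly one step")
        self.scores[target_id] += delta

    def can_participate(self, user_id):
        """False once karma has fallen to the expulsion threshold."""
        return self.scores[user_id] > self.ban_threshold

    def is_limited(self, user_id):
        """True while karma sits in the limited-participation band."""
        return self.ban_threshold < self.scores[user_id] <= self.limit_threshold


# Usage: four downvotes put a participant into the limited band, but not past expulsion.
karma = KarmaTracker()
for voter in ("alice", "bob", "carol", "dave"):
    karma.vote(voter, "troll42", -1)
print(karma.is_limited("troll42"))       # True
print(karma.can_participate("troll42"))  # True (still above the ban threshold)
```

Even in this toy form the weaknesses noted above are visible: nothing prevents a coordinated group from voting someone down, the scores live inside a single tracker and are not portable across commons, and a banned user can simply register a fresh identity to reset them.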

None of these forms of punishment is binding, and they actually all suffer at scale. The reality is that if someone really wants to participate in a digital commons open to the public, they will find a way, regardless of punishments associated with bad behavior.

A few more words about moderation

One obvious solution to the whole "Tragedy of the comments" situation is moderation. Moderate every comment in a forum, comment thread, or what have you; only publish the ones that are on topic and civil. Unfortunately, moderation has a few significant potential problems:

  • Selection bias: Moderators may have a point of view, which could lead them to exclude comments that run counter to that point of view. The prime example would be an author moderating the comments on a post they had written. While they are (in theory) in the best position to moderate in terms of subject-matter expertise, they could be predisposed to suppress comments that run counter to their opinion.
  • Scale: High-traffic web sites get hundreds of comments per post, and have multiple posts live at any given time. It would take an army of moderators to manage the flood of incoming comment traffic. It’s nearly impossible to support moderation in these circumstances.
  • Lack of objective standards: Which comments are in, and which are out? What’s the standard for inclusion? This question is related to the notion of selection bias, though slightly different. Imagine someone who leaves a joking comment that’s funny, but not really on topic. Does this get published, or not?

The bottom line: Imperfect solutions for a messy, beautiful digital world

The three conditions for resolving real-world commons problems have questionable applicability when it comes to digital commons. Issues of digital identity complicate things significantly, mutually shared goals are unrealistic, and the problems associated with punishment at scale can’t be ignored. So should we just throw our hands up in the air and admit defeat? Of course not.

Here’s my approach to combat the Tragedy of the Comments:

  1. Make a choice – comments or no comments: Comments are optional for anything but a forum site. Many well-known bloggers opt to forgo comments (e.g., Dave Winer). Sites must make a decision one way or another about whether they’re willing to accept the good and the bad of online discourse.
  2. If comments are allowed:
    • Moderate with guidelines and a system: Use a comment system like Disqus or Livefyre, and moderate based on a set of clearly articulated guidelines about what’s in and what’s out (a rough sketch of guideline-driven moderation follows this list).
    • Keep identity fluid: Avoid requiring people to share their real identities (e.g., by using Facebook comments).
    • Ban trolls, reward heroes: Pursue the best combination possible of banning obvious trolls and rewarding people who add to the discussion (e.g., using karma systems).
    • Expect lower average quality and discord: It’s impossible to enforce quality in any objective way. Expect that some comments will add to the discussion, some will be noise, and some will create discord through differing opinions. The line between discord and offense should be governed by the guidelines around moderation.
    • Evolve: Nothing in the digital world is static. Approaches to identity change, norms change. Stay on top of it, and evolve the way you approach comments based on prevailing best practices and norms. And if you think you’ve got a better way of doing things, do it and tell people about it.
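
As a rough sketch of what guideline-driven moderation might look like in code, here is a simple Python filter. The guidelines themselves (on_topic, civil_tone, the blocklist) are hypothetical placeholders; a real site would define and publish its own.

```python
# Rough sketch of guideline-driven moderation: each incoming comment is
# checked against explicit, published guidelines before it is shown.
# The specific guidelines here are hypothetical examples.

def civil_tone(comment, banned_phrases=("idiot", "moron")):
    """Hypothetical guideline: the comment avoids an explicit, published blocklist."""
    text = comment["body"].lower()
    return not any(phrase in text for phrase in banned_phrases)

def on_topic(comment, post_tags):
    """Hypothetical guideline: the comment mentions at least one of the post's topics."""
    text = comment["body"].lower()
    return any(tag.lower() in text for tag in post_tags)

def moderate(comments, post_tags):
    """Split comments into published and held, recording which guideline failed."""
    published, held = [], []
    for comment in comments:
        if not civil_tone(comment):
            held.append((comment, "civility guideline"))
        elif not on_topic(comment, post_tags):
            held.append((comment, "relevance guideline"))
        else:
            published.append(comment)
    return published, held

# Usage with two toy comments on a post tagged "commons" and "moderation".
comments = [
    {"author": "alice", "body": "The commons analogy holds up surprisingly well."},
    {"author": "troll42", "body": "Only an idiot would write this."},
]
published, held = moderate(comments, post_tags=["commons", "moderation"])
print(len(published), len(held))  # 1 1
```

The code only mechanizes guidelines that a human wrote down; the selection-bias and scale problems described earlier don’t go away, and anything flagged by rules like these would still need a person to review it.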

At the end of the day, the Tragedy of the Comments is a reality spawned by the nature of the digital world. We need to embrace this reality and realize the best we can hope for are imperfect solutions. The online world of comments and discourse will always be messy, but some beautiful things are born from that chaos.

Further reading

I came across so many insightful articles trying to pull this post together that I wanted to share them. Dip your toes in this pool and suddenly you discover it’s miles deep:
