Measuring the leaderboard effect

Since Techmeme, the headline aggregation site for technology news, introduced its Leaderboard, which lists its Top 100 contributors as measured by the percentage of overall headline space they accounted for over the preceding 30 days, there’s been a lot of speculation about its possible effects. Would the list further concentrate traffic at the site, or would it have the opposite effect? Would the link-rich get richer, or would the Leaderboard spread the wealth a bit?

Now that a couple of weeks have gone by, we have a little hard data on the question. In the first Leaderboard (LB), published on October 1, the Top 100 contributors accounted for fully 72.34 percent of total headline space, and the Top 25 contributors accounted for a sizable 43.87 percent. This reveals that Techmeme is a highly inbred site, with a fairly small number of sites responsible for the bulk of the real estate. Techmeme’s “long tail,” while it certainly exists, has relatively low importance. Now, if we look at the current LB (October 16), we find that traffic has become even more concentrated, with the Top 100 accounting for 73.95 percent of headline space and the Top 25 accounting for 45.55 percent.
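The share-of-the-top-k measure used above is easy to make concrete. Here is a minimal sketch of the computation, using invented per-source headline-space counts rather than Techmeme's actual data:

```python
def top_k_share(counts, k):
    """Fraction of total headline space attributable to the k largest contributors."""
    ranked = sorted(counts, reverse=True)
    return sum(ranked[:k]) / sum(ranked)

# Invented counts for ten hypothetical sources:
counts = [40, 25, 12, 8, 5, 4, 3, 1, 1, 1]
print(top_k_share(counts, 3))  # → 0.77: the top 3 sources hold 77% of the space
```

Comparing this number across successive 30-day windows, as the post does with the October 1 and October 16 Leaderboards, is what turns the snapshot into a trend.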

This is a small change, and it could, of course, be a fluke (it’s worth remembering that the two 30-day measurement windows overlap by about 15 days). But it also could indicate the existence of what might be called a Leaderboard Effect: the existence of a list of top contributors to a social network will tend to further concentrate traffic among those contributors. It will be interesting to see whether the concentration continues. (I’ll leave it to others to crunch the numbers in the future.)

“Inbred” is intended to be a value-neutral term here. In some social networks, inbreeding may increase the value to users, while in others it may decrease the value to users. What’s of interest is that, depending on their goals, the operators of social networks can influence the intensity of inbreeding through their rules and algorithms as well as through the introduction and promotion of various information-navigation tools. Techmeme’s leaderboard provides, from this standpoint, a useful, real-time experiment in the dynamics of traffic concentration.

NOTE: The October 16 version of the LB as linked to above may not precisely match the one I looked at earlier today, so the numbers may vary slightly.

One thought on “Measuring the leaderboard effect”

  1. Bertil

    There have been many interesting conversations at Chris Anderson’s blog, The Long Tail, about how to measure such effects: whether you measure the contribution share of the top 25 or of the top 25% of all authors isn’t really the relevant question. You often don’t even have a clear unit: the star blogs have several writers, who may or may not sign their posts publicly. Should you count the URL or the person?

    I like the power-law argument, when it fits the data: on a log-log graph of success against rank, you often see a straight line running from the stars down through the long tail, and then a drop-off before the publishers not supported by the current system. Anderson’s initial argument was that this drop was moving further down the curve. The exponent (the slope of the line) and the position of the drop make more sense to me as measures: you are not framing the issue by choosing an arbitrary cutoff.

    That way you avoid the question of whether concentration should be measured over the 10, the 1,000, or the 100,000 most influential blogs. I agree that the Leaderboard made this choice for us, but a bigger picture might help us understand its impact.

    A growing number of academic papers try to address this issue, but none has proved conclusive so far. Whether the outcome is socially optimal (lip-syncing teenagers, anyone?) is yet another question. An interesting data point from Amazon is that total sales seem to be concentrating (a steeper slope) while individual baskets show more diversity.
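The slope measure the comment describes can be sketched in a few lines: rank the contributors, plot log(success) against log(rank), and fit a line. This is an illustration with synthetic data, not a fit to Techmeme or Amazon numbers:

```python
import math

def loglog_slope(values):
    """Least-squares slope of log(value) against log(rank), ranks 1..n.

    For data following a power law value ~ rank**(-a), the slope is -a.
    """
    pts = [(math.log(r), math.log(v))
           for r, v in enumerate(sorted(values, reverse=True), start=1)]
    n = len(pts)
    mean_x = sum(x for x, _ in pts) / n
    mean_y = sum(y for _, y in pts) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in pts)
    den = sum((x - mean_x) ** 2 for x, _ in pts)
    return num / den

# A perfect power law value = rank**-1 has slope exactly -1:
print(round(loglog_slope([1 / r for r in range(1, 101)]), 2))  # → -1.0
```

A steepening slope over time would indicate growing concentration among the stars, which is the scale-free version of the Top-25/Top-100 share comparison in the post.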
