News Release

For music, social-media marketing doesn't trump quality

MIT researchers revisit data from a seminal online experiment but draw much more encouraging conclusions

Peer-Reviewed Publication

Massachusetts Institute of Technology

In 2004, a trio of researchers at Columbia University began an online experiment in social-media marketing, creating nine versions of a music-download site that presented the same group of unknown songs in different ways. The goal of the experiment was to gauge the effect of early peer recommendations on the songs' success; the researchers found that different songs became hits on the different sites and that the variation was unpredictable.

"It's natural to believe that successful songs, movies, books and artists are somehow 'better,'" one of the researchers wrote in The New York Times in 2007. "What our results suggest, however, is that because what people like depends on what they think other people like, what the market 'wants' at any point in time can depend very sensitively on its own history."

But for music fans who would like to think that talent is ultimately rewarded, the situation may not be as dire as the Columbia study makes it seem. In a paper published in the online journal PLoS ONE, researchers from the MIT Media Laboratory's Human Dynamics Lab revisit data from the original experiment and suggest that it contains a clear quantitative indicator of quality, one that is consistent across all the sites. Moreover, they find that the unpredictability of the experimental results may have had as much to do with the way the test sites were organized as with social influence.

Numbers game

In their analysis, Alex "Sandy" Pentland, the Toshiba Professor of Media Arts and Sciences; his graduate students Coco Krumme, first author on the new paper, and Galen Pickard; and Manuel Cebrian, a former postdoc at the Media Lab, developed a mathematical model that, while simple, predicts the experimental results with high accuracy. The model divides the decision to download a song into two stages: first, the decision to play a sample of the song, and second, the decision to download it or not. The researchers found that the percentage of listeners who would download a given song after sampling it was consistent across sites. The difference in download totals was due entirely to the first stage, the decision to sample a song in the first place.
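
In rough terms, the two-stage structure can be sketched as follows. This is only an illustration of the idea the release describes: the song names, the quality values, and the position-based sampling curve are all made-up assumptions, not the model or fitted parameters from the PLoS ONE paper.

    # A minimal sketch, in Python, of a two-stage download decision.
    # All numbers here are illustrative assumptions.
    import random

    # Stage-2 "quality": the chance that a listener who samples a song
    # goes on to download it. Per the MIT analysis, this figure is
    # roughly constant for a given song across all nine sites.
    QUALITY = {"song_a": 0.30, "song_b": 0.15, "song_c": 0.05}

    def sample_probability(position):
        """Stage 1: chance a visitor plays a sample at all.

        Assumed to fall off with on-screen position; the
        1 / (position + 1) form is an illustrative choice, not the
        paper's fitted curve.
        """
        return 1.0 / (position + 1)

    def visit(ordering):
        """One visitor scans the song list in display order."""
        downloads = []
        for position, song in enumerate(ordering):
            if random.random() < sample_probability(position):
                if random.random() < QUALITY[song]:  # stage 2
                    downloads.append(song)
        return downloads

On this picture, two sites that list the same songs in different orders will rack up very different download totals even though each song's stage-2 probability never changes.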

And that decision, the researchers concluded, had only an indirect relationship to the songs' popularity. In the original experiment, one of the sites was a control, while the other eight showed visitors information about the songs' popularity, measured by total number of downloads. But on those eight sites, the number of downloads also determined the order in which the songs were displayed. The MIT researchers' analysis suggests that song ordering may have had as much to do with the unpredictability across sites as the popularity information did.

"We've known forever that people are lazy, and they'll pick the songs on the top," Pentland says. "There's all this hype about new-age marketing and social-media marketing. Actually, it comes down to just the stuff that they did in 1904 in a country store: They put certain things up front so you'd see them."

Quality, not quantity

In their work, the MIT researchers interpret the likelihood that sampling a song will result in its being downloaded as a measure of quality. Since that measure was consistent across sites, using it, rather than volume of downloads, to order song listings would probably mitigate some of the unpredictability that the Columbia researchers found.

Even on sites where the number of downloads determines song ordering, high-quality songs will gradually creep up the rankings, because, by definition, they net more downloads per sample than low-quality songs do. But "it does take a long time for the market to fully equilibrate," Krumme says. "Precisely how long it would take for the highest-quality songs to rise to the top depends on the specifics of a particular market."
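
Krumme's point about slow equilibration can be illustrated with a toy simulation of the kind of market the release describes: songs are listed most-downloaded first, visitors sample mostly from the top of the list, and downloads per sample track a fixed quality. Everything here, from the quality values to the sampling curve, is an assumption for illustration, not the paper's model.

    # Toy market: display order is driven by cumulative downloads,
    # as on the eight experimental sites. The lowest-quality song
    # starts at the top of the list.
    import random

    random.seed(1)

    QUALITY = {"song_a": 0.05, "song_b": 0.10, "song_c": 0.20,
               "song_d": 0.30, "song_e": 0.40}
    downloads = {song: 0 for song in QUALITY}

    for visitor in range(1, 50001):
        # Most-downloaded first; ties keep their previous order
        # because Python's sorted() is stable.
        ordering = sorted(downloads, key=downloads.get, reverse=True)
        for position, song in enumerate(ordering):
            if random.random() < 1.0 / (position + 1):   # stage 1
                if random.random() < QUALITY[song]:      # stage 2
                    downloads[song] += 1
        if visitor in (100, 1000, 50000):
            print(visitor, sorted(downloads, key=downloads.get,
                                  reverse=True))

With these toy numbers the download-ordered list tends to drift toward the quality ordering as visitors accumulate, but how quickly depends entirely on the assumed sampling curve, which is the spirit of Krumme's caution that the time to equilibrate hinges on the specifics of the market.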

###

