Alan Wurtzel’s Editorial in the Q3 Issue of the Journal of Advertising Research

More good metrics reading from the JAR: After my prior posting on metrics articles in the Q4 Issue of the Journal of Advertising Research, it occurred to me that I did not mention the editorial that NBC’s Alan Wurtzel wrote in the Q3 issue…

"Now. Or Never – An Urgent Call to Action for Consensus on New Media Metrics" by Alan Wurtzel, President of Research and Media Development at NBC Universal
In this editorial, Alan Wurtzel lays out what he believes is a critical juncture for the measurement of new media. He sums it up this way: "You can't sell what you can't measure, and, unfortunately, our measurement systems are not keeping up with either technology or consumer behavior." The problem isn't a lack of data – samples are getting bigger and precision is improving. The problem is that technical challenges make it hard to assess the validity of the measurements. Without precise, transparent definitions of how data are gathered and how metrics are calculated from them, Wurtzel says, programmers cannot depend on the numbers as a basis for decision-making. And proprietary considerations are holding vendors back from providing this level of visibility into their processes.

Wurtzel cites a case – quoted by many sources last fall – in which NBCU purchased viewership data for the "Heroes" finale from several different set-top box (STB) vendors and compared the results. The gap between the highest and lowest measurement of the show's rating was 6%, which translates into a $400,000 difference in revenue. And while 6% may sound small, a show rated as highly as "Heroes" should show relatively low measurement variation, meaning the variation for lower-rated shows would be much worse. This, moreover, is variation in purportedly directly measured STB data, which should show little variation at all.
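To make the arithmetic concrete, here is a minimal sketch of how a spread between vendor estimates translates into revenue at stake. Only the 6% spread and the roughly $400,000 figure come from the editorial; the audience and revenue numbers below are hypothetical, chosen purely so the percentages work out:

```python
# Hypothetical audience estimates from three STB vendors for the same
# telecast (illustrative numbers only; not NBCU's actual data).
estimates = {"vendor_a": 10.6e6, "vendor_b": 10.2e6, "vendor_c": 10.0e6}

highest = max(estimates.values())
lowest = min(estimates.values())

# Relative spread between the highest and lowest measurement.
spread = (highest - lowest) / lowest  # -> 6% disagreement

# If ad revenue scales linearly with the audience estimate, a 6% spread
# on a hypothetical ~$6.7M of ad revenue puts roughly $400K in question.
revenue_base = 6.7e6
revenue_gap = revenue_base * spread

print(f"spread: {spread:.1%}, revenue at stake: ${revenue_gap:,.0f}")
```

The point of the sketch is that the dollar impact of a fixed percentage disagreement grows linearly with the money riding on the number, which is why a 6% spread on a hit finale is already material.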

According to Wurtzel, there are serious differences between vendors that cause this variation. For example, there is no standard way to determine from the STB data stream whether an STB-attached TV is on or off, so every data vendor has devised its own algorithm for making that call, and they aren't sharing these algorithms. Each vendor carefully guards other similar "edit rules" as well, and these differences propagate into the measurements they generate. When you think of the task not just as measuring TV but as building an integrated understanding of how a program performs across three screens (TV, mobile, and Internet), you are looking at huge gaps in the comparability and meaning of metrics from screen to screen.
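To illustrate why such "edit rules" matter, here is a toy version of one plausible on/off heuristic: cap any single tuning span at some idle cutoff, on the assumption that the TV was turned off while the box stayed on. Everything here is an assumption for illustration – the function, the 4-hour cutoff, and the rule itself are invented, which is exactly the editorial's point: each vendor's real rule is secret, and different cutoffs yield different ratings from the same raw stream.

```python
from datetime import datetime, timedelta

# Hypothetical "edit rule": if a box sits on one channel with no further
# events for longer than this cutoff, assume the TV was actually off.
IDLE_CUTOFF = timedelta(hours=4)

def credited_viewing(events):
    """Given (timestamp, channel) tuning events sorted by time, return
    seconds of viewing credited per channel, truncating any span that
    exceeds IDLE_CUTOFF (the set-top box stays on, but we assume the
    TV was switched off without the box being powered down)."""
    credited = {}
    for (start, channel), (end, _next_channel) in zip(events, events[1:]):
        span = min(end - start, IDLE_CUTOFF)
        credited[channel] = credited.get(channel, 0.0) + span.total_seconds()
    return credited

events = [
    (datetime(2009, 4, 27, 20, 0), "NBC"),
    (datetime(2009, 4, 27, 21, 0), "CNN"),
    (datetime(2009, 4, 28, 7, 0), "NBC"),   # 10-hour gap: capped at 4 hours
    (datetime(2009, 4, 28, 8, 0), None),    # box powered off (end marker)
]
print(credited_viewing(events))  # {'NBC': 7200.0, 'CNN': 14400.0}
```

A vendor using a 2-hour cutoff instead of 4 would credit CNN half as much viewing from the identical event stream, which is how unshared edit rules produce the kind of divergence Wurtzel describes.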

This was written last fall. What grew out of this thinking was the CIMM, which I have discussed in prior posts. What happens in the long run is anyone's guess, but Alan's article reads like a set of product requirements for the ultimate three-screen audience metrics platform, so the best outcome would be for some smart entrepreneur to develop just such an offering. Hmmm… I'd say keep your eyes on the marketplace.