Alan Wurtzel’s Editorial in the Q3 Issue of the Journal of Advertising Research

More good metrics reading from the JAR: After my prior posting on metrics articles in the Q4 Issue of the Journal of Advertising Research, it occurred to me that I did not mention the editorial that NBC’s Alan Wurtzel wrote in the Q3 issue…

“Now or Never – An Urgent Call to Action for Consensus on New Media Metrics,” by Alan Wurtzel, President of Research and Media Development at NBC Universal
In this editorial, Alan Wurtzel argues that measurement of new media has reached a critical juncture. He sums it up this way: “You can’t sell what you can’t measure, and, unfortunately, our measurement systems are not keeping up with either technology or consumer behavior.” The problem isn’t a lack of data – samples are getting bigger and precision is getting greater. The problem is that technical challenges make it hard to assess the validity of the measurements. Without precise, transparent definitions of how data are gathered and how metrics are calculated from them, Wurtzel says, programmers cannot depend on the numbers as a basis for decision-making. And proprietary considerations are holding vendors back from providing that level of visibility into their processes.

Wurtzel cites a case – quoted by many sources last fall – in which NBCU purchased and compared viewership data for the “Heroes” finale from several different set-top box (STB) vendors. The difference between the highest and lowest measurement of the show’s ratings was 6%, which translates into a $400,000 difference in revenue. And while 6% may sound small, “Heroes” was a highly rated show, where measurement variation should be relatively low – the variation for lower-rated shows would likely be much worse. This is also variation in purportedly directly measured STB data, which should show very little variation at all.
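A quick back-of-the-envelope check (my arithmetic, not from the editorial, and assuming the 6% is a relative spread and that revenue scales with ratings): $400,000 / 0.06 ≈ $6.7 million, which is roughly the ad revenue implied to be riding on that one measurement.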

According to Wurtzel, serious methodological differences between vendors cause this variation. For example, there is no standard way to determine from the STB data stream whether an STB-attached TV is on or off, so every data vendor has come up with its own algorithm for deciding when the TV is on, and they aren’t sharing these algorithms. There are other similar “edit rules” that each vendor carefully guards, and these create differences in the measurements generated. When you think of the task not as just measuring TV but as building an integrated understanding of how a program performs across three screens (TV, mobile, and Internet), the gaps in comparability and meaning of the metrics from screen to screen become huge.
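To make the “edit rule” problem concrete, here is a minimal sketch in Python of the kind of heuristic a vendor might apply – entirely hypothetical, not any vendor’s actual rule: treat a long gap in tuning activity as “TV off.” The idle threshold and the events counted are exactly the sort of undisclosed parameters Wurtzel is complaining about.

```python
from datetime import timedelta

# Hypothetical "edit rule": infer TV-off periods from gaps in STB tuning events.
# Real vendors use their own (undisclosed) thresholds and event definitions,
# which is one source of the vendor-to-vendor variation Wurtzel describes.
IDLE_THRESHOLD = timedelta(hours=4)  # assumed cutoff, for illustration only

def infer_off_periods(tuning_events):
    """Given a time-sorted list of tuning-event timestamps (datetimes),
    return (start, end) pairs for gaps long enough to score as 'TV off'."""
    off_periods = []
    for prev, curr in zip(tuning_events, tuning_events[1:]):
        if curr - prev >= IDLE_THRESHOLD:
            off_periods.append((prev, curr))
    return off_periods
```

Two vendors with different thresholds (two hours versus six, say) will credit different amounts of viewing to the very same box, and neither will tell you which number to believe.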

This was written last fall. What grew out of this thinking was the Coalition for Innovative Media Measurement (CIMM), which I have discussed in prior posts. What happens in the long run is anyone’s guess, but Alan’s article reads like a set of product requirements for the ultimate three-screen audience metrics platform, so the best outcome would be for some smart entrepreneur to develop just such an offering. Hmmm… I’d say keep your eyes on the marketplace.

The December Issue of the Journal of Advertising Research (JAR) has Great Metrics Articles!

There are several useful articles this month in the Journal of Advertising Research. They are on a roll over at the JAR, driving some great discussion over the last few months about measurement of marketing, digital and otherwise. Recommended reading in this month’s issue:

“Commentary: Who Owns Metrics? Building a Bill of Rights for Online Advertisers,” by Benjamin Edelman, Harvard Business School Assistant Professor in Negotiation, Organizations & Markets
Ben Edelman, who has written on the role of deception and overcharging in online media (among other topics), is right on target here – he argues that advertisers have a right to know where and when their ads are being shown, delivered in the form of meaningful, itemized billing. He also asserts advertisers’ ownership of the data that comes from their campaigns, and says they should (for example) be able to use data collected from their Google PPC campaigns to target campaigns on MS AdCenter or Yahoo! This is definitely a controversial area – certainly Google, along with cable and satellite TV operators, would disagree – so read it and let me know what you think.

“It’s Personal: Extracting Lifestyle Indicators in Digital Television Advertising,” by George Lekakos, Assistant Professor in e-Business at the University of the Aegean, Greece
In case you think my comment about TV distributors wanting to own audience data is irrelevant in the context of digital marketing, Lekakos lays out a scheme for using set-top box data to discover lifestyle segments, which then feed a targeting algorithm. The approach could drive very accurate personalization and targeting of ads, but whether the data belong to the distributors, the programmers, or the advertisers is critical to whether it can ever be implemented – and I’d say that question is far from settled.
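For a feel of what “lifestyle segments from set-top box data” might look like in practice, here is a minimal sketch in Python – a generic clustering exercise under my own assumptions, not Lekakos’s actual method: describe each household by the share of viewing time it spends in each genre, then cluster those profiles into segments a targeting system could use.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical example: each row is a household; columns are the share of
# viewing time spent on (news, sports, drama, kids). This is a generic
# clustering sketch, not the segmentation method from the paper.
viewing_shares = np.array([
    [0.70, 0.10, 0.15, 0.05],   # news-heavy household
    [0.05, 0.75, 0.15, 0.05],   # sports-heavy household
    [0.10, 0.05, 0.25, 0.60],   # kids-programming household
    [0.65, 0.15, 0.15, 0.05],
    [0.10, 0.70, 0.10, 0.10],
])

# Cluster households into lifestyle-like segments; the labels become
# one more feature an ad-targeting algorithm could key on.
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(viewing_shares)
print(segments)
```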

“Measuring Advertising Quality on Television: Deriving Meaningful Metrics from Audience Retention Data,” by Dan Zigmond, Sundar Dorai-Raj, Yannet Interian, and Igor Naverniouk
The authors explore the use of audience retention metrics captured via TV set-top boxes as a measure of ad quality. They use a “retention score” that purports to isolate the effect of ad creative on audience retention, and link it with future audience response and qualitative measures of ad quality. They assert its usefulness as a relevance measure that could be used to optimize TV ad targeting and placement. Again, we should note that the issue of data ownership needs to be dealt with if this approach is going to be applied widely.
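To illustrate the general shape of such a metric – my own simplified sketch, not the authors’ actual formula – a retention score can compare the audience drop observed across an ad to the drop you would expect for a comparable slot anyway, so values above 1.0 suggest the creative held viewers better than average.

```python
# Hypothetical retention-score sketch (not the paper's formula): normalize the
# observed retention across an ad by a baseline retention for comparable slots.
def retention_score(audience_start, audience_end, baseline_retention):
    """audience_start / audience_end: tuned households at the ad's start and end.
    baseline_retention: expected fraction retained for a comparable ad break."""
    observed_retention = audience_end / audience_start
    return observed_retention / baseline_retention

# Example: 9,500 of 10,000 households stayed through the ad, vs. a 0.93 baseline.
print(round(retention_score(10_000, 9_500, 0.93), 3))  # ~1.022
```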

“The Foundations of Quality (FoQ) Initiative: A Five-Part Immersion into the Quality of Online Research,” by Robert Walker, Raymond Petit, and Joel Rubinson
In response to the growing importance of online research and to questions about its validity, the FoQ Initiative was undertaken to measure the quality of online research. The Online Research Quality Council included large advertisers, ad agencies, academic researchers, and research suppliers in the process. Among the issues they addressed: the accuracy, representativeness, and replicability of results; the identification and handling of questionable survey-taking behaviors; and the suspicion that a small number of “heavy” online respondents are taking most online surveys.

Some of the interesting findings:

  • There is significant overlap in membership of various online research panels, but no evidence this causes data quality issues
  • Multiple panel membership actually lowers the odds of “bad” survey-taking behavior by 32%
  • You should keep surveys short – longer surveys increase the occurrence of “bad” survey-taking behavior by 6X
  • Age matters – younger respondents showed 2X the occurrence of “bad” survey-taking behavior compared with older ones