When I oversaw marketing for a cruise line many years ago, one of the unique challenges we faced, as I suspect is true for many CMOs, was justifying the ROI of our marketing. We were an interesting company in the travel space in that our CEO and many of us in the marketing function were no strangers to quantitative analysis. That grounding led to a highly analytical approach to both allocating our marketing mix and assessing its impact. What also made us unique was our strong direct marketing orientation: we sought to isolate and track the results of each ad execution as closely as we could, calculating everything from cost per call generated to overall acquisition cost per guest for each ad.
As part of this rigorous focus on analytics, our senior management team would often engage in heated but enlightening debates about specific marketing initiatives. It was healthy to question everything, and my background in marketing research design had already instilled a healthy skepticism about the ability to easily isolate the return of a singular marketing execution, like a direct marketing piece, on overall sales. I had done enough purchase process and consumer buying cycle research to recognize that moving a brand to the top of the consideration set in high-involvement purchase decisions is an iterative process.
My first job after receiving my undergraduate degree had been in public relations, and that experience gave me an intuitive, albeit qualitative, appreciation for building a brand’s reputation and continually reinforcing it. It was my own organic application of the reach-plus-frequency recipe, coupled with the aforementioned purchase process research, that convinced me there was important value in brand advertising that didn’t tie directly to moving cabins for a particular sailing. This met with some resistance from our CEO, who uttered a line that has stuck with me forever. “You can’t eat impressions,” he exclaimed. And though I ultimately prevailed in that particular debate, the comment prompted a lifelong pursuit of the great challenge of reasonably measuring the impact of specific marketing initiatives, and of demonstrating that the right type of impressions are akin to the necessary task of driving someone to the dining room table so that, ultimately, one can eat.
Fast forward to my next career stop, in the equally numbers-driven world of media sales and marketing, and I found myself channeling my cruise line CEO’s sentiments. In the media space there was never a shortage of syndicated data and purported proprietary methodologies that claimed to precisely measure impressions and ad impact. It didn’t take long for the research methodologist in me to look under the hood and see grave flaws in sampling, data collection instrument design, and analysis and interpretation. Charged with stewarding a portfolio of quality-differentiated brands in a narrow vertical, where the competition sold on low price and alleged audience parity, much of my audience research focus went to challenging the validity of the available currency. We rightfully questioned whether the competition’s reach was legitimately equivalent to ours, not only in sheer numbers but, more importantly, in quality. To paraphrase and build on my former cruise industry boss: “Not all impressions are created equal.” That premise became a rallying cry, and our own custom research was used effectively to combat the low-price competitor’s assertions of audience parity.
The successful battles waged in media sales land were a springboard and an evolutionary step to where I presently sit, heading a research consultancy whose work in the sports and media space often travels a similar road. Here we are often called upon to help clients on both the property and brand side evaluate or demonstrate the impact of their marketing activations and make better decisions. In this environment, the challenges of objectively measuring audience quality and of isolating the efficacy of single executions both come front and center. I’ve grown to embrace classic experimental design and multivariate testing as the means by which to solve the efficacy question. That is, by designing tests that hold all but a single element of an execution constant across similarly drawn audience cells, we can quantitatively observe differences in consumer perceptions and behaviors and see whether they are statistically meaningful. When they are, we know we’ve hit upon something that works. In the words of Briggs and Stuart, we know “What Sticks.” Further, with the benefit of big data, we can now fuse those behavioral tracking measures with soundly executed attitudinal research and understand not just what target consumers do after ad exposure, but why they do it and how the isolated marketing mix elements affect them. Hence, we can solve for which eyeballs, or impressions, are more valuable than others.
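The test-and-control logic described above can be sketched in a few lines of code. This is a minimal illustration, not the consultancy’s actual methodology: it assumes two hypothetical audience cells of equal size, identical except for one creative element, and applies a standard two-proportion z-test to ask whether the difference in conversion rates is statistically meaningful. All cell sizes and conversion counts here are invented for illustration.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did cell B's conversion rate differ
    meaningfully from cell A's? Returns the z statistic."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both cells convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical cells: same audience draw, one element of the execution varied
z = two_proportion_z(conv_a=120, n_a=5000, conv_b=165, n_b=5000)
significant = abs(z) > 1.96  # two-tailed test at the 95% confidence level
```

With these made-up numbers the lift clears the significance bar; in practice, the same comparison would be run for each isolated element of the execution, and only the statistically meaningful differences treated as “what sticks.”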
Of course, such an approach is not the simplistic “silver bullet” that many of the syndicated media products suggest they have found. It still angers me that the methodological shortcomings of direct questioning (“Did this ad make you more inclined to buy this product?” As if anyone could rationally answer such an inquiry.) and convenience sampling persist even in today’s analytics-friendly climate. The pragmatic reasons these measures persist may well be that:
- They are cheap and easy.
- People are lazy and reluctant to challenge them, because they have been around forever.
- Agencies evaluating marketing opportunities often embrace these results because they commoditize competing properties, arming the agencies with greater negotiating power.
The fact of the matter is that my old CEO was correct in many ways. You can’t eat impressions. You have to season them with well-thought-out research that uncovers the relative engagement levels and appropriateness of the eyeballs being reached. You can’t take shortcuts if you really want to know what is working, and in what ways. To paraphrase my grandfather, the first CEO I ever listened to: “If it’s worth doing, it’s worth doing right!”
Jon Last is President of Sports and Leisure Research Group, a full service marketing research consultancy serving the sports, travel and media sectors with consultative custom research. Learn more at www.sportsandleisureresearch.com.