Last week I did a short presentation for a NewMR session called “Looking Forward.” This post is a summary of that presentation (with an “improvement” or two). You can view the presentation at the NewMR website.

I was trained as a historian and so I tend to view contemporary and even future events from a historical perspective. In this case, the title references history in two ways. First, the title is taken from a 1965 book with the same name by Peter
But I also chose the title to remind myself how simple and elegant research once was compared to how complicated and sometimes ugly it has become. It harks back to the days when virtually every client’s problem could be solved with a well-designed and executed survey.
But alas, the world is different now and the proposition here is that the practice of contemporary market, opinion, and social research is the result of four interrelated trends that have played out over about the last 50 years.
1. The relentless march of technology
Below is a rough timeline of the major deployment of technology in the search for better data and insights (at least that was the rationale):
- From paper and pencil administered in person or via mail, to
- representing the questionnaire as a computer program administered via various electronic channels, to
- harvesting data that already exists.
The timeline also shows our expected next wave—automation and applications of AI.
But has this ongoing deployment of new technologies led us to better data or to richer insights? It has shortened cycle times and generally lowered costs, but the case that it has improved quality is much harder to make.
2. The unmooring of research from its scientific foundations
Historically, survey research rested on two main scientific pillars: probability sampling and careful measurement through questionnaire design. Both have eroded.
The term “sample quality” has been redefined to mean hygiene rather than representativeness.
Sample is research’s Achilles’ heel. What started as a slow, downward slide in sample quality is now a race to the bottom, at blistering speed. Unless we quickly course-correct, it is sure to end in a terrible, and potentially fatal, crash.
Questionnaire design has fared no better. It, too, has been redefined and now emphasizes usability and engagement rather than measurement accuracy and a clear focus on what we need to know. The debate now centers on issues such as mobile first, use of color and graphics, and game-like features, while questions are asked with vague wording and unspecified recall periods: “Which brands have you seen ads for recently?” We no longer focus on question wording, framing, and context, and questionnaires are seldom tested before they are put in the field.
3. The Gospel of Innovation
Then there is the constant search for the next big thing spurred on by a new generation of professional evangelists preaching disruption as an end in itself. Innovate or die! We are seeing new entrants that differentiate primarily on technology rather than data quality or insights. And there always seems to be at least one new technology on the horizon best characterized as a solution looking for a problem to solve. (The obvious current candidate is blockchain.) Yet despite all the disruption talk, little of this innovation has delivered better data or deeper insights.
4. The loss of public confidence in market, opinion, and social research
In parallel with all of this, the public has grown less and less willing to participate in our research and more and more suspicious of how their personal data are collected and used.
How do we stay relevant and sustain our industry over the long haul?
There is little sign that these four forces are weakening. In fact, they seem to be getting stronger and I don’t think that’s a good thing for what we do. Being fast, cheap, high-tech, and wrong is not the answer. Rather, we need to find ways to reconnect to science. If the classic survey paradigm no longer works, invent a new one. But “some data is better than no data” is not a paradigm. It’s a surrender to market forces that may well end badly for us.
To say it more plainly, we need to stop pretending that our work is better than it is. All the data we work with, whether collected via a survey, harvested from social media, or generated by a massive machine learning exercise on a client’s data warehouse, is flawed. It’s our job to understand those flaws while still making sense out of what we see. To quote the statisticians Stephan and McCarthy in their book, Sampling Opinions:
Samples are like medicine. They can be harmful when they are taken carelessly or without adequate knowledge of their effects. We may use their results with confidence if the applications are made with due restraint… Every good sample should have a proper label with instructions about its use.
We should do the same for every bit of research we do, and not just about the sample.
Of course, acquiring the capacity to evaluate data, document its biases, and factor those into the conclusions that we draw is a challenge in itself. It is a skill to be learned, and at its core is an understanding of the basic principles that are the foundation of good quality research. Researchers take a perverse pride in having “fallen into” research with little or no background or training. Yet they are surprisingly resistant to learning what they don’t know, to going back and studying the basics that are as relevant today as ever before.
Finally, there is the challenge of restoring the public’s loss of confidence in our work. That’s a heavy lift and probably begins with earning back some trust. We will not earn back that trust through association public relations campaigns alone, but rather in how we behave, how we treat those whose data we collect and analyze. It is now more important than ever that we separate what we do from direct marketers and a tech industry that essentially uses our personal data to harass us, to fill our screens with evidence that they know way too much about each of us. Crossing that bright red line between research and marketing is more tempting than ever, and not crossing it has never been more important.
Our future depends on it.