You don’t have to be a political pollster or a political junkie to know that a series of widely publicized failures to accurately predict election results is wreaking considerable havoc in that part of the research industry. At its core the problem mostly comes down to reliance on unrepresentative samples, although very late-breaking changes in public opinion are sometimes to blame as well. Most defenses of electoral polling come down to a recitation of all the times the results were correct, which is meant to be encouraging, but there is no clear way to tell into which bucket any given poll is likely to fall.
What I find especially interesting about all of this is that the problem afflicts probability and non-probability samples alike. The former suffer from a combination of changing and uncertain sample frames coupled with exceptionally high nonresponse. The latter suffer from the ongoing struggle to produce representative samples from the dog’s breakfast that is online sampling, a problem we have been wrestling with for the better part of two decades.
Back in September I heard three very distinguished US public opinion pollsters sum the situation up thus: “Sometimes probability sampling works and sometimes it doesn’t. Sometimes non-probability sampling works and sometimes it doesn’t. In either case we have no idea why.”
It would be downright silly of us to assume that market research is somehow immune from all of this. Political pollsters at least know at some point whether their numbers are right or wrong. As market researchers, we generally have no clue. We rely heavily on quota sampling and demographic weighting, both shown to be insufficient when response rates plunge or when dealing with self-selected online sample frames. And we still too often use a single survey as a sort of truth around which we build our advice to clients.
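To make the weighting point concrete, here is a minimal sketch of demographic cell weighting on a single variable, using entirely hypothetical age-group shares. Real practice typically rakes across several margins at once (iterative proportional fitting); this only shows the basic idea that each respondent is weighted by the ratio of population share to sample share, and it illustrates why the method breaks down when a self-selected frame leaves some cells badly under-filled.

```python
# Minimal sketch of demographic (cell) weighting on one variable.
# Age groups and population shares below are hypothetical examples.
from collections import Counter

def cell_weights(sample, population_shares):
    """Weight each respondent so the weighted sample matches the
    population distribution on one demographic variable."""
    counts = Counter(sample)
    n = len(sample)
    # weight = population share / sample share, per demographic cell
    return {cell: population_shares[cell] / (counts[cell] / n)
            for cell in counts}

# Hypothetical: 18-34s are 30% of the population but only 20% of
# a self-selected online sample.
sample = ["18-34"] * 20 + ["35-54"] * 40 + ["55+"] * 40
pop = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
weights = cell_weights(sample, pop)
# Under-represented cells get weights > 1; over-represented cells < 1.
```

The catch, as the polling failures show, is that weighting only repairs imbalance on the variables you weight by; if the people who opt in differ from those who don’t in ways demographics don’t capture, the weighted estimate is still biased.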
In a perfect world we would focus more on developing models for sample selection and post-survey adjustment that do a better job of delivering representative samples. History suggests that those kinds of developments are more likely to come from the social research and academic sectors than from MR. A more common-sense approach might be to adopt the practice of assuming that whatever survey result we are looking at is probably wrong, and then set about finding other data that proves otherwise. In the process we will learn a great deal more about the topic we are studying and perhaps even become better at spotting potential biases in our survey practices.
We increasingly live and work in a data-rich world, but we are not doing a very good job of leveraging the advantages that can bring us. We need to view new data sources and methods less as substitutes for existing methods and more as enhancements that make them better.