The first time I heard the phrase “good enough” used in an MR context was probably 15 years or so ago at an SPSS event in Montreal. Tony Cowling, then Chairman at TNS, used it to describe how online panels might be used to look for signals from the marketplace that something was afoot and further investigation was in order. Tim Macer was there as well, and afterwards I asked him what he thought of Tony’s use of the term. He wondered if it might be what we used to call “quick and dirty.” Like Tim, I was used to the term as a pejorative, as in “good enough for government work.” Urban Dictionary has several other interesting uses of the term, none of them especially positive.
Despite its somewhat questionable lineage, the term has crept into our MR vocabulary to describe one side of the cost-time-quality triangle. We may be unique among industries in using the term to describe the quality of what we do. Granted, we all face cost constraints in the things we buy, but I can think of few ad campaigns in other industries built around the concept of “good enough.”
I appreciate the fact that unrelenting pressure to reduce costs and cycle time has put us in a box we might not like very much. So be it. But as best I can tell, we have not taken the next obvious step of figuring out what this means when it comes to talking to clients about the research we deliver to them. As David Smith recently pointed out to me, we need a new language for talking about research results. We continue to use the language of a rejected paradigm—margin of error, confidence intervals, significance tests, etc.—as if nothing has changed. But much has changed, and it starts with the precision we can attribute to whatever result we are looking at.
The obvious implication of “good enough” is that we could produce something better if we just had more time and more money. Might that better outcome be a different result? So different that it might lead to a different decision? How do we couch what we say about our work in terms that express the uncertainty of “good enough”? Or do we continue to obscure that uncertainty with false precision?
Elsewhere on this blog Jeffrey Hunter worries that market researchers may be afraid of consulting. Or, as he put it, “being held responsible for recommendations, which if implemented, would have consequences.” Is that fear? Or a lack of faith in our work?
There is a not-so-old saying: “If you have one GPS, you will always know exactly where you are. If you have two, you will never be completely sure.” And what if you have three? Or four?
We increasingly do research in a world where we can assemble multiple data points around the same problem. The challenge is not to decide which one is right, but rather to figure out why each may be wrong and what that tells us about where the truth might lie. We don’t seem to be very good at that. We take far too much at face value, as good enough.