As I think back to the topics we have covered in recent months regarding research quality, I see the subjects you might expect: mobile design considerations, panel partnership and sourcing, automating in-survey checks, reviewing and coding open ends, and Bayesian techniques for identifying outliers.
But what I haven’t seen enough about, or maybe anything about, is how important screener design is to quality. I remember when I first started in research, I spent half a year just writing and reviewing screeners before I wrote a full survey. The point of that focus was that no matter how great every other aspect of the research design may be, if the screener doesn’t qualify the right population, then everything else is at best biased, and at worst useless.
Well into my third decade of market research, I’ve seen screeners succeed fantastically and fail miserably, and many failures can be traced back to a poorly designed screener. Straightlining and abandons can mean that the participant doesn’t know the answers or understand the content. Speeding can mean the content is not relevant to someone. Bad open ends can be the result of not having enough experience or expertise to have a strong point of view. While these issues can stem from a problem participant or a survey offender, they can also point to a poorly designed screener that didn’t qualify the right audience.
Screener success comes down to just a few common traits:
- Defining the population: The term screener comes from the practice of screening potential participants for their qualification to participate. I’ve heard it said that the purpose of the screener is to “keep out the riffraff.” Good research is contingent on getting the RIGHT people to answer the right questions. Is the potential participant qualified as part of the population being studied, and able to provide helpful opinions and information? From their decision-making role to their ability to be coherent and thoughtful, and their personal or professional characteristics, can the person answer the questions being asked?
- Well-designed cognitive flow: In short, start broad and narrow the focus. Best practice is to eliminate as many people as possible at each step, narrowing the pool at each stage. In a B2B study, researchers often start with industry, then company type, then role, until they reach the right person. This avoids unnecessary questions and also mirrors how people naturally process information.
- Quota questions up front: Along with careful narrowing of the topic and the audience, any question tied to a quota should sit near the front. This matters to the participant: it ensures they don’t answer a full survey before being told they don’t qualify. If you have a long screener, or are collecting a lot of data for market sizing, rather than “disqualifying” people after 5-10 minutes, we should thank them for their responses and treat it like a short survey complete.
- Carefully considered termination points: Along the screener path, caution should be used when disqualifying people based on their responses, especially in consumer work. Disqualifying on demographics, political views, or religion can be offensive. In these cases, disqualifying further into, or at the end of, the screener is a better choice.
- Unbiased and masked questions: This is crucial to avoid leading a potential participant down a clear path, only to be surprised when they aren’t actually qualified to answer your questions. Helping someone qualify does not help your study, so the audience you are seeking should be masked as long as possible.
- Built-in quality controls: Once you have the right audience, add a question or two that tests their knowledge of the subject, especially for B2B topics. This helps weed out people who have overstated their role or influence.
- Keep it short: While achieving the first six points, you also have to stay lean. Participants resent answering more than 5 minutes of questions only to be told that their answers don’t count. It makes them feel like we’ve wasted their time. So qualify them, or disqualify them, quickly and professionally.
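For readers who build screeners in survey software, the traits above can be sketched as simple branching logic. The questions, roles, quota cells, and limits below are hypothetical examples, purely to illustrate a broad-to-narrow flow with quota checks and a knowledge check up front, not any particular platform's implementation:

```python
# Hypothetical sketch of a broad-to-narrow screener flow.
# All questions, quota cells, and limits are illustrative assumptions.

def run_screener(answers, quota_counts, quota_limits):
    """Return 'qualified', 'quota_full', or 'terminated' for one participant.

    answers      -- dict of this participant's screener responses
    quota_counts -- dict of completes so far per quota cell
    quota_limits -- dict of target completes per quota cell
    """
    # 1. Broadest question first: industry (shown masked among many options).
    if answers.get("industry") != "automotive":
        return "terminated"

    # 2. Narrow the pool: decision-making role.
    if answers.get("role") not in ("decision maker", "influencer"):
        return "terminated"

    # 3. Quota check early, so no one answers a full survey
    #    only to learn their cell is already full.
    cell = answers["role"]
    if quota_counts.get(cell, 0) >= quota_limits.get(cell, 0):
        return "quota_full"

    # 4. Built-in quality control: a knowledge question that someone
    #    who overstated their role is unlikely to answer correctly.
    if answers.get("knowledge_check") != "correct":
        return "terminated"

    quota_counts[cell] = quota_counts.get(cell, 0) + 1
    return "qualified"


# Example: an influencer whose quota cell still has room.
status = run_screener(
    {"industry": "automotive", "role": "influencer", "knowledge_check": "correct"},
    quota_counts={"influencer": 3},
    quota_limits={"influencer": 10, "decision maker": 10},
)
print(status)  # qualified
```

The ordering is the point: each check terminates as early as possible, so an unqualified or over-quota participant spends seconds, not minutes, in the screener.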
Screeners are a crucial piece of the researcher’s toolkit. They ensure quality and veracity of data. Many perceived sample quality issues are actually the result of a poorly designed screener. Here’s an example:
- Do you own an ATV? (Yes/No)
- Do you own a motorcycle? (Yes/No)
- Do you own an RV? (Yes/No)
- Do you own any other motorized vehicle? (Yes/No)
In the example above, did you know that question 4 included cars? Would you have said no to the last question, even though you own a car, because you thought this was about recreational vehicles? Even if you qualified, are you responsible for vehicle purchases? If this study is seeking to interview the population of people who are in the market to buy a Jeep, has that population been properly identified?
Spending an extra 20 minutes to ensure your screener is fit for purpose will save everyone pain and hassle on the backend. And it will often save your research.
Melanie Courtright is the Executive Vice President of Global Client Services at Research Now (www.researchnow.com) where she leads a team of people who are passionate about research sampling, quality, thought leadership, and service excellence.