In two previous posts I commented on the difficulties pollsters face in getting representative samples regardless of the methodology they choose. Those difficulties vary depending on the broad approach to sampling (probability vs. nonprobability), but in all cases it takes a deep knowledge of the target population, a science-based approach, and a little luck to get it right. But there is an added complication.
Electoral pollsters, not unlike market researchers, are not particularly interested in samples of the general population. Instead they focus on two different but related populations: registered voters and likely voters. The former is typically the target in the early stages of a campaign and often relies on lists of registered voters that are relatively easy to come by. Those lists are of varying quality and will change as they are cleaned up or new people register, but for the most part they form a reasonably good sample frame.
As the election approaches pollsters gradually shift their target from registered voters to likely voters, and with that change comes a whole new level of risk. There is a lot of variation in the methods pollsters use, but I am not going to review them here. Pew has a nice report for those who are interested. Suffice it to say that they involve screening respondents with questions about political awareness and enthusiasm, voting history, voting intention for the coming election, and so on in an attempt to create a sample that represents who will show up and vote on election day. It is a prediction of turnout and, for whatever reasons, seems to be getting more and more difficult to do accurately. Many of the “polling failures” over the last decade have been attributed to a mismatch between the composition of the electorate divined by pollsters versus the actual electorate that showed up and voted. It seems pretty obvious that predictions based on interviews with one set of people are going to be unreliable when used to describe the behavior of another, different group of people. It’s my guess that when the studies are done that will be a significant part of the story of the 2016 US election.
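To make the screening idea concrete, here is a minimal sketch of one common cutoff-style approach: score each respondent on a handful of self-reported items and keep only the top scorers as "likely voters." The question names, weights, and cutoff below are entirely illustrative assumptions for this post, not any pollster's actual model.

```python
# Hypothetical likely-voter screen: sum simple 0/1 indicators from
# screening questions and keep respondents at or above a cutoff.
# Items, weights, and cutoff are made up for illustration only.

def likely_voter_score(resp):
    """Add one point for each screening item answered affirmatively."""
    items = ("intends_to_vote", "voted_last_election",
             "follows_campaign_closely", "knows_polling_place")
    return sum(1 for item in items if resp[item])

def filter_likely_voters(respondents, cutoff=3):
    """Keep respondents whose score meets or exceeds the cutoff."""
    return [r for r in respondents if likely_voter_score(r) >= cutoff]

sample = [
    {"intends_to_vote": True, "voted_last_election": True,
     "follows_campaign_closely": True, "knows_polling_place": True},
    {"intends_to_vote": True, "voted_last_election": False,
     "follows_campaign_closely": False, "knows_polling_place": False},
]

likely = filter_likely_voters(sample)
print(len(likely))  # the second respondent is screened out
```

The risk the post describes lives in exactly these choices: which questions to ask, how to weight them, and where to set the cutoff all amount to a turnout prediction, and a wrong prediction means interviewing one electorate while another one votes.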
By now I hope it’s clear that with this mix of sometimes-lousy sample frames, high nonresponse, and unreliable likely voter filters, pollsters might rightfully quote Trump himself: “It’s amazing how often I am right.”
In my next and mercifully last post I will comment on where we go from here.
Reg Baker is Executive Director of MRII