How do we teach AI systems to make the right, bias-free decisions? Learn what Reg Baker, North American ESOMAR Ambassador, has to say about sampling in market research as it relates to artificial intelligence.
How to determine market research sample size – in a valid way that will ensure representativeness.
Margin of error, as popularly understood, overstates the validity of research results in at least three key ways. First, those interpreting margin of error forget an important caveat. The results are estimates and typically vary within a narrow range around the actual value that would be calculated by completing a census of everyone in a […]
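As a quick illustration of the concept this excerpt refers to (a sketch of the textbook formula, not taken from the post itself), the margin of error for a sample proportion under simple random sampling is z · √(p(1−p)/n):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Textbook margin of error for a sample proportion.

    p: observed proportion, n: sample size,
    z: critical value (1.96 for ~95% confidence).
    Assumes a simple random sample, which real-world
    panels and polls rarely achieve.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a sample of 1,000:
print(round(margin_of_error(0.5, 1000), 3))  # → 0.031, i.e. ±3.1 points
```

Note that this formula only quantifies sampling variability; it says nothing about nonresponse or coverage problems, which is part of the caveat the post raises.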
I have been spending time this week wrapping up my presentation for tomorrow’s AMSRS webinar on The Future of Surveys. It’s caused me to step back and take a broad look at what has happened with surveys over the last 75 years or so, and I am once again impressed by what a great source […]
Numerous industry groups have reported that the levels of respondent cooperation and response rates have been dropping over the past 20 years. Phone surveys have an average answer rate of less than 8%, and of those, less than 4% agree to participate. In the early 2000s, Web-based panels produced average response rates of around 48%, […]
The political polling community has taken its share of hits over the last few years and there has been no shortage of brickbats thrown at pollsters both in the US and UK. All sorts of explanations have been suggested. The response within MR has been especially amusing, focusing primarily on measurement error (asking the wrong […]
At the end of every questionnaire SSI asks people how satisfied they were with the survey experience, using a 5-star rating. If they rate the survey high or low, we ask participants why they say that. When we look at the comments from people who ranked their survey experience a 5-star one*, some patterns emerge […]
As I think back to the topics we have covered in recent months regarding research quality, I recall what you might expect. Mobile design considerations. Panel partnership and sourcing. Automating in-survey checks. Reviewing and coding open ends. Bayesian techniques for identifying outliers. But what I haven’t seen enough about, or maybe anything about, is how […]
This is the last in a series of posts arguing that the fundamental problem with the recent US electoral polls was the failure to achieve representative samples of the population that actually showed up to vote. I hope I have been clear that achieving such a sample with current methods is no mean feat given […]
In two previous posts I commented on the difficulties that pollsters face getting representative samples regardless of the methodology they choose. Those difficulties vary depending on the broad approach to sampling (probability vs. nonprobability), but in all cases it takes a deep knowledge of the target population, a science-based approach, and a little luck to […]