A week or so ago my email included news of a new study breathlessly announcing, “96 percent of Americans are shopping online.” That number struck me as high, since the very idea that 96% of Americans even use the Internet sounded like a stretch. So I asked Google, “How many Americans use the Internet?” First up was this 2015 report from Pew telling me that “15% of Americans don’t use the Internet.” That seems in line with what the U.S. government finds in its Current Population Survey.
So then I asked Google, “How many Americans shop online?” That’s a tougher one for Google, but I found a report on the Statista site putting the penetration of online shopping at 63%. The CPC Strategy blog has a visualization that says “more than 80% of the online population shops online.” If we accept the Pew number for Internet penetration of 85% (similar to what high-quality government surveys report) and the 80% figure for online shoppers, then the percentage of Americans who shop online is around 68%.
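The arithmetic behind that back-of-envelope estimate can be sketched in a few lines (the figures are just the ones cited above, not independent data):

```python
# Back-of-envelope combination of the two survey figures cited in this post.
internet_penetration = 0.85    # Pew: 15% of Americans don't use the Internet
shoppers_among_online = 0.80   # CPC Strategy: "more than 80% of the online population shops online"

# Share of ALL Americans who shop online = (share online) x (share of online who shop)
americans_shopping_online = internet_penetration * shoppers_among_online
print(f"{americans_shopping_online:.0%}")  # 68%
```

Which is a long way from the 96% in the headline.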
I don’t give a rat’s ass about how many Americans shop online, and I probably should look at still more studies to triangulate to an estimate that is defensible. But what bugs me is the ongoing stream of reports of studies conducted online, with uncertain methods and uncertain samples, that purport to find all sorts of things about “consumers” when in fact they just describe bad estimates from poorly designed online surveys done by people who don’t know what the hell they are doing.
It is so easy in many of these cases to corroborate your findings with similar work by others, creating a context for your results and perhaps identifying biases you need to take into account when interpreting your own data. It ought to be SOP. Why aren’t people doing it? Is it naiveté or something more sinister?
The democratization of survey research is not necessarily a bad thing. In principle at least, more surveys by more people can broaden our understanding of the world and lead to better, more informed decisions. But too often bad surveys garner headlines that have the cumulative effect of undermining the public’s faith in surveys as a way of learning about society and the world. And it’s not just the mainstream media. Too many MR-oriented publications deliver this stuff into our inboxes every day.
The Poynter Institute has some good news in this regard with the announcement of a new online course on polls and surveys designed specifically for journalists “to help improve media coverage of polls and survey results around the world.” The course seems to be mostly directed at electoral polling, but the need for educating journalists does not stop there. Journalists across the board need to become smarter and more selective about what they choose to publish. One way to stop junk surveys is to deprive them of the oxygen that publication supplies.