Survey Says: We’re Flying Through a Fog
The pollsters got it wrong – again. The latest spectacular failure was not anticipating the Conservative Party’s triumph in Great Britain. The public should get used to it, because inaccurate survey results are going to continue – and are likely to get worse.
There is trouble in the land of polls. In addition to the difficulty in getting accurate results, most reporters, commentators and pundits know very little about how polling works. The result is that consumers of news are getting terrible information.
Polling is simply determining what a population looks like by looking at a subset or sample. For elections, pollsters have two big problems: 1) getting the right sample and 2) getting the truth out of the respondents. The honesty problem is a tough nut to crack. Most pollsters assume the liars cancel each other out on ballot questions and use subtle language tricks to try to get accurate responses to issue questions (although those assumptions and tricks are far from foolproof).
The challenge of getting the right sample is incredibly vexing and getting more difficult all the time. For any survey to be valid (political or anything else), the results must be drawn from a random sample; otherwise you end up with biased results. You could hardly trust a survey taken just among the first dozen people you bump into walking out of the New York Stock Exchange.
Because some groups are harder to reach than others and have different levels of willingness to participate, no pollster can get a truly random sample. Instead, pollsters have to build a sample that imitates randomness and representativeness.
Most polls sample from 600 to 900 people. Believe it or not (and you should believe it), a sample of 600 people does a pretty good job of representing as many as 10,000,000 people (see http://www.surveysystem.com/sscalc.htm) – as long as it is a random sample. And there’s the rub: it is really, really tough to get the sample right.
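The arithmetic behind that claim can be sketched in a few lines. The standard 95% margin of error for a proportion depends on the sample size, not (for any large population) on the population size – which is why 600 interviews can stand in for ten million voters. This is a minimal sketch using the textbook formula, assuming a simple random sample:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from a simple
    random sample of size n. p=0.5 is the worst case (widest interval)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (600, 900):
    # n=600 gives roughly +/- 4 points; n=900 roughly +/- 3.3 points
    print(f"n={n}: +/- {margin_of_error(n) * 100:.1f} points")
```

Note what is missing from the formula: the population size never appears. The precision comes entirely from the sample – provided, as the article stresses, that the sample is actually random.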
The demise of telephone land lines has been a disaster for political polling. In the olden days, if you kept calling and calling eventually you would reach the people you needed. With cell phones, the problem is not so much reaching people, but that call screening is much easier, people live in one place but have a phone with an area code from another, and when you reach someone, they won’t stay on the phone for more than a few minutes. One of the top congressional media consultants confided to me that surveys conducted on cell phones simply cannot last more than 6-8 questions.
The cell phone problem is compounded by the general difficulty in reaching younger voters and lower-income voters. Retired voters are relatively easy to reach. While older voters turn out at higher rates, younger and low-income voters do vote and need to be accounted for – which leads to the problem of figuring out who will turn out.
Every pollster has to put together a turnout model. They assign quotas by age, gender, party and ethnicity to try to match projected turnout – all in the hope of building a perfect imitation of a random sample. In the end, every pollster has to play some hunches as to how effective each party’s turnout effort will be and how motivated (or not) different sub-groups will be. Leading up to and on Election Day, even very good pollsters will look foolish just because a hunch or two on turnout fails.
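The weighting step can be sketched in a few lines. Everything here – the age brackets, the turnout shares, the support numbers – is invented for illustration; real turnout models weight on many more variables, but the mechanics are the same: scale each group’s raw responses up or down until the sample matches the pollster’s projected electorate.

```python
# Toy example: reweight raw poll responses to match a projected turnout model.
# All numbers are invented for illustration.
raw_share = {"18-34": 0.15, "35-64": 0.50, "65+": 0.35}      # share of completed interviews
turnout_model = {"18-34": 0.25, "35-64": 0.50, "65+": 0.25}  # pollster's turnout hunch
smith_support = {"18-34": 0.55, "35-64": 0.42, "65+": 0.38}  # Smith's support in each group

# Each group's weight scales its interviews toward its projected turnout share.
weights = {g: turnout_model[g] / raw_share[g] for g in raw_share}

unweighted = sum(raw_share[g] * smith_support[g] for g in raw_share)
weighted = sum(turnout_model[g] * smith_support[g] for g in raw_share)
print(f"unweighted Smith: {unweighted:.1%}, weighted: {weighted:.1%}")
```

In this toy case the raw sample over-represents retirees, so weighting toward the turnout model shifts Smith’s number by more than a point – which is exactly why a wrong turnout hunch can make a careful pollster look foolish.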
The media know polls can be unreliable, but they don’t really understand why. Pollsters know, which is why they hedge their projections with the infamous "margin of error." Reporters and commentators attempt to sound intelligent by invoking the margin of error, but the way they report it betrays their ignorance. How many times have you heard a reporter say: "Smith leads Jones 42% to 40% with a margin of error of plus or minus 5%, so the candidates are statistically tied." Wrong! Smith is ahead and that’s it. They are not "statistically tied."
The margin of error is what is known as a confidence interval. What it means is that 95% of the time, the true result (if you asked every voter in the district) would fall within the margin of error of the poll’s estimate. So, if Smith has 42% and Jones has 40% with a margin of error of plus or minus 5%, then there is a 95% chance that Smith has anywhere from 37% to 47%, and Jones anywhere from 35% to 45% – but the chances within those ranges are not equal. The best estimate for Smith is 42% and for Jones is 40%.
The estimate is essentially the high point of a Bell curve (really a normal curve).
Because only a sample was interviewed, it is still an estimate. The number is not exact. So if you imagine a Bell curve, the best estimate in our poll is for Smith to have 42%. He has a lower chance of having 41% or 43%, lower still for 40% or 44%, and so on. The chance that Smith has less than 37% is really, really small. When two candidates are within the margin of error, it does not mean they are tied. It just means the pollster is not quite so certain about the outcome.
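The "really, really small" chance can be made concrete. Treating the poll estimate as the peak of a normal curve whose 95% interval is the reported margin of error (a standard approximation, not anything this particular pollster published), the implied standard error is the margin divided by 1.96, and the tail below 37% works out to about 2.5%:

```python
import math

def normal_cdf(x, mu, sigma):
    """P(X < x) for a normal distribution with mean mu and std dev sigma."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

moe = 0.05              # reported 95% margin of error (+/- 5 points)
sigma = moe / 1.96      # implied standard error of the estimate
smith = 0.42            # Smith's poll number

# Chance Smith's true support falls below the bottom of his interval.
p_below = normal_cdf(smith - moe, smith, sigma)
print(f"P(true support < 37%) = {p_below:.1%}")  # about 2.5%
```

That is the article’s point in numbers: a two-point lead inside a five-point margin of error is not a coin flip, just a lead the pollster is less certain about.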
In the British elections, the Tories performed at the high end of the margin of error and that was all it took for a resounding triumph. Their national vote total was less than 37%.
Put together the problems pollsters face in tracking down voters, getting truthful answers and interviewing the right imitation of a random sample, and we are left with a lot of uncertainty heading into the next elections. There are going to be some surprises, as there always are. Unfortunately, being truthful about uncertainty is not very profitable. For the foreseeable future we are just going to have to take the polls with a grain of salt – or, better yet, an entire shaker of salt.
(Keith Naughton is a public affairs consultant specializing in messaging, policy analysis and policy development. He has a Ph.D. in public policy from the University of Southern California. He can be reached at [email protected] or @KNaughton711.)