Wednesday, December 29, 2004
Polling and Accuracy
Taking the public's pulse is a $6.6 billion industry that combines people skills and a certain artfulness with statistics. Good opinion surveys don't just ask questions - Who are you going to vote for? Have you had more than 20 sexual partners? - and then spit out numbers. Pollsters make adjustments, like giving more weight to answers from particular groups so the sample reflects the overall population they're trying to represent. Mathematicians and survey methodologists devote entire careers to getting more predictive and illuminating results.
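To make the weighting step concrete, here's a back-of-the-envelope sketch in Python of how a pollster might re-weight a sample so its demographic mix matches population targets. The age groups, shares, and support numbers are all invented for illustration; real firms use finer categories and fancier methods (raking, and so on).

# Illustrative post-stratification weighting: each group's answers are
# re-weighted so the sample's mix matches known population shares.
# All numbers below are made up for the example.

population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}   # census-style targets
sample_share     = {"18-34": 0.15, "35-64": 0.55, "65+": 0.30}   # who actually answered the phone

# Weight for each group = target share / observed share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Fraction in each group saying they'll vote for Candidate A (invented).
support_for_a = {"18-34": 0.58, "35-64": 0.49, "65+": 0.44}

raw_estimate      = sum(sample_share[g] * support_for_a[g] for g in sample_share)
weighted_estimate = sum(sample_share[g] * weights[g] * support_for_a[g] for g in sample_share)

print(f"unweighted: {raw_estimate:.1%}, weighted: {weighted_estimate:.1%}")

In this toy sample the two estimates differ by a couple of points, which in a close race is the whole ballgame.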
For example, a couple of weeks before the election, Science published an article by Drazen Prelec, an MIT psychologist. Prelec describes how to put the statistical thumbscrews on poll respondents - "a Bayesian truth serum," he calls it. (Bayesian math is a branch of statistics and probability theory.) In addition to posing a direct question to the respondent, the pollster also asks for a guess about how other people will answer the same question - "What percentage of people in the population do you think have had more than 20 sexual partners?" People telling the truth tend to overestimate how common their own answer is; the math's complicated, but basically we all think we're typical.
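For the curious, here's a rough numerical sketch of the "surprisingly common" idea at the heart of the truth serum. This is only the half of the scoring that rewards answers given more often than people collectively predicted; Prelec's full formula also rewards accurate predictions, and the data below are invented.

import math

# Invented data: each respondent answers yes/no to the sensitive question and
# also predicts what fraction of the population will say "yes".
answers     = ["yes", "no", "yes", "no", "no", "yes", "no", "no"]
predictions = [0.20, 0.10, 0.25, 0.15, 0.10, 0.30, 0.20, 0.15]

n = len(answers)
actual_yes   = answers.count("yes") / n                                 # how common "yes" really was
geo_pred_yes = math.exp(sum(math.log(p) for p in predictions) / n)      # geometric mean of predictions
geo_pred_no  = math.exp(sum(math.log(1 - p) for p in predictions) / n)

# An answer scores well when it turns out to be more common than the crowd
# predicted - the signature of truthful responses on embarrassing questions.
score_yes = math.log(actual_yes / geo_pred_yes)
score_no  = math.log((1 - actual_yes) / geo_pred_no)

print(f"'yes' given by {actual_yes:.0%} of respondents, predicted around {geo_pred_yes:.0%}")
print(f"surprisingly-common scores: yes={score_yes:+.2f}, no={score_no:+.2f}")

Here "yes" comes out surprisingly common - more people admit it than anyone expected - so the scoring treats "yes" answers as more credible.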
Prelec's article addressed a small but vital problem. Mr. and Ms. America don't tell outrageous lies to pollsters, but they do tend to shade their answers to please interviewers - only a touch, maybe, but enough to change results. People say they plan to vote when they don't, or that they're paying close attention to an issue when they're not. But these little white lies are critical because pollsters use that information to determine if a respondent is a "likely voter," the linchpin question in any political survey. Screw that up, and the poll is worthless. In fact, many experts now suspect that volatility in political polls, especially in close races, is a consequence of flaws in the way pollsters identify likely voters.
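The likely-voter screens themselves vary from firm to firm, but the general shape is a short battery of questions scored into an index, with only the top scorers counted in the horse race. The questions, point values, and cutoff below are invented for illustration, not any pollster's actual screen.

# Invented likely-voter screen: one point per "likely" signal, and only
# respondents above a cutoff count toward the headline number.

def likely_voter_score(respondent):
    score = 0
    score += respondent["says_will_vote"]          # plans to vote (easy to overstate)
    score += respondent["voted_last_election"]     # past behavior, harder to shade
    score += respondent["follows_race_closely"]    # self-reported attention
    score += respondent["knows_polling_place"]     # a small factual check
    return score

sample = [
    {"says_will_vote": 1, "voted_last_election": 1, "follows_race_closely": 1, "knows_polling_place": 1},
    {"says_will_vote": 1, "voted_last_election": 0, "follows_race_closely": 1, "knows_polling_place": 0},
    {"says_will_vote": 1, "voted_last_election": 0, "follows_race_closely": 0, "knows_polling_place": 0},
]

CUTOFF = 3  # arbitrary; nudging this by one point can swing which voters "count"
likely = [r for r in sample if likely_voter_score(r) >= CUTOFF]
print(f"{len(likely)} of {len(sample)} respondents counted as likely voters")

Notice that everyone in this toy sample says they'll vote - that answer alone tells the pollster almost nothing, which is exactly why the little white lies matter so much.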