From the Wall Street Journal, Sharon Begley

Is Your Radio Too Loud To Hear the Phone? You Messed Up a Poll

March 12, 2004 7:38 a.m.; Page B1

With the political season in full bloom, tiny type is springing up like daffodils. You know, the stuff at the bottom of election polls saying that the "sampling error" is plus or minus some percentage points. Don't be fooled. For all its apparent precision, the plus-or-minus statement bears little resemblance to how accurately a poll reflects the opinion of voters. It is error of a completely different kind that trips up polls.

(Math-averse readers are allowed to skip this paragraph.) The sampling error represents the range of possible outcomes from taking a random, representative slice of the population. For practical purposes, it equals one divided by the square root of the number of people surveyed. If you poll 1,600 people, then the sampling error is 1/40, or 2.5%.

What leaps up and bites pollsters, however, is nonsampling error. This reflects the very real possibility that the people who participated in the poll are not representative of the voting population; that is, different kinds of voters, belonging to particular demographic or ideological groups, didn't have an equal chance of being polled.

It's well known, for instance, that polls tend to include too many women, too many whites and too many older folks -- "too many" meaning a greater fraction of them among respondents than in the voting population. This over-representation reflects who is more likely to answer the phone (or even have a phone), and to cooperate with the pollster. For example, women make up 51% of the U.S. population, so if 59% of respondents are women you have to adjust the raw numbers. Otherwise, your results will miss the mark, especially if women are more likely than men to hold a particular political opinion.

"This source of nonsampling error is not that hard to correct for," says statistician Andrew Gelman of Columbia University, New York. "You just downweight the women and upweight the men so your numbers match the census."

Yet that is sometimes easier said than done. Pollsters typically use random dialing of land-line numbers, both listed and unlisted. Because more than 95% of the U.S. population has a home phone, that gives pretty good coverage. But the rich often have more than one line, so they have a greater chance of being called. Since people are reluctant to tell pollsters how wealthy they are, the error introduced by reaching more rich folks is harder to correct than the error of reaching more women (though ZIP Codes can be a proxy for wealth).
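To make the rule of thumb Begley cites concrete, here is a minimal sketch in Python (not from the article) of the one-over-the-square-root-of-n arithmetic, using her example of a 1,600-person poll:

```python
import math

def sampling_error(n):
    """Rule-of-thumb margin of error for a poll of n respondents:
    roughly 1 / sqrt(n), expressed as a fraction."""
    return 1 / math.sqrt(n)

# The article's example: a poll of 1,600 people.
print(f"{sampling_error(1600):.1%}")  # 2.5%, i.e. plus or minus 2.5 points
```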
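The adjustment Prof. Gelman describes is, in essence, post-stratification weighting: each respondent is weighted by the ratio of his or her group's census share to its share of the sample. A minimal sketch, using the article's 51%/59% figures; the group-level support numbers are invented purely for illustration:

```python
# Post-stratification: weight each respondent by census share / sample share
# for his or her group, so the weighted sample matches the census.
census_share = {"women": 0.51, "men": 0.49}   # from the article
sample_share = {"women": 0.59, "men": 0.41}   # from the article

weights = {g: census_share[g] / sample_share[g] for g in census_share}
print(weights)  # women ~0.86 (downweighted), men ~1.20 (upweighted)

support = {"women": 0.55, "men": 0.45}        # hypothetical raw results
raw = sum(support[g] * sample_share[g] for g in support)
adjusted = sum(support[g] * sample_share[g] * weights[g] for g in support)
print(f"raw: {raw:.1%}, adjusted: {adjusted:.1%}")  # raw 50.9%, adjusted 50.1%
```

Note that the weighted sample shares (sample share times weight) equal the census shares by construction, which is exactly what makes the adjusted estimate match a census-balanced poll.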

Worse, polls likely undersample or oversample people in categories the census doesn't count, making an adjustment such as that for too many women virtually impossible. Prof. Gelman's favorite example is surly people. They're more likely to treat a pollster as they would a telemarketer, hanging up and therefore not having their views included. But we don't know how many surly people are in the voting population. If surly people lean toward one candidate, then a poll asking, "Who are you most likely to vote for?" will underestimate his support.

Reaching unlisted numbers is a mixed blessing for pollsters. "These people generally don't want to be bothered, so their response rate is low," says Fritz Scheuren, vice president for statistics at the National Opinion Research Center at the University of Chicago and president-elect of the American Statistical Association. But how do you adjust your poll for that? "We don't know how to quantify this," says Clyde Tucker of the U.S. Bureau of Labor Statistics, an expert on nonsampling error. "Are people who refuse to respond special in some way that matters?"

If someone reached by random dialing doesn't answer, the pollster is supposed to try again. But since that takes time and money, pollsters sometimes just go on to the next number. This is another potential source of error. No-answers might be too busy, at work, out partying, or screening calls. They are more likely to be childless, single or working a shift job. But you can't be sure why they're not picking up, so correcting for "nonresponder bias" is harder than correcting for an overrepresentation of women. Failing to include no-answers "may be the biggest source of nonsampling error," says Dr. Scheuren. If pollsters really wanted to indicate how good their sample is, they'd skip the plus-or-minus-X% and reveal the no-response rate.

Blame technology for the newest source of nonsampling error. Pollsters don't call cellphones (the owner might be driving). "As more and more people have only a cell, you have a problem," says Dr. Scheuren. Because no one knows how ditching one's land line correlates with political leanings, pollsters can't tell how omitting the cell-only population distorts reality.

Instant polls gauging reaction to nominees' speeches at party conventions are notorious for no-answer bias. Up against a deadline, pollsters don't have time to call back. "You obviously only get people who are at home, and they are more likely to be people who were watching the speech" and like the candidate already, says Dr. Scheuren. "You can get a bump-up of eight percentage points, but it isn't representative of likely voters."

Poll-reader, beware.
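To see why differential nonresponse, such as the surly-voter problem, can swamp a small sampling error, here is a toy simulation. The article gives no such model; every number below (group sizes, answer rates, preferences) is an assumption chosen only to illustrate the mechanism:

```python
import random

random.seed(0)

# Toy electorate (all figures are assumptions for illustration):
# 20% "surly" voters who favor candidate A 70% of the time but answer
# the phone only 30% of the time; everyone else favors A 48% of the
# time and answers 70% of the time.
def simulate_poll(n_calls=100_000):
    answered, support_a = 0, 0
    for _ in range(n_calls):
        surly = random.random() < 0.20
        answers = random.random() < (0.30 if surly else 0.70)
        if not answers:
            continue  # a hang-up or no-answer never enters the poll
        answered += 1
        support_a += random.random() < (0.70 if surly else 0.48)
    return support_a / answered

true_support = 0.20 * 0.70 + 0.80 * 0.48      # 52.4% in the electorate
print(f"true support:  {true_support:.1%}")
print(f"poll estimate: {simulate_poll():.1%}")  # about 50.1%: A is undercounted
```

Even with 100,000 calls, so a sampling error of a fraction of a point, the poll misses candidate A's true support by roughly two points, because the group that favors him is the group that hangs up.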