Thursday, May 24, 2012

How to Read Polling Data

Confused by all the polls and surveys in the news? Who is ahead or behind? I was interviewed by ProfNet for this article on how polling works, and it is worth reprinting here.

Dear Gracie,

With elections approaching, I've seen a lot of polls in the news recently. How do we know if the polls are accurate or biased?

Puzzled by Polls

Dear Puzzled by Polls,

Three ProfNet experts provide some insight.

What You Need to Understand About Polls

"Creating and fielding a poll is not something that just anyone can do at the drop of a hat," says Jason Reineke, associate director of the Middle Tennessee State University (MTSU) Poll, which is a statewide, biannual poll of Tennesseans; as well as the university's assistant professor of journalism.

"It is both an art and a science, and the people who do it well usually have extensive training and expertise," continues Reineke. "Like a journalist, lawyer or medical doctor, being a pollster is a profession."

Polls are snapshots in time and not predictive tools, explains David Schultz, law and graduate school professor at Hamline University's School of Law, and editor of the Journal of Public Affairs Education. For example, polls conducted today about the presidential elections are not necessarily indicative of what will happen in November.

"A common problem with political polls is that they are often fielded by one party to support its agenda," adds Bob Clark, president of 24K Marketing.

Some polls are better than others, but the value of a poll is better judged against the goals it was designed to address than by one-size-fits-all rules, says Reineke. "Nonetheless, there are some standards that can be applied across most polls."

Transparency
Pollsters should freely and honestly report information about the poll's funding, affiliation, methodology, data and analysis, explains Reineke.

"If the source of a poll can't or won't tell you how they sampled respondents, how they interviewed them, what the questions and response options were, what the response rate was, or other details about the poll, then the results should be taken with a commensurate grain of salt," he advises.

Also, be skeptical of a poll if it was designed and conducted by someone without recognized credentials, experience and reputation, says Reineke, just as you'd be skeptical of a doctor without a degree or a journalist without any bylines.

Reineke suggests checking out the website of the American Association for Public Opinion Research (AAPOR). "If a pollster is not a member of AAPOR, or is dismissive of the organization -- or worse yet has never heard of it -- that should be cause for concern."

Poll Questions
One indicator of bias in surveys is the leading question, says Clark. For example: "Are you better off now under the Obama administration than you were four years ago?"

This question is biased because it ties Obama to the issue, says Clark.

"A poll is only as good as the questions asked," agrees Reineke. Questions should not encourage or discourage respondents to provide a particular response over others, and should only ask about one thing at a time.

Likewise, the answer options themselves should not include biased or politically charged words, says Clark. For example, phrases like "tax breaks for the rich" (instead of "tax reduction/reform"), "Obamacare" (rather than "healthcare reform") and "War on Terrorism" (instead of "War in Afghanistan") are all political labels with divisive meanings.

"Answers to questions that include these terms are more likely to be used by one party to validate their agendas," Clark explains. Thus, this is not a projectable measurement of public sentiment on issues.

Reineke also suggests considering these three guidelines regarding poll answers:

Response options should be exhaustive, meaning that any possible response is represented by a response option.

Response options should be mutually exclusive, meaning that each participant needs one and only one option to indicate his or her answer.

Pollsters, and consumers of their results, should also pay attention to potential order effects, meaning the ways in which a previous question, or a participant's response to it, might affect the interpretation of, or responses to, the questions that follow.

Population Sampling
"Polls work by contacting a sample of the population of interest," says Reineke. That sample should be representative, meaning it should have the same proportion of all important characteristics as the population.

Representative samples are often achieved through random sampling, which means every member of the population has an equal chance of being selected, he says. "Pollsters should be prepared to explain how their sampling is random if they claim it is so."

"In cases where sampling is not random, pollsters should be able to explain how their sample is representative of the population, and provide appropriate cautions about the extension of results to groups who were not adequately represented in the sample," continues Reineke.

Sample Size
"The size required for a random sample to be representative of the population in question is dependent on the size of the population," says Reineke. "The larger the sample, the smaller the margin of error."

In the simplest terms, the "margin of error" is a statistic that indicates how far results from the selected sample are expected to stray from the true values in the entire population.
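Neither expert spells out the formula, but for a simple random sample the standard calculation is short. A Python sketch of my own, using the conventional worst-case assumption of a 50/50 split:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # Half-width of the confidence interval for a proportion:
        #   n -- sample size
        #   p -- assumed proportion (0.5 is the worst case)
        #   z -- z-score; 1.96 corresponds to 95 percent confidence
        return z * math.sqrt(p * (1 - p) / n)

    print(f"{margin_of_error(1000):.1%}")  # +/- 3.1% for a sample of 1,000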

Look at the margin of error when evaluating polls, suggests Schultz. "I would say any poll with a margin of error greater than +/- 4 is meaningless, since that means the results could be off by as much as eight points."

Interestingly, there is not much difference between the margin of error for a sample of 5,000 Americans vs. a sample of a million Americans, says Reineke. However, there is a significant difference in margin of error for a sample of 500 Americans vs. 2,500 Americans.
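Running a few sizes through that same formula (my arithmetic, at 95 percent confidence and a 50/50 split) makes the diminishing returns concrete:

    import math

    for n in (500, 2_500, 5_000, 1_000_000):
        moe = 1.96 * math.sqrt(0.25 / n)
        print(f"n = {n:>9,}: +/- {moe:.1%}")

    # n =       500: +/- 4.4%
    # n =     2,500: +/- 2.0%
    # n =     5,000: +/- 1.4%
    # n = 1,000,000: +/- 0.1%

Quintupling the sample from 500 to 2,500 cuts the margin of error by more than half; multiplying it by 200, from 5,000 to a million, buys barely more than a point.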

Statistical formulas aside, as a rule of thumb, you should look for a sample of 500 to 1,000 for state polls, and 1,000 to 2,000 for national polls, says Reineke.

"For presidential polls, I am suspect of any poll with survey samples of much less than 1,000 people," agrees Schultz. "They probably need about 1,200 to 1,500 people to be accurate, especially if one wants to tap into swing voters or the views of particular subgroups."

Also, ignore any poll that does not have a confidence level of at least 95 percent, says Schultz. Some polls use a confidence level of only 90 percent, meaning the pollsters are only 90 percent confident that the true population value falls within the reported margin of error. In other words, about one time in ten the sample's answers would miss the true population values entirely (not good).
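The difference between the two confidence levels comes down to the z-score plugged into the margin-of-error formula; 1.645 and 1.96 are the standard values for 90 and 95 percent. A quick sketch of my own:

    import math

    n = 1_000
    for label, z in (("90 percent", 1.645), ("95 percent", 1.96)):
        print(f"{label}: +/- {z * math.sqrt(0.25 / n):.1%}")

    # 90 percent: +/- 2.6%
    # 95 percent: +/- 3.1%

A 90 percent poll advertises a narrower, more impressive-looking margin of error, but only because it accepts being wrong about one time in ten -- which is exactly the problem Schultz describes.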

Furthermore, polls are only as good as the underlying assumptions that go into them, continues Schultz. For example, a poll in which 50 percent of respondents identify as Democrats over-represents Democrats relative to the electorate, and its results will be skewed accordingly.

That's why samples are sometimes weighted to better represent the population of interest, says Reineke. For example, if African-American males ages 18-35 are 1 percent of the sample, but 2 percent of the population, a pollster might mathematically adjust the sample so that responses of individuals in that demographic actually count as two responses each, thus better reflecting the population.
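A bare-bones version of that adjustment, with made-up numbers mirroring Reineke's example (this is my illustration, not a pollster's production code):

    # Weight = (group's share of the population) / (group's share of the sample).
    # The under-sampled group (1% of the sample, 2% of the population) gets
    # weight 2.0, so each of its responses counts twice.
    weights = {"group_a": 0.02 / 0.01, "group_b": 1.0}

    # Respondents as (group, answer to a yes/no question) pairs.
    responses = [("group_a", 1), ("group_b", 0), ("group_b", 1)]

    weighted_yes = sum(weights[g] * answer for g, answer in responses)
    weighted_n = sum(weights[g] for g, _ in responses)
    print(weighted_yes / weighted_n)  # weighted estimate of the "yes" share

Real pollsters weight on several dimensions at once (age, sex, race, region and so on), but the principle is the same.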

Regardless, pollsters should report their sample size and their margin of error, and provide information about how they sampled so that others can evaluate their claims and methods, Reineke stresses.

Gracie
