Polls and surveys

Reporting on public opinion research requires rigorous inspection of a poll's methodology, provenance and results. The mere existence of a poll is not enough to make it news. Do not feel obligated to report on a poll or survey simply because it meets AP's standards.

Poll results that seek to preview the outcome of an election must never be the lead, headline or single subject of any story. Pre-election horse race polling can and should inform reporting on political campaigns, but no matter how good the poll or how wide a candidate's margin, results of pre-election polls always reflect voter opinion before ballots are cast. Voter opinions can change before Election Day, and they often do.

When evaluating a poll or survey, be it a campaign poll or a survey on a topic unrelated to politics, the key question to answer is: Are its results likely to accurately reflect the opinion of the group being surveyed?

Generally, for the answer to be yes, a poll must:

  • Disclose the questions asked, the results of the survey and the method by which it was conducted.
  • Come from a source without a stake in the outcome of its results.
  • Scientifically survey a random sample of a population, in which every member of that population has a known probability of inclusion.
  • Report the results in a timely manner.

Polls that pass these tests are suitable for publication.

Do not report on surveys in which the pollster or sponsor of research refuses to provide the information needed to answer these questions.

Always include a short description of how a poll meets the standards, allowing readers and viewers to evaluate the results for themselves: The AP-NORC poll surveyed 1,020 adults from Dec. 7-11 using a sample drawn from NORC's probability-based AmeriSpeak Panel, which is designed to be representative of the U.S. population.

Some other key points:

  • Comparisons between polls are often newsworthy, especially those that show a change in public opinion over time. But take care when comparing results from different polling organizations, as differences in poll methods and question wording — and not a change in public opinion — may be the cause of differing results. Only infer that a difference between two polls is caused by a change in public opinion when those polls use the same survey methodology and question wording.
  • Some organizations publish poll averages or aggregates that attempt to combine the results of multiple polls into a single estimate in an effort to capture the overall state of public opinion about a campaign or issue. Averaging poll results does not eliminate error or preclude the need to examine the underlying polls and assess their suitability for publication. In campaign polling, survey averages can provide a general sense of the state of a race. However, only those polls that meet these standards should be included in averages intended for publication, and it is often preferable to include the individual results of multiple recent surveys to show where a race stands.
  • Some pollsters release survey results to the first decimal place, which implies a greater degree of precision than is possible from scientific sampling. Poll results should always be rounded to whole numbers. Margins of sampling error can be reported to the first decimal place.
  • Take care to use accurate language when describing poll results. For example, only a group comprising more than 50 percent of the population can be said to be a majority. If the largest group includes less than 50 percent of the surveyed population, it is a plurality. See majority, plurality.
  • In most cases, poll and survey may be used interchangeably.
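The rounding and majority/plurality guidance above can be sketched as a pair of small helpers. This is an illustrative sketch with hypothetical names, not AP code:

```python
def publishable_result(pct):
    """Round a poll result to a whole number, per the guidance above.

    Margins of sampling error, by contrast, may keep one decimal place.
    """
    return round(pct)

def group_label(pct):
    """A group is a majority only above 50 percent; otherwise the largest
    group is a plurality."""
    return "majority" if pct > 50 else "plurality"

print(publishable_result(46.7), group_label(46.7))  # 47 plurality
```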

Polls are not perfect

When writing or producing stories that cite survey results, take care not to overstate the accuracy of the poll. Even a perfectly executed poll does not guarantee perfectly accurate results.

It is possible to calculate the potential error of a poll of a random sample of a population, and that detail must be included in a story about a poll's results: The margin of sampling error for all respondents is plus or minus 3.7 percentage points. See Margin of error later in this entry.

Sampling error is not the only source of survey error, merely the only one that can be quantified using established and accepted statistical methods. Among other potential sources of error: the wording and order of questions, interviewer skill and refusal to participate by respondents randomly selected for a sample. As a result, total error in a survey may exceed the reported margin of error more often than would be predicted based on simple statistical calculations.

Be careful when reporting on the opinions of a poll's subgroup — women under the age of 30, for example, in a poll of all adults. Find out and consider the sample size and margin of error for that subgroup; the sampling error may be so large as to render any reported difference meaningless. Results from subgroups totaling less than 100 people should not be reported.
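As an illustration of why subgroup results demand extra caution, the simplified worst-case formula for a 95 percent margin of sampling error, z * sqrt(p(1-p)/n), shows how quickly error grows as the sample shrinks. This is a textbook sketch; published margins are usually larger because weighting adds a design effect not modeled here:

```python
import math

def moe_points(n, p=0.5, z=1.96):
    """Worst-case 95% margin of sampling error, in percentage points.

    A simplified textbook formula; real polls report larger margins
    once weighting and design effects are accounted for.
    """
    return z * math.sqrt(p * (1 - p) / n) * 100

full_sample = moe_points(1000)  # about 3.1 points
subgroup = moe_points(150)      # about 8.0 points, often too large for
                                # differences within the subgroup to be meaningful
```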

Very large sample sizes do not preclude the need to rigorously assess a poll's methodology, as they may be an indicator of an unscientific and unreliable survey. Often, polls with several thousand respondents are conducted via mass text message campaigns or website widgets and are not representative of the general population.

There is no single established method of estimating error for surveys conducted online among people who volunteer to take part in surveys. While they may not report a margin of error, these surveys are still subject to error, uncertainty and bias.

Margin of error

A poll conducted via a scientific survey of a random sample of a population will have a margin of sampling error. This margin is expressed in terms of percentage points, not percent.

For example, consider a poll with a margin of error of 5 percentage points. Under ideal circumstances, its results should reflect the true opinion of the population being surveyed, within plus or minus 5 percentage points, 95 of every 100 times that poll is conducted.

Sampling error is not the only source of error in a poll, but it is one that can be quantified. See the first section of this entry.

The margin of error varies inversely with the poll's sample size: The fewer people interviewed, the larger the margin of error. Surveys with 500 respondents or more are preferable.
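Inverting the same worst-case sampling formula gives a rough sense of how many respondents a given margin of error requires. The numbers below are illustrative estimates that ignore weighting and design effects, not a sampling plan:

```python
import math

def respondents_needed(moe_points, p=0.5, z=1.96):
    """Approximate sample size for a target margin of sampling error.

    Solves n = p(1-p) * (z / m)^2 with m expressed as a proportion;
    a rough textbook estimate, ignoring weighting and design effects.
    """
    m = moe_points / 100
    return math.ceil(p * (1 - p) * (z / m) ** 2)

for target in (3, 4, 5):
    print(target, respondents_needed(target))
# Roughly 1,068, 601 and 385 respondents: halving the margin of error
# requires about four times as many interviews.
```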

Evaluating the margin of error is crucial when describing the results of a poll. Remember that the survey's margin of error applies to every candidate or poll response. Nominal differences between two percentages in a survey may not always be meaningful.

Use these rules to avoid exaggerating the meaning of poll results and to decide when to report that a poll finds one candidate is leading another, or that one group is larger than another.

  • If the difference between two response options is more than twice the margin of error, then the poll shows one candidate is leading or one group is larger than another.
  • If the difference is at least equal to the margin of error, but no more than twice the margin of error, then one candidate can be said to be apparently leading or slightly ahead, or one group can be said to be slightly larger than another.
  • If the difference is less than the margin of error, the poll says a race is close or about even, or that two groups are of similar size.
  • Do not use the term statistical dead heat, which is inaccurate if there is any difference between the candidates. If the poll finds the candidates are exactly tied, say they are tied. For very close races that aren't exact ties, the phrase essentially tied is acceptable, or use the phrases above.
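As a sketch, the rules above map onto a simple decision helper. The function name is hypothetical, and the margin of error is given in percentage points:

```python
def describe_difference(a, b, moe):
    """Choose language for two poll percentages per the rules above.

    a, b: response percentages; moe: margin of sampling error in points.
    """
    diff = abs(a - b)
    if diff > 2 * moe:
        return "leading"             # one candidate leads / one group is larger
    if diff >= moe:
        return "apparently leading"  # or "slightly ahead" / "slightly larger"
    if diff == 0:
        return "tied"
    return "about even"              # a close race; "essentially tied" is acceptable

print(describe_difference(48, 44, 3))  # apparently leading: 4-point gap, between 3 and 6
```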

There is no single established method of estimating error for surveys conducted online among people who volunteer to take part in surveys. These surveys are still subject to error, uncertainty and bias.

Evaluating polls and surveys

When evaluating whether public opinion research is suitable for publication, consider the answers to the following questions.

— Has the poll sponsor fully disclosed the questions asked, the results of the survey and the method by which it was conducted?

Reputable poll sponsors and public opinion researchers will disclose the methodology used to conduct the survey, including the questions asked and the results to each, so that their survey may be subject to independent examination and analysis by others. Do not report on surveys in which the pollster or sponsor of research refuses to provide such information.

Some public opinion researchers agree to publicly disclose their methodology as part of the American Association for Public Opinion Research's transparency initiative. Participation does not mean polls from these researchers are automatically suitable for publication, only that they are likely to meet the test for disclosure. A list of transparency initiative members can be found on the association's website at: http://www.aapor.org/Standards-Ethics/Transparency-Initiative/Current-Members.aspx

— Does the poll come from a source without a stake in the outcome of its results?

Any poll suitable for publication must disclose who conducted and paid for the research. Find out the polling firm, media outlet or other organization that conducted the poll. Include this information in all poll stories, so readers and viewers can be aware of any potential bias: The survey was conducted for Inside Higher Ed by Gallup.

Polls paid for by candidates or interest groups may be designed to produce results that are beneficial to that candidate or group, and they may be released selectively as a campaign tactic or publicity ploy. These polls should be carefully evaluated and usually avoided.

— How are people selected to take part in the poll? Does the poll rely on a random sample of a population, in which every member of that population has a known probability of inclusion?

These are known as probability-based polls, and they are the best method of ensuring the results of a survey reflect the true opinion of the group being surveyed.

Those conducted by telephone must include people interviewed on their cellphones. Those that only include landline interviews have no chance of reaching the more than half of American adults who have only a mobile phone.

Avoid polls in which computers conduct telephone interviews, sometimes referred to as IVR (for interactive voice response), automated or robopolls. These surveys cannot legally dial cellphones, and while they sometimes are supplemented with online interviews to reach cellphone users, such supplements are usually of dubious quality. These surveys also cannot randomly select respondents within a household, which can lead to underrepresentation of some demographic groups such as younger adults.

Polls conducted online are valid if they survey a panel of respondents recruited randomly from the entire population, with those who lack internet access given internet access or the option to take surveys over the phone.

Many online polls are conducted using opt-in panels, which are composed of people who volunteer to take part, often in response to web advertisements. As of 2018, research into such surveys finds that traditional demographic weighting is often insufficient to make such opt-in panels representative of the population as a whole. Results among demographic groups such as African-Americans and Hispanics can be especially inaccurate, and biases within these groups are especially difficult to correct. These surveys lack representation of people without internet access, a population that differs in key ways from those who do have internet access.

However, opt-in surveys that use additional variables as part of their weighting schemes have shown more promising results, particularly those that use a probability-based sample that is supplemented and/or combined with other sample sources. Because of the difficulty in assessing such approaches and ongoing research into how well they reduce bias, results from such polls should be published only after careful consideration of the techniques used to ensure the results are truly representative. The sample selection and weighting process must be disclosed in detail before such polls can be considered for publication.

Do not accept assurances from pollsters that "proprietary" sampling and weighting methods that are not made available for review and scrutiny are able to produce representative results.

Ballots of visitors to a website, surveys of a company's email list and polls conducted among Twitter users rely on self-selected samples and should always be avoided. They are both unrepresentative of a broader population and subject to manipulation.

For surveys conducted by mail and sent to a random selection of addresses, pay especially close attention to how long it took to field these polls, especially if they include topics in the news or pertaining to elections. Before publishing results of polls by mail, carefully consider whether the results of time-sensitive questions may be outdated.

Outside of the United States, many polls are conducted using in-person interviews of people at randomly selected locations. Many are of high quality. Pay close attention to how the pollsters tried to include rural and other hard-to-reach places in the survey sample.

Many political polls are based on interviews with registered voters, since registration is usually required for voting. Polls may be based on likely voters closer to an election; if so, ask the pollster how that group was identified. Polls that screen for likely voters at the sample level by only attempting to interview those who have a history of voting may include fewer nonvoters, but may also exclude some potential new voters.

— Are the results being reported in a timely manner?

Public opinion can change quickly, especially in response to events. Make every effort to report results from a poll as close to the period when the survey was conducted as possible.

Be careful when considering results from polls fielded immediately after major events, such as political debates, which often sway public opinion in ways that may be only temporary. Similarly, if events directly related to a poll's questions have taken place since they were asked, the results may no longer reflect the opinion of the populations being surveyed. That does not mean they are no longer valid, but they must be placed in the proper context. Often, such results are valuable in describing how public opinion has changed — or remained consistent — in the wake of such events.

In all cases, consider whether it is useful to inform readers and viewers directly when the poll was conducted: The poll was taken three days after the president proposed new tax cuts. The poll was conducted the week before Congress passed the new health care legislation.

The timeliness of results is especially crucial in reporting on pre-election polls. Voter opinions often change during the course of a political campaign, and results from questions asked several weeks or, in some cases, days earlier likely no longer provide an accurate picture of the state of a race.

When describing voter opinions about candidates for political office, it's best to summarize results from several recent polls, or the trend in polls over time, rather than cite the results of a single survey in isolation.

Methods statement

When publishing poll or survey results, as an additional effort at transparency, consider also publishing a stand-alone statement about the survey and its methods:

The Associated Press-NORC Center for Public Affairs Research poll on the nation's priorities was conducted by NORC at the University of Chicago from Nov. 30 to Dec. 4. It is based on online and telephone interviews of 1,444 adults who are members of NORC's nationally representative AmeriSpeak Panel.

The original sample was drawn from respondents selected randomly from NORC's national frame based on address-based sampling and recruited by mail, email, telephone and face-to-face interviews.

NORC interviews participants over the phone if they don't have internet access. With a probability basis and coverage of people who can't access the internet, surveys using AmeriSpeak are nationally representative.

Interviews were conducted in English.

As is done routinely in surveys, results were weighted, or adjusted, to ensure that responses accurately reflect the population's makeup by factors such as age, sex, race, education, region and phone use.

No more than 1 time in 20 should chance variations in the sample cause the results to vary by more than plus or minus 3.7 percentage points from the answers that would be obtained if all adults in the U.S. were polled.

There are other sources of potential error in polls, including the wording and order of questions.

The questions and results are available at http://www.apnorc.org/
