Researchers conduct surveys to collect data, such as political opinions or product preferences, from a small audience, and then use statistical inference to generalize that information to a larger population. However, unless a complete census is taken, using a hypothetical method that allows every respondent to express their answers perfectly, survey results will always contain some amount of error. Survey error is the difference between the data a flawless complete census would produce and the data an imperfect sample actually produces. The acts of taking a sample and using survey instruments inevitably distort the true values of the data being collected, but by staying cognizant of common sources of error and making smart decisions to minimize their impact, savvy researchers can still collect valid and generalizable data.
Dillman, Smyth, and Christian identify the four pillars of total survey error as coverage error, sampling error, nonresponse error, and measurement error. Let’s define each source of error, walk through common examples of where it can occur, and consider methods to minimize its negative effects.
Coverage error occurs when the list that is sampled from (the sampling frame) is inaccurate or incomplete.
Example: A market research firm wants to learn about the opinions of doctors at Large Hospital General. The firm receives a list from the hospital that is supposed to contain the names and phone numbers of all the doctors. However, it includes incorrect phone numbers for some doctors and duplicate records for others. If a sample is drawn from this list, these flaws mean that not every doctor has an equal chance of being selected, which increases survey error.
How to minimize: This will depend on the context of the project. Take the appropriate steps to ensure that the list you will draw from is as complete and correct as possible, so that all members of the population have the same chance of being sampled. In general, before drawing a sample from a list, double-check it for duplicates, obvious typos, and other errors. Additionally, if possible, confirm with the list provider that no members of the survey population are missing.
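As an illustration, here is a minimal Python sketch of this kind of pre-sampling list hygiene. The contact records, names, and the simple ten-digit phone check are all hypothetical, invented for the example:

```python
import re

def clean_contact_list(contacts):
    """Drop duplicate records and flag malformed phone numbers so every
    member of the population has one, and only one, valid entry."""
    seen = set()
    cleaned, flagged = [], []
    for name, phone in contacts:
        digits = re.sub(r"\D", "", phone)   # keep digits only
        key = (name.strip().lower(), digits)
        if key in seen:                     # duplicate record: skip it
            continue
        seen.add(key)
        if len(digits) != 10:               # assumes 10-digit US numbers
            flagged.append((name, phone))   # needs manual follow-up
        else:
            cleaned.append((name, digits))
    return cleaned, flagged

# Hypothetical list with a duplicate and an incomplete number.
contacts = [
    ("Dr. Smith", "555-123-4567"),
    ("Dr. Smith", "(555) 123-4567"),  # same doctor, different formatting
    ("Dr. Jones", "555-99"),          # incomplete phone number
]
cleaned, flagged = clean_contact_list(contacts)
```

Flagged records would then go back to the list provider for correction before the sample is drawn.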
Sampling error occurs whenever a sample is taken rather than a census.
Example: Central Elementary School wants to conduct a survey to better understand how to prioritize improvements to the school. Mr. Principal decides to randomly sample 100 people from a pool of the school’s students, teachers, and other staff. The random sample of 100 ends up including no students, and the subsequent survey results misrepresent the views of the complete school population (the students really wanted more recess, but none of them got to express that opinion through the survey).
How to minimize: There are three common steps one can take to minimize sampling error: 1) Increase sample size, 2) Use the right sample design, and 3) Properly use a margin of error and confidence interval to interpret results in light of the estimated sampling error.
Increasing the number of survey respondents is perhaps the most straightforward way to reduce sampling error. As a larger subset of the population gets a chance to share its answers, the difference between the data values from the sample and the true data values of the population shrinks.
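A quick simulation can make this concrete. The sketch below, using a hypothetical population of 10,000 satisfaction scores, compares the average gap between the sample mean and the true population mean at two sample sizes:

```python
import random
import statistics

random.seed(0)

# Hypothetical population: 10,000 satisfaction scores from 1 to 5.
population = [random.randint(1, 5) for _ in range(10_000)]
true_mean = statistics.mean(population)

def avg_sampling_error(n, trials=500):
    """Average absolute gap between a sample mean of size n and the
    true population mean, over many repeated samples."""
    gaps = [abs(statistics.mean(random.sample(population, n)) - true_mean)
            for _ in range(trials)]
    return statistics.mean(gaps)

small = avg_sampling_error(25)    # small samples: larger typical gap
large = avg_sampling_error(400)   # larger samples: smaller typical gap
```

Under these assumptions, the typical gap for samples of 400 comes out well below the gap for samples of 25, illustrating how sampling error shrinks as sample size grows.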
Using the right sample design can help reduce sampling error. In the school example given, a stratified sample could have provided more generalizable results. Divide the population into subsets of faculty, staff, and students. Sample each of these three groups separately, with the sample size being commensurate with the proportion of the population they represent.
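A minimal sketch of proportional stratified sampling for the school example follows; the roster sizes (500 students, 40 teachers, 10 other staff) and member names are invented for illustration:

```python
import random

random.seed(1)

# Hypothetical school roster, divided into three strata.
school = {
    "students": [f"student_{i}" for i in range(500)],
    "teachers": [f"teacher_{i}" for i in range(40)],
    "staff":    [f"staff_{i}"   for i in range(10)],
}

def stratified_sample(strata, total_n):
    """Sample each stratum in proportion to its share of the population.
    (Rounding can shift the total slightly for other stratum sizes.)"""
    pop_size = sum(len(members) for members in strata.values())
    sample = []
    for members in strata.values():
        n = round(total_n * len(members) / pop_size)
        sample.extend(random.sample(members, n))
    return sample

sample = stratified_sample(school, total_n=100)
```

With these proportions, a sample of 100 includes roughly 91 students, 7 teachers, and 2 staff, so no group can be skipped entirely the way the students were in the example.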
Understanding how to interpret a margin of error and confidence interval doesn’t reduce sampling error, but it allows researchers to properly contextualize results in light of the sampling error that does exist. Commonly, statistics may be presented in the following form:
Example result = 100 ± 5, at the 95% confidence level
The margin of error of ±5 indicates that, when interpreting the estimate of 100, researchers should also consider values in the interval 95 to 105 to be plausible.
The 95% confidence level tells researchers that they can be 95% confident that the interval of 95 to 105 contains the true population value. In other words, if the survey were conducted on 100 separate occasions and an interval calculated each time, we would expect the resulting intervals to contain the true population value on roughly 95 of those occasions.
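For illustration, a short Python sketch that computes a point estimate and an approximate 95% margin of error for a hypothetical sample of 400 measurements centered near 100, using the normal approximation (z = 1.96):

```python
import math
import random
import statistics

def mean_with_moe(sample, z=1.96):
    """Return the sample mean and its margin of error, using the normal
    approximation (z = 1.96 for ~95% confidence, large samples)."""
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return mean, z * se

# Hypothetical sample: 400 measurements drawn around a true value of 100.
random.seed(2)
sample = [random.gauss(100, 10) for _ in range(400)]
mean, moe = mean_with_moe(sample)
low, high = mean - moe, mean + moe   # the reported confidence interval
```

Results would then be reported in the form above: the estimate, plus or minus the margin of error, at the stated confidence level.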
Nonresponse error occurs when respondents who do not respond to the survey, or to some of its questions, systematically differ from those who do.
Example: A company that does business in New York City and Los Angeles randomly calls 100 customers from its customer database for a satisfaction survey. The calls take place from 12:30 to 1:30 p.m. ET. When the data are analyzed, a researcher discovers that 90 of the completions came from customers in NYC and only 10 came from customers in LA. This worries the researcher, because the company’s customer base is split evenly between the two cities. They realize that while the early-afternoon survey time was convenient for New Yorkers on their lunch break, most Angelenos were too busy mid-morning to answer the survey.
How to minimize: Nonresponse error can be minimized by 1) coming to a careful understanding of who the target population is and 2) recognizing differences within that population that may affect the ability or willingness of different subsets to respond to the survey. In the example above, time of day influenced which subset could respond to the phone survey. Relying on a single mode of data collection, or fielding web surveys that are not compatible with phones, are other ways a data collection method can affect subsets of the target population differently. Using multiple methods, or methods that have been demonstrated to work with the target population, minimizes nonresponse error and helps ensure that no respondents are systematically excluded from data collection.
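One simple diagnostic is to compare response rates across known subgroups of the sample frame. The sketch below uses hypothetical call records mirroring the two-city example:

```python
from collections import Counter

def response_rates(sample_frame, respondents, group_of):
    """Response rate per subgroup: completions divided by invitations.
    Large gaps between groups are a warning sign of nonresponse error."""
    invited = Counter(group_of(p) for p in sample_frame)
    responded = Counter(group_of(p) for p in respondents)
    return {g: responded[g] / invited[g] for g in invited}

# Hypothetical frame: 50 customers called in each city.
frame = [("NYC", i) for i in range(50)] + [("LA", i) for i in range(50)]
# Hypothetical completions: 45 from NYC, only 5 from LA.
answered = [("NYC", i) for i in range(45)] + [("LA", i) for i in range(5)]

rates = response_rates(frame, answered, group_of=lambda p: p[0])
```

Here the 0.9 versus 0.1 gap between cities would prompt the researcher to change the calling window or add another mode of data collection.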
Measurement error occurs when respondents cannot, or do not, express their true answers to survey questions.
Example: Bob is taking a survey about his favorite kinds of cars. A question asks Bob how much he agrees with the statement “Red is the best color for cars.” The response options are: Strongly agree, Agree, Somewhat agree, Neutral, and Strongly disagree.
Bob doesn’t strongly disagree that red is the best color for cars. However, he doesn’t like red, and because the scale is not balanced between positive and negative options, “Strongly disagree” is his only negative choice. Researchers who look at these data may misjudge how strongly people disagree that red is the best color for cars.
Later on in the same survey, Bob reads the following question:
“Of all of the previous questions, which one was asked that you don’t think should have been omitted from this survey?”
Bob leaves the survey question blank because he is confused by the wording and decides he is not sure what it is asking. Researchers do not collect data for that question and miss out on valuable information.
The final section of the survey is a long series of questions that ask Bob to pick his favorite car brand in various scenarios. Bob doesn’t really care about car brands, so he quickly picks the first brand listed in each scenario. The first option is always “Cool Brand X,” so when researchers interpret these data they overestimate how popular “Cool Brand X” really is.
How to minimize: Measurement error can be the most difficult to avoid, as there are many possible sources. Still, some of the most common solutions include: 1) Avoid having questions that do not allow respondents to express their true opinion, 2) Avoid having confusing questions, and 3) Avoid question order effects.
Survey questions often use scales to gauge how interested a respondent is in something, to what extent they agree with something, or some other measurement that can be placed on a spectrum from positive to negative. When using such scales, researchers can reduce measurement error by ensuring the scales are balanced, giving respondents a better chance to properly express their opinions. Additionally, for questions that respondents may not be able or willing to answer, surveys can benefit from adding “I don’t know” or “I prefer not to answer” response options, so that respondents do not refuse to answer entirely or intentionally answer incorrectly.
Make sure questions are no more complicated than they need to be. By writing simple questions that avoid double negatives, confusing phrasing, and ambiguous words, researchers can avoid instances where respondents do not understand how to answer properly.
There are many types of question order effects. The order of the responses in a question or the order of the questions in the survey itself can affect how respondents answer. Respondents may pick the first response they see, the last response they hear, the response they feel best aligns with their previous survey answers, or otherwise answer the survey questions in a way that is impacted by question and response order. By randomizing the presentation of response options, or the order of questions where appropriate, researchers can negate question order effects.
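A minimal sketch of per-respondent response-order randomization, reusing the hypothetical car-brand question from the example (brand names invented for illustration):

```python
import random

def randomized_options(options, rng=None):
    """Return a shuffled copy of the response options for one respondent,
    so no option is systematically shown first."""
    rng = rng or random.Random()
    shuffled = list(options)
    rng.shuffle(shuffled)
    return shuffled

brands = ["Cool Brand X", "Brand Y", "Brand Z", "Brand W"]

# Simulate 1,000 respondents, each seeing an independent random order.
per_respondent = [randomized_options(brands, random.Random(seed))
                  for seed in range(1000)]
first_counts = {b: sum(opts[0] == b for opts in per_respondent)
                for b in brands}
```

Because the first position rotates across respondents, a respondent like Bob who always picks the first option no longer inflates a single brand’s apparent popularity.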
Survey error always exists in the real world. However, by understanding common sources of survey error and equipping themselves with tools to combat their effects, researchers can still confidently draw conclusions from their data.
Dillman, D., Smyth, J., & Christian, L. (2014). Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method (4th ed.).
Lohr, S. (2010). Sampling: Design and Analysis (2nd ed.).
Lock, R., Lock, P., Morgan, K., Lock, E., & Lock, D. (2013). Statistics: Unlocking the Power of Data.