7 Types of Survey Bias & How to Prevent Them

Online surveys are one of the most frequently used types of market research, providing a cost-effective and efficient way to collect data from a large number of respondents. But, like any methodology, there are benefits and drawbacks to survey-based research. In particular, online surveys can be susceptible to a variety of different types of bias.

But what is bias? Put simply, bias refers to systematic errors that can influence the results and conclusions drawn from data collection and analysis. Bias can creep its way into any stage of the research process. But it’s important to understand that it’s almost impossible to eliminate bias completely. As a researcher, your job is to try and identify and mitigate material bias as much as possible.

In this article, I’ll focus on bias related to the data collection stage of survey-based research (e.g. sourcing samples, questionnaire design, survey completion, etc.). In total, we’ll look at seven of the most common and important types of bias, which include:

  1. Sampling Bias

  2. Non-Response Bias

  3. Self-Selection Bias

  4. Social Desirability Bias

  5. Question Order Bias

  6. Acquiescence Bias

  7. Response Bias

1. Sampling Bias

Also known as selection bias - but not to be confused with self-selection bias (see below) - this arises when certain groups of a population are more likely to be surveyed (i.e. sampled) than others. This can happen when the survey is distributed through specific channels or platforms that have demographic or psychographic (e.g. ideological) skews.

For example, a survey distributed through a social media platform like TikTok may over-represent younger groups while under-representing older groups who are less active online or on social media. Alternatively, surveys or polls distributed by specific entities or people of influence can also result in sampling bias.

Take the YouTube poll, shown below, published by comedian-turned-political commentator Russell Brand. Brand is known to have an audience that skews ideologically right / conservative.

This poll had 172,000+ votes, which is an enormous sample size. Most political polls typically have only 1,000 to 2,000 respondents, so one might assume that a poll of 172K people is representative. But it’s not, because of the inherent demographic and psychographic skews of both YouTube’s user base and Brand’s own following on the platform. Regardless of how you feel about Brand or politics in general, there’s no question that Brand’s audience skews ideologically. This is why no media outlet or authority would cite a YouTube poll conducted by Russell Brand as a nationally representative sample. Hence, this is a great example of sampling bias at work.

How to prevent or mitigate this type of bias?

Here are some ways you can try to mitigate the effects of sampling bias:

  1. If you have the budget, work with a reputable panel provider to acquire your samples

  2. If you’re not using a panel provider, use multiple distribution channels (e.g. email contact list, social media, website, etc.) to reach different demographics, and target large network communities.

  3. Stratify the sample by enforcing quotas for key demographics (e.g. 50/50 split on gender, etc.).
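The quota approach in point 3 can be sketched in code. This is a minimal illustration, assuming a simple 50/50 gender quota; the quota targets and the simulated respondent stream are hypothetical, not taken from any particular survey tool:

```python
import random

# Hypothetical 50/50 gender quota: targets and incoming stream are
# illustrative assumptions, not real panel data.
QUOTAS = {"male": 500, "female": 500}  # target counts per stratum
counts = {key: 0 for key in QUOTAS}

def accept(respondent):
    """Accept a respondent only if their stratum's quota isn't full."""
    stratum = respondent["gender"]
    if counts.get(stratum, 0) < QUOTAS.get(stratum, 0):
        counts[stratum] += 1
        return True
    return False  # quota full: screen this respondent out

# Simulate a skewed incoming stream (70% male) to show quotas correcting it.
random.seed(42)
stream = [{"gender": "male" if random.random() < 0.7 else "female"}
          for _ in range(5000)]
sample = [r for r in stream if accept(r)]

print(counts)  # each stratum is capped at its quota
```

Even though the incoming stream skews heavily male, the accepted sample ends up balanced, which is the point of enforcing quotas at recruitment time rather than fixing skews after the fact.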

2. Non-Response Bias

Non-response bias happens when the people who don't respond to a survey are significantly different from those who do, in ways that matter to the research. This usually occurs when factors prevent respondents from willingly participating in your survey.

For example, let’s say you launch a survey that deals with a subject matter that is taboo or highly sensitive. In this case, people who you would want to survey as part of your target audience could be much more likely to not participate. Alternatively, non-response bias can occur as a result of decisions you make around how the survey is configured, such as whether participation is anonymous.

How to prevent or mitigate this type of bias?

Here are some ways you can try to mitigate the effects of non-response bias:

  • Consider the subject matter of your survey. If you’re researching a sensitive or taboo topic, make sure you keep the results anonymous, and explain to the respondents up-front how you will keep their identity safe.

  • Monitor survey response demographics, and follow up with non-responders. Be open and transparent about how the data will be used and whether the results are truly anonymized.

  • Provide incentives to encourage participation.
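Monitoring response demographics, as suggested above, can be as simple as comparing the mix of completed responses against known population benchmarks. Here’s a minimal sketch; the benchmark shares and response counts are made-up numbers for illustration, not real census or survey data:

```python
# Compare the age mix of completed responses against population
# benchmarks to spot potential non-response skew. All figures below
# are hypothetical.
population_benchmarks = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
completes = {"18-34": 220, "35-54": 180, "55+": 100}  # responses received

total = sum(completes.values())
for group, benchmark in population_benchmarks.items():
    observed = completes[group] / total
    gap = observed - benchmark
    # Flag any group under-represented by more than 5 percentage points.
    flag = "  <-- follow up with this group" if gap < -0.05 else ""
    print(f"{group}: observed {observed:.0%} vs benchmark {benchmark:.0%}{flag}")
```

In this made-up example the 55+ group is well below its benchmark, which is exactly the signal you’d use to trigger follow-ups or targeted reminders with that group.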

3. Self-Selection Bias

Self-selection bias is similar to non-response bias, but is also unique in some interesting ways. To best explain what this is, let’s turn to the following definition from Izabela Kaźmierczak et al., who define self-selection bias as:

(A type of bias that occurs when) “Respondents select the type of psychological studies that they want to participate in consistence with their needs and individual characteristics, which creates an unintentional self-selection bias”

Unlike non-response bias where respondents choose not to participate, self-selection occurs when a particular group is more likely to want to participate in your survey. This can occur because of the incentives you offer, or sometimes the respondent’s interest in the topic you’re researching.

For example, say you’re surveying the general population about their views on ride-sharing services like Uber and Lyft. To encourage participation, you offer a $5 Uber voucher as an incentive to complete the survey. Unfortunately, you may have just created a self-selection bias, because you’ve made your survey significantly more appealing to individuals who use (or intend to use) services like Uber. Simply put, the incentive to participate is in conflict with the audience you want to reach (i.e. the general population). Note that this wouldn’t be an issue if your target audience was only people who use or intend to use Uber.

This doesn’t mean that you shouldn’t offer incentives to participate in surveys. But you should avoid choosing an incentive that is significantly more appealing to a specific group. This is why cash or broadly used gift vouchers, like Amazon credit, usually work best. It’s also why reputable panel providers offer a variety of commonly used merchants, as opposed to only offering one option.

How to prevent or mitigate this type of bias?

Here are some ways you can try to mitigate the effects of self-selection bias:

  • Similar to selection bias, consider working with a professional panel provider if you have the budget.

  • Randomly invite participants from a list of potential respondents.

  • Limit the ability for individuals to participate based on their interest in the topic. One way to do this is to avoid describing what the survey is about in your invitation.

  • If you’re offering rewards or incentives, try to use options that are appealing to a broad population (e.g. cash, Amazon credit, etc.).

4. Social Desirability Bias

Social desirability bias occurs when respondents answer questions in a manner they believe is more socially acceptable or favourable, rather than being truthful. This often occurs when a survey delves into topics that are (or are perceived to be) sensitive in nature; such as questions about income, political views, or behaviours that may be judged by others.

This type of bias is mainly driven by a desire to conform to social norms, avoid judgment, or project a positive self-image. As a result, the data collected may be skewed, presenting a rosier or more acceptable version of reality.

For example, in a survey about healthy eating habits, respondents might overreport the frequency of consuming fruits and vegetables while underreporting junk food consumption.

Similarly, in a survey about behaviours at the workplace, employees may hesitate to critique company leadership or admit to low productivity for fear of repercussions. This type of bias can be especially prevalent in surveys addressing sensitive topics, such as income, political opinions, or personal behaviours.

How to prevent or mitigate this type of bias?

Here are some ways you can try to mitigate the effects of social desirability bias:

  • Assure respondents of anonymity to reduce the pressure to give socially desirable answers.

  • Use indirect questioning techniques, such as asking about behaviours in third-person scenarios.

  • Consider adding a text screen before questions that may be sensitive asking the respondents to be as truthful as possible.

5. Question Order Bias

Question order bias occurs when the sequence of questions in a survey influences the way respondents answer subsequent questions. A simple example of this can be found in a brand tracking study, a common use case for brand or marketing managers. Brand trackers typically push respondents through a sequence of questions about brand awareness, usage, consideration, etc. It’s common to start a brand tracker with questions about brand awareness, and this usually includes an aided and an unaided awareness question.

Aided awareness is where you ask which brands the respondent has heard of in a particular industry or sector, showing a list of brands from which they can select one or many options. Unaided awareness uses an open-ended question type, where you ask which brand first comes to mind when they think about a particular industry or sector, and the respondent types the answer into a text field. Here’s an example of what they look like.

When using these types of questions in a survey, it’s customary to run the unaided question first. This is because showing the brand list first could influence how they respond to the unaided question.

Question order effects can have an enormous impact on your data. There are plenty of scenarios where the potential risk of question ordering will skew results, such as the example above. Another common example can be found in an A/B test, where you run a survey to compare two or more variants of a product design, creative asset, etc. In this case, it’s common to configure question block randomization to randomize the order in which your variants are shown across respondents.
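Question block randomization is straightforward to sketch in code. This is a minimal illustration of the idea, not any specific survey tool’s implementation; the block names are hypothetical:

```python
import random

# Minimal sketch of question block randomization for an A/B test:
# each respondent sees the variant blocks in a random order, so order
# effects average out across the sample. Block names are hypothetical.
variant_blocks = ["variant_A_questions", "variant_B_questions"]

def build_survey_flow(respondent_id):
    """Return the block order shown to one respondent."""
    rng = random.Random(respondent_id)  # deterministic per respondent
    order = variant_blocks[:]
    rng.shuffle(order)
    # Fixed blocks (intro, demographics) stay in place; only the
    # variant blocks are randomized.
    return ["intro_block"] + order + ["demographics_block"]

# Across many respondents, each ordering appears roughly half the time.
orders = [tuple(build_survey_flow(i)[1:3]) for i in range(1000)]
share_a_first = sum(o[0] == "variant_A_questions" for o in orders) / len(orders)
print(f"Variant A shown first for {share_a_first:.0%} of respondents")
```

Seeding the generator per respondent also makes the flow reproducible, which is handy when debugging why a particular respondent saw a particular order.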

Unfortunately, there are plenty of cases where question ordering may impact your data in ways that are less obvious. In my professional experience, I’ve come across a few cases where my team and I found question ordering effects that weren’t inherently obvious, and which were discovered almost by accident.

What’s important is that you’re aware that question ordering can and will affect your data, and that before you hit publish on your survey, you ask yourself whether there are questions or sections that need to be re-ordered.

How to prevent or mitigate this type of bias?

Here are some ways you can try to mitigate the effects of question order bias:

  • If you’re running a brand tracker, avoid running aided awareness questions before unaided.

  • If you’re running an A/B test with two or more product variants (and a sequential design where one group is exposed to multiple variants), make sure you utilize question block randomization (note that this may only be available in paid survey tools).

  • Group related questions to minimize contextual effects.

  • Pilot test the survey to identify potential order effects.

6. Acquiescence Bias

Acquiescence bias, also known as agreement bias or the “yes-man” phenomenon, is the tendency for survey respondents to agree with statements or questions, regardless of their true opinions or beliefs. It’s similar to social desirability bias. However, acquiescence bias is when people agree with statements, no matter what they say, while social desirability bias is when people answer questions to make themselves look good or fit in with what others expect.

This type of bias can happen for many reasons, but a common factor is usually related to respondent engagement levels and fatigue. It’s no secret that filling out surveys isn’t the most exciting task, but the incentives offered, survey design and user experience can all contribute to keeping the respondent engaged enough to give you good data. On the other hand, if you publish a survey with poor incentives (not necessarily monetary), confusing questions or a poor user experience, fatigue and disengagement can set in.

The good news is that the effects of acquiescence bias can be easy to detect, as industry standard quality checks, such as speeder checks (i.e. making sure the respondent didn’t complete the survey too quickly) and straightliner checks (i.e. checking to see if the respondent selected the same option for many or all questions), will usually uncover respondents who weren’t engaged.
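The speeder and straightliner checks described above are simple to implement. Here’s a minimal sketch; the response data, median completion time, and cutoff thresholds are all hypothetical, and real-world thresholds should be calibrated to your own survey:

```python
# Illustrative quality checks for disengaged respondents: a speeder
# check (completed far too quickly) and a straightliner check (same
# answer for nearly every grid question). Data and thresholds are
# hypothetical.
responses = [
    {"id": 1, "seconds": 410, "grid": [4, 2, 5, 3, 4, 2, 1, 4]},
    {"id": 2, "seconds": 55,  "grid": [5, 5, 5, 5, 5, 5, 5, 5]},  # suspect
    {"id": 3, "seconds": 380, "grid": [3, 3, 3, 3, 3, 3, 3, 4]},  # suspect
]

MEDIAN_SECONDS = 400       # assume the median completion time is known
SPEEDER_CUTOFF = 0.33      # flag anyone under a third of the median
STRAIGHTLINE_SHARE = 0.85  # flag if one option covers >= 85% of answers

def is_speeder(r):
    return r["seconds"] < MEDIAN_SECONDS * SPEEDER_CUTOFF

def is_straightliner(r):
    grid = r["grid"]
    top_share = max(grid.count(v) for v in set(grid)) / len(grid)
    return top_share >= STRAIGHTLINE_SHARE

flagged = [r["id"] for r in responses if is_speeder(r) or is_straightliner(r)]
print(flagged)  # respondents to review or exclude → [2, 3]
```

Note that respondent 3 passes the speeder check but still gets flagged for straightlining, which is why it’s worth running both checks rather than relying on completion time alone.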

How to prevent or mitigate this type of bias?

Here are some ways you can try to mitigate the effects of acquiescence bias:

  • Use balanced scales that offer both positive and negative response options.

  • Include reverse-coded items to check for consistency in responses.

7. Response Bias

Response bias occurs when survey participants provide answers that don't accurately reflect their true beliefs or opinions. Instead, their responses are influenced by the survey itself, particularly the structure and language of the questions. Think of it as the survey unintentionally nudging people towards certain answers, rather than capturing their genuine thoughts. This means the data you collect isn't a true representation of your target audience, leading to potentially flawed conclusions.

Questionnaire design, including how you phrase question propositions and how you structure response lists, is a primary factor that can lead to response bias.

Unbalanced ordinal response scales are a good example. If you ask about customer satisfaction but only offer "Very Satisfied," "Satisfied," and "Dissatisfied," you're heavily skewing the results towards positive responses. Participants might feel pressured to choose a positive option, even if they have minor criticisms.

If you recall the Russell Brand YouTube poll I covered earlier, that can also be used as an example of response bias. To recap, his poll asked who you would trust to end the war in Ukraine, and then provided the names of four political figures to choose from. The problem is that the respondent is forced to choose one of the four, with no way to cast a vote for anyone else. This is why it’s customary to include a “none of the above” and/or a “someone / something else” option in a nominal response list.

Another example can be found in leading or loaded questions. If you ask, "How much do you like Starbucks?” you're framing the question in a way that suggests the respondent has an affinity for the Starbucks brand or coffee, which is leading.

How to prevent or mitigate this type of bias?

Here are some ways you can try to mitigate the effects of response bias:

  • Avoid leading or loaded question propositions.

  • When using ordinal or interval scales, make sure they’re balanced. Here’s an article I wrote about using scales in survey research.

  • Avoid jargon in your question propositions.

  • For nominal response lists, be sure to include a ‘none of the above’ and/or ‘something else’ option.

Conclusion

Bias in online survey research is an ever-present challenge, but with careful planning and execution, its impact can be minimized. Understanding the various types of bias (such as sampling, non-response, self-selection, social desirability, question order, acquiescence, and response bias) enables researchers to design better surveys and draw more accurate conclusions. By recognizing the signs of bias and implementing strategies to address them, market researchers can improve the quality of their data and the validity of their insights.

By applying these techniques, survey-based research can maintain its value as a tool for gathering meaningful and representative data.


If you liked this article and you’re interested in learning more about how to conduct survey based research, check out my course Quantitative Survey-Based Research on my

Stephen Tracy

I'm a designer of things made with data, exploring the intersection of analytics and storytelling.

https://www.analythical.com