Let’s say you want to buy a pair of shoes, and the salesperson tells you the price is $120. You’d most likely think you can get it at a lower price if you negotiate a little bit.
Now imagine the salesperson tells you the price is $200, but then offers you a discount of $80. You’re way more likely to go for the second option because it seems like you’re getting a good deal.
If you find the second deal more appealing, you may be prone to anchoring bias: an over-reliance on the first piece of information you are presented with when making decisions or judgments.
Let’s explore anchoring bias in surveys, its implications, and how to avoid it.
Anchoring bias is a cognitive bias that affects our decision-making process. It’s when we rely too much on the first information (the anchor) we come across to make decisions or estimates.
Anchoring bias can have significant implications for survey design and data collection. It affects how respondents answer questions and perceive the survey.
Here are some of the most common sources of anchoring bias in surveys:
The way questions are presented can play a significant role. If a survey begins with a question that sets a specific reference point or anchor, it can influence subsequent responses.
For example, suppose the first question asks about the frequency of a behavior, such as how often the respondent has listened to a song by a famous artist. The framing of that question can introduce anchoring bias that shapes respondents’ estimates.
If the question implies a high listening rate, most people will report a high number, even if they’ve never listened to the song, simply because they like or have heard of the artist. Conversely, if the question suggests a low listening rate, most people will report a low one.
If the response options skew too high or too low, respondents may understate or overstate their behavior. The order of the options matters as well: if a survey question asks about an opinion or attitude, such as satisfaction or preference, the order of the response options can act as an anchor that influences respondents’ ratings.
When options are presented in ascending or descending order, they may trigger a primacy or recency effect. This causes respondents to prefer the first or last option more often than others.
Similarly, if the options are worded positively or negatively, they can create a framing effect that makes respondents more or less favorable toward the topic.
For instance, a question like “How would you rate our excellent live chat support?” primes respondents to think of the customer service as excellent and may result in more positive evaluations.
This means that when people are presented with an anchor, they tend to adjust their initial impression based on new information, but not sufficiently to reach an accurate judgment.
For example, imagine a survey question asks about the number of people in a country who are vaccinated against COVID-19 and provides an anchor of 1000 vaccinations. Respondents’ estimates may be adjusted based on their knowledge of the pandemic, but not sufficiently to account for the actual number of vaccinations.
This means that when given an anchor, people tend to pick up on and remember things that match the anchor, and forget things that aren’t consistent with it.
For example, if a survey question asks about the quality of a product, and provides an anchor of 5 stars out of 5, respondents might recall and emphasize positive aspects of the product, and overlook negative aspects of it.
Example 1: Income Level Questions
Asking about income or education level before asking about opinions or preferences can make respondents more likely to align their answers with the social status or group identity that appeals to them.
Example 2: Satisfaction Level Questions
Asking about satisfaction, or how likely respondents are to recommend something, before asking about specific product or service features can lead to anchoring bias. It nudges respondents to give consistent ratings across different dimensions, even if they are not equally happy or unhappy with each one.
Example 3: General Beliefs/Attitude Questions
General attitudes or beliefs also tend to trigger anchoring bias. Respondents are more likely to express opinions that are consistent with their initial anchor, even if they would act differently in reality.
Anchoring can create response patterns where participants consistently cluster around the initial anchor points, biasing averages and distributions. This makes results less accurate and less representative of the true population.
This reduces the variability and diversity of responses, making them less informative and insightful. It also makes it challenging to draw meaningful conclusions or make accurate predictions based on the survey data.
Anchoring bias also leads to inconsistency and variation in responses: different anchor points produce different response patterns, which makes the data hard to compare or aggregate.
For example, if one survey uses a high anchor for a question and another uses a low anchor for the same question, the responses may not match up directly, making the data unreliable and conclusions or recommendations hard to draw.
It also affects the validity and reliability of the survey instrument, making it less sensitive and responsive to changes in the underlying phenomena.
Anchoring bias causes respondents to favor the initial information they received and discount subsequent information, even when it is correct. This can undermine the validity and credibility of survey research, since answers may be shaped by the sequence, wording, or form of the questions.
For example, consider a survey asking people to estimate the average monthly rent for apartments in a city where the actual average is $2,000 per month. If the question first mentions a figure well below that average, the anchor can influence respondents’ subsequent answers: people may provide lower estimates than they would have given without it.
Anchoring bias can also influence respondents’ judgments when they are asked to rate something on a scale. A survey that asks app users to rate their satisfaction while showing an average rating of 4.5/5 will almost always introduce an anchor.
The displayed average pulls ratings toward it: a high anchor of 4.5/5 can nudge respondents to rate themselves as more satisfied than they are, while a lower anchor (say, 2/5) would pull ratings down. Either way, the anchor distorts the rating distribution and masks the true satisfaction level.
You have to take preventive measures against anchoring bias in your survey design. For example, carefully craft your questionnaire, avoid leading or suggestive questions, randomize the order of questions or response options, and use pre-testing and pilot studies to identify potential sources of bias.
Without these measures against anchoring bias, you will most likely end up with skewed data and, eventually, poor decisions based on it.
Here are some strategies and best practices to minimize anchoring bias in surveys:
Offer clear and concise instructions to survey participants; make sure your respondents understand the purpose and scope of your survey.
Also, avoid using words that might influence their answers. For example, instead of asking “How satisfied are you with our excellent customer service?”, ask “How satisfied are you with our service?”.
Randomize the sequence of multiple-choice question responses. This allows you to spread the impact of anchoring over multiple response options.
You can change the sequence of response options or questions for each participant. You can also use different scales or labels for the same question.
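Per-respondent shuffling is straightforward to implement in code. Here is a minimal sketch in plain Python using only the standard library; the question texts and options are hypothetical examples, not taken from any particular survey tool.

```python
import random

# Illustrative questions and response options (hypothetical examples)
questions = [
    {
        "text": "How satisfied are you with our service?",
        "options": ["Very satisfied", "Satisfied", "Neutral",
                    "Dissatisfied", "Very dissatisfied"],
    },
    {
        "text": "How often do you use the product?",
        "options": ["Daily", "Weekly", "Monthly", "Rarely"],
    },
]

def build_survey(questions, seed=None):
    """Return a per-respondent copy of the survey with both the
    question order and each question's option order shuffled."""
    rng = random.Random(seed)
    shuffled = [
        {"text": q["text"],
         "options": rng.sample(q["options"], len(q["options"]))}
        for q in questions
    ]
    rng.shuffle(shuffled)
    return shuffled

# Each respondent gets an independently randomized version of the survey
survey_for_respondent = build_survey(questions, seed=42)
```

Seeding per respondent (e.g. from a respondent ID) keeps each participant’s order reproducible while still spreading anchors evenly across the sample.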
Before launching the survey, test your survey with a small sample of your target population, or use online tools or experts to review your survey. Next, analyze the feedback and data to optimize the survey.
Check out the results of the survey against other data sources, such as previous surveys, benchmarks, and expert opinions. This will help you spot and fix any anchoring bias in the data.
Also, use multiple questions to assess the same construct or variable, and check for consistency and reliability. This can help you reduce the impact of a single anchor on the overall result.
Avoid asking leading or suggestive questions that could introduce anchors or sway respondents’ responses. For example, avoid using numbers or ranges that suggest a certain standard or expectation.
Vary the order of questions or response options, using randomization or counterbalancing techniques. This helps you avoid order effects or priming effects that can create anchors.
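Counterbalancing differs from pure randomization: instead of shuffling independently per respondent, you rotate the order so every question appears in every position equally often across respondent groups. A minimal Latin-square-style sketch, with hypothetical question labels:

```python
def counterbalanced_orders(questions):
    """Return one cyclic rotation of the question list per group, so each
    question occupies each position exactly once across the groups."""
    n = len(questions)
    return [questions[i:] + questions[:i] for i in range(n)]

# Hypothetical question labels
questions = ["Q1: satisfaction", "Q2: ease of use", "Q3: likelihood to recommend"]
orders = counterbalanced_orders(questions)

# Assign respondent r to orders[r % len(orders)]
```

Because positions are balanced by construction, any residual order effect averages out when you pool the groups’ responses.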
Triangulation and cross-validation allow you to compare different sources of data, methods of analysis, or perspectives of interpretation. Triangulation uses multiple research methods to study the same concept, e.g. in-person interviews, online surveys, email questionnaires, and more.
Cross-validation compares the results with other independent data sources to validate the survey data. If the conclusions align with the study’s results, it reduces the likelihood of the anchoring effect.
Another strategy is a mixed-methods approach, which combines qualitative and quantitative research.
Quantitative data gives you a numerical overview of participant opinions. Combining it with qualitative data helps you uncover the reasons behind respondents’ answers.
Anchoring bias occurs when respondents rely on the first information they receive and base their answers and decisions on it. Like most cognitive biases, it is unintentional and can easily creep into your survey.
Employing best practices for minimizing the anchoring effect enables you to improve the reliability and validity of your data. This, in turn, helps you make insightful recommendations and well-informed decisions.