By: Henry Malone
Aug 5th, 2021 (updated July 29th, 2024)

Rolling out an effective survey is far from an exact science. Many factors come into play when attempting to maximize response rates and reach the appropriate audience.

Several errors can arise along the way that can derail your project and set your team back significantly, or worse, put you on the wrong track altogether.

Here are five of the most common mistakes made when designing surveys and how to avoid them:

1. Leading and/or Biased Questions 

When generating questions for your survey, it’s important to avoid accidentally leading respondents to answer the way you want them to. A leading question is worded in a way that suggests the “correct” answer before the respondent has a chance to form their own. Leading questions will not yield useful data, and they may even steer your organization toward detrimental decisions. Steer clear of questions that imply bias or push the respondent toward a specific answer.

Examples of Leading Questions with Better Alternatives: 

Leading questions can bias responses and undermine the reliability of your survey data. Here are five examples of leading questions from various sectors, along with better alternatives to ensure more accurate and unbiased data collection.

1. Healthcare Sector

Leading Question: “Don’t you agree that our hospital provides the best care in the region?”

Better Alternative: “How would you rate the quality of care you received at our hospital?”

2. Education Sector

Leading Question: “Isn’t our new curriculum much better than the old one?”

Better Alternative: “How do you feel about the new curriculum compared to the previous one?”

3. Customer Service Sector

Leading Question: “Were you satisfied with our exceptional customer service today?”

Better Alternative: “How satisfied were you with the customer service you received today?”

4. Retail Sector

Leading Question: “Do you think our prices are very affordable?”

Better Alternative: “How would you describe the affordability of our prices?”

5. Government/Public Sector

Leading Question: “Wouldn’t you say that the new public transportation system is very efficient?”

Better Alternative: “How would you rate the efficiency of the new public transportation system?”

How to Avoid Leading Questions:

To avoid leading questions, ensure that your survey queries are neutral and objective, avoiding any language that suggests a particular answer. Use open-ended phrasing that allows respondents to share their true opinions and experiences without being influenced. Additionally, pre-test your survey with a small group to identify and revise any questions that may unintentionally lead respondents.

Another way to avoid leading your respondents is to scrutinize the adjectives and adverbs in your questions. Loaded modifiers, as in “How poorly was…” or “How good was…”, suggest how the respondent should feel about a question rather than allowing them to respond naturally and honestly.
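
If you manage a large bank of questions, a simple automated pass can help surface wording that deserves a second look. Below is a minimal Python sketch, assuming a hand-picked list of loaded words and phrases; the list here is illustrative rather than exhaustive and should be tailored to your own question bank.

    # Flag survey questions that contain loaded or leading language.
    # The word list is illustrative; tailor it to your own surveys.
    LOADED_WORDS = {"best", "exceptional", "obviously", "don't you",
                    "isn't", "wouldn't you"}

    def flag_leading_language(question: str) -> list[str]:
        """Return any loaded words or phrases found in the question."""
        lowered = question.lower()
        return [w for w in sorted(LOADED_WORDS) if w in lowered]

    questions = [
        "Don't you agree that our hospital provides the best care in the region?",
        "How would you rate the quality of care you received at our hospital?",
    ]
    for q in questions:
        hits = flag_leading_language(q)
        if hits:
            print(f"Review: {q} -- contains {hits}")

A pass like this will not catch every leading question, but it cheaply flags wording that a human reviewer should scrutinize.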

2. Unbalanced Response Scales

When designing a survey question, you have to ensure that each set of response choices covers the whole spectrum of positive and negative responses (as much as is feasible). For example, a survey question asking for a service rating with the response options “poor”, “satisfactory”, “good”, and “excellent” will inherently sway the results: the choices “satisfactory”, “good”, and “excellent” account for three of the four options, tipping the scales too far in the positive direction. Furthermore, there is no neutral option offered, forcing those who are “on the fence” to select an answer that does not represent their actual thoughts or feelings.

Offering a majority of positive or negative choices nudges respondents toward the majority category and denies them an accurate option for expressing their true thoughts and reactions.

Response scales should include a definitive midpoint for respondents who are unsure how they feel about a question, and they should typically offer an odd number of options so that neither the positive nor the negative side becomes lopsided.

Example of an Unbalanced Response Scale

Unbalanced Response Scale: “How satisfied are you with our customer service?”

    • Very Unsatisfied
    • Satisfied
    • Very Satisfied
    • Extremely Satisfied

This scale is unbalanced because it offers one negative option and three positive options, which can skew the results towards positive responses.

Better Alternative

Balanced Response Scale: “How satisfied are you with our customer service?”

    • Very Unsatisfied
    • Unsatisfied
    • Neutral
    • Satisfied
    • Very Satisfied
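
If your survey tool stores scales as structured data, balance can also be sanity-checked automatically. Here is a minimal sketch, assuming each option is tagged with a valence of -1 (negative), 0 (neutral), or +1 (positive); the tagging convention is ours for illustration, not a standard of any survey platform.

    # Check that a response scale is balanced: equal numbers of positive
    # and negative options, an odd option count, and a neutral midpoint.
    def is_balanced(scale: dict[str, int]) -> bool:
        valences = list(scale.values())
        positives = sum(1 for v in valences if v > 0)
        negatives = sum(1 for v in valences if v < 0)
        return (positives == negatives
                and 0 in valences
                and len(valences) % 2 == 1)

    unbalanced = {"Very Unsatisfied": -1, "Satisfied": 1,
                  "Very Satisfied": 1, "Extremely Satisfied": 1}
    balanced = {"Very Unsatisfied": -1, "Unsatisfied": -1, "Neutral": 0,
                "Satisfied": 1, "Very Satisfied": 1}

    print(is_balanced(unbalanced))  # False: three positives, one negative
    print(is_balanced(balanced))    # True: symmetric with a neutral midpoint

The check encodes the three rules above: symmetric positive and negative sides, an odd number of options, and a neutral midpoint.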

3. Double-Barrelled Questions 

You ideally never want to include more questions than are necessary in a survey. Given the ever-decreasing attention spans of individuals (especially in digital environments), it makes sense to make your survey experience as streamlined and efficient as possible. The goal is to hold respondents’ interest all the way to the final question. There is no definitive answer as to how many questions are just right, but many survey software providers include recommendations to help you create the best survey possible within their software.

Another helpful measure may be “time to complete the survey.” If a survey seems never-ending, many people will abandon it regardless of how far they get. You can mitigate disinterest by showing respondents the number of questions, the estimated time to complete the survey, and a progress bar from the beginning.
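
As a rough illustration, those figures can be derived from the question list itself. The per-question timings below are placeholder assumptions rather than benchmarks; calibrate them against your own pilot data.

    # Estimate survey length and report progress from the question mix.
    # The timing constants are assumptions; replace them with pilot data.
    SECONDS_PER_QUESTION = {"multiple_choice": 10, "rating": 8, "open_ended": 45}

    def estimated_minutes(question_types: list[str]) -> float:
        total_seconds = sum(SECONDS_PER_QUESTION[t] for t in question_types)
        return round(total_seconds / 60, 1)

    def progress(answered: int, total: int) -> str:
        return f"Question {answered} of {total} ({answered / total:.0%} complete)"

    survey = ["rating"] * 8 + ["multiple_choice"] * 4 + ["open_ended"] * 2
    print(f"{len(survey)} questions, about {estimated_minutes(survey)} minutes")
    print(progress(answered=7, total=len(survey)))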

It’s also important not to cram too much information into a single question. A “double-barrelled” question forces respondents to answer two questions at once, often giving more weight to one part of the question than the other and skewing the data. For example, a question such as “Do you think that this program has benefited older and younger individuals?” doesn’t allow the respondent to share their exact thoughts on either demographic.

Splitting that question into two separate questions instead will reveal how the respondent actually feels about each group.
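
At scale, a crude heuristic can help queue up candidates for splitting: flag any single question that joins topics with a coordinating conjunction. The sketch below will produce false positives (plenty of legitimate questions contain “and”), so treat its output as a review list, not a verdict.

    import re

    # Flag questions that may be double-barrelled: a single question mark
    # plus a coordinating conjunction that could join two separate topics.
    CONJUNCTIONS = re.compile(r"\b(and|or|as well as)\b", re.IGNORECASE)

    def maybe_double_barrelled(question: str) -> bool:
        return question.count("?") == 1 and bool(CONJUNCTIONS.search(question))

    q = "Do you think that this program has benefited older and younger individuals?"
    if maybe_double_barrelled(q):
        print("Consider splitting into:")
        print(" - Do you think this program has benefited older individuals?")
        print(" - Do you think this program has benefited younger individuals?")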

4. Unclear or Insufficient Instructions

Survey instructions play a critical role in guiding respondents through the process and ensuring accurate and consistent responses. Neglecting to provide clear, concise, and comprehensive instructions can lead to confusion, misinterpretation, and unreliable data.

Clear instructions set the tone for the survey, provide context, and help respondents understand what is expected of them. They also minimize the risk of respondents skipping questions or providing inconsistent answers.

To avoid this pitfall, be explicit about the purpose of the survey, offer specific guidance for each section or question type, and include examples when appropriate. By providing clear instructions, you create a smoother experience for respondents and improve the quality of your data.

    • Be explicit: Clearly explain the purpose of the survey, how the data will be used, and the importance of respondents’ participation.
    • Guide responses: Provide specific instructions for each section or question type, especially if they require a certain kind of response (e.g., rating scales, multiple-choice, open-ended).
    • Offer examples: When appropriate, include examples to illustrate how respondents should answer the questions. This can help clarify expectations and reduce ambiguity.

Examples of Unclear vs. Clear Instructions:

Unclear Instructions:

    • “Rate our service.”
    • “How did we do?”
    • “Provide your feedback.”

Clear Instructions:

    • “Please rate the quality of our service on a scale from 1 to 5, where 1 is very poor and 5 is excellent.”
    • “How satisfied are you with the cleanliness of our facility? Select one of the following options: Very Unsatisfied, Unsatisfied, Neutral, Satisfied, Very Satisfied.”
    • “Provide your feedback on the recent workshop you attended by answering the following questions. Be as specific as possible in your responses.”
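
One way to keep instructions this explicit across a long survey is to generate them from the question definition instead of writing each one by hand. The helper below is a sketch built on a made-up structure, not the API of any particular survey tool.

    # Build an explicit instruction line from a rating question's definition,
    # so every rating question states its range and anchors the same way.
    def instruction_for_rating(prompt: str, low: int, high: int,
                               low_label: str, high_label: str) -> str:
        return (f"{prompt} Please answer on a scale from {low} to {high}, "
                f"where {low} is {low_label} and {high} is {high_label}.")

    print(instruction_for_rating(
        "Rate the quality of our service.", 1, 5, "very poor", "excellent"))
    # -> Rate the quality of our service. Please answer on a scale from
    #    1 to 5, where 1 is very poor and 5 is excellent.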

5. Failing to Pretest the Survey

Pretesting, or piloting, your survey with a small group of individuals before full deployment is an essential step that is often overlooked. Without pretesting, you risk launching a survey that contains unclear questions, technical issues, or other flaws that could compromise the quality of your data. To avoid this pitfall:

  • Conduct a pilot test: Share your survey with a small, diverse group of people who resemble your target audience. Collect their feedback on the clarity, length, and overall experience.
  • Analyze feedback: Pay close attention to areas where respondents struggled or provided inconsistent answers. Use this feedback to refine and improve your survey.
  • Test functionality: Ensure all technical aspects of the survey work smoothly, including navigation, question logic, and data recording. This is especially important for online surveys to prevent any technical issues during the actual data collection.
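
Some of that functionality testing can be automated. Below is a minimal sketch that validates skip logic against a hypothetical survey definition; the dictionary structure is assumed for illustration, since every survey platform stores its logic in its own format.

    # Validate skip logic: every jump target must be a real question ID.
    # The dict-based survey structure here is assumed for illustration.
    def validate_skip_logic(survey: dict) -> list[str]:
        ids = {q["id"] for q in survey["questions"]}
        errors = []
        for q in survey["questions"]:
            for target in q.get("skip_to", {}).values():
                if target not in ids and target != "END":
                    errors.append(f"{q['id']} jumps to unknown question {target!r}")
        return errors

    survey = {
        "questions": [
            {"id": "q1", "skip_to": {"No": "q3"}},   # q3 does not exist: error
            {"id": "q2", "skip_to": {"Yes": "END"}},
        ]
    }
    print(validate_skip_logic(survey) or "Skip logic OK")

Checks like this catch broken branching before a single respondent hits it, which is far cheaper than discovering the problem mid-collection.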

Conclusion

Developing effective program evaluation surveys is essential for collecting reliable and actionable data. By avoiding leading questions and double-barrelled questions, using balanced response scales, providing clear instructions, and pre-testing your survey, you can enhance the accuracy and reliability of your survey results. Implementing these tips will not only improve the quality of your data but also create a better experience for your respondents, encouraging higher participation rates and more honest feedback.

Read Next

To dive deeper into survey design and program evaluation, explore our other blog resources to further refine your approach and ensure your surveys capture the insights you need to drive impactful decisions.