Rolling out an effective survey is far from an exact science. Many factors come into play when you try to maximize response rates and reach the right audience.

Several errors can arise along the way that can derail your project and set your team back significantly, or worse, put your team on the wrong track altogether.

Here are three of the most common mistakes made when using survey software and how to avoid them:

1. Leading and/or Biased Questions

When generating questions for your survey, it’s important to avoid accidentally leading respondents to answer the way you want them to. A leading question is worded in a way that suggests the “correct” answer before the respondent has a chance to consider it. Leading questions will not yield useful data, and they may even steer your organization toward detrimental decisions. Avoid questions that imply bias or push the respondent toward a specific answer.

Instead, focus each question’s wording on the respondent. Ask questions that are relevant to them, and don’t overload them with information; include just enough to gather an informed response.

Another way to avoid leading your respondents is to scrutinize the adjectives and adverbs in your questions. Loaded wording like “How poorly was…” or “How good was…” implies how the respondent should feel about a question, rather than allowing them to respond naturally and honestly. A simple automated check, sketched below, can help surface this kind of wording before a survey ships.
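To make this concrete, here is a minimal sketch of such a wording check. The word list is purely illustrative, an assumption rather than a vetted lexicon, and a flag should prompt human review rather than automatic rejection:

```python
# A minimal sketch of a wording check for leading questions.
# LOADED_WORDS is an illustrative, hypothetical list, not a vetted lexicon.
LOADED_WORDS = {"poorly", "good", "great", "terrible", "amazing", "awful"}

def flag_loaded_wording(question: str) -> list[str]:
    """Return any loaded adjectives or adverbs found in a question."""
    return [w.strip(".,!?") for w in question.lower().split()
            if w.strip(".,!?") in LOADED_WORDS]

# This question implies the answer should be negative:
print(flag_loaded_wording("How poorly was the event organized?"))  # ['poorly']
```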


2. Unbalanced vs. Balanced Response Scales

When your target audience is given the opportunity to respond to a question, you have to ensure that each set of response choices covers the whole spectrum of positive and negative responses (as much as is feasible). For example, a survey question asking for a service rating with the options of “poor”, “satisfactory”, “good”, and “excellent” will inherently sway the results. The choices “satisfactory”, “good”, and “excellent” make up three of the four options, tipping the scale too far in the positive direction. Furthermore, there is no neutral option, forcing those who are “on the fence” to select an answer that does not represent their actual thoughts or feelings.

Offering a majority of positive (or negative) choices only gives respondents more impetus to answer in the majority category, and it denies them an accurate option for expressing their true thoughts and reactions. Response options should include a definitive midpoint for respondents who are unsure how they feel about a question, and should typically come in an odd number so that neither the positive nor the negative side becomes lopsided. A minimal check for this rule is sketched below.
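The check below validates a scale the way the guidance above describes: an odd number of options with a neutral midpoint. The function name and labels are illustrative, not taken from any particular survey product:

```python
# A minimal sketch of a balanced-scale check: an odd number of options
# with a neutral midpoint. The labels below are illustrative only.
NEUTRAL_LABELS = {"neutral", "neither agree nor disagree", "no opinion"}

def is_balanced(options: list[str]) -> bool:
    """True if the scale has an odd length and a neutral midpoint."""
    if len(options) % 2 == 0:
        return False  # an even count leaves no definitive midpoint
    midpoint = options[len(options) // 2]
    return midpoint.lower() in NEUTRAL_LABELS

print(is_balanced(["poor", "satisfactory", "good", "excellent"]))     # False
print(is_balanced(["poor", "fair", "neutral", "good", "excellent"]))  # True
```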

3. Double-Barrelled Questions

You never want to include more questions than necessary in a survey. Given the ever-decreasing attention spans of individuals (especially in digital environments), it makes sense to make your survey experience as streamlined and efficient as possible; the goal is to lose as little respondent interest as possible before the final question. There is no definitive answer as to how many questions are just right, but many survey software providers include recommendations to help you create the best survey possible within their tools. Time to complete is another helpful measure: if a survey seems never-ending, many people will abandon it regardless of how far they have gotten. You can mitigate disinterest by showing respondents the number of questions, the estimated time to complete the survey, and a progress bar from the beginning, as sketched below.
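Those up-front disclosures are easy to prototype. The sketch below assumes a flat per-question time estimate, which is a simplification; real survey tools may derive timing from pilot data:

```python
# A minimal sketch of the up-front disclosures described above: question
# count, estimated completion time, and a progress bar. The flat
# 20-seconds-per-question figure is an assumption, not a benchmark.
SECONDS_PER_QUESTION = 20

def survey_header(total_questions: int) -> str:
    minutes = max(1, round(total_questions * SECONDS_PER_QUESTION / 60))
    return f"{total_questions} questions, about {minutes} minute(s) to complete"

def progress_bar(answered: int, total: int, width: int = 20) -> str:
    filled = round(width * answered / total)
    return "[" + "#" * filled + "-" * (width - filled) + f"] {answered}/{total}"

print(survey_header(12))    # 12 questions, about 4 minute(s) to complete
print(progress_bar(3, 12))  # [#####---------------] 3/12
```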

It’s also important not to cram too much information into a single question. A “double-barrelled” question is one that forces respondents to answer two questions at once, often giving more weight to one part of the question than the other and skewing the data. For example, a question such as “Do you think that this program has benefited older and younger individuals?” doesn’t allow the respondent to share their exact thoughts on either demographic. 

Splitting that question into two separate questions instead will reveal how the respondent feels about each element on its own. A simple heuristic for catching these questions is sketched below.
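As a rough sketch, the check below flags questions that join subjects with a conjunction. This is only a heuristic, since “and” has plenty of innocent uses, so a flag should prompt human review rather than automatic rejection:

```python
# A minimal sketch of a double-barrelled-question flag: surface any
# question that joins subjects with a conjunction for human review.
CONJUNCTIONS = {"and", "or"}

def maybe_double_barrelled(question: str) -> bool:
    """True if the question contains a conjunction worth a second look."""
    return any(w in CONJUNCTIONS for w in question.lower().strip("?").split())

question = "Do you think that this program has benefited older and younger individuals?"
if maybe_double_barrelled(question):
    # Split into one question per demographic so each gets its own answer.
    print("Has this program benefited older individuals?")
    print("Has this program benefited younger individuals?")
```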

About the Author:

Henry Malone received his B.A. in journalism from the University of Maryland, College Park in 2021. There, he served as Deputy Editor for the university newspaper, the Testudo Times. At Clear Impact, Henry researches, writes, and edits website content and news articles focusing on the nonprofit and public sectors.