Plan, Build, And Test

Oct 17, 2017


It’s easy to oversimplify survey design and analysis, but I’ve come to learn that good research is the result of thoughtful planning and careful execution. Before I entered the field, surveys seemed so simple: you’re just asking people to answer a set of questions, right? Not so. Unarticulated research objectives and poorly worded questions can skew results, and variations in question types and response options can have a major impact on the quality and value of what you collect. Bad results can lead to bad decisions, and that is one thing we all want to avoid. Below are the three tips I’ve found most impactful when designing research and surveys.


When thinking about surveys, it’s easy to think about what you want to know (“What’s our share of wallet?”) or the type of study you want to do (satisfaction study). What’s difficult to keep front and center are the business decisions that will be made with the results of the survey.

In his book, The Complete Guide to Writing Questionnaires, David Harris outlines three types of survey requests that researchers typically receive from their business partners: information requests, study-type requests, and decision requests. David suggests the problem with starting out with information or study-type requests is that little guidance is provided about what to ask in the questionnaire. The outcome? A questionnaire that is too long with unnecessary questions. Take a moment with your business partner to understand the decision that your research will support. Doing this FIRST is critical to designing the right study and getting quality information back from respondents. How will we know what information is needed until we specify what we plan to do with it? Your work becomes more effective and meaningful when you start working through the decisions your research supports.

When you understand what decisions will be made with the research, it’s also easier to define your sample plan (the specific group of people you need to survey). For example, if you want to understand why customers have stopped shopping so you can build a reactivation marketing plan, you don’t need to survey your entire population, just those who have lapsed. Plan for all the comparisons you want to make and be sure you’ve captured the necessary information in your survey. For example, if you want to compare age ranges, be sure to include age as a demographic question. Better yet, if you already have that data in your customer data warehouse, track your survey respondents and overlay any demographic or behavioral data once you have collected all your responses. As we’ve shared before, “behavioral data and traditional primary research are two powerful functions that can complement each other quite nicely.”
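To make the overlay idea concrete, here is a minimal sketch of joining tracked survey responses back to warehouse data. All of the column names and values (customer_id, satisfaction, age_range, segment) are invented for illustration; the only real assumption is that you captured a joinable customer ID with each response.

```python
import pandas as pd

# Hypothetical survey export, keyed by the customer ID tracked in the survey link.
responses = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "satisfaction": [9, 6, 8],
})

# Hypothetical slice of the customer data warehouse.
warehouse = pd.DataFrame({
    "customer_id": [101, 102, 103, 104],
    "age_range": ["25-34", "35-44", "25-34", "45-54"],
    "segment": ["lapsed", "active", "lapsed", "active"],
})

# A left join keeps every respondent and overlays their demographic/behavioral data.
enriched = responses.merge(warehouse, on="customer_id", how="left")

# Comparisons like satisfaction by age range now need no extra survey questions.
print(enriched.groupby("age_range")["satisfaction"].mean())
```

Because the demographics come from the warehouse rather than the questionnaire, the survey itself stays shorter.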


Ok, you know what decisions are going to be made with the results, and you’ve defined your research objectives and questions. Before you hand your questionnaire over to your survey programmer, take another look at your questions and ask yourself: “What will I do with this data?”

I’m a nerd (data scientist) and responsible for programming and analyzing our surveys. I’ve found that survey analysis goes much more smoothly when I connect with the suits (marketers) on my team before programming to build out an analysis plan. For each survey question you’ve written, define:

  • The research objective it supports
  • The hypothesis
  • How you will analyze the results

Here’s an example.
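A hypothetical plan entry, sketched as a small data structure so it can double as a checklist; the question, objective, hypothesis, and analysis method below are all invented for illustration.

```python
# One hypothetical row of an analysis plan: every survey question maps back to
# an objective, a hypothesis, and a planned analysis. All values are invented.
analysis_plan = [
    {
        "question": "How likely are you to shop with us again in the next 6 months?",
        "objective": "Gauge reactivation potential among lapsed customers",
        "hypothesis": "Recently lapsed customers are more likely to return",
        "analysis": "Compare top-2-box scores by months since last purchase",
    },
]

# A question with an empty objective or hypothesis is a candidate for cutting.
for row in analysis_plan:
    assert row["objective"] and row["hypothesis"], "Cut or rewrite this question"
```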

If you can’t tie a particular survey question back to a research objective or hypothesis, consider cutting it. Laid out this way, it’s also easy to see whether you’ve missed any research objectives.


You’ve programmed a great survey that addresses your research objectives. Awesome! Before you launch it and start collecting responses, pretest it to make sure you’ve dotted your i’s and crossed your t’s. I start by pretesting it myself to make sure any skip logic (funneling some respondents into some questions but not others) is working properly and to double-check for spelling and grammar errors. Then I send it to my teammates to find problems and suggest improvements.
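Skip logic is just conditional routing, which is exactly why every branch needs to be walked during pretesting. A toy sketch (the question IDs and routing rule are invented; real survey tools configure this in their UI):

```python
# Toy skip-logic router: given the current question and an answer,
# return the next question ID. IDs and the rule are invented for illustration.
def next_question(current_q: str, answer: str) -> str:
    if current_q == "q1_shopped_recently":
        # Lapsed customers branch into the "why did you stop?" funnel;
        # everyone else skips straight to demographics.
        return "q2_lapse_reason" if answer == "no" else "q5_demographics"
    return "q_end"

# Pretesting means exercising every branch at least once.
assert next_question("q1_shopped_recently", "no") == "q2_lapse_reason"
assert next_question("q1_shopped_recently", "yes") == "q5_demographics"
```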

Pretesting helps uncover questions that are unclear, information that might be difficult to recall, and response options that may be missing. Though it is time-consuming, sit down with one or two people while they take the survey and have them share their thought process: how they arrived at each answer and what information they think each question is supposed to measure. This helps you make sure each question is actually capturing the information you need.

If at all possible, conduct a soft launch for a final quality control check. A soft launch means sending the survey to a very small number of respondents to make sure it all goes off without a hitch. This is the last opportunity to make sure respondents are progressing through the survey as they should be. A few things to look for:

  • Are there questions respondents spend too much time answering? Those may need rewording or simplification.
  • Are certain questions commonly skipped, or are respondents dropping out partway through? Both can point to problem areas.
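The skip and drop-out checks above can be approximated from a soft-launch export. A minimal sketch, assuming one row per respondent with blanks where a question went unanswered (the data and question IDs are invented):

```python
import pandas as pd

# Hypothetical soft-launch export: one row per respondent, NaN where a question
# was skipped; respondents who dropped out have NaN for every later question.
soft_launch = pd.DataFrame({
    "q1": [5, 4, 3, 5, 2],
    "q2": [4, None, 3, 4, 1],
    "q3": [5, None, None, 4, None],
})

# Share of respondents with no answer per question; a sudden jump flags a
# question that is being skipped or where people abandon the survey.
missing_rate = soft_launch.isna().mean()
print(missing_rate)
```

Here the jump from q2 to q3 would be the cue to look at q3’s wording or placement before full launch.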

During one of my early research projects we were not able to do a test launch. Unfortunately, the tags we added to our survey link to track our customers were not saved back in our survey tool. We were still able to overlay the behavioral data (segment assignment, revenue, frequency…), but recovering from that mishap added complexity and time to the project. It was a HUGE bummer that pretesting would have helped us avoid.

No survey you design will be perfect, but investing time and thought into planning, reviewing, and testing yields a better questionnaire. A better questionnaire means better data. Better data means better insights. And better insights lead to better business decisions. Who would argue with that?