

3 Pitfalls to Avoid When Writing Customer Surveys

Customer service and support organizations are notorious for the wealth of data they constantly collect—from how long we spend answering calls to how long it takes to respond to an e-mail, how long we spend in training, how many screens are accessed during a customer interaction, how many calls it takes to achieve issue resolution…  And that’s just mentioning the things we track internally!

Indeed, it’s arguably more important to think about all of the metrics we are tracking externally—in other words, directly from customers.  You’d be hard-pressed to find a company that isn’t surveying customers about the service and support experience in some way.  But what we often hear from the companies we work with is that, even though data is being gathered, there is a sense of uncertainty about how much faith to put into that data.

Service and support leaders often aren’t confident in the results of customer surveys, and that doubt typically stems from poor survey hygiene. In the rush to get surveys out the door, organizations overlook common errors and seldom take the time to deliberately assess whether their surveys are actually effective.

With that in mind, here are three common pitfalls that can lead to biased survey data—and tips on how to avoid them:

1. Too Many Questions

An overload of survey questions or options can overwhelm customers, leading to a high drop-off rate (and smaller sample sizes) or poor data quality. Focused on data gathering, companies often forgo setting a survey goal and slip into a “since we have a survey, let’s ask as much as we can” mentality, which makes it hard to organize questions around a central theme.

Avoid This Pitfall:

  • Set a Survey Goal: A clearly defined goal narrows the area of focus of survey questions and helps to filter out irrelevant questions.
  • Gather Otherwise Difficult-to-Find Data: Focus on asking only those questions that are difficult to measure internally or are not already in your database.
  • Define a Question Limit: Aim for a survey of no more than 5–10 questions.
  • Rotate Questions: While keeping the total number of survey questions constant, temporarily rotate one or two questions into the survey for a small sample of the population to gather other key data (see the sketch below).
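
For illustration, question rotation could work along these lines. This is a minimal sketch in Python, assuming a simple list-based survey; the question text, the 10% sample rate, and the choice of which core question to swap out are all hypothetical:

```python
import random

# Hypothetical question text, for illustration only.
CORE_QUESTIONS = [
    "How would you rate your recent support experience overall?",
    "Was your issue resolved during this interaction?",
    "How would you rate the time it took to resolve your issue?",
]

# Extra questions rotated in temporarily to gather other key data.
ROTATING_QUESTIONS = [
    "How would you rate the clarity of our follow-up e-mail?",
    "How would you rate our self-service knowledge base?",
]

# Fraction of respondents who see a rotating question (assumed value).
ROTATION_SAMPLE_RATE = 0.10

def build_survey() -> list[str]:
    """Return the question list for one respondent.

    Most respondents see only the core questions; for a small random
    sample, the last core question is swapped for a rotating one, so
    the total number of questions stays constant.
    """
    questions = list(CORE_QUESTIONS)
    if random.random() < ROTATION_SAMPLE_RATE:
        questions[-1] = random.choice(ROTATING_QUESTIONS)
    return questions
```

Because only a small slice of respondents sees each rotating question, the survey stays short for everyone while still accumulating data on the extra topics over time.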

2. “Forcing” Customers to Respond

The desire for a healthy response rate often leads companies to overlook a key tenet of survey design: customers should not feel forced to respond to questions. Requiring an answer is permissible in certain cases, but if most questions are marked as “required fields,” customers may refuse to respond entirely or, worse, give false answers.

Avoid This Pitfall:

  • Allow Customers to Decide Which Questions to Answer: One way to prevent customer frustration is to make only the initial three or four key questions mandatory and leave the others optional (see the sketch after this list). That said, survey design should ideally make responding so easy that customers willingly complete the survey.
  • Ensure Survey Participation Is Optional: Give respondents the opportunity to opt in or opt out of surveys. By making participation mandatory, companies run the risk of collecting false responses.
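
As a minimal sketch of the first idea (plain Python, not any particular survey platform’s API; the question text and field names are hypothetical), a survey definition might flag only the opening key questions as required and validate against those alone:

```python
from dataclasses import dataclass

@dataclass
class SurveyQuestion:
    text: str
    required: bool = False  # questions are optional unless marked required

# Hypothetical survey: only the first few key questions are mandatory.
QUESTIONS = [
    SurveyQuestion("How would you rate your recent support experience overall?", required=True),
    SurveyQuestion("Was your issue resolved during this interaction?", required=True),
    SurveyQuestion("How would you rate the time it took to resolve your issue?"),
    SurveyQuestion("Is there anything else you would like to tell us?"),
]

def missing_required(answers: dict[str, str]) -> list[str]:
    """Return the text of required questions left unanswered.

    An empty list means the response is complete enough to accept;
    optional questions never block submission.
    """
    return [
        q.text
        for q in QUESTIONS
        if q.required and not answers.get(q.text, "").strip()
    ]
```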

3. Introducing Bias during Survey Collection

Most companies are careful to write questions that are free of overt bias, but subtler aspects, such as positively loaded language, question sequencing, and opinion-led phrasing, can still produce incomplete or incorrect data. The result is response bias (i.e., false or unrelated responses).

Avoid This Pitfall:

  • Put Generic Questions at the Beginning of the Survey: Start surveys with high-level questions to capture the customer’s immediate reaction to the overall transaction. This prevents bias from previous responses that are still top of mind; for example, if a poorly rated question (e.g., on wait time) is followed by a question on “overall experience,” there is a high probability that “overall experience” will also be rated poorly.
  • Use Neutral Language: Check whether the survey allows customers to respond independently or whether questions subtly guide them toward a certain response. Look out for leading phrases (e.g., “don’t you agree that…”) as well as positively or negatively aligned language (“how easy was it to…”).
