Published: September 23, 2020
Customer surveys have become the norm whenever we shop, call, or engage online. Recently, I’ve had several experiences in which the survey questions, and the lack of response after the surveys, have caused me to question their value when they aren’t backed by action.
For example, I recently called a well-known online entertainment company and asked about my service options. After careful research, I made the decision to change to a less expensive plan. When the call was complete, I received the customary survey and filled it out with high marks.
I am not the harshest grader on these surveys. Since I work in the industry, I try my best to offer good scores whenever possible. After all, the life of an agent is a tough one, and I know they can use positive feedback whenever possible.
Ten days later, I received my bill and found a new charge that was not discussed in the original call. I called back and got a very pleasant agent who tried his best to explain what should have been explained in the original call – that the cost was based on my new plan and that plan had other connected charges. I explained that I was still unhappy and was surprised there was not a way to fix the issue. He said he was sorry, but this was the policy, which is the worst excuse ever, and we ended the call.
What happened next made me rethink survey processes. I received another survey which asked me about my experience with the agent. I was torn; the agent had been polite, professional, and helpful. He had tried to solve my issue. But his hands were tied by “policies.”
The text survey first asked me to rate the agent. I wondered, do I give him a bad score because he did not solve my issue or do I give him a good score for trying? The order of the questions did not allow me to tell my story, and there was no option to give detailed feedback.
The next question was, “How would you rate the overall call?” Well, again, are we talking about the agent or the lack of a solution?
The last question brought the problem with these surveys into even sharper focus: “Did we solve your problem?” When I clicked “no,” I had hopes that they would text back, “Would you like someone to follow up?” Instead, the text simply read, “Thank you for taking the survey.”
There was no follow-up, no return call, no email, no attempt to turn me from an unhappy customer into a satisfied one.
This exposes an inherent problem with these surveys: if we don’t intend for them to be actionable, what are they for? Are we surveying our customers to solve problems? Or are we surveying them in hopes that some random percentage of them were “satisfied?”
Without a doubt, the answer should be the former. Surveys are meant to gather data on the customer. Finding that someone is dissatisfied allows you the opportunity to take action and, in this case, perhaps hold onto a customer longer.
Surveys without action, however, might be worse than no surveys at all.