Expert's Angle: Combining Quality Monitoring and Customer Satisfaction in the Call Center

Quick, what’s the best way to sustain the service quality of your organization? Is it better to use a customer satisfaction survey or a quality monitoring program? Perhaps you need both. Here are some tips on making the most of each program and, of course, on the benefits of combining them.

1.  Nothing can replace the voice of the customer

In some settings it’s challenging to measure customer satisfaction. For instance, very short customer service interactions make it awkward to ask customers for their opinion: The whole point of the service is to get a quick answer, so asking customers for their input seems somewhat inappropriate. Another example is outsourcing: You probably cannot survey the end customers without your client’s agreement, although it’s hard to imagine that a client would not want, or even mandate, a survey!


Give the survey an honest try. Most customers are open to giving feedback, and no amount of quality monitoring can ever tell you whether the customer was truly satisfied with the interaction.

2.  If you want good customer satisfaction data, make it very easy to give feedback 

Truly ecstatic customers will probably be vocal, and dissatisfied customers will also find a way to be heard, but you really want to hear from everyone, or at least from a representative sample of all customers, not just the very happy or very unhappy. Most satisfaction surveys are weak because they rely on very little data, and they collect few responses because they demand a large investment of time from customers.


Try asking customers just one question and watch your response rates soar. “How satisfied are you with your last interaction with support?” is a simple approach; have customers rate their satisfaction on a scale of 0 to 10, with ample room for comments. Offer the survey automatically at the end of phone interactions or email exchanges.
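
As an illustration of how little machinery a one-question survey takes, here is a minimal sketch in Python. The `SurveyResponse` record, its field names, and the `record_response` helper are hypothetical, not any particular product’s API:

```python
from dataclasses import dataclass, field
from datetime import datetime

QUESTION = "How satisfied are you with your last interaction with support?"

@dataclass
class SurveyResponse:
    case_id: str
    score: int                 # 0 (very dissatisfied) to 10 (very satisfied)
    comments: str = ""         # ample room for free-form comments
    answered_at: datetime = field(default_factory=datetime.now)

def record_response(case_id: str, score: int, comments: str = "") -> SurveyResponse:
    """Validate and store the answer to the single survey question."""
    if not 0 <= score <= 10:
        raise ValueError("score must be on the 0-10 scale")
    return SurveyResponse(case_id=case_id, score=score, comments=comments)

# Triggered automatically when a phone case or email exchange closes, e.g.:
# record_response("CASE-1042", 9, "Quick, friendly answer.")
```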


Bribery can also be effective. Every month or every quarter you can give away a nice prize to a randomly chosen customer who answered the survey: You give your customers an incentive and also show them tangible proof that you care about the survey.
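
Running the draw fairly is easy to automate. A sketch, assuming respondents are identified by a customer ID; a real program would add its own eligibility rules:

```python
import random

def pick_prize_winner(respondent_ids: list[str], seed: int | None = None) -> str:
    """Randomly select one survey respondent for the monthly or quarterly prize.

    Deduplicates first so customers who answered several surveys
    are not more likely to win.
    """
    eligible = sorted(set(respondent_ids))  # sorted so a seed is reproducible
    if not eligible:
        raise ValueError("no survey respondents this period")
    return random.Random(seed).choice(eligible)
```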

3.  Let customers know you are using their data

Customers are more likely to provide feedback if they believe that you are reading it and acting on it. That’s easier to demonstrate with recurring customers, but even with one-time customers you can follow up on bad surveys, a very potent demonstration that you read them, especially if the follow-up is prompt.


You can also share the results of the survey with customers on a regular basis. Post them on your web site. If that sends shivers down your spine, it’s a sign that you can improve, right?

4.  Use customer surveys as part of performance management

Clearly, if Jane’s survey rating is 9/10 and Don’s rating is a 7, Jane is doing better in some way. Make it worthwhile for reps to garner good ratings, but don’t go crazy insisting that everyone gets 9.5 out of 10: You are likely to get unnatural “gaming” behaviors that get in the way of good customer care.
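
To make the Jane-versus-Don comparison concrete, per-rep averages are one pass over the survey data. A sketch, where the shape of each response record is an assumption:

```python
from collections import defaultdict

def average_score_per_rep(responses: list[dict]) -> dict[str, float]:
    """Average 0-10 survey scores per rep.

    Each response is assumed to look like {"rep": "Jane", "score": 9}.
    """
    scores = defaultdict(list)
    for response in responses:
        scores[response["rep"]].append(response["score"])
    return {rep: sum(vals) / len(vals) for rep, vals in scores.items()}

# average_score_per_rep([{"rep": "Jane", "score": 9}, {"rep": "Don", "score": 7}])
# -> {"Jane": 9.0, "Don": 7.0}
```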

5.  Quality monitoring can be used as a substitute for satisfaction surveys

Again, it’s always better to ask customers directly for their feedback. But if you simply cannot, then yes, it’s fine to rely on quality monitoring as a measure of quality. Just don’t imagine that you are capturing the real thing.

6.  Quality monitoring can provide more details than a customer survey

We said earlier that to maximize response rates (and hence the validity of the survey) you should keep the survey short, but a shorter survey also captures fewer details, making it harder to pinpoint issues or to design improvements. With quality monitoring you can see in great detail that a particular rep does indeed interrupt customers too much, send curt emails, or use atrocious grammar, and coach appropriately.

7.  Quality monitoring can focus on areas a customer survey cannot

Quality monitoring can go much further than providing details: It can probe different areas of the interaction. For instance, no customer survey will ever tell you that a rep is failing to use the knowledge base effectively (a big and important productivity and customer satisfaction problem), is using the incorrect greeting (perhaps not such a big deal, at least if the alternate greeting is courteous and informative), or is offering to help the customer with out-of-scope issues (either an annoyance or potentially fraud, depending on the offer).

8.  Quality monitoring and customer surveys should work hand in hand

In an ideal world, you would do both: conduct a customer satisfaction survey and also have a quality monitoring program. But how do the two work together? Since one channels the voice of the customer and the other the details of the interactions, it’s very desirable to have some kind of a match between the two.


So if customer satisfaction is soaring but quality ratings are declining (or vice versa), something is wrong, most likely with the infamous quality monitoring checklist. Too often the checklist is full of procedural nits (use the canned greeting, say the customer’s name three times during the conversation, mandatorily cross-sell at the end of the conversation) that have little or no impact on customer satisfaction: Get rid of them! Focus on items that matter to the customer (listen without interrupting, refrain from blaming the customer, offer to call back at a specific time if you don’t have the answer) rather than what matters to you internally.
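
One way to check for that match quantitatively is to correlate the two metrics over time. A minimal sketch with made-up monthly averages; `statistics.correlation` requires Python 3.10 or later:

```python
from statistics import correlation

# Hypothetical monthly averages: CSAT on a 0-10 scale, QM scores on 0-100.
monthly_csat = [8.1, 8.3, 8.0, 8.4, 8.6, 8.5]   # soaring
monthly_qm = [92, 90, 93, 88, 86, 85]           # declining

r = correlation(monthly_csat, monthly_qm)        # Pearson's r, from -1 to 1
if r < 0:
    print(f"CSAT and quality ratings are moving in opposite directions "
          f"(r = {r:.2f}): review the checklist.")
```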


Take advantage of fortuitous double-evaluations to compare notes. If a particular customer rates a particular case very badly but the quality monitoring rating is high, you are measuring the wrong things with quality monitoring. Ask the customer what created the problem (easy, since you will follow up on bad surveys, remember?) and see whether you are capturing that aspect of the interaction in the rating. It’s very disorienting for a rep to get one bad rating and one good rating for the same case.
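
Those double-evaluated mismatches are easy to surface automatically once both ratings are attached to the case record. A sketch; the thresholds and field names are assumptions to tune for your own scales:

```python
def find_mismatches(cases: list[dict], low_csat: int = 4, high_qm: int = 85) -> list[dict]:
    """Return cases the customer rated badly but quality monitoring rated well.

    Each case is assumed to look like
    {"case_id": "CASE-1042", "csat": 2, "qm": 95}  (csat on 0-10, qm on 0-100).
    """
    return [case for case in cases
            if case["csat"] <= low_csat and case["qm"] >= high_qm]

# Feed each mismatch into the follow-up queue to learn what the checklist missed.
```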


And don’t go crazy: You are not looking for perfect agreement between the survey and the monitoring ratings, only general harmony.

Francoise Tourniaire is the founder and principal of FT Works. [email protected]