ICMI is part of the Informa Tech Division of Informa PLC



Expert's Angle: Call Center Metrics vs. Customer Experience Metrics: The Critical Difference

Customer Experience. It's the latest buzz-phrase in customer service. Everyone wants CE in his or her title. It gives the executive influence, because it broadens the scope of what is controlled and measured. It is the company's neural-network node to customer loyalty and, ultimately, profitability. In other words, an update of the 1980s Service-Profit Chain:

Service + Experience → Loyalty + Word of Mouth → Profitability

So, if you have the new title "Customer Experience," some new metrics are in order: metrics that go beyond Net Promoter, First Contact Resolution, Service Level, or Service Level Consistency.

Here, we will discuss how to translate some old-style customer service/call center metrics, like average "rep-generated customer hold time," into new customer experience metrics. Additionally, we will highlight four new customer experience metrics that most contact centers don't measure, but should. The new metrics and their equations are defined in the table below.

Rep-Generated Hold
   Typical call center metric: average rep hold time
      Equation: total hold time / all calls
   Customer experience metric: average rep "hold of holds"
      Equation: total hold time / calls actually placed on hold

Knowledge Database Effectiveness
   Typical call center metric: not typically measured
   Customer experience metric: "one right answer" database response to questions asked by the rep of the internal search database, or by the customer of the website's search engine
      Equation: number of "one right answer" database responses / total queries by rep or customer of the knowledge database

First Contact Resolution
   Typical call center metric: overall FCR
      Equation: post-call survey: number of customers indicating FCR / total customers answering the survey
   Customer experience metric: Tail FCR
      Equation: post-call survey: number of customers indicating the actual number of contacts it took to resolve / total customers answering the survey; ideally organized by product, call type and individual rep

"Talk Over"
   Typical call center metric: not typically measured
   Customer experience metric: % of time that the rep and the customer are "talking over" each other
      Equation: measured at the center level and the individual rep level: the % of time both the customer and rep voice channels are active at the same time, measured via voice or speech analytics

Customer Effort Score
   Typical call center metric: not typically measured
   Customer experience metric: the amount of relative effort the customer says it takes to do business with your center or company
      Equation: on a scale of 1 to 9, the "amount of effort" the customer felt was necessary to deal with the company or center, relative to their experience with other companies

Rep-Generated Hold

Many centers measure the amount of time that a phone rep puts customers "on hold" and thereby project an average "customer hold time." Many centers then look at their numbers, which average about 60 seconds hold time, and think, "that's not bad." But, as we know, "averages lie." In most centers where I have consulted, only 20 to 30% of customers are put on hold by the rep. Therefore, if we divide the total hold times by only those customers that reps actually put on hold we find that the average hold time in centers that are only holding 20% of their calls may be more like 300 seconds (5 minutes) of hold per held call. And studies have shown that 5 minutes on the phone can seem like 15 minutes to the caller.
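The arithmetic above can be sketched in a few lines. The figures (a 60-second center-wide average, 20% of calls ever held) are the illustrative numbers from the text, not real data:

```python
# Illustrative sketch of "hold of holds" vs. the center-wide average.
# All numbers are the example figures from the text, not real data.

total_calls = 10_000
avg_hold_all_calls = 60      # seconds, averaged over ALL calls
pct_calls_held = 0.20        # only 20% of calls are ever put on hold

total_hold_seconds = avg_hold_all_calls * total_calls
calls_actually_held = total_calls * pct_calls_held

# Customer experience metric: average hold time per HELD call
hold_of_holds = total_hold_seconds / calls_actually_held
print(hold_of_holds)  # 300.0 seconds, i.e. 5 minutes per held call
```

The same total hold time, divided over the one-in-five calls that were actually held, turns a comfortable-looking 60-second average into a 5-minute experience for the customers who lived it.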

The new customer experience metric would now be defined as "hold of holds," or the average hold time of only those callers who were placed "on hold." While still an average, this number gets much closer to the actual customer experience. Now, we see there is a real experiential problem in this center. What might we do next? The obvious course of action is to find out why customers are placed on hold and what types of calls are held the longest. Typical root causes include knowledge database insufficiencies, lack of call control proficiency or poor employee habits.

Knowledge Database Effectiveness

The root cause of many problems identified in poor customer experience is directly related to the knowledge of the phone representative. Often, this is because of poor training and retention; more often, it is the result of poorly constructed or out-of-date knowledge management databases. Therefore, measuring the number of times the rep was able to successfully access the knowledge database/search engine and obtain "one right answer" is critical. Similarly, measuring the customer's ability to get "one right answer" from the website's search engine and stay in their "channel of choice" is equally compelling. In fact, with the website, the cost factor makes another case for this measure. If 20% of your phone contacts are a result of the customer not easily finding the answer online, then the poorly performing search engine is costing your company $5 to $17 per call instead of $0.10 to $0.20 per web transaction. As a result of knowing our Knowledge Database Effectiveness score, what action might we take?
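A minimal sketch of the effectiveness score and the cost case, using hypothetical volumes and the low end of the per-contact costs cited in the text:

```python
# Hypothetical sketch: Knowledge Database Effectiveness and the cost of a
# poorly performing search engine. All volumes here are made-up examples.

one_right_answer_responses = 8_200
total_queries = 10_000
kdb_effectiveness = one_right_answer_responses / total_queries  # 0.82

# Cost case from the text: calls driven to the phone channel because
# the customer could not find the answer online.
monthly_phone_contacts = 50_000
pct_driven_by_poor_search = 0.20
cost_per_call = 5.00             # low end of the $5 to $17 range
cost_per_web_transaction = 0.10  # low end of the $0.10 to $0.20 range

avoidable_calls = monthly_phone_contacts * pct_driven_by_poor_search
excess_cost = avoidable_calls * (cost_per_call - cost_per_web_transaction)
print(f"effectiveness {kdb_effectiveness:.0%}, excess cost ${excess_cost:,.0f}/month")
```

Even at the cheap end of both ranges, the channel-cost gap puts a concrete monthly price tag on a weak search engine, which is usually what gets the fix funded.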

We frequently see reps or customers trying to access an old "Google-style" internal search engine, which might return 20 or more possible answers, or perhaps no answer at all. These old-style search engines exacerbate the issue, causing unnecessarily high handle times, poor FCR, high rep-generated customer holds and escalations to a help desk. Newer search engines might be deployed that can get to "one right answer" 95-99% of the time. Our new number at NOVO 1 is 99%. But boy, did we have to work to get there!

First Contact Resolution Tail

Many call centers use an FCR measure to evaluate the effectiveness of their call center. In my workshops on FCR, I've found that few are measuring FCR properly. The best methodology is to ask the customer whether their question or problem was answered on the first contact. And to create actionable data, it must be measured at the product, call type and rep level. Only 10% of center managers I speak to use this methodology. Even if they do, most measure it through an automated system immediately following the call. According to recent research by CCMC, the practice of an automated survey immediately following the call receives a negative loyalty rating. A true best practice is to give customers time between the interaction and the survey; this practice nets a positive loyalty score. Therefore, our practice is to email the customer and let them take the survey at their convenience, and hopefully in a timeframe in which they can actually know for sure that their issue was resolved (which is often NOT right after the call).

Now, let’s talk about Tail FCR, which really gets at customer experience. Tail FCR, like its cousin Tail Service Level, is a measure which assists center management in understanding which issues to attack first. It is a measure that illuminates the issues or products that are causing customers to contact a company 3, 4, 5 or even 7 times before getting an acceptable answer or resolution. What would you do if you knew that product returns took 4 calls to get resolved, while a billing problem took 1.25 calls to be resolved? Of course, you would put a manager or service engineer on fixing the return process!
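A sketch of how Tail FCR might be tabulated from survey responses. The call types and contact counts are made up, though they mirror the returns-vs-billing example in the text:

```python
# Sketch of Tail FCR: from post-call survey responses, find which call
# types take the most contacts to resolve. Data is illustrative only.
from collections import defaultdict
from statistics import mean

# (call_type, number of contacts the customer said it took to resolve)
survey_responses = [
    ("product return", 4), ("product return", 5), ("product return", 3),
    ("billing", 1), ("billing", 2), ("billing", 1), ("billing", 1),
    ("password reset", 1), ("password reset", 2),
]

by_type = defaultdict(list)
for call_type, contacts in survey_responses:
    by_type[call_type].append(contacts)

# Average contacts-to-resolution per call type, worst first: the "tail"
tail = sorted(((mean(v), k) for k, v in by_type.items()), reverse=True)
for avg_contacts, call_type in tail:
    print(f"{call_type}: {avg_contacts:.2f} contacts to resolve")
```

Sorting worst-first is the point of the metric: the top of the list tells management which process to attack first, and slicing the same data by rep or product refines the target further.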

"Talk over Percentage"

The percentage of time that a rep or a customer is "talking over" each other is indicative of a pacing or communication problem. Often, the rep thinks they know what the question or problem is and rushes to an answer. Maybe they know, maybe they don't, but in any case the customer has a desire to be heard. If one measures this statistic by phone representative, they will find significant variance. Ensuring that the customer is heard is critical to strong customer experience scores and to creating an emotional connection with the customer. "Talk over" is actually a measure of both the customer and the rep speaking at once and it is best measured through voice analytics software that is often a module of call recording systems. Since each participant in the call comes over separate channels, it is easy to measure. Whether the customer is over-talking the rep or vice versa, communication clearly cannot be happening in these situations.
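Because the rep and customer arrive on separate voice channels, the computation itself is simple. A minimal sketch, assuming the recording system can give us per-second voice activity for each channel (the sample data is invented):

```python
# Sketch of a talk-over percentage, assuming per-second voice activity
# flags for each channel. Real systems derive these from voice/speech
# analytics on the recording; the data below is illustrative.

rep_active      = [1, 1, 0, 1, 1, 1, 0, 0, 1, 0]  # 1 = speaking that second
customer_active = [0, 1, 1, 1, 0, 0, 1, 0, 1, 0]

# Seconds in which BOTH channels were active at the same time
talk_over_seconds = sum(r and c for r, c in zip(rep_active, customer_active))
talk_over_pct = talk_over_seconds / len(rep_active)
print(f"{talk_over_pct:.0%}")  # 30% of this call was talk-over
```

Aggregated per rep and per center, this is the statistic that exposes the Joey-versus-Marcia variance discussed below.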

So, if you knew that your rep Joey’s "talk over" percentage was 5% and Marcia’s was 20%, what would you do? Something, I think!


Customer Effort Score

The newest measure on the customer experience management horizon is the Customer Effort Score. The Customer Effort Score is a relative measure of the effort that your company or center requires of a customer, as rated by the customer. It is best measured on a scale of 1 to 9 and identifies how much you require of your customer. Those customers who respond are often thinking about how intuitive your products are, how laden with legalese your fine print is, or how much "customer homework" your reps are assigning. Much like a Net Promoter score, without more information from the customer, it is hard to move this metric. So, make sure that along with the Customer Effort Score question you ask your customers specifically what your company could do to lower their level of effort.

Now, a Customer Effort Score would appear to be all about the company - but what is surprising is that many companies see a significant variance between phone reps. This is often due to the reps’ interpretation of the company’s policies, so a follow-up to understanding the Customer Effort Score would be to understand and fix any phone-rep variance issues.
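Spotting that per-rep variance is a simple aggregation once survey responses are tagged with the rep. A sketch with invented scores (and an assumed scale direction, where lower effort is better):

```python
# Sketch: per-rep variance in Customer Effort Score (1-9 scale; lower =
# less effort is an assumption here). Scores are made up for illustration.
from collections import defaultdict
from statistics import mean

# (rep, customer-reported effort score)
ces_responses = [
    ("Joey", 3), ("Joey", 2), ("Joey", 4),
    ("Marcia", 7), ("Marcia", 8), ("Marcia", 6),
]

by_rep = defaultdict(list)
for rep, score in ces_responses:
    by_rep[rep].append(score)

for rep, scores in by_rep.items():
    print(f"{rep}: average effort {mean(scores):.1f}")
```

A multi-point gap between reps handling the same call mix is the signal the text describes: the policy is being interpreted differently, and that is where the follow-up should start.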

The Case for Customer Experience Metrics

Hopefully by now you are convinced that there is a difference between customer service metrics and customer experience metrics, and perhaps we have shown you how to translate the old metrics into more meaningful and actionable ones. At the very least, you have something interesting to discuss with your executive team!