Date Published: January 05, 2012
A few service organizations function with too few metrics: they are flying blind. Virtually all others are drowning in metrics: too many of them, and often the wrong kind. If you are feeling overwhelmed, it's time to step away from the flood and redesign the way you think about metrics. My favorite approach is to start from nothing. That's right: forget about all the metrics you are currently collecting and build from scratch.
Know Your Goals
Start with the goals for the service organization as a whole. Do you see the organization as being responsible strictly for delivering great customer support, or do you have loftier goals of maintaining and enhancing customer loyalty? You need to clarify the overall goals in order to define meaningful metrics. For most service organizations, there are three categories of goals:
- Maintain and increase customer satisfaction (usually satisfaction with customer service, but it could be with the overall product or service delivered to customers)
- Increase profits (if customers pay for support) or reduce costs, and
- Serve as the voice of the customer for the rest of the company.
Three categories of goals, three categories of metrics. And if you cannot tie a particular metric to one of the overarching goals you can safely dispose of it.
Be Clear About Usage
What are you going to do with the metrics? A metric can be used to measure success (or lack thereof!) or it can be used to improve performance or coach staff; some can do both. For instance, identifying the top case drivers is the first step towards driving product improvements or self-service improvements: It’s an improvement metric. Tracking response time achievement assesses success in that area – although digging into the details of response time, such as the time of day when response times are poor, is a great way to improve the performance on that particular point.
If you cannot think of any action that would be triggered by a particular metric, you don't need it! For instance, you could measure the number of transactions per case – but then what? It's clear that a case with lots of transactions is going to take longer, is likely to be more complex, and may be associated with lower customer satisfaction. But will the number of transactions tell you anything about the rep or the topic that you cannot capture in another way, for instance through the time to resolution (to capture case complexity or customer satisfaction) or through a case audit (to gauge the rep's ability)? You can decide to track the number of transactions per case as an alert that triggers further investigation, but the number in and of itself is probably not meaningful and not worth tracking.
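As a sketch of the "alert, not metric" idea, the snippet below flags outlier cases for a manual look instead of reporting the raw count as a performance number. The case data, field names, and threshold are all illustrative, not taken from any real system.

```python
# Illustrative sketch: use transactions-per-case only as an alert that
# triggers a closer look, not as a standalone performance metric.
# The case records and the threshold below are hypothetical.

cases = [
    {"id": "C-101", "transactions": 3},
    {"id": "C-102", "transactions": 14},
    {"id": "C-103", "transactions": 5},
]

ALERT_THRESHOLD = 10  # assumed cutoff; tune to your own distribution

def cases_to_review(cases, threshold=ALERT_THRESHOLD):
    """Return the ids of cases whose transaction count warrants an audit."""
    return [c["id"] for c in cases if c["transactions"] > threshold]

print(cases_to_review(cases))  # → ['C-102']
```

The output is not a score to report upward; it is a work queue for a supervisor.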
Put On Your Customer's Hat
Would customers care? Many service organizations say that their main goal is to keep customers satisfied, and yet they measure many items that customers don't care about. For instance, call monitoring checklists typically include items such as "using the customer's name X times" or "wishing the customer a good day", which may make perfect sense in general but will backfire if a customer is very upset. Allow common sense and customer satisfaction to override minutia. Another example would be measuring response time. Of course customers want a quick initial response, but first-contact resolution is much more important for customer satisfaction.
Ultimately, the arbiter of success is the customer so the best measurement is always, by definition, a customer survey, but there are many environments in which a survey simply does not make sense. Don’t use that as an excuse for shoddy metrics, though. A monitoring program that focuses on the crucial question of whether the customer got a prompt, personalized answer yields much more meaningful results than the mechanized "did you use the customer's name" approach.
Is perfection killing goodness? In an attempt to capture the exact root cause of each customer interaction, you require the reps to fill out a three-level cascading list of categories and reasons for each interaction, which works out to an exquisite 672 different root causes. Can you trust the results?
Probably not. This is simply way too much work for the reps – and perhaps even for the category designers, which means the category tree is not maintained and not up-to-date! You would be much better off capturing just a few categories, which would be filled in much more reliably. In the same vein, asking customers a single question ("please rate your service interaction on a scale of 0 to 10") yields much higher response rates, and hence more meaningful data, than asking ten more pointed questions.
Beware Of Gaming
Is the act of monitoring changing the underlying action? Here’s a common scenario: we ask reps to capture all the questions they ask the leads, and we then turn around and use the records to beat the reps over the head when they exceed a certain threshold of questions. Do you think that perhaps the reps may be tempted to (a) use other avenues than the leads to get answers or (b) slack off just a tad on recording their questions? (Say yes!!!)
Another common example is manual time tracking. If we ask the reps to track their time manually, some will forget to do it at least some of the time, then attempt to enter a vague recollection of what they did. If incentives are set up so that reps are rewarded for time spent on cases, you can be sure that tracked time will be high, and vice versa. So you will get data on time spent on issues, but it won't be accurate data, since the incentives bias the collection. Don't set up situations where you will get bad data.
Leverage Automatic Data Gathering
Can you be completely unobtrusive in gathering data? The best data is the data that is collected automatically. If there is no effort required from customers or reps, compliance is high, and the chances that the metric will be "gamed" are attenuated. For instance, if you ask customers to rate knowledge base articles, you may get a 2% or 3% response rate – wildly unreliable data, likely biased towards the unhappy customers. If instead you capture movement from the article towards other documents or towards assisted support, you will unobtrusively and very reliably capture the usefulness of the article.
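One way to operationalize that unobtrusive measurement – purely as a sketch, with made-up session logs and event names – is to treat an article view followed by an escalation to assisted support as a signal that the article did not resolve the issue:

```python
# Hypothetical session logs: each session is an ordered list of events.
# A view of an article followed by "contact_support" suggests the article
# failed to resolve the issue; a view with no escalation suggests success.

sessions = [
    ["view:KB-42", "end"],
    ["view:KB-42", "contact_support"],
    ["view:KB-42", "end"],
    ["view:KB-42", "end"],
]

def deflection_rate(sessions, article):
    """Share of sessions viewing the article that did NOT escalate."""
    views = [s for s in sessions if f"view:{article}" in s]
    if not views:
        return None
    resolved = [s for s in views if "contact_support" not in s]
    return len(resolved) / len(views)

print(deflection_rate(sessions, "KB-42"))  # → 0.75
```

Because every session is logged, this covers 100% of readers rather than the self-selected few who click a rating widget.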
In the same vein, asking reps to log each use of a screen sharing tool is subject to error. Providing a button to open the tool from the case-tracking environment simplifies the process and can be logged automatically.
Use Transparent Definitions
Can you understand the computations? We have less than two days' worth of backlog! Sounds good, but what does it mean? What is the definition of backlog and how is it measured? Maintain a list of definitions for metrics so there is a single source of truth. And keep definitions straightforward. If you exclude various categories of cases from the definition of backlog, what does it really mean? Think of it from the customer's perspective: if the customer is waiting for an answer, the case should be part of the backlog, period.
Some organizations like to build composite indices to capture various aspects of their work. So there could be a customer forum index to capture whether forums are flourishing. Good idea, perhaps, but how is the index constructed? What is lurking behind the alluring single number? You may be better off with simpler, but more transparent measurements.
Do metrics scale? There are 3491 cases in the backlog? Is that a lot? No one knows – but if you use a simple ratio, dividing the number of open cases by the average incoming volume, you can measure the backlog in days (or weeks) and continue to use the metric regardless of sales growth.
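The ratio is simple arithmetic. A minimal sketch, using the backlog figure from the example above and an assumed incoming volume:

```python
# Express backlog in days of incoming volume so the metric scales
# with growth. The daily incoming volume below is an assumed figure.

open_cases = 3491          # cases currently in the backlog
avg_daily_incoming = 1750  # assumed average new cases per day

backlog_days = open_cases / avg_daily_incoming
print(f"Backlog: {backlog_days:.1f} days")  # → Backlog: 2.0 days
```

"3491 cases" means nothing to an outsider; "two days of work" is immediately legible, and it stays comparable as volumes grow.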
Look At Metrics Over Time
Can you gauge progress? Isolated numbers don’t make much sense, but time progressions do. If customer satisfaction is 7 out of 10, is that good? Doesn’t sound like it, but if last quarter’s number was 6 then you are making progress. Present numbers in a historical context and look for the rationale behind the changes – without being obsessed by small changes, up or down, that are normal, statistically expected variations.
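A simple guard against over-reacting to noise – sketched here with made-up quarterly satisfaction scores – is to flag a change only when the new value falls outside roughly two standard deviations of the historical values:

```python
# Sketch: treat a quarterly change as "real" only when it exceeds about
# two standard deviations of past scores. The history below is invented
# for illustration; real data would come from your survey system.

from statistics import mean, stdev

history = [6.1, 6.3, 5.9, 6.2, 6.0, 6.1]  # past quarterly CSAT (0-10)
current = 7.0                              # this quarter's score

mu, sigma = mean(history), stdev(history)
is_real_change = abs(current - mu) > 2 * sigma
print(is_real_change)  # → True: 7.0 is well outside normal variation
```

A jump from 6.1 to 6.2 would fail this test and deserves no victory lap; the jump to 7.0 passes it and is worth investigating for a cause.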
Give Immediate Feedback
Is immediate feedback available? Imagine that you are a rep and all day today you dealt with complaints. You listened, you soothed, you went the extra mile (and a half). And then, you go home and you do it all over again the next day. Exciting, huh? When work is a repetitive affair, it’s important to give the reps tangible feedback. Nothing replaces a good, supportive manager, of course, but real-time metrics can play a role too. If I can tell how many customers I helped today (and this week, and this month) and how that compares to the team’s average, I get a feeling of accomplishment. If I can see my customer satisfaction rating over time (not just the low rating from the curmudgeon I was stuck with this morning!), I can try a little harder to move it up. A transparent approach to metrics makes for better acceptance and higher performance. In the same vein, managers should have access to their team’s performance, contrasted with others'.
Leverage Metrics Across the Organization
Can you communicate to others outside the service organization? Within the service organization you may routinely check a dozen metrics, but outsiders don’t want more than a couple of high-level measurements, so make them meaningful and comprehensive. Customer satisfaction ratings, volumes, and productivity would be a good trio, together with a list of hot issues. Not by accident this set matches the three goals of service organizations: customer satisfaction, profits or costs, and voice of the customer.
Now, Do It
Where do I go from here? Draw up a short list of key metrics, maybe a half-dozen or so, which would allow you to run the team. Don’t worry about feasibility at this point. Try very hard to start fresh and to ignore the metrics you are currently using. Then use the criteria described above to vet the list. You may be surprised to find how many of the metrics you currently use do not make it to the new list.