Published: March 21, 2013
I quickly learned when getting my private pilot’s license that any instrument, any measure in isolation, can be misleading. One of the reasons you fly is to get somewhere fast. But airspeed alone can’t tell you whether you’re heading for your goal ... or in a dive. You need context—other instruments and an understanding of what they are really saying—to know.
Consider the industry darling, first call (first contact) resolution. Higher is better, right? It depends. I’ve seen too many cases where organizations with high FCR rates (e.g., in the mid- to upper-90 percent range) are resolving calls they shouldn't be handling in the first place. Here are some common scenarios that artificially drive first-call resolution scores up:
- The center is handling contacts that could be automated. Handling many interactions that could be serviced through IVR, web-based services, or mobile apps suggests that the systems don't exist, are difficult to use, or don't work as well as they could. Or maybe customers are unaware of or unwilling to use them. I’m not making a case to force them into lower cost channels against their will (heaven forbid); but by all means, open up options and choices.
- Communication with customers is unclear. When statements, promotional pieces, product documentation, and other types of customer communication are unclear or incomplete, the contact center gets more work. These calls are usually straightforward—"Yes, I apologize for the confusion and I can help you with that …"—but they drive up costs and consume precious resources, even as they boost first contact resolution.
- Products or services have glitches that lead to customer contacts. The contact center gets lots of practice playing cleanup for issues that should be resolved at a deeper level.
First call resolution is just one example. Virtually any other measure, when viewed in isolation, can be misleading. Consider service level. I once discovered a large utility that employed two people in workforce management just to monitor incoming traffic. If service level began to slip, they would (with a few clicks) take blocks of queued calls and put them into a holding pattern, allowing customers just entering the system to go right to agents. As the queue settled down and service levels quickly improved (or so it appeared), they would release the calls from the holding pattern and allow them to reach agents. In this way, they could literally control their service level results. Egregious example, I know, but there are plenty of ways to manipulate service levels.
Handling time? That one’s easy. Even those new to contact centers can quickly see that rushing through interactions creates unnecessary waste, rework, and repeat calls.
How about customer satisfaction (customer loyalty, net promoter scores, and the like)? Yep, by itself, we’d have to know more. Consider the extreme: Giving your customers ski vacations for $5 inconveniences would earn you some great PR, but it would probably break the bank. (Most organizations don’t err in this direction, and for the record, I love those that, comparatively, do so much for their customers—think Nordstrom or Zappos.) But there are plenty of more insidious and common examples. When we take samples, whom we sample, and whether satisfaction scores reflect the cost of heroic efforts necessitated by poor product or service design are all important questions.
Of course, a bad reading in any of these areas is not a good thing! The point is, we need to know a bit more about what these measures are telling us. We need to know if we’re effective. Having a good understanding of our most important goals—e.g., sustainable customer loyalty, market share, profitability, true cost effectiveness—is the context required to make sense of it all.
Back to the cockpit: You will sacrifice some speed as you climb or turn, and that's okay, as long as you know where you're going and are making adjustments that get you there in the most effective manner.
Keep your eyes on the prize. Effectiveness is what matters.