Date Published: April 22, 2019
Metrics, like a ton of bricks, weigh on the contact center brain. During a recent #ICMIChat, we talked about scoreless QA and other numberless processes for evaluating agent performance. Having spent many years in a contact center environment run on numbers and nothing else, I find it hard to grasp how a scorecard without a number would work. Hurtling toward ICMI Contact Center Expo, I have an existential question on my mind: "To measure or not to measure?" And as we get closer, the question becomes less existential and more pragmatic: what, and how, should we measure?
Measurement is the key to quality management and continuous improvement. How else would we know how good or bad our performance truly is against objectives, the lifeblood of contact center management? But with all these numbers, systems, tools, and platforms, are our customers and stakeholders really better off?
If the point is to resolve customers' problems and make their experience as satisfying and low-effort as possible, then we need something to guide us. The trick is understanding what that guide should look like.
In my opinion, the traditional call center metrics, Average Handle Time (AHT), Service Level, and Average Speed of Answer (ASA), which apply just as well to non-voice channels (chat, email, SMS, social care), have to exist. Sorry to those who so badly want to expel them from our vocabulary. But from my perspective, we need them as boundary markers that gauge how well we're responding to our customers. We can't eliminate these metrics if we want even a minimal glimpse into how quickly our teams are responding, and what our customers go through while waiting for us.
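For readers newer to these metrics, here is a minimal sketch of how they are computed from call records. The record fields and sample numbers are illustrative assumptions, not any real platform's schema; the 80/20 service-level convention (answer 80% of calls within 20 seconds) is a common industry target.

```python
from dataclasses import dataclass

# Illustrative call records; field names are assumptions, not a real platform's schema.
@dataclass
class Call:
    wait_seconds: float    # time the caller waited before an agent answered
    handle_seconds: float  # talk time plus after-call work

calls = [
    Call(wait_seconds=12, handle_seconds=240),
    Call(wait_seconds=45, handle_seconds=310),
    Call(wait_seconds=8,  handle_seconds=180),
    Call(wait_seconds=95, handle_seconds=420),
]

# Average Handle Time (AHT): mean of talk time plus after-call work
aht = sum(c.handle_seconds for c in calls) / len(calls)

# Average Speed of Answer (ASA): mean wait before an agent picks up
asa = sum(c.wait_seconds for c in calls) / len(calls)

# Service Level: share of calls answered within a threshold (20s here, per the 80/20 target)
threshold = 20  # seconds
service_level = sum(c.wait_seconds <= threshold for c in calls) / len(calls)

print(f"AHT: {aht:.0f}s, ASA: {asa:.0f}s, Service Level: {service_level:.0%}")
```

The same shape works for non-voice channels: swap wait time for first-response time and handle time for total handling effort per contact.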
Beyond these, my highlight list of metrics includes Employee Satisfaction, Agent Turnover, and Agent Schedule Adherence, as indicators of the type of work environment we’re providing. These metrics have a powerful impact on the cost structure and profitability of our contact center; the higher the level of satisfaction and agent retention, the less costly it becomes to hire and train new agents and team members.
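The retention-to-cost link above can be sketched with back-of-the-envelope arithmetic. All of the figures below (headcount, replacement cost, turnover rates) are hypothetical placeholders; none come from the article.

```python
# Hypothetical inputs: a 200-agent center where replacing one agent
# (recruiting + hiring + training) costs $5,000.
headcount = 200
replacement_cost = 5000.0

def annual_turnover_cost(turnover_rate: float) -> float:
    """Agents replaced per year times the cost to hire and train each one."""
    return headcount * turnover_rate * replacement_cost

# Cutting annual turnover from 60% to 40% means 40 fewer replacements a year:
# 200 * 0.20 * $5,000 = $200,000 saved.
savings = annual_turnover_cost(0.60) - annual_turnover_cost(0.40)
print(f"${savings:,.0f}")
```

Even with conservative inputs, the arithmetic shows why employee satisfaction and retention metrics belong on the highlight list: small changes in turnover compound into large hiring and training costs.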
Training and learning metrics are likewise critical to understand how prepared we are to solve customer problems. Graduation rates and scores, training attrition, interval QA and performance metrics show us how efficient we are at controlling the potentially high costs of our training and retraining programs.
At Callzilla, as at most of the contact centers whose leaders I interact with, we are trying to do more with less. Specifically, we are tasked with making our QA and training programs more productive and less costly. Speech analytics is the golden child that we hope will let us evaluate more customer interactions with less subjectivity, fewer errors, less time, and a vastly reduced cost. By doing so, we expect to cut our overall cost of QA monitoring by 50% and complete QA tasks in a small fraction of the time they currently take. In training, our eLearning initiative should similarly let us reduce training times by up to 40%, improve the learning experience by 70% (as measured by agent surveys), and reduce the overall cost structure of our training program by 50%.
These are the KPIs that we believe have a measurable impact on our ability to perform, to operate profitably, and to satisfy our customers in a much more meaningful way than we do today. Sure, AHT and Service Levels matter. But the actual drivers of customer satisfaction and effortless experience are training and QA. If we manage those well and make the contact center more productive and more cost-effective, customer satisfaction becomes a much more achievable objective.