Published: April 19, 2018
In a contact center, business changes constantly. Customer expectations change. Technology certainly changes. Job expectations change. But have your metrics changed? If your organization is like many others, metrics tend to fall in the "set it and forget it" category. But if your business is changing, shouldn't your metrics follow suit?
Over the past two years, our contact center experienced changes which prompted us to look at our metrics. But before we implemented any changes, we knew we had to take care to avoid the pitfalls that come with introducing new metrics or changing existing ones.
Pitfall #1 - Launching too quickly
If you are planning on launching a new metric, one that will affect an agent's performance, what do you expect to happen if you don't prepare the agents adequately? Chances are, they will not meet the prescribed goal because they haven't been coached to achieve success in the metric, which will result in frustration, decreased engagement, and a decrease in customer satisfaction.
Our story? We knew that we wanted to launch a new Customer Experience metric on top of our existing Call Score metric. Our call quality had been very strong for several years, and we wanted to kick it up a notch. We started our research and planning in early 2016, carefully laid out what these new requirements would look like, and then started coaching to these new Customer Experience categories. We spent months doing this and began incorporating the markings into our call review process to get a baseline. It was not until the verbiage was ingrained in our culture that we formally launched the new metric. Overall, it took us nearly a year from the initial planning to the roll-out of the new expectations, but because we took our time, we got the buy-in we needed from staff, and we had a successful adoption of the new behaviors.
Pitfall #2 - Setting unrealistic expectations
We'd love to achieve 100% performance on every metric, right? If you figure out how to make that happen, please share! In reality, we know that if we set unrealistic expectations for a metric, it is going to backfire. If we set the bar too high, agents will become frustrated and so will managers, because how can you coach to something that is unattainable?
Our story? We've always focused heavily on quality in our Contact Center…not so much on efficiency. We didn't have an Average Handle Time (AHT) metric because we didn't want our agents to feel like they needed to rush people off the phones. But over the past few years, we started to feel that our quality was strong enough that it was finally time to work on our AHT. The new metric launched in January 2018, but the work started long before then. First, we made sure that we were coaching agents to reduce AHT - starting with lowering holds and After Call Work (ACW). We looked at the outliers in those areas and did targeted coaching with agents on how to eliminate holds (improving our work instructions was a big help) and speed up ACW (wrapping up while on the call, for example). After seeing progress in those areas, we knew our agents were ready to be successful with the new metric. As we prepared to launch it, we took six months' worth of data and analyzed it. Each month, what was our median AHT? What was our mode? What numbers were clear outliers? We used that information to determine our standard and goal ranges, knowing that we wanted our agents to feel they could be successful in attaining those ranges. When we announced the new metric, we made it clear to our agents that most of them were already meeting at least the standard range and that we were setting them up to be successful.
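The data analysis described above - finding each month's median, mode, and clear outliers to pick attainable standard and goal ranges - could be sketched in Python like this. The sample data, function name, and the IQR rule for flagging outliers are illustrative assumptions, not the team's actual method:

```python
from statistics import median, mode

# Hypothetical handle times in seconds for one month (illustrative data only)
aht_seconds = [310, 295, 300, 305, 305, 310, 620, 310, 315, 290, 330, 340]

def aht_ranges(samples, outlier_factor=1.5):
    """Summarize handle times and propose example standard/goal ranges.

    High outliers are flagged with the common IQR rule; the 'goal' ceiling
    is set at the median of typical calls and the 'standard' ceiling at the
    75th percentile, so most agents already meet at least the standard.
    """
    s = sorted(samples)
    n = len(s)
    q1 = s[n // 4]                       # rough 25th percentile
    q3 = s[(3 * n) // 4]                 # rough 75th percentile
    upper_fence = q3 + outlier_factor * (q3 - q1)
    typical = [x for x in s if x <= upper_fence]   # drop extreme calls
    return {
        "median": median(typical),
        "mode": mode(typical),
        "goal_max": median(typical),     # goal: at or below the typical median
        "standard_max": q3,              # standard: at or below Q3
        "outliers": [x for x in s if x > upper_fence],
    }

summary = aht_ranges(aht_seconds)
print(summary)
```

Running this per month, as the article describes, would surface both the targets most agents already hit and the outlier calls worth targeted coaching.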
Pitfall #3 - Unintended consequences
So you haven't rushed the launch of your new metric, and you've set realistic expectations that your agents should be able to meet…but you're not out of the woods just yet. How are you going to ensure that your new success doesn't cause another metric to struggle?
Our story? We knew that launching an AHT metric could cause agents to rush on their calls, thereby negatively affecting quality. To prevent this from happening, we took a few intentional actions. First, we created an awards program that incorporated BOTH quality and efficiency metrics - our All-Star Award. To win the award, an agent needed to perform successfully with AHT, call scores, and our other quality and efficiency metrics. This sent a distinct message to our agents that we expected them to perform well in the existing metrics alongside the new one. Second, we sent consistent messaging daily to staff applauding those who were doing well in both the quality and efficiency metrics. We wanted to send a daily reminder that it wasn't an either/or expectation but, rather, that we expected excellent performance in both areas. As a result, we saw that our quality was not negatively affected by the launch of the new AHT metric.
It's essential to improve your contact center continually, and that includes looking at your metrics. The next time you consider adding a new one or changing an existing metric, make sure that you take into account these three pitfalls. With planning, you will set your agents - and your contact center - up for success.
Have questions for Amber? Want to learn more about how U.S. Bancorp uses metrics to drive success? Attend Amber's ICMI Contact Center Expo session with Nick Stenberg.