
Three Keys to Mastering Continuous Improvement

I’ve shared several recommendations for live chat metrics with the ICMI community: some pertaining to operational efficiency and customer service quality (including metrics representing visitors who never engaged a chat agent), and some to conversions and agent mentoring. This article is not about specific metrics (read the linked articles above if that’s what you’re looking for), but about how to approach performance management as a team strategy.

All of the metrics I’ve recommended have been shared with the understanding that, to be of any real value, they must be tracked accurately and used to inform tweaks (or more substantive changes) to team processes. These changes could include agent training, shifting schedules, adjusting workflows/routing, changing pre-made messages, changing the appearance of your chat invitation, etc.

But what if some of those changes are outside what you’re realistically capable of influencing? And is there a way to increase the impact of your performance management initiative simply by changing the way you present your data?

To get the most from any performance management initiative, you must consider three things:

  1. metric hygiene/clarity
  2. what possible solutions can and cannot be implemented
  3. to whom you’re going to present your data and what that means for how you present it

This might seem overwhelming to those who don’t have any experience with this kind of data collection, analysis, and implementation. How can you be confident about what solutions are relevant and realistic (and confident about details like how long you should wait before trying a different solution)?

If you don’t yet feel comfortable handling this kind of process, you can read more about getting started with live chat performance management here.

For those who have some practice with performance management already, let’s dig a little deeper into each of those three considerations.

Metric Hygiene/Clarity

If you are using simple, specific goals, your choice of metrics will probably be pretty intuitive. The trick to “hygienic” data is then to refine those metrics based on the nuances of your business, and to track performance that way consistently.

For instance, if your goal is to reduce the live chat transfer rate by getting visitors to the right agent the first time, you may want to filter certain transfers out of your tracking. Instances of user error, in which visitors select the wrong team/department from your pre-chat survey, and instances of visitors asking to speak with a specific agent might be smart to exclude from data collection for this goal. On the other hand, tracking that user error data may point to necessary changes to the pre-chat survey: perhaps the team/department selection could be made clearer to users, and that could have a significant impact on how customer service interactions begin. In the case of transfers to specific agents, those requests may happen so infrequently that any effort to change routing to accommodate them would not be worth the trivial impact on the total number of transfers, but you might still want to exclude them from your transfer rate calculation.
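To make that filtering concrete, here is a minimal sketch of how a transfer rate with exclusions might be computed. The record layout, field names, and reason labels (transferred, transfer_reason, the two excluded reasons) are hypothetical stand-ins for whatever your chat platform actually exports.

    # Minimal sketch: a "hygienic" transfer rate that ignores excluded transfer reasons.
    # Field names and reason labels are hypothetical; adapt them to your platform's export.
    EXCLUDED_REASONS = {
        "visitor_selected_wrong_department",  # user error in the pre-chat survey
        "visitor_requested_specific_agent",   # too infrequent to justify routing changes
    }

    def transfer_rate(chats):
        """Share of chats transferred, excluding the reasons filtered out above."""
        total = len(chats)
        transfers = sum(
            1 for chat in chats
            if chat["transferred"] and chat["transfer_reason"] not in EXCLUDED_REASONS
        )
        return transfers / total if total else 0.0

    sample = [
        {"transferred": True,  "transfer_reason": "tier_one_knowledge_gap"},
        {"transferred": True,  "transfer_reason": "visitor_selected_wrong_department"},
        {"transferred": False, "transfer_reason": None},
    ]
    print(f"Filtered transfer rate: {transfer_rate(sample):.0%}")  # 33% rather than 67%

Tracked this way, the metric moves only when transfers you can actually influence change, which keeps the data aligned with the goal.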

How you refine the data depends on what processes are painful for you and/or your customers, as well as what is likely to meaningfully impact your goal.

You may need to collect data differently for several weeks before it becomes clear how a particular metric should actually be defined and tracked, and each refinement restarts the clock on data collection. It’s better to settle this before you start implementing changes: doing so reduces confusion about what the data represents, makes it easier to calculate improvement over time, and protects against arguments about data quality.

Your Universe of Possible Solutions

Performance management is a perpetual, cyclical process, but it can be broken into two stages: analyzing and reacting. Analyzing entails collecting and interpreting data about the activities your team is performing and the results of those activities. Reacting covers the activities themselves, especially how they are adjusted in response to your ongoing analysis.

For performance management to work, you need to have metrics that you can meaningfully analyze, but you also need to have activities you can undertake or changes you can implement in order to directly impact those metrics.

The easiest way to understand this point is to ask yourself: what could I do about it?

There’s no point in tracking your live chat transfer rate if you don’t have viable means of adjusting how transfers occur. Let’s say that you want to reduce the transfer rate, that you’ve identified a lack of knowledge among your tier one agents as a major cause of transfers, and that you have some levers you can pull to improve the situation, but with limitations: you don’t have the budget to train your entire tier one team, though you might be able to establish some informal, ad hoc peer coaching.

In that example, you might track whether coaching impacts transfer rate (probably at the individual agent level, both by coach and by the tier one agent receiving coaching) and how peer coaching impacts average handle time and first contact resolution for both agents involved. You might also track whether certain times of day see lulls or surges in the need for coaching, and whether the need for coaching declines over time (which may suggest that tier one agents have become capable of handling the inquiry without a coach, and may even be able to serve as coaches themselves to newer tier one agents).
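As a rough illustration rather than a prescribed implementation, here is one way those coaching sessions might be aggregated by tier one agent and by hour of day, to spot lulls and surges in coaching demand and to see whether an individual agent’s need for coaching is declining. The record layout and field names (agent, coach, timestamp, resolved_without_transfer) are hypothetical.

    # Sketch: aggregate hypothetical peer-coaching session records
    # by tier one agent and by hour of day.
    from collections import defaultdict
    from datetime import datetime

    sessions = [
        {"agent": "casey", "coach": "dana", "timestamp": "2024-03-04T10:15:00",
         "resolved_without_transfer": True},
        {"agent": "casey", "coach": "dana", "timestamp": "2024-03-11T14:40:00",
         "resolved_without_transfer": False},
        {"agent": "riley", "coach": "dana", "timestamp": "2024-03-11T10:05:00",
         "resolved_without_transfer": True},
    ]

    outcomes_by_agent = defaultdict(list)  # did the coached chat avoid a transfer?
    sessions_by_hour = defaultdict(int)    # when is coaching demand concentrated?

    for s in sessions:
        hour = datetime.fromisoformat(s["timestamp"]).hour
        outcomes_by_agent[s["agent"]].append(s["resolved_without_transfer"])
        sessions_by_hour[hour] += 1

    for agent, outcomes in outcomes_by_agent.items():
        rate = sum(outcomes) / len(outcomes)
        print(f"{agent}: {len(outcomes)} coached chats, {rate:.0%} resolved without transfer")

    for hour in sorted(sessions_by_hour):
        print(f"{hour:02d}:00  {sessions_by_hour[hour]} coaching sessions")

Comparing the same agent’s numbers month over month would show whether the need for coaching is actually shrinking, and whether that agent is ready to coach others.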

Think about what possible intelligence you might gather from the data, and the resources and limits impacting your ability to address what you might have learned. This exercise may inform how you scope or refine your metrics, as it is pointless to track something that you can’t respond to—with one exception, noted immediately below.

Presenting Your Data for Secondary Impact

The one situation in which you’d want to track data you can’t effectively respond to is when you are making the case that you should be able to respond to it, and should therefore be granted the resources and/or authority you need to do so.

For instance: “As you can see, average handle time has been increasing steadily for three months. Unfortunately, without budget to hire more experienced staff or approval to spend an hour per day on coaching, there’s not much we can do to impact this.”

If you can’t respond to the data and you can’t use the data to make the argument for what you need in order to respond to it, then it’s almost certainly a waste of time and energy to track that particular metric. Identify a more appropriate metric if that’s the case.

If you can respond to the data, however, you need to consider its secondary impact. Your primary impact is the performance management process itself: gathering accurate data to inform improvements, then making and optimizing those improvements again and again.

Secondary impact is what happens when you present performance data to other stakeholders. Maybe it’s your team who needs to see the impact of the painful process changes you’ve asked them to undergo. Maybe it’s your supervisors who need to see the way you and your team are using data to continuously improve. Maybe it’s your customers who need to have their expectations about customer service reset.

Consider who would benefit from seeing evidence of the progress of your performance management process, how you need to present that evidence to that particular audience in order to realize that benefit for them, and whether those presentation needs have implications for how you collect and/or respond to data.

For example, measuring the quarterly increase in the number of agents qualified to coach may not help you improve the quality of your peer coaching initiative, but it may be inspiring for the agents themselves to hear, or an opportunity to earn recognition from your own supervisors; if so, you should incorporate it into your data collection process.

It’s a Team Strategy

The point of conceptualizing performance management as a team strategy is to build a sense of shared responsibility by better connecting resources, staff, and results. Who does the data represent? Which data points represent which parts of the process you’re striving to improve? How can you be as efficient and accurate as possible in your data collection? How does day-to-day work impact the data, both in how it measures against goals and in how it is defined and collected?

To master performance management at the team level, consider metric hygiene, realistic responses to the data, and potential secondary impact.