
Metrics for Training

In my 18-year career managing the training function in contact centers, I have collected a great deal of data on the training I managed, but I have often struggled with what to do with it and what it really means. I have come to realize that while I could slice and dice the data in a hundred different ways, that wasn’t always an illuminating process or a good use of my time.

I have found that it helps to know my stakeholders and what is important to them so that I can provide the right data to the right stakeholder while collecting the data needed to manage training and trainers effectively. Let’s look at what is important to a few training stakeholders.


HR/Recruiting

From my experience, HR and Recruiting were always interested in two metrics for new-hire classes: the percentage of recruits who actually showed up on the first day (Show Rate) and the percentage of those who showed up who then successfully completed the training and moved on to production (Completion Rate).

HR looked at these numbers to measure their recruiting efforts. They were interested in whether they had met the number of new hires requested by the manager (Show Rate), as well as whether those people fit the profile for the job by successfully completing the training and actually starting the job (Completion Rate).

Recruiting was also interested in the attrition reasons for those who left the job during training. They couldn’t impact all of those attrition reasons, but some they could. If a trainee was let go because they didn’t have the keyboarding skills to operate the computer while talking with customers (or couldn’t use a mouse, or understood nothing about operating a computer), then it was a failure in our screening that Recruiting would investigate and try to prevent in the future. On the other hand, if someone had a major, unforeseen family emergency that required them to stop working for a period, that was not something Recruiting could control or foresee.

Using the Show and Completion Rates, HR/Recruiting could fine-tune their efforts, even matching rates to sources to identify which sources provided the best candidates. Completion Rate wasn’t entirely Recruiting’s responsibility, since the Team Supervisor and the Trainer also greatly influenced it, but it did partially reflect the ability to recruit the right people for the job.
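
As an illustration, here is a minimal Python sketch of how these two rates might be tracked per recruiting source; the roster fields and figures are hypothetical, not from any real system.

```python
from collections import defaultdict

# Hypothetical roster: one record per candidate recruited into a class.
roster = [
    {"source": "job_board", "showed": True,  "completed": True},
    {"source": "job_board", "showed": True,  "completed": False},
    {"source": "referral",  "showed": True,  "completed": True},
    {"source": "referral",  "showed": False, "completed": False},
]

def rates_by_source(roster):
    """Return {source: (show_rate, completion_rate)}.

    Show Rate       = showed on day one / total recruited from that source
    Completion Rate = completed training / showed on day one
    """
    counts = defaultdict(lambda: {"recruited": 0, "showed": 0, "completed": 0})
    for rec in roster:
        c = counts[rec["source"]]
        c["recruited"] += 1
        c["showed"] += rec["showed"]
        c["completed"] += rec["completed"]
    return {
        src: (c["showed"] / c["recruited"],
              c["completed"] / c["showed"] if c["showed"] else 0.0)
        for src, c in counts.items()
    }

print(rates_by_source(roster))
# {'job_board': (1.0, 0.5), 'referral': (0.5, 1.0)}
```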

Managers

Besides the Show and Completion Rates, the managers for the agents were also interested in whether training met business objectives. For new hires, this meant whether they were trained well enough to get fully up to speed in a reasonable amount of time. We never expected new hires to perform at 100% on their first day on the phones, but for each program, Training and Management had set metric benchmarks for the first through the tenth week out of training (and sometimes longer for more complex programs), and each new hire was expected to hit those benchmarks.

If Training had not prepared the new hires well enough to hit or exceed the performance benchmarks for at least the first two weeks after training, it was on us to improve the training or to point out the things beyond our control that influenced the results. In one particular incident, new hires had been pulled out of class on the second day to triage an overwhelming number of calls caused by an onslaught of bad weather around the country. A couple of days later, we were given just five days to finish the normal ten days of training, so this group took a bit longer to get fully up to speed, in a situation Training could not control. Everyone had made their best effort in an odd situation.
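
One lightweight way to track this kind of benchmark attainment is a simple per-week comparison; the week numbers and targets below are illustrative only.

```python
# Hypothetical benchmarks for one program: week out of training ->
# minimum expected value of the agreed metric (units are program-specific).
BENCHMARKS = {1: 4.0, 2: 5.5}

def weeks_below_benchmark(agent_weekly, benchmarks=BENCHMARKS):
    """Return the weeks in which a new hire missed the benchmark.

    agent_weekly maps week number -> the agent's actual result that week.
    """
    return [wk for wk, target in sorted(benchmarks.items())
            if agent_weekly.get(wk, 0.0) < target]

new_hire = {1: 4.2, 2: 5.1}
print(weeks_below_benchmark(new_hire))  # -> [2]: week 2 needs attention
```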

For refresher and cross-training, management also looked to see whether the training changed agents’ behavior in the ways they desired. Of course, they were not always clear on the change they were looking for, so I always asked lots of questions. Ultimately, the training was successful if sales retention rates increased, or the client was happy with the way agents mirrored customers’ concerns, or agents more quickly located stranded motorists on a map, or whatever the issue was that the training was supposed to address. Having a clear idea of the business objective before we designed the training always made things much easier.

Managers were not really interested in end-of-course surveys or the post-course surveys we administered 30-45 days after training was complete. All they cared about was that I, as the training manager, was analyzing the results, identifying actions for improvement, and addressing any issues the surveys uncovered. As long as that was happening, no one outside of the Training Department was really interested in the survey data except at the very highest level.

One other thing account managers wanted to know was whether trainees had fulfilled all of the requirements to successfully complete the training. Managers weren’t interested in the specifics unless there was a problem with someone. It was my job as Training Manager to be concerned with the specifics and to verify that all the i’s had been dotted and t’s crossed in the training, and that we had the documentation to back it up when necessary.

Executives

The contact center executives really just wanted the 30,000-foot view of the training data. Most of the time they were quite happy if they knew that the managers and I were addressing any issues, and if I could produce data for reports or to back up any requests I made. I tended to report the key data—generally completion rates and benchmark attainment for the first two weeks after training—on a monthly basis and as a percentage to goal, with high-level notes about any key successes or problems and general notes about improvement efforts. This gave the executives information but also assurance that the training function was working to meet the needs of the business.
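
A report like that boils down to simple percentage-to-goal arithmetic; the metric names and figures below are hypothetical.

```python
def percent_to_goal(actual, goal):
    """Express a result as a percentage of its goal for monthly reporting."""
    return 100.0 * actual / goal

# Illustrative monthly roll-up: metric -> (actual, goal)
monthly = {
    "completion_rate":        (0.92, 0.95),
    "week_1_2_benchmark_met": (0.88, 0.90),
}
for metric, (actual, goal) in monthly.items():
    print(f"{metric}: {percent_to_goal(actual, goal):.1f}% to goal")
# completion_rate: 96.8% to goal
# week_1_2_benchmark_met: 97.8% to goal
```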

Trainers

Trainers were interested in every bit of data I could provide them, but quite often they didn’t really know what it meant or what they could do to “move the needle” on a metric. Again, that was part of the Training Manager’s role: to coach them, help them interpret the data, and keep them from overreacting to things they could not control. Most of the trainers I have worked with are very conscientious and always looking to improve. With them, I tended to focus on survey data, training completion rates, and whether trainees met their benchmarks in the first two weeks after training. We generally discussed class management issues as they arose rather than waiting for a debrief later.

Training Managers

Some training metrics are important only to training managers. For instance, I regularly looked at the ratio of trainees to trainers to make sure that no one was overloaded or underutilized. Since I oversaw 17 trainers at 5 different locations, I couldn’t always “be there” to prevent bad situations, but I could identify and address trends in overwork or underutilization. We did find that very small classes (fewer than 5 participants) and very large classes (more than 18-24 participants, depending on the program) performed more poorly than normal-sized classes. The ideal class size was between 8 and 15 participants in most of our programs, but we could not always keep classes to that ideal.
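
Those thresholds translate naturally into a simple screening rule; the cutoffs below follow the ranges above, with the large-class ceiling treated as a per-program parameter.

```python
def flag_class_size(size, ideal=(8, 15), hard_min=5, hard_max=18):
    """Classify a class roster size so outliers can be reviewed.

    hard_max varied by program (18-24 in the programs described above).
    """
    if size < hard_min:
        return "too small"
    if size > hard_max:
        return "too large"
    lo, hi = ideal
    return "ideal" if lo <= size <= hi else "acceptable"

for size in (3, 9, 17, 25):
    print(size, flag_class_size(size))
# 3 too small / 9 ideal / 17 acceptable / 25 too large
```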

I was also interested in how many hours each trainer was spending actually delivering training and managing a class directly. If a trainer was in class for 8 hours a day, when did they have time to do reports and documentation, research questions, and prepare for the next day? On top of that, if they were teaching classes back-to-back-to-back, they were a candidate for burnout, and we were not able to follow their classes to the floor to help new hires make the transition. This was not effective, so I kept my eye on training hours so I could address problematic situations and use the data to support hiring additional resources when required. I wasn’t always successful, but I kept making the effort.
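
Here is a minimal sketch of how such a workload check might look; the schedule data and the six-hour threshold are assumptions for illustration, not a published standard.

```python
# Hypothetical schedule: trainer -> daily in-class hours for one week.
schedule = {
    "trainer_a": [8, 8, 8, 8, 8],  # in class all day, every day
    "trainer_b": [6, 6, 4, 6, 0],  # time left for prep and follow-up
}

MAX_AVG_CLASS_HOURS = 6  # assumed threshold

def overloaded_trainers(schedule, limit=MAX_AVG_CLASS_HOURS):
    """Flag trainers whose average daily class time leaves no room for
    prep, documentation, or following a class to the floor."""
    return [name for name, hours in schedule.items()
            if sum(hours) / len(hours) > limit]

print(overloaded_trainers(schedule))  # -> ['trainer_a']
```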

In the end, what matters most with training metrics is that the data communicates whether business objectives are being met. As I described in Is Training the Answer?, you can’t know this unless you have identified the specific business outcomes the business owner expects. Consequently, good training metrics begin well before the training does.

Looking for more training best practices? Join Elaine in Alexandria for the upcoming ICMI Symposium! She'll be leading a Trainer Development Workshop.