Best of ICMI in 2021 - #7: Grade Customer Experience, Not Calls

It’s 3 p.m. on a Wednesday, and the internal Quality Assurance (QA) team, along with a few chosen members of Operations and Training, is gathering around the phone, ready for the eagerly awaited and sometimes dreaded weekly calibration call with an external client.

The research has been done, calls have been reviewed and submitted, and the QA team feels it has done its job to the best of its ability. The client opens the bridge, and they start to review the first call, which has scored a very encouraging 87% on the internal scorecard. But after a heated twenty-minute review of the call with the client, the new score is 63%, and the whole internal QA team is left scratching their heads in disbelief. How on earth did that happen?

Does the above sound familiar? It’s an all-too-common occurrence: the inability to get alignment on how to score a call. Usually, the trouble stems from how a 1-5 scoring scale for each element of a call is applied, and how one person’s opinion differs from another’s.

How can this be changed or challenged? Good question, and while I don’t have a silver bullet to solve the problem altogether, I do have a suggestion for changing the methodology of the calibration session so that internal QA scoring gets closer to what the external client wants and, more importantly, improves the customer experience.

In most QA departments these days, it’s commonplace to see the standard 1-5 scoring matrix in place. Usually, the matrix has been shared by the client with some guidelines. Sometimes the matrix has been thoroughly thought out by the client, but not always. Too often, a matrix is launched haphazardly, created solely because someone said there was a need, with no input or launch strategy, and it only adds to the confusion. I propose it might be time to step away from the 1-5 scoring scale.
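
To make the ambiguity concrete, here is a minimal sketch of how a weighted 1-5 scorecard typically rolls up into a percentage. The section names, weights, and ratings are hypothetical, not drawn from any particular client matrix, but they show how two reviewers reading the same guidelines differently can land twenty-plus points apart on the same call:

```python
# Hypothetical 1-5 scorecard: each call-flow section has a weight,
# and each reviewer rates it 1 (poor) to 5 (excellent).
SECTIONS = {"Greeting": 10, "Discovery": 30, "Resolution": 40, "Closing": 20}

def scorecard_percent(ratings: dict) -> float:
    """Roll weighted 1-5 ratings up into the familiar percentage score."""
    earned = sum(SECTIONS[name] * rating for name, rating in ratings.items())
    possible = sum(weight * 5 for weight in SECTIONS.values())
    return 100 * earned / possible

# The same call, rated by two reviewers who interpret the scale differently:
internal = {"Greeting": 5, "Discovery": 4, "Resolution": 4, "Closing": 5}
client   = {"Greeting": 4, "Discovery": 3, "Resolution": 3, "Closing": 3}

print(scorecard_percent(internal))  # 86.0, an encouraging internal score
print(scorecard_percent(client))    # 62.0, the client's view of the same call
```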

First, let’s ask what the purpose of the QA form and the calibration sessions is in the first place. People might answer that it’s for senior leadership to know how the effort is going, or for everyone to get aligned with what the clients want. However, while it’s important to have a QA score to baseline performance and for the senior leadership team to gauge progress against goals, surely the biggest reason for the score should be to ensure that customers are being dealt with effectively and that their issues are resolved well. If the QA scoring sheet and the calibration sessions do not do that, then they are not doing their job.

While there are a number of ways to tackle this problem, the one I would like to share is a move away from scoring and toward evaluation. I’m not suggesting we evaluate the call, but rather the customer experience. In essence, the QA process would remain the same: there would be a call-flow, and the call would be broken into sections. The difference is that each section would be judged simply on whether the agent had been effective or ineffective. While not 100% black and white, it’s certainly a step away from the many grey areas of the 1-5 method.
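
For contrast, here is the same roll-up sketched under the effective/ineffective approach. The sections are again hypothetical; the point is that every judgment collapses to a single yes/no question, so there is far less room for two reviewers to diverge:

```python
# Same hypothetical call-flow sections, now judged on one question each:
# was the agent effective for the customer here, or not?
SECTIONS = ["Greeting", "Discovery", "Resolution", "Closing"]

def experience_percent(judgments: dict) -> float:
    """Score = share of sections where the agent was effective."""
    effective = sum(1 for name in SECTIONS if judgments[name])
    return 100 * effective / len(SECTIONS)

# "Was the agent effective?" is far easier to calibrate than "Is this a 3 or a 4?"
call = {"Greeting": True, "Discovery": True, "Resolution": False, "Closing": True}
print(experience_percent(call))  # 75.0
```

Leadership still gets a percentage to trend over time, but now every point in it traces back to a binary, customer-centred judgment rather than a contested rating.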

Under this method, it becomes easier for the agent, QA, and Training departments to get aligned, because they now have one thought in their collective heads - “Was the agent effective?” - rather than having to work out what a “1” looks like. Meanwhile, the senior leadership team still gets its QA score, and this time they know what it means. There is now a real partnership between QA and CSAT, as both are looking at the same thing: the customer experience.

In my experience, this approach has eliminated some of the pain points of the dreaded Wednesday calibration session, and it has had a positive impact on driving up CSAT scores. As we all try to improve customer experience, it might be worth trying this revision to QA scoring to make sure everyone is on the same page.

Topics: CSAT, Customer Experience, Analytics And Benchmarking