
Adding the Customer to the Call Center's Quality Equation

In today’s contact centers, the term “customer experience” is used more liberally than a bottle of SPF 50 at a beach party for redheads. This is particularly true when managers are talking about their quality monitoring programs. While they truly believe that their monitoring efforts are all about ensuring a positive customer experience, the truth is that the customer’s perspective and opinions are typically left out of the quality equation.

Consider the typical quality monitoring approach: quality assurance (QA) personnel listen to and rate calls – scoring agents on two major components: 1) how well the agent complied with the company’s policies and procedures (as well as the accuracy of the information provided); and 2) how courteous, helpful and professional the agent was during the interaction.

Now, take a closer look at the second component – shouldn’t the customers themselves be the true judge of such agent behaviors and performance attributes? Asking a supervisor or QA specialist to rate how satisfied a customer was with an agent’s performance is the equivalent of asking a waiter or a chef to rate how much a diner enjoyed his or her meal.

Following this traditional monitoring approach, centers provide agents with coaching and feedback that doesn’t always gel with actual customer expectations and preferences, causing customer satisfaction ratings to plateau at “mediocre” – despite the center’s heavy investment in quality monitoring resources.

None of that will change until centers start incorporating the “voice of the customer” (VOC) into their QA program via post-contact transactional customer surveys, says Mike Desmarais, founder and president of Service Quality Measurement (SQM) Group, a Canadian firm specializing in QA services for call centers.

“Traditional quality assurance is limited in terms of its impact on how it can help companies run their call centers. Our research has shown it to have no real impact on [customer satisfaction] performance. Call centers would rather take ownership for judging the customer experience than actually letting the customer be the judge. However, customers should be the main judge of call quality – and centers do that by integrating the ‘voice of the customer’ with the compliance aspect of the traditional QA process.” 

Call Centers Listening to the Voice of the Customer

The thought of revamping the entire monitoring process may cause minor panic attacks among managers already challenged to do more with less in the current economy. However, such fears will likely be assuaged when these managers hear about the results achieved by organizations that have already adopted a VOC-based QA program and when they learn that doing so is not as challenging as it may seem. 

Rogers Communications Customer Care (based in Toronto, with centers throughout Canada) “added the customer” to its QA efforts in the summer of 2007 – a move that has elevated the organization to elite status. Since implementing its “My Customer” program, Rogers reports a whopping eight-point improvement in its top-box customer satisfaction rate on the cable side of the business, and a healthy two-point improvement on the wireless side. In addition, Rogers has seen a significant improvement in agent performance and quality. Due to such success, in 2008 Rogers implemented the My Customer program at its National Technical Service Delivery (NTSD) centers, which handle tech support issues.

“The ‘My Customer’ program is a primary initiative and listening post that enables and extends our coaching capability while driving toward an industry-leading customer experience,” says Roland Pauksens, senior vice president of National Customer Care for Rogers.

Other organizations – including Pacific Gas & Electric (PG&E), GE Capital Solutions, and Amex Bank of Canada – have reported similar results since adding a VOC component to their quality monitoring and coaching programs.

Of course, you don’t achieve such results overnight. Successfully moving from a traditional monitoring program to a VOC-based QA initiative requires careful planning and a clear understanding not only of best practices in quality monitoring, coaching and customer satisfaction measurement, but also of how these elements all fit together. 

Following are the key tactics and strategies embraced by centers that have effectively added the customer to the quality equation.

Administer a concise and timely transactional survey to capture the voice of the customer – and tie the results to individual agents.

Using either an automated (IVR- or email-based) post-contact survey or one administered by a live person via phone, top centers are able to capture the customer’s experience and direct feedback on agent performance immediately (or within a few hours) after the interaction has taken place. Some centers opt to conduct the post-contact surveys in-house, while others use a third-party surveying specialist, of which there are many. Either way, Desmarais recommends administering the survey in the same channel through which the customer contacted the center – i.e., an IVR-based survey or live phone survey for customers who called the center, and an email-based survey for customers who interacted with an agent via email or chat. 

Transactional surveys are typically quite brief – five to seven questions – and focus on such things as the customer’s overall satisfaction with the call center (wait times, IVR experience, call routing, etc.), satisfaction with the agent, and whether or not the customer’s issue was resolved.

At Amex Bank of Canada, the questions regarding the agent get pretty specific (without making the overall survey too long) to elicit invaluable feedback for agents. “We ask customers to rate the agent on knowledge and authority, communication, whether the caller was treated as a valued customer, as well as on listening and understanding,” says Lucy Panacci, project analyst for Amex Bank of Canada’s Line Optimization Team.  

At most contact centers with a VOC process in place, survey response options feature one or more of the following, depending on how the question is posed:
  • a numeric rating scale (e.g., 1-5);
  • a satisfaction scale (“very satisfied”, “satisfied”, “neutral”, “dissatisfied”, “very dissatisfied”); or
  • an agreement scale (strongly agree, agree, neutral, disagree, strongly disagree), as sketched below.
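
To make the survey structure concrete, here is a minimal sketch (in Python) of how such a brief, channel-matched transactional survey might be modeled. The class names, question wordings and channel labels are illustrative assumptions, not any center’s or vendor’s actual schema.

```python
# Hypothetical model of a post-contact transactional survey, following the
# practices described above: 5-7 questions, a mix of scales, and a survey
# channel matched to the contact channel. All names here are assumptions.
from dataclasses import dataclass, field
from enum import Enum

class Scale(Enum):
    RATING_1_TO_5 = "rating_1_to_5"   # numeric 1-5 scale
    SATISFACTION = "satisfaction"     # very satisfied ... very dissatisfied
    AGREEMENT = "agreement"           # strongly agree ... strongly disagree
    OPEN_ENDED = "open_ended"         # free-text comment

@dataclass
class Question:
    text: str
    scale: Scale

@dataclass
class TransactionalSurvey:
    contact_channel: str              # "phone", "email", or "chat"
    questions: list = field(default_factory=list)

    def survey_channel(self) -> str:
        """Match the survey channel to the contact channel, per Desmarais."""
        return "ivr_or_live_phone" if self.contact_channel == "phone" else "email"

# Example: a brief five-question survey tied to a phone contact
survey = TransactionalSurvey(
    contact_channel="phone",
    questions=[
        Question("Overall, how satisfied were you with this call?", Scale.SATISFACTION),
        Question("Was your issue resolved on this call?", Scale.AGREEMENT),
        Question("How would you rate the representative who helped you?", Scale.RATING_1_TO_5),
        Question("The representative treated me as a valued customer.", Scale.AGREEMENT),
        Question("Is there anything else you would like to tell us?", Scale.OPEN_ENDED),
    ],
)
print(survey.survey_channel())  # -> "ivr_or_live_phone"
```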

The best surveys also feature a couple of open-ended questions that give customers the opportunity to provide detailed responses or to elaborate on previous ones. A best practice, according to Desmarais, is to follow up any question for which the customer signified strong satisfaction or strong dissatisfaction with an open-ended question intended to capture the reasons behind such responses. 

Dr. Jodie Monger, president of Customer Relationship Metrics, agrees and adds that open-ended questions not only uncover causes of strong satisfaction/dissatisfaction; they help to reveal customers’ response errors. For instance, a customer might score an agent with a “1” on a scale of 1-5 thinking that “1” is the highest rating, when it’s really the lowest. “By reviewing the customer comments along with the survey scores during the quality control process,” Dr. Monger explains, “it would become apparent that the negative score was a mistake.”
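
Here is a minimal sketch of these two practices, assuming a 1-5 numeric scale: trigger an open-ended follow-up for strongly positive or strongly negative ratings, and flag extreme scores that carry a comment so a human can read the verbatim alongside the number during quality control. The function names, threshold and follow-up wording are illustrative assumptions.

```python
from typing import Optional

EXTREME_SCORES = {1, 5}  # strong dissatisfaction / strong satisfaction on a 1-5 scale

def follow_up_prompt(score: int) -> Optional[str]:
    """Return an open-ended follow-up question for extreme ratings, else None."""
    if score not in EXTREME_SCORES:
        return None
    if score == 5:
        return "What did the representative do particularly well?"
    return "What could the representative have done better?"

def needs_manual_review(score: int, comment: str) -> bool:
    """Queue extreme scores that carry a verbatim comment for human QC review."""
    return score in EXTREME_SCORES and bool(comment.strip())

# Example: a "1" accompanied by a glowing comment gets queued for review,
# where the response error Dr. Monger describes would become apparent.
print(follow_up_prompt(1))                                      # -> follow-up question text
print(needs_manual_review(1, "She was fantastic, thank you!"))  # -> True
```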

Of course, the key to any successful VOC-based QA program is how the survey results tie into the monitoring and coaching process – something we will cover in the next two sections.

Use internal quality monitoring methods to measure compliance only.  

In leading centers, once a customer survey is complete, the numeric scores as well as any verbatim comments from the customer are sent to a QA specialist in the contact center, who then accesses the recorded call* (or email/chat transcript) in question and, with monitoring form in hand, scores the agent solely on the compulsory compliance issues – how they handled the objective, non-negotiable requirements of the interaction. Examples include whether or not the agent used the proper greeting and closing (and followed other key script components), adhered to company- or industry-defined privacy/security policies, and imparted correct and accurate information. In centers whose call monitoring system “records” screen activity, QA staff can also evaluate how effectively agents filled in data and moved in and out of appropriate screens.

(*Note: For phone interactions, finding the call in question is made easy by today’s quality monitoring systems, most of which have a customer survey feature that links completed surveys  to recorded calls.)

The monitoring forms used by QA staff are generally concise and straightforward, with a simple “yes” or “no” option for most of the criteria.  For example, “Did the agent use the customer’s name during the call greeting?” 
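
As a rough illustration, the compliance portion of such a form might be modeled as a short yes/no checklist whose score is simply the share of criteria met. The criteria listed and the percentage scoring rule are assumptions made for the sake of the example, not a standard form.

```python
# Hypothetical yes/no compliance checklist and a simple percentage score.
COMPLIANCE_CRITERIA = [
    "Used the customer's name during the call greeting",
    "Followed the required greeting and closing",
    "Verified the caller's identity per privacy/security policy",
    "Provided accurate information",
]

def compliance_score(answers: dict) -> float:
    """Return the share of compliance criteria met, as a 0-100 score."""
    met = sum(1 for criterion in COMPLIANCE_CRITERIA if answers.get(criterion, False))
    return 100.0 * met / len(COMPLIANCE_CRITERIA)

# Example evaluation for one recorded call
answers = {
    "Used the customer's name during the call greeting": True,
    "Followed the required greeting and closing": True,
    "Verified the caller's identity per privacy/security policy": True,
    "Provided accurate information": False,
}
print(compliance_score(answers))  # -> 75.0
```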

By taking this compliance approach to monitoring, and having the call recording or text transcript on hand for agents to hear/read, the call center greatly reduces the problem of agents refuting or rejecting how a QA specialist scored them on a contact – a common occurrence in many centers.

But the real performance impact comes when the customer scores and feedback are factored in and shared with the agent, which brings us to how customer ratings and comments are incorporated into agent monitoring scores and feedback. 

Effectively incorporate customer ratings and verbatim comments into agent monitoring scores and feedback.  

After completing the compliance evaluation for the agent, the center’s QA specialist determines the agent’s overall quality score by combining the compliance score with the customer rating from the survey, taking any weighting issues into consideration. 

Desmarais recommends a 70/30 split – “70% of the total quality points on the customer side, 30% on the compliance side.” Rogers Communications has achieved its previously mentioned VOC success with a 60/40 split. “Sixty percent of an agent’s individual quality score is based on the caller’s customer satisfaction rating,” explains Rod Cook, director of customer advocacy and quality at Rogers’ Customer Care center, “with the remaining 40% based on how well the agent fulfilled certain essential call elements, as determined by the QA analyst using a clear-cut monitoring form.”   
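
For readers who want the arithmetic spelled out, here is a minimal sketch of that weighted blend using the splits cited above. The assumption that both the customer rating and the compliance score have been normalized to a 0-100 scale is ours, not SQM’s or Rogers’.

```python
# Weighted blend of the customer survey rating and the internal compliance
# score. Inputs are assumed to be on a common 0-100 scale (an assumption).
def overall_quality_score(customer_score: float,
                          compliance_score: float,
                          customer_weight: float = 0.70) -> float:
    """Blend the customer rating and the compliance score by weight."""
    return customer_weight * customer_score + (1 - customer_weight) * compliance_score

# SQM's recommended 70/30 split
print(overall_quality_score(customer_score=80, compliance_score=90))                         # -> 83.0
# Rogers' 60/40 split
print(overall_quality_score(customer_score=80, compliance_score=90, customer_weight=0.60))   # -> 84.0
```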
  
At Rogers and most other centers with a VOC-based QA initiative, once the QA person determines the overall quality score, they send it – along with the customer survey results, the compliance score sheet and the call recording (or text transcript) – to the agent’s supervisor or team leader, whoever typically provides coaching. The QA person might highlight any notable results, whether positive or negative, that he or she feels the supervisor should focus on. The supervisor then schedules a feedback session with the agent in question to go over the results and provide any necessary coaching, as well as deserved praise.

Ask any manager or supervisor in a call center that has adopted a VOC-based QA initiative, and they’ll tell you that their agents are much more motivated by direct customer feedback than they are by supervisor/team lead feedback.

“Because we hire people who care about people, our agents take customer feedback to heart,” says Cook of Rogers Communications. “They see it as being valuable as well as fair and objective. Comments are played back for agents during the feedback session, and because it’s actually the voice of the customer, it resonates with the agent much more than if such feedback was coming just from the [team manager] alone.”

“It’s a huge motivator,” says Panacci of Amex Bank of Canada. “This is direct feedback from the customer – how the customer perceived the agent’s performance; there’s no issue of an agent feeling that they received biased feedback from a supervisor. When our agents get positive feedback from a customer, they are so pleased and driven to continue such performance, and if the customer wasn’t satisfied with something about their performance, agents are highly motivated to improve in that area.”   

The direct customer feedback doesn’t just benefit the agent who handled the contact: Most centers use such feedback (along with the recording/transcript) as a coaching and training tool for other agents. For example, if an agent at Rogers Communications has an exemplary call, the center may use the call recording along with the positive customer rating and feedback to demonstrate to trainees not only what great service “looks” like, but also how it impacts the customer experience.

Involve agents in the management of the process.

 
To further ensure the success of their VOC-based QA programs, top centers actively involve agents in the implementation and maintenance of the initiative. As Panacci explains, moving from a traditional quality monitoring program to a VOC-based one represents a big shift; thus, it’s critical to get agents’ input and support early on. “Because this was a huge change management piece, the communication with our agents was very thorough and delicate,” Panacci explains. “We clearly explained the objective of the [program]. We also held several meetings where we had agents listen to calls, then we shared the customer survey results with the agents so they could see how customers rated the calls. This helped agents to see how their service impacts the customer, and what drives a ‘top box’ score.”

At Rogers Communications, agents have played an important role in the implementation and maintenance of the center’s My Customer initiative – providing invaluable feedback and recommendations that have led to important changes and improvements to the customer survey questions and the center’s monitoring form. “Agents are very much involved in the success of the program,” says Cook.

Commit to Listening to Your Customers

Organizations are always talking the talk of “the customer experience”; relatively few, however,  walk that talk when it comes to their monitoring practices. Many call centers back away from a true VOC-based QA initiative because it involves transferring a lot of control from the contact center over to the customer. Desmarais acknowledges that this can be a large and scary undertaking from a management perspective, but he implores organizations to take this next step, and, in fact, is unsympathetic toward those that don’t.   

“If you truly believe that the voice of the customer should be a critical measure, if you believe that your primary purpose is to be a world class call center and to provide great customer service, then why don’t you commit to this? Why don’t you change the way you traditionally do things with regard to quality? The payoff in terms of customer and agent satisfaction makes it hard not to.”