Original Publication: Customer Management Insight - August 2007
Today’s call centers have become obsessed with customer satisfaction; unfortunately, not all know how to measure it.
Without a way to formally and accurately gauge the impact that an organization’s self-proclaimed “customer-centric” approach has on caller opinion and behavior, true customer loyalty will remain out of reach. Too many centers either fail to survey callers following an interaction, or have a poor surveying process in place. Highlighted here are several common and costly pitfalls that hinder most call centers' customer satisfaction measurement efforts.
Antiquated, Inappropriate Survey Methods
The most common method of customer satisfaction measurement is the follow-up phone survey, where a person — often a survey specialist at a third-party firm — calls the customer anywhere from one hour to several days after an interaction with the call center and asks the customer several questions regarding his or her experience. This method is straightforward, popular and, according to experts… ineffective.
“While issues exist with all [customer satisfaction measurement] methods,” says Dr. Jodie Monger, president of Customer Relationship Metrics, “this method is the least cost-effective, has many biases associated with the collection method and results, and provides the largest challenge to reliably linking results to a CSR.”
Some call centers still use mail surveys, which, says Monger, are also not very cost-effective and have even more biases associated with them. Even more problematic is the issue of timeliness when using such surveys; customers typically don’t receive the survey in the mail until three to five days after the interaction with the call center took place — making it nearly impossible for the customer to clearly remember the details of the experience. Consequently, customers either disregard the survey, or respond as best they can despite the time that has elapsed, which can lead to watered-down ratings or feedback that doesn’t accurately reflect what transpired during the transaction.
Timing is everything in customer satisfaction measurement, according to analysts at Gartner Group.
“[Real-time] customer feedback is the pulse by which an enterprise can adjust and personalize its designed customer experience to ensure that it meets — and exceeds, where necessary — customers’ expectations. …The most important aspect of feedback is timing. Gartner has determined that feedback collected immediately after an event is 40 percent more accurate than feedback collected 24 hours after the event.”
Fortunately, more call centers are recognizing the value and importance of surveying customers immediately after an interaction occurs — when the experience is fresh in the customer's mind, and before problems can escalate. According to Monger, the only survey method that has increased every year since 2004 is real-time, automated post-call surveys, where callers — while waiting in queue — are asked to participate in a survey after their interaction with the agent. Those who accept are automatically routed to a concise IVR-based survey following their call, and are asked to respond to a series of questions regarding their experience, their feelings about the organization, and their plans to continue doing business with the company. Customers rate each question on a numeric scale (often 1-5 or 1-7) for easy customer satisfaction calculation. Many surveys also feature a couple of open-ended questions for more detailed customer feedback.
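To make the scale-based scoring concrete, here is a minimal sketch (in Python) of how a center might roll up 1-5 overall ratings into a satisfaction score; the function name and the “top-box” convention are illustrative assumptions, not any particular vendor’s method.

```python
# Illustrative only: summarizing post-call survey ratings on a 1-5 scale.
from statistics import mean

def csat_summary(ratings, scale_max=5):
    """Return the response count, average rating, and the share of
    respondents giving one of the top two ratings ("top-box" CSAT)."""
    if not ratings:
        return {"responses": 0, "average": None, "top_box_pct": None}
    top_box = sum(1 for r in ratings if r >= scale_max - 1)
    return {
        "responses": len(ratings),
        "average": round(mean(ratings), 2),
        "top_box_pct": round(100 * top_box / len(ratings), 1),
    }

# Example: one day's overall-satisfaction ratings
print(csat_summary([5, 4, 2, 5, 3, 5, 4]))
# {'responses': 7, 'average': 4.0, 'top_box_pct': 71.4}
```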
Today’s advanced IVR survey apps can be programmed not only to recognize when a customer gives an abnormally low overall rating and send an alert to the center manager or quality assurance team, but also to capture — via CTI — the caller’s identity and link it to the actual recording of the interaction in question for complete analysis. After reviewing the survey responses and the call, the manager can quickly call the customer to “repair damage” and hopefully restore trust and loyalty. (For more information on IVR-based post-call surveys, see “Tapping IVR to Capture the Customer Experience,” Call Center Magazine, February 2007.)
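The alert-and-link workflow described above reduces to a few lines of logic. The sketch below uses invented names (SurveyResponse, find_recording, notify_manager) standing in for whatever CTI, recording and alerting systems a given center actually runs.

```python
# Hypothetical sketch of low-rating alerting with a CTI link to the recording.
from dataclasses import dataclass

ALERT_THRESHOLD = 2  # an overall rating at or below this triggers an alert

@dataclass
class SurveyResponse:
    call_id: str          # CTI identifier tying the survey to the call
    caller_number: str
    overall_rating: int   # 1-5 scale

def find_recording(call_id):
    # Stand-in for a lookup in the center's call-recording system.
    return "https://recordings.example.com/" + call_id + ".wav"

def notify_manager(response, recording_url):
    # Stand-in for an alert to the center manager or QA team.
    print(f"ALERT: caller {response.caller_number} rated the call "
          f"{response.overall_rating}/5. Recording: {recording_url}")

def process_response(response):
    if response.overall_rating <= ALERT_THRESHOLD:
        notify_manager(response, find_recording(response.call_id))

process_response(SurveyResponse("c-1041", "555-0134", 1))
```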
Of course, not all customers contact the center via phone; thus, automated IVR-based surveys alone are insufficient for holistic customer satisfaction measurement. Progressive centers also gauge the satisfaction level of customers who have chosen to interact with the company via email or chat. To do so, these centers email such customers a survey similar to the IVR-based one, or program the survey to pop up on the customer’s screen upon completion of an online interaction.
Ignoring customers’ channel preferences — i.e., sending an email survey to a customer who contacted you by phone — can hinder survey response rates or, worse, frustrate customers, which, says Monger, undermines the validity of the customer satisfaction measurement initiative. “To best measure the effectiveness of service delivery, an immediate evaluation is needed via your customer’s preferred channel. This will ensure the success of your caller satisfaction program as well as increase your customers' satisfaction and loyalty.”
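Matching the survey to the contact channel amounts to simple dispatch logic. The sketch below assumes three channels and invents the dispatch functions purely for illustration.

```python
# Illustrative channel-matched survey dispatch: survey on the same channel
# the customer used to reach the center, never a different one.

def send_ivr_survey(contact_id):
    print(f"{contact_id}: routed to post-call IVR survey")

def send_email_survey(contact_id):
    print(f"{contact_id}: survey link emailed")

def send_web_popup_survey(contact_id):
    print(f"{contact_id}: survey popped up at the end of the chat session")

DISPATCH = {
    "phone": send_ivr_survey,
    "email": send_email_survey,
    "chat": send_web_popup_survey,
}

def invite_to_survey(contact_id, channel):
    DISPATCH[channel](contact_id)

invite_to_survey("c-2210", "chat")
```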
Too Many or Too Few Survey Questions
Selecting the right real-time survey method, while critical, is only part of the customer satisfaction measurement battle. Numerous mistakes are often made with the actual design of the survey, and chief among those errors is making the survey too long or too short.
The danger of too many questions is that customers will tire of the survey and opt not to complete it, leaving the call center with incomplete ratings and feedback. The trouble with too short a survey, says Monger, is that it fails to gather critical data, preventing the center from properly analyzing, reporting on and defending its value to, and impact on, the enterprise.
So what’s the right number of questions? According to Monger, between 10 and 14 is ideal — with the ability for customers to elaborate on their ratings with open-ended comments. Steve Graff, vice president of technology for contact center software provider Autonomy eTalk, feels that a solid survey needs only six to eight focused questions, and should take callers no more than a minute or so to complete. “Callers have already spent a considerable amount of their time on the phone with [an agent],” Graff explains. “We’ve found that keeping the surveys short increases the number of people who start to answer the questions, and actually complete the entire survey.” Graff strongly recommends informing customers of the survey length in the invitation message — whether the center is using an IVR-based phone survey or an email or Web survey — so that the customer is aware of how brief the survey is and, thus, will be more willing to participate in and complete it.
Failing to Capture FCR Feedback
Neglecting to include at least a question or two regarding whether or not the customer’s issue was fully resolved on the initial contact is a common — and potentially costly — survey oversight. Studies have shown that no other performance metric has as big an impact on customer satisfaction as does first-contact resolution (FCR) — and, according to Monger, there’s no better way to measure FCR than via a real-time, post-contact survey. She says that doing so not only provides a clear picture of the call center’s true FCR rate from the customer’s perspective, but can also help the center — if the survey is appropriately designed — to discover some of the main causes of repeat contacts.
“For those [customers] who had a problem that was not resolved on the call,” Monger explains, “the survey should branch to an open-ended question to capture the customer’s description of the problem. This qualitative information adds the explanation to the dramatic quantitative information you now have available. The cause of unresolved calls is invaluable to correcting process issues.”
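Monger’s branching design can be expressed as a short question flow. In the sketch below the prompts are illustrative, and a real IVR or web survey engine would supply its own prompt handler in place of the console input used here.

```python
# Illustrative FCR branching: an unresolved issue triggers an open-ended
# follow-up question capturing the customer's description of the problem.

def run_fcr_questions(ask):
    """`ask` is any prompt function (an IVR handler, a web form, or plain
    console input) that returns the customer's answer as a string."""
    resolved = ask("Was your issue fully resolved on this call? (yes/no) ")
    answers = {"fcr": resolved.strip().lower() == "yes"}
    if not answers["fcr"]:
        # Branch: qualitative follow-up on the unresolved issue.
        answers["unresolved_reason"] = ask(
            "Please briefly describe the problem that was not resolved: ")
    return answers

# Console-based usage example
print(run_fcr_questions(input))
```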
And what exactly does the contact center stand to gain from FCR improvements? Just about everything. Research by Monger, as well as by customer contact research and consulting firm Service Quality Measurement (SQM), has revealed that, in addition to big increases in customer satisfaction, high FCR rates beget lower operating costs, increased upselling and cross-selling opportunities, and lower agent burnout.
Not Including Customer Feedback in Monitoring and Coaching
Customer feedback from surveys can be a highly effective agent coaching and training tool when used appropriately. More centers are starting to embrace a 360-degree approach to agent feedback, but many still have yet to utilize customer comments and ratings as a powerful agent development resource.
Experts say that incorporating direct customer feedback into agent evaluations and post-monitoring coaching isn’t just what agents need, it’s what they want. “Agents find [customer] feedback more meaningful and believable than the ratings they receive from peers, supervisors or quality assurance teams,” says Mike Desmarais of call center consulting firm Service Quality Measurement (SQM) Group. “Each agent [is able to] understand which issues are most important to customers and how they are improving.”
Incorporating the Voice of the Customer into coaching will do much more than just make agents happy, says Monger. “Connecting real-time caller feedback directly to the agent providing the service has far-reaching benefits. [Based on] our research, agent-level customer feedback increases productivity, first-contact resolution, customer satisfaction, and the ROI on training and coaching efforts.”
Several advanced monitoring systems today offer a fully automated customer feedback feature (via the IVR), sparing the quality assurance specialist the challenge of linking customer satisfaction results to specific customer/agent interaction recordings. For example, NICE’s Feedback IVR Survey system enables call centers to incorporate post-call IVR surveys that capture customers’ comments and ratings regarding their experience, and provides the quality assurance specialist with instant access to the recorded call in question.
Email- and Web-based customer surveys also make it relatively easy to provide agents with timely Voice of the Customer feedback. At Wells Fargo’s Banker Connection call center, for example, members of the center’s Customer Care team send an encrypted spreadsheet (containing information on each customer contact monitored that day) to an outside vendor, who, in turn, emails each customer a concise satisfaction survey that contains both closed- and open-ended questions about the customer’s recent experience with the call center and agent. The vendor then sends the survey results to Banker Connection, where Customer Care evaluators incorporate the results into their earlier contact assessments before sitting down with agents to provide feedback.
Thus, Banker Connection’s monitoring program is “married to the banker survey process,” says Terri McMillan, senior vice president and manager of the Billings, Mont., center. She adds, “[Customer] feedback via the quality assurance program gives our specialists the confidence to support our customers and to ensure great customer service.”
Not Sharing Key Survey Insights Enterprisewide
As important as it is to share customer feedback and preferences with agents, it isn’t enough to drive enterprisewide improvement — nor to drive lasting customer satisfaction and loyalty. The data that is continually captured via a well-designed customer satisfaction survey is veritable gold, and failure to share that gold with appropriate departments within the organization is a huge missed opportunity for the enterprise as a whole.
Invaluable quantitative and qualitative information from surveys should be shared with Marketing, Finance, R&D, HR and Manufacturing, as well as with the CEO and board of directors, says Monger.
“The contact center touches and represents all parts of the organization,” says Dr. Monger. “The actionable customer intelligence that the contact center collects — or could collect — can be leveraged by all parts of the organization.”
Many Call Centers Fail to Measure Customer Satisfaction, Survey Says
More than 30% of contact centers fail to formally measure customers’ satisfaction with the service they receive, according to a recent report by ICMI.
The 2007 Customer Satisfaction Measurement Report found that, of those that do measure, more than a third (38.7%) reported an enviable customer satisfaction rating of 91%-100%, with another 35% reporting a more than adequate rate of 81%-90%. In fact, only 10.7% of centers surveyed cited a customer satisfaction rating of 70% or lower.
Following are a few key findings from the report:
- The most common surveying method cited in the study was live phone surveys — used by 38% of centers. The second most common surveying method was email surveys (34.1%), followed by automated phone surveys (23.9%) and mail surveys (20.5%).
- Many centers are waiting too long (two days or more) after the customer interacts with the center before surveying the customer about their experience (except for centers using automated phone surveys, which typically occur immediately following a transaction). This not only dilutes the feedback received (since the contact is not fresh in the customer’s mind), it greatly decreases the likelihood that the center will be able to “recover” a customer from a highly negative service experience.
- All in all, contact centers are pleased with their center’s ability to measure customer satisfaction and to take positive action based on the findings: Two in three centers rated themselves as either “good” (37.3%) or “very good” (29.8%) in this regard, with a few centers (3.7%) giving themselves an “exemplary” rating.
- While the vast majority of centers reported sharing survey results with other departments within the enterprise, most rate the overall sharing process as merely “acceptable” (43.8%) or “not very effective” (19.4%).