
Expert's Angle: Beware of Benchmarking: Bad Data is Worse Than No Data (Part 1)

If you're looking to benchmarking to map your contact center's future, beware these nine pitfalls. In Part 1 of this two-part series, we identify all nine and delve into the first four.

Benchmarking has been popular for at least twenty years. However, if the wrong items are benchmarked or the results are inaccurate, bad data is actually worse than no data: it leads the organization to pursue the wrong goals, move in the wrong direction, or, at minimum, waste resources.


Examples of Bad Benchmarking

There are some obvious examples of benchmarking errors but some are more subtle. While the following examples draw heavily from the customer contact environment, the lessons apply to all benchmarking.
A company decided to focus on the best (lowest) benchmarked values for average speed of answer (ASA) and talk time. It ended up devoting headcount to answering quickly but then rushing customers off the phone. The mechanistic responses produced incomplete answers, frustrated employees, and higher turnover, which in turn forced partially trained employees into the breach and made answers even worse. Closer scrutiny of the data found that the benchmarked companies handled less complex calls because of a different mix of products and a different approach to welcoming new customers.
Another company compared itself only to others in its own industry and ranked among the best on first call resolution and customer satisfaction. Its high ranking led to complacency even though it still had significant voluntary customer attrition; even the top company in its industry lost over 12% of its customers annually.
The executives of a third company set a target for first call resolution based on a visit to a leading company, but failed to implement, or even understand, the streamlined case investigation and empowerment processes that allowed that company to achieve that level.
Finally, TARP has observed many companies and regulators setting targets for operational metrics based on common practice, with little underlying analysis of whether the target is really what the customer wants or is really in the best interests of the customer or the company. One of the worst examples is average speed of answer, where the common target is 80% of calls answered in 20 seconds or 90% in 30 seconds. TARP's research suggests most customers will wait in queue for 60 seconds if, when their call is answered, the CSR can completely answer the question.
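The service-level metric discussed above ("X% of calls answered within Y seconds") is simple arithmetic, and a short sketch makes the point concrete. The answer-time data below is illustrative, not from the article, and the 60-second comparison reflects TARP's observation about customer tolerance.

```python
# Sketch: computing a call center "service level" metric, i.e. the
# fraction of calls answered within a threshold number of seconds.
# The queue-time data below is made up for illustration.

def service_level(answer_times_sec, threshold_sec):
    """Return the fraction of calls answered within threshold_sec."""
    answered = sum(1 for t in answer_times_sec if t <= threshold_sec)
    return answered / len(answer_times_sec)

calls = [5, 12, 18, 25, 32, 45, 8, 19, 22, 60]  # seconds in queue

# The common "80/20" target: 80% of calls answered in 20 seconds.
sl_80_20 = service_level(calls, 20)

# TARP's point: many customers will tolerate up to 60 seconds if the
# answer is complete, so a 60-second view may be more meaningful.
sl_60 = service_level(calls, 60)

print(f"Within 20s: {sl_80_20:.0%}, within 60s: {sl_60:.0%}")
```

The same data can look like a failure against an 80/20 target yet fully satisfactory against a 60-second tolerance, which is exactly why the target itself deserves scrutiny before benchmarking against it.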

Nine Pitfalls to Avoid

There are nine pitfalls that TARP has observed in how most customer service benchmarking is done. These problems often lead to misleading or incorrect conclusions.

    1. Benchmarking only gathers basic operational metrics without linkage to desired outcomes.
    2. Benchmarking looks only within the organization's own industry where even the best company may do poorly against leaders in other industries.
    3. Benchmarking shows you are the leader so you declare victory.
    4. Benchmarking does not take into account the specific market and workload mix of the company.
    5. Benchmarking does not focus on the processes that lead to outcomes.
    6. Benchmarking focuses on averages.
    7. Benchmarking does not take into account differences in markets and cultures.
    8. Data is drawn from a small number of participants who are not representative.
    9. Data lacks quality control to deal with definitional differences, artificial situations, or flat-out fudging of data to make the company being benchmarked look better than its peers.

Let's begin with pitfalls one through four here. We'll pick up the next set of five in Part 2 of this series.

1. Benchmarking only gathers basic operational metrics without linkage to desired outcomes.
Benchmarking often looks at only a few (and often the wrong) operational metrics rather than outcomes. The usual metrics are ASA, talk time and first call resolution (FCR), yet none of these alone is necessarily the critical factor driving long-term cost effectiveness. Many benchmarking processes gather data on key operational metrics such as FCR or ASA from contact centers of a similar size or in a similar industry. The difficulty with such an approach is that the findings can be misleading unless the centers are handling the same types of contacts, using similar technology, and staffing agents with similar levels of training. In most cases, comparing the statistics of a center that serves untrained consumers with those of a center that serves seasoned professionals who have used the product for years will result in misleading comparisons.

2. Benchmarking looks only within the organization's own industry where even the best company may do poorly against leaders in other industries.
Most customers do not compare their health insurance company to other health insurance companies; they compare it to their last best service experience. The fact that you compare favorably to others in your industry means little to a customer who had a bad experience. TARP recently asked, on behalf of a power company, which companies its customers thought gave the best service and why. The list of possible great-service companies included Amazon and FedEx, even though the inquiring company was a utility. The company learned two very useful lessons: FedEx knows where its trucks are and the power company did not, and consumers could do several things on Amazon's website that they couldn't do on the power company's site. Thinking beyond your own industry gives you a more valid perspective.

3. Benchmarking shows you are the leader so you declare victory.
TARP has observed that some companies become complacent when they are the best in their industry or even across several industries. There are two dangers in this position. First, when there is management turnover, incoming managers fail to understand why some processes (such as quality, service or feedback) are in place and start cutting corners to cut costs. This can lead to quality and service disasters overtaking respected leaders, as recently seen in the auto, communications and pharmaceutical industries. Second, as executives at both USAA and Chick-fil-A have indicated, even in those last few percentage points of dissatisfaction, there may be simple changes that will enhance satisfaction while cutting costs. Declaring victory risks leaving money on the table that could be easily captured. Also, the fact that you are the best in your industry does not mean that the law of diminishing returns applies.

4. Benchmarking does not take into account the specific market and workload mix of the company.
If the call center is in the financial services arena, the type of products being serviced can have a huge impact on talk time and FCR. For instance, in a mortgage or credit card setting, many of the calls will concern late charges and payment arrangements. In a retail banking environment, discussions will more often be about transactions, passwords and balance inquiries. Likewise, catalog call centers deal primarily with orders and very seldom with problems, resulting in higher satisfaction than a technology company's help desk. Further, FCR data must be tempered with satisfaction data by type of transaction, because certain issues will inherently result in lower satisfaction unless more time is spent educating the customer on why the policy is the way it is. You can have high FCR and low satisfaction if the prescribed answer is, "That's our policy."
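The point that FCR must be read alongside satisfaction by transaction type can be sketched with a tiny segmentation. The call records, field names, and scores below are hypothetical illustrations, not data from the article.

```python
# Sketch: segmenting FCR and satisfaction (CSAT) by transaction type,
# to show that high FCR can coexist with low satisfaction.
# All records and field names here are invented for illustration.
from collections import defaultdict

calls = [
    {"type": "balance_inquiry", "resolved_first_call": True,  "csat": 5},
    {"type": "balance_inquiry", "resolved_first_call": True,  "csat": 4},
    {"type": "late_fee",        "resolved_first_call": True,  "csat": 2},
    {"type": "late_fee",        "resolved_first_call": True,  "csat": 1},
    {"type": "late_fee",        "resolved_first_call": False, "csat": 2},
]

by_type = defaultdict(list)
for call in calls:
    by_type[call["type"]].append(call)

for call_type, group in sorted(by_type.items()):
    fcr = sum(c["resolved_first_call"] for c in group) / len(group)
    csat = sum(c["csat"] for c in group) / len(group)
    # A "That's our policy" answer can close a call on first contact
    # (counting toward FCR) while leaving the customer dissatisfied,
    # as the late-fee segment shows here.
    print(f"{call_type}: FCR {fcr:.0%}, mean CSAT {csat:.1f}")
```

A blended FCR number would hide the late-fee segment's low satisfaction, which is why the benchmark comparison only makes sense within comparable workload mixes.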

In Part 2, we’ll cover the remaining pitfalls and give you some action steps to make sure benchmarking is appropriate.

Cynthia Grimm is Vice President at TARP Worldwide.