
Expert's Angle: Beware of Benchmarking: Bad Data is Worse Than No Data (Part 2)

Make sure your benchmarking data is appropriate. In Part 2 of this two-part series, we examine five more pitfalls and offer action steps for benchmarking success.

As noted in Part 1 of this two-part series, benchmarking for the call center and customer service has been popular for at least twenty years. But successful benchmarking requires the right data about the right things.

Review: Nine Benchmarking Pitfalls to Avoid

There are nine pitfalls that TARP has observed in how most customer service benchmarking is done. These problems often lead to misleading or incorrect conclusions.

1. Benchmarking only gathers basic operational metrics without linkage to desired outcomes.
2. Benchmarking looks only within the organization’s own industry where even the best company may do poorly against leaders in other industries.
3. Benchmarking shows you are the leader so you declare victory.
4. Benchmarking does not take into account the specific market and workload mix of the company.
5. Benchmarking does not focus on the processes that lead to outcomes.
6. Benchmarking focuses on averages.
7. Benchmarking does not take into account differences in markets and cultures.
8. Data is drawn from a small number of participants who are not representative.
9. Data lacks quality control to deal with definitional differences, artificial situations, or flat-out fudging of data to make the company being benchmarked look better than its peers.

Let's pick up our coverage with the final five call center and customer service benchmarking pitfalls to avoid:
5. Benchmarking does not focus on the processes that lead to outcomes.
Caller knowledge and expectations are key drivers of satisfaction and talk time. Similarly, if a website is easily navigated and has a clear, accessible site map, customers are more willing to self-serve. Ironically, this leads to longer talk time because only the harder calls reach technical support. The benchmark effort should review the methods by which the caller's expectations have been set, such as welcome packages, startup kits, proactive education, and tutorials for the latest software releases. For instance, leading companies like HP now invest up to two minutes of educational time on calls where the customer asked a simple question that could have been self-served on the website. While talk time is longer, future calls are prevented because the customer is educated on self-service – what TARP terms "leading the horse to water and assuring that they take the first sip."

Benchmarking should always explain why a parameter is at the level achieved: does the technician have broader authority or flexibility to take action, or are they encouraged to escalate to a subject matter expert? To the degree that the contact center must investigate issues by gathering information from other units, its cost effectiveness depends heavily on its relationships with those partners. Therefore, examining service level agreements and access to database and knowledge management technology is critical to understanding the source of the high performance.

6. Benchmarking focuses on averages.
Averages such as first-call resolution (FCR) or abandon rates can be very misleading. An abandoned call rate of 4% for a week often actually consists of 8% on Mondays and 1% on Thursdays. Likewise, FCR in auto insurance is a combination of 93% for non-claim customer calls and 65% for calls involving claims.
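To make the arithmetic concrete, here is a minimal sketch in Python; the call volumes are hypothetical and the rates simply echo the examples above. It shows how a single blended figure can hide very different underlying segments:

# Illustrative only: a blended average can mask the underlying mix.
# Call volumes are hypothetical; rates echo the examples in the text.
daily = {"Mon": (0.08, 3000), "Tue": (0.04, 2200), "Wed": (0.03, 2100),
         "Thu": (0.01, 2000), "Fri": (0.04, 2200)}  # (abandon rate, call volume)
total_calls = sum(vol for _, vol in daily.values())
weekly_rate = sum(rate * vol for rate, vol in daily.values()) / total_calls
print(f"Blended weekly abandon rate: {weekly_rate:.1%}")  # about 4%, despite 8% Mondays

fcr_by_type = {"non-claim": (0.93, 7000), "claim": (0.65, 3000)}  # (FCR, call volume)
total_fcr_calls = sum(vol for _, vol in fcr_by_type.values())
blended_fcr = sum(fcr * vol for fcr, vol in fcr_by_type.values()) / total_fcr_calls
print(f"Blended FCR: {blended_fcr:.1%}")  # roughly 85%, hiding the weak claims experience

The blended numbers look respectable on their own; the insight only appears when the mix behind them is reported alongside the average.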

7. Benchmarking does not take into account differences in markets and cultures.
We have found that different customer segments and customers in different geographic locations have different expectations and behaviors. For instance, talk time will be longer for many high-end customers with complex products than for low-end customers with simple products. Also, customers from New York generally have different expectations than those from Texas or California. Finally, TARP has seen dramatic differences in satisfaction ratings and talk time between customers encountering the same service model depending on whether they are located in the US versus Europe, Latin America, or the Middle East. But even within a marketplace, what is important can differ dramatically. For high-end investments, TARP has seen some millionaire segments that want long personal calls with lots of advice, and other customer segments characterized by their lack of desire to talk to a human.

8. Data is drawn from a small number of participants who are not representative.
TARP has observed a number of "landmark benchmark studies" that might represent 50 companies overall but, for particular items benchmarked, provide values from only four companies. Unless you know which four companies provided that data, the information may be irrelevant to your environment. The problem is that you don't know, and spending resources to respond to the benchmark may be a waste or, worse, inappropriate.

9. Data lacks quality control to deal with definitional differences, artificial situations, or flat-out fudging of data to make the company being benchmarked look better than its peers.
Definitions can impact even such "simple" topics as average speed of answer (ASA). TARP has observed that some companies measure time in queue starting before the IVR answers, while others count from when the customer exits the IVR. Similarly, when calculating average cost per call, do you count IVR calls or not, and do you include IT costs? While your answer might be, "Certainly include IVR calls, and doesn't everyone include IT costs?", TARP found one major auto company that didn't include IT costs because IT was a separate company. How is satisfaction defined? Even if it is measured on a five-point scale, is the rating based on top-box or top-two-box scores, and are the boxes labeled in a balanced or skewed manner? In short, does the benchmark study assure consistent definitions?
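As a rough illustration of how much these definitional choices matter, here is a minimal sketch in Python using entirely hypothetical volumes and costs. The same center can report two very different cost-per-call figures depending on whether IVR-contained calls and IT costs are included:

# Illustrative only: definitional choices change "cost per call" for the same center.
# All figures below are hypothetical.
agent_calls = 100_000        # calls handled by live agents
ivr_only_calls = 60_000      # calls fully contained in the IVR
operating_cost = 550_000.0   # staffing, telecom, facilities
it_cost = 150_000.0          # platform and IT support

# Narrow definition: live-agent calls only, IT costs excluded.
narrow = operating_cost / agent_calls
# Broad definition: all calls counted, IT costs included.
broad = (operating_cost + it_cost) / (agent_calls + ivr_only_calls)

print(f"Narrow definition: ${narrow:.2f} per call")  # $5.50
print(f"Broad definition:  ${broad:.2f} per call")   # $4.38

Neither number is wrong; they are answers to different questions, which is why benchmark values are only comparable when the study enforces one consistent definition across participants.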

A number of benchmark studies are based on "mystery shopping calls" or testing using dummy accounts. The problem with this approach is twofold. First, the routine transactions that make up the test calls are usually the basic issues that almost everyone gets right. Where most service systems break down is in dealing with non-routine issues, which are exactly what mystery calls often don't replicate. Second, when CSRs are candid, TARP has heard from a number of top performers that they can almost always discern who is a mystery caller and then carefully follow the protocol to assure a perfect score. Neither of these situations results in a representative picture of the service system's performance.

Finally, to the degree that participants know the data will be public, we have identified instances where internal corporate politics have led participants to attempt to influence the data and their standing. For instance, one benchmarking study required that each company provide a random sample of customers to be surveyed. An analysis of the reasons for call, compared to the samples supplied by other companies, suggested that one company had cherry-picked its customers by eliminating all those with more difficult problems.

Actions to Assure Benchmarking Is Appropriate

Blindly adopting a benchmark number as your service performance target risks waste and even service disaster. Therefore, you must assure that the data is not misleading; bad data is worse than no data.

    1. Look at the details of the benchmarking data. Who are the respondents, how big is the sample, and does the workload profile suggest that the data is relevant to your marketplace?
    2. Understand how the company got to the benchmark level. Remember, the outcome does not reveal the process used to reach it.
    3. Examine all averages. What are the major types of contacts included in the average, and does the abandon rate average really bad Mondays with great Thursdays?
    4. Ask whether the best is good enough and whether it is possible to be better than the best. You should understand where your industry stands in the overall service context and how much money is left on the table due to preventable dissatisfaction and poor quality in your own company.

Cynthia Grimm is Vice President at TARP Worldwide.