Published: June 11, 2013
Let me begin this article by saying that we all (myself included, quite regularly) go into situations, whether projects, relationships, or corporate initiatives, with the best of intentions. We want what is best for our people, our customers, our organizations, ourselves, and our neighbor's Scottish terrier, Watson. What happens is that we tend to err on the side of people pleasing, and in what can become grandiose attempts to satisfy everyone, we succeed with few. While this is a character flaw that permeates our lives in various ways, it is not always easy to recognize, especially when we're close to the issue.

This can affect many areas of the contact center, but the one I've encountered most often, and will be addressing today, is the process for monitoring and improving contact quality. As the area that arguably has the greatest impact on an agent's future performance and a future customer's experience, it is critical to use a fair and balanced process for scoring and coaching interactions. It is self-destructive to blame agents for failure when we are the people responsible for managing the organization's quality program. To be successful, we must create and sustain an environment that fosters high levels of employee enthusiasm and the desire to continually improve.
When someone is struggling with relevance, buy-in, and/or engagement in their quality program, one of the first questions I'll ask is, "Why did you select the quality standards that are currently in place?" One of the most common answers is, "I don't know; they were in existence before I took my position." Hello, people! If we don't know, are uncertain, or haven't revisited why we have our quality measures in place, why are we surprised that our teams are disengaged? We absolutely must understand, articulate, and drive relevance for every quality standard we have in place. These standards must be based on two key factors: customer expectations and our mission, vision, and values. Each area of need is as important as the other, and it is vital to identify any conflicts between them. We then need to translate the mission, vision, and values into contact center employee performance standards, and those standards into observable behaviors within our quality monitoring programs.
Once we've identified these factors, it is our responsibility to create a quality monitoring program that meets the objectives of the organization, the customers, and, equally important, the agents. While these objectives can and will vary from one organization to the next, the key is to make a direct link to your mission, vision, and values. As these objectives manifest as performance standards, we may find ourselves dividing them into base requirements (behaviors that represent the minimum level of acceptable performance) and expectations (behaviors that can be continually refined and improved). All performance standards, however, should be specific, observable, realistic, and valid.
The next challenge in developing performance standards is distinguishing between those that are objective and those that are subjective. This classification can be made by considering the purpose of the measurement and the means by which it is measured. The first type of performance standard is "foundational". These are the objective measures whose primary purpose is establishing consistency from call to call. Foundational skills should be demonstrated in every call and can be measured using a "did" or "did not" evaluation. The second type of performance standard is "finesse". These standards are subjective measurements that determine the "personality" of the interaction. These skills are often cited as being the most important to customers and must be measured as "how something was done" rather than "whether it was done". A best practice for deciding whether something is a foundational or a finesse skill is to ask whether you need to use judgment to assess whether the agent met the standard. If you need to use judgment, it is a finesse skill. If it is more black and white, where the agent either "did it or didn't do it", it is a foundational skill.
Once we've identified and classified the performance skills and measurements, it is critical to document the definition of each standard. This process includes four requirements that ensure a comprehensive understanding and consistency in communication. The first requirement is a brief description of the performance requirement, or the "what". The second requirement is the business reason or purpose, the "why" behind the "what". This requirement arms your coaches and quality assessors with an explanation of why the standard is important, rather than leaving them to fall back on a "because I said so" argument. It also requires us to test ourselves on why we are asking our agents to do something, which in turn forces us to take a hard look at our performance standards. The third requirement is establishing your definition and rating guidelines. If a standard is "Uses a professional greeting", how do you define the word professional? What rating scale will be used to score this criterion? The fourth and final requirement is providing a model and/or examples. The more specific and applicable you make your examples, the easier it is to ensure that everyone knows what the expectations are and what they "look like".
When we use these guidelines and connect our quality standards to the things that matter most, we will find that successful calibration, coaching, and employee engagement are achieved with reduced effort. We are then also equipped to drive the right behaviors, the desired outcomes, and ultimately, the real reason that quality monitoring programs were implemented in the first place: ensuring consistent, high-value interactions with our customer base. Because after all, what we really want is a successful quality program that inspires excellence in our agents, fulfills our corporate objectives, and satisfies our customers.