
How to Build Call Scoring Evaluation Forms

We are all familiar with Newton's third law: for every action, there is an equal and opposite reaction. While this law more commonly comes up in a physics discussion, it can certainly be restated in layman's terms as "cause and effect" and applied to the customer service world: high quality customer service fosters strong customer loyalty. A fine statement we can all agree on, but how is this measured on a call-by-call basis? Your measurements are only as good, and as accurate, as the evaluation form you use.

Form design has changed a lot over the years. The concepts and tips below will help you create effective, well-designed evaluation forms.

1.    Assemble stakeholders/project team. The project team should include stakeholders who will be impacted or measured by the new evaluation form. Don't rely solely on quality analysts or team supervisors to fill the project team. Agents can be a valuable resource in helping to define a new evaluation form, and their buy-in is critical to a successful quality program.

2.    Decide what to measure. Only measure what truly matters to the business; the trick is figuring out how to quantify what really matters. What are the corporate initiatives or business goals? Is one of them to make an upsell attempt on every interaction? If so, measure agents on whether or not they attempted an upsell. Perhaps there is an initiative to measure First Call Resolution. If that's the case, did the agent ask the customer whether their issue was resolved? That is a measurable metric. I use a modified version of SMART as a litmus test (sketched in code after the checklist below):

S – Strategic: does it matter, or need improvement?

M – Measurable: is it quantifiable?

A – Aligned: is it aligned with corporate goals?

R – Results-based: will it drive the desired behavior?

T – Timely: is this still important?
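
To make the litmus test concrete, here is a minimal Python sketch. The criteria map directly to the checklist above; the candidate metric and its yes/no answers are invented for illustration.

```python
# A minimal sketch of the SMART litmus test. The candidate metric and its
# yes/no answers are hypothetical; only a metric that passes every
# criterion earns a spot on the evaluation form.

SMART = ["Strategic", "Measurable", "Aligned", "Results-based", "Timely"]

def passes_smart(answers):
    """Keep a metric only if every SMART criterion is answered yes."""
    return all(answers.get(criterion, False) for criterion in SMART)

# Candidate: "Did the agent attempt an upsell?" -- quantifiable per call.
upsell_attempt = {criterion: True for criterion in SMART}
print(passes_smart(upsell_attempt))  # True -> worth putting on the form
```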


3.    Plan the form. Do the measurement objectives lend themselves to sections or categories? Do you even need to create separate sections? The answer is maybe; it depends on the individual business. I have customers who house all of their questions in a single category, and others who use multiple categories. Both approaches are equally valid, because each works for that particular business.

TIP! During the planning phase of evaluation forms, use a combination of a dry erase board and sticky notes. If the form lends itself to categories, those get written on the board. Add the questions to the sticky notes, one per note. Questions can easily be moved around the “structure” without having to continually erase and rewrite.
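
Once the sticky notes settle, the same structure translates directly into data. Here is a hypothetical sketch in Python; every category and question name below is invented for illustration.

```python
# A hypothetical form plan: each key is a dry erase board category, and
# each list entry is one sticky-note question. All names are invented.

form_plan = {
    "Greeting": [
        "Did the agent use the standard greeting?",
        "Did the agent verify the caller's identity?",
    ],
    "Resolution": [
        "Did the agent ask whether the issue was resolved?",
    ],
    "Closing": [
        "Did the agent attempt an upsell?",
        "Did the agent thank the caller?",
    ],
}

# Like moving a sticky note, reordering is one line -- no erasing required.
form_plan["Greeting"].reverse()
```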

4.    KISS: keep it seriously simple! The form shouldn't take longer to score than the call. Keep the form between 10 and 20 questions unless you have a compelling reason to make it longer, such as regulatory or contractual requirements; most customers can measure what is meaningful in just a few questions.

5.    To branch or not to branch? Is it better to make one big form with hidden sections, or multiple forms? That depends on the business requirements. If all agents in the center take all types of calls, it may work better to create a branching form: sections hide or become visible based on triggers, and the analyst doesn't have to figure out which form to select at the beginning of the call. Branching can be more challenging to manage, however, so weigh that when determining which route works best for your company.
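
As a rough illustration of branching, here is a small Python sketch in which the call type acts as the trigger; the call types and section names are hypothetical, not drawn from any particular software.

```python
# A minimal branching sketch: the trigger (call type) decides which
# sections of one big form the analyst actually sees. All names invented.

SECTION_TRIGGERS = {
    "Billing": ["Core", "Billing Accuracy"],
    "Technical Support": ["Core", "Troubleshooting"],
    "Sales": ["Core", "Upsell"],
}

def visible_sections(call_type):
    """Every call gets the Core section; triggers reveal the rest."""
    return SECTION_TRIGGERS.get(call_type, ["Core"])

print(visible_sections("Billing"))  # ['Core', 'Billing Accuracy']
print(visible_sections("Unknown"))  # ['Core'] -- nothing extra revealed
```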

6.    Scoring methods. Most form software offers similar scoring methods. Questions can be summed, with each response adding points toward a total, or the form can start at a maximum score and have points subtracted. Questions can also be calculated as a percentage, weighted by assigning a value to each question. Additionally, question sums can be combined with category percentages.
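
Here is a minimal sketch of two of these methods in Python; the questions, point values, and weights are invented for illustration, not taken from any particular software.

```python
# Two common scoring methods applied to the same set of yes/no answers.
# Question names, point values, and weights are all hypothetical.

answers = {"greeting": True, "resolution": True, "upsell": False}

# Method 1: summed points -- each "yes" adds its point value to the total.
points = {"greeting": 10, "resolution": 20, "upsell": 10}
summed = sum(points[q] for q, ok in answers.items() if ok)
print(summed)  # 30 out of a possible 40

# Method 2: weighted percentage -- the score is the share of total
# question weight that the agent earned.
weights = {"greeting": 1, "resolution": 3, "upsell": 1}
earned = sum(weights[q] for q, ok in answers.items() if ok)
print(f"{100 * earned / sum(weights.values()):.0f}%")  # 80%
```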

7.    Auto fail rules. Auto fail rules are reserved for behavior so detrimental that the form or category score is set to zero, or reduced by a specified number of points. Auto fail rules are not a requirement, but if a center deals with sensitive information, it may be wise to consider using one.
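
Here is a minimal sketch of a zero-out auto fail rule, assuming a single yes/no compliance question; the question name is hypothetical.

```python
# A zero-out auto fail rule: one detrimental behavior erases the form
# score. The compliance question is hypothetical.

def apply_auto_fail(score, answers):
    """If sensitive card data was mishandled, the form score drops to zero."""
    if not answers.get("protected_card_data", True):
        return 0
    return score

print(apply_auto_fail(80, {"protected_card_data": True}))   # 80
print(apply_auto_fail(80, {"protected_card_data": False}))  # 0
```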

8.    Testing the form. Once the form is complete, it's important to test it. It doesn't pay to make a form available only to discover that the calculations are wrong or that the form contains mistakes. Get the entire team to test all of the form functionality, and don't forget to spell check.

TIP! Set all responses to yes, and calculate the score.  Did it add or score correctly?  Then set one response to no. What did that do to the calculation? Add another “no” response. Are the calculations still holding? If there’s an auto fail rule, test that as well.
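
The same check can be scripted. Here is a rough sketch reusing the hypothetical weighted-percentage method from step 6; the question weights are invented.

```python
# Scripting the tip above: score all-yes, then flip answers to "no" one
# at a time and confirm the calculation still holds. Weights hypothetical.

weights = {"greeting": 1, "resolution": 3, "upsell": 1}

def score(answers):
    earned = sum(weights[q] for q, ok in answers.items() if ok)
    return 100 * earned / sum(weights.values())

all_yes = {q: True for q in weights}
assert score(all_yes) == 100    # all yes -> a perfect score

one_no = dict(all_yes, upsell=False)
assert score(one_no) == 80      # one "no" deducts exactly its weight

two_no = dict(one_no, greeting=False)
assert score(two_no) == 60      # calculations still holding
print("form calculations pass")
```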

9.    Calibration: why it's important. Once the evaluation form is tested, it's a good idea to save it as a calibration form. It's important that everyone is on the same page about what is being monitored and scored. The calibration process will help pinpoint the areas that are unclear and need to be addressed. Calibration sessions should be held regularly and should include anyone who performs evaluations. Don't forget to include an agent or two in the process!
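
One simple way to pinpoint unclear questions is to compare how much evaluators disagree when scoring the same saved call. Here is a rough Python sketch; the evaluator scores and the spread threshold are invented for illustration.

```python
# A calibration check: several evaluators score the same saved call, and
# a wide spread on any question flags it for discussion. Data is
# hypothetical.
from statistics import pstdev

scores_by_question = {
    "greeting": [100, 100, 100, 100],  # everyone agrees
    "upsell": [100, 50, 0, 50],        # evaluators read this differently
}

for question, scores in scores_by_question.items():
    spread = pstdev(scores)
    flag = "  <- unclear; raise in the next session" if spread > 20 else ""
    print(f"{question}: spread {spread:.0f}{flag}")
```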

Call scoring forms are an integral component of an overall quality assurance program and help ensure that your call center maintains the highest levels of quality. Designing a new form can be a challenge, but with a little pre-planning, and hopefully a few of these tips, you can deliver a great form. Just keep an eye on the target and have fun!