
Evaluate Training Effectiveness with Well-Aligned Tests and Assessments

The goal of every training program is for learners to master the knowledge and skills shared in the training so they’re prepared to apply them on the job. Most learning organizations create tests or assessments to confirm that learners have achieved that mastery and are ready to put it to work.

Tests and assessments provide learners with the opportunity to demonstrate mastery and retention of training content, give us the chance to intervene when a learner falls short, and reveal gaps between what we think we are teaching and what learners are actually learning.

While knowledge tests and skill assessments are standard in most training environments, crafting tests and testing processes that achieve these three goals is surprisingly challenging. Here are some practical guidelines to ensure your testing strategy supports learners and drives desired training outcomes:

Start with your learning objectives.

With learning objectives that clearly describe observable outcomes, drafting a knowledge test or skill assessment becomes straightforward. This is just one reason I encourage instructional designers to begin training design with rock-solid objectives.

Let’s look at these objectives from an “empathy” training module:

1. Describe the importance of empathy and the best methods to create authentic customer connections.

2. Apply the LEA (Listen, Empathize, Assure) model to customer interactions to deepen customer connections, increase positive customer sentiment, and improve customer outcomes.

Test and assessment opportunities immediately reveal themselves with observable and measurable objectives like these. For example, we could create a forced-choice test that asks questions like: “Which of the following statements is the best definition of empathy?” or “Select the items from this list that are examples of ‘assuring the customer’.”

Maybe you’re thinking that regurgitating definitions and examples in a knowledge test isn’t the best way to evaluate whether a learner can effectively demonstrate “empathy”? I agree! Luckily, skill assessments are a viable alternative to support learners in demonstrating mastery of more complex skills.

For example, the learner can listen to a call recording and identify which steps of the model are illustrated in the call. Or the learner can read a scenario and describe how they would apply the model and which methods they would use to connect with the customer.

Starting with solid training objectives ensures our tests and assessments are closely tied to the performance we are expecting on the job, setting the stage for learning transfer.

Make it like the job.

The previous example shows that tests and assessments are most likely to predict future performance when they mirror the learner’s work environment. While true-false or multiple-choice knowledge tests can be adequate spot-checks for learning retention, they are less effective at measuring whether learners are ready to take what they learned in training and apply it on the job. That requires realistic situations and scenarios that reflect the learners’ work environment, complete with real-time decisions and customer feedback. This is why I prefer skill assessments, such as simulations, scenarios, and role-plays, over knowledge tests for most contact center training evaluation.

Reduce anxiety with frequent tests and assessments.

When a facilitator announces an end-of-class test - especially in new hire training - learner anxiety skyrockets. We want our learners to be engaged and positive about training, not distracted and fearful about a test.

Remove uncertainty and build learner confidence with frequent learning checkpoints that mirror the test format. This allows you to reassure learners by saying, in essence, “Module assessments mirror the format, scenarios, and question types you’ll see on the final test. If you do well on those, you’re likely to ace the test. If you don’t, we can see where you’re having trouble and help you prepare. We are here to help you be successful.”

Increase learner comfort with repetition.

Learners grow their confidence and relax into learning when they know what to expect. Use familiar approaches and formats - question types, scenario structures, role-play instructions - across all delivery modes (elearn, live classroom, live remote).

For example, every update email can link to a 2-question “test.” Every elearn module can include a 5- to 10-minute “learner checkpoint.” At the end of every class, there’s a scenario-based skills evaluation. Use the same evaluation scale across all test and assessment formats - e.g., Expert, Effective, Novice - to categorize and interpret the results.


Creating familiarity and repetition helps learners put their effort into demonstrating new skills rather than figuring out a new question format or absorbing tricky new role-play instructions.

Tests and assessments should point us toward gaps in instruction or learner retention so we can more effectively support learners and identify threats to on-the-job application of new knowledge and skills. In successful organizations, testing and assessment are positive experiences that provide learners with a chance to celebrate what they’ve learned and prepare for how they will apply their new knowledge and skills on the job.