Date Published: June 07, 2022
If you’ve been following along with the Training Evaluation Series, you’ve likely figured out that I believe there are many reasons we measure training experiences and outcomes. Time dedicated to training is an investment, and we believe it pays dividends in employee engagement, retention, and overall satisfaction. Training is an opportunity to communicate what the organization values and to demonstrate how to embed those values into daily work tasks. In training, we support and encourage our employees to become lifelong learners and to value continuous development and growth.
And, of course, we want to prove that the investment pays dividends in the form of great customer experiences, and efficient and high-quality work.
So while learner feedback, tests and assessments, and self-evaluation are valid and useful methods to assess the quality of training design and delivery, it’s through observation - observing the employee engaged in the work - that we ultimately know whether the employee retained the training content and whether they can apply it on the job.
With observation, the rule of thumb is “early” and “often,” ensuring the learner receives frequent feedback as they take on increasingly complex concepts or processes in incremental steps.
Example 1: Observation During Training
A learner is taught a new process - how to enroll a new patient - through lecture with an on-screen demonstration, reading the KB article, and side-by-side observation. Now it’s time for them to demonstrate mastery of the new process and apply it during a customer interaction.
The employee logs into an electronic simulation in which they complete an enrollment while interacting with a simulated customer. The simulation includes customer statements and the opportunity to select the correct response while completing the required on-screen steps. For the first three enrollments, the enrollment steps are displayed on the screen, prompting the learner toward the right action. For the next three enrollments, the enrollment steps are hidden, but the learner is prompted toward the right action if they veer off course. The results of the last three enrollments are recorded, so the instructor can see who is catching on and who needs additional help.
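For teams building this kind of simulation themselves, the fading-support schedule can be expressed as a small lookup. This is a hypothetical sketch, not a reference to any particular simulation product; the function name and settings are illustrative assumptions:

```python
# Hypothetical sketch of the fading-support schedule from the example:
# attempts 1-3 display the enrollment steps, attempts 4-6 hide the steps
# but still prompt the learner when they veer off course, and those final
# attempts are recorded so the instructor can review the results.

def support_for_attempt(attempt):
    """Return the scaffolding settings for a given enrollment attempt."""
    if attempt <= 3:
        # Full scaffolding: steps on screen, prompts on errors.
        return {"steps_visible": True, "prompt_on_error": True, "recorded": False}
    # Faded scaffolding: steps hidden, prompts only when off course,
    # and results captured for the instructor.
    return {"steps_visible": False, "prompt_on_error": True, "recorded": True}
```

Keeping the fade schedule in one function like this makes it easy to tune - for example, adding a third phase with no prompts at all - without touching the rest of the simulation logic.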
Automated simulations are especially useful for observing the performance of ALL learners, including those who would otherwise keep quiet or fly under the radar.
Example 2: Observation After Training
A learner completes the system upgrade training. The upgrade changes how three call types are completed. During the first two days after the training, three call recordings are observed to determine whether the learner can successfully complete these three call types. The process is repeated one and two weeks later, confirming that the employee has retained the new process in long-term memory.
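If you track these follow-ups in a spreadsheet or LMS export, the schedule is easy to generate programmatically. A minimal sketch, assuming the “one and two weeks” check-ins land at 7 and 14 days (the exact offsets are an assumption, not part of the example):

```python
from datetime import date, timedelta

# Day offsets after training ends: the first two days, then roughly one
# and two weeks out (7 and 14 days are assumed interpretations of
# "within one and two weeks").
FOLLOW_UP_OFFSETS = [1, 2, 7, 14]

def observation_dates(training_end):
    """Return the dates on which call recordings should be reviewed."""
    return [training_end + timedelta(days=d) for d in FOLLOW_UP_OFFSETS]

# Example: training ends June 7, 2022.
schedule = observation_dates(date(2022, 6, 7))
```

Generating the dates up front, rather than remembering to schedule each review, helps ensure the later check-ins actually happen once the training team has moved on to the next cohort.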
What about tests and assessments? They absolutely have their place, but can easily be incorporated into more realistic activities. In Example 1, we might ask the learner some questions to test their understanding of the process or their ability to access the KB article. Using this approach, if the learner doesn’t perform well in the demonstration portion, we’ll have an idea of why they are struggling - did they not understand the process (which a test question would catch), or is the problem in applying the knowledge?
In Example 2, we could send out a quick five-question quiz seven days after the training to test knowledge retention. If learners perform poorly, we might ramp up our observation to determine whether the knowledge gap is translating into poor performance.
As you can see above, observation begins early during practice, where we observe and provide feedback during role-plays, scenarios, or simulations. Ideally, this is a mix of electronic simulations and live practice to increase the volume and variety of feedback. When we start observation during practice, we can intervene quickly when necessary and use feedback to build learner confidence.
Then, as learners transition into the job, they are accustomed to observation and feedback. We apply the same methods and rubrics to on-the-job observation and feedback that extend beyond a simple proficiency sign-off. An important message to accompany all this observation and feedback is, “We want to make sure you understand what we ask you to do and how to do it under the required conditions.” Frequent observation and clear feedback should be positive and motivational, intended to build the learner’s confidence and proficiency, never to punish them for “not doing it right.” If we do it right, learners should welcome observation and feedback and see them as a critical part of the training process.
Observation is the training evaluation method that reveals whether learners can apply what they learned in training on the job (or in a scenario meant to mirror the job) - the holy grail of training evaluation in most organizations. By deploying this method early and often, you identify opportunities for quick course correction and support an approach that builds learner confidence and proficiency.