About the Author:

Michael Whatley
Michael Whatley is the Senior Manager of Instructional Design for a multi-billion-dollar automotive organization. Since graduating from the University of Georgia with a BSEd in Workforce Education in 2009, Michael has worked in sales and service education for the healthcare, logistics, and digital media industries, in areas ranging from mobile and social learning to microlearning and interval reinforcement. Michael was the recipient of a 2013 Gold Brandon Hall Group Excellence Award for the Best Use of Mobile Learning, and he continues to push the envelope in training and development, looking for ways to increase engagement through virtual platforms and enhance retention of content through easy-to-access performance support portals. When he's not hard at work shaping minds and changing behavior, he spends his time with his family, watching Netflix, or throwing strikes at the local bowling alley.

If you ask any LMS administrator what they report on a regular basis, you might hear things like course completions for e-learning modules, assessment scores, or class attendance. Historically, these things mattered: a number proved the learner did something and got credit for doing it. The problem with those types of metrics is that they don't really demonstrate whether learning actually took place. When it comes to true learning analytics, very few of the traditional levels of assessment provide value anymore. Technology has advanced to the point that learning analytics now encompasses tracking the transfer of knowledge and the effects that transfer has on the business.

Let's take a look at Kirkpatrick's four levels of assessment, one at a time. First, there's Reaction. Did the participant find the learning event enjoyable and relevant? Answering this question on a survey tells the evaluator that the event wasn't boring, the trainer was likable, or there was an above-average catered lunch, because that's all that stood out to the participants. Nothing in this data tells the evaluator whether learning actually happened. Participants can be fully engaged in a class and still walk away no more educated than they were when they walked in.

To really assess whether learning happened, let's move on to Kirkpatrick's second level: Learning. This level is meant to measure how well participants acquired the expected knowledge and skills through participation. A typical assessment strategy meant to demonstrate an increase in knowledge and skills is a pre-test followed by a post-test. Surely if participants score higher on the post-test than they did on the pre-test, they have learned something, right? Those numbers are undeniable. Except all that really demonstrates is a participant's ability to recall and recite the information just presented.
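To make that concrete, here is a minimal sketch in Python of the kind of pre/post comparison this level relies on. The scores are made up, and the normalized-gain formula is just one common way to express improvement; notice that nothing in it tells you whether the knowledge will last past the test.

```python
# A minimal, illustrative Level 2 comparison. The (pre, post) scores
# are hypothetical; normalized gain expresses how much of the possible
# improvement a learner achieved, not whether the learning will stick.

def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Fraction of the available headroom the learner actually gained."""
    return (post - pre) / (max_score - pre)

scores = [(55, 85), (40, 70), (72, 90)]  # hypothetical (pre, post) pairs
for pre, post in scores:
    print(f"pre={pre:>3}  post={post:>3}  gain={normalized_gain(pre, post):.2f}")
```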

How many of you reading this ever crammed for a test in high school or college? How much of that information did you retain after the test was over? Let’s check the science on this one. According to German psychologist Hermann Ebbinghaus’s forgetting curve, humans tend to lose their memory of freshly learned knowledge in a very short time if they do not make the effort to retain it. In other words, use it or lose it. If you don’t apply what you’ve learned, the training will not stick. So, when we assess learners immediately after a class, chances are they will forget what they learned as soon as they click the submit button on that test.  
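For the curious, the forgetting curve is commonly approximated as exponential decay, R = e^(-t/S), where R is retention, t is elapsed time, and S is the stability (strength) of the memory. A small Python sketch, with purely illustrative stability values, shows how steep the drop-off is without reinforcement:

```python
import math

def retention(t_days: float, stability: float) -> float:
    """Ebbinghaus-style decay, R = e^(-t/S). Higher stability (S)
    stands in for reinforced material; both values below are
    illustrative, not empirical."""
    return math.exp(-t_days / stability)

for day in (0, 1, 2, 7, 30):
    crammed = retention(day, stability=2.0)
    reinforced = retention(day, stability=15.0)
    print(f"day {day:>2}: crammed {crammed:.0%} vs. reinforced {reinforced:.0%}")
```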

This is where most learning evaluations stop. The learning event occurred, and the session was evaluated, so the job is done, right? Wrong. Where you can really begin to see whether learning stuck is through Kirkpatrick's third level: Behavior. Did the participant change his or her behavior based on what they learned? Observing that behavior change is exactly what demonstrates whether learning has transferred.

For tasks that don't require decision making, repetition is the key to increased performance, or changed behavior. Think about a basketball player practicing his jump shot. Technically, the execution is perfect: swish after swish, alone in a gym. But what happens when he gets on the court for the first time, his team is behind by one point, the clock is winding down, and he has to choose whether to take the shot in the face of a defender or pass the ball to an open teammate? The behavior he has been training to perform may seem like it's all about the shot, but it's not. The goal of his training is to score points, and the player's decision-making competence is what demonstrates his learning.

Author and research translator Will Thalheimer is an out-of-the-box thinker when it comes to learning transfer. He developed the Learning Transfer Evaluation Model (LTEM), which speaks to what learning evaluations should look like: given a situation where a decision is necessary, can the individual assess the situation, rely on their training to make the right decision, and take the right course of action?

Let’s put it into a customer service context, for example. When the phone rings in the call center, there’s no way for the support rep to know who’s on the other end of the line. It could be an angry customer, or it could be someone with a general support question, but once they say “Hello,” it’s time to act. They have to think, decide, and act in a matter of seconds in order to solve the customer’s problem and keep them satisfied.

Getting back to Kirkpatrick, let's move on to Level 4: Results. This level is where true learning analytics live. It all starts with the desired outcome for the learning event. It's not about whether the learner passes a test, or whether they enjoyed the training. It's about whether the learner can perform a task that drives business results. Can the learner sell more widgets? Can the learner handle customer inquiries faster, with fewer errors?
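As a sketch of what that could look like in practice, here's a Level 4-style comparison of operational KPIs before and after a learning event. The call-center figures are entirely hypothetical, and a real analysis would need to control for seasonality and other confounders before crediting the training:

```python
# Illustrative Level 4 evaluation: compare business KPIs measured
# before and after training. All numbers are hypothetical.

baseline = {"avg_handle_time_sec": 420.0, "error_rate": 0.082, "weekly_sales_per_rep": 14.0}
post_training = {"avg_handle_time_sec": 365.0, "error_rate": 0.051, "weekly_sales_per_rep": 17.5}

for kpi, before in baseline.items():
    after = post_training[kpi]
    change = (after - before) / before
    print(f"{kpi}: {before} -> {after} ({change:+.1%})")
```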

It takes a fundamental shift in how you design a learning event to think through, up front, what the ultimate outcome should be. As learning professionals, we're tempted to look at levels one and two as the golden ticket, but that's only half the battle. Those metrics tell you whether people like what you've built. Moving on to levels three and four truly demonstrates the impact a learning event has on the business. That's where companies start to realize how valuable learning is for their bottom line.
