August 22, 2012 By James Basore
A few years ago a colleague sent me a research article. The article stated that 90 percent of training resources are devoted to the design, development and delivery of training, yet only 15 percent of what is learned transfers to the job (Brinkerhoff, 2006).
After reading this article, I not only started digging up more research but I also quickly realized that I didn’t have a process or set of tools for evaluating whether our training program at the Lawrence Berkeley National Laboratory (LBNL) was effective.
When I talked to my colleagues, I found out I wasn’t alone. All of us were evaluating whether participants valued the training and whether they actually learned. Beyond that, many of us struggled to find the time or resources to evaluate whether our training was having a positive effect on safe work performance or contributing to our organizational goals.
As I began to research the topic of training evaluation, I discovered that there is one dominant model for evaluating training effectiveness: the Kirkpatrick Model. In short, the Kirkpatrick Model is built around a four-step process in which each step (or level) adds precision, but also requires more time-consuming analysis and greater cost.
The following is a brief overview of each step:
Level One: Evaluating Reactions: Measures how participants value the training. Determines whether participants were engaged, and whether they believe they can apply what they learned.
● Evaluation tools include end-of-course surveys that capture whether participants are satisfied with the training and whether they believe it is effective.
Level Two: Evaluating Learning: Measures whether participants actually learned from the training.
Evaluation tools include:
● Pre-tests, post-tests and quizzes (a brief scoring sketch follows this list)
● Observation (e.g., did the person execute a particular skill effectively?)
● Successful completion of activities
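If you want to turn pre-test/post-test results into a single number, one common summary (not mentioned in the original article) is the normalized gain: the score improvement expressed as a fraction of the headroom available before training. The Python sketch below is a minimal illustration; the function name, the 100-point scale and the sample scores are all hypothetical.

```python
# Minimal sketch (not from the article): summarizing Level Two
# pre-test/post-test results. All names and sample scores are
# hypothetical; scores are assumed to be on a 0-100 scale.

def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Score improvement as a fraction of the headroom available before training."""
    return (post - pre) / (max_score - pre)

# Hypothetical scores for a small class.
pre_scores = [55, 60, 70, 45]
post_scores = [80, 85, 90, 75]

avg_pre = sum(pre_scores) / len(pre_scores)
avg_post = sum(post_scores) / len(post_scores)
gains = [normalized_gain(p, q) for p, q in zip(pre_scores, post_scores)]

print(f"Average pre-test score:  {avg_pre:.1f}")
print(f"Average post-test score: {avg_post:.1f}")
print(f"Average raw gain:        {avg_post - avg_pre:.1f} points")
print(f"Average normalized gain: {sum(gains) / len(gains):.2f}")
```

A normalized gain near 1.0 means learners picked up nearly everything they didn’t already know; a gain near zero means the course added little beyond what participants walked in with.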
Level Three: Evaluating Behavior: Measures whether training had a positive effect on job performance (transfer). Whether to evaluate at this level is a cost-benefit decision, because it can be resource-intensive and requires a more time-consuming analysis. A Level Three evaluation may be reserved for safety skills with a high consequence of error, where you want to make sure safety skills and performance transfer to the job.
Evaluation tools include:
● Work observation
● Focus groups
● Interviews with workers and management
Level Four: Evaluating Results: Measures whether the training is achieving results. Is the training improving safety performance? Has training resulted in better quality, increased productivity, increased sales and better customer service? The challenge here is that many factors influence performance, so it is difficult to attribute improved performance to training alone.
Evaluation tools include:
● Measure reduction in the number, or severity, of incidents or accidents compared against the organization’s performance (or contract goals)
● Measure reduction in the total recordable case (TRC) rate
● Measure reduction in the DART rate (days away, restricted or transferred); the sketch below shows how both rates are computed
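For trending the TRC and DART metrics above, OSHA’s standard incidence-rate formula is (number of cases × 200,000) / total hours worked, where 200,000 approximates the annual hours of 100 full-time employees. The Python sketch below applies that formula; the case counts and hours are hypothetical, not LBNL figures.

```python
# Minimal sketch: OSHA-style incidence rates for Level Four trending.
# The 200,000 constant approximates the annual hours of 100 full-time
# employees (100 x 40 hours x 50 weeks). Counts and hours below are
# hypothetical, not LBNL data.

FULL_TIME_HOURS = 200_000  # benchmark hours for 100 full-time employees

def incidence_rate(cases: int, hours_worked: float) -> float:
    """Cases per 100 full-time employees per year."""
    return cases * FULL_TIME_HOURS / hours_worked

hours = 1_250_000  # hypothetical total hours worked by the organization

trc_rate = incidence_rate(cases=14, hours_worked=hours)  # total recordable cases
dart_rate = incidence_rate(cases=6, hours_worked=hours)  # days away/restricted/transferred

print(f"TRC rate:  {trc_rate:.2f}")   # 2.24
print(f"DART rate: {dart_rate:.2f}")  # 0.96
```

Because the rates are normalized per 100 full-time employees, they can be compared year over year, or against contract goals, even as headcount changes.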
When it comes to evaluating training effectiveness at your organization, what methods do you use? Has the Kirkpatrick Model worked for you? Which metrics do you collect? How have you evolved your training programs based on this kind of analysis?
James Basore will share more details about his training approach in the “Driving Success Through Effective and Efficient EHS Training” session at NAEM’s EHS Management Forum on Oct. 17-19 in Naples, Fla.
Source: http://www.thegreentie.org/