Joette Derricks (jpderricks@gmail.com) is CEO at Derricks Consulting, LLC, in Hunt Valley, MD.
Although training may appear straightforward to most healthcare leaders, the effectiveness of follow-up is anything but clear-cut. When a hospital allows long-term instructional courses to be offered during regular business hours, it invests resources: cost per employee, supplies, test fees, and lost revenue from time away from employees’ current job duties. If the hospital is going to offer its employees 20, 40, or even 80 hours of instruction, management wants to know whether the training was effective. After training is completed, the primary focus is on the individual employee’s behavior. Did the employees learn the material, and can they use it effectively in their current or future roles? If there is a certification test at the end of the training program, the goal is to have all the trainees pass. If there is no official certification test, management still wants some type of assurance that the employees have learned the course material and know how to apply it. How does management gain that assurance?
Years ago, training evaluation focused on “after the fact” reporting. Such reporting is quick and numbers-based (e.g., completion rates, attendance, and due date tracking), but it captures only efficiency and operational activity. It does not evaluate the training’s effectiveness.
Measuring the training’s effect
The HCCA-OIG Measuring Compliance Program Effectiveness: A Resource Guide,[1] issued in March 2017, provides ideas on what to measure and how to measure the effectiveness of an organization’s compliance program. Because training is a part of an effective compliance program, the ideas offered can be applied to all types of training.
The Resource Guide measurement tips to evaluate the effectiveness of compliance education include:
- A review of the organization’s documents to determine if the organization has established a method for evaluating the effectiveness of the program;
- A review of post-training incident logs to determine if employees’ behavior has changed because of the training;
- The use of post-training tests or evaluations that include employee feedback and subsequent modification of the training material, if needed; and
- The use of a knowledge survey post-training and up to six months after the training.
Measuring effectiveness requires validating the results in a meaningful way to determine whether the employees learned the material. If they did not, technical or other assistance may be provided before the participant moves on to the next subject or to more advanced training modules. Training is always done with specific objectives, and validating those objectives through measurable metrics gives leaders the answers they need about the training’s effectiveness.
Most professional associations that offer some type of certification maintain data on their pass/fail rates, although at times they hold that data close rather than release it through their website or other means. If the association’s training curriculum states that 70% of trainees who attend the full course pass the certification exam on the first attempt, the hospital’s management has a basic benchmark for judging the success of its training. If ten employees took the course and only four passed the certification test, the results (a 40% pass rate) fall below the benchmark, and leaders need to dig deeper into why. When participants do not learn what was intended from the training, the training material should be revised or the instructor should deploy a different training methodology. If the training material has generated the desired results with other instructors, perhaps it is the instructor, rather than the students, who requires additional mentoring or training.
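For organizations tracking several cohorts against a published benchmark, the comparison above is simple enough to automate. The following is a minimal sketch only, assuming the 70% benchmark and the four-of-ten result from the example; the function name is illustrative, not part of any standard tool:

```python
def below_benchmark(passed: int, enrolled: int, benchmark_rate: float):
    """Compute a cohort's first-attempt pass rate and flag it if it
    falls below the association's published benchmark rate."""
    pass_rate = passed / enrolled
    return pass_rate, pass_rate < benchmark_rate

# The article's example: 4 of 10 trainees passed against a 70% benchmark.
rate, needs_review = below_benchmark(4, 10, 0.70)
print(f"Pass rate {rate:.0%}; below benchmark: {needs_review}")
# Prints: Pass rate 40%; below benchmark: True
```

A flagged cohort is only a signal to investigate, not a verdict: as the text notes, the cause may lie in the material, the methodology, or the instructor.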