Marie-Hélène Budworth

Associate Professor of Human Resource Management, specializing in learning, development & motivation.

Evaluating training programs

August 2nd, 2011 · No Comments · Uncategorized

Argh!  The Globe and Mail recently published a short piece supporting the use of Kirkpatrick’s training evaluation model as the standard for training evaluation.

I find this frustrating. Training evaluation in practice is an area where thinking has not changed in close to 30 years. That is a bit of a mystery, given that organizations rarely evaluate training, and those that do rarely do anything with the data. Could it be that we struggle with evaluations because we are not doing them well? I often hear learning specialists argue that they must evaluate their learning programs, and then spend most of their time on what Kirkpatrick would call "Level 1": Did you like the program? Did you enjoy the experience? Did you think the instructor was well prepared? These questions might help you decide whether to use the same trainer or caterer again, but they tell you little about the effectiveness of the learning initiative itself.

As we move up Kirkpatrick’s levels, bigger problems arise.  We often measure learning by asking people to indicate how much their knowledge has increased as a result of the training program.  This does not measure learning.  I actually don’t have a clue what it measures. People are notoriously poor at this type of self-assessment and reflection.  Learners are being asked questions that they cannot answer.  I could go on and rant about how measuring behaviour change at Level 3 requires a significant commitment of time and resources and how it is almost impossible to attach learning outcomes to organizational objectives as required by Level 4 – but I will focus on the positive instead. 

There is a lot that organizations can do to ensure that learning interventions (e.g., training programs, mentoring programs, coaching) are effective; however, before we can determine effectiveness, we have to know why we are running the program in the first place. If we can answer that question, we have made a productive start on evaluation. The trainer needs to ask, "If this training program works, what will be different?" Will people behave differently? Will there be a change in efficiency? Will there be a change in mindset? Once we know what we are looking for, we can put on our creative hats and find a way to measure 'it,' or some indicator of 'it.' For example, we might measure behaviour by having peers or managers observe trainees on the job. We might measure values or perceptions through a questionnaire at some point after training. There might be a tangible outcome for which we can develop clear statistics (e.g., number of sales, number of widgets produced, customer service complaints), as in the sketch below. The bottom line is that it is going to vary by organization and by program.
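To make the "tangible outcome" case concrete, here is a minimal, hypothetical sketch in Python. It assumes the outcome we care about is weekly sales per trainee; the names and numbers are invented purely for illustration, not taken from any real program.

```python
from statistics import mean

# Hypothetical illustration only: invented weekly sales figures for the
# same eight trainees, measured before and after a sales-training program.
pre_training = [12, 15, 9, 14, 11, 13, 10, 16]
post_training = [14, 18, 10, 15, 14, 15, 12, 19]

# Each trainee serves as their own baseline, so look at per-person change
# rather than asking people how much they think they improved.
changes = [post - pre for pre, post in zip(pre_training, post_training)]

avg_change = mean(changes)
share_improved = sum(c > 0 for c in changes) / len(changes)

print(f"Average change in weekly sales per trainee: {avg_change:+.1f}")
print(f"Share of trainees who improved: {share_improved:.0%}")
```

The specific metric will differ by organization and by program; the point is simply that the measure is tied to what the program was supposed to change, not to whether participants enjoyed the day.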

If we begin to evaluate what we actually care about, the evaluations might finally have meaning and significance.
