Findings from the American Society for Training & Development (ASTD) and Institute for Corporate Productivity (i4cp) study, The Value of Evaluation, show that measurement practices are improving as the pursuit of excellent learning evaluation continues. There is still room for development, however: few organizations believe they have mastered learning evaluation, and many admit to facing ongoing challenges.
ASTD and i4cp partnered to explore the complex issue of learning evaluation and the value of learning. A total of 704 human resource and learning professionals completed an online survey during May 2009. The majority of them (85 percent) were managers, directors, vice presidents, or C-level officers, and the sample was evenly distributed in terms of workforce size, annual revenue, and industry.
Kirkpatrick/Phillips evaluation model
The findings revealed that the five-level Kirkpatrick/Phillips model of learning evaluation is the most common practice. The five levels are participant reaction (Level 1), level of learning achieved (Level 2), changes in learner behavior (Level 3), business results derived from training (Level 4), and return on investment (ROI) from training (Level 5). While this finding was not surprising, additional questions yielded some interesting results, such as how frequently these levels of evaluation are used and how valuable they are to organizations.
The survey data answered these questions, showing that the majority of respondents (92 percent) measure their learning programs to at least Level 1 of the model. This isn't surprising given that Level 1 is the easiest metric to track, usually with "smile sheets." The use of other types of evaluation drops off as we move up each subsequent level of the Kirkpatrick/Phillips model, which is likely due to the increasing difficulty in obtaining the information at each level.
A little more than four out of five respondents said that their organizations evaluate at Level 2, where companies explore what knowledge was gained. Level 3 registered another drop in usage, with more than half of the respondents indicating that their organization evaluates learners' behavior. Level 4 measures an important area, business results, yet only 37 percent evaluate it to any degree. Level 5 received the least consideration, with only 18 percent of respondent companies measuring training's ROI. The survey results indicate that translating the value of a training program into dollars and cents can be a difficult process.
The most used levels are not the most valuable
One of the interesting insights from the survey data was that the extent to which training evaluation occurs at the different levels of the Kirkpatrick/Phillips model does not tell us much about its perceived value. Although Level 1 is the most used form of evaluation, it was found to be the least valuable for organizations. Only 36 percent of respondents said that Level 1 had a high or very high value. By comparison, 55 percent said that Level 2 had high or very high value, and three-quarters said the same about Level 3 and Level 4.
At first, these findings may strike some as perplexing, but there is almost always more value in knowledge gained, behaviors changed, and results achieved than in participants' reactions. This prompts the question of why companies are not evaluating more programs at the higher levels. Notably, of the few companies that measured Level 5, nearly 60 percent perceived it to have a high or very high value.
The survey probed the perceived barriers that prevent companies from using all of the evaluation levels. The barrier that looms largest is the difficulty of isolating learning as a factor that has an impact on results (52 percent endorsed it to a high or very high extent), which comes into play mostly at Level 4 and Level 5. The next most commonly noted barrier, cited by 41 percent of respondents to a high or very high degree, is the lack of a useful evaluation function within the learning management system (LMS). Thirty-eight percent of responding organizations noted an additional barrier: evaluation data are not standardized enough to compare well across functions.
Each barrier was negatively correlated with the Evaluation Success Index (ESI), a measure of the extent to which respondents believe their learning metrics are a worthwhile investment of time and resources. The intention was to assess which barriers are most strongly associated with successful (or unsuccessful) evaluation. The strongest correlation with the ESI occurs when evaluation data are not standardized enough to compare well across functions (r=-0.23). In other words, the more respondents indicate that their evaluation data can't be compared easily across functions, the less likely they are to report organizational success with overall evaluation efforts. Another significant correlation between the ESI and a barrier occurs when the LMS does not have a useful evaluation function (r=-0.21). This negative association suggests that the more respondents believe that their LMS does not have an evaluation function that meets their needs, the less likely they are to give their evaluation efforts high marks.
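The coefficients reported here are Pearson correlations. As a rough illustration only (the ratings below are invented for the sketch, not drawn from the study's data), a few lines of Python show how a negative r emerges when higher barrier severity tends to accompany lower ESI scores:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical ratings: barrier severity on a 1 (low) to 5 (very high)
# scale, paired with made-up ESI scores that tend to fall as severity
# rises -- yielding a negative coefficient, like the study's r = -0.23.
barrier_severity = [1, 2, 2, 3, 4, 4, 5, 5]
esi_scores = [4.2, 4.0, 3.8, 3.9, 3.1, 3.4, 2.9, 3.0]

print(round(pearson_r(barrier_severity, esi_scores), 2))
```

A value near zero would mean the barrier tells us little about evaluation success; the modest magnitudes reported in the study (-0.21 to -0.23) indicate real but weak associations.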
These findings provide insight into specific steps that organizations can take to avoid learning evaluation pitfalls. Learning evaluation data loses some of its value when it cannot be compared across functions, so taking the time to coordinate evaluation standards across functions may be beneficial, as can taking steps to ensure that the evaluation function of an organization's LMS actually meets its needs. Phillips has also suggested using methods such as control groups, trend-line analysis, forecasting models, and impact estimates to help isolate training's effect on results.
Organizations are always searching for ways to demonstrate the value of training, especially in the current economic climate. Although many are taking the initial steps toward excellent learning evaluation for their training programs, there is still much ground to cover in mastering measurement techniques and overcoming the challenges they face. T+D