It is common to encounter skeptical attitudes toward the measurement of leadership development programs. In truth, the research team was somewhat skeptical as we began the ASTD/ICF study titled "The Impact of Leadership Development Programs." Our initial foray into the literature only reinforced this skepticism. Searching the literature databases, we found thousands of hits on leadership development programs (LDPs) and thousands more on training evaluation programs. However, when the two terms were put together in a search, very little appeared.
Even those articles whose abstracts indicated we had a "hit" often turned out to be off-topic when read. Most articles gave advice on how to do evaluation of LDPs, but very few provided compelling examples. Our big break came when one of our expert panel members, Laurie Bassi, made the comment, "Maybe we have a case here of publishing bias?" That is, people who really do LDP evaluation well do not have time to publish articles. Bassi's hypothesis turned out to be correct, and our team was able to identify practitioners in the field who had some compelling stories to tell.
As our skepticism subsided, we were faced with a new question: Why is it so rare that practitioners and their organizations engage in Level 4 and 5 measurements? Don't people want to know that their LDP investments are paying off and making the organization more successful? After 18 months of research, we are still not sure we can definitively answer that question, but we now have some great clues. Here is what the research taught us:
- "Defense versus improvement." Practitioners have to approach evaluation from the angle of "continuous improvement" rather than "defense of the program." It seems that many practitioners engage in Level 4 and 5 analyses only when forced to defend their programs. If evaluation became a natural part of the instructional design process and the data were used to constantly make the program better, practitioners would become more enthusiastic about investing resources in evaluation.
- Expertise. Many training departments don't have the internal expertise for conducting rigorous Level 4 and 5 evaluations. Many of the "best case" companies in the study had evaluation experts on staff, and they were committed to conducting valid and practical evaluations.
- Barriers, really? Survey respondents who had never tried a particular evaluation technique anticipated significantly more potential barriers than respondents who had actually used the technique. While barriers are certainly organization-specific, this finding also calls into question the accuracy of practitioners' assumptions about the resources required to conduct LDP evaluations.
- Lack of creativity. Many practitioners falsely believe that evaluation is about math. In truth, the math element of evaluation is best left to software. The value that practitioners add to evaluation is in the methodology they devise for efficiently and effectively answering the evaluation questions. With a bit of logic and creative thinking, we can often find an innovative way to show clear evidence of the LDP's effect.
So what does it take to make LDP evaluation work? The ingredients may be simpler than you think.
- Leadership support. As with most change initiatives in organizations, you must have leaders who really understand what LDP evaluation is and why it is important. "Best case" companies in this study invariably had forged a relationship with senior leaders who helped drive the LDP program, as well as the evaluation, forward.
- A culture that supports evaluation. "Best case" companies tended to be in environments where their products or services were of critical importance to their customers. For example, several of the companies were in the healthcare field. Because of this, a culture of measurement and improvement already existed before they even recommended an LDP or the evaluation of an LDP. Those "best case" companies that didn't live within such a culture spent a lot of time up front building one.
- Participant support. Because many of the evaluation techniques that were discovered during the research depended on participant involvement, it is essential that the participants themselves buy in to the evaluation process.
- An "object" for the evaluation. "Best case" companies had a clear understanding of what types of organizational metrics the program was intended to affect. We found that, in general, "leading indicators" of impact (such as employee satisfaction and turnover) tended to be measured by the best companies, whereas less successful organizations often pointed to lagging indicators such as sales, profitability, and customer satisfaction (but were rarely able to demonstrate that they had successfully linked these Level 4 and 5 metrics to the LDP).
- Baseline data. Without valid baseline measurements, it is nearly impossible to isolate the effects of the LDP on the business indicator. Although it can be difficult to collect these data while you are in the throes of creating the LDP, it is essential. Keep in mind that if you intend to evaluate the program, you are going to have to "pay the piper" at some point. As the study demonstrated, those organizations that reported having experience with implementing the more rigorous evaluation techniques (for example, using a control group) reported significantly fewer resource barriers than others.
- An evaluation plan. "Best case" companies didn't stumble into an evaluation; they had a carefully thought-out plan that documented who they were going to evaluate, what statistical tests they would use, where they intended to get the data, why it was valuable to answer their fundamental evaluation questions, and when the evaluation would begin and end.
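To make the baseline-plus-control-group idea concrete, here is a minimal sketch of one common way to use such data: a difference-in-differences comparison. This is our illustration, not a method prescribed by the study; the metric, function names, and all numbers are hypothetical.

```python
# Hedged sketch: isolating a program's effect using baseline data and a
# control group via a simple difference-in-differences calculation.
# Every value below is invented purely for illustration.

def mean(values):
    return sum(values) / len(values)

def program_effect(participants_before, participants_after,
                   control_before, control_after):
    """Participants' change in the metric minus the control group's change.

    The control group's change estimates what would have happened without
    the program (seasonality, market shifts, and so on), so subtracting it
    helps isolate the program's contribution.
    """
    participant_change = mean(participants_after) - mean(participants_before)
    control_change = mean(control_after) - mean(control_before)
    return participant_change - control_change

# Hypothetical employee-satisfaction scores (1-10 scale), a "leading
# indicator" of the kind the study's best-case companies tracked.
participants_before = [6.1, 5.8, 6.4, 6.0]   # baseline, program group
participants_after  = [7.2, 6.9, 7.5, 7.0]   # post-program, program group
control_before      = [6.0, 6.2, 5.9, 6.1]   # baseline, control group
control_after       = [6.3, 6.4, 6.1, 6.2]   # post-period, control group

effect = program_effect(participants_before, participants_after,
                        control_before, control_after)
print(effect)  # the estimated effect attributable to the program
```

Without the baseline rows, only the post-period means would be available and the comparison would be confounded by any pre-existing difference between the two groups, which is exactly the isolation problem the study's point about baseline data addresses.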
Measuring the results of LDPs is the single most important way to aid in the improvement and effectiveness of the programs themselves. This study found that evaluation techniques that worked well were not necessarily those that were used with any great frequency. We encourage practitioners to branch out and try something new.