Measuring the success of learning and development has earned a place among the critical issues in the learning and development field. For decades, this topic has been on conference agendas, and journals and newsletters regularly dedicate space to it. Professional organizations have been created to exchange information on measurement and evaluation, and more than 50 books have been written on the topic. More importantly, top executives have an increased appetite for data about the business contribution of learning.
Although interest in measuring training success has heightened and much progress has been made, the issue still challenges even the most sophisticated and progressive learning and development functions. The top executive group, the most important stakeholder, holds the key.
While some learning and development leaders argue that developing a successful evaluation process is too difficult, others are quietly and deliberately implementing effective evaluation systems and reporting results to executives. The latter group has gained tremendous support from the senior management team and has made much progress.
Regardless of the position taken on the issue, the reasons for measurement and evaluation are clear. Almost all learning and development professionals share a concern that they must show the results of learning investments to senior executives. Otherwise, funds may be reduced or the function may not be able to maintain or enhance its status and influence within the organization.
The dilemma surrounding the success of learning is a source of frustration for many senior executives. Most executives realize that learning is a basic necessity when organizations experience significant growth or increased competition.
Formal learning is also important during business restructuring and rapid change, where employees must learn new skills and often find themselves with heavier workloads. However, during an economic decline, some executives are not so sure that learning and development is needed, as evidenced by some of the reductions in learning and development team members in this recession.
Executives intuitively feel that providing learning opportunities is valuable, and they logically anticipate a payoff in important, bottom-line measures, such as productivity improvements, quality enhancements, cost reductions, time savings, and improved customer service. Yet the frustration comes from the lack of evidence to show that programs really work.
While results are assumed to exist and learning programs appear to be necessary, more evidence is needed, or executives may feel forced to adjust future funding. A comprehensive measurement and evaluation process, designed with top management in mind, represents the most promising, logical, and rational approach to accounting for the learning investment.
We conducted a CEO survey aimed at obtaining direct feedback about the success of learning and development from a significant number of CEOs in large organizations. To our knowledge, no significant data has come from this elusive group. And although many one-on-one interviews are presented in magazine profiles, they rarely discuss specifics on results.
Surveys of this nature are often directed toward heads of learning and development, where they are asked to provide their impression of the results their executives want, instead of obtaining this information directly from the CEOs.
To obtain the executives' views on learning and development, we sent a survey with a cover letter instructing CEOs not to forward it to the learning and development department, in hopes that we would hear from the CEOs directly. The survey was formatted and designed for the best possible response rate, which for us meant at least 10 percent, with an optimistic goal of 30 percent.
Techniques to achieve a higher response rate. We collected data between October 2008 and February 2009 using the most accurate CEO database directly from Fortune Magazine. We discarded any firms on the list that were currently facing economic turmoil, such as AIG, Lehman Brothers, General Motors, Ford, Merrill Lynch, Morgan Stanley, and Chrysler, or any company that had reported a significant loss.
This trimmed 99 companies from the Fortune 500, leaving 401 for the survey. We also selected 50 large, private-sector employers, using Hoover's website as a guide; essentially, these companies would appear in the Fortune 500 if they were publicly held. Together, this provided a total of 451 firms in the large-company sector to receive the survey.
In a well-executed return-on-investment study, for example, it is not unusual to have a 70 to 90 percent response rate. Because this group was particularly difficult to reach, we applied the same discipline, determination, and techniques to this project.
Their exclusivity is in part a result of their "gatekeepers" (assistants, vice presidents, and others), who protect them from tasks that may be deemed time-consuming or not essential to their role. In many large organizations, dozens of gatekeepers may be assigned to filter just one executive's requests and demands. Consequently, we knew that our approach for this survey had to be creative. We used 10 powerful techniques to achieve a higher response rate. Here are a few of them:
- Unless CEOs elected to provide contact data, survey responses were anonymous. (No one chose to provide contact data.)
- We wrote personal notes on almost all of the letters, based on our relationship with that organization. For many of them, we are shareholders, and wrote a plea for results as an act of accountability. In other cases, we are customers, and wrote that we were interested from a customer viewpoint. Sometimes we would mention our current relationship, with phrases such as "we have projects ongoing," "we serve as a regular consultant," or, as in the case of IBM Services, "we act as an official subcontractor." This personal approach may have helped the survey stand out among numerous other requests.
- We tried to work with someone else in the company who was not in the learning and development area. In approximately 20 percent of these firms, we knew someone, usually a middle manager, and asked that person to deliver the survey directly to the CEO.
Response rate. Ninety-six individuals responded, representing 21.3 percent of the total. The executives chose to remain anonymous, and some did not answer particular questions or provide comments. A few executives, however, gave us extensive comments and seemed to take great interest in doing so.
This response is especially significant considering the difficult economic circumstances at the time the survey was conducted; spending even a few minutes on a survey about learning and development is not a priority for most executives in such a period. Additionally, to our knowledge, all of the returned surveys were actually completed by the CEO, as we requested. We suspect that if a CLO had completed one instead, she would have acknowledged it.
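As a quick check, the sample size and response rate described above follow directly from the article's figures (500 Fortune companies, 99 excluded, 50 private-sector firms added, 96 responses):

```python
# Verify the survey's sampling and response-rate arithmetic.
fortune_500 = 500
excluded = 99        # firms in economic turmoil or reporting a significant loss
private_sector = 50  # large private-sector employers added from Hoover's

surveyed = fortune_500 - excluded + private_sector  # 451 firms received the survey
responses = 96
response_rate = responses / surveyed * 100          # about 21.3 percent

print(f"{surveyed} firms surveyed, {response_rate:.1f}% response rate")
```

This confirms the 451-firm sample and the 21.3 percent figure reported in the text.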
Learning and development investments at these companies ranged from $10 million to $640 million; sometimes executives scribbled notes saying they were not sure of the exact amount. The average was $138 million. Figure 1 provides additional details.
Regarding the rationale for setting the investment level, CEOs selected a strategy from a list. (Figure 2 shows the responses.) Although these results confirmed what we expected to a certain extent, a few surprises surfaced. Only 4 percent acknowledged that they try to avoid these investments, but we suspect that this number may be a little higher.
Twenty percent of CEOs said they invest only the minimum. In part, this may reflect the current economic times, in which executives have had to trim activities perceived as unnecessary. As expected, benchmarking was the most frequently selected strategy (39 percent of respondents), but because we asked that only one strategy be checked, some respondents may actually combine benchmarking with other approaches. We limited the choice to one to determine the dominant approach for setting the investment level.
Surprisingly, a significant number of CEOs (10 percent) mentioned that they invested in all learning and development needs. Although we worded this option so that they would feel comfortable with the choice, this is probably over-investing. From our own experience, we see this routinely, and some executives are proud that they can invest in practically any learning request.
Finally, it's quite refreshing that a significant number invest when they see value (18 percent). We assume, however, that there are many different definitions of the term "value," and that this does not always mean ROI calculations.
Reporting relationships. An important factor in this study is how close the CLO, the head of learning and development, is to the CEO. In this study, a "1" indicates that the CLO reports directly to the CEO, a "2" means the CLO is two levels below the CEO, and a "3" means three levels below.
The average was 3.2, which means that, on average, the CEO is more than three levels above the CLO. This distance is a little disturbing, considering the great amount of effort that has been focused on elevating this function to a higher level within the company.
One challenge that has compounded accountability issues in learning and development is the lack of interaction between the learning and development professional and the senior executive team. In most organizations, this interaction is limited. In only a few organizations does the top learning executive report to the CEO. Even in those organizations, the time spent with the CEO is not proportional to the time the CEO spends with other direct reports.
Senior executives have limited time, and they spend it in those areas they perceive to be critical, important, and central to the organization's success. Unfortunately, many executives do not see learning and development rising to this level of criticality, and they therefore allocate little time to engage in it.
The problem is compounded when the learning and development executive reports through one or more executives and interacts with the CEO only on special projects or during periodic reviews of the learning and development budget. Regrettably, these budget reviews are when senior executives hope to see a connection between learning outcomes and the business to justify increasing or sustaining budgets. It is no surprise, then, that there is confusion and misunderstanding about the expected measures of learning success.
Satisfaction with the current measures of success. Limited interaction with executives often forces the learning and development leader to "guess" what the top executives want in terms of measures of success. This guesswork becomes more inaccurate when filtered through multiple layers of interpretation.
Asking top executives outright what specific measures they want to improve often yields ineffective or misguided dialogue. After all, top executives do not see their responsibility as defining the measures of success. Essentially, they want the learning and development leaders to report improvement in measures of success that are meaningful to them in terms of business contribution.
We asked the executives a general question about their level of satisfaction with the measures of success for learning and development, using a four-point scale to force them to take a stand: a "1" meant very dissatisfied, and a "4" meant very satisfied. We were hoping to see at least a "3," but the average was 2.52, indicating some dissatisfaction. (Figure 3 shows the results.)
Metrics coverage. We knew it would be easiest for executives to respond to a checklist, so deciding which metrics to include was critical. We provided eight categories and mapped them into the levels of evaluation. The first two categories, inputs and efficiencies, are process measures or inputs to the process, including volume, costs, and speed.
The next two categories, reaction and learning, are typical learning measures. Application is the extent of the use of knowledge and skills. Impact, the business measure, and ROI created much interest; we included ROI because of the abundance of information about its use. Finally, we included awards, which many CLOs are currently pursuing and reporting to executives, particularly in large organizations.
Obviously, these map into the levels of evaluation described in the literature. The first two categories (inputs and efficiencies) are Level 0, reaction is Level 1, learning is Level 2, application is Level 3, impact and awards are Level 4, and ROI is Level 5. Although other specific measures may be identified, they all should fit into one of these eight categories.
Given this list, we wanted to know three things:
- What metrics are being reported to you now?
- What should be reported that isn't being reported now?
- How would you rank these in terms of value?
Figure 4 shows the responses. The first percentage column is the percent of CEOs who checked this item as a metric being reported, the second is the percentage indicating that it should be reported, and finally, the last column is the average ranking number for the group, recognizing that the lower the number, the higher the ranking.
A score of 8 would indicate an 8th-place ranking on the list, and a "1" would indicate first on the list. Inputs and efficiencies were ranked 6th and 7th, respectively.
Input indicates scope and volume, something executives need to know, and these data are almost always reported. While most CEOs receive this kind of information now, they quickly recognize its limited value. Reaction is ranked the lowest, which may not be a surprise, though it is the number 1 outcome measure reported to executives. This particular measure could be improved with more focus on content.
The awards category ranked higher than we expected. Both optimists and pessimists wrote comments: the optimists were proud of their awards and thought they reflected the quality and significance of the learning and development team, while the pessimists said the awards mean very little and are often based on how much an organization is willing to spend on the award application.
The two highest-ranked areas were impact and ROI, which should come with little surprise because CEOs always want to see this kind of data, especially when the economy is struggling. This reporting represents a very important challenge and opportunity for learning and development: These are the least-reported data sets, but at the same time, they are the most valuable to executives.
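Reading the average-rank column is simple once you remember that a lower number means a higher perceived value. As an illustration only, here is a sketch with hypothetical average ranks (the actual values are in the article's Figure 4) that reproduces the ordering described above:

```python
# Hypothetical average ranks for the eight metric categories; the real
# numbers appear in Figure 4. Lower average rank = higher perceived value.
avg_rank = {
    "inputs": 6.0, "efficiencies": 7.0, "reaction": 8.0, "learning": 5.0,
    "application": 4.0, "impact": 1.5, "ROI": 2.0, "awards": 3.5,
}

# Sort ascending by average rank to list categories from most to least valued.
by_value = sorted(avg_rank, key=avg_rank.get)
print(by_value)  # impact and ROI lead; reaction trails
```

With these illustrative numbers, impact and ROI come out on top and reaction comes out last, matching the pattern the survey found.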
Learning scorecard. We asked about the learning and development scorecard and discovered that only 21 percent of the CEOs surveyed said they had one. This is surprising given the work to develop balanced scorecards in large organizations. Of course, a scorecard could very well be in place but not make its way up to the CEO.
On the positive side, this result indicates that some executives are reviewing scorecards on a routine basis. For the most part, the comments about the scorecard were either negative or constructive. Only one executive indicated being pleased with the scorecard in use; the other comments reflected a belief that the scorecard was "inadequate," "incomplete," "doesn't have all the data," or "doesn't really connect to the business."
Executive involvement. A critical issue for learning and development departments is the extent of executive involvement. Most would argue that when executives take a more active role and become more involved with the investment, more results will be achieved.
As expected, the top area is that the CEO personally approves the learning and development budget (78 percent). Second on the list, at 73 percent, is reviewing requests for major programs, while 61 percent review the results of those programs. Twenty-four percent use a scorecard to monitor progress and make adjustments.
Next, 29 percent open and close major programs, while 21 percent host or conduct periodic review meetings, and only 18 percent actually teach segments of major programs. Disappointingly, the two weakest levels of involvement, holding periodic review meetings and teaching segments, can have the most impact on learning and development success.
Periodic review meetings represent an opportunity to review progress, make adjustments, and check results; they are a great way to stay connected, provide feedback, and see the results that can justify future funding. Getting involved in teaching is a powerful way to connect learning and development to the organization and deliver value. Jack Welch (GE) and Andy Grove (Intel) are two historical examples of how effective this type of involvement can be.
While the results are based on only 96 executives, the amount of information is significant. To our knowledge, the results represent the highest level of CEO involvement ever assembled in research on measuring the success of learning and development. They present some challenges for the CLO and the learning and development team.