Framing results solely in terms of Donald Kirkpatrick's four levels of measurement doesn't clarify the business value of training. For example, if the measure of success for a diversity course is that 2,000 managers successfully complete a one-day class, upper management sees only significant overhead.

To demonstrate the value of training to the organization, metrics must show a return-on-investment for the initiative as a whole. For the diversity training example, citing past figures from discrimination lawsuits and stating the objective in terms of expected reduction in litigation expenses makes a business case for training. Measuring this cost over time, before and after training, provides real metrics on the return-on-investment.
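
To make the arithmetic concrete, consider a purely hypothetical illustration: suppose discrimination-related litigation and settlement costs have averaged $900,000 a year, delivering the one-day course to 2,000 managers costs $400,000, and litigation expenses fall to $300,000 a year after training. The annual benefit is then $600,000, and the first-year return-on-investment is ($600,000 - $400,000) / $400,000, or 50 percent. The figures are invented, but the structure of the argument (benefit minus cost, divided by cost) is what turns a completion count into a business case.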

Defining return-on-investment

Instructional designers need to look beyond traditional training objectives to identify what to measure, how to measure it, and the current baseline against which to demonstrate value.

To do that, designers should follow these steps:

  1. Understand the business context for the training initiative
  2. Analyze associated costs
  3. Define the desired impact of training to the business and how to measure this impact
  4. Gather a pre-training baseline
  5. Track data for a defined period after training
  6. Report results
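
As a minimal sketch of steps 4 through 6, the Python snippet below compares a pre-training baseline against post-training results and reports a simple return-on-investment. The function name and dollar figures are illustrative assumptions carried over from the hypothetical diversity example above, not data from any real program.

  # Minimal, hypothetical sketch: all figures are illustrative.
  def roi_percent(annual_benefit: float, training_cost: float) -> float:
      """Standard ROI formula: net benefit divided by cost, as a percentage."""
      return (annual_benefit - training_cost) / training_cost * 100

  baseline_cost = 900_000.0        # Step 4: annual litigation cost before training
  post_training_cost = 300_000.0   # Step 5: same measure tracked after training
  training_cost = 400_000.0        # cost of delivering the one-day course

  # Step 6: report results
  annual_benefit = baseline_cost - post_training_cost
  print(f"Annual benefit: ${annual_benefit:,.0f}")
  print(f"First-year ROI: {roi_percent(annual_benefit, training_cost):.0f}%")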

The baseline

Before you can begin to demonstrate value, you must understand the training program's business context and its costs. For example, a wholesale distribution company moved its sales process from a paper-based system to wireless PDAs and laptops. This was a costly initiative because many of the sales staff were unfamiliar with wireless technology.

Based on an initial needs assessment, the training plan called for intensive instructor-led training across the country. To justify the proposed training budget, the training team investigated the reasons behind the initiative. They discovered a slow order process, lost orders, an imbalance of productivity among the staff, and low retention rates for the sales team.

Using this information, the team projected that effective training would not only have a positive effect on overall sales, but also on average productivity per individual and on employee retention. The group invited the human resources department to join the discussion, and together they began to measure the costs of losing, rehiring, and training salespeople as part of the average cost of sales.

To measure this cost, the consultants took a snapshot of the company's productivity, which included a number of factors, such as average sales per individual during a 30-day period, percentage of sales by performance group (top performers versus lowest performers), and annual sales staff turnover.

Human resources and accounting provided accurate figures for each category over several years. The team used the last two years as a baseline. This holistic look at the true cost of the salesforce and its productivity took into account that prior to the automated order process, a small number of salespeople carried each geographic area, with poorer performers and frequent turnover adding to sales overhead.
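
As a rough illustration of how such a snapshot might be captured, the sketch below uses hypothetical field names and figures (none drawn from the actual engagement) to represent the baseline and fold turnover into the cost of sales:

  # Hypothetical baseline snapshot; every field name and number is illustrative.
  from dataclasses import dataclass

  @dataclass
  class SalesBaseline:
      avg_sales_per_rep_30d: float   # average sales per individual over 30 days
      top_performer_share: float     # fraction of sales from top performers
      annual_turnover_rate: float    # fraction of sales staff leaving each year
      headcount: int
      cost_to_replace_rep: float     # recruiting, rehiring, and retraining cost

      def annual_turnover_cost(self) -> float:
          """Cost of losing, rehiring, and training salespeople per year."""
          return self.annual_turnover_rate * self.headcount * self.cost_to_replace_rep

  baseline = SalesBaseline(
      avg_sales_per_rep_30d=85_000,
      top_performer_share=0.60,
      annual_turnover_rate=0.25,
      headcount=200,
      cost_to_replace_rep=40_000,
  )
  print(f"Annual turnover cost: ${baseline.annual_turnover_cost():,.0f}")

Comparing the same snapshot before and after training makes the reduction in turnover cost, and the shift in average productivity, directly visible.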

Training would prove its value not only by increased sales in a given time period, but also by demonstrating that the productivity of the salesforce as a whole increased: moving average performers to a higher level of productivity and decreasing the cost of turnover as younger, more technology-oriented salespeople were able to get up to speed quickly.

The return

After the new system was in place, the group revisited all the metrics, gathering monthly figures starting after the first month of training. They were able to demonstrate that the new system not only helped achieve projected productivity increases, but it also had a significant impact on employee retention and on maintaining consistency across the salesforce.

With this data, the training team was able to demonstrate an impressive return-on-investment. As projected, retention of newer, technologically savvy salespeople increased. The new system boosted the salesforce's productivity and attracted and retained more stable, entry-level salespeople. Suddenly the value of the intensive training program spoke for itself.

This goes far beyond traditional pre- and post-testing to demonstrate knowledge transfer, and beyond the measurement of specific results for employees trained on a new system. This approach integrates the training function with business process change and positions training as a key part of the initiative. Because the training team was responsible for gathering and reporting results, upper management saw the results as a benefit of training.

Comprehensive analysis

While training managers are often included in planning initiatives, identifying a comprehensive set of metrics remains difficult.

For example, in 2003 a financial company with more than 3,000 information technology workers turned to consultants for help with its ongoing training projections. The department had an established curriculum for IT staff, but it was not able to predict the type of training these workers would need each year. In this case, the goals were to ensure that IT workers kept their skills current and that the training department had the data it needed to forecast training needs for the coming year.

No clear baseline existed. The annual cost of training had been calculated and a database of courses was in place, but no correlation had been made between training and its value to the organization. To set a goal against which to measure, the team sought input from IT managers, human resources staff, and the training department. Together they estimated an employee's value to the organization, taking into account skill levels and the costs of hiring and training. They also projected the number of IT programs for the year and the amount of work the company currently outsourced to IT consultants. If the team could provide a predictable set of training needs and improve the abilities of the in-house workforce, the cost of training and outside consulting services would decrease without a negative impact on project completion.

The team then developed an online employee review system that provided managers with access to the defined curriculum for each of their staff. The system linked directly to the training department. For each employee's annual review, the manager could work with the employee to identify the appropriate courses in the curriculum and schedule training as part of the review process.

The new system not only enabled the training department to forecast its needs for the current year, reducing the cost of training, but it also allowed managers and staff to ensure that in-house staff were prepared for upcoming technology projects. The metrics, such as lower consulting costs, lower training costs, and reduced turnover over a two-year period, were powerful tools that demonstrated value.

These examples highlight the complexity of defining and using metrics. At the same time, it's important not to forget basic tools such as Kirkpatrick's Level 1 measurement, also known as subjective employee evaluation.

In another case, a global networking corporation launched a program focused on a new method for contracting network services. Salespeople working internationally participated, and the rollout began with a comprehensive change management and training effort that included online surveys to track the effectiveness of messaging and training.

The change management and training team was discouraged by the poor adoption rate despite a costly and well-received campaign. But the student evaluations quickly uncovered underlying performance problems with the new software. While learners were enthusiastic about the concept and the training, they were frustrated by poor response time. Relaying this information back to the development group initiated changes to the application, which were quickly followed by improved adoption figures.

A changing landscape

As initiatives become more complex and affect the enterprise as a whole, defining true metrics becomes a multidisciplinary task that requires upfront analysis, coordination between departments, and the resources to measure baseline and ongoing performance. Fortunately, as more corporations integrate their office suites, the necessary data becomes more accessible, which gives training departments the tools they need to demonstrate value.

When learning professionals do a better job of demonstrating value to the organization, management will be able to correlate that value with training budgets, and training will only improve.