In a knowledge economy, an organization's ability to quickly adapt to changing realities is critical to its success. To facilitate the upkeep of knowledge and skills, workplace learning professionals seek innovative training design models and delivery methods so they can provide the right information at the right time to the right people.

Authoring, delivery, collaboration, and management tools can facilitate the development, distribution, sharing, and tracking of learning materials. However, if a solution's impact on learning is assessed erroneously, scarce resources may end up committed to programs with little or no value. The result: You waste employees' valuable time.

Of course, every learning solution is initiated for a specific reason, such as compulsory training, addressing a performance deficiency, improving productivity, or introducing new processes or equipment. The problem arises because learning professionals typically rely on this kind of qualitative evidence to build the business case for training, which makes requests difficult to assess and prioritize from a client or executive perspective.

When considering a significant investment or deciding among multiple requests, executives need a quantitative measurement that addresses how training will help the unit and the organization attain their goals, whether training is worthwhile, and how training compares with other organizational initiatives. For example, if the executive office is considering 20 programs but can fund only 10, which ones should it select, and why?

A basic blueprint

Learning professionals need to recognize that an assessment must occur during the planning stages, when budgets and resources are allocated. In other words, training cannot rely solely on current evaluation models, such as Kirkpatrick's four levels, that assess training's impact after it has been delivered. By then, it may be too late to calculate valuable results.

By shifting from historical data collection models to a predictive analysis model, managers will become more responsive to current and future learning needs, increase the impact of training by focusing on the most crucial initiatives, and improve training efficiency by selecting the most cost-effective blend of delivery options.

To convert qualitative evidence into quantitative measures at the planning stage, follow this simple and practical model.

Step 1. Define and prioritize the problem or opportunity and validate assumptions. Answer the question: Who initiated the request for training, and for what reason? Based on circumstantial evidence and constraints, managers can evaluate the validity of a request, estimate the impact learning will have on an organization's or business unit's goals, and confirm the need for further analysis.

By linking the requirement to the mission and goals, the weight for each request can be classified accordingly as critical (4), very important (3), important (2), or somewhat important (1).

The monetary benefit for resolving the problem or initiating an opportunity may be used in lieu of a weight value. In most cases, individuals who initiated the request for training can estimate the monetary value.
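For readers who like to see the arithmetic, here is a minimal sketch of this first step in Python. The four-point scale comes from the model itself; the request names, goals, and dollar figure are hypothetical.

# Step 1 sketch: weight each request by its link to the mission and goals.
# Scale from the model: critical=4, very important=3, important=2, somewhat important=1.
WEIGHTS = {"critical": 4, "very important": 3, "important": 2, "somewhat important": 1}

# Hypothetical requests; a monetary estimate may stand in for the weight when the
# person who initiated the request can supply one.
requests = [
    {"name": "Customer-retention training", "goal": "retain 90% of existing customers",
     "weight": WEIGHTS["critical"], "monetary_benefit": None},
    {"name": "Expense-report refresher", "goal": "reduce administrative overhead",
     "weight": WEIGHTS["somewhat important"], "monetary_benefit": 15_000},
]

for r in requests:
    print(f"{r['name']}: weight {r['weight']}, goal: {r['goal']}")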

Step 2. Assess the impact of tasks on the problem or opportunity. For instance, consider a scenario in which a company needs to retain 90 percent of its existing customers. The relative impact of each task can be easily computed by identifying how tasks performed by each group will affect the problem or opportunity.

The impact of tasks by account executives on the retention issue is classified as "critical" (4) because they are not identifying and addressing potential problems. However, the impact of tasks by customer service on this issue is classified as "somewhat important" (1) because they are not communicating key customer complaints to the account executives.

This value assignment implies that to resolve the issue, the communication between the account executive and customer service groups needs improvement. It also indicates that account executives have four times as much impact on retaining customers as customer service representatives.
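A small continuation of the sketch shows how those two ratings translate into each group's relative share of the impact. The scenario and the 4 and 1 ratings are the ones above; the code is only illustrative.

# Step 2 sketch: rate how each group's tasks affect the retention problem.
task_impact = {
    "account executives": 4,   # critical: not identifying and addressing potential problems
    "customer service": 1,     # somewhat important: not relaying key complaints upstream
}

total = sum(task_impact.values())
for group, score in task_impact.items():
    print(f"{group}: impact {score} ({score / total:.0%} of total)")
# Output shows account executives carrying four times the impact of customer service.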

Step 3. Confirm the need for training and assess impact. Some form of empirical evidence is required to validate that training will resolve the performance deficiency within a group. To validate the assumption that the organization or business unit needs training, managers can use an array of tools, such as document searches, surveys, and individual or group interviews.

It's important to note that a performance deficiency may be the result of any number of issues, including lack of clarity in described job functions, insufficient feedback, inadequate access to reliable resources, disincentives to perform effectively and efficiently, lack of requisite knowledge and skills, physical or mental incapacity, or low motivation. In many cases, the solution for a performance deficiency is not training.

If the organization requires multiple solutions, such as a traditional training program paired with a knowledge management tool that collects and distributes informal learning, then the relative impact of each solution can be computed by factoring the effectiveness of each solution.

For example, perhaps the account executives in our scenario have failed to identify and address clients' potential problems because they lack critical communication skills and need access to the latest CRM technology. The impact of training on the performance deficiency is classified as critical (4) because communication skills are essential for resolving the problem, while access to the latest CRM technology is classified as important (2) because it will be useful but not required.

This implies that both training and tools are needed to resolve the performance issues, but training is twice as effective.
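In the same illustrative style, the two solution ratings can be compared directly; the 4 and 2 come from the example above, and everything else is an assumption.

# Step 3 sketch: factor the expected effectiveness of each plausible solution.
solution_impact = {
    "communication-skills training": 4,  # critical to resolving the retention problem
    "latest CRM technology": 2,          # important: useful but not required
}

baseline = solution_impact["latest CRM technology"]
for solution, score in solution_impact.items():
    print(f"{solution}: impact {score} ({score / baseline:.1f}x the CRM tool)")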

Step 4. Assess the feasibility of implementation. Assess the effectiveness of solutions by examining available lines of funding, existing resources needed to implement and sustain the final solution, compatibility with existing systems, and organizational attitudes or perceptions regarding the proposed solutions. In other words, the more resistance there is to the performance solution, the less effective you can expect it to be.
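The model does not prescribe a formula for this step, but one way to sketch it is to discount each solution's impact by a 0-to-1 feasibility factor. Both the factor and the numbers below are assumptions for illustration only.

# Step 4 sketch: discount impact by an assumed 0-to-1 feasibility factor reflecting
# funding, resources, compatibility, and organizational attitudes.
solutions = {
    "communication-skills training": {"impact": 4, "feasibility": 0.9},
    "latest CRM technology": {"impact": 2, "feasibility": 0.6},  # more resistance expected
}

for name, s in solutions.items():
    print(f"{name}: feasibility-adjusted impact {s['impact'] * s['feasibility']:.1f}")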

Step 5. Forecast the costs of plausible solutions. For each solution, estimate the direct (out-of-pocket expense) and indirect (productivity loss) costs pertaining to design, development, administration, management, delivery, support, and maintenance over the life of the solution.
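A short sketch of the cost forecast might simply sum the direct and indirect estimates for each solution; the dollar figures below are hypothetical.

# Step 5 sketch: lifetime cost = direct (out-of-pocket) + indirect (productivity loss),
# covering design, development, administration, management, delivery, support, and maintenance.
costs = {
    "communication-skills training": {"direct": 40_000, "indirect": 25_000},
    "latest CRM technology": {"direct": 60_000, "indirect": 15_000},
}

for name, c in costs.items():
    print(f"{name}: estimated lifetime cost ${c['direct'] + c['indirect']:,}")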

Step 6. Prioritize recommendations and prepare a plan of action. Training managers can compute the cost-benefit ratio for each solution by simply dividing the impact (benefits) by the costs.

With this calculation in hand, it is easy to compile, sort, and compare the costs and benefits of training programs, as well as other performance solutions, and to allocate money and resources to the initiatives that will generate the greatest benefit at the least cost.
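Pulling the sketches together, the ranking step reduces to one line of arithmetic per solution. The ratio is the one described above; all figures are the hypothetical ones used earlier.

# Step 6 sketch: cost-benefit ratio = impact (benefit) / cost, then rank high to low.
candidates = [
    {"name": "communication-skills training", "impact": 4 * 0.9, "cost": 65_000},
    {"name": "latest CRM technology", "impact": 2 * 0.6, "cost": 75_000},
]

for c in candidates:
    c["ratio"] = c["impact"] / c["cost"]

for c in sorted(candidates, key=lambda c: c["ratio"], reverse=True):
    print(f"{c['name']}: impact per $10,000 spent = {c['ratio'] * 10_000:.2f}")
# Fund from the top of the list until the budget runs out.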

While a certain level of subjectivity is inherent in this predictive approach, it is nevertheless based on scientific principles commonly used in making various investment decisions.

In addition, this model helps manage expectations by providing clear and measurable performance-based outcomes that will help validate training's return on investment.

More importantly, clients and executives will become keenly aware of the value of training services.