Ten years ago, just 9 percent of courses were delivered via technology-based tools. Today, more than three times as much instruction relies on technology. In its 2008 State of the Industry report, ASTD described the situation this way: "Although traditional classroom instruction still occupies a prominent space, learning professionals are turning to technology to help streamline operations and deliver learning at less cost and with greater reach."
These new approaches to training and development raise numerous questions: Are employees learning? More to the point, are they producing? What gets in the way? Are the programs becoming more effective as designers and participants grow more experienced with them?
Inquiring minds want, and need, to know.
Appreciation without action
Workplace learning professionals across the globe can recite the Donald Kirkpatrick mantra: Level 1 is reaction, Level 2 is learning, Level 3 is performance at work, and Level 4 is results.
Familiarity, alas, does not guarantee action. An ASTD Benchmarking Forum study found that while 94 percent of courses are examined for Level 1, only 34 percent are checked for participant knowledge (Level 2). The two field-based levels, 3 and 4, fare far worse. Just 13 percent of courses are examined for Level 3, and only 3 percent for Level 4.
Kirkpatrick's approach endeavored to measure a training industry that was all classroom instruction, just about all the time. But what about today, when learning moves into the workplace, and programs are dotted with wikis, blogs, social nets, and on-demand resources? It is risky to assume that all is well when so much is new.
Most customers roll their eyes at abstract levels of evaluation. They are not all that interested in reactions to training or even how much their people learned in class.
But there are other matters that grip them. They want reports on impact. Many are concerned about transfer to the workplace and are eager for supervisors to advance key messages. Some line leaders who see the value of on-demand resources want to know if they are delivering the right ones. Let's shift the conversation to the burning questions - theirs and yours.
What are those burning questions? Table 1 frames the conversation about metrics to focus on a dozen key purposes while advancing four goals:
1. Examining what matters to line leaders and their organizations. Familiar reporting basics are covered in Table 1. Participation, outcomes, and compliance are singled out. But executives are not all alike; their priorities and circumstances vary - one might be keen on reactions, while another might favor outcomes or engagement. The list in Table 1 aids training and development professionals as they engage with customers to find out what is top-of-mind and then customize metrics to those priorities.
2. Capturing what matters to learning leaders. Learning professionals have their concerns, too. Chris Moore, CEO of Zeroed-In Technologies, said this: "Knowing what to measure is the Holy Grail for learning managers."
When you look at Table 1, certain priorities will "pop." Given a specific project and its competing tasks, at a specific time, and after a conversation with the customer, on what will you focus? For example, metrics for a frequently revised performance appraisal program might lead you to numbers 3, 4, 9, and 10. Efforts to create a new contracting curriculum for insurance support people might bring 1, 3, 5, 10, and 12 into focus. Questions about an online support system for auditors might direct your attention to numbers 2, 4, 7, 9, and 11. A community college professor, meanwhile, seeks insight into the success of her new class on graphic design; she attends to numbers 2, 4, and 6.
3. Using the data in three ways. When you boil it down, learning professionals use data from Table 1 in three ways: to plan, report, and improve. Let's focus on the community college professor. Imagine that she, to provide data about her students' learning (number 6), asks them to critique 10 graphical treatments using the criteria taught in class. The professor then scores the class on their ratings and rationales, comparing their efforts to those of a panel of experts (a simple sketch of this comparison appears after this list). That data has implications for planning a short prerequisite offering for the class so that students will enter with shared skills and knowledge. She also reviews their performance, makes improvements to the class for the next time she offers it, and reports results on the exercise to each student.
4. Matching the metrics to the changes. Technology is changing everything. As training and development action shifts to the workplace, employees have more responsibilities. More is expected of their supervisors, too. Web 2.0 encourages user-generated content. The purposes listed previously have been adjusted to reflect these promises and to ask whether reality is living up to them.
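Returning to the community college professor from point 3: to make her comparison concrete, here is a minimal sketch in Python of how one might score each student's critiques against the expert panel's consensus. The names, ratings, and the 1-to-5 scale are illustrative assumptions, not data from her class.

# Hypothetical sketch: score each student's ratings of 10 graphical
# treatments against the expert panel's consensus ratings.
expert_ratings = [4, 2, 5, 3, 1, 4, 5, 2, 3, 4]  # panel consensus, one per treatment

student_ratings = {
    "Student A": [4, 2, 4, 3, 2, 4, 5, 2, 3, 5],
    "Student B": [5, 1, 5, 2, 1, 3, 4, 2, 4, 4],
}

def average_gap(student, expert):
    # Average absolute difference between student and expert ratings;
    # 0.0 means perfect agreement, larger numbers mean bigger gaps.
    return sum(abs(s - e) for s, e in zip(student, expert)) / len(expert)

for name, ratings in student_ratings.items():
    print(f"{name}: average gap from the expert panel = {average_gap(ratings, expert_ratings):.2f}")

A low average gap suggests a student is applying the course criteria much as the experts do; a high gap flags where the rationale, not just the rating, deserves a closer look.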
Allow me to introduce Ashwini, director of workplace learning and development for a global retail firm. In response to harsh economic times, her organization has reduced classroom instruction and increased offerings via technology. They commenced by revising the onboarding process. For the past seven months, the program has been delivered through video and podcasts, with a website and online community devoted to providing answers to questions as they arise.
Ashwini had a great deal to talk about when colleagues at an international conference asked what she had been up to. What she lacked, however, was any clue about what difference the new program was making and how it could be enhanced. How might she use the ideas in Table 1 to focus her metrics? Ashwini worked with the executive vice president of human resources to settle on numbers 1, 3, 4, 9, and 10, as shown in Table 1, and in six months, they will pursue numbers 11 and 12.
Because of a brutal deadline, Ashwini and her team rushed the production of the program's media elements. She began to hear occasional grumblings about the new program and realized the need to backtrack and revisit her planning. She acknowledged that her team had failed to query stakeholders about their priorities and about what distinguished the star performers from the less successful. For example, what must new people know by heart? What may they reach for as needs arise? Will managers play their parts in the new process?
No one asked Ashwini for a report, but she could see the value in gathering targeted data about use and satisfaction:
- whether users can find what they need on the website
- whether employees are turning to the rich media elements, online community, and website
- whether new people are getting the support that they require.
Methods and purpose
Once clear about the purpose of analysis, turn to finding an efficient way to capture what you need. For example, if it is alignment that concerns you (yes, number 4), a survey, a series of interviews, or a focus group would work to solicit opinions about what is, or might be, getting in the way. If number 6 is your interest, you would test what participants know and can do through exercises and cases. Imagine, as is often the case, that strategic outcomes, number 7, are top of mind. Errors, cycle time, callbacks, customer satisfaction, sales, and completion rates might do the job, depending on the goals. Table 2 is a summary of methods matched to purposes.
Let's listen in on a phone message from Mitch, a learning executive for a financial services company: "For the past two years, we have been working to move our training people toward analysis and tailored solutions, not just training programs. We offered development for our people, provided books, articles, an online community rich with resources, and incentives. What I want to know is how they're doing and how to help them more." After some email exchanges, Mitch and Allison had a telephone conversation that went something like this:
Allison: Thanks for rating the items from Table 1. I looked at those you marked as top priority, somewhat of a priority, or not a priority, and I think I know how to help.
Mitch: Allison, I wanted everything on that list, but I know we only have a few weeks and a limited budget to get into it all.
Allison: What I hear is that you are most interested in planning an even better system for your people, right? And that you want to get a fix on outcomes, too, in the opinion of line leaders.
Mitch: That's it. I want to know how we are getting in our own way, if we are getting in our own way, and how to deliver resources that match the needs of our workforce. Some are veterans. Most are new to this. They wouldn't require the same training and support. And I want to look into results. I'm curious. Others might be too.
Allison: What about learning? Number 6?
Mitch: I would love to know if they are learning anything, sure, but that is not critical. My focus is their performance. I want them to deliver valuable services to the line. And we gave them so much on that website. Do they use the resources? Oh, and are they positive about their new roles in the shift from training to performance?
Note how Table 1 has influenced Mitch. Look at how it sets up the conversation. Mitch reviews the possibilities and then states his preferences. Sure, he wants it all, but he eventually settles on numbers 1, 2, 3, 4, 7, and 12. He chooses from the options.
Now, it is time to move from purposes to methods. Review Table 2. Look for ways to address several burning questions through a single collection method. The goal is to collect evidence so that Mitch can report on and improve his program. Table 3 applies these ideas to Mitch's requirement.
No flopping around in the dark
A dozen years ago, a manager opened a door to reveal a room full of grey file cabinets chock-full of postcourse evaluations. This organization, like many others, habitually gathered opinions from participants about satisfaction with instructors and courses, even coffee and cookies. They then placed the forms in a secure room, with no effort to use the data to plan new programs or to improve old ones. Occasionally, they reported on reactions and participation, but those reports more typically went to the leaders of the learning unit or the line, not to instructors or designers.
We can do better. We must do better.
As learning spills into the world of work, there are pressing questions and concerns. Fueled by abundant enthusiasm but limited by scant experience, professionals must rely on metrics to plan, report, and improve. If we skip measurement, or do as we have always done, the blame will go to the methods. Doubt will be cast on blends, e-learning, blogs, Web 2.0, and performance support tools, and many will hunker down with the familiar.
But do these emergent forms deserve the blame? Maybe so. Maybe not. We won't know (or be able to argue for another or a better direction) if we continue to discount metrics. T+D
References
Brinkerhoff, R.O. Telling Training's Story. San Francisco: Berrett-Koehler, 2006.
Kirkpatrick, D.L. Evaluating training programs: Evidence vs. proof. In Evaluating Training Programs: The Four Levels. San Francisco: Berrett-Koehler, 1994.
Moore, C. Selling the C-suite on results. Chief Learning Officer, 8(2), 18-23, 2009.
Phillips, J.J. Return on Investment in Training and Performance Improvement, 2nd Edition. Boston: Butterworth-Heinemann, 2003.
Rossett, A. First Things Fast: A Handbook for Performance Analysis, 2nd Edition. San Francisco: Pfeiffer.