Your organization or accrediting body may require you to collect evaluations from learners. It is important to recognize the value of these documents and their relevance within a larger process. Evaluation is not just a form or tool; it is a process. That process derives from a model, such as those designed by educators Donald Kirkpatrick, Roberta Straessle Abruzzese, and Daniel L. Stufflebeam. To gain meaning from any evaluation tool, you must view it in the context of the larger model.
Most evaluation models are hierarchical (Figure 1), and most evaluation tools focus on the process level. For clarity in this discussion, I will refer to this process evaluation tool as a participant feedback tool. It captures satisfaction with the delivery and learning environment, the learner's self-assessment of whether the objectives were achieved, and impressions of the effectiveness of the faculty and teaching strategies.
This tool comprises only a small component of the evaluation process. Learners see the educational activity solely from their own perspective; they cannot fully appreciate the intentions of the planning team, nor do they have the faculty's expertise for evaluating the learning. Relying solely upon participant feedback is therefore a limited approach.
Content evaluation often relies upon concepts such as the pretest and posttest. Administering a multiple-choice or similar test immediately after the learning has taken place does not measure the integration of learning into new behaviors. It identifies whether content was remembered, but not necessarily whether it was learned well enough to apply to a real situation. Many educators treat the process evaluation as a precursor to the content evaluation: learners must first complete the participant feedback tool before they can complete the posttest.
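To make the pre/post comparison concrete, here is a minimal sketch in Python of how posttest scores might be compared against pretest scores. The learner identifiers, the scores, and the use of a normalized-gain calculation are illustrative assumptions, not part of any particular evaluation tool.

```python
# Minimal sketch of scoring a pre/post content evaluation.
# Learner IDs and scores are hypothetical. The normalized gain,
# g = (post - pre) / (100 - pre), expresses improvement relative
# to the room a learner had to improve.

pretest = {"learner_a": 55, "learner_b": 70, "learner_c": 90}   # percent correct
posttest = {"learner_a": 80, "learner_b": 85, "learner_c": 95}

for learner, pre in pretest.items():
    post = posttest[learner]
    gain = (post - pre) / (100 - pre) if pre < 100 else 0.0
    print(f"{learner}: pre={pre}%, post={post}%, normalized gain={gain:.2f}")
```

Even a clean gain score like this captures only immediate recall, which is exactly the limitation described above.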
Outcome evaluation is used less frequently than content evaluation because it relies upon the first two levels being satisfied before this third level is engaged. Outcomes are the results achieved because of the learning activity. While a direct cause-and-effect relationship is difficult to measure, the intent of teaching is to change behaviors. In my field of healthcare, for instance, we aim to help learners adopt practices that improve patient outcomes. Therefore, it is fair to look at the outcomes of education as improved patient care measures.
Impact evaluation represents improvement at the organizational level. It is fair to say that not all education can be linked to a change in an organization's performance, but it is unreasonable to insist that everyone must pass a test before a change can be made for the organization.
What about flipping the hierarchical nature of evaluation models into a new structure?
Each element of the evaluation process is important and should be identified during the planning process for any educational activity. Learning activities are based on learning needs or gaps, from which the learning outcomes or objectives are developed. Therefore, the learning gap also drives the selection of elements for the activity evaluation. If the learning need is based on patient care or organizational data, then you can plan from the outset for outcome and impact evaluation to demonstrate the difference between performance at the time the need was identified and performance after the learning activity. Figure 2 shows an evaluation process broken into separate elements.
As each item is considered separately, the right tools, strategies, or measures can be planned while you design the activity (see the sketch after this list):
- Learners can provide valuable feedback through the participant feedback tool, yet many faculty may value this feedback more for their self-esteem than for their self-improvement. This element alone cannot tell you whether a learning gap was satisfied.
- Content evaluation can be achieved during the learning activity. Interactive exercises, audience response systems, and the types of questions and clarifications raised during the session all contribute to content evaluation.
- Outcome evaluation can come from surveys of learners, three months after the activity for example, about the changes they have implemented in practice. Learners can identify what has changed, as well as the perceived effect of those changes on the learning gap.
- As introduced earlier, impact evaluation can draw on occurrence, retention, and performance data from within the organization. For example, a learning activity intended to give staff more detail about required elements of documentation can result in improved financial performance.
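As a way to picture the planning step described above, here is a minimal sketch in Python of a record that pairs each evaluation element with a planned measure. The class name, fields, and example measures are hypothetical illustrations drawn from the list above, not a prescribed template.

```python
from dataclasses import dataclass

@dataclass
class EvaluationPlan:
    """Hypothetical record pairing each evaluation element with a planned measure."""
    activity: str
    process: str   # participant feedback tool
    content: str   # in-session checks, posttest, and similar measures
    outcome: str   # follow-up evidence of changed practice
    impact: str    # organizational data

plan = EvaluationPlan(
    activity="Documentation requirements update",
    process="Participant feedback form at the close of the session",
    content="Audience response questions during the session",
    outcome="Three-month survey on documentation practice changes",
    impact="Quarterly financial and compliance reports",
)
print(plan)
```

Filling in a record like this while designing the activity forces the team to decide, element by element, what evidence will count before the program is ever delivered.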
Separating the elements into distinct components can help us understand each element independently. Figure 3 suggests another way to organize for an efficient, effective, and comprehensive activity evaluation. The Institute of Medicine offered many observations regarding the status of continuing education for healthcare professionals in its 2009 report. One area of scrutiny was the lack of valid evaluative measures to demonstrate that the effort of delivering healthcare education was contributing to improvements in patient care.
To apply this revised model, the planning team should consider the value of the process, content, and outcome evaluations in demonstrating that the learning gap was satisfied and the education achieved its intent.
Figure 4 includes horizontal lines to show the crosscutting of some outcome component with the process and content measures for each activity. Because some of this evaluation is formative, it can occur during the activity. Thus, faculty must provide evaluative feedback based on their interactions with the learners, balanced with the education and content expertise of those who planned the activity and with the learners' impressions.
Each educational activity is unique and should be planned independently. The most effective process for evaluation should be identified before the program is delivered, so that the appropriate strategies can be employed by faculty and planners to elicit the feedback that will demonstrate a return on this educational investment in our healthcare professionals.
Institute of Medicine. (2009). Redesigning continuing education in the health professions. Washington, DC: National Academies Press.
ASTD Field Editor Pamela B. Edwards is director of education services for Duke University Health System in Durham, North Carolina; email@example.com.
© 2012 ASTD, Alexandria, VA. All rights reserved.