Ruth Colvin Clark is a specialist in instructional design and technical training with a focus on bridging academic research and practitioner application in instructional methods. She holds a doctorate in the field and is president of her own company, Clark Training & Consulting. Her most recent book, Evidence-Based Training Methods: A Guide for Training Professionals, summarizes the most current evidence we have about critical decisions faced by training professionals every day.
Whether you're a classroom instructor, developer of training materials, training manager, or designer or developer of any form of e-learning, you'll find that your training will be vastly more effective when you base your methods on evidence. Clark answered some common questions about the topic of evidence-based training and her work on the subject.
Q| What is evidence-based training (EBT)?
Have you ever had a client tell you that they needed training, what the training should cover, how it should be delivered, and how long it should last? Because everyone's been to school, everyone considers themselves an expert in learning and instruction. In short, the expertise we bring as training professionals gets little acknowledgement. Of course, sometimes our clients are spot on. But often their requests are driven by factors that have little to do with learning.
One hallmark of a professional is the consideration of best evidence when working with clients to solve their problems. And there is a substantive body of valid research evidence about some very basic questions that workforce learning professionals face every day. For example, "Is an investment in graphics going to improve learning?" or, "My client tells me that there is no time for practice exercises. How can I convince them otherwise?" or, "Is learning better or worse in e-learning than in a face-to-face classroom?"
The goal of evidence-based training is to help move training practitioners toward a professional level of practice by incorporating research-based evidence as one of the many factors to consider as we design, develop, and facilitate training.
Q| Why is EBT important right now?
Because training is expensive - according to the recent ASTD State of the Industry Report, a roughly $130 billion annual investment - it's important to invest those resources in instructional methods that work and to avoid methods that are either unproven, or in fact, disproven.
Let me give you an example. A lot of time and money is invested in learning styles every year. But what evidence is there for learning styles? Learning styles is one of the four training myths I tackle in the first chapter of Evidence-Based Training Methods: A Guide for Training Professionals. If nothing else, I'm hoping this book will help curtail the amount of investment wasted each year on learning styles.
Q| What training roles can benefit most from evidence available today?
There is quite a bit of evidence about how best to use visuals, audio, and text - three basic ingredients of all training programs whether delivered in the classroom or online. So whether you work with classroom-based materials, e-learning, or some blend thereof, and whether you are a facilitator, a materials developer, a subject matter expert, or an instructional designer, you can profit from research evidence that informs the decisions you make every day. Decisions about what to put on your slides, how to design e-learning screens, when to use audio, and what kind of practice would be best for a specific learning goal.
Q| What examples do you have of popular training practices negated by evidence?
I already mentioned learning styles, so let me offer a couple of different examples. Common sense would suggest that a good way to explain how something works is through an animation. For example, if you wanted to teach how a braking system worked or how a toilet flushes, you might naturally turn to some kind of animated presentation.
In fact, however, we have recent evidence that when teaching how things work, a series of still visuals can be as good as or better than an animation for learning. The reason is that animations present visual information transiently, which can easily overload working memory and depress learning, whereas a series of still visuals can be reviewed and revisited at the learner's preferred pace. So here we have one example of a counterintuitive instructional method - a guideline about the use of animation that we probably would not consider were it not for research evidence.
Another example has to do with scheduling practice exercises. We know that in skill-building training, practice is an essential component. Suppose, however, that you have planned eight practice questions for each lesson. Does it matter whether you assign all of the practice in one place in your lesson - say, at the end - or disperse it throughout your lesson, or even among several lessons? In other words, is cramming as good as a more spread out approach to practice?
Research tells us that for immediate learning (say, a test given right after the lesson), both approaches work fine. That's why cramming can get us through the test. But what about a week later? Experiments that measured both immediate and delayed learning show consistently that a more spread out schedule of practice results in better long-term retention.
How many trainers get the opportunity to measure skills a week or a month after the training? Not many. Without this research, we would never know the benefits of spaced practice.
Q| What is "good" evidence? How much evidence is available?
There are several different types of evidence that can be useful to workforce learning practitioners. For example, survey research, such as that used in the ASTD State of the Industry Report, helps us learn what our colleagues are spending, how they are delivering their training, and what tools they are using, to name a few.
Case study research is also useful for the details and context it gives us about specific instructional programs: lessons learned. However, in this book I have relied on experimental evidence as the basis for knowing which specific instructional methods lead to different learning outcomes.
Take a simple question: Do graphics improve learning? We could find out from a survey who is using graphics and what kinds of graphics they are using. Or we could find out from a case study how graphics were used in a particular course and what kinds of results were obtained. However, to really answer the question, "Do graphics contribute to learning?" we need an experiment.
In a good experiment, a number of learners are randomly assigned to one of two versions of a lesson - one with text alone and another with the same text plus relevant graphics. Scores from a valid test are then compared between those who studied with and without graphics. Statistical tests tell us whether any difference in the average scores is likely due to chance alone, and effect-size measures tell us the magnitude of any difference we see.
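The logic of such an experiment can be sketched in a few lines of code. The simulation below is purely illustrative - the group sizes, score distributions, and size of the graphics effect are all invented assumptions, not results from any actual study. It randomly generates test scores for two groups, computes a pooled two-sample t statistic to judge whether the mean difference could plausibly be chance, and computes Cohen's d as a measure of magnitude.

```python
import random
import statistics as st

random.seed(7)  # fixed seed so the illustration is reproducible

# Hypothetical test scores (0-100 scale) for two randomly assigned groups
# of 100 simulated learners each; the means and spread are invented.
text_only = [random.gauss(70, 10) for _ in range(100)]      # text-alone lesson
text_graphics = [random.gauss(78, 10) for _ in range(100)]  # text + relevant graphics

# Pooled two-sample t statistic: is the mean difference likely chance alone?
na, nb = len(text_only), len(text_graphics)
pooled_var = ((na - 1) * st.variance(text_only)
              + (nb - 1) * st.variance(text_graphics)) / (na + nb - 2)
pooled_sd = pooled_var ** 0.5

diff = st.mean(text_graphics) - st.mean(text_only)
t = diff / (pooled_sd * (1 / na + 1 / nb) ** 0.5)
d = diff / pooled_sd  # Cohen's d: magnitude of the difference in sd units

print(f"mean difference = {diff:.1f} points, t = {t:.2f}, d = {d:.2f}")
```

A large t (roughly beyond 2 for groups this size) suggests the difference is unlikely to be chance, while d expresses how big the difference is in practical terms - the two questions the interview distinguishes.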
In the last 20 years, we've had a great deal of useful published evidence based on experiments. However, these appear in a variety of technical journals not generally read by practitioners. So 2010 is the perfect time to summarize and illustrate this evidence for training practitioners.
Q| What are the limits of current evidence on training methods?
There is never one perfect experiment. In any experiment, the subjects are typically of a specific age, such as 4th graders or college students. Also, the experimental lesson is often relatively brief - anywhere from a few minutes to an hour. And of course, the lesson teaches a specific topic, such as algebra problems or negotiation skills. Additionally, conclusions from experimental evidence are usually based on immediate rather than delayed results.
All of these factors will limit the generalizability of any one research study. However, if we accumulate quite a few experiments on a specific question, such as the value of adding graphics to text, we can begin to define the conditions under which a method such as graphics will work best. For example, for what types of topics, what kinds of learners, and what type of visuals will we get improvements in learning?
So for some instructional methods, such as graphics, we actually have quite a bit of evidence to shape our decisions. In other cases, we have only a few studies, and you would need to carefully examine them to see how closely the conditions match your instructional conditions. I would guess that a 10-minute lesson designed for 5th graders may have limited applicability to most workforce learning practitioners.
Q| What training practices or techniques have useful evidence to guide our decisions?
As I mentioned, we have quite a bit of research on how best to use visuals, audio, and text to promote learning. We have also learned a great deal about some powerful instructional methods, such as examples and practice. I mentioned previously the evidence showing the value of spread out practice compared to practice all in one time or place. Finally, we have accumulating evidence on how different training strategies, such as inductive or instructive approaches, can best be used in workforce learning. In fact, in my book, I discuss the what, when, and why of three main learning architectures that I call "show and tell," "stair step," and "immersive."
Q| How can I learn more about EBT?
I think my book is a great place to start. If you want more depth, I recommend my other books, including Building Expertise and e-Learning & the Science of Instruction. All of these books include an additional resources section at the end of each chapter as well as citations of the research on which the recommendations are based.
Q| How can I convince my clients to consider guidelines based on evidence?
I find that clients are usually impressed when I can show them data supporting a particular recommendation. Like I said, most everyone has been to school for years and therefore considers themselves an expert in learning. However, when they learn that there is evidence regarding how best to shape learning environments, many are pleasantly surprised. Of course, some clients are resistant.
My colleague Chopeta Lyons gave me a good response to a recalcitrant client: "Yes, I can do that [responding to a request that does not reflect best practice]; however, I would be remiss if I did not tell you [summary of your recommendations and reasons]." Naturally, not all clients will buy in. But I figure, my job is to make my best recommendation, explain the reasons, and summarize the evidence. In the end, it's up to our clients to decide how they want to invest their resources.
Q| How will EBT change my approach to training?
I think that most practitioners will reconsider how they are using visuals, text, and audio, whether in a classroom or e-learning environment. I hope that many will revisit their conceptions of learning styles and redirect any resources invested in learning styles toward practices that are more evidence-based. Also, practitioners need to reconsider the goals of their instruction and ensure that their training architectures and methods are in alignment with those goals.
Q| What were the biggest challenges in writing the book?
There are hundreds of research studies, and these are scattered among diverse research journals and presented at technical conferences. Most practitioners don't have time or resources to access a body of research, review it, and attempt to synthesize it. My challenge was to try to home in on the research that is most relevant to the kinds of decisions made by workforce learning practitioners and present it in a readable and actionable format. Hopefully, the book marks a milestone in moving training practitioners toward a professional level of practice. T+D