ATD Blog

Isolating the Effects of Your Program—An Offensive Play

Thursday, March 15, 2012

Some people will say that the step of isolating the effects of your programs when measuring and evaluating their results is a defensive move. The same can easily be said for evaluation at large. Anytime the ball is being run toward your goal, you are on defense, protecting what is yours. The key is taking the offense and addressing the tough questions before they are asked.

The Tough Questions

Mike Swan is the training manager at a large tire retail company. He piloted a new training initiative in five stores. The purpose of the training was to reduce customer wait time and increase the number of cars serviced per day. Upon completion of the pilot, data showed that customer wait time had gone down and the number of cars serviced per day had increased. Mike shared these data with his Chief Learning Officer (CLO) as well as the Chief Financial Officer (CFO), hoping to receive enough funding to implement the initiative in other stores. The CFO, impressed that there had been improvement in the two measures, asked:

“How much of that improvement is actually due to the program?”

Mike responded that he could not say with any level of certainty, but that he knew the improvement would not have occurred without the training. The CFO asked a second question:

“How do you know?”

When Mike could not answer, the CFO suggested that he find out before he received additional funding. Mike is now playing defense.


The Emotional Debate

Had Mike addressed the isolation issue during the evaluation and presented the positive results so that the answers to the tough questions were evident, he might have received funding on the spot. All the executives wanted to know was how much of the improvement was due to the program, which is a fair question.

Those who argue that you cannot or should not isolate the effects of a program are often uninformed or misinformed. Although isolation has long been part of the research process, this important step of measurement and evaluation was first brought to light in the training industry in the late 1970s, when Jack Phillips developed the ROI Methodology. It was later incorporated into the first Handbook of Training Evaluation and Measurement Methods, published in the U.S. by Gulf Publishing and authored by Jack Phillips (1983). The book, now going into its fourth edition, is used by training managers and academics worldwide. In spite of the wide application and acceptance of this important step by executives and researchers, the topic of isolating the effects of the program stirs up such emotion that one has to wonder whether there is a fear that the training does not make a contribution.

It is because of this debate and the need for more information that this topic is covered in the ASTD Handbook of Measuring and Evaluating Training. In this chapter, author Bruce Aaron, Ph.D., capability strategy manager for Accenture, describes the importance of isolating the effects of your programs through the evaluation process. He describes some of the approaches often used by organizations. As you read the chapter, you will find there are a variety of techniques available.
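To make the idea concrete, here is a minimal sketch of one of the most common isolation techniques, a control (comparison) group arrangement: the change in a business measure for stores that received the training is compared with the change in similar stores that did not, and only the difference is credited to the program. All store names and figures below are hypothetical, invented solely to illustrate the arithmetic.

```python
# Hypothetical figures for illustration only: average cars serviced per day,
# measured before and after the pilot period, for trained and untrained stores.
pilot_stores = {
    "Pilot 1": (40, 52),
    "Pilot 2": (38, 47),
    "Pilot 3": (45, 55),
    "Pilot 4": (42, 50),
    "Pilot 5": (39, 49),
}

comparison_stores = {
    "Comparison 1": (41, 44),
    "Comparison 2": (37, 39),
    "Comparison 3": (44, 47),
    "Comparison 4": (40, 43),
    "Comparison 5": (38, 40),
}


def average_change(stores):
    """Average change (after minus before) across a group of stores."""
    return sum(after - before for before, after in stores.values()) / len(stores)


pilot_change = average_change(pilot_stores)            # stores that got the training
comparison_change = average_change(comparison_stores)  # similar stores that did not

# Whatever improvement the comparison stores show came from other influences
# (seasonality, promotions, new equipment), so only the difference between the
# two changes is attributed to the training itself.
attributable_to_program = pilot_change - comparison_change

print(f"Improvement in pilot stores:       {pilot_change:.1f} cars/day")
print(f"Improvement in comparison stores:  {comparison_change:.1f} cars/day")
print(f"Attributable to the program:       {attributable_to_program:.1f} cars/day")
```

In this made-up example, the pilot stores gain about 9.8 cars per day but the comparison stores gain about 2.6 on their own, so roughly 7.2 cars per day can be credited to the training, which is exactly the kind of answer the CFO was asking for.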


The End of the Debate

Will this debate over isolating the effects of the program ever end? That's like asking whether the need for evaluation will ever end. Hopefully, the answer to both is no. Without debate there is no research; without research there is no grounding; and without grounding there is no sustainability.

Fortunately, more than ever, individuals responsible for training measurement and evaluation are taking the offense. They are pursuing good evaluation, including isolating the effects of their programs. They plan ahead and can answer the tough questions – before they are asked.

About the Author

Patti Phillips is president and CEO of the ROI Institute and is the ATD Certification Institute's 2015 CPLP Fellow. Since 1997, she has worked with organizations in more than 60 countries as they demonstrate the value of a variety of programs and projects. Patti serves on the board of the Center for Talent Reporting, as Distinguished Principal Research Fellow for The Conference Board, and as faculty at the UN System Staff College in Turin, Italy.

Patti has written and edited numerous books and articles on the topics of measurement, evaluation, and ROI. Recent publications include Measuring the Success of Leadership Development, Making Human Capital Analytics Work, Measuring the Success of Learning Through Technology, Measuring the Success of Organization Development, and Measuring Leadership Development: Quantify Your Program's Impact and ROI on Organizational Performance.
