A study sought to determine which training method was most effective for behavior change—interactivity or the chance to fail.

Booz Allen Hamilton wanted to determine whether the interactive cyber awareness e-learning program and simulated phishing attacks we provide to our employees and those of our clients would prove more effective than more traditional training methods. To that end, we performed a blind study to measure two influences on learners' on-the-job behavior: the level of interactivity in an e-learning course designed to change behavior, and the opportunity for learners to experience simulated failure in the work environment.

Our team of instructional designers had long treated the level of interactivity both as a driver of development cost (highly interactive e-learning costs more to develop than e-learning with limited interaction) and as a predictor of learning effectiveness (highly interactive e-learning is assumed to transfer to the job better than noninteractive e-learning).

We also knew that simulated experiences that allow users to fail are important learning opportunities. We were unsure, however, how much impact these two different design approaches—highly interactive e-learning and opportunities to fail—would have on the target audience’s on-the-job behavior.

Our study revealed that interactive elements were not nearly as important as we had presumed, and probably less important than the context of the training. To establish a meaningful context, we determined that we needed to leverage five key factors that support the learner's openness to learning.

Case study overview

Over approximately nine months, Booz Allen conducted a blind study with almost 500 participants to determine the relative effectiveness of pre-incident awareness training alone compared with a combination of testing and targeted remedial training. We compared the behavior results from the groups exposed to the following instructional approaches:

  • nonrelevant training or placebo training (control group)
  • relevant e-learning with static pages of content typical of many traditional mandatory e-learning offerings (page-turner group)
  • relevant and highly interactive e-learning (interactive group)
  • testing with a simulated experience combined with remedial training for only those learners who fail (failure-triggered training group).

At the start of the study, all participants received a basic phishing awareness bulletin and were separated into three groups: control, page-turner, and interactive. A fourth group, the failure-triggered training group, was formed after the first test event.

Two months into the study, the participants completed the training assigned to their group. Four months into the study, all groups were evaluated by their responses to an unannounced, simulated phishing attack (Test 1, a Kirkpatrick Level 3 evaluation).

Figure 1 shows what we expected as a result of the training approaches. We hypothesized that about 50 percent of the control group would respond incorrectly to the simulated phishing attack test by clicking on a potentially dangerous link in a suspicious email. We also predicted that the page-turner and interactive groups would have significantly fewer incorrect responses than the control group (that is, fewer people would click on dangerous links).

To our surprise, we found no significant difference among the control, page-turner, and interactive groups in their responses to the unannounced simulated phishing attack test (see Figure 2). In spite of good Level 1 and Level 2 evaluation results for the page-turner and interactive training approaches, the on-the-job Level 3 evaluation indicated that the pre-incident training had no significant impact on actual behavior.

We then analyzed the failure-triggered training group, which comprised participants who had incorrectly clicked a suspicious link in Test 1. Only the participants who failed the test were notified that they had been caught by a simulated phishing attack and were directed to complete remedial training. The remedial training was the same for all groups: the interactive training, which the results of Test 1 had shown to be the least effective pre-incident approach.

A few months after Test 1 (six months into the study), all participants received a second unannounced simulated phishing attack email (Test 2). And nine months into the study, all participants received a third and final unannounced simulated phishing attack email (Test 3).

Figure 3 shows the results from all three tests. There was a significant difference (p < 0.05) in the number of incorrect responses in Tests 2 and 3 as compared with Test 1. Test 3 had an average failure rate of only 1.4 percent compared with an average failure rate of 44 percent in Test 1. Note that all participants who failed a test, including those in the control group, received the interactive failure-triggered training materials.
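The reported gap between the Test 1 and Test 3 failure rates can be sanity-checked with a standard two-proportion z-test. The group sizes below are hypothetical (the study reports rates, not exact counts per test), so this is a sketch of the calculation, not a reproduction of the study's actual analysis.

```python
import math

def two_proportion_z(fail1, n1, fail2, n2):
    """Two-sided z-test for the difference between two failure proportions."""
    p1, p2 = fail1 / n1, fail2 / n2
    pooled = (fail1 + fail2) / (n1 + n2)          # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))    # two-sided normal tail area
    return z, p_value

# Hypothetical counts: roughly 450 participants tested, with 44 percent
# failing Test 1 and 1.4 percent failing Test 3, as the article reports.
z, p = two_proportion_z(fail1=198, n1=450, fail2=6, n2=450)
print(f"z = {z:.1f}, p = {p:.2e}")   # well past the p < 0.05 threshold
```

With counts anywhere near these assumed sizes, the drop from 44 percent to 1.4 percent is far too large to attribute to chance, which is consistent with the significance the study reports.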

Booz Allen’s study makes a case for the superiority of sustained, unannounced simulated experiences combined with short, targeted remedial training in achieving desired behavior changes. During the nine-month study, behavior change did occur with the failure-triggered training, but not with traditional pre-incident training alone.

Implications for instructional design

We were surprised that the level of interactivity in the training materials had no measurable impact on the transfer of learning to on-the-job performance. Additionally, we underestimated the importance of failure-triggered training for creating points of realization.

Our study on the effectiveness of different training delivery methods yielded an unexpected indication: it’s not just a question of combining the right content with interactive approaches, but rather a clear understanding of the end-user’s needs and environment (context) that determines the best solution. Failing a simulated experience in their normal job environment created potential learning instances for the participants that we call “points of realization.”

As described in the sidebar below, it may take multiple factors to break through the cognitive noise of a typical work day. That breakthrough point can be referred to as learning at the point of realization.

As instructional designers, we no longer can assume that we can pull people away from work tasks for large chunks of training. Instead, we should be supporting employees with opportunities to learn when they realize that they have a need—even if we must provide some simulation to get them to that point.


Learning at the Point of Realization

As training professionals, we recognize that learners must be receptive to learning before new knowledge can change their behavior. Based on the results of this study, we identified five specific factors that support an openness to learning—or, in other words, create the context for learning.

  1. It has to be relevant to the learners—or really interesting. They will ignore it otherwise.
  2. Learners have to realize that they have a knowledge gap. If they think they already know the content, they won’t pay attention.
  3. Learners need it immediately. If it is relevant but not needed immediately, learners probably won’t attend to it. They will delay changing their behavior on the job.
  4. The content needs to be engaging. It should be unexpected or intriguing—it needs a “hook” to bring learners in.
  5. It has to fill learners’ knowledge gaps in a concrete fashion, clearly enabling them to take action—even if it is only the first step.
[Figure 1: Expected responses to the simulated phishing attack, by group]
[Figure 2: Actual Test 1 responses, by group]
[Figure 3: Results from all three simulated phishing attack tests]