The volume of knowledge that can be shared via informal learning methods is vast, but that doesn't mean evaluation is impossible.
Imagine for a moment that you are a sailor on an 18th century Man-of-War, the most powerful sea-going vessel in the British Empire's armada. The brisk ocean breeze is stretching canvas until it's as hard as clam shells, and 600 other sailors are racing around the ship performing their duties. This floating city is an amazing system of modern (at that time) technology, well-defined processes, and a strong command-and-control culture. It is a powerful military tool that is effective only when all of the jobs on board are done well.
You are the powder monkey. Two weeks ago, when you were kidnapped and pressed into service, you had no intention of joining the Royal Navy, nor did you know what a powder monkey does. Today, you are able to shuttle packages of black powder from the magazine to any of the 20 cannons on your deck in less than 30 seconds. You learn by following orders barked by a lieutenant. You watch others, model their behavior, and hope that you get it right because you have a strong desire not to be punished by flogging.
Flash forward to 1955 and imagine you are a seaman on a cruiser stationed out of San Diego. You are in a classroom with 29 other recruits listening to a fire control technician explain how to align the sights on a large caliber machine gun. Once at sea you will be responsible for aiming and firing this weapon to protect your cruiser from incoming enemy aircraft.
The three days of lecture are tedious, but by the fourth day, when you get to stand behind the gun, align it, and begin your test-firing drill, you are ready. You watch others practice and take note of how you might do it better. When you step behind it, settle in, and begin firing, you are shocked that the gun kicks harder than a mule. But you lean in with confidence. You know every inch of it, and you are ready to defend the ship. As you step away from the gun, the fire control technician gives you the highest praise he's given to anyone that morning: "Not bad. You missed the target, but you didn't kill anyone. Next!"
Flash forward again to a modern aircraft carrier, the USS Ronald Reagan. You are responsible for maintaining a safe perimeter around the ship using ship-to-air missiles against incoming enemy aircraft.
You are prepared for this role because you studied electrical engineering and software design at the U.S. Naval Academy. You have logged hundreds of hours on computer-based simulators. All the systems manuals are available on the naval intranet, and you are connected to a community of similar operators aboard other ships. Electronic job aids are at your fingertips and protocol checklists are available online in the event of an attack.
Different approaches to adult learning
Each of these stories reflects slightly different approaches to adult learning and how they support performance. In all three examples there is some level of formal instruction by an expert, some amount of coaching, copious practice, and external rewards and punishments. All are essential components for improving performance.
The difficult task for most learning professionals is determining the right mix of formal learning and informal learning. In their book, The Career Architect Development Planner, Michael M. Lombardo and Robert W. Eichinger describe the 70/20/10 approach wherein 70 percent of learning comes from on-the-job experiences, tasks, and problem solving; 20 percent comes from feedback and examples (good and bad); and 10 percent comes from courses and reading. In other words, 90 percent of learning is informal and only 10 percent is formal.
What is informal learning? According to Michael Hanley's blog, E-Learning Curve, formal, informal, and nonformal learning are defined as follows:
- Formal learning: learning objectives are set by the training department, which also provides the learning product. Formal learning often leads to certification.
- Informal learning: the learner sets the goals and objectives. Learning is not necessarily structured in terms of time and effort; it is often incidental and unlikely to lead to certification.
- Nonformal learning: someone in the organization who is not part of the learning department (for example, a line manager, supervisor, or a business leader) sets a learning objective or task. Learning does not lead to certification.
In practical terms, formal learning consists of all the products offered
by the corporate university that have learning objectives, including instructor-led classroom training, online facilitated courses, and self-paced web-based courses. Nonformal learning stems from communications from leaders about requirements by the organization to learn a topic, read a manual, or gain a skill.
Informal learning encompasses everything else as long as it is self-directed. This includes online searches, participation in communities of practice, use of job aids, requests for coaching and mentoring, book reading, blogging, and reading and writing wikis. KnowledgeJump offers a useful tool called the Periodic Table of Learning Elements (www.knowledgejump.com/agile/periodic.html), which provides insight into various components of formal and informal learning.
Structuring informal learning
Learning organizations and their business sponsors want to know whether formal, informal, and nonformal learning actually affects business goals. For formal and nonformal learning, it is fairly easy to measure learning products and determine effectiveness. Materials tend to be well-defined and finite, and measures can be set against objectives.
On the other hand, given the nature of informal learning with self-determined objectives, the vastness of the learning sources, and a learner's ability to surf from site to site or source to source, it is a Promethean task to organize the content into a meaningful structure, thus making measurement of the effect of learning impossible.
Yet, with one simple change in perspective, Prometheus can be unchained. Rather than thinking about the content of informal learning, it is more practical to think of the types of informal learning deployed. Most informal learning can be categorized into one of the following groups:
- communities of practice (online communities aligned to a topic, role, or function)
- virtual knowledge sharing (websites, knowledge portals, wikis, social networking sites, and blogs)
- performance support systems and job aids
- mentoring and coaching
- on-the-job experience.
In 2010, KnowledgeAdvisors conducted research on the use of informal learning within organizations. Results indicate that 9 percent of the learning and development (L&D) budget is spent on informal learning. The figure on page 50 shows how informal learning was supported within organizations.
Platform and timing
With a simple structure for the vast types of informal learning, measurement becomes possible--although still complex. Two factors must be considered and managed to effectively measure informal learning: the learning platform and timing.
For virtual knowledge sharing, the learning platform is the Internet or intranet. It is the physical point of interaction between the learner and the content. The interface is the same for electronic performance support systems and virtual communities of practice. Live, in-person communities of practice, such as local ASTD meetings, pose a challenge to measurement because the platform is outside of the computer desktop. The same is true for other person-to-person learning events such as coaching, mentoring, and on-the-job experience.
With regard to the timing of evaluation, the ideal moment to measure is at the point of need, right when the learning is occurring. In the desktop environment, measurement is feasible at that moment using brief surveys, typically micropolls or pop-up surveys. Timing is more challenging for person-to-person events. Evaluation tools can be deployed for community events, either by paper at the event or electronically after the event.
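A point-of-need micropoll only works if it does not become a nuisance, so most implementations throttle how often any one learner is asked. The sketch below illustrates that idea; the 30-day window, function name, and data structure are assumptions for illustration, not features of any particular survey product.

```python
# Hypothetical sketch: decide whether to show a point-of-need micropoll.
# The 30-day throttle window is an illustrative assumption.

THROTTLE_SECONDS = 30 * 24 * 3600  # ask each learner at most once per 30 days


def should_show_micropoll(last_polled: dict, user_id: str, now: float) -> bool:
    """Return True if this learner has not been polled within the throttle window."""
    last = last_polled.get(user_id)
    if last is not None and now - last < THROTTLE_SECONDS:
        return False
    last_polled[user_id] = now  # record that we are polling now
    return True
```

In practice the `last_polled` record would live in a database or cookie rather than an in-memory dictionary, but the throttling logic is the same.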
For coaching and mentoring programs, an evaluation can be sent on a regular basis, quarterly, or twice a year to check in on the process and its effectiveness. Evaluating person-to-person events is rarely aligned to a consistent moment of need, but those moments still can be measured in a timely manner. Similarly, on-the-job experience can be evaluated at set intervals, such as after the first 90 days or every six months. Table 1 on page 51 shows various types of informal learning as well as how and when they might be measured.
What should be measured?
For formal learning programs, there are several prominent evaluation approaches. The two most widely recognized and used are Kirkpatrick's 4 Levels of Evaluation and Phillips's ROI Methodology. Both assess attitudes about training, knowledge, and skill acquisition; application of learning; and performance improvement. The question for informal learning is: Should a similar approach be used? The answer is both yes and no.
Yes, because L&D stakeholders want to know whether informal learning provides knowledge and skills that will improve performance. No, because the various types of informal learning do not align well with either evaluation approach.
For example, if a learner logs into a community of practice or a knowledge portal to find a standard operating procedure document--a task that might take less than two minutes--it would not make sense to send an evaluation with more than 20 questions that inquires about knowledge acquisition, job application, and performance improvement. Instead, it makes more sense to ask two or three targeted questions: Did you find what you needed? If yes, was it easy to find? If no, what are you looking for? Such simple questions help assess whether the content is relevant for users.
In that example, the questions are delivered at the point of need via the web portal as a micropoll or a pop-up survey. For other types of informal learning, such as mentoring and coaching, more lengthy evaluation tools can be developed and deployed on a regular schedule because they do not interrupt the learning moment.
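The two-to-three-question micropoll described above is simply a short branching flow: the first answer routes the learner to the right follow-up. A minimal sketch of that routing, with question wording taken from the article and the function name as an illustrative assumption:

```python
from typing import Optional

# Hypothetical sketch of the branching micropoll described above.


def next_question(found_it: Optional[bool]) -> str:
    """Route the learner to the right question based on the first answer.

    None means the first question has not been answered yet.
    """
    if found_it is None:
        return "Did you find what you needed?"
    if found_it:
        return "Was it easy to find?"
    return "What are you looking for?"
```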
The survey should be brief and aligned with the needs of the informal event. For example:
- Do you have a coach or mentor? (Yes/No)
- To what extent are you gaining value for your career growth from the coaching or mentoring sessions? (Five-point Likert agreement scale)
- What are the most valuable aspects of the sessions? (Open-ended)
- What are the least valuable aspects? (Open-ended)
- How can the session be improved? (Open-ended)
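Once responses to a survey like the one above come in, the closed-ended items reduce to two simple statistics: the share of respondents who have a coach or mentor, and the average Likert rating among those who do. A sketch under assumed field names (the response format is hypothetical), with Likert answers coded 1 (strongly disagree) through 5 (strongly agree):

```python
from statistics import mean

# Illustrative sketch: summarize the closed-ended items of the
# coaching/mentoring survey. Field names are assumptions.


def summarize(responses: list) -> dict:
    """Each response: {'has_coach': bool, 'value_rating': int or None}."""
    with_coach = [r for r in responses if r["has_coach"]]
    return {
        # share of respondents who report having a coach or mentor
        "coverage": len(with_coach) / len(responses),
        # mean five-point Likert rating among those who do
        "avg_value": mean(r["value_rating"] for r in with_coach),
    }
```

The open-ended items, of course, still need to be read and themed by hand.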
Some programs have clearly defined end points. A coaching and mentoring initiative may last only one year or have a natural end point coinciding with the fiscal year. Those natural end points provide good opportunities to gather information from participants at the conclusion of the initiative.
In addition to survey measures, organizations also can use key person interviews, focus groups, or existing measurement tools. For web portals and communities of practice, it is particularly useful to gather web analytics that describe how long people remain on a site, which links are followed, how often they visit, and whether they contribute to the site.
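The web-analytics measures just mentioned--time on site, visit frequency, and whether learners contribute--can be rolled up per learner from ordinary portal logs. A minimal sketch, assuming a generic per-visit event record (the field names are illustrative, not from any specific analytics tool):

```python
from collections import defaultdict

# Illustrative sketch: per-learner portal metrics from visit-level logs.
# Each event: {'user': str, 'duration_s': float, 'contributed': bool}.


def portal_metrics(events: list) -> dict:
    visits = defaultdict(int)
    seconds = defaultdict(float)
    contributions = defaultdict(int)
    for e in events:
        visits[e["user"]] += 1
        seconds[e["user"]] += e["duration_s"]
        contributions[e["user"]] += int(e["contributed"])
    return {
        u: {
            "visits": visits[u],                              # visit frequency
            "avg_duration_s": seconds[u] / visits[u],         # time on site per visit
            "contribution_rate": contributions[u] / visits[u],  # share of visits with a contribution
        }
        for u in visits
    }
```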
A comprehensive, integrated approach
JetBlue University provides a telling case study about its approach to evaluating an assessment, measurement, and evaluation (AME) certification program. The four-month AME certification is a blended learning program consisting of formal components (four six- to eight-hour instructor-led workshops and up to eight one-hour webinars) and informal components (eight mentoring and peer learning sessions wherein work products are shared, reviewed, critiqued, and improved). Certification is designed to build evaluation capacity within the learning and operations groups throughout JetBlue. The end goal is to allow groups to evaluate their own programs and share results with leaders so data-based decisions can drive business performance.
The certification program was developed by the AME group within JetBlue University, which continues to deploy and support it. This core group of four measurement experts easily could have measured the outcomes of the program using their own knowledge and skills, but sought input from a supplier, KnowledgeAdvisors, to vet their plan and to provide data collection and analysis support.
To evaluate the first cohort of participants, an evaluation process was designed with multiple evaluation forms delivered at multiple points in time (see Table 2). Participants received an evaluation at the end of all four instructor-led sessions, after each webinar and mentoring session, and a final evaluation at the end of the four-month program.
This final evaluation gathered feedback about the mentoring and the performance support tool, as well as the entire certification process. Additionally, mentor surveys were sent each week, and participants completed surveys to evaluate the AME certification website (a SharePoint site) at the end of the program. In this way, the formal components were evaluated at the point of need and the informal components were evaluated throughout and at the conclusion of the program. The point of need evaluation feedback provided by the participants and the mentors allowed the team to make immediate adjustments to the program.
As a result of KnowledgeAdvisors' ability to integrate data from the various survey tools, the AME group was able to run reports that compared outcomes across sessions and learning modalities, simplifying the reporting process and saving the team time. Teri Schmidt, manager of the AME group, explains the value her team received. "Taking a program evaluation approach--combining the feedback received on both the formal and informal components--enabled us to look at the program as a system in order to make the best combination of impactful improvements," she says.
The volume of knowledge that can be shared via informal learning methods is so vast it is beyond practical codification. In such an overwhelming sea of information, how can organizations set a strategic course for measurement to determine the impact of informal learning?
The answer is by no means simple, but for now it depends on a few key factors: the learning platform (for example, websites and online communities of practice), accessibility of the audience, and timing. As with the JetBlue example, it is essential to set the scope of the project--such as a certification program or a sales curriculum--to limit the points of measurement. Yet, with an appropriate evaluation framework and capable tools, evaluation of informal learning is within our sights.