Your staff has just been through a 360-degree feedback process, and you have been tasked with distributing the results to each individual. You have the report in-hand, but how can you
- approach the conversation
- ensure that the leader (the person being rated) is open to the scores and comments of her raters
- work with each leader to understand the messages within the scores
- help each leader to determine and recommend changes?
One generally desired outcome of a 360-degree feedback process is to improve the behavior of a leader within selected competencies, to move both the leader and the organization toward success. To do so, support during the report debriefing process is key.
A leader who reviews a report without the support of a coach may
- be overwhelmed by the sheer numbers in the report
- try to determine "who said what" and compromise the anonymity of the feedback
- focus only on the weaknesses brought out in the report
- not understand how to prioritize the issues when developing an action plan.
While coaches and internal human resource, organization development, or training staff members may have experience in this debriefing process, many managers are not equipped with this type of training. Although a feedback report may look simple and seem easy to understand, communicating the output of the report in a balanced and thorough way can be complex.
Regardless of the objective of the 360; the number of topics, competencies, or questions; the type of response scale used; or the labels of the different types of relationships (managers, direct reports, or peers), most 360 feedback processes have much in common. Reports may look different depending on the software used, but much of the output follows a common pattern.
Notes to the debriefer
When reviewing a leader's report, look for balanced feedback. Balance is an imperative - it is just as important to identify the things one does well as it is to point out areas for improvement. In some cases, leaders and debriefers focus on the lower scores, but this might not be in the participant's best interest.
For example, there may be an individual who is rated low in the "teamwork" category. If this person is not part of any teams, the low ratings may be of little concern, and he may want to concentrate on other competencies.
Assume that raters take their roles very seriously. Raters will generally share more constructive feedback if the output of the 360 process is not linked directly to compensation or succession planning.
If possible, it is best to send the report out to the leader roughly 24 hours prior to the session. Distributing the report too far ahead of the debrief meeting often gives the leader too much time to dwell on the report without being able to gain a more thorough understanding of the information.
Conversely, the leader should not receive the report at the start of the debriefing. The meeting will be more efficient if the leader has had a chance to review the report prior to the meeting. An interactive discussion is best.
A leader may say something along the lines of: "Someone was really upset with me that day, and I know he gave me bad scores." Since the scores are anonymous, neither you nor the leader knows this to be true.
Your response to this type of comment is to look at the relativity of the scores. Look at the four to six highest-scoring questions and ask the leader, "Do you believe this is where you excel?" Then look at the four to six lowest-scoring questions, and ask the leader if these scores make sense. If the leader agrees or buys in to the relativity of the scores, the conversation will go more smoothly, and the prioritization for action planning will be easier.
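If your 360 tool can export question-level averages, you can prepare this relativity check before the meeting. The minimal Python sketch below ranks a set of invented question scores; the labels, values, and cutoff are hypothetical stand-ins for whatever your report actually contains.

```python
# Invented question-level mean scores exported from a 360 tool
# (5-point scale, higher is better).
question_means = {
    "Listens to input from others": 4.6,
    "Treats others fairly": 4.5,
    "Communicates decisions clearly": 4.1,
    "Adjusts to changing work requirements": 3.9,
    "Provides timely feedback": 3.4,
    "Manages multiple projects effectively": 3.0,
}

# Rank questions from highest to lowest mean score.
ranked = sorted(question_means, key=question_means.get, reverse=True)

n = 3  # use four to six in a real report; this sample has only six questions
print("Highest-scoring questions:", ranked[:n])
print("Lowest-scoring questions:", ranked[-n:])
```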
The report introduction
The report introduction should be read before the leader starts reviewing the scores. The introduction sets the stage so that the leader can
- avoid trying to figure out who said what
- look for trends
- look for balanced feedback, both on the high-scoring and low-scoring ends
- use the information to determine what the next steps will be (after the debriefing).
Figure 1 provides an overview of the feedback, organized by competencies (sometimes called topics). These scores are roll-up averages of the individual question scores within each competency (for example, if there are three questions in "communication," the competency score is the average of those three question scores).
Remember that the report you are reviewing might look different than our example, but the overview data is generally very similar in most report outputs. In Figure 1, there is a 5-point response scale, and in this case, a higher score is a better score. In addition, this report includes an overview of the self scores, as well as manager, direct report, and peer scores.
The "total" column is the average of all rater scores (the "self" scores are not factored into this average). On a 5-point scale, generally, scores of 3.9 to 4.2 are good scores, scores of 4.3 to 4.5 are very good scores, and scores of 4.6 and above are outstanding scores. However, this is just a generalization. Once a 360 process has been completed, it will be easier to determine whether this generalization applies to your organization.
Review the number of people present in each category. If there are only one or two people in one of the relationship categories, be sure to keep that in mind when weighing their scores in your discussion. In going through this report, determine three or four areas to highlight, and ask the leader for his feedback on what he wants to highlight as well.
When reviewing the overview report, look at the self scores, and note how they compare to the scores from the other relationships (managers, direct reports, and others). This can provide insight into perception issues between raters and leaders. The larger the gap, the greater the perception issue.
In the example report, the leader (we'll call him David) gave himself scores ranging from 3.75 to 4.40 on the 5-point response scale. David's manager gave him scores ranging from 4.00 to 5.00, and his direct reports gave him a few scores under 4 and a few over 4, generally hovering around 4.00. Peers gave him scores slightly lower than 4. The "others" category had only one rater, so the scores are high but represent only roughly 8 percent of the total raters.
These scores indicate that David's direct reports think more highly of his overall competency skills than his peers do. The "integrity" scores are all high, which is great news - high "integrity" scores reflect respect and trust and give the leader the time and good will needed to work on other areas. The "total" column, which represents overall roll-ups, shows strong scores in all areas except "efficiency/productivity." Later in the report, therefore, David will need to look at that area closely.
Figure 2 shows the roll-up scores by competency. On a 5-point response scale, the percentage of raters who selected the bottom two scale options for the questions in the competency (in this case, "strongly disagree" and "disagree") are in red (unfavorable). The percentage of raters who selected the middle scale option (in this case, "neither agree nor disagree") are in yellow (neutral).
The percentage of raters who selected the top two scale options for the questions in the competency (in this case "agree" and "strongly agree") are in green (favorable). Essentially, red is bad, green is good. No self scores are included in this graph.
Mean scores, as seen in the overview report, are a good first indicator of performance. If someone gets a 4.8 (on a 5-point scale; 5 = high), it indicates very high scores, and if he receives a 2.1 (on a 5-point scale; 5 = high) it indicates very low scores. However, if a leader has a mean score of 3, without more information, it is impossible to decipher whether most raters gave the leader a score of around 3, or if about half of raters gave the leader a 1 and roughly half gave the leader a 5.
In both scenarios, the mean score would be approximately 3, but the message would be very different. This report provides some range information: if most raters gave the leader 3s, the yellow segment will be larger; if they gave mostly 1s and 5s, the red and green segments will be larger and the yellow segment smaller.
In prioritizing issues, look for percent favorables (greens) that are 80 percent and above, and then look for percent unfavorables (reds) that are 10 percent and above.
This graph is effective for visual learners. Instead of charts with a potentially overwhelming number of scores, this graph assembles the data into an easily understood format.
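Both points are easy to demonstrate with a little arithmetic. In the Python sketch below, built on invented ratings, two rating sets share a mean of 3.0 yet split into opposite red/yellow/green pictures, and the 80 percent/10 percent screens flag only one of them.

```python
def favorability(ratings):
    """Split 5-point ratings into unfavorable (1-2), neutral (3),
    and favorable (4-5) percentages."""
    n = len(ratings)
    red = 100 * sum(r <= 2 for r in ratings) / n
    yellow = 100 * sum(r == 3 for r in ratings) / n
    green = 100 * sum(r >= 4 for r in ratings) / n
    return red, yellow, green

# Two invented sets of ratings that share a mean score of 3.0.
clustered = [3, 3, 3, 3, 3, 3, 3, 3]  # every rater chose the midpoint
polarized = [1, 1, 1, 1, 5, 5, 5, 5]  # raters split between the extremes

for name, ratings in (("clustered", clustered), ("polarized", polarized)):
    red, yellow, green = favorability(ratings)
    flags = []
    if green >= 80:
        flags.append("clear strength")
    if red >= 10:
        flags.append("needs attention")
    mean_score = sum(ratings) / len(ratings)
    print(f"{name}: mean={mean_score:.1f}, red={red:.0f}%, "
          f"yellow={yellow:.0f}%, green={green:.0f}%, flags={flags}")
```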
As you and David are thinking forward to the action-planning phase, there is a strategy in determining what to do about the scores. Does David want to move his yellow scores up to green or does he want to move his red scores up to yellow? In other words, does he want to create an action plan that helps build on clearly defined strengths, or does he want to work on improving his weaknesses? There is no right answer, and scores must be assessed as they relate to the individual leader. The action plan the leader creates may combine some of both of these strategies.
In this report, David has great change, integrity, leadership, and management scores. Once again, we see that efficiency/productivity is an area that needs more understanding and attention.
Questions by competencies by relationship
The report in Figure 3 breaks down each competency or topic by its questions. A debriefer should approach this report with more emphasis on specific areas of each competency. Is the leader over- or under-rating himself? Is there consistency between the different rater groups? Are there any outliers that need attention?
This report is particularly useful in identifying specific questions that may have raised or lowered scores within a given competency. In the first example, David has consistently high scores for all four of the problem-solving and decision-making questions. His direct reports have given him the best scores in this category, and David and his supervisor seem to have consistent perceptions of his skills in this area.
In the second example, David has varying scores, with a high score on "adjusting to changing work requirements" and a low score regarding "managing multiple projects effectively." Questions 13 and 14 are some of David's lowest scores within the assessment and need attention.
Gap reports (Figure 4) measure the difference between the self scores and the combined rater scores. This gap report is sorted by the size of the gap (the self scores subtracted from the total raters' scores [others mean]).
In looking at the gap column, a positive gap reveals that the person under-valued himself, while a negative gap reveals where he over-rated himself. It is important to look for gaps of more than 1.0 (either positive or negative) and discuss why there might be a perception difference.
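If you want to verify the gap column by hand, the minimal sketch below (Python, with invented question names and scores) subtracts each self score from the combined rater mean and flags any gap larger than 1.0 in either direction, as described above.

```python
# Invented per-question scores: (question, self score, mean of all rater scores).
questions = [
    ("Adjusts to changing work requirements", 3.0, 4.4),
    ("Thinks outside the box",                3.0, 4.2),
    ("Manages multiple projects effectively", 4.0, 3.1),
]

for question, self_score, others_mean in questions:
    gap = others_mean - self_score  # positive: raters scored higher than self
    if abs(gap) > 1.0:
        direction = "under-rated himself" if gap > 0 else "over-rated himself"
        print(f"{question}: gap {gap:+.1f} ({direction}) - worth discussing")
```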
In the section of the gap report shown, David has undervalued himself on about five questions. In all these cases, David scored himself at a 3.0 (neither agree nor disagree). As a debriefer, you may ask David why he scored himself low on these questions, and whether he really sees his behavior so differently than his raters do.
Although this section (Figure 5) appears at the end of the report, do not feel that you must wait until the end of the discussion to review the comments. Some debriefers prefer reading them first with the leader so that they have some context for interpreting the rest of the scores.
At any point in this debriefing process, if the leader looks at a score and says "I have no idea what people are talking about; this makes no sense to me," look at the comments to see whether there is any qualitative support for the quantitative scores. Open-ended comments are helpful in understanding why the scores are what they are. Review the open ends wherever you think they will be most useful in your conversation.
We included two sets of comments to review. The first set of comments is for the problem-solving and decision-making competency. Here, David sees good examples of why he received high scores - he gets input from others, is open to suggestions, is fair, and can think outside the box.
The second set of comments is for the efficiency and productivity competency. Once again, there are examples of why David's scores were not high in this area. Raters mention difficulty in prioritizing and delegating, as well as some procrastination.
Following a debrief meeting, a leader typically thinks through the meaning of the feedback, prioritizes their thoughts, and assembles an action plan. The leader may use a formal action planning template or simply write down notes. An action plan usually includes three to five areas on which to focus in the coming months.
Some leaders wish to take a strength and make it stronger, while others want to take issues that are perceived as problem areas and improve on one or two of them. Ongoing discussions (monthly or quarterly) help keep the action plan relevant and on track. Additional follow-up 360s can help measure progress against action-planning goals.
Overall, David has solid scores, with one clear area of concern. In his action plan, he might want to consider the following:
- Work with his manager on his prioritization and delegation skills.
- Work with his staff on providing more timely feedback in a more formalized process.
- Continue to treat others fairly and effectively, asking for input when appropriate.
- Take on more cross-department or team leadership opportunities.