


Some Lessons from 360 Feedback

June 30, 2014

360-degree feedback (or multi-rater feedback) processes are very widely used in organisations: an estimated 90% of Fortune 500 organisations in the USA use them, for example. The strengths of 360 seem clear: it can heighten self-awareness by holding a mirror up to the individual. Self-awareness, as we know, is the cornerstone of personal development. It offers individuals the choice to change or not to change, and in particular ideas about the areas in which they need to change.

However, engagement of the individual is key to the success of 360, and in my experience its potential value is frequently lost as a result. Here are a number of reasons why:

• Questionnaires are too long, with a question for every single behavioural indicator in the competence framework. The consequence is that respondents put less effort into completion, become frustrated about the time it takes, and respond at speed without really considering what each item means. This can result in bland, middle-of-the-road feedback. Better to choose items carefully so that they are representative of the competencies in the framework. It should be possible to complete a 360 in about 10 minutes.

• Questions are too generic or poorly phrased, and the rating scale has not been well thought through. Again, all of these can annoy respondents. Generic behaviours are difficult to apply to a specific role, and the respondent can struggle to translate them into something concrete they can relate to. Poorly defined rating scales can likewise produce middle-of-the-road feedback, with no differentiation between effective and less effective performance. The net effect is that the recipient receives vanilla feedback that is of little use to them. Better to write questions that have a clear relevance to the role and to use a well-defined rating scale that allows you to genuinely differentiate between levels of performance – even in a group of high-performing individuals.

• Reports are too long and detailed. The length of a report depends on the number of question items as well as the level of detail reported (I've seen reports running to a daunting 40 pages!). Reports that provide feedback against every single question can appear very useful, but the law of diminishing returns applies: the more items that are reported individually, the less valuable and less engaging the feedback becomes. From experience, recipients of such reports can become overly hung up on the detail, which is unhelpful, particularly where items are badly worded. The problem is exacerbated when the feedback shows the frequency of scores for every single item. In my experience, this has one of two effects. Either the recipient rationalises the feedback – "well, only one person said that, and I think I know who that might be, so I'll ignore it" – which, whilst useful data about their approach to receiving feedback and diagnostic in itself, means the recipient can disengage. Or the recipient becomes overly concerned about the detail – "I need to know who said that" – focussing more on the specifics of what one person said than engaging with the overall message the feedback provides.

If you want to get the most out of a 360 process, therefore, focus on using fewer questions of higher quality (i.e. expressed in concrete terms that are relevant to the role), use a well-defined rating scale, and don't over-engineer the feedback report – just because the data can be sliced in a particular way does not mean the report is better. Sometimes, less is more!



  