360-degree assessments collect multisource feedback on a particular employee. They are typically used as part of the performance appraisal process, especially when appraising the subjective aspects of an employee's performance. In most cases, the feedback is used to coach the assessed employee to improve their performance.
There is broad agreement that poorly designed 360-degree assessments can lead to poor performance; it is precisely this poor design, producing no impact or a negative one, that has eroded the credibility of the method. This article looks at the impact of poorly designed 360-degree assessments and how to correct them.
A key consideration in 360-degree assessments is how many people should provide feedback for the assessment to be credible. Some argue that the answer should depend on the goal of the particular assessment. Besides the person being assessed, feedback is typically sourced from the immediate manager and work colleagues; depending on the purpose of the assessment, it may also include external stakeholders such as customers and suppliers.
Greguras and Robie (1998) proposed that to achieve acceptable levels of reliability (0.70 or higher) in 360-degree feedback projects, at least four supervisors, eight peers, and nine direct reports should be involved. Given the structures predominant in most organizations, assembling respondents in those quantities is rarely feasible.
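Recommendations of this kind typically rest on the Spearman-Brown prophecy formula, which predicts how the reliability of an averaged rating grows with the number of raters. The sketch below is an illustrative assumption of mine, not the authors' own calculation; it inverts the formula to estimate how many raters of a given single-rater reliability are needed to reach the 0.70 target, using the within-group figures quoted later in this article (0.30 for subordinates, 0.37 for peers).

```python
import math


def spearman_brown(single_rater_r: float, k: int) -> float:
    """Predicted reliability of the average of k comparable raters."""
    return k * single_rater_r / (1 + (k - 1) * single_rater_r)


def raters_needed(single_rater_r: float, target: float = 0.70) -> int:
    """Smallest number of raters whose averaged rating reaches the target reliability."""
    # Inverting Spearman-Brown for k: k = target * (1 - r) / (r * (1 - target))
    k = target * (1 - single_rater_r) / (single_rater_r * (1 - target))
    return math.ceil(k)


# Illustration only, using reliabilities reported elsewhere in this article:
print(raters_needed(0.30))  # 6 subordinates needed to reach 0.70
print(raters_needed(0.37))  # 4 peers needed to reach 0.70
```

The lower the agreement within a rater group, the more raters must be averaged, which is why the quoted rater counts are so demanding.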
Some researchers now argue that in 360-degree assessments, agreement between rater groups should not be the goal; instead, the diversity of opinions that different raters bring enriches the quality of the assessment and feedback. Interestingly, agreement between self-ratings and ratings from other categories is modest, at around 0.3 to 0.6, whereas higher agreement is observed between peer ratings and supervisor ratings. Credible research findings show that the supervisor is the most reliable source of performance ratings. Even more important, supervisors' ratings correlate strongly with other important job performance outcomes, such as promotions. Other researchers have noted that supervisors' ratings are based more on actual job-related performance, while direct reports assessing their manager tend to focus on personal issues. Peers have also been found to provide more accurate ratings than direct subordinates when assessing supervisors.
Another contentious issue in 360-degree assessments is whether there is reliability within a particular category of raters, for example, among peers assessing the same individual. The results are mixed: reliability is about 0.30 among subordinates and 0.37 among peers. Other researchers have been working diligently to improve the effectiveness of 360-degree assessments. In one such approach, they developed frame-of-reference scales to deal with the ineffectiveness of the common rating scales used in 360-degree assessments. In a frame-of-reference scale, the competency being assessed is defined clearly, followed by examples of ineffective and effective behaviour under that competency. Studies comparing this approach with traditional rating scales found a significant improvement in the quality of ratings.
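Within-group figures such as the 0.30 and 0.37 above are estimates of how much raters in the same category agree with one another. A minimal sketch of one common way to compute such agreement, the mean pairwise correlation between raters, is shown below; the ratings matrix is hypothetical, invented purely to demonstrate the computation.

```python
import numpy as np


def mean_interrater_correlation(ratings: np.ndarray) -> float:
    """Mean pairwise Pearson correlation among raters.

    `ratings` has one row per rater and one column per competency rated.
    """
    corr = np.corrcoef(ratings)                    # rater-by-rater correlations
    pairs = corr[np.triu_indices_from(corr, k=1)]  # distinct rater pairs only
    return float(pairs.mean())


# Hypothetical example: three peers rating one colleague on four competencies
# using a 5-point scale.
peer_ratings = np.array([
    [3, 4, 2, 5],
    [3, 4, 2, 5],
    [4, 5, 3, 5],
])
print(round(mean_interrater_correlation(peer_ratings), 2))  # prints 0.96
```

A value near 1 means the peers largely agree; values in the 0.3 to 0.4 range, as the research above reports, mean any single peer's rating is a noisy signal on its own.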
On response scales, there is no consensus on whether to use, for example, a 5-point or a 7-point rating scale. Survey findings show that the most popular rating scale in 360-degree assessments is the 5-point scale, followed by the 7-point scale. In practice, response scales with fewer than 3 or more than 7 points are unreliable for 360-degree assessments.
One study shows that reliability was highest for response scales with 5 to 7 points. Another found that test-retest reliability was lowest for scales with 2 to 4 response options and highest for those with 7 to 10.
One less explored area in 360-degree assessments is how to present the results in a way that enhances their acceptability and drives behavior change. A study that looked into this area had interesting findings: individuals respond less positively to narrative feedback than to numeric feedback, and they prefer numeric feedback to qualitative narrative comments. The same study shows that people prefer normative comparisons over results broken down by category of respondent.
Another less explored area is the implication of combining open-ended questions with the quantitative part of a 360-degree assessment. A study exploring this showed that managers who received a small number of unfavorable comments improved more than other managers, while those who received a large number of unfavorable comments declined in performance more than other managers.
Cultural differences are another interesting area. Some studies show significant discrepancies between self-ratings and ratings by other groups in cultures high in individualism and power distance. One study shows that raters in cultures high in collectivism and power distance tend to rate their managers higher in 360-degree assessments.
Beyond the recommendations above, train users of 360-degree assessments on how to rate behavior. Raters are often asked to rate individuals without understanding how the ratings relate to the assessed competencies.
This article gives direction to anyone conducting 360-degree assessments. The research findings shared above should enable you to design a credible 360-degree assessment tool for your organization.
Memory Nguwi is an Occupational Psychologist, Data Scientist, Speaker, and Managing Consultant at Industrial Psychology Consultants (Pvt) Ltd, a management and human resources consulting firm. Email: email@example.com or visit our websites https://www.thehumancapitalhub.com/ and www.ipcconsultants.com