This article is part of a series on the management book Humanocracy. The core of Humanocracy consists of seven principles intended to provide orientation for those who design organizations or act within them. This article deals with the first of these: the principle of meritocracy.
The core idea
Hamel & Zanini locate one of the main problems of bureaucracy in the fact that hierarchical rank and status, rather than actual performance, determine individual compensation and rewards. The principle of meritocracy is meant to ensure that performance pays off. For the authors, the main lever for replacing a bureaucratic status orientation with a meritocratic performance orientation is the elimination of assessment bias.
Too often, performance evaluations are distorted by individual assessment styles, in-group bias, or the halo effect [1], which makes it difficult to compare performance. Instead of leaving performance appraisals to individual managers, evaluations should therefore rest on a broader basis. The authors' assumption is that the more data points are available, the more reliable the performance assessment will be. As an example, they cite Bridgewater Associates, a hedge fund that uses an elaborate system of reciprocal evaluation in which every interaction with a colleague can be rated on the spot via an app.
Our considerations
Many organizations wrestle with the question of how individual differences in performance can be reflected in a remuneration system and how extra commitment can be rewarded. The question is thus of real relevance. Nevertheless, as much as one may sympathize with the concern, there are grounds for skepticism. Meritocracy is a value that commands assent rather than an idea that can be implemented – even outside of organizations [2]. Sociological research does confirm that evaluations, especially when made transparent, can have disciplinary effects [3].
However, it is doubtful whether evaluations adequately reflect performance. On the one hand, there remains the basic problem that evaluations are subjectively colored and subject to bias. Where people interact regularly, individual assessments reflect less the situation at hand than the quality of the relationship as a whole. And where encounters are one-off, their assessments all but disappear in the mass of data points.
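The statistical intuition behind both positions can be made explicit. The following minimal simulation is not from the book; its parameters (a "true performance" value, a shared relational bias, unit-variance rater noise) are illustrative assumptions. It shows that averaging more ratings does wash out independent rater noise, as the authors' assumption requires – but it leaves any distortion shared by the raters, such as the quality of the relationship, fully intact:

```python
import random

random.seed(1)

TRUE_PERFORMANCE = 0.0  # the signal the ratings are supposed to capture
RELATIONAL_BIAS = 0.5   # a distortion shared by all raters, e.g. relationship quality

def average_rating(n_raters: int, shared_bias: float) -> float:
    """Average n ratings, each = true performance + shared bias + individual noise."""
    ratings = [
        TRUE_PERFORMANCE + shared_bias + random.gauss(0.0, 1.0)
        for _ in range(n_raters)
    ]
    return sum(ratings) / n_raters

for n in (5, 50, 500):
    print(
        f"n={n:3d}  "
        f"independent noise only: {average_rating(n, 0.0):+.2f}  "
        f"with shared bias: {average_rating(n, RELATIONAL_BIAS):+.2f}"
    )
```

As the number of raters grows, the first average converges to the true performance, while the second converges to performance plus bias: more data points make the assessment more precise, not more accurate.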
On the other hand, conditions of permanent evaluation promote precisely the kind of conformism that the proponents of humanocracy want to overcome. Innovative ideas and thinking outside the box or against the grain may benefit an organization, but it is doubtful whether colleagues and superiors will see it that way when one person's good idea only underlines how bad the ideas of others are. In this respect, a system of social scoring and reciprocal performance evaluation becomes yet another bargaining chip in the micropolitical game of optimizing individual career opportunities – and, in itself, seems peculiarly bureaucratic.
How it could work
The attempt to measure or evaluate performance objectively is a trap. Once committed to this goal, one fantasy after another emerges about how objectivity and comparability could be established, given sufficient and sufficiently differentiated data. Sophisticated competence models are then created, multidimensional assessment scales devised, and regular 360° feedback rounds conducted; in short, bureaucracy strikes back. Even if these tools do not establish comparability, they do justify comparisons (and unequal treatment) and thus fulfill an important function.
Establishing objective comparability is probably not even what matters; it is more important that performance assessments be plausible. It helps if organizational structures are designed so that performance can be observed in the first place. Yet the observation of performance reaches its limits where added value arises precisely from the interaction of several people and exceeds the sum of their individual contributions.
In most organizations, such constellations are likely to be the rule rather than the exception, and simply multiplying the occasions for evaluation does not resolve this. In other words, objective comparability is unattainable. Nevertheless, subjective performance evaluations also fulfill a function – at the very least as a management tool for superiors. This may not fit the idea of meritocracy, but it fits the reality of organizations rather well.
Literature
[1] Individual assessment styles refer to the fact that some people are generally more lenient, others stricter. In-group bias refers to the phenomenon that we divide the world into 'us versus them' dichotomies – and regularly rate members of the out-group less favorably. The halo effect describes our tendency to overweight first impressions in our assessments.
[2] Itschert, Adrian (2013): Jenseits des Leistungsprinzips. Soziale Ungleichheit in der funktional differenzierten Gesellschaft. Bielefeld: transcript.
[3] See Sauder, Michael / Espeland, Wendy N. (2009): The Discipline of Rankings. Tight Coupling and Organizational Change. American Sociological Review 74, 63–82. See also Foucault, Michel (2016): Überwachen und Strafen. Die Geburt des Gefängnisses. Frankfurt am Main: Suhrkamp. Espeland and Sauder also show, however, that gaming effects are to be expected where performance indicators do not map well onto the performance behavior they are meant to capture. On this, see Espeland, Wendy N. / Sauder, Michael (2007): Rankings and Reactivity. How Public Measures Recreate Social Worlds. American Journal of Sociology 113, 1–40.