Financial Accountability & Management, 28(2), May 2012, 0267-4424

Why Does Poor Performance Get So Much Attention in Public Policy? ÅGE JOHNSEN∗

Abstract: This article explores why information on poor performance often gets most of the attention in public policy. In order to illustrate the discussion, the paper analyses the case of educational policy for secondary schools in Norway, and in particular the policy of participating in the OECD Programme for International Student Assessment (PISA), which measures educational outcomes for 15-year-old pupils in reading, mathematics and science. Governments, researchers, interest groups and the media await the regular release of the PISA results every third year with great interest, and participate in the strategy of 'naming and blaming' based on the relative national performances. The practice of identifying poor performance and the subsequent public discourses have become an institution. Despite the negativity bias, the strategic use of information associated with these processes may have positive impacts on decision making, policy innovation and democratic accountability.

Keywords: administrative man, negativity bias, performance management, political man, low performance

INTRODUCTION

Focus on outcomes and performance management has been a central element in public sector reforms in many OECD countries since the late 1970s (Hood, 1995). The public sector has utilised 'modern' performance measurement for nearly 100 years (Williams, 2003), but we still do not know a great deal about the way performance information is used in public policy in modern democracies (de Lancer Julnes, 2006; van Dooren and van de Walle, 2008; and Pollitt, 2006).

∗ The author is Professor of Public Policy at Oslo and Akershus University College. This paper is based on his Northern Scholars Lecture, 5 November 2008, at The University of Edinburgh. He is grateful to the Northern Scholars Committee, to Professor Irvine Lapsley and the lecture attendees for the opportunity to present and discuss the early ideas; and to Ruth Dixon, Sotiria Grek, Asle Rolland, Kjetil Ulvik and two anonymous referees for their constructive comments on earlier drafts. An earlier version of this paper was also presented at the Conference of the European Group of Public Administration (EGPA) Study Group on Performance in the Public Sector, St Julian's, Malta, 2–5 September 2009. Address for correspondence: Åge Johnsen, Professor of Public Policy, Faculty of Social Sciences, Oslo and Akershus University College, P O Box 4 St. Olavs plass, NO-0130 Oslo, Norway. e-mail: [email protected]


Nonetheless, there lies great potential for decision makers to use performance information to assess and enhance efficiency, effectiveness and equity, as well as to provide feedback to stakeholders in the political system about accountability and the need for policy innovation (Behn, 2003; Jackson, 1993; and Mayston, 1985). Performance information, for instance in the form of balanced scorecards or target regimes, is often assumed to be used for implementing formal strategies based on objectives. Often the alleged organisational vision is to be the leading organisation in some respect or another, and the objective may be to implement best practices. In order for organisations to learn, monitoring and benchmarking are important (Kaplan and Norton, 1996; and March, 1991). What we often notice, however, is the preoccupation with poor performance, not best practices, in public policy.

Many researchers have recognised this preoccupation with poor performance, and the academic literature has addressed the issue with different concepts such as blame avoidance (Weaver, 1986), naming and shaming (Pawson, 2002), and negativity bias and asymmetric responses to good and bad economic information (Soroka, 2006). James and John (2007) found a negativity bias in how voters acted upon performance information about local public services. Incumbents in local authorities with poor performance got a reduced aggregate vote share in the election after the publication of the performance information, but there was no similar increase in aggregate vote share for incumbents in local authorities with high performance. Boyne et al. (2010) found that information on low service performance affected senior management team turnover and, under certain conditions of simultaneous change in political control, also top executive succession in local governments. This may indicate that performance information matters, but that information on poor performance matters most.

Performance information may matter, but the jury is out regarding the net effect of performance management and the publication of performance information for management and democracy. Hood (2007) explored agency, policy and presentational blame-avoidance strategies which may produce nil effects, side-effects or reverse effects when the doctrine of transparency, for instance regarding performance information, meets blame avoidance. This and other research (Ridgway, 1956; Smith, 1995; and de Bruijn, 2002) has documented many dysfunctional effects of performance measurement. There may be a negativity bias in the research on performance management as well, and the research should therefore also comprehend positive and functional effects. Soroka concluded his study by stating that:

well-functioning representative democracies likely require a certain degree of problem identification. And individuals may quite reasonably feel that punishing for errors is more critical to good governance than is rewarding for not-errors. A negativity bias may thus be an important feature of political systems. Certainly, we should give further consideration to the potentially negative and positive functions of asymmetric responsiveness in representative democracies (Soroka, 2006, p. 383).

This paper explores why information on poor performance often gets most of the attention in public policy. An underlying assumption for this analysis is that even though the phenomenon of negativity bias, or preoccupation with low performance, is widespread, one cannot state that this is beneficial for society at large. On the other hand, an analysis of the 'winners and losers' of this phenomenon may give some indications of why it seems to be prevalent and how it could have a positive impact on public policy and management.

In order to illustrate the discussion, this paper analyses the case of educational policy for secondary schools in Norway under the centre–right coalition government Bondevik II (2001–2005) and the centre–left coalition government Stoltenberg II (2005–). The policy of measuring educational outcomes, participating in international measurements such as the Organisation for Economic Co-operation and Development (OECD) Programme for International Student Assessment (PISA), ranking schools and publishing the results were (and to some degree still are) contested issues in the government's educational policy. PISA is an international comparative survey of the school systems in different countries, initiated by the OECD. PISA measures 15-year-olds' competencies in reading, mathematics and scientific literacy. In order to study the cumulative yield of education, the assessment takes place every three years, and each time all the above educational domains are included. The first cycle of PISA was carried out in 2000.

Governments, researchers, interest groups and the media await the regular release of the PISA results every third year with great interest. For example, see Grek (2008) for an account of the UK media's reception and interpretation of the PISA 2006 results. Grek (2009) studied the impact of the first completed cycle of PISA (2000, 2003 and 2006) on educational policy internationally and conducted case studies of England, Finland and Germany. PISA data were used for justifying changes or supporting existing policies, but there was a bigger impact in Germany and Finland than in the UK. The UK already had long experience with different public sector measurement and testing regimes and had relatively good PISA results. In Finland, which got high rankings, there was a 'PISA-surprise', but the Finnish media paid relatively little attention. In Germany there was a 'PISA-shock' and much media attention after the PISA data from 2000 showed that Germany ranked twentieth in reading, mathematics and science among 32 countries. Dixon et al. (2010) systematically analysed domestic press responses to international educational rankings, including PISA, in England, France and Germany and found indications of negativity bias. It is therefore interesting to study the practice of performance management in established democracies using the case of the publication of PISA data and educational policy, as the research findings indicate that poor performance got most, or at least much, attention.

The remainder of the paper is structured as follows. The next section explains performance information in public policy and management. The subsequent section describes educational policy in Norway since the 1980s and the publication of the PISA results in the Norwegian polity. The fourth section is a discussion of the Norwegian case in which different models of man – economic, administrative and political man – are used to provide economic, administrative and political explanations of the focus on poor performance in public policy and management. The final section concludes the discussion.

PERFORMANCE INFORMATION IN PUBLIC POLICY AND MANAGEMENT

Implementing performance measurement in complex organisations typically takes many years, and the process may be conceptualised as a product life cycle (van Helden et al., 2007). There are stages of design, implementation and use, and during the different stages there are positive and negative impacts on the organisation and its environments. Hence, during any of the stages there may be evaluations of the system, which may stop the implementation, alter the use or initiate a redesign. The implementation of performance management systems could be framed as an investment project. Typically, early in the project the costs are higher than the benefits, and this relation is expected to change during the later stages in order for the system to create a positive net present value for society at large. Some of the costs may stem from direct investment in the system's redesign, implementation and use, while other costs may stem from unintended, dysfunctional impacts of the system. Carson et al. (2000) documented that many organisations in fact abort the implementation of performance management systems, thus realising investment and political costs but failing to benefit from potential gains that typically materialise later in the systems' life cycle. PISA was implemented and taken into use despite resistance and critique and has now been in use for more than 10 years. As such, PISA is an interesting case for illustrating how performance information is used in practice in public management, and what impact this information has on public policy.
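The life-cycle argument can be made concrete with a stylised net present value calculation. The sketch below is a minimal illustration, not an estimate for any real system: the yearly net benefits and the discount rate are hypothetical numbers chosen only to show early costs followed by later gains.

```python
# Stylised net present value (NPV) of a performance management system's
# life cycle. All figures are hypothetical: net benefits are negative during
# the design and implementation stages and turn positive only during use.
net_benefits = [-5.0, -3.0, -1.0, 1.5, 3.0, 3.5, 4.0]  # per year, arbitrary units
discount_rate = 0.05

npv_full = sum(b / (1 + discount_rate) ** t for t, b in enumerate(net_benefits))
print(f"NPV over the full life cycle: {npv_full:.2f}")

# Aborting after the implementation stage (cf. Carson et al., 2000) realises
# the early costs but forgoes the later gains.
npv_aborted = sum(b / (1 + discount_rate) ** t
                  for t, b in enumerate(net_benefits[:3]))
print(f"NPV if aborted after year 3: {npv_aborted:.2f}")
```

With these illustrative numbers the full life cycle yields a small positive net present value, while the aborted project ends with a clear loss, which is the pattern the life-cycle argument describes.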

Potential Positive Impacts of Performance Information

Performance management is often used for control and learning purposes and could have positive impacts on policy implementation and policy making. However, the publication of performance indicators (PIs), and the explicit or implicit use of benchmarking and rankings, may be perceived as threatening by some stakeholders such as professions or unions. On the one hand, these management tools may be compatible with professional norms and interests – one might presume that even professionals want to reduce uncertainty and address ambiguity. Performance information can be used to put issues on the agenda and to facilitate decision making (Askim, 2007), which could improve performance and foster innovation (Johnsen, 2005). On the other hand, the professionals may claim that they want to discuss contentious issues within their professions and with the government directly. PIs may then be seen as an intrusion on professional integrity. Therefore, even though performance information could have substantial positive impacts on public policy, it could also increase the level of conflict in the polity.

Public information about poor performance may give low performing organisations incentives to adapt in a decentralised way, a trait that many – and in particular liberal – governments could favour (Pongratz, 2006). PIs in political markets may in this respect function similarly to how prices carry information in product markets (Johnsen, 2005): they signal, for example, where there are problems of scarce resources and hence 'investment' opportunities. Furthermore, agents may adjust their behaviour when exposed to the information even without being told to do so by some central body. Hence, the performance information (and naming and shaming) may cause agents to adjust their behaviour through self-control, a central element in performance management models using management by objectives, which reduces the need for central planning. However, the potential self-control effect does not depend on the formulation of objectives, only on the agents' (public) exposure to performance information and the opportunity for (self-)restoration (Pawson, 2002).

Potential Negative Impacts of Performance Information

After a performance measurement system has been put into use, claims of costly measures, data with low reliability and validity, and critical reports of perverse behaviour (de Bruijn, 2002) and dysfunctional effects (Smith, 1995) will emerge. For example, performance management may require increased resources for measurement and reporting. Unless the benefits outweigh the costs, these resources have better alternative uses elsewhere. It could therefore be rational for some stakeholders to resist or postpone performance management reforms. In other cases, however, resisting measurement and reporting, and thus avoiding transparency, may be dysfunctional and block the potential benefits that often accrue later in performance management programmes.

Dynamics of Performance Information

A common phenomenon in relation to the PISA results and their reportage was:

an initial critique of the statistics themselves and a questioning of their validity, but then an apparent acceptance of the data and appropriate policy responses to the situation as defined by these data (Soroka, 2009, p. 29).

On the one hand, the loudest voices will naturally come from stakeholders and their lobbyists who perceive themselves as losers from policy changes. On the other hand, the performance information may be used deliberately by other actors in public disclosure in order to enhance performance through the strategy of naming, blaming and shaming (Pawson, 2002). Agencies, politicians and information specialists may subsequently pursue certain strategies that may or may not improve performance or legitimacy (Hood, 2007). Unless measures are taken, negative processes may spiral into even worse performance because workers, managers, politicians and the public become disillusioned.

PIs tend over time to lose their capacity to differentiate. Organisations alter their behaviour in order to perform – or at least to make it appear as if they perform (Meyer and Rowan, 1977) – as well as the best in the class or according to the average (Llewellyn and Northcott, 2005). As a result, the PI no longer shows variation between organisations. This is the 'running down' of performance measures (Meyer and Gupta, 1994). The solution is to replace these indicators with new measures that again introduce variability. For example, PISA measures the same three performance dimensions – reading, mathematics and science literacy – every time, but the main focus alternates between the three dimensions. We shall now turn to the Norwegian case in order to explore these matters.

EDUCATIONAL POLICY IN NORWAY

Educational policy in Norway traditionally fitted the consensual and corporatist decision-making tradition, dominated by the Ministry of Education and by one or two teachers' unions (Bergesen, 2006). The centrally regulated unitary school system, almost exclusively provided by the public sector, was perceived as comprehensive and as providing an equal education of high quality for all pupils regardless of social background and place of residence in the country. Since the 1980s, however, this dominant perception has been challenged, and educational policy and the measurement of schools' performance became contested issues.

In the late 1980s the OECD criticised Norwegian educational authorities for lacking evidence on pupils' and schools' performance and for losing control after major decentralisation reforms in the 1970s and 1980s (Ministry of Education and Religious Affairs et al., 1989). The OECD was worried that evaluation and the responsibility for analysing school performance were to a large extent left to local teachers, without the educational authorities being able or willing to conduct standardised tests and national evaluations. Economists analysed the educational policy and proposed reforms for improving the performance of the Norwegian school system in the late 1980s and early 1990s (Friestad and Robertsen, 1990; and Norwegian Official Reports, 1991, No. 28), but these proposals were met with massive condemnation from many in the teachers' union. Economists who in the 1990s and early 2000s wanted to study school performance using grades as output measures were not granted access to the grade data by the educational authorities.

During the 1990s the government started to develop and implement a mandatory local-to-central government reporting system (KOSTRA) that also encompassed some PIs for primary and secondary education. A former Minister of Education from the Labour Party, Mr. Gudmund Hernes, initiated Norway's participation in the Programme for International Student Assessment (PISA).


After 1997 there have been various coalition or minority governments, and a consensus largely emerged on an educational policy placing more emphasis on basic educational performance such as literacy and numeracy.

The Case of PISA

The publication of the first data from PISA was a major event in Norwegian educational policy and received massive media and political attention. The first PISA statistics, from 2000, were published shortly after the centre–right coalition Bondevik II government took office in the autumn of 2001. The Bondevik II government had an education minister from the Conservative party, Ms. Kristin Clemet, and the government's educational policy emphasised testing and the measurement of schools' performance more than previous governments had done. The statistics, coming at an opportune time for the new government, showed that Norwegian secondary schools were underperforming in important educational fields such as literacy, mathematics and science. Moreover, performance on egalitarianism and social inclusion, which are traditionally highly esteemed in the Norwegian unitary school system and in society at large (Slagstad, 1998; and Hofstede, 1984), was unsatisfactory, even though the Norwegian school system was relatively resource intensive. Furthermore, the situation in the classrooms was often noisy, and many felt that disciplinary problems were not adequately dealt with. The good news was that most pupils and teachers thrived in their daily school activities (Kjærnsli and Lie, 2005).

In response to the criticism, the centre–right government's educational policy emphasised improving the pupils' basic skills in literacy, mathematics, foreign languages, science and digital competence (Reports to the Parliament No. 30 (2003–2004)). Parliament met this educational reform with almost unanimous approval. The PISA statistics underscored the new educational consensus of the late 1990s, focusing more on basic educational skills. This focus had traditionally been the main issue in the Conservative Party's educational policy. Some parties, however, had divergent views on performance management. The policy of measuring, ranking and publishing school performance aroused resistance from the socialist parties, the teachers' union and the pupils' organisation, as well as from some academic circles (Bergesen, 2006).

After the 2005 national election the centre–left (red–green) coalition government of Stoltenberg II was formed, consisting of some of the parties that had been most critical of the previous government's performance management policy. Although the new government pledged to change this policy, it continued the basic content of the educational policy as well as the Conservative party's traditional critique of school performance. The international educational rankings, including PISA, may have given some of the 'pragmatist' policy makers in the red–green coalition government the motive-and-opportunity (Hood, 1995) for changing some of the socialist parties' traditional educational policies. However, the red–green government did abolish the publication of schools' mean grades and the ranking of schools based on national school performance data (Bergesen, 2006).

Attention on Poor Performance

We can assess poor results by absolute and relative criteria, where deviation from expectations is a typical relative criterion. The third round of PISA statistics, which came in 2006, illustrates both underperformance (relative) and poor performance (absolute, over time) for Norway. Figure 1 shows the PISA 2006 scores for reading, mathematics and science as deviations from the OECD mean scores for selected OECD countries. The Norwegian scores in 2000 were slightly higher than the German scores, which had caused the PISA-shock in Germany, but the PISA 2006 data showed that the Norwegian scores had declined since 2000. The Norwegian researchers working with PISA summarised the main findings from the PISA 2006 data in a Nordic perspective in the National PISA Report:

unfortunately, Norway appears as the country with the lowest scores in Scandinavia, and significantly below the OECD average in all the subject domains. While Finland scores far above the OECD average in all subjects, the rest of the countries' results are close to the average value. The main findings are very similar to what we saw in PISA 2003 and 2000 ( . . . ), but the Norwegian results are even lower. We conclude that the performance is low . . . . (Kjærnsli et al., 2007, p. 13).

Figure 1 PISA 2006 Scores for Some OECD Countries
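To make the construction of Figure 1 concrete, the short sketch below expresses national mean scores as deviations from the OECD mean. The country names and scores are invented placeholders, not actual PISA 2006 results; the only assumption taken from PISA is that its scales are constructed around an OECD mean of roughly 500 points.

```python
# Illustrative only: national mean scores expressed as deviations from the
# OECD mean, as in Figure 1. The scores below are hypothetical placeholders.
oecd_mean = 500  # PISA scales are built around an OECD mean of roughly 500

national_means = {  # hypothetical reading scores
    "Country A": 547,
    "Country B": 495,
    "Country C": 484,
}

for country, score in sorted(national_means.items(),
                             key=lambda item: item[1], reverse=True):
    deviation = score - oecd_mean
    position = "below" if deviation < 0 else "above"
    print(f"{country}: {deviation:+d} points ({position} the OECD mean)")
```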

Media and Political Reactions

For the OECD, the publication of the PISA data is a major communication event that potentially has big media and political impacts. The PISA 2006 results were published on Tuesday 4 December 2007, and the news of the poor Norwegian school performance was soon out. Many national as well as regional newspapers subsequently ran front pages and headlines with the PISA rankings as the prominent 'bad news'. For example, the Norwegian national financial newspaper Dagens Næringsliv (equivalent to the Financial Times) ran a big headline on its front page on Wednesday 5 December 2007, and a two-page cover story with rankings of Norwegian school performance compared to other countries and analyses ('Norwegian schools severely set back: The international PISA study reveals that the situation in the Norwegian school system is worse than ever', my translation). Many of the national broadsheets and regional newspapers ran similar stories and follow-up articles for many days and even months. So, the poor performance got a lot of attention.

In the subsequent media discourse the political right blamed the left for effectively failing to support pupils from deprived backgrounds, because the left had traditionally favoured 'progressive pedagogical methods' since the 1970s, while the right had emphasised traditional schooling and testing, where teachers have authority, where one could identify poor as well as good educational performance, where students' (hard) work would give merit, and where the influence of parents' background could be better 'cancelled out'. The socialist parties and the teachers' union (initially) blamed the centre–right government for starting to monitor performance with unreliable and low-validity measures, 'teaching to the tests' and avoiding hard-to-measure but important aspects of education.

DISCUSSION

Cognitive psychology (Hogarth, 1987) tells us what kinds of issues grab our attention. The availability, timeliness and concrete nature of information, as well as wishful thinking, anchoring and framing, make it easier for us to pay attention to some issues rather than others and bias our attention. We have to admit that some public policy is only to a limited degree evidence based (Pfeffer and Sutton, 2006). Policy makers and civil servants are nevertheless experienced decision makers. That fact does not shield them from cognitive biases, but we can assume that many decision makers are aware of common biases and that training and institutional arrangements such as transparency and checks and balances are in place to counter some of them. That decision makers such as policy makers and civil servants are boundedly rational (Simon, 1997 [1947]) is well known, and selective attention is, according to Jones and Baumgartner (2005), the most important cognitive limitation influencing political choice. We can therefore assume that in the overall information handling process – either by default or by design – some attention will go to cost–benefit and economic issues, some attention will go to deviations within the administrative system, and some attention will go to political salience within the broader political system.

Models of Man

In order to discuss why poor performance gets so much attention, we make use of three simple models or theories of man (see Figure 2). These models are used as illustrative devices to explore certain economic, administrative and political explanations.

The first theory is the economic man – what Thaler and Sunstein (2008) term an 'econ' – and can be found in many introductory textbooks in microeconomics. This theory states that the actor has clear preferences and stable objectives, has knowledge about all alternatives, calculates benefits and costs for all alternatives, and is able to rationally choose and implement the alternative which maximises net benefit relative to his preferences and constraints. The theory of the economic man is unrealistic as a description of factual behaviour in most decision-making processes, but it is a good model to use as a benchmark or an ideal type in designing performance management systems. For example, performance information may be used to explore preferences, search for alternatives and monitor implementation (Metcalfe and Richards, 1990).

The second theory is the administrative man (Simon, 1997 [1947]), which underlies much thinking in, for instance, accounting, management and public administration. This theory, for which Herbert Simon received the Nobel memorial prize in economics in 1978, assumes only limited (bounded) rationality and problematises issues in the theory of the economic man: it is difficult to have clear preferences (goals) before one has chosen an alternative and learnt through action what one prefers and what the benefits and costs of the alternatives are. The administrative man model explicitly deals with limited human cognitive capabilities. These limitations affect the kind of data we calculate, which alternatives we actually recognise, and how rationally we make our decisions. The theory of the administrative man states that there is often more than one goal but only limited attention and scrutiny. The goals therefore do not serve as ends relative to traditional preferences but as constraints on the decisions. The decision is ultimately not a maximisation of net benefit but rather satisficing relative to a perceived problem.

The third theory is the political man. The political man also has limited rationality, but he is cognisant of how different institutions counteract limited rationality at the individual level at the same time as different actors pursue their own interests (Downs, 1957). The political man therefore uses information strategically, which means that information is used to pursue individual interests on the assumption that all other actors do the same (Feldman and March, 1981). The theory of the political man is, on the one hand, more difficult to use in predicting behaviour than the theory of the administrative man. On the other hand, it is well suited to studies of how institutions are designed and how professional politicians, bureaucrats and other political and administrative actors use performance information.

Figure 2 Models of Man

                            | Economic Man     | Administrative Man     | Political Man
Preferences and goals       | Known and stable | Many conflicting goals | Known but contested
Alternatives                | Known            | Partially known        | Partially known
Decision and implementation | Rational         | Bounded rational       | Strategically bounded rational

Economic Explanations

Using the economic man model we may assume that rational actors seek knowledge about society to inform decision making as well as to legitimate past decisions. One way of using performance information is to scan the horizon for upcoming problems or, using McCubbins and Schwartz's (1984) metaphor, to look for problems like police patrols routinely searching for crime. Depending on anchoring and expectations, solving 'big problems' (poor performance) may be perceived to add more to marginal welfare in society than solving 'minor problems' (improving on good performance); see Figure 3. Therefore, the efforts of the former centre–right and the current red–green coalition governments to manage educational policy by performance information focusing on poor performance seem laudable.

The motive for reforming the educational system in Norway had been present for a long time – the relatively high use of resources had been known since the 1980s (Rattsø, 2001). There had been some 'voice' regarding the costs of education and uncertain results, but for most practical purposes there were few 'exit' opportunities (Hirschman, 1970) from the unitary school system provided by local government. Private schools were allowed only for religious minorities or for special pedagogical reasons. The PISA statistics, however, provided the opportunity for reform (Hood, 1995) when they documented the low scores in literacy, mathematics and science. This perceived crisis also gave the socialist parties an opportunity to innovate their policies. Socialist parties traditionally have a strong electoral base among teachers (Verpe, 2005) and are thus reluctant to advocate dramatic public sector reforms. The performance information from PISA on the Norwegian secondary schools' low performance gave a 'window of opportunity' for reform. Thus, poor performance relative to expectations is salient information to be seized upon in the political system.


Figure 3 Marginal Welfare of Problem Solving
(The figure plots marginal welfare against problem solving, with regions labelled for good and poor performance.)
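The intuition behind Figure 3 can be sketched with a concave welfare function; the logarithmic form below is a minimal illustrative assumption, not a specification taken from the article.

```latex
% Illustrative assumption: social welfare W is increasing and concave in
% performance p, for example
\[
  W(p) = \ln p , \qquad W'(p) = \frac{1}{p} .
\]
% Concavity implies that the same improvement \Delta adds more welfare where
% performance is poor than where it is already good:
\[
  W(p_{\mathrm{low}} + \Delta) - W(p_{\mathrm{low}})
  \;>\;
  W(p_{\mathrm{high}} + \Delta) - W(p_{\mathrm{high}})
  \qquad \text{for } 0 < p_{\mathrm{low}} < p_{\mathrm{high}},\ \Delta > 0 .
\]
```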

The relatively poor Norwegian educational performance revealed by the PISA statistics may have (had) a prominent influence on national educational policy, but the direct effect of PISA on organisational behaviour in the municipalities which own the schools, in the schools and in the many classrooms is probably minor. The PISA tests are based on random samples from the national population of pupils and schools. Hence, the PISA results may be better suited to informing national educational policy debates than to informing local plans and behaviour. There are several other educational performance measurements, such as the national tests that are performed in every school, which may have a bigger potential for influencing local plans and behaviour than the PISA results. It may be the case, however, that the different test regimes, such as PISA and the national tests, reinforce each other in influencing national policy and local plans and behaviour.

Administrative Explanations

The economic man model outlined above explains why it may be rational to pay much attention to information on poor performance. A key question, however, is whether and how the information is used subsequently. The administrative man model may shed some light on this issue. The administrative system is complex, with extensive division of labour and specialisation (Thompson, 2003 [1967]). Some bodies, like the OECD and the national audit institutions, produce performance information. Other bodies – typically parliamentary public accounts, audit, finance and relevant standing committees – might handle the technically demanding information. A lay person (an 'outsider politician') would often not understand all the complexities involved in using performance information in a meaningful way and therefore has to consult and trust more knowledgeable colleagues ('insider politicians') and specialists (Hyndman, 2008). The majority of politicians would often be concerned only with special cases and with major deviations relative to plans, budgets or expectations, such as low performance. In this case we can draw on another of McCubbins and Schwartz's (1984) metaphors: information on poor performance can work like a fire alarm calling for our attention and action. Moreover, because all citizens have first-hand experience of being pupils themselves and many have children who are pupils (and relatively many citizens work as teachers), the entry barriers for engaging in educational policy may be relatively low compared to other policy areas.

A somewhat paradoxical notion in the institutional organisation literature is that many actors demand performance information but seldom use it, even when there is ample supply. A common explanation is that politicians and managers use information for ritualistic purposes, since systematic data collection signals rationality (Meyer and Rowan, 1977). Another explanation asserts that people will ask for more information than they can use because they do not know how and when they are going to use it. The information needs to be available in order to be used when a problem arises; as a result, much information is not used. A third explanation for the supply and demand paradox has to do with the quantitative nature of PIs. It is argued that politicians prefer to use rich information, such as that provided by personal conversations, face-to-face contacts and informal meetings, rather than formal information such as that provided by PIs (Daft and Lengel, 1990).

A potential conclusion from the seemingly low use of performance information could be that much performance information for public policy is useless. The Norwegian case, however, does not support this view. There was widespread interest in the PISA scores, and the performance information seemed to be useful in public policy making, but there was also much resistance. The new performance regime, and in particular the policy of conducting national tests and ranking schools, aroused much resistance, especially from the political left. When the Ministry of Education in 2004 made the pupils' average examination scores public for every secondary school, some teachers sabotaged the measurements by leaking the tests before the exams (Bergesen, 2006). The major pupils' organisation encouraged the pupils to boycott the national tests, even though the policy of conducting these tests had been democratically approved by a large majority in Parliament (Johannesen, 2005).

The co-operation with many other OECD countries on PISA was an important support for the government. When stakeholders criticised the measures for being unreliable, or claimed that Norway was a special case beyond comparison, the government could point to standardised measures and the participation of other comparable states such as Finland and the Netherlands in the PISA project. Finland, in particular, got a lot of attention since the country had very good performance (cf. Figure 1), is comparable in size and is a Nordic country. In this way, it became more difficult for the opposition and other critical stakeholders such as the teachers' union to claim a lack of evidence of low performance in the Norwegian school system (Bergesen, 2006). Hence, the government could use the PIs to defend its educational policy. By contrast, the school performance of the Netherlands and Austria, both countries with corporatist traditions relatively similar to those of the Nordic countries, got relatively little attention in the ensuing Norwegian discourse.

Political Explanations

Now we can apply the political man model and add political explanations to the economic and administrative explanations. The Norwegian case highlights two more issues. First, relevant information was available, but central actors (in this case the leftist political parties, the teachers' union and the pupils' organisation) initially did not want to use it. This implies that politics and interests are important for understanding performance management in public policy, which points to the usefulness of the political man model. Second, low performance gets most of the attention, in spite of the 'learning from best practice' rhetoric. In public management, as opposed to business management, there is a tendency to 'learn from poor performance'.

In the public sector, avoiding low performance that could result in 'naming, shaming and blaming' could be as strong an incentive as performing well. There might not be any strong incentive to perform 'best' because the 'winner' hardly 'takes it all'. It may rather be that 'the loser loses it all' in public management. For example, Metcalfe and Richards (1990) noted that risk avoidance was prevalent in the Whitehall culture because mistakes were punished and successes were not rewarded, and James and John (2007) documented that in England low performing local authorities had a higher rate of executive succession than high performing local authorities. For many public services, avoiding low performance by achieving a certain basic or average level of performance for specific, often vulnerable, users and clients could be more important than achieving a high level of service. In the egalitarian Norwegian society, Rawls' (1971) normative theory of justice may shed light on this phenomenon. Policy makers may design policies and institutions in order to enhance justice. In order to do so, policy makers should act as if under a veil of ignorance regarding their own starting point and future in this society and try to maximise the welfare of the least well-off groups. Therefore, in public policy and management it may be more important to avoid underperformance than to achieve the highest performance; hence poor performance gets so much attention. In order to understand the emphasis on poor performance and problem solving and the incentives this creates, several complementary perspectives are useful.


The administrative or systems perspective explained that in a self-regulating system one would focus on negative and positive deviations. Any feedback on big deviations from the normal or from expectations would eventually trigger actions to balance the system. The systems perspective may in this way explain the focus on deviations, but not the emphasis on low performance. Utilising the administrative man model, we can search for potential explanations in cognitive psychology, which addresses deviations relative to the economic man model. Prospect theory (Kahneman and Tversky, 1979), for example, highlights the issue that people frame the same economic amount differently when it is regarded as a loss compared to when it is seen as a gain. That losses loom larger than gains is a central principle of prospect theory (a stylised formalisation is sketched at the end of this subsection). Hence, when performance falls below expectations set by benchmarks such as neighbouring countries, it may appear as poor performance that needs attention, while the same performance without comparisons could have gone unnoticed.

Furthermore, a media perspective with a focus on political communication might point to the fact that journalists and voters find it easy to react to low performance. Bad news may be easier to fit to the tools of the 'media society', such as making a story dramatic, personalised and conflict ridden (Hernes, 1978). There is therefore also a danger that the information is covered excessively, in inflammatory language, and framed politically without taking scientific (methodological) issues properly into account (Cohen, 1983). For civil servants and politicians, avoiding unduly negative attention may therefore be a better career or re-election strategy than trying to stand out as high performing (Weaver, 1986), because high performance seldom gets so much public attention.

Finally, a political perspective may explain why it might be easier for the government to agree on – or at least find it useful to admit responsibility for – low performance than to identify or take the credit for high performance. For the opposition, there is not much reward in identifying high performance. It is exposing and blaming low performance that may initiate a shift in attention and a policy renewal (Jones and Baumgartner, 2005), a change of senior management or top executive (Boyne et al., 2010) or eventually bring the opposition into the ministerial seats after the next election. In some instances, it is also important for the ruling government to identify and prevent low performance. Low performance could reframe a problem in such a way that the government could motivate, mobilise and legitimate change (Brunsson, 1985). Hence, focusing on performance below par may pay off both for oppositional and for governmental politicians who want changes.

The focus on poor performance may fruitfully be analysed as an institution. Simon (1997 [1947]) defines the institutions of a society as the rules specifying the roles that particular persons will assume in relation to one another under certain circumstances. We can also regard institutions as norms that are reinforced by use (Elster, 2007; and March and Olsen, 1995). Some actors, such as managers and policy makers, systematically search for low performance, and other actors, such as auditors and the media, focus on areas where low performance is expected. Some actors also utilise techniques such as rankings and league tables that frame relative performance as good and poor performance, which easily grab different actors' attention. Over time, patterns of actors, interests, behaviour and administrative bodies evolve into institutions (by 'accident', blaming and naming rather than design), with supply and demand of performance information and roles for prosecutors and defenders regarding perceived poor performance.
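A minimal formal sketch of the loss aversion point mentioned above uses the standard prospect theory value function; the parameter values are commonly cited later estimates and are shown here purely for illustration, not as part of the original argument.

```latex
% Prospect theory value function (Kahneman and Tversky, 1979). The parameters
% below are commonly cited later estimates, used here only for illustration.
\[
  v(x) =
  \begin{cases}
    x^{\alpha} & \text{if } x \ge 0 , \\
    -\lambda (-x)^{\beta} & \text{if } x < 0 ,
  \end{cases}
  \qquad \alpha \approx \beta \approx 0.88 , \quad \lambda \approx 2.25 .
\]
% With \lambda > 1, a shortfall of size \Delta relative to the reference point
% (for example, the OECD mean) weighs more heavily than an equally large
% surplus: |v(-\Delta)| > v(\Delta) for all \Delta > 0.
```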

Economic Consequences

Performance information has impact, but PIs seldom give answers – they do not function as 'dials' metering the state of an issue. Rather, they function as 'tin openers' (Carter, 1991) that indicate which boxes of problems different stakeholders ought to scrutinise more closely. There appears to be some truth in the slogan that 'what gets measured gets done'. Certain indicators function as incentives, and people and organisations adapt. For example, the first PISA statistics from 2000 documented that Norwegian pupils performed badly in reading, mathematics and science literacy. The statistics from 2003 as well as 2006 documented a further deterioration. Furthermore, performance varied systematically between pupils depending on gender and other socio-economic factors. This finding contradicted the traditionally strong value of equality and challenged the myth of the unitary school system – a school for everyone with no groups excluded. The debates following the PISA 2000, 2003 and 2006 publications were a major impetus and facilitator for the centre–right government, as well as the red–green government, to reform the Norwegian school system. There is now a wide consensus among the political parties as well as within the teachers' unions that the Norwegian school system needs to focus more on improving basic educational outcomes. Performance information clearly did facilitate policy learning and innovation.

Administrative Consequences

The implementation of the policy of measuring and ranking schools' performance led to a heated debate. Initially, there were tensions between the political leadership of the Ministry of Education and the teachers' unions. The Minister of Education in the centre–right government was a high-profile liberal, while the leadership and many members of the teachers' union were sympathetic to the left. The teachers' unions saw a hidden agenda of introducing market liberalism into the public school system. This ideological turn in the debate caused problems, since it mobilised hostility among some professional and pupils' organisations – in this case even long before the implementation of the system started. However, the relationship between the minister and the union improved during the implementation stage, partly due to the constructive discourses and co-operation on the content of the educational policy that the information on poor performance provided (Bergesen, 2006). Other stakeholders also joined the discourse, in particular the Confederation of Norwegian Enterprise (NHO). The end result may have been improved administrative (hierarchical) control of educational policy, because the teachers' unions now have more competition in interest group activities, and more governmental bodies may have taken an interest in a policy area where the Ministry of Education previously dealt almost exclusively with the teachers' unions.

Political Consequences

Performance information that helps politicians and other policy makers express their views, obtain relevant evidence as feedback, and make policy and society transparent may be conducive to open societies and democratic discourses (Popper, 1966 [1945]). These processes of political deliberation facilitate effectiveness because they inform policy decisions about the social benefits and costs of policy and of policy failure. For example, inadequate literacy skills, as revealed by the PISA indicators, impose high social costs on individuals as well as on society.

Norway is a rich country and has an egalitarian public management culture (Hood, 1998). Norwegian citizens therefore complain about public sector services if they feel the service provision is unjust (Rolland, 2003). One important and longstanding aim for the public sector school system in Norway has been to provide equal opportunity for all pupils regardless of gender, family background and place of residence. The PISA results (as well as other performance information) provided data that gave many actors reasons for doubting the performance of the Norwegian school system. There is now a public discourse on social exclusion and learning in Norway that probably would not have taken place so intensively without the provision of the performance information. Whether interest in poor performance and justice is contingent on national (public management) culture, or whether interest in poor performance is universal, for example because of its perceived effect on the economy, is unknown.

The design, implementation and use of performance management often shift the balance of power in the polity. Traditionally the Norwegian governance system had corporatist elements (Pierre and Peters, 2000). Public discourses on performance information, and interference from stakeholders other than certain unions and the government, did not fit this traditional corporatist model, because they reduced the unions' influence on public policy. For example, before the centre–right (Bondevik II) government came to power, the Ministry of Education used to consult the teachers' union before issuing policies. These traditions came under pressure (Bergesen, 2006). Making performance information available in the public realm introduced transparency and thus may have increased parliamentary, political and market-based accountability relative to the traditional professional control and corporatist co-operation.

Figure 4 Consequences of Attention on Poor Performance in Public Policy

Model of Rationality | Focus of Attention                                        | Consequence
Economic man         | Problem identification and cost–benefit calculations      | Policy learning and innovation
Administrative man   | Deviations relative to plans, expectations or benchmarks  | Administrative control and public discourse
Political man        | Risk and political salience                               | Transparency and accountability

Figure 4 summarises the analysis of consequences of the attention on poor performance.

CONCLUSIONS

Economically, it is important to explore and exploit best practices as well as to identify and prevent poor performance. Administratively, there are bodies that identify both low and high performance. Politically, however, poor performance seemingly tends to get most attention. Performance management that identifies low performance may be important for public policy because it effectively informs public discourse. The mechanism of naming and shaming in a public setting involves the identification of 'losers' and 'winners', engages the media and interest groups as well as official authorities, triggers both losers and winners to make their cases, and resonates with many actors in the polity. Even though stakeholders use the performance information selectively according to their ideologies and interests, its contribution to public deliberation may be important for open, democratic societies. Attention on poor performance may improve decision making and facilitate policy innovation, enhance transparency and administrative control, and facilitate responsiveness and democratic accountability.

The history and practices of public policy vary between countries, but it seems fair to say that the focus on poor performance is an institution. Different administrative bodies systematically search for high and low performance, but it is low performance which gets the main public attention. There are administrative bodies that supply as well as bodies that demand information, and there are roles for prosecutors and defenders. The attention on poor performance may be especially high when there are both motives and opportunities for change. For example, the national culture as well as the educational policy in Norway has traditionally emphasised equality. Norway has often had minority and/or coalition governments, but the ruling parties have had divergent educational policies. The emphasis has therefore often been on formulating policy and implementing reforms. Managing performance may therefore have slipped into the background until PISA and other statistics made the low performance public and the need for policy innovation urgent. The publication of the PISA statistics happened at an opportune time both for the centre–right government and for the polity in general. The new performance regime may have been especially good for the least well-off pupils. Therefore, the widespread tendency to pay disproportionate attention to poor performance may not be so bad after all. Interestingly, it has been the often criticised 'neo-liberal' and elitist rankings and league tables of high performance that have provided the means to address the important egalitarian issues of low performance and inequality in the Norwegian case.

The analysis gives some indications of fruitful avenues for future research. This article started from the premise that poor performance gets most attention, and some empirical studies have corroborated that assertion. Some promising issues for future research are to assess the relative prominence of poor and good performance, and to ask: What kind of attention do different actors pay to different types of performance in different circumstances? Does negativity bias apply to relative poor performance as much as to absolute poor performance? Does the negativity bias extend to other public services where people in general have less direct experience than they have from their own and their children's schooling, and where poor performance has less long-term impact? How does naming, blaming and shaming affect relative and absolute performance over time? How does performance management of poor performance vary between different contexts such as national institutions and culture, political regimes, tiers of government, policy areas and regulation regimes? For example, is the interest in underperformance most prevalent in countries with an egalitarian (public management) culture? Are shaming and improvements in performance conditional on regulations such as user choice (vouchers), contracting out and the threat of direct governmental intervention in low performing organisations' management or activities? How does information on good performance impact public policy relative to information on poor performance? Future research should consider both the potential negative and positive functions of asymmetric responses to good and bad information and of negativity bias in performance management and public policy.

REFERENCES

Askim, J. (2007), 'How Do Politicians Use Performance Information? An Analysis of the Norwegian Local Government Experience', International Review of Administrative Sciences, Vol. 73, No. 3, pp. 453–72.
Behn, R. D. (2003), 'Why Measure Performance? Different Purposes Require Different Measures', Public Administration Review, Vol. 63, No. 5, pp. 588–606.
Bergesen, H. O. (2006), Kampen om kunnskapsskolen (Universitetsforlaget, Oslo).

Boyne, G. A., O. James, P. John and N. Petrovsky (2010), 'Does Political Change Affect Senior Management Turnover? An Empirical Analysis of Top-tier Local Authorities in England', Public Administration, Vol. 88, No. 1, pp. 136–53.
Brunsson, N. (1985), The Irrational Organization: Irrationality as a Basis for Organizational Action and Change (John Wiley & Sons, Chichester).
Carson, P. P., P. A. Lanier, K. D. Carson and B. N. Guidry (2000), 'Clearing a Path through the Management Fashion Jungle: Some Preliminary Trailblazing', Academy of Management Journal, Vol. 43, No. 6, pp. 1143–58.
Carter, N. (1991), 'Learning to Measure Performance: The Use of Indicators in Organizations', Public Administration, Vol. 69, No. 1, pp. 85–101.
Cohen, B. (1983), 'Nuclear Journalism: Lies, Damned Lies, and News Reports', Policy Review, Vol. 26, No. 3, pp. 70–74.
Daft, R. L. and R. H. Lengel (1990), 'Information Richness. A New Approach to Managerial Behavior and Organizational Design', in L. L. Cummings and Barry M. Staw (eds.), Information and Cognition in Organizations (JAI Press, Greenwich, Connecticut).
de Bruijn, H. (2002), Managing Performance in the Public Sector (Routledge, London).
de Lancer Julnes, P. (2006), 'Performance Measurement: An Effective Tool for Government Accountability? The Debate Goes On', Evaluation, Vol. 12, No. 2, pp. 219–35.
Dixon, R., M. Mullers, C. Hood and C. Arendt (2010), 'An Iron Law of Negativity? Domestic Press Responses to International Educational Rankings in Three EU Countries', Paper presented at the 32nd European Group of Public Administration (EGPA) Conference (Toulouse, 8–10 September).
Dooren, W. van and S. van de Walle (eds.) (2008), Performance Information in the Public Sector: How it is Used (Palgrave Macmillan, Basingstoke).
Downs, A. (1957), An Economic Theory of Democracy (Harper Collins, New York).
Elster, J. (2007), Explaining Social Behavior: More Nuts and Bolts for the Social Sciences (Cambridge University Press, Cambridge).
Feldman, M. S. and J. G. March (1981), 'Information in Organizations as Signal and Symbol', Administrative Science Quarterly, Vol. 26, No. 2, pp. 171–86.
Friestad, L. B. H. and K. Robertsen (1990), Effektiviseringsmuligheter i grunnskolen, Agder Research FoU-Report No. 72 (Norwegian Academic Press, Kristiansand).
Grek, S. (2008), 'PISA in the British Media: Leaning Tower or Robust Testing Tool?', CES Briefing No. 45 (Centre for Educational Sociology, The University of Edinburgh).
——— (2009), 'Governing by Numbers: The PISA "Effect" in Europe', Journal of Education Policy, Vol. 24, No. 1, pp. 23–37.
Helden, G. J. van, Å. Johnsen and J. Vakkuri (2007), 'Understanding Public Sector Performance Measurement: The Life Cycle Approach', Paper presented at the Conference of the European Group of Public Administration (EGPA) (Madrid, 19–22 September).
Hernes, G. (1978), 'Det medievridde samfunn', in G. Hernes (ed.), Forhandlingsøkonomi og blandingsadministrasjon (Universitetsforlaget, Oslo).
Hirschman, A. O. (1970), Exit, Voice, and Loyalty: Responses to Decline in Firms, Organizations, and States (Harvard University Press, Cambridge, Massachusetts).
Hofstede, G. (1984), Culture's Consequences: International Differences in Work-Related Values (Sage, London).
Hogarth, R. (1987), Judgement and Choice (2nd ed., John Wiley & Sons, New York).
Hood, C. (1995), 'The New Public Management in the 1980s: Variations on a Theme', Accounting, Organizations and Society, Vol. 20, Nos. 2/3, pp. 93–109.
——— (1998), The Art of the State: Culture, Rhetoric, and Public Management (Clarendon Press, Oxford).
——— (2007), 'What Happens When Transparency Meets Blame-Avoidance?', Public Management Review, Vol. 9, No. 2, pp. 191–210.
Hyndman, N. (2008), 'Accountability and Democratic Accountability in Northern Ireland', in M. Ezzamel, N. Hyndman, Å. Johnsen and I. Lapsley (eds.), Accounting in Politics: Devolution and Democratic Accountability (Routledge, London).
Jackson, P. M. (1993), 'Public Service Performance Evaluation: A Strategic Perspective', Public Money & Management, Vol. 13, No. 4, pp. 9–14.
James, O. and P. John (2007), 'Public Management at the Ballot Box: Performance Information and Electoral Support for Incumbent English Local Governments', Journal of Public Administration Research and Theory, Vol. 17, No. 4, pp. 567–80.

Johannesen, B. M. (2005), 'Så strøk skolen i demokrati også', Horisont, Vol. 6, No. 2, pp. 64–9.
Johnsen, Å. (2005), 'What Does 25 Years of Experience Tell Us about the State of Performance Measurement in Public Management and Policy?', Public Money & Management, Vol. 25, No. 1, pp. 9–17.
Jones, B. D. and F. R. Baumgartner (2005), The Politics of Attention: How Government Prioritizes Problems (University of Chicago Press, Chicago).
Kahneman, D. and A. Tversky (1979), 'Prospect Theory: An Analysis of Decision under Risk', Econometrica, Vol. 47, No. 2, pp. 263–91.
Kaplan, R. S. and D. P. Norton (1996), The Balanced Scorecard: Translating Strategy into Action (Harvard Business School Press, Boston, Massachusetts).
Kjærnsli, M. and S. Lie (2005), 'Hva forteller PISA-undersøkelsen om norsk skole?', Horisont, Vol. 6, No. 2, pp. 22–31.
——— ———, R. V. Olsen and A. Roe (2007), PISA 2006: Chapters 1 and 11 from the National PISA Report 'Tid for tunge løft' (Universitetsforlaget, Oslo).
Llewellyn, S. and D. Northcott (2005), 'The Average Hospital', Accounting, Organizations and Society, Vol. 30, No. 6, pp. 555–83.
March, J. G. (1991), 'Exploration and Exploitation in Organizational Learning', Organization Science, Vol. 2, No. 1, pp. 71–87.
——— and J. P. Olsen (1995), Democratic Governance (The Free Press, New York).
Mayston, D. J. (1985), 'Non-Profit Performance Indicators in the Public Sector', Financial Accountability & Management, Vol. 1, No. 1, pp. 51–74.
McCubbins, M. D. and T. Schwartz (1984), 'Congressional Oversight Overlooked: Police Patrols versus Fire Alarms', American Journal of Political Science, Vol. 28, No. 1, pp. 165–79.
Metcalfe, L. and S. Richards (1990), Improving Public Management (2nd ed., Sage, London).
Meyer, M. W. and V. Gupta (1994), 'The Performance Paradox', Research in Organizational Behavior, Vol. 16, pp. 309–69.
Meyer, J. W. and B. Rowan (1977), 'Institutionalized Organizations: Formal Structure as Myth and Ceremony', American Journal of Sociology, Vol. 83, No. 2, pp. 340–63. Reprinted in revised edition in W. W. Powell and P. J. DiMaggio (eds.) (1991), The New Institutionalism in Organizational Analysis (University of Chicago Press, Chicago).
Ministry of Education and Religious Affairs, Ministry of Culture and Science, and Organisation for Economic Cooperation and Development (1989), OECD-vurdering av norsk utdanningspolitikk. Norsk rapport til OECD. Ekspertvurdering fra OECD (Aschehoug, Oslo).
Norwegian Official Reports 1991 No. 28, Mot bedre vitende? Effektiviseringsmuligheter i offentlig sektor (The Norman Committee) (Ministry of Work and Administration, Oslo).
Pawson, R. (2002), 'Evidence and Policy and Naming and Shaming', Policy Studies, Vol. 23, Nos. 3/4, pp. 211–30.
Pfeffer, J. and R. I. Sutton (2006), 'Evidence-Based Management', Harvard Business Review, Vol. 84, No. 1, pp. 62–74.
Pierre, J. and B. G. Peters (2000), Governance, Politics and the State (Macmillan Press, London).
Pollitt, C. (2006), 'Performance Information for Democracy: The Missing Link?', Evaluation, Vol. 12, No. 1, pp. 38–55.
Pongratz, L. A. (2006), 'Voluntary Self-Control: Education Reform as a Governmental Strategy', Educational Philosophy & Theory, Vol. 38, No. 4, pp. 471–82.
Popper, K. R. (1966 [1945]), The Open Society and its Enemies (Routledge, London).
Rattsø, J. (2001), 'Det er ikke penger som er problemet i norsk skole', Horisont, Vol. 2, No. 2, pp. 10–20.
Rawls, J. (1971), A Theory of Justice (Oxford University Press, London).
Reports to the Parliament No. 30 (2003–2004), Kultur for læring (Ministry of Education and Research, Oslo).
Ridgway, V. F. (1956), 'Dysfunctional Consequences of Performance Measurements', Administrative Science Quarterly, Vol. 1, No. 2, pp. 240–47.
Rolland, A. (2003), 'Rikdom, likhet og misnøye: Om borgertilfredshet i det egalitære samfunn', Kirke og kultur, Vol. 109, Nos. 4/5, pp. 495–506.
Simon, H. A. (1947/1997), Administrative Behavior: A Study of Decision-Making Processes in Administrative Organizations (4th ed., The Free Press, New York).
Slagstad, R. (1998), De nasjonale strateger (Pax Forlag, Oslo).
Smith, P. (1995), 'On the Unintended Consequences of Publishing Performance Data in the Public Sector', International Journal of Public Administration, Vol. 18, Nos. 2/3, pp. 277–310.

Soroka, S. N. (2006), 'Good News and Bad News: Asymmetric Responses to Economic Information', The Journal of Politics, Vol. 68, No. 2, pp. 372–85.
Thaler, R. and C. R. Sunstein (2008), Nudge: Improving Decisions About Health, Wealth, and Happiness (Yale University Press, New Haven).
Thompson, J. D. (2003 [1967]), Organizations in Action: Social Science Bases of Administrative Theory (Transaction Publishers, New Brunswick, New Jersey).
Verpe, K. (2005), 'Lærerne er skeptiske og går mot venstre', Horisont, Vol. 6, No. 2, pp. 36–45.
Weaver, R. K. (1986), 'The Politics of Blame Avoidance', Journal of Public Policy, Vol. 6, No. 4, pp. 371–98.
Williams, D. W. (2003), 'Measuring Government in the Early Twentieth Century', Public Administration Review, Vol. 63, No. 6, pp. 643–59.
