

Narrowing down accountability through performance monitoring technology


E-government in Greece

Dimitra Petrakaki, Niall Hayes and Lucas Introna
Department of Organisation, Work and Technology, Lancaster University Management School, Lancaster University, Lancaster, UK

Abstract

Purpose – The purpose of this paper is to explore the relationship between performance monitoring technology and accountability in electronic government initiatives. Specifically, it aims to investigate how performance monitoring technologies are deployed in electronic government and the consequences that their implementation may have for public service accountability.

Design/methodology/approach – The paper draws upon an in-depth empirical study of several Greek Citizens Service Centres (CSCs). CSCs are a central component of Greece's e-government strategy. Qualitative methods were deployed during fieldwork and data were analysed in line with the social constructionist paradigm.

Findings – Contrary to the mainstream e-government literature, the paper argues that the introduction of performance monitoring technology does not always ensure accountability in the public sector. Overall, it suggests that performance technology may not necessarily lead to a form of accountability that always has the interests of the public at its heart. Instead it argues that it may lead to a narrowing down of accountability and the emergence of an instrumental rationality.

Originality/value – The paper argues that the critical literature on management accounting provides important insights for understanding the consequences of performance monitoring in e-government projects and for conceptualising the relationship between performance and accountability.

Keywords Performance monitoring, Online operations, Government, Public administration, Greece

Paper type Research paper

Qualitative Research in Accounting & Management, Vol. 6 No. 3, 2009, pp. 160-179 © Emerald Group Publishing Limited 1176-6093 DOI 10.1108/11766090910973911

1. Introduction

Most governments are engaged in electronic government projects intended to modernise and transform the ways in which public services are provided to citizens. Information and communication technologies (ICTs) play a pivotal role in the general orchestration of e-government by orientating government services towards the customer/citizen and making the provision of government services more efficient and accountable. Typically, ICTs are deployed to reorganise administrative processes, as well as the ways in which citizens participate in public procedures, and how they request and receive services and information from central and local government (Basu, 2004; Chadwick and May, 2003; Ciborra, 2003; Curthoys et al., 2003; Tan and Pan, 2003; Vintar et al., 2003; von Haldenwang, 2004). More specifically, ICTs often provide a means for devising the redesigned processes (through business process re-engineering, for example), for supporting the delivery of the redesigned services, for linking internal cross-departmental processes and, finally, for receiving and disseminating information and services to citizens through one-stop shops, portals and

internet kiosks (Bloomfield and Hayes, 2004; Moon, 2002; Tan and Pan, 2003). Further, ICTs have been used to monitor the efficiency and effectiveness of the provision of such services. It is this theme that our paper will address, namely performance monitoring and accountability in public sector one-stop shops. One-stop shops are a common component of e-government projects: single contact points, either geographical or electronic, in which multiple governmental departments integrate (physically or electronically) for the direct delivery of public services to citizens (Cowell and Martin, 2003; Jaeger and Thompson, 2004; Ho, 2002; Illsley et al., 2000; von Haldenwang, 2004). One-stop shops typically comprise a front office that acts as an interface between citizens and public administration and a back office that is responsible for the processing of citizens' requests (Lenk, 2002; Vintar et al., 2003; von Haldenwang, 2004). In relation to the specific focus of this paper, a number of studies have reported on issues pertaining to performance monitoring and accountability in electronic government. Some studies focus on the development process of specific performance monitoring systems and their attributes (Alford and Baird, 1997; Behn, 2003; Boland and Fowler, 2000; Hoggett, 1996; Wouters and Wilderom, 2008). Others focus on the meaning, development and limitations of performance indicators (Hoggett, 1996), or the purposes and benefits of performance monitoring systems (Behn, 2003; Scales, 1997; Propper and Wilson, 2003; Wholey and Hatry, 1992). Finally, others focus on various concerns about the management of such systems (Smith, 1998). There have also been some critical studies reported. A number of authors are particularly critical of the ways and extent to which people and organisations can be governed by numbers (Clarke, 2005; Clegg, 1998; du Gay, 2000; Hoggett, 1996; Hopper and Macintosh, 1998; Miller, 2001; Miller and O'Leary, 1987; Newman, 2000; Power, 1994; Rose, 1999b). It is this latter work that we will explicitly draw upon. Within this critical literature, there has been little consideration of how performance monitoring technologies are deployed in electronic government initiatives and how they condition a certain mode of accountability. For the purposes of this paper we refer to accountability as the ways in which staff involved in e-government projects are taken to be responsible for specific activities, to whom, against which criteria, and how they are to make such activities explicit to other staff (typically superiors). We contribute to this literature on accountability and performance monitoring by providing an in-depth case study of a Greek e-government project. Specifically, our research illustrates how performance monitoring technology was used by central government and how it constructed a particular mode of accountability (and unaccountability). Thus, we specifically attend to the ways in which technology is implicated in the relationship between performance monitoring and accountability. We do this by drawing on Weber's (1948) writings on accountability and the bureaucracy, as they have been central in conceptualising the nature of accountability in public services. Overall, we shall suggest that performance technology may not necessarily lead to a form of accountability that always has the interests of the public at its heart.
Instead we will argue that it may lead to a narrowing down of accountability and the emergence of an instrumental rationality. Our paper is structured as follows. Section 2 reviews the literature that has considered the theme of accountability, and especially accountability within a public sector context. In particular, here we outline a Weberian perspective of accountability as this forms the basis for much of our subsequent analysis. Section 3 details our


methodological underpinnings. Section 4 first outlines the case, the Greek Citizen Service Centres and the introduction of a performance monitoring technology, before presenting a detailed account of how this technology was made to work by different groups following its introduction. Section 5 discusses the issues emerging from the case in relation to literature on accountability, and especially in relation to Weber’s writings on accountability. Section 6 offers some conclusions.

2. Performance monitoring technology and accountability

2.1 Understanding accountability in the public sector
Performance monitoring technologies are often deployed in e-government initiatives to engender greater accountability across government. Yet the concept of accountability is vague and ambiguous, and often fails to stipulate who is responsible for what and to whom (Roberts, 2002). Conventional perspectives view accountability as a relationship between a person who is held liable for providing an account of his/her actions (accountor) and a person who is external to this action (accountee) (Chan, 1999; Keohane, 2002; Mulgan, 2000; Parker and Gould, 1999). This relationship, which constitutes "external accountability", involves an evaluation of the accountor's behaviours against certain standards (legal, constitutional, performance, etc.), and often the imposition of rewards or sanctions (Chan, 1999). Other authors perceive external accountability as being the liabilities of accountors towards any party outside the accountor's institution (e.g. the government, community, stakeholders, etc.) (Barberis, 1998; Keohane, 2002; Parker and Gould, 1999). External accountability can be further defined in various ways. With regard to public administration, which is the focal point of this paper, Bovens et al. (2008) approach accountability from a democratic, constitutional and learning perspective. They argue that democratic accountability is related to public officials' answerability to legitimate bodies that represent society, constitutional accountability is legal and regulatory in character, whereas learning accountability refers to the responsiveness of officials to the needs and interests of relevant stakeholders. Further, many authors distinguish between two types of accountability: traditional accountability, which is legal and hierarchical, and performance-based accountability, which is oriented towards identifying the output of public officials' work or financial outcomes (Gendron et al., 2001; Jos and Tompkins, 2004; Keohane, 2002; Kloot, 1999; Parker and Gould, 1999; Ryan and Walsh, 2004). Moreover, Pina et al. (2007) created a typology of four forms of accountability that relate to prevalent types of public administration in Europe. They argue that accountability is constitutional in Germany, market-oriented in the UK, legalistic in Southern Europe, and citizen-centric in Nordic countries. More recent accountability literature that has examined "internal accountability" has indicated that accountability is a complex process that cannot be separated from an accountor's self and his or her professional values (Mulgan, 2000), and consequently cannot be reduced to external criteria and standards (Choudhury and Ahmed, 2002; Pratchett and Wingfield, 1996). Emphasis is therefore placed not only on meeting external standards and providing accounts for them, but also on meeting them in accordance with personal values, ethics and a sense of responsibility. Gendron et al. (2001), who studied the independence of state auditors, illustrated the interdependent relationship between accountability and personal responsibility by examining how the imperative to become accountable to performance targets influences responsibility towards one's

professional role and expectations from society. Also, Chan (1999), who studied accountability in police practice, indicated that accountability depends upon multiple issues, among them government officers' perceptions of the importance of setting standards of accountability and of taking responsibility for their actions. Overall, debates in the literature point to there being great variety in terms of the degree to which accountability is dependent upon external dimensions (i.e. legal/constitutional standards) or internal constraints (i.e. individual discretion) (Frederickson, 1993; Roberts, 2002). A way to deal with this ambiguity is to draw upon Weber's (1948) work on the ideal type of bureaucracy. As already noted, Weber's analysis forms an important basis for our subsequent case discussion, as it has significantly influenced the shape and form of accountability in the public sector. Weber's work reconciles (although implicitly) internal and external accountability by allowing us to conceptualise the meaning, nature and enactment of accountability in public administration from both perspectives (internal and external). Weber indicates that there are three sources of accountability, each with an associated type and a specific meaning (Table I). Weber (1948, p. 197) perceives accountability as being an inextricable part of bureaucracy as a form of life "distinct from the sphere of private life" that governs the Bureau and the officials who undertake it. Weber (1948) highlights several sources of accountability that constitute some of the distinctive characteristics of the ideal type of bureaucracy, namely the hierarchy, the principle of impersonality and the law. Hierarchy indicates relationships of super- and sub-ordination. Officials low down in the hierarchy are rendered liable for executing decisions and following the law and rules "without scorn and bias" (Weber, 1948, p. 95). Those high up in the hierarchy are responsible for setting policy, making decisions and for the outcomes of their subordinates' actions. The hierarchy therefore attributes responsibility and accountability for specific processes and outcomes. The accountability that the hierarchy engenders seeks to provide equity in the provision of public services to all citizens (Weber, 1948, p. 198). At the same time, laws regulate the conduct of bureaucrats by prescribing what is acceptable and unacceptable within the bureaucracy and thereby ensuring equity in the ways in which citizens are treated. This constitutes a formal rationality, which guides the actions of officials and restricts other emotional (substantive) rationales (Weber, 1948). Moreover, the ideal type of bureaucracy dissociates the subject who holds an office from the authority of his/her subject position. The principle of impersonality implies that officials are not allowed to exploit the power of their office in order to satisfy personal interests and emotions. On the contrary, officials are tied to their office by their "entire material and ideal existence" (Weber, 1948, p. 228). Their role is to protect the public interest by carrying out the activities prescribed for their office "without scorn and bias" (Weber, 1948, p. 95), self-interest or opportunism.

Table I. The concept of accountability in Weber's (1948) work

Source          Type                    Meaning
Hierarchy       Hierarchical            Responsibility for processes
Law             Legal                   Equity and justice
Impersonality   Formal and normative    Elimination of opportunistic and self-serving behaviours


One might argue that at the heart of the Weberian bureaucracy, and its associated notion of accountability, is the idea of procedural justice, which refers to the fairness and the transparency of the processes by which decisions are made (Weber, 1992 [1919]; Finer, 1941; Fry and Nigro, 1996). This is not to say that the Weberian bureaucracy in its actual operation necessarily achieves procedural justice. Nevertheless, one could argue that this is its essential intent (Weber, 1992 [1919]). This Weberian perspective on accountability will help us to provide an analysis of how accountability became individualised in the Greek public sector following the introduction of performance monitoring as part of the e-government initiative.

2.2 Monitoring and measuring performance
The Weberian pursuit of fairness and transparency often manifested itself in what can be seen as (and experienced as) indifference, excessive and inefficient rules and procedures, and a general sense of alienation. Many would argue that it is these ills of the Weberian bureaucracy, and its associated notion of accountability, that have often been the target of recent public sector reforms. If this is the case then one might ask how accountability is being constituted in a "post-Weberian" public sector. The remainder of this subsection will discuss ways in which performance monitoring has been introduced to change the nature and form of accountability in the public sector. Performance monitoring is deployed in the public sector to ensure that certain performance standards are met in relation to governmental inputs and outputs. Typically it is accompanied by the assumption that what gets measured is what has been done and what is being done (Newman, 2000). In other words, performance monitoring technology aims to capture past performance, and to prescribe what can or ought to be performed in future. Indeed, technology provides information about the outputs of staff which can then be compared and contrasted over time and across groups using statistical analysis. This analysis can then be compiled and disseminated in the form of reports, tables and figures (Clegg, 1998; Hoggett, 1996; Hopper and Macintosh, 1998). Numbers are an inextricable part of the process of governing and, as Rose (1999a) argues, co-constitutive of politics. Further, Townley et al. (2003) suggest that in the context of performance monitoring numbers substitute the "rationality of politics" for the "rationality of planning". This is evident in various ways. First, performance monitoring makes public officials' performance visible and known. This then enables comparisons with other staff or against an ideal. If performance targets are not met, monitoring can legitimate the introduction of changes (Clarke, 2005; du Gay, 2000; Newman, 2000). Second, information that derives from continuous monitoring is thought to enable transparency, not only by making the public officials' performance visible to their supervisors but also by making officials themselves aware of their performance and thereby responsible for it (Miller, 2001; Power, 1994; Townley, 2002; Townley et al., 2003). Finally, performance monitoring technology diffuses the standards against which officials are held accountable/liable (Townley et al., 2003, p. 1053; Wouters and Wilderom, 2008). Hence, performance monitoring technologies are a means to enable not only economic efficiency, customer orientation, and empowerment but also accountability (Fountain, 2001; Townley, 2002).
Further, this indicates that performance monitoring technology may be linked both to external and internal accountability as it diffuses performance standards as well as individual responsibility for meeting them (Chan, 1999; Gendron et al., 2001; Mulgan, 2000; Parker and Gould, 1999).

In spite of these seemingly obvious advantages, several authors have raised concerns with regard to the effects of performance monitoring in the public sector. Many commentators claim that performance monitoring has shifted governmental interest from the formation of policy based upon the public interest to the formation of policy based upon numbers and statistical calculations. (E-)Governing becomes a practice of collecting, measuring, comparing and ranking performances out of which standards are set that satisfy government's targets (economic efficiency, results, etc.) (Clarke and Newman, 1997; Doolin, 2003; Rose, 1999a). For instance, the responsiveness of public services to citizens' needs is regarded as successful when a certain number of citizens are served within a specific timeframe. This quantitative orientation is tied to the assumption that numbers are objective in representing reality because they are devoid of human judgment (Power, 1994; Rose, 1999a). However, as what gets measured and how it is measured is still an outcome of human judgement, this claim is contentious. Further, by setting performance standards, governments not only compare but also homogenise performances (Clarke and Newman, 1997; Miller, 2001; Power, 1994; Townley et al., 2003). Homogenisation is necessary because without common standards comparisons cannot be made. This practice, however, neglects the innumerable, dynamic and different contingencies that affect the provision of public services (e.g. overlapping laws, contradictory targets, emergent actions and improvisations, etc.). Therefore, monitoring and measurement of performance is not merely a means to represent, but also to construct and re-construct, the context of public service delivery (Power, 1994; Townley et al., 2003). Further still, some commentators argue that performance measurement is a normative practice that intends to "make up" and trap government officials in a struggle for continuous improvement (Miller and O'Leary, 1987). Performance monitoring aims to direct public officials' thinking so that their conduct is consistent with performance targets such as efficiency, customer orientation and accountability (du Gay, 1996). Each of these targets intends to mould attitudes so that staff may become efficient, customer-oriented, and accountable. Consequently, performance targets are directed not to how one does what one is doing, but to what one does as an outcome of who one is or who one can become. The diffusion of performance targets increases the awareness of officials as to what they are supposed to achieve, prompts continuous effort and leads to a state of constant formulation of their conduct in order to meet the targets set. Some would argue that rather than being an empowering tool, performance monitoring is likely to discipline (Rose, 1999a), constrain (Hood, 1991) and be an obstacle to the exercise of autonomy by public sector officials (Power, 1994). From this discussion, it seems clear that performance monitoring can be considered a dual-purpose technology in that it enables the exercise of power over public sector staff as well as ensuring a certain mode of accountability (Chan, 1999; Mulgan, 2000; Parker and Gould, 1999; Townley et al., 2003).

2.3 Accountability through performance monitoring?
Some authors have cast doubt on the potential of performance monitoring technology to engender accountability as such and have instead shown the many negative effects that arise from such practices.
Hoggett (1996) argued that an output orientation is likely to make public sector staff focus more on the results than on the process of service provision, whereas Clarke and Newman (1997) argue that the setting of performance


standards is likely to lead to rationalisation of public services at the expense of quality. Similarly, du Gay (2000) suggests that quantitative indicators restrict public officials' decisions, make them set priorities in the way they deliver services and enable risk-taking and self-seeking behaviour. Moreover, there are various studies on public service ethics – with which we sympathise – which argue for the unavoidable use of discretion in public service delivery (Lipsky, 1980). Particularly, Minson (1998) argues that any effort towards the imposition of performance constraints is likely to be in vain because public officials have to judge and make decisions based upon the available options and facts and also in relation to their responsibilities towards the government and citizens. Hoggett (2005) also argued that public officials are burdened with various anxieties that are imposed on them by the citizens and the government alike. Consequently, public officials face ambivalence in their effort to balance private and public interests and to make decisions accordingly. This renders public service delivery, according to Hoggett (2005, p. 186), an "art of judgment" and according to Chapman (1993) a process that is inescapably affected by personal values, emotions and beliefs. The above views illustrate that technology does not necessarily of itself prescribe behaviours, nor does it engender accountability (broadly defined to include both its internal and external orientation).

3. Research methodology

The research draws upon the qualitative paradigm (Creswell, 1994; Denzin and Lincoln, 1998; Maykut and Morehouse, 1994) and particularly social constructionist ideas (Berger and Luckmann, 1966). Social constructionists argue that human beings can only partially become aware of reality. What they know about reality is through their encounters in everyday life. Reality thus becomes "taken for granted" and is the basis for everyday activity (Berger and Luckmann, 1966). As Berger and Luckmann (1966) put it, we live in a world that is already institutionalised. Hence, in order for one to be accepted as a legitimate member of this reality, one needs to internalise its norms and act consistently with them. People, however, are not simply guided by what they have to do but act upon their institutionalised reality and reproduce or reconstruct it in their everyday practices. As the social constructionist stream indicates, the investigation of a social phenomenon is inseparable from its context and from the practices that take place within it and tend to reproduce and reconstruct it. In other words, institutions are sites (Schatzki, 2005) that are ordered in a specific way and are maintained and transformed through everyday practices and interactions. The research reported in this paper examines the Greek e-government project as a social site in which language and practices are inextricably interlinked and lead to the project's ongoing transformation. This is done by presenting the socially constructed rationales and situated practices of those observed and interviewed. The latter enables us to comprehend the limits of what can (and cannot) be considered legitimate in a specific context rather than participants' true intentions per se (Townley et al., 2003). In order to understand the production and reproduction of practices within Citizens Service Centres (CSCs) we undertook interviews and observation as well as document collection.
With regard to the latter, we reviewed laws pertaining to public administration from the 1950s to the present day, government regulations about the function of CSCs and their collaboration with public sector organisations, newspaper articles on public administration along with governmental documents that were

produced by the Ministry of Interior, Public Administration and Decentralization (MIPAD), such as presentations and reports. The research was carried out in three periods. The first period was between October and December 2005, the second from March until May 2006 and the third in August 2006. The first period was mainly but not exclusively devoted to undertaking interviews and observations. We held 17 interviews: with public officials from MIPAD, particularly the project team members who developed and guide the operation of CSCs and administer the performance monitoring system; with two politicians who, due to their position, were implicated in the regulation of CSCs; and with the IT vendor who implemented and maintains the CSCs' technological platform. The above interviews were semi-structured, recorded and lasted for an hour on average. Further, we visited five CSCs located in the two largest Greek cities, Athens and Thessaloniki, in order to interview supervisors and staff, and observe their daily practices. We held 30 interviews in total. Discussions with CSC staff were held both when they were working and during their breaks. Notes of these discussions were recorded in a diary. In some cases, second interviews were undertaken. In order to understand how CSCs functioned we observed their practices for at least four hours a day, each day for a period of two months. Observation mainly focused on understanding the daily practices that are carried out in CSCs, i.e. how CSC staff serve citizens, how they decide about the processing of citizens' requests and how they collaborate with officials from public organisations. The second and third periods were mainly devoted to observing interactions between CSC staff and to undertaking further interviews with both CSC staff and civil servants from those public organisations that had regular interactions with CSCs, such as the transport and communications directorate, municipalities, criminal records department, health organisations, manpower organisation, court of first instance, social security organisation and competent recruitment office (15 interviews were held with civil servants). The civil servants that were interviewed were mainly those officials who came into direct contact with CSC staff and/or who served citizens directly themselves. Observations of the collaboration between CSC staff and civil servants were held mainly during the second period of research, for two hours per day. Notes of observations were kept in a fieldwork diary. Themes and issues emerging were then discussed, compared with the literature, and developed.

4. The establishment of Citizens Service Centres (CSCs)

4.1 The performance monitoring technology
CSCs were established in 2002 by the MIPAD as one-stop shops that mediate between citizens and public service organisations so as to provide continuous and speedy service to citizens. According to their institutional law (article 31 paragraph 1, L.3013/2002), CSCs are administrative units whose mission is the provision of a broad range of administrative information and services to citizens such as certifications, licenses, social security, etc. MIPAD had created 1,054 CSCs across Greece by August 2006, so that every citizen, independent of his/her location and computing skills, has instant access to public services. CSCs comprise a front and a back office and are staffed by people employed on a contract basis (CSC staff).
The front office is responsible for the provision of information and for inputting citizens’ requests into the CSCs’ information system, whereas the back


office is responsible for checking that requests are complete, communicating with the respective public service organisations, transferring citizens' requests and monitoring them. Contrary to what one might consider typical back office operations (Tan and Pan, 2003; Ho, 2002), the CSCs' back office is not able to process citizens' requests. Instead, processing is done by the conventional public service organisations to which citizens' requests are transferred from CSCs, either through fax, courier or e-mail, or by CSC staff themselves taking citizens' requests to the relevant departments. Public service organisations also continue to accept direct requests from citizens. Figure 1 illustrates the interactions that take place between citizens, CSC staff and government officials, distinguishing the CSCs' workflow from the direct transactions citizens can still have with public organisations. A central computer system was designed and introduced to serve all CSCs across Greece. The system maintains all administrative information, administrative forms and citizens' details given in prior requests they made to CSCs. The system also incorporates a management information system (MIS) application that provides performance data for the MIPAD project team responsible for the management of CSCs to review. This data includes, among other things: complete and pending requests in each CSC; the total number of requested administrative services per day in each CSC; the average response time per citizen request; and the number of citizens being served through CSCs during specific time periods. The project team responsible for the functional, technical, economic and human resource issues of CSCs uses the performance data derived from the MIS for various purposes, one of which is the exertion of control and power over public administration (CSCs and public service organisations). The project team members believed that the MIS was an innovative means to provide an objective representation of the performance of CSC staff and public sector employees working in the different departments. As a ministerial official argued:

[Figure 1. The interactions between CSCs, public organisations, citizens and the government: citizens' requests flow to the CSC and on to the relevant public organisation, which returns the administrative product, while performance data from the CSCs' system (MIS) flow to the Ministry of Interior.]
[. . .] technology gives the opportunity for the first time to control public administration [. . .] In the past, performance was measured by supervisors and thus, all estimations were subjective and biased. With the MIS meritocracy is established in the way CSCs staff’s performance is estimated [. . .] With CSCs we also managed to have an online, daily control of public sector staff. This is a major step in Greek public administration.

Through the MIS the MIPAD project team has real time information pertaining to the performance of each CSC and also of the output of each member of CSC staff. As a project team member argued “the MIS opens a field of visibility of what each CSCs’ staff members do on a daily basis”. This visibility is achieved by a unique log identifying each member of staff. The MIS provides data on how many requests each member of staff has placed on the system, how many requests are pending and how long it took for certain citizens’ requests to be completed. As a staff member we interviewed said: We have credentials: a username and a password. So our pending requests, the requests we place, are all visible [. . .] one can check our work and mistakes, one can know about our productivity, when we enter the system and when not.
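Purely as an illustration of the kind of per-request data described above, the sketch below shows one way such a record could be represented. It is not the actual CSC platform; every field name (request_id, staff_username, submitted_at, and so on) is a hypothetical stand-in for the categories the interviewees mention (staff credentials, pending and completed requests, response times).

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class RequestLog:
    """One citizen request, roughly as the MIS is described as recording it (illustrative only)."""
    request_id: str                           # identifier of the citizen's request
    csc_id: str                               # the Citizens Service Centre that logged it
    staff_username: str                       # the unique log/credentials identifying the staff member
    submitted_at: datetime                    # when the request was placed on the system
    completed_at: Optional[datetime] = None   # remains empty while the request is pending

    @property
    def is_pending(self) -> bool:
        return self.completed_at is None

    @property
    def response_days(self) -> Optional[float]:
        """Elapsed time from submission to completion, in days; None while pending."""
        if self.completed_at is None:
            return None
        return (self.completed_at - self.submitted_at).total_seconds() / 86400
```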

With this information, the MIPAD project team has the ability to measure the productivity of each CSC and each member of staff. Governmental officials, supervisors and CSC staff all stated that productivity is calculated by a ratio comprising the daily number of citizens’ requests placed in each CSC divided by the number of staff who work in the specific centre. The information that the MIPAD project team acquires about productivity is central to the formation of performance standards. Based on all the performance data collected from CSCs across Greece, an average number is calculated for each of the key performance standards, such as productivity, response time and number of citizens served. As an official from MIPAD said, it is against this average that each CSC is evaluated: [. . .] by observing what is happening we know how long it took for each CSC to respond to a request. We know that for one it took 3 days, another 4, another 5, another 23 days and we see for that which made 23 days to respond there is a problem. So when we see that there is a difference with the average we realise that a service has a problem and the minister intervenes [. . .]

When performance criteria are calculated, MIPAD staff disseminate a report to increase CSC staff’s awareness about their performance in relation to the average norms. These reports are distributed on a six-month basis. As a project team member said: With the data we concentrate, we send to CSCs reports. We make them understand that we know what takes place and who works and who doesn’t. And I think that this is effective, because CSCS’ staff become aware that if they don’t work they will be dismissed.

CSC staff claimed the reports were both a stimulus to work harder when they were identified as having underperformed and a source of competition between CSCs. The reports provide rankings of CSCs from the most to the least productive, information about the number of completed and pending requests, along with the average response time for the delivery of each public service. They include comparisons between CSCs. Thus, the MIS provides for the construction of specific performance targets and increases the awareness of a CSC and its staff of both their performance and accountability.
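The calculations described in this subsection (productivity as the daily number of requests per member of staff, comparison against a national average, and a ranking from most to least productive CSC) could be sketched along the following lines. This is a hedged illustration under assumed data layouts and helper names, not MIPAD's actual code, and the figures are toy values.

```python
from statistics import mean
from typing import Dict, List, Tuple

def productivity(daily_requests: int, staff_count: int) -> float:
    """Daily number of requests placed in a CSC divided by the number of staff working there."""
    return daily_requests / staff_count

def ranking_report(cscs: Dict[str, dict]) -> List[Tuple[str, float]]:
    """Rank CSCs from most to least productive and flag response times above the national average.

    `cscs` maps a CSC identifier to {"daily_requests": int, "staff": int,
    "response_days": [float, ...]} -- an assumed layout, for illustration only.
    """
    scores = {name: productivity(d["daily_requests"], d["staff"]) for name, d in cscs.items()}
    national_avg = mean(t for d in cscs.values() for t in d["response_days"])
    for name, d in cscs.items():
        avg = mean(d["response_days"])
        if avg > national_avg:  # "a difference with the average" is what prompts intervention
            print(f"{name}: average response {avg:.1f} days exceeds the national average of {national_avg:.1f} days")
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy figures only; the real data were never disclosed to us in this form.
print(ranking_report({
    "CSC Athens-1":       {"daily_requests": 60, "staff": 4, "response_days": [3, 4, 5]},
    "CSC Thessaloniki-2": {"daily_requests": 30, "staff": 5, "response_days": [20, 23, 25]},
}))  # [('CSC Athens-1', 15.0), ('CSC Thessaloniki-2', 6.0)]
```

Note that a calculation of this kind counts every submitted request equally, which is one reason why the duplicate and incomplete submissions described in Section 4.2 could inflate recorded productivity.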


4.2 The actual enactment of performance monitoring in CSCs
When we look at the actual practices that performance monitoring enacted, a complex and diverse set of relations emerges. Compliance was on some occasions achieved as a consequence of CSC staff wanting to increase their productivity. However, such compliance was not always achieved as "expected". Staff found ways to trick the system, submitted incomplete requests and often developed client relations with civil servants. Each of these practices is outlined as follows. CSC staff indicated that despite the carefully designed MIS, they were able to trick it in order to boost their recorded productivity. For example, CSC staff would often submit fake requests by using friends' and relatives' personal details in order to increase their productivity. They would also submit the same request more than once to the system. As a public sector employee from the Criminal Records Department explained:

Sometimes CSCs submit the same request more than once in the same day but at different times. This repetition is done by CSCs in order to show that they have increased productivity. Whether the request is the same or not is not visible to anyone.

Further, in order to meet the performance standards set by the government, staff would submit requests that were incomplete or incorrect, even when they knew in advance that they “[. . .] are likely to fail being processed”. A public sector employee attributed this practice to staff’s lack of knowledge and interest in the quality of their work: [. . .] thousands of requests come to us with many mistakes because either employees lack knowledge or they are not bothered to check what they submit [. . .] Employees are not interested; they view it as a job that has to be done quickly in order to increase their requests and thus their productivity.

On the other hand, a CSC staff member indicated that staff did not feel responsible for the results of their practices: “[. . .] one of the biggest deficits of the CSCs is that it lacks personal responsibility. No one is accountable for one’s work”. The effectiveness of CSCs depends upon the collaboration of civil servants. If civil servants delay or fail to process the requests submitted by CSC staff, then the performance of CSCs will suffer. CSC staff also felt dependent upon civil servants, since their performance was contingent upon the mode and rate of work of the civil servants. Such a dependency conditioned the need for the development of good relations between CSC staff and civil servants. For CSC staff such relationships are important “in order for us to do our job quickly and well”. Furthermore, the dependency of CSC staff on civil servants conditioned the rise of client relations between the two occupational groups. Typically, “clientelism” refers to relationships between unequal parties, where resources such as money or work are exchanged so as to achieve a mutually beneficial outcome (Brinkerhoff and Goldsmith, 2004; Kaufman, 1974; Lemarchand and Legg, 1972). In our case, client relations took the form of CSC staff undertaking work for public sector staff and in return having their requests processed, as was evident from the two quotes below, the first from a CSC staff member and the second from a government official: I have done various personal favours to civil servants [. . .] They used their position and my dependency on them so as to promote their personal interests. They exploited that. But I did the same. [. . .] a give and take relationship that is above the law yet one that we have to do in order to survive and be successful.

The above extracts indicate that the way in which performance monitoring data was used by MIPAD provided for new modes of accountability and control. However, such control was only ever partial, and further generated practices that ran counter to those expected. These issues and their implications will be discussed in what follows.

5. Discussion: monitoring, control and accountability

This penultimate section discusses the issues that emerged from our case with reference to the phenomena of performance monitoring technology and accountability. Our discussion focuses on two emerging themes. Section 5.1 discusses the ways in which new forms of control may be put in place to engender greater accountability. Section 5.2 discusses the limits to seeking accountability through performance monitoring technology.

5.1 The enactment of control (and resistance)
From our discussion above, it seems that performance monitoring technology was used in essentially three interdependent ways:
(1) for continuous monitoring;
(2) for the estimation of performance standards; and
(3) for the creation of reports on CSC staff's performance.
Each of these is discussed in Sections 5.1.1-5.1.3 as follows.
5.1.1 Continuous performance monitoring as a means of control. The rationale for the deployment of performance monitoring technology by the MIPAD project team was to assist in controlling the practices and conduct of CSC staff more tightly (Townley, 2002), and thereby render the provision of public sector services more accountable (Mouzelis, 2002). Performance monitoring technology was central to this as it seemingly enabled continuous and real-time monitoring of work undertaken and completed by CSC staff. The visibility that the system provided, through continuous monitoring, allowed members of MIPAD a detailed insight into the number of logged and completed requests each staff member had undertaken, their pace of work, not to mention when staff were at work. This constant monitoring, one could argue, provided data that formed the basis for an ongoing fostering of accountability amongst CSC staff and which Rose (1999a, p. 135) refers to as a seemingly objective and ongoing "regime of visibility" (Chan, 1999; Keohane, 2002; Mulgan, 2000; Parker and Gould, 1999). Continuous performance monitoring is an illustration of the use of technology as a means of electronic surveillance, the exertion of power and the imposition of discipline in the workplace and, as such, a manifestation of the relationship between accountability and power (Chan, 1999; Lyon, 1993; Mulgan, 2000; Parker and Gould, 1999; Zuboff, 1988), but also counter-power (or resistance). However, although the continuous monitoring provides visibility (in the terms being monitored) it does not necessarily provide a transparency of process. This means that the veiled parts of the process often afford many opportunities for innovative forms of resistance, mentioned in the previous section, such as mimicking and exaggeration, while objectives and targets are seemingly met. The panopticon is never complete and indeed often engenders more subtle forms of resistance. We discuss this further in Section 5.1.2.


5.1.2 Comparing performance. By comparing the time taken to perform tasks across CSCs, the performance monitoring technology was used to identify average performance indicators (Clarke, 2005). For the project team, responsible for the management of CSCs, some of the estimated indicators are related to the average productivity (i.e. the number of requests submitted to the system per member of CSC staff per day) and the average response time for the completion of requests. Technological monitoring therefore was assumed to be able to account for how "productive" and how "responsive" each staff member from each CSC was. This indicates, first, that performance standards are the outcome of complex socio-technical constructions (Berger and Luckmann, 1966; Clarke, 2005; Power, 1994); as such, they are by necessity that which is socially and technically feasible. Second, it illustrates that the construction of the "good performer" reflects solely quantitative outputs (e.g. number of citizens served, estimated time of service provision, etc.) and mostly neglects the public service ideals embodied in the Weberian bureaucratic ideal, such as impartiality and impersonality, devotion and professionalism, ethos and accountability (Clarke and Newman, 1997; Hoggett, 2005; Weber, 1948). Third, the construction of performance standards illustrates a shift of accountability (in the context of e-government) from a systemic accountability based on legal, hierarchical and formal/normative accountability (Weber, 1948) towards an output-oriented or performance-based form of accountability (Gendron et al., 2001; Jos and Tompkins, 2004; Keohane, 2002; Kloot, 1999; Parker and Gould, 1999; Ryan and Walsh, 2004). In essence, accountability becomes associated far more closely with the individual rather than with the Weberian bureaucracy. This is a very significant shift with intended and unintended consequences.
5.1.3 The self-regulation of performance. Apart from being a means to exercise power over staff, performance monitoring data also provided for the possibility that staff themselves could draw upon this data to exercise control over themselves. Staff were aware that they were the object of continuous observation and comparison through the performance monitoring technology and, thus, at least normatively came to internalise the performance standards they believed they had to achieve (du Gay, 1996; Clegg, 1998; Doolin, 2003; Hoggett, 1996). The generation of reports to increase staff's awareness about their personal performance indicates that in the context of e-government accountability is not solely dependent upon standards but rather upon accountors' awareness, discretion and responsibility for practices and processes for which they are held responsible (Chan, 1999; Choudhury and Ahmed, 2002; Gendron et al., 2001; Mulgan, 2000; Pratchett and Wingfield, 1996). As Rose (1999a, p. 241) has argued, the conduct of individuals is governed through subtle power mechanisms that entail "systematic self-monitoring and record keeping", construct "desired and undesired behaviour" and operate "not through airy and overambitious hopes, but through little steps with achievable goals, each followed by rewards". As such, we suggest that the use of performance monitoring technology constructed a normative context that guided what staff considered to be acceptable or unacceptable practice. Performance standards were not just numbers representing staff's outputs but were also indicators of how good a performer each staff member was.
When CSC staff became aware of their own and their colleagues’ performances, as compared to average performance standards, they were likely to get involved in the process of continuous comparisons and evaluations in terms of where each person’s performance was ranked in relation to others. This has the implication that staff direct their attention to that which is necessary to be accountable to performance standards rather than to the

necessity of serving citizens (Choudhury and Ahmed, 2002; du Gay, 1996; Gendron et al., 2001; Miller and O'Leary, 1987). Indeed, the MIPAD project team believed that performance standards were an "effective" way to make the staff manage their own behaviour; to make them "competitive" and "motivate them to work harder". Thus, by rendering officials knowledgeable about their own performance and that of others, staff became as responsive, if not more so, to measures as they were to citizens (Salaman and Storey, 2008; Gleadle et al., 2008; Miller, 2001; Miller and O'Leary, 1987; Townley, 2002, 2003). Overall, this process of self-regulation represents a shift in accountability from the hierarchy to an individual's performance. This has the implication that accounting for performance in this way is likely to lead to the creation of opportunistic and self-serving behaviours – the exact behaviours that the Weberian bureaucracy has sought to eliminate. The extent to which such measures are good proxies for the requirements of citizens then becomes fundamental.

5.2 The production of accountability and resistance
In many respects, our case illustrated that the degree to which the performance monitoring technology is able to control the practices and conduct of employees, and hence ensure accountability, is limited. This second discussion section will examine the inherently limited nature of accountability through performance technology and the establishment of an instrumental rationality.
5.2.1 Opening up new possibilities for action. In the Greek case, performance monitoring technology opened up a field of multiple possibilities for CSC staff to act in ways other than those envisaged by the performance measures. The many examples of the exercise of discretion by staff in how they do their work highlight that technologies of control, such as performance monitoring technology, can never enable the complete regulation of activities. As such, accountability can only ever be partial. It also indicates that, contrary to what various authors have argued – namely that performance measuring is often a disciplinary and constraining tool that obstructs public officials' initiatives (Hood, 1991; Power, 1994; Rose, 1999a) – the case revealed that staff in the CSCs often resisted the instrumental logic that surrounded performance monitoring. They acted, in part, to pursue what they thought to be suitable, appropriate and worthwhile services (Hirschheim and Newman, 1988; Markus, 1983; Townley et al., 2003). In some cases, the space opened through the partial visibility enabled staff to use their discretion to engage in activities that simultaneously met the output-orientated performance measures and provided a service to the citizen (Lipsky, 1980; Townley et al., 2003). In other cases, staff used the partial visibility to subvert the intentions of control implied in the monitoring, thereby creating possibilities for ways of acting in the interest of alternative intentions.
5.2.2 Diffusing an instrumental rationality. The case also highlighted how staff would often instrumentally manage their performance by focusing on the quantitative outputs of their work (Hoggett, 1996). For example, they would input false, incorrect and incomplete requests to the system, transfer the same request multiple times (Hirschheim and Newman, 1988; Markus, 1983) or develop "client relations" with civil servants.
Client relations have historically been prevalent in the Greek public sector between public officials and citizens, with the former trying to take advantage of the dependency of the citizens on the position of the official in order to gain economic, political or emotional benefits. With the establishment of CSCs client relations were not eliminated, but re-emerged in a slightly different form between CSC staff and public


sector employees, as they were still considered to be an important way to get the job done "quickly and well" and also a way of securing personal and occupational interests. Here, staff would ensure that they had recorded a satisfactory number of requests to avoid identification as underperforming. The rationale CSC staff used to justify their actions indicates the degree of autonomy they exercised to judge, decide and act upon the situations they faced, prioritising, in many cases, their personal interests at the expense of their responsibilities towards government and citizens alike (Clarke and Newman, 1997; Townley et al., 2003). The above practices illustrate, first, that the quantitative orientation of performance monitoring technology triggers quantitatively oriented practices (Hoggett, 2005) and, second, that pressure for increased performance does not necessarily lead to increased outputs but possibly instead to increased efforts to manipulate outputs so that they satisfy performance target measures. This raises various concerns about public service accountability. The quantitative orientation of the performance standards led to CSC staff undertaking practices that met their performance obligations, while allowing them to escape from the power of technology and ensure their job security (Gendron et al., 2001). These consequences were neither intended by the project team of MIPAD nor necessarily in the interests of the citizens; rather, they were unintended consequences shaped by the staff responding as best they could, within the bounds of their discretion, to the situations they faced (du Gay, 2000; Lipsky, 1980; Power, 1994). Overall, our case study shows that the performance monitoring system acted as a mechanism to establish the legitimacy of instrumental rationality amongst CSC staff (Townley et al., 2003, p. 1064) as it became the basis upon which they drew to account for their practices. This supports the argument that performance monitoring, in the name of accountability, is likely to substitute morality for technical rationality (Townley et al., 2003). Further, such an instrumental rationality is in contrast to the aims of a Weberian bureaucracy that provides for an equitable rationality.

6. Conclusion: locating accountability?

The question of accountability has been a long-standing debate amongst public administration practitioners and scholars alike. This debate is often characterised as an opposition between two positions (Martinez, 1998). At the one end is the Weberian view that accountability is achieved through procedural justice as embodied in bureaucracy. In this notion, accountability resides in the design of the organisation (hierarchy, law and impersonality), a view defended by Finer (1941) in what was known as the Finer-Friedrich debate (Martinez, 1998). At the other end is the view that accountability is something that the individual should take responsibility for, especially with regard to efficiency and effectiveness. This is a view that is strongly associated with the "New Public Administration" movement. The traditional argument against the Weberian bureaucracy is that it is impersonal, bureaucratic, inefficient and alienating. It is argued that, in the end, everybody is responsible and nobody is responsible, since questions of accountability are always deferred to an impersonal "elsewhere" such as to the hierarchy, or to rules and procedures.
In contrast to this, performance monitoring is seen as a way to make the individual official accountable, to secure efficiency and to ensure service to the citizen. However, our research has shown that this is not necessarily the case. Indeed, performance monitoring can lead to a narrowing down of accountability and the

emergence of an instrumental rationality, in which targets are met, yet the public service fails to meet its purpose. As such, it reproduces the problem it was supposed to address. In our case, performance monitoring was not the sole dimension that affected the conduct of CSC staff. Clearly, the broader public service context played a significant role. These observations indicate that performance monitoring technology has added complexity to the provision of public services, owing to the new forms of output-oriented accountability that exist in conjunction with the conventional legal, hierarchical and professional responsibilities which already shape the practices of officials (Bovens et al., 2001; Jos and Tompkins, 2004; Power, 1994; Weber, 1948). As was evident in our case, staff have to routinely confront multiple and often contradictory interests, including government performance measures, serving citizens effectively, encouraging the collaboration of public sector staff, not to mention securing their own jobs as they respond to citizens' requests. Consequently, no technology or process map, however well designed, could ever fully envisage how things will actually work. Thus, performance systems are always limited in the extent to which they are able to account for the richness of everyday practices (Hoggett, 2005). Further, our discussion has pointed to the propensity for performance-monitoring technology to be individualising, which may result in the emergence of an instrumental rationality (Chan, 1999; Townley et al., 2003). In the context of the provision of public services, we suggest that such an instrumental rationality is likely to lead to the interests of public sector staff taking precedence over the interests of citizens. This, of course, begs the question "Is this the sort of accountability that we expect in the public sector?"

References

Alford, J. and Baird, J. (1997), "Performance monitoring in the Australian public service: a government-wide analysis", Public Money & Management, Vol. 17 No. 2, pp. 49-58.
Barberis, P. (1998), "The new public management and a new accountability", Public Administration, Vol. 76, Autumn, pp. 451-70.
Basu, S. (2004), "E-government and developing countries: an overview", International Review of Law Computers, Vol. 18 No. 1, pp. 109-32.
Behn, R. (2003), "Why measure performance? Different purposes require different measures", Public Administration Review, Vol. 63 No. 5, pp. 586-606.
Berger, P. and Luckmann, T. (1966), The Social Construction of Reality: A Treatise in the Sociology of Knowledge, Penguin Books, Harmondsworth.
Bloomfield, B. and Hayes, N. (2004), "Modernisation and the joining-up of local government services in the UK: boundaries, knowledge and technology", paper presented at the Workshop on Information, Knowledge and Management: Reassessing the Role of ICTs in Private and Public Organisations, Superior School of Public Administration, Bologna, available at: www.lums.lancs.ac.uk/publications/viewpdf/002166/
Boland, T. and Fowler, A. (2000), "A systems perspective of performance management in public sector organisations", The International Journal of Public Sector Management, Vol. 13 No. 5, pp. 417-46.
Bovens, M., Schillemans, T. and 'T Hart, P. (2008), "Does public accountability work? An assessment tool", Public Administration, Vol. 86 No. 1, pp. 225-42.
Bovens, M., 'T Hart, P. and Peters, B.G. (Eds) (2001), Success and Failure in Public Governance: A Comparative Analysis, Edward Elgar, Cheltenham.

Brinkerhoff, D. and Goldsmith, A. (2004), "Good governance, clientelism and patrimonialism: new perspectives on old problems", International Public Management Journal, Vol. 7 No. 2, pp. 163-85.
Chadwick, A. and May, C. (2003), "Interaction between states and citizens in the age of the internet: e-government in the United States, Britain and the European Union", Governance: An International Journal of Policy, Administration and Institutions, Vol. 16 No. 2, pp. 271-300.
Chan, J. (1999), "Governing police practice: limits of the new accountability", British Journal of Sociology, Vol. 50 No. 2, pp. 251-70.
Chapman, R. (Ed.) (1993), Ethics in Public Service, Edinburgh University Press, Edinburgh, pp. 155-71.
Choudhury, E. and Ahmed, S. (2002), "The shifting meaning of governance: public accountability of third sector organizations in an emergent global regime", International Journal of Public Administration, Vol. 25 No. 4, pp. 561-88.
Ciborra, C. (2003), "Unveiling e-government and development: governing at a distance in the new war", available at: http://is.lse.ac.uk/wp/pdf/WP126.PDF
Clarke, J. (2005), "Performing for the public: doubt, desire, and the evaluation of public services", in du Gay, P. (Ed.), The Values of Bureaucracy, Oxford University Press, Oxford, pp. 211-32.
Clarke, J. and Newman, J. (1997), The Managerial State: Power, Politics and Ideology in the Remaking of Social Welfare, Sage, London.
Clegg, S. (1998), "Foucault, power and organisations", in McKinley, A. and Starkey, K. (Eds), Foucault, Management and Organisation Theory, Sage, London, pp. 29-48.
Cowell, R. and Martin, S. (2003), "The joy of joining-up: modes of integrating the local government modernization agenda", Environment and Planning C: Government and Policy, Vol. 21 No. 2, pp. 159-79.
Creswell, J. (1994), Research Design: Qualitative and Quantitative Approaches, Sage, London.
Curthoys, N., Eckersley, P. and Jackson, P. (2003), "E-government", in Jackson, P. (Ed.), E-Business Fundamentals, Routledge, London, pp. 227-57.
Denzin, N. and Lincoln, Y. (1998), The Landscape of Qualitative Research: Theories and Issues, Sage, London.
Doolin, B. (2003), "Narratives of change: discourse, technology and organization", Organization, Vol. 10 No. 4, pp. 751-70.
du Gay, P. (1996), "Organizing identity: entrepreneurial governance and public management", in Hall, S. and du Gay, P. (Eds), Questions of Cultural Identity, Sage, London, pp. 151-69.
du Gay, P. (2000), In Praise of Bureaucracy: Weber, Organization, Ethics, Sage, London.
Finer, H. (1941), "Administrative responsibility in democratic government", Public Administration Review, Vol. 1 No. 4, pp. 335-50.
Fountain, J. (2001), Building the Virtual State: Information Technology and Institutional Change, The Brookings Institution, Washington, DC.
Frederickson, G. (1993), "Ethics and public administration: some assertions", in Frederickson, G. (Ed.), Ethics and Public Administration, M.E. Sharpe, Armonk, NY.
Fry, B.R. and Nigro, L.G. (1996), "Max Weber and US public administration: the administrator as neutral servant", Journal of Management History, Vol. 2 No. 1, pp. 37-46.
Gendron, Y., Cooper, D.J. and Townley, B. (2001), "In the name of accountability: state auditing, independence and new public management", Accounting, Auditing & Accountability Journal, Vol. 14 No. 3, pp. 278-310.
Gleadle, P., Cornelius, N. and Pezet, E. (2008), "Enterprising selves: how governmentality meets agency", Organization, Vol. 15 No. 3, pp. 307-13.
Hirschheim, R. and Newman, M. (1988), "Information systems and user resistance: theory and practice", The Computer Journal, Vol. 31 No. 5, pp. 398-408.
Ho, T.A. (2002), "Reinventing local governments and the e-government initiative", Public Administration Review, Vol. 62 No. 4, pp. 434-44.
Hoggett, P. (1996), "New modes of control in the public service", Public Administration, Vol. 74 No. 1, pp. 9-32.
Hoggett, P. (2005), "A service to the public: the containment of ethical and moral conflicts by public bureaucracies", in du Gay, P. (Ed.), The Values of Bureaucracy, Oxford University Press, Oxford.
Hood, C. (1991), "A public management for all seasons?", Public Administration, Vol. 69 No. 1, pp. 3-19.
Hopper, T. and Macintosh, N. (1998), "Management accounting numbers: freedom or prison – Geneen versus Foucault", in McKinley, A. and Starkey, K. (Eds), Foucault, Management and Organisation Theory, Sage, London.
Illsley, B., Lynch, B. and Lloyd, M.G. (2000), "From pillar to post? A one-stop shop approach to planning delivery", Planning Theory & Practice, Vol. 1 No. 1, pp. 111-22.
Jaeger, P. and Thompson, K. (2004), "Social information behavior and the democratic process: information poverty, normative behavior, and electronic government in the United States", Library & Information Science Research, Vol. 26 No. 1, pp. 94-107.
Jos, P. and Tompkins, M. (2004), "The accountability paradox in an age of reinvention: the perennial problem of preserving character and judgment", Administration & Society, Vol. 36, pp. 255-81.
Kaufman, R. (1974), "The patron-client concept and macro-politics: prospects and problems", Comparative Studies in Society and History, Vol. 16 No. 3, pp. 284-303.
Keohane, R. (2002), "Global governance and democratic accountability", available at: www.lse.ac.uk/collections/LSEPublicLecturesAndEvents/pdf/20020701t1531t001.pdf
Kloot, L. (1999), "Performance measurement and accountability in Victorian local government", The International Journal of Public Sector Management, Vol. 12 No. 7, pp. 565-83.
Lemarchand, R. and Legg, K. (1972), "Political clientelism and development: a preliminary analysis", Comparative Politics, Vol. 4 No. 2, pp. 149-78.
Lenk, K. (2002), "Electronic service delivery – a driver of public sector modernization", Information Polity, Vol. 7, pp. 87-96.
Lipsky, M. (1980), Street-level Bureaucracy: Dilemmas of the Individual in Public Services, Russell Sage Foundation, New York, NY.
Lyon, D. (1993), "An electronic panopticon? A sociological critique of surveillance theory", The Sociological Review, Vol. 41 No. 4, pp. 653-78.
Markus, L. (1983), "Power, politics and MIS implementation", Communications of the ACM, Vol. 26 No. 6, pp. 430-44.
Martinez, J.M. (1998), "Law versus ethics: reconciling two concepts of public service ethics", Administration & Society, Vol. 29, p. 690.
Maykut, P. and Morehouse, R. (1994), Beginning Qualitative Research: A Philosophical and Practical Guide, The Falmer Press, London.
Miller, P. (2001), "Governing by numbers: why calculative practices matter", Social Research, Vol. 68 No. 2, pp. 379-96.
Miller, P. and O'Leary, T. (1987), "Accounting and the construction of the governable person", Accounting, Organisations and Society, Vol. 12 No. 3, pp. 235-65.
Minson, J. (1998), "Ethics in the service of the state", in Dean, M. and Hindess, B. (Eds), Governing Australia: Studies in Contemporary Rationalities of Government, Cambridge University Press, Cambridge, pp. 47-69.
Moon, J. (2002), "The evolution of e-government among municipalities: rhetoric or reality?", Public Administration Review, Vol. 62 No. 4, pp. 424-33.
Mouzelis, N. (2002), From Change to Modernization: State's Intervention, Policy, Society, Culture, Theory, Themelio, Athens.
Mulgan, R. (2000), "Accountability: an ever-expanding concept?", Public Administration, Vol. 78 No. 3, pp. 555-73.
Newman, J. (2000), "Beyond the new public management? Modernizing public services", in Clarke, J., Gewirtz, S. and McLaughlin, E. (Eds), New Managerialism, New Welfare?, Sage, London, pp. 45-61.
Parker, L. and Gould, G. (1999), "Changing public sector accountability: critiquing new directions", Accounting Forum, Vol. 23 No. 2, pp. 109-35.
Pina, V., Torres, L. and Royo, S. (2007), "Are ICTs improving transparency and accountability in the EU regional and local governments? An empirical study", Public Administration, Vol. 85 No. 2, pp. 449-72.
Power, M. (1994), The Audit Explosion, Demos, London.
Pratchett, L. and Wingfield, M. (1996), "Petty bureaucracy and woolly-minded liberalism? The changing ethos of local government officers", Public Administration, Vol. 74, Winter, pp. 639-56.
Propper, C. and Wilson, D. (2003), "The use and usefulness of performance measures in the public sector", Oxford Review of Economic Policy, Vol. 19 No. 2, pp. 250-66.
Roberts, N. (2002), "Keeping public officials accountable through dialogue: resolving the accountability paradox", Public Administration Review, Vol. 62 No. 6, pp. 658-69.
Rose, N. (1999a), Governing the Soul: The Shaping of the Private Self, 2nd ed., Free Association Books, London.
Rose, N. (1999b), Powers of Freedom: Reframing Political Thought, Cambridge University Press, Cambridge.
Ryan, C. and Walsh, P. (2004), "Collaboration of public sector agencies: reporting and accountability challenges", The International Journal of Public Sector Management, Vol. 17 Nos 6/7, pp. 621-31.
Salaman, G. and Storey, J. (2008), "Understanding enterprise", Organization, Vol. 15 No. 3, pp. 315-23.
Scales, B. (1997), "Performance monitoring public services in Australia", Australian Journal of Public Administration, Vol. 56 No. 1, pp. 100-9.
Schatzki, T. (2005), "The sites of organization", Organization Studies, Vol. 26 No. 3, pp. 465-84.
Smith, P. (1998), "Outcome related performance indicators and organisational control in the public sector", British Journal of Management, Vol. 4, pp. 135-91.
Tan, C. and Pan, S. (2003), "Managing e-transformation in the public sector: an e-government study of the Inland Revenue Authority of Singapore", European Journal of Information Systems, Vol. 12 No. 4, pp. 269-81.
Townley, B. (2002), "The role of competing rationalities in institutional change", Academy of Management Journal, Vol. 45 No. 1, pp. 163-79.
Townley, B., Cooper, D.J. and Oakes, L. (2003), "Performance measures and the rationalisation of organisations", Organization Studies, Vol. 24 No. 7, pp. 1045-71.
Vintar, M., Kunstelj, M., Decman, M. and Bercic, B. (2003), "Development of e-government in Slovenia", Information Polity, Vol. 8 Nos 3/4, pp. 133-49.
von Haldenwang, C. (2004), "Electronic government (e-government) and development", The European Journal of Development Research, Vol. 16 No. 2, pp. 417-32.
Weber, M. (1948), "Bureaucracy", in Gerth, H.H. and Mills, C.W. (Eds), From Max Weber: Essays in Sociology, Routledge & Kegan Paul, London.
Weber, M. (1992 [1919]), "Selections from politics as a vocation", in Rubin, L.G. and Rubin, C.T. (Eds), The Quest for Justice: Readings in Political Ethics, 3rd ed., Ginn, Needham Heights, MA, pp. 273-83.
Wholey, J. and Hatry, H. (1992), "The case for performance monitoring", Public Administration Review, Vol. 52 No. 6, pp. 604-10.
Wouters, M. and Wilderom, C. (2008), "Developing performance-measurement systems as enabling formalization: a longitudinal field study of a logistics department", Accounting, Organisations and Society, Vol. 33, pp. 488-516.
Zuboff, S. (1988), In the Age of the Smart Machine: The Future of Work and Power, Basic Books, New York, NY.

Further reading

Farrell, C. and Morris, J. (2003), "The neo-bureaucratic state: professionals, managers and professional managers in schools, general practices and social work", Organization, Vol. 10 No. 1, pp. 129-56.
Findlay, P. and Newton, T. (1998), "Re-framing Foucault: the case of appraisal", in McKinley, A. and Starkey, K. (Eds), Foucault, Management and Organisation Theory, Sage, London, pp. 211-29.
Finn, P. (1993), "The law and the officials", in Chapman, R. (Ed.), Ethics in Public Service, Edinburgh University Press, Edinburgh, pp. 135-45.
Hall, C. and Rimmer, S. (1994), "Performance monitoring and public sector contracting", Australian Journal of Public Administration, Vol. 53 No. 4, pp. 453-61.
Heeks, R. (1999), Reinventing Government in the Information Age: International Practice in IT-Enabled Public Sector Reform, Routledge, London.
Henry, G. and Dickey, K. (1993), "Implementing performance monitoring: a research and development approach", Public Administration Review, Vol. 53 No. 3, pp. 203-12.
Kernaghan, K. (1993), "Promoting public service ethics: the codification option", in Chapman, R. (Ed.), Ethics in Public Service, Edinburgh University Press, Edinburgh.
Kumar, R. and Best, M. (2006), "Impact and sustainability of e-government services in developing countries: lessons learned from Tamil Nadu, India", The Information Society, Vol. 22 No. 1, pp. 1-12.
Osborne, D. and Gaebler, T. (1992), Reinventing Government: How the Entrepreneurial Spirit is Transforming the Public Sector, Addison-Wesley, New York, NY.

Corresponding author

Niall Hayes can be contacted at: [email protected]
