Accountabilities, automations, dysfunctions, and values: ICTs in the public sector

Matthew L. Smith [1]
International Development Research Centre, Ottawa, Canada
London School of Economics and Political Science, London, UK

Merel E. Noorman [2]
Council for Social Development, The Hague, The Netherlands

Aaron K. Martin
London School of Economics and Political Science, London, UK
Abstract

In this paper we examine the ways in which public sector accountability is affected by the implementation of new and emerging information and communication technologies (ICTs). We focus on these issues to uncover how new technologies are altering conventional modes of behaviour within the public sector: shedding light on certain areas of bureaucratic practice, obscuring others, enhancing accountability, and exacerbating dysfunctions. Thus, the question considered in this paper is: how do new ICTs, which potentially alter and transform the nature and processes of the bureaucracy, influence the accountability equation? To answer this question, we explore a range of empirical examples of e-government implementations, from low levels of automation, such as simple transactions, to high levels of automation, such as fingerprint analysis technologies. Drawing on these empirical examples, we develop a taxonomy of ICT-exacerbated accountability dysfunctions. In an attempt to move forward constructively, we then discuss potential accountability arrangements for different types of e-government applications, such that the benefits of new technologies can be realised while mitigating the potential accountability downfalls and dysfunctions. In concluding, we stress the necessity of striking a balance between deriving the benefits of technology and attending to competing public values, given the tendency for ICTs to exacerbate accountability dysfunctions.

Key words: accountability, automation, decision-making, dysfunction, e-government, information and communication technologies (ICTs), public sector
1. Introduction

Rhetoric and reality are often at odds. For example, the power of new information and communication technologies (ICTs) to promote transparency and accountability in the public sector (commonly referred to as e-government) is often asserted (Bhatnagar, 2004; Meijer, 2007; Wong & Welch, 2004). However, the results of the interactions between technology and public sector organisations have proven more differentiated and nuanced (Dunleavy, Margetts, Bastow, & Tinkler, 2006; Fountain, 2002; West, 2005). This is not surprising given the complex nature of social organisations and the inherent flexibilities of new and emerging technologies. Indeed, many authors
[1] Corresponding author. E-mail: [email protected]
[2] In Dutch, Raad voor Maatschappelijke Ontwikkelingen
have noted the difficulties and limitations of applying technologies to improving accountability in the private and public sectors (Barata & Cain, 2001).
E-government interventions include technologies explicitly introduced as a means to enhance transparency and accountability, as well as implementations intended to increase efficiency and effectiveness, enhance internal communications, and bring other benefits to the public. The introduction of these new technologies is altering the operation of the public sector in significant ways while raising important issues of accountability. For example, the British government continues to pursue a biometric-based identity (ID) scheme for its citizens which has raised a series of concerns about privacy and potential future abuses (LSE Identity Project, 2005), among others. However, questions regarding accountability arrangements for the future ID system have largely been neglected to date and are likely to remain ignored as government officials stay fixated on cost and political aspects of the scheme.
In this discussion piece we reflect on the introduction of ICTs in the public sector and their influence on and interaction with accountability processes. In doing so, we focus mostly on problematic areas of ICTs and accountabilities, such as where ICTs intrude upon transparency or where they exacerbate accountability dysfunctions. This is not to imply that ICTs do not at times enhance transparency and other accountability-related outcomes, or that a lack of transparency and accountability dysfunctions are novel to the introduction of ICTs. Rather, we focus on these issues to uncover how new technologies have altered the conventional modes of behaviour, shedding light on certain areas of bureaucratic practice, obscuring others, enhancing accountability, and exacerbating dysfunctions. Thus, the question considered in this paper is: how do new ICTs, which potentially alter and transform the nature and processes of the bureaucracy, influence the accountability equation? Overall, our goal is to contribute to the understanding of how ICTs can be satisfactorily integrated into the public sector so that the benefits from new technologies can be realised, while mitigating the potential accountability downfalls.
The paper proceeds as follows. In the following section, we introduce the core concepts of accountability and explore certain problematic aspects of accountability functions. Included in this discussion is how ICTs are theoretically implicated in public sector accountability. In section 3 we consider several empirical examples of different types of ICTs in the public sector, from simple e-services to more complex ones that automate decision-making, with a focus on how accountability processes are altered, shifted, improved, and diminished. In particular, we are interested in how and
in what circumstances different types of e-services alter accountability, in the hopes of improving our understanding of how best to restore it. In section 4, we present a tentative taxonomy that draws from the empirical examples, to structure our view of where and how the introduction of technology appears to exacerbate accountability dysfunctions in the public sector. Then, in section 5, we discuss some possible appropriate accountability arrangements depending upon the characteristics of the e-government implementation and other aspects of the broader socio-technical context.
2. Accountability, public sector, and ICTs

Accountability is the “hallmark” of modern democratic governance (Bovens, 2005, p. 182). Democracy, with its checks and balances, was developed specifically out of a philosophy of distrust of those in positions of power, and specific mechanisms were developed to hold them to account for their actions (Braithwaite, 1998). Accountability mechanisms ensure that the contract between government and people is fulfilled (Barata & Cain, 2001, p. 248). There exist many different kinds of accountability in the public sector, such as managerial accountability to senior management, legal accountability to the judiciary, professional accountability to peers, financial accountability to funders, political accountability, and public accountability to citizens (Heeks, 1998).
Theoretically, despite these multiple dimensions and functions of accountability, citizens are fundamental to the accountability equation in democratic societies. All of the other public sector accountability mechanisms are in the service of the public, and as such are ultimately accountable to the citizens. As we will see, understanding accountability from the perspective of the citizen, rather than from the institutional perspective, highlights a series of issues that create tensions in the application of accountability mechanisms. Before we can turn to that discussion, though, we must conceptualise what we mean by accountability.
2.1 Conceptualising accountability

Accountability in the public sector can be conceived of as a social construct or organising principle. The concept of accountability has been described as “a distinctive and pervasive feature of what it is to be human” (Willmott, 1996, p. 23). More specifically, it is a type of social relationship “in which an actor feels an obligation to explain and to justify his or her conduct to some significant other” (Bovens, 2005, p. 184). Willmott also argues that human beings, as social beings, are always involved in providing accounts not just to others, but to themselves as well. He states that “This universal aspect of accountability is a condition of our participation in any social world” (Willmott, 1996, p. 23).
Conceived as such, accountability presupposes some form of moral responsibility. One way to look at the ascription of moral responsibility is as a social process that serves the objective of blaming, praising, sanctioning, or rewarding someone to obtain a result (Stahl, 2004). Responsibility is a relational concept that has to do with tracing the causes of actions and events. An agent can be causally responsible for an event, like the breaking of a dam causing a flood. Moral responsibility, however, is a particular kind of responsibility, for which causal responsibility can be a condition.
In order to be held morally responsible, an agent has to be capable of and in a position to act appropriately on this responsibility. Moral responsibility is therefore traditionally tied to knowledge, resources, causality, and autonomy (Eshleman, 2008). In order to act in a morally responsible manner, a person needs to have the knowledge, capacity, and ability to act appropriately, along with access to the appropriate resources. In other words, it involves a choice. We consider individuals morally responsible agents that can be held accountable for their conduct if they can give an explanatory account of themselves, can engage in a discussion about the appropriateness of their comportment, and are willing to acknowledge, apologise, and make amends for their possible errors in judgement (Kuflik, 1999).
Accountability refers to the ways in which responsibility relationships between agents and outcomes of events or actions can be established and verified. As it is a social phenomenon, agents such as a government agency or corporation can also be held to account (Johnson, 2001, p. 173). An authoritative agent establishes which responsible agent is the appropriate agent to account for a particular event or action, in terms of answering as well as sanctioning. Emanuel and Emanuel (1996) argue that, fundamentally, there are 1) the loci of accountability (parties who can be held accountable), 2) the domain of accountability (activity, practice, or issue), and 3) the procedures of
accountability (the evaluation of the domain of accountability and the dissemination of the evaluation and responses by accountable parties). However, missing from this ideal-typology is the purpose of accountability (i.e., the “why”). Accountability processes enforce and verify responsibility for some reason. In the public sector, this “why” is particularly important and, thus, is something we focus on here.
Like the concept of trust, we hold that accountability is inextricably linked to discretion. There is no need for trust without the bestowing of human discretion over some action or thing (Hardin, 1991), and, likewise, there is no need for accountability. Where particular actions are entirely constrained, the notion of trust dissolves into a “reliance upon a regularity” (Offe, 1999, p. 52), just as accountability becomes superfluous. Social theorists have found it useful to make the distinction between the trust that one has in a friend from the confidence that we have in, say, a computer system to operate the way it should (Offe, 1999; Solomon & Flores, 2001). The difference emerges when the element of human discretion enters into the relationship. Human discretion is a fundamental element of any social relationship, adding a level of complexity that distinguishes it from a human-computer relationship. Just as a modern day computer cannot know that it is not trusted, it also cannot be responsible or held accountable. Computers are also not susceptible in the same way as humans to the sanctions and incentives that, as we will see, are all part of the process of accountability. As Michael (2004) writes, people can be embarrassed; ICTs cannot be embarrassed, at least not yet.
Information provision about actions is a key element in accountability processes, as literature on accountability has underlined. Scholars have, for example, described accountability processes in terms of three stages: the information provision stage, the debating stage, and the judgement stage (Bovens, 2005; Elzinga, 1989; Schillemans, 2007). Provision of information as well as justification in the first stage enables an inquiry into the validity of the information and the appropriateness of actions during the debating stage. Finally, at the judgement stage actions are rewarded or sanctioned. In light of these stages of accountability, Schillemans (2007) distinguishes between accountability arrangements and accountability processes. An accountability arrangement constitutes the formal and informal mechanisms with the objective of realising accountability processes.
In our exploration, we focus almost entirely on the information provision component of accountability processes given the potential for ICTs to make information easily accessible to a
wide audience. That said, it is difficult to cleanly disentangle the different components of accountability, and thus our discussion ranges in scope at times.
2.2 Problematic accountabilities

Bovens (2005) points out that the different accountability mechanisms in the public sector serve a variety of functions including democratic control, enhancement of integrity and legitimacy, improved performance, and catharsis after tragic incidents or failures. However, he also points out that while judiciously applied accountability arrangements can bring these positive benefits to public governance, the same mechanisms applied in excess can lead to accountability distortions and dysfunctions (see Table 1). These dysfunctions emerge from the “inherent and permanent tension between accountability and effective performance” (Bovens, 2005, p. 194). For example, excessive democratic control squeezes discretion out of the hands of public managers and can result in rule-obsessed bureaucracies. Or, more transparency might reveal more blemishes of public sector governance, effectively lowering legitimacy despite potentially improved behaviour (O’Neill, 2002).
Functions of accountability    Dysfunctions
Democratic control             Rule-obsession
Integrity                      Proceduralism
Improvement                    Rigidity
Legitimacy                     Politics of scandal
Catharsis                      Scapegoating

Table 1: Functions and dysfunctions of accountability (source: Bovens, 2005, p. 194)
Public sector accountability is further complicated by the deeply rooted values within liberal democratic societies that serve a normative role in organising society.[3] For one, public sector institutions have a moral responsibility with regard to public welfare. This responsibility, as well as the responsibilities that follow from it, can be interpreted in many different ways depending on a wide range of values. These values can include the promotion of equality, justice, liberty, quality of life, security, and freedom, but also performing in an efficient and effective manner. Furthermore, the existence of the competing and often conflicting values held by citizens implies that there are competing moral interpretations of government activities. Even if there were formalised accountability processes, exactly who the public sector is accountable to and for what is a matter of debate. Any consideration of public sector accountability must take into account these diverse values. Interestingly, this significant component has been, for the most part, missing from the accountability discourse. As Michael points out: “Giving account in no way ensures that the accounts given correspond to the values desired by society” (Michael, 2004).

[3] Recently, this concern for the various values of the public sector has led to the development of a new theoretical framework for evaluating the activities and outcomes of the public sector called ‘public values’. Public value is a theoretical perspective “rooted in people’s preferences” that considers governments’ outputs in terms of what people want (Grimsley & Meehan, 2007; United Nations, 2003).
2.3 ICTs and public sector

It is into this complex social dynamic of the public sector that new ICTs are entering. In many ways, however, new ICTs are extensions of earlier technologies such as pen, paper, and the filing system (Kallinikos, 2001). Fundamentally, bureaucracies are socio-technical information processing systems (Dunleavy et al., 2006, pp. 10-12). This synergy has motivated the belief that ICTs hold the potential to bring about a wide range of benefits to the public sector, such as increased efficiency, effectiveness, transparency and accountability, reduced corruption, better service provision, and improved, more participatory democracy, among others (Bellamy & Taylor, 1998; Fountain, 2001; Layne & Lee, 2001; Moon, 2002; Silcock, 2001; Weare, 2002; West, 2005). Until now, however, e-government implementations have remained mostly in the incremental improvement range, with most gains in the area of efficiency (Anderson, 2004; Ronaghan, 2002; West, 2005). For example, e-government implementations have found the most success in automating financial processes such as tax administration (Dunleavy et al., 2006).
The introduction of ICTs in the public sector is, of course, not a completely technical question and is subject both to the instrumental logic of technology (i.e., what is possible with the technology) and to various political, economic, and social contextual influences (Fountain, 2001; Heeks, 2003, 2005; Kallinikos, 2006; Relyea, 2002; West, 2005). The process of automation does not simply produce a substitute for human activity. It seldom involves a one-to-one mapping of tasks previously performed by human beings onto formalised computational structures. It generally entails a transformation of existing bureaucratic roles, practices, and processes. It also involves the creation of new responsibilities for humans, as they have to make good on the deficiencies of these automated technologies (Collins, 1990; Collins & Kusch, 1998). To make a machine work correctly, humans have to perform a considerable amount of work in the form of “repair”. Repair not only means modifying a machine to perform the appropriate action. Humans also interpret and adjust to the behaviour of the machine to make it fit with practices, expectations, and conceptual frameworks.
2.4 ICTs and public sector accountability
There are several ways through which ICTs can alter the accountability equation that we briefly mention here. The two key mechanisms are information provision and automating decision-making processes.
Given the centrality of information flows to accountability, it is no surprise that new technologies provide a potentially new means to hold the public sector accountable to citizens. New channels of downward communication are opened (Weare, 2002), potentially providing an empowering resource that allows citizens to monitor the activities of government (Wong & Welch, 2004). Examples of relevant information provision include public sector performance, information on rules and activities (Bhatnagar, 2004), policies, and policy intentions (Gelders, 2005).
Of course, the potency of transparency is directly linked to the quality, veracity, completeness, and timeliness of the information (Gelders, 2005). A system of information provision for accountability is only as good as the information going in; “garbage in, garbage out”, as the saying goes. These qualities of the information supplied are, of course, influenced by who inputs the information and for what reasons, as well as by the types of activities involved and the ability to translate these activities and performance into presentable information.
ICTs, with the power to transfer information and perform symbolic processing, enable the automation of certain decision-making processes in the public sector. This automation is often pursued for the goals of increased efficiency and effectiveness. However, a third component (or perceived benefit) of many e-government implementations is the removal of corruption (human discretion) in favour of a “neutral” or “objective” way of making these decisions. Ultimately, such an implementation is effectively a policy choice in favour of the discretion of computer software engineers (subject to technical and other constraints) over that of civil servants (Bovens & Zouridis, 2002). Granted, in certain e-government implementations the discretion of civil servants cannot be entirely removed, as they still might need to feed in and interpret data, for example, but their discretion is constrained by decisions originally made by system designers.
Of course, there are a wide range of e-government implementations and not all of these implementations are designed or even associated with the goals of transparency and accountability. For example, simple e-service transactions such as applying for and receiving a birth certificate or a licence through the Internet may reduce bureaucratic costs and bring convenience benefits to the citizen, but are not generally considered technologies intended to enhance accountability. In these
cases, the technology facilitates particular types of interactions or transactions between the government and the citizen, but does not explicitly make decisions. Such activities alter the accountability equation through at least three means: 1) the quality and content of the information they provide about the process, 2) the potential for technological error (theoretically auditable in ways that human error may not have been), and 3) the constraining of civil servant discretion.
3. ICTs and accountability in the public sector: some empirical considerations

As discussed, the range of ICT implementations in the public sector is varied, with some technologies working more to assist civil servants and others taking more decisive roles. In this section, we differentiate between several different types of e-government implementations and explore how they alter the accountability equation between the public sector and citizens. We do this through the use of e-service examples drawn from secondary sources and one author’s own empirical work on two e-services in Chile (see: Smith, 2007).
To understand the differences in the types of e-services we draw from Sheridan’s (1992) gradual scale of automation framework. This scale illustrates the incremental levels of control that can be shared between the human operator and computers. At the lower levels of automation, the human makes all the decisions and takes all the actions. The computer offers no assistance. The higher the level of automation, the more the decision-making opportunities for the human are constrained by the actions of the computer, ranging from offering a set of complete decision/action alternatives to providing a narrow selection of choices. The more a system is capable of collecting, analysing, interpreting, and acting on information - be it sensory information or explicit symbolic representations of knowledge - the more autonomous the system is considered to be. Higher levels of autonomy are, then, attributed to those automated systems (machines or computers) that are left to perform tasks on their own, and have the authority over these processes, i.e., humans have neither the need nor the ability to intervene.
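To fix ideas, the scale can be sketched in code. The level names below are our own condensed paraphrase of Sheridan’s (1992) ten-point scale, offered for illustration only, not his exact wording:

    from enum import IntEnum

    class AutomationLevel(IntEnum):
        # A condensed paraphrase of Sheridan's (1992) scale of automation;
        # the names are our shorthand for illustration only.
        HUMAN_DOES_ALL = 1             # human decides and acts; computer offers no assistance
        COMPUTER_OFFERS_ALTERNATIVES = 2  # computer offers a complete set of decision/action alternatives
        COMPUTER_NARROWS_CHOICES = 3      # computer narrows the selection down to a few choices
        COMPUTER_SUGGESTS_ONE = 4         # computer suggests a single alternative
        EXECUTES_IF_APPROVED = 5          # computer executes the suggestion if the human approves
        EXECUTES_UNLESS_VETOED = 6        # human has limited time to veto before automatic execution
        EXECUTES_THEN_INFORMS = 7         # computer acts, then necessarily informs the human
        INFORMS_IF_ASKED = 8              # computer acts, informs the human only if asked
        INFORMS_AT_OWN_DISCRETION = 9     # computer acts, informs the human only if it decides to
        FULL_AUTONOMY = 10                # computer decides and acts entirely on its own

The empirical examples that follow can be read as sitting at different points on this scale, from information provision near the bottom to automated fingerprint matching near the top.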
3.1 Lower levels of automation

In this section we present two types of e-services: information provision mechanisms and simple transactions. For each type we provide examples that illustrate the different ways in which they potentially impact on (i.e., intervene in or transform) accountability processes.
3.1.1 Information provision mechanisms
The classic simple case of information provision is the placing of information on a government web site. In a multi-case study, Eschenfelder (2004) explored web site content production at four United States (US) state agencies. Eschenfelder found that a wide range of political and other factors influenced the nature of information presented on the agencies’ web sites, including, among others, public education mission, public inquiry burden, top-down directives, review and approval processes, resources, and management interests and goals. The politicised nature of the public sector, and individuals’ concern with protecting their own positions therein, means that what information is presented is the result of a political negotiation. While the outcome of this negotiation will find representation as “transparency” information on a web site, the details of the background negotiation itself will not. These negotiations determine the “veracity” of the information to be presented, and this truth will often reflect internal interests rather than public values.
A slightly more complex example is the ChileCompra (“Chile Buys”) e-procurement system in Chile (see: Avgerou, Ciborra, Cordella, Kallinikos, & Smith, 2005). ChileCompra was developed to make public sector purchasing in Chile more efficient, effective, and transparent. The central transparency feature is an information publishing system that makes available a broad range of information about the public purchasing process. This system automatically publishes information including public sector organisations’ yearly procurement plans, bid invitations, and a searchable database of past public sector purchases, including who bought what from whom, the final purchase price, etc. This currently happens for almost all public sector organisations, including, most recently, many purchases made by the military. The overall amount of information produced is impressive, if not overwhelming.
Much like the US state agency web sites studied by Eschenfelder, despite the copious amount of information published on the ChileCompra portal, the system arguably still does not present the information that is central to accountability. There are two distinct stages, the construction of the procurement invoice and the selection of the winning bid, which, while constrained by the input parameters of the system, are characterised by a large amount of human discretion and, often, expertise. When a purchasing invoice is made, the procurement officer must include a set of criteria by which the bids will be judged. These judgement criteria contain both objective components (number needed, cost parameters, etc.) and subjective components (e.g., quality), as well as subjective weightings (adding up to 100%) given to each criterion. Thus, the development of these criteria and the actual process of selection are afforded the flexibility that the procurement officer needs to
make good decisions, balancing the various criteria as established by the officer. Included with the implementation of this system was a series of public sector purchasing training reforms to professionalise the position, providing purchasing officers with the expertise they needed to improve procurement planning and decisions. The key point here is that while the system does actively constrain the procurement officer, forcing the production of purchasing plans and justifications for purchases, it still cannot reveal a fundamentally cognitive process: the crucial decision points.
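To make the structure of this judgement concrete, the following minimal sketch shows how such weighted evaluation criteria combine. The criterion names, weights, and scores are invented for illustration; the subjective “quality” scores stand in for exactly the discretionary judgements that the published record cannot reveal:

    # Hypothetical sketch of a weighted bid evaluation of the kind ChileCompra
    # structures; criterion names, weights, and scores are invented.
    # Weightings must add up to 100%; the "quality" scores are the procurement
    # officer's subjective judgement calls.
    criteria = {"price": 0.50, "delivery_time": 0.20, "quality": 0.30}
    assert abs(sum(criteria.values()) - 1.0) < 1e-9

    bids = {
        "supplier_a": {"price": 0.9, "delivery_time": 0.6, "quality": 0.7},
        "supplier_b": {"price": 0.7, "delivery_time": 0.9, "quality": 0.9},
    }

    def weighted_score(scores):
        # Sum of per-criterion scores, each weighted by its published weighting.
        return sum(weight * scores[name] for name, weight in criteria.items())

    winner = max(bids, key=lambda b: weighted_score(bids[b]))
    print(winner)  # supplier_b (0.80 versus supplier_a's 0.78)

The weights and final scores are exactly the kind of information the portal can publish; how the quality scores were arrived at remains the opaque cognitive process.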
These examples illustrate that technology generally affords only a superficial representation of the underlying political and cognitive processes that generate the information. In one case, the political process behind choosing what information was to be presented is hidden from view. ChileCompra provides a great deal of financial information in a transparent manner, but does not, and cannot, reveal the underlying institutional needs or cognitive decision-making processes behind the purchasing decisions. In other words, we lose much of the context needed to judge the information in a reasonable manner. Without the necessary context, which may be impossible to provide, key underlying components remain opaque. In the case of ChileCompra, of course, the amount of information provided goes well beyond what was available from what was previously an almost entirely opaque process. Thus, provided that the information is of good quality, it will give more insight than existed before, potentially strengthening accountability. It is important, however, to understand the limits of information provision; accountability mechanisms that capture only the outward manifestations of internal processes can achieve only so much depth and effectiveness. Consequently, there is always some room for manoeuvre and for the avoidance of accountability measures by a skilful and motivated civil servant.
3.1.2 Simple transactions

Simple transactions are e-services where the computerised processes do not include a component of discretion. Rather, they are generally the function of established procedure, once in paper and human form, now transferred to a technological interface. Such systems are intended to facilitate and enable service transactions. Two examples illustrate these transactions: filing for a grant electronically and electronic voting (e-voting). Both transactions are almost entirely technical and do not involve a decision component.
A recent example illustrates how computer outputs can be manipulated for political purposes, and are effectively devoid of accountability. Recently, in Boston, a computer glitch on a federal
government web site resulted in an application for funding for an award-winning, inner-city education programme being filed 46 minutes late (Abel, 2007). The US Department of Education refused to consider the grant proposal, following the guideline that all grant proposals must arrive before the deadline. Accusations quickly arose that the administration was using this error as a pretext to achieve the political aim of dismantling the programme. Congressional representatives wrote to the director of the Education Department’s higher education programmes, stating the following: "Constituents - and the students they serve - should not be penalised because of computer glitches that are beyond their control. [...] The difficulties encountered were completely out of their hands." The response from the department was an appeal to procedures: “The Department does not have the discretion to waive the deadline nor the flexibility to alter Grants.gov requirements” (Abel, 2007, emphasis added).
E-voting machines offer a good example of how it is possible to automate relatively simple government-citizen interactions. In theory, the switch from paper to electronic vote counting involves a fairly low level of technological sophistication in terms of the fundamental process, i.e., tallying inputs from voters. The e-voting machines take no decisions themselves as they are simply meant to count up votes. Indeed, it is basic accounting. If implemented correctly, with a high degree of political and technological transparency (including audit trails), the automation of the voting process can effectively remove elements of potential human error or malfeasance (both forms of discretion). Such a system would provide all of the necessary information, directly linking outcomes to processes, for an accountability verification process.
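A toy sketch of this “basic accounting”, under our own simplifying assumptions, illustrates where the accountability design choice actually lies: not in the arithmetic, but in whether an inspectable audit trail links outcomes back to process:

    from collections import Counter

    # Toy illustration: the counting logic is trivial. The append-only audit
    # log is the design choice that lets the outcome be re-derived and checked;
    # whether such a trail exists and is open to scrutiny is a policy decision,
    # not a technical inevitability.
    audit_log = []       # (ballot sequence number, recorded choice)
    tally = Counter()

    def record_vote(candidate):
        audit_log.append((len(audit_log) + 1, candidate))
        tally[candidate] += 1

    for ballot in ["alice", "bob", "alice", "alice", "bob"]:
        record_vote(ballot)

    # A recount from the audit trail must reproduce the tally exactly.
    assert Counter(choice for _, choice in audit_log) == tally
    print(tally.most_common())  # [('alice', 3), ('bob', 2)]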
It should also be noted, of course, that there are myriad security issues that make the system technologically quite complex. So, in essence, what was once a moderately difficult social process involving the hand counting of votes and observers, has been transmuted into a complex algorithmic and technical computer security affair.
Of course, how such technologies are implemented has a huge influence over the quality of information. For instance, the ongoing e-voting controversy in the United States revolves around decisions by election officials to contract work to companies such as Diebold, which continue to refuse to make available the code for their proprietary software. Furthermore, the systems do not produce a paper receipt for voters, making it impossible to physically check the inner activity of the system. The end result is an almost entirely opaque system that, not surprisingly, has led to significant political controversy.
These two examples illustrate how even highly technical, simple service transactions can alter accountabilities. Interestingly, they do so quite differently than the information provision e-services. These services do not just automate vote tallying or online filing; they automate part of the operation of a socio-cultural organisation, from which the human actor is removed (Hutchins, 1995, p. 363). This automation involves more than a replacement of humans: it redistributes tasks, responsibilities, and accountabilities, as the technologies enable and constrain human actions. In a highly technical and procedural activity, such as the relatively mundane technology of online applications, reliance on technological performance replaces reliance on the bureaucratic paper-pushers who once owned the process. In the case of e-voting, testing the software and complex computer security systems is the new substitute for human counting and election observers.
Effectively, both of the applications shift at least some responsibility for the acceptable completion of a set task from people to technology. The potential for technological error is substituted for bureaucratic error or malfeasance. In such cases, if the underlying process is opaque, this might open up a gap in accountability, with potentially detrimental results. In the case of the inner-city education funding example above, this shift in “responsibility” allowed blame to be off-loaded onto the technology, arguably as a convenient excuse to achieve a political end. In the case of e-voting in the US, an “open” process of vote counting has been replaced by an almost entirely obscured process involving proprietary software and insecure systems. Here the problem of responsibility and accountability has actually been created by the use of ICT, whereas the social process that existed before helped to radically reduce the potential for error and malfeasance through distributed local responsibilities and built-in accountabilities (observers and redundant counting).
3.2 Higher levels of automation: decision-making

In this section we shift from low levels of automation to those e-government implementations that begin to make decisions, as opposed to just assisting human decision-making. The first example draws on original empirical research and concerns a tax information system in Chile (Avgerou et al., 2005; Smith, 2007). The second illustration builds on recent case study research on the automation of fingerprint analysis for forensic purposes (Davis & Hufnagel, 2007), which we reinterpret in this paper in terms of accountability,[4] and a high-profile case involving Brandon Mayfield.
3.2.1 E-tax system

The Chilean tax authority has, over the last fifteen years, progressively implemented an increasingly sophisticated e-tax system that has met with huge success, with an impressive 97+% of the tax-paying population filing online. This system collects and processes data from a variety of institutions, including banks and businesses, to produce a completed tax form for many citizens. This system has resulted in a huge increase in internal efficiency, effectiveness, and tax revenue collection, along with time savings and convenience for many taxpayers.
Two interesting accountability issues emerge from this situation. First, theoretically for a large portion of the tax-paying population the entire process could have been automated. Instead, the completed tax forms are presented as only “proposals”. Legally, it is the citizen who is responsible for the veracity of the data. The citizen must verify the data and “OK” the proposal, at which point the tax is considered filed. Here the government effectively off-loads the responsibility for the automated process of data collection to the citizen. They have replaced what was once a task of the citizen with an automated process, but left the responsibility with the citizen.
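The shape of this arrangement can be sketched as follows; the details are our own illustrative assumptions, not the actual Chilean implementation. The point is that the legally meaningful act, and the responsibility that attaches to it, sits at the citizen’s confirmation step rather than at the automated data assembly:

    from dataclasses import dataclass

    # Illustrative sketch only; the actual Chilean workflow is more involved.
    @dataclass
    class TaxProposal:
        taxpayer_id: str
        prefilled_income: float   # assembled automatically from banks, employers, etc.
        confirmed: bool = False   # the return counts as filed only once confirmed

    def confirm(proposal):
        # The citizen's "OK": responsibility for the data's veracity attaches here.
        proposal.confirmed = True
        return proposal

    p = TaxProposal(taxpayer_id="12.345.678-9", prefilled_income=18_500_000.0)
    assert not p.confirmed    # still a proposal, not a filing
    confirm(p)                # the legally meaningful step rests with the citizen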
Second, this gathering of financial information has enabled the tax authority to engage in a whole new process of auditing tax returns. Before, the main job of an auditor was mostly that of legwork and gathering information, severely restricting the number of people that could be audited. With the new information, the system effectively checks every tax return for errors and consistency. Furthermore, given the limited resources to perform more in-depth audits, still a human activity,
[4] In their study of expert-based fingerprint analysis, Davis and Hufnagel (2007) explore the organisational consequences of the automation of fingerprint work. Their focus is on “the effects complex systems have on users’ perceptions of their work and the role-altering effects of new technologies” (Davis & Hufnagel, 2007, p. 681). They conclude that technicians’ “occupationally defined values and norms” (Davis & Hufnagel, 2007, p. 681) play a significant role in organising work practices and that tensions arise when new technologies are introduced which “restructure the logic of their expertise-based hierarchies” (Davis & Hufnagel, 2007, p. 698).
they have implemented a series of algorithms that select for audit those tax returns that will maximise the potential return from the audit. The algorithms now effectively make the decision of who will be audited.
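The authority’s actual algorithms are not public, so the following is a purely hypothetical sketch of the general shape of such a selection rule: each return is scored by expected recoverable revenue, and the ranking, rather than an official, determines who is audited:

    # Purely hypothetical: the Chilean tax authority's real selection algorithm
    # is not public. The shape of the rule is what matters - returns are ranked
    # by an expected-yield score, and the ranking decides who gets audited.
    def expected_yield(tax_return):
        # Invented scoring: estimated probability of under-reporting times
        # the amount at stake.
        return tax_return["risk_score"] * tax_return["amount_at_stake"]

    returns = [
        {"taxpayer": "t1", "risk_score": 0.10, "amount_at_stake": 50_000},
        {"taxpayer": "t2", "risk_score": 0.60, "amount_at_stake": 8_000},
        {"taxpayer": "t3", "risk_score": 0.30, "amount_at_stake": 40_000},
    ]

    audit_capacity = 2  # in-depth audits remain a scarce human activity
    selected = sorted(returns, key=expected_yield, reverse=True)[:audit_capacity]
    print([r["taxpayer"] for r in selected])  # ['t3', 't1']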
3.2.2 Automated fingerprint identification system (AFIS)

Fingerprint analysis involves a highly specialised type of training and expertise and, in organisations such as forensics laboratories where such work takes place, there traditionally exists a high degree of both hierarchical and horizontal internal accountability regarding work quality. For example, novices are accountable to expert technicians (typically with at least 5 years of occupational experience) and experts often review one another’s work in attempting to ensure high quality and consistent analysis. Thus, communication and accountability play a crucial role in fingerprint work.
While Davis and Hufnagel (2007) analyse the organisational impacts of new computing technologies in terms of work practices and the formation of norms and values, it is striking that the term “accountability” does not appear once in their study, although it is implied throughout their discussion of the socio-cognitive perspectives on automating fingerprint analysis. Regardless, the potential implications of the automation of fingerprint work for public sector accountability are significant and, thus, we discuss them here.
Davis and Hufnagel note that the expert fingerprint technicians they studied were highly disturbed by what they referred to as the "ghost in the machine": the searching and matching algorithms whose workings perplexed them. With the automation of fingerprint analysis, the expert technicians were often unable, indeed “helpless”, to explain how the system arrived at its decisions. That these experts were incapable of giving an account of system outputs in situations where a questionable fingerprint match might have been made raises particularly difficult questions concerning accountability. As Davis and Hufnagel state, “Lacking an authoritative source to help them interpret the results, [analysts] could only speculate about how the software was designed and what they could do to influence its selections” (Davis & Hufnagel, 2007, p. 698). This bewilderment and speculation speaks to novel issues of computing technologies, automation, and public sector accountability that we address below.
From this example it is apparent that no longer do experts alone make informed decisions regarding fingerprint matches. Rather, it is the algorithms that do most of the work and occasionally this
algorithmic software is wrong. More disturbing, however, is the fact that these algorithms are beyond the comprehension of fingerprint experts and can even be proprietary and thus beyond scrutiny. The values that are inevitably embedded in such algorithms also remain off limits. For example, depending on the organisation, software might be programmed such that its acceptance threshold is lower, thus increasing the possibility of a (false) hit. Alternatively, it might be biased against certain populations (Introna & Wood, 2004) without users being aware of such biases. Mistakes inevitably do happen, as was the case for Brandon Mayfield, who was falsely accused of participating in the Madrid train bombings of 2004. His case arose when FBI fingerprint examiners searched for possible matches to a digital image of a fingerprint discovered on a bag of detonators suspected to be related to the bombings. The automated fingerprint identification system (AFIS) used returned 15 possible matches, including prints belonging to Mayfield, which had been recorded in the 1980s when he was a teenager. Following the initial selection of suspects by the AFIS, three separate FBI examiners further narrowed the identification to Mayfield. Additionally, a court-appointed fingerprint expert agreed with the FBI experts. However, as was later concluded, they were wrong. It appears their decision to pursue Mayfield was influenced by other details in the case, namely that Mayfield had converted to Islam years before. This point regarding the politicisation of Mayfield’s case has been deliberated elsewhere (for example, see: Lichtblau, 2006), but we lack an informed analysis of the role of automated systems in the misidentification, particularly in terms of accountability. Cherry and Imwinkelried (2006) do acknowledge that certain system factors might explain Mayfield’s misidentification (i.e., the inescapable distortion of important details in digital images and the use of inferior displays) but they, too, do not interrogate the accountability question.

3.2.3 Automation and hidden values

All decision-making is, of course, value-laden, but when values are obscured and hidden through technological automation, we face a challenging situation in terms of public sector accountability. Much like the “politics” of search engines, in which certain values are hidden in algorithms (Introna & Nissenbaum, 2000), the systems described in this section are necessarily political in that they favour certain models and assumptions over others, by design. However, unlike the case of the search engines described by Nissenbaum and Introna, the automated e-tax and fingerprint identification systems operate in the context of the public sector, in which a special set of values exists. Granted, the first case brings with it the benefits of massively increased efficiency and effectiveness, but where might we find accountability in situations where certain individuals are
accused of political targeting with the tax system, for example? Can vengeful or politically motivated civil servants simply blame the technology and eschew accountability?
Moreover, the philosopher Jeroen van den Hoven (2002) notes that the cognitive dependencies that new computer technologies create can limit the extent to which users can take or be ascribed responsibility. These complex technologies increasingly hide the theories, models, and assumptions that they embody. They limit users’ ability to assess the validity and relevance of the information presented by computer systems, which are never fully free from errors, while users are often under pressure to make choices based on this information. In addition, effects like “automation bias” or a lack of alternative knowledge sources against which to validate beliefs can interfere with users’ ability to make appropriate decisions.
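To make the point about hidden value judgements concrete, consider the acceptance threshold mentioned above. The sketch below is purely illustrative and describes no real AFIS; it shows how a single buried parameter silently trades missed matches against false hits:

    # Purely illustrative; no real AFIS works exactly like this. The acceptance
    # threshold encodes a value judgement (how many false hits are tolerable),
    # yet it is typically invisible to the examiners acting on the output.
    def candidate_matches(similarity_scores, threshold):
        # Return the record IDs whose similarity score clears the threshold.
        return [rid for rid, score in similarity_scores.items() if score >= threshold]

    similarity_scores = {"rec_01": 0.94, "rec_02": 0.81, "rec_03": 0.78, "rec_04": 0.62}

    print(candidate_matches(similarity_scores, threshold=0.90))  # strict: ['rec_01']
    print(candidate_matches(similarity_scores, threshold=0.75))  # lax: three candidates,
                                                                 # hence more (possibly false) hits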
4. Dysfunctions of technological accountability

As mentioned, the displacement of accountabilities resulting from the introduction of new ICTs is never straightforward. Drawing from our examples above, we suggest that inherent in technology is the tendency to exacerbate several of the dysfunctions that emerge from excessive accountability. Technologies work in effect like in-built procedural accountability mechanisms in that they shape, constrain, or remove human discretion from the equation. In doing so, it appears that, in excess, this could lead to extreme forms of rule-obsession, proceduralism, rigidity, and an unproductive shirking of responsibility in the public sector.
4.1 Output-obsession sacrificing democratic control

Borgmann (1984) argues that technology makes a commodity out of reality. Technology works as a device that prioritises the production of outcomes and hides the underlying processes from view: “In a device, the relatedness of the world is replaced by a machinery, but the machinery is concealed, and the commodities, which are made available by a device, are enjoyed without the encumbrance of or the engagement with the context” (Borgmann, 1984, p. 47). Indeed, Borgmann presciently noted, as early as 1984, that as computers become more widely used and increasingly “friendly” they simultaneously become increasingly unknowable, to lay people and even most professional programmers.
This focus on outcomes over process takes the rule-obsession dysfunction and transmutes it into the equally dysfunctional output-obsession, where the output of the computer process cannot be questioned and is endowed with the ultimate authority. Underlying such output-obsession is a tacit
acceptance of the ability of the system to produce the “correct” information and to do so in an “objective” manner. This is witnessed in the increasing reliance on quantitative decision criteria (Anderson, 2004). Such a situation was witnessed even with the simple transaction technology involved in submitting a funding application online, as in the inner-city schools funding example. The situation becomes even more exaggerated when considering the various technologies of biosurveillance. In such cases, it is common for due process to be skirted as the desire to pinpoint threats and identify high-risk individuals relies on the often unquestioned outputs of profiling software and related emerging technologies.
4.2 Encoded-proceduralism sacrificing integrity

Excessive accountability measures can effectively restrict the discretion of civil servants and public managers, causing them to fall back on procedures to skirt issues of responsibility and accountability, sometimes to the point of undermining their integrity. They effectively lose the ability to balance procedures against public values and the nuances of the contingent situation.
Fundamentally, the drive to automate and rationalise the public sector through ICTs is part and parcel of the techno-rational, Weberian bureaucracy. Increasingly automated processes are the “zenith of legal rational authority” (Bovens & Zouridis, 2002, p. 181), as the operating procedures embedded in the system have become more rigid and more highly rationalised than manuals or supervisors were ever able to make them (Fountain, 2002, p. 130).[5] Such a situation is taken further by new paradigms, such as “new public management”, which seek to translate private sector mechanisms such as competition into improvements in the working of the public sector. However, such a perspective overlooks the fact that the Weberian style of bureaucracy is fundamentally an organisational technology that embodies the values of egalitarianism, enabling, in theory, equal treatment of all citizens (Cordella, 2007; Kallinikos, 2004b). The bureaucracy defines roles and positions with assigned responsibilities and practices, including discretion. While this discretion is structured by rules and standard operating procedures (Dunleavy et al., 2006; Fountain, 2002, p. 137), it does allow civil servants some latitude to take contextual variations into consideration and, presumably, to act according to other norms of integrity. In this way, encoded computer procedures not only remove human discretion but also sacrifice the uniquely human ability to act on broader societal norms in contingent circumstances.
[5] This is effectively a shift of discretion from the street-level bureaucrats to system analysts and software designers (Bovens & Zouridis, 2002; Reddick, 2005).
4.3 Encoded-rigidity sacrificing improvement

The encoding of procedures also results in encoded rigidity. In new systems, many simple encoded processes, such as automated tax procedures, may be able to deal with a majority of cases without serious incident. However, due to the increased rigidity of the process, the ability to take contextual variations into consideration is limited, especially variations that were not originally conceived to be important (Bovens & Zouridis, 2002, p. 182). Furthermore, such a situation becomes harder to change as it requires altering the software, although the difficulty depends upon the complexities of the particular system. In the extreme case of legacy systems, the ability to alter the pre-existing code can become basically non-existent. Thus, after the development of a system, the embedded rigidity can make it impossible to engage in organisational learning and development to improve performance with respect to those embedded processes.
4.4 Blame the technology sacrificing legitimacy and the ability for catharsis

The problems of the increasing automation of decision-making processes underline that the introduction of ICTs in the public sector implicates designers as well as policy makers in the displacement of accountabilities. In her paper on accountability in computerised societies, Helen Nissenbaum warns that “the conditions under which computer systems are commonly developed and deployed, coupled with popular conceptions about the nature, capacities and limitations of computing”, can create barriers to accountability (Nissenbaum, 1997, p. 43). The tendency to use the computer as a scapegoat for errors is one of them. Furthermore, Nissenbaum notes how the shift in accountability from the front-line bureaucrat to the software engineer, whose role does not include the responsibility to answer to the citizen, leaves an accountability void that can later be exploited. We witness this void in the inner-city school funding mishap, in which there was no one to blame and no form of recourse. The politics of scandal becomes a politics of technocratic irresponsibility and blamelessness. As competent government performance is generally a prerequisite for legitimate government, this situation is a recipe for a loss of legitimacy in the public sector.
The tendency to blame the technology has other psychological roots besides the desire to pass the buck. As computers increasingly automate, users are encouraged to attribute a kind of decision-making capacity to the computer that sits uncomfortably with the practical implementation of responsibility and accountability in daily life (Johnson, 2006; Nissenbaum, 1994).
When grave errors have occurred, especially massive failures in technology, the form of catharsis becomes a similar form of rejecting the technology. Three Mile Island and Chernobyl initiated a generation of anti-nuclear activists, the effects of which are still felt today in contemporary energy policy debates. The dysfunctions of the e-voting machines in the United States have led many counties to turn back to paper-based techniques, although the election experiences in Florida in 2000 brought even those into question. Ultimately, however, scapegoating technology wielded as a political tool can be just as dysfunctional as targeting particular people. It also means that there might be a tendency to overlook potentially beneficial solutions to problems because of their association with particular technologies.
Of course, these dysfunctions can interact and reinforce one another. Output-obsession and encoded-proceduralism are often precursors to blaming the technology and ducking responsibility. Indeed, as encoded procedures become ever more autonomous and opaque, whom can you blame but the technology?
Functions of accountability    Dysfunctions           ICT and public sector dysfunctions
Democratic control             Rule-obsession         Output-obsession
Integrity                      Proceduralism          Encoded-proceduralism
Improvement                    Rigidity               Encoded-rigidity
Legitimacy                     Politics of scandal    Blame the technology (politics of blamelessness)
Catharsis                      Scapegoating           Blame the technology (sacrifice the system)

Table 2: Potentially exacerbated dysfunctions through the addition of technology (adapted from Bovens, 2005, p. 194)
5. Moving forward: potentials for public sector accountability
As we move towards the increased application of technology in the public sector, which we almost inevitably will in order to achieve the increased efficiencies offered by new ICTs, thinking through how we design accountabilities needs to be a central part of information systems (IS) design in the public sector. As discussed, technology only superficially reveals the inner workings of a bureaucracy. People still need to make decisions, and thus technology will be limited in its ability to reveal the full content and context of these decisions. For these activities, internal accountability mechanisms are perhaps still central. The technology will also help to structure certain activities, constraining and enabling human decision-making in significant ways,
sometimes for the benefit of the public sector through a reduction of corrupt or non-professional activities. Although humans delimit the space in which technology performs, technologies in turn set conditions on the range of actions humans can perform, often in ways not anticipated in their design. Technological artefacts persuade, facilitate, and enable particular human cognitive processes, actions, or attitudes, while constraining, discouraging, and inhibiting others (Kallinikos, 2004a).
As technology is increasingly used to automate decision-making processes, we may need to conceive of new forms of transparency and accountability. If technology is to increase transparency, and thus potentially lead to greater accountability, especially towards the public, then we need to make transparent the major decision-making points embedded in the software. These decision-making points are where, as discussed, values are embedded in the technology. The decision-making criteria, and the types of information upon which these decisions are based, can be made explicit.
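To make this concrete, consider the following minimal sketch in Python. It is purely hypothetical – the criteria, thresholds, and field names are our illustrative assumptions, not drawn from any system discussed in this paper – but it shows one way automated decision software could record each encoded criterion, together with the information it draws upon, so that the embedded decision points remain open to scrutiny rather than buried in the code:

```python
# Illustrative sketch (hypothetical): a low-level automated decision
# whose decision-making criteria, and the information they draw upon,
# are recorded explicitly so the values encoded in the software can be
# inspected and audited.

from dataclasses import dataclass, field


@dataclass
class DecisionRecord:
    criterion: str   # the encoded rule, stated in plain language
    inputs: dict     # the information the decision was based on
    outcome: bool    # the result of applying the criterion


@dataclass
class EligibilityDecision:
    granted: bool
    trace: list = field(default_factory=list)


def decide_benefit(applicant: dict) -> EligibilityDecision:
    """Apply each encoded criterion in turn, recording it for audit."""
    trace = []

    income_ok = applicant["annual_income"] < 20_000  # hypothetical threshold
    trace.append(DecisionRecord(
        criterion="annual income below 20,000",
        inputs={"annual_income": applicant["annual_income"]},
        outcome=income_ok,
    ))

    resident_ok = applicant["resident"]
    trace.append(DecisionRecord(
        criterion="applicant is a registered resident",
        inputs={"resident": applicant["resident"]},
        outcome=resident_ok,
    ))

    return EligibilityDecision(granted=income_ok and resident_ok, trace=trace)


decision = decide_benefit({"annual_income": 18_500, "resident": True})
for record in decision.trace:
    print(record.criterion, record.inputs, record.outcome)
```

In such a design, the audit trail is a first-class output of the system rather than an afterthought: each decision can be explained to the citizen it affects in terms of the criteria actually applied.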
5.1 Developing specific accountability mechanisms
A central point from the empirical cases covered is that the implications for accountability are a function of the type of e-government implementation. To nuance our discussion of accountability, we here discuss different potential accountability mechanisms depending upon the level of automation. Central to the following discussion is the notion that finding appropriate accountability arrangements requires a broader socio-technical perspective. With this in mind, the discussion below tries to stay at a level low enough to engage with the influential elements of the context, yet abstract enough that we might find lessons applicable to other e-government applications in different contexts. This implies that any suggested "solutions" are necessarily partial; that is, it is always necessary to consider how they might interact, favourably or unfavourably, with any given context.
5.1.1 Accountability in low-level automation
In the context of low-level automation, software does not make decisions. Rather, as we established above, it simply automates what are generally well-established processes and, in doing so, may also constrain the activities of public sector bureaucrats. For such applications, one possible way forward would be to pursue a strategy of opening up the software code to public scrutiny. This strategy has been suggested elsewhere, but not in the context of e-government systems (David, 2004).
For some cases of low-level automation, the open source strategy would appear to add nothing in terms of accountability benefits. For example, with the e-procurement process described earlier, opening up the source code does not influence the subjective dimensions of the procurement officer's work, in which discretion is important. In such cases, we see no pressing case for open source implementations. Likewise, it seems excessive to make the inner workings of form-processing code available. Such situations call for a simpler and more direct accountability arrangement in which someone can be held directly responsible for errors of functioning.
Software in e-voting machines, however, would seem to benefit from open source code. If e-voting technology is proprietary or otherwise unavailable to outside scrutiny, the integrity of the whole system has been sacrificed for the value of efficiency and, perhaps, a false sense of security. Opening up the source code to analysis arguably not only reduces security concerns, but also goes a long way towards ensuring public trust – an essential public value when it comes to public elections. Whatever benefits hidden code might offer (if any) are quickly outweighed by the benefits of code transparency.
However, in such systems publicly available software addresses only one component of the operation of e-voting machines, as it leaves the hardware unaddressed. Indeed, the Dutch government recently decided to switch back from machines to the prior, paper-based election process. In 2007, a special committee advised the government to refrain from using these machines after they were shown to be too vulnerable to tampering. The activist group Wij vertrouwen stemcomputers niet (which translates as "we do not trust voting computers") demonstrated that the hardware of the system could be rigged relatively easily. In May 2008 the government decided that, until an adequate alternative is available, the Netherlands will vote by red pencil and paper.6 The government website states that not only are the machines too vulnerable, but that "the development of new applications requires a substantial investment in terms of finance as well as organisation" (authors' translation). It states that the new e-voting system does not add value above and beyond the benefits of a pen and paper system.
The Chilean e-tax system represents another dynamic. Many aspects of codifying tax processing would appear to make it a potentially straightforward case for open source, low-level automation. The code is a direct translation of tax policy, and the data format is ideal for codification. However, not all tax codes are as simple as Chile's; moreover, Chile underwent an extensive simplification of the tax process before it bought computers (Constance, 2000). Highly complex tax legislation might require systems that fall into higher-level categories of automation and, hence, might be subject to different considerations.

6 See: http://www.minbzk.nl/actueel/112441/nieuw (in Dutch)
It should be noted that placing source code online also introduces another powerful incentive into the mix. If software designers know that the code is publicly available, then there is a strong incentive to behave appropriately (Meijer, 2005). Meijer (2007), in a study of the impacts of transparency on Dutch schools and hospitals, found that information transparency did indeed have an impact on the activities of public service organisations. The fear of negative publicity in the media prompted public services to try to improve on the performance indicators being reported. The public sector did not seem to react to citizen stakeholder exit and voice mechanisms, but instead picked up on the transparency itself as the motivating signal.
That said, this strategy also means that the public place their trust in a relatively small group of software and hardware experts to do the monitoring. This introduces problems of scale if more and more software were to be opened to the public. The strength of the standard paper-based voting system is that it is open to scrutiny by a much wider, and more politically diverse, segment of the public, should they so desire.
In systems where open source does not appear applicable, or is insufficient, other accountability arrangements need to be devised. The situation in Chile, where citizens are responsible for the tax filing data, points to one such arrangement. The accuracy of the tax form is the responsibility of the citizen, and the government's provision of data is simply a convenient service. In this case, citizens would appear to be best situated to deal with the problem: the citizen has the ultimate say over the validity of the data, and a clear mechanism exists for the citizen to rectify errors.
In other situations, where citizens are not responsible for the output, there must be a means to acknowledge, make amends for, and apologise for errors or mistakes that occur, as Kuflik (1999) notes. This is where humans must step in and do the repair-work, both in terms of technological errors and in terms of human relations. We can imagine that, in many instances, automated decisions might only be in error at the margins, affecting only a small percentage of citizens. This makes collective action more difficult. If the government wants to balance the efficiencies of automation with the public values of trust and accountability, then a logical step is the establishment of a contract with the public concerning the ability to appeal decisions. This process should guarantee a response to citizens' complaints within a particular time frame. Something like this would have helped greatly in the inner-city school programme funding case, where a simple technical error caused havoc because there was no policy in place to deal with such situations.
5.1.2 Accountability in higher-level automation
In devising and establishing contracts between the government and the public, as we move to higher levels of automation, issues of accountability arise at an earlier stage than implementation and use. This is because the values that underlie the decision-making component of these technologies are built into the technology. For example, our discussion of the biometric software in the automated fingerprint ID system demonstrated that important decision-making processes are hidden behind complex algorithms. The discussion underlines the point, illustrated by literature in the field of science and technology studies (STS), that the design of technological devices reflects the values and world-views of their designers (Akrich, 1992; Friedman & Nissenbaum, 1996; Woolgar, 1987). Given the efficacy of technologies, these in-built values can challenge and conflict with the prevailing values of the context into which they are introduced. This point underscores the responsibility of the companies and research institutions that develop these technologies. As van den Hoven stresses in his discussion of value sensitive design, "A value analysis needs to be made in advance and protocols need to implement them. No IT application of this type can work satisfactorily if its value implementation is inadequate" (van den Hoven, 2007, p. 3).
Addressing ICT and accountability issues therefore requires a broader focus that extends over time and across organisations. Nissenbaum conceives of accountability as something very akin to answerability, which can be used as "a powerful tool for motivating better practices, and consequently more reliable and trustworthy systems" (Nissenbaum, 1997, p. 43). Accepting an explanation such as "it is the computer's fault", she argues, stands in the way of a "culture of accountability" aimed at maintaining clear lines of accountability. A culture of accountability is worth pursuing because a developed sense of responsibility is a virtue to be encouraged, and because of its consequences for social welfare. Holding people accountable for the harms or risks caused by computer systems provides a strong motivation for minimising them. Moreover, accountability can provide a starting point for assigning just punishment. Nissenbaum's instrumental take on accountability shifts the focus to the socio-technical system in which technologies are
developed and used. It underscores that increasingly autonomous technologies are the result of choices made in their development, rather than an inevitable outcome.
For example, the dependency of public sector institutions on high-technology companies and research centres necessarily implicates these organisations in the distribution of responsibility, as decision-points are located within these organisations as well. Nissenbaum has argued, "If we consistently respond to complex cases by not pursuing blame and responsibility, we are effectively accepting agentless mishaps and a general erosion of accountability" (Nissenbaum, 1994, p. 76). This implies that, to prevent the gradual erosion of accountability in such cases of high-level automation, it might prove necessary to locate new accountabilities with the third parties (i.e., the companies, research centres, and others) that provide many of the complex technologies employed in such systems. These parties should be acknowledged in any potential contract between a government and its public.
The notion of value sensitive design also raises another central question: are new technologies worth certain social costs? An answer can only be found in concrete practices, where values can be weighed and discussed. Here, the proposed British national biometric ID scheme appears to be a clear example of how certain technologies might simply not be worth the social costs. Perhaps at this point the push for efficiency in public sector delivery by way of an ID card scheme with serious surveillance implications (i.e., the databases that comprise the scheme and the extensive use of biometric identifiers, among other features) too greatly compromises other important values, such as privacy and personal autonomy. As with the accountability argument we have made throughout this paper, it is crucial that the rich variety of values that make up the public sector be respected, balanced and, where possible, promoted.
An important point to keep in mind is that one-size-fits-all solutions do not exist. Automation changes practices and displaces accountabilities. The development and implementation of technologies therefore requires continuous interrogation and discussion of the values at stake and the way in which new technologies can change practices. Thus, we need to focus on organising for accountability and not be distracted by rhetoric about increasing complexity or the benefits of efficiency and effectiveness. So far, we fear, efficiency as a value has unjustly prevailed over other crucial public values. We therefore underscore the need for public debate about the conditions under which ICTs should or should not be developed, integrated, and used.
6. Conclusion
ICT implementation in the public sector is about maintaining a suitable balance. It is a balance between competing public values. It is also a balance between reaping the benefits of efficiency, effectiveness, and transparency that come from improved information flows and automated processes, and the tendency of technology towards certain dysfunctions. This balance extends beyond the implementation of the technology to the broader socio-technical system. The shift in responsibilities, roles, and processes that comes from the implementation of ICTs in the public sector requires a thoughtful and judiciously applied set of social accountability processes. Where technologies intervene in or break down existing accountability processes, or introduce new problematic dimensions, new social arrangements have to be developed as accountability repair-work.
As we have tried to illustrate in this paper, different types of ICT implementations require different types of accountability mechanisms to compensate for their inherent weaknesses. This raises an important question: what agents are best held accountable for what types of actions and through what accountability arrangements? Through our empirical examples, we see that different types of e-services alter the form and content of the available information regarding the underlying process. As such they can potentially support accountability processes. At the same time, they can also displace accountabilities or even exacerbate accountability dysfunctions. Moreover, high-level automation can encapsulate decision-making processes, potentially reducing human discretion. Removing human discretion can sometimes prove a good thing when, for example, it reduces corruption in the public sector as is intended in many e-government programmes. However, the removal of the human element in decision-making can also prove problematic when the outcome is the severe displacement or dissolution of accountability.
Such a consideration alters the calculus behind the implementation of ICT in the public sector. In situations where dysfunctions are prone to emerge, and where there is as yet no acceptable social mechanism to balance them, the implementation must be called into question. The question is: are the efficiency and effectiveness benefits worth the potential dysfunction? This balance point will, of course, differ across cultures, with different historical orientations towards government and varied public value systems influencing the calculus.
If we could peer into the future and observe newer, faster, and more "intelligent" technologies, we believe we would find that negotiating this balance remains a central concern for the governance of information systems in the public sector. Concerns about the increasing pervasiveness and ubiquity of ICTs in the public sector are often countered by the promise of increasingly intelligent technologies that will prove capable of reasoning about the moral and social consequences of their actions. The ability to operate without the continuous direction of human operators is an appealing feature of computer-regulated systems that perform tasks that are too complex, too dangerous, or that require accurate time-critical control (see, for example, Noorman, Forthcoming). It remains an open question to what extent new developments in software can codify and automate "non-legal, non-routine, street-level interactions, such as teaching, nursing, and policing" (Bovens & Zouridis, 2002, p. 180). However, what the foregoing discussion illustrates is that new technologies will not independently provide an overarching solution to the problem of accountability in the public sector. A preoccupation with technology-centred solutions distracts us from addressing which, why, and how particular accountabilities should be enforced or shifted. The analysis in this paper underscores the responsibility of individuals in constructing, pursuing, integrating, and accepting ICT solutions.

References

Abel, D. (2007, November 2) Technicality May End Student Program. The Boston Globe, from http://www.boston.com/news/local/articles/2007/11/02/technicality_may_end_student_program/
Akrich, M. (1992) The De-Scription of Technical Objects. In W. Bijker & J. Law (eds.), Shaping Technology/Building Society: Studies in Socio-Technical Change. Cambridge: The MIT Press, pp.205-224.
Anderson, K. V. (2004) E-Government and Public Sector Process Rebuilding: Dilettantes, Wheel Barrows, and Diamonds. Dordrecht: Kluwer Academic Publishers.
Avgerou, C., Ciborra, C., Cordella, A., Kallinikos, J., & Smith, M. L. (2005) The Role of Information and Communication Technology in Building Trust in Governance: Toward Effectiveness and Results. Washington, D.C.: Inter-American Development Bank.
Barata, K., & Cain, P. (2001) Information, Not Technology, Is Essential to Accountability: Electronic Records and Public-Sector Financial Management. The Information Society 17: 247-258.
Bellamy, C., & Taylor, J. A. (1998) Governing in the Information Age. Buckingham: Open University Press.
Bhatnagar, S. (2004) E-Government: From Vision to Implementation. London: Sage Publications Ltd.
Borgmann, A. (1984) Technology and the Character of Contemporary Life: A Philosophical Inquiry. Chicago: The University of Chicago Press.
Bovens, M. (2005) Public Accountability. In E. Ferlie, L. E. Lynn, Jr. & C. Pollitt (eds.), The Oxford Handbook of Public Management. Oxford: Oxford University Press, pp.182-208.
Bovens, M., & Zouridis, S. (2002) From Street-Level to System-Level Bureaucracies: How Information and Communication Technology Is Transforming Administrative Discretion and Constitutional Control. Public Administration Review 62(2): 174-184.
Braithwaite, J. (1998) Institutionalizing Distrust, Enculturating Trust. In V. Braithwaite & M. Levi (eds.), Trust and Governance. New York: Russell Sage Foundation, pp.343-375.
Cherry, M., & Imwinkelried, E. (2006) A Cautionary Note About Fingerprint Analysis and Reliance on Digital Technology. Judicature 89(6): 334-338.
Collins, H. (1990) Artificial Experts: Social Knowledge and Intelligent Machines. London: The MIT Press.
Collins, H., & Kusch, M. (1998) The Shape of Actions: What Humans and Machines Can Do. Cambridge, Massachusetts: The MIT Press.
Constance, P. (2000, August) Simplify, Simplify, Simplify and Then Buy the Computers. IDBAmerica: Magazine of the Inter-American Development Bank.
Cordella, A. (2007) E-Government: Towards the E-Bureaucratic Form? Journal of Information Technology 22: 265-274.
David, S. (2004) Opening the Sources of Accountability. First Monday 9(11).
Davis, C. J., & Hufnagel, E. M. (2007) Through the Eyes of Experts: A Socio-Cognitive Perspective on the Automation of Fingerprint Work. MIS Quarterly 31: 681-704.
Dunleavy, P., Margetts, H., Bastow, S., & Tinkler, J. (2006) Digital Era Governance: IT Corporations, the State, and E-Government. Oxford: Oxford University Press.
Elzinga, D. J. (1989) Politieke Verantwoordelijkheid: Over Verval en Vooruitgang in de Politieke Democratie. In M. Bovens, C. Schuyt & W. Witteveen (eds.), Verantwoordelijkheid: Retoriek en Realiteit. Zwolle: Tjeenk Willink, pp.63-79.
Emanuel, E. J., & Emanuel, L. L. (1996) What Is Accountability in Health Care? Annals of Internal Medicine 124(2): 229-239.
Eschenfelder, K. R. (2004) Behind the Web Site: An Inside Look at the Production of Web-Based Textual Government Information. Government Information Quarterly 21: 337-358.
Eshleman, A. (2008) Moral Responsibility. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2008 ed.).
Fountain, J. E. (2001) Building the Virtual State: Information Technology and Institutional Change. Washington, D.C.: Brookings Institution Press.
Fountain, J. E. (2002) Toward a Theory of Federal Bureaucracy for the Twenty-First Century. In E. C. Kamarck & J. S. Nye, Jr. (eds.), Governance.com: Democracy in the Information Age. Washington, D.C.: Brookings Institution Press.
Friedman, B., & Nissenbaum, H. (1996) Bias in Computer Systems. ACM Transactions on Information Systems 14(3): 330-347.
Gelders, D. (2005) Public Information Provision About Policy Intentions: The Dutch and Belgian Experience. Government Information Quarterly 22: 75-95.
Grimsley, M., & Meehan, A. (2007) E-Government Information Systems: Evaluation-Led Design for Public Value and Client Trust. European Journal of Information Systems 16: 134-148.
Hardin, R. (1991) Trusting Persons, Trusting Institutions. In R. J. Zeckhauser (ed.), Strategy and Choice. Cambridge: The MIT Press, pp.185-209.
Heeks, R. (1998) Public Sector Accountability: Can IT Deliver? Information Systems for Public Sector Management Working Paper Series, No. 1. Institute for Development Policy and Management, University of Manchester.
Heeks, R. (2003) Most E-Government-for-Development Projects Fail: How Can Risks Be Reduced? iGovernment Working Paper Series. Institute for Development Policy and Management, University of Manchester.
Heeks, R. (2005) E-Government as a Carrier of Context. Journal of Public Policy 25(1): 51-74.
Introna, L. D., & Nissenbaum, H. (2000) Shaping the Web: Why the Politics of Search Engines Matters. The Information Society 16(3): 169-185.
Introna, L. D., & Wood, D. (2004) Picturing Algorithmic Surveillance: The Politics of Facial Recognition Systems. Surveillance & Society 2(2/3): 177-198.
Johnson, D. G. (2001) Computer Ethics (3rd ed.). Upper Saddle River, New Jersey: Prentice Hall.
Johnson, D. G. (2006) Computer Systems: Moral Entities but Not Moral Agents. Ethics and Information Technology 8: 195-204.
Kallinikos, J. (2001) The Age of Flexibility: Managing Organizations and Technology. Lund: Academia Adacta AB.
Kallinikos, J. (2004a) Farewell to Constructivism: Technology and Context-Embedded Action. In C. Avgerou, C. U. Ciborra & F. F. Land (eds.), The Social Study of Information and Communication Technology: Innovation, Actors, and Contexts. Oxford: Oxford University Press, pp.140-161.
Kallinikos, J. (2004b) The Social Foundations of the Bureaucratic Order. Organization 11(1): 13-36.
Kallinikos, J. (2006) The Consequences of Information: Institutional Implications of Technological Change. Cheltenham: Edward Elgar.
Kuflik, A. (1999) Computers in Control: Rational Transfer of Authority or Irresponsible Abdication of Autonomy? Ethics and Information Technology 1: 173-184.
Layne, K., & Lee, J. (2001) Developing Fully Functional E-Government: A Four Stage Model. Government Information Quarterly 18: 122-136.
Lichtblau, E. (2006, November 30) U.S. Will Pay $2 Million to Lawyer Wrongly Jailed. New York Times.
LSE Identity Project. (2005) The Identity Project: An Assessment of the UK Identity Cards Bill and Its Implications. London: Department of Information Systems, London School of Economics and Political Science.
Meijer, A. J. (2005) 'Public Eyes': Direct Accountability in an Information Age. First Monday 10(4).
Meijer, A. J. (2007) Publishing Public Performance Results on the Internet: Do Stakeholders Use the Internet to Hold Dutch Public Service Organizations to Account? Government Information Quarterly 24: 165-185.
Michael, B. (2004) Questioning Public Sector Accountability [WWW Document] http://search.ssrn.com/sol3/papers.cfm?abstract_id=553342 (accessed May 31st 2008)
Moon, M. J. (2002) The Evolution of E-Government among Municipalities: Rhetoric or Reality? Public Administration Review 62(4): 424-433.
Nissenbaum, H. (1994) Computing and Accountability. Communications of the ACM 37(1): 72-80.
Nissenbaum, H. (1997) Accountability in a Computerized Society. In B. Friedman (ed.), Human Values and the Design of Computer Technology. Cambridge: Cambridge University Press, pp.41-64.
Noorman, M. (Forthcoming) Mind the Gap: A Critique of Human/Technology Analogies in Artificial Agents Discourse. Unpublished PhD thesis, Maastricht University, Maastricht.
O'Neill, O. (2002) A Question of Trust: The BBC Reith Lectures 2002. Cambridge: Cambridge University Press.
Offe, C. (1999) How Can We Trust Our Fellow Citizens? In M. E. Warren (ed.), Democracy and Trust. Cambridge: Cambridge University Press, pp.42-87.
Reddick, C. G. (2005) Citizen Interaction with E-Government: From the Streets to Servers? Government Information Quarterly 22: 38-57.
Relyea, H. C. (2002) E-Gov: Introduction and Overview. Government Information Quarterly 19: 9-35.
Ronaghan, S. A. (2002) Benchmarking E-Government: Assessing the Progress of the UN Member States. New York: United Nations Division for Public Economics and Public Administration and American Society for Public Administration.
Schillemans, T. (2007) Verantwoording in de Schaduw van de Macht: Horizontale Verantwoording bij Zelfstandige Uitvoeringsorganisaties. Den Haag: Lemma.
Sheridan, T. B. (1992) Telerobotics, Automation, and Human Supervisory Control. Cambridge, MA: MIT Press.
Silcock, R. (2001) What Is E-Government? Parliamentary Affairs 54: 88-101.
Smith, M. L. (2007) Confianza a la Chilena: A Comparative Study of How E-Services Influence Public Sector Institutional Trustworthiness and Trust. Unpublished PhD thesis, London School of Economics and Political Science, London.
Solomon, R. C., & Flores, F. (2001) Building Trust: In Business, Politics, Relationships, and Life. New York: Oxford University Press.
Stahl, B. C. (2004) Information, Ethics, and Computers: The Problem of Autonomous Moral Agents. Minds and Machines 14: 67-83.
United Nations. (2003) World Public Sector Report 2003: E-Government at the Crossroads. New York: Department of Economic and Social Affairs, United Nations.
van den Hoven, J. (2002) Wadlopen bij Opkomend Tij: Denken over Ethiek en Informatiemaatschappij. In J. de Mul (ed.), Filosofie in Cyberspace. Kampen: Uitgeverij Klement, pp.47-65.
van den Hoven, J. (2007) ICT and Value Sensitive Design. In P. Goujon, S. Lavelle, P. Duquenoy, K. Kimppa & V. Laurent (eds.), IFIP International Federation for Information Processing, The Information Society: Innovations, Legitimacy, Ethics and Democracy (Vol. 233). Boston: Springer, pp.67-72.
Weare, C. (2002) The Internet and Democracy: The Causal Links between Technology and Politics. International Journal of Public Administration 25(5): 659-691.
West, D. M. (2005) Digital Government: Technology and Public Sector Performance. Princeton: Princeton University Press.
Willmott, H. (1996) Thinking Accountability: Accounting for the Disciplined Production of Self. In R. Munro & J. Mouritsen (eds.), Accountability: Power, Ethos and the Technologies of Managing. London: International Thompson Business Press.
Wong, W., & Welch, E. (2004) Does E-Government Promote Accountability? A Comparative Analysis of Website Openness and Government Accountability. Governance: An International Journal of Policy, Administration, and Institutions 17(2): 275-297.
Woolgar, S. (1987) Reconstructing Man and Machine: A Note on Sociological Critiques of Cognitivism. In W. Bijker, T. Hughes & T. Pinch (eds.), The Social Construction of Technological Systems. Cambridge: The MIT Press, pp.311-328.