Unintentional Power in the Design of Computing Systems


Unintentional Power in the Design of Computing Systems

Chuck Huff
Psychology Department
St. Olaf College

Originally published in T. W. Bynum & S. Rogerson (Eds.) (2003), Computer Ethics and Professional Responsibility (pp. 76-81), Basil Blackwell. This version has been somewhat modified from that original version and includes new references and text.

Copyright © 2003 by Charles W. Huff.


Abstract

Unintentional power is the ability to affect others even though one does not intend, or even foresee, the likely effect. Software designers and other computer professionals have this kind of power because computers instrument human action, and thus control it, or at least guide it. Examples of this power are drawn from accidents with medical technology and from gender bias embedded within educational software. I also make some suggestions about what computer professionals can do, and ought to do, about the power their role gives them.


Unintentional Power in the Design of Computing Systems

For in much wisdom is much grief: and he that increaseth knowledge increaseth sorrow.
-- Ecclesiastes 1:18

...computing professionals who design and develop systems must be alert to, and make others aware of, any potential damage...
-- ACM Code of Ethics, 1992

Why was the Hebrew scholar and author of Ecclesiastes so skeptical about the worth of knowledge? At least in our time, we find the rapid increase in knowledge to be both exhilarating and hopeful. As knowledge increases, we cure more diseases, connect more people, ease much poverty. Increases in knowledge certainly drive the technology industry and make "faster, better, more" almost a mantra of progress. So it can be surprising to read words like those above. They smack of obscurantism, obstruction, willful ignorance. Surely attitudes like this can only come from unreconstructed technophobes. The implied advice is to avoid sorrow by avoiding knowledge -- to retreat into ignorance.

By the time you are done with this article, I hope to have convinced you that knowledge brings with it increased responsibility. If I succeed, you may have some sympathy for the weariness of the ancient scholar. You may still reject the implied advice.

What sort of knowledge increases sorrow? At least for our purposes, it is the sort of knowledge that allows us to predict the possible effects of the products we design -- the sort of knowledge that makes us, at least in part, responsible for either producing or avoiding those effects. This knowledge makes our lives more complicated because it brings with it the sorts of "trouble" that involve more responsibility. Those who know about dangers or difficulties have a responsibility to take them into account. The ACM code recognizes this in the second quote at the beginning of this article.

Computer professionals have a responsibility to design products that are safe and that perform well the functions for which they were designed. For instance, the designers of the Therac-25 radiation therapy machine (Jacky, 1991; Leveson & Turner, 1992) knew the radiation their product used could be delivered in dangerous dosages. Yet they produced a machine which, when used under the standard conditions of busy hospitals, could and did result in serious mistakes in dosage. Most analyses of the design process in this case agree that the designers were negligent both in the initial design and in following up on reports of malfunction. As a result, several people died and many were injured.

The great advance of the Therac-25 was that all of its controls had been moved into software. The operator now interacted with the machine solely through the computer terminal. Safety interlocks that might prevent lethal dosage levels were also included in the software, and eliminated from the hardware of the machine. This allowed easy reprogrammability of the dosage levels, and easy maintenance and upgrading of the machine. It meant that the operator had more time to interact with the patient, which made treatment a more humane experience. It also meant that the safety interlocks depended upon the correctness of the software -- and not only the software, but the software as it was used by the technician.
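The kind of failure this dependency invites can be sketched in a few lines of code. What follows is a deliberately simplified illustration, not the actual Therac-25 software; every class, method, and mode name in it is hypothetical. It shows a safety check that trusts a software flag which is never reset when the operator re-edits the prescription, so a quick edit leaves the interlock relying on stale state. The published analyses describe race conditions of roughly this character between operator data entry and machine setup (Leveson & Turner, 1992).

```python
# A simplified, hypothetical sketch of a software-only interlock going stale.
# This is NOT the Therac-25 code; names and modes are invented for illustration.

class TherapyMachine:
    def __init__(self):
        self.mode = "electron-low"     # the dosage type the operator last entered
        self.target_in_place = False   # position of the metal beam target
        self.setup_complete = False    # the software interlock flag

    def run_setup(self):
        """Move the target to match the current mode (a slow, mechanical step)."""
        self.target_in_place = (self.mode == "xray-high")
        self.setup_complete = True

    def operator_edit(self, new_mode):
        """The console accepts the edit instantly; the bug is that it never
        clears setup_complete, so an earlier setup still looks valid."""
        self.mode = new_mode

    def fire_beam(self):
        if not self.setup_complete:
            print("MALFUNCTION 54")    # cryptic, and routinely ignored in practice
        elif self.mode == "xray-high" and not self.target_in_place:
            print("DANGER: high-power beam fired with no target in place")
        else:
            print(f"beam delivered: {self.mode}")

machine = TherapyMachine()
machine.run_setup()                 # machine is configured for the low-power mode
machine.operator_edit("xray-high")  # operator quickly re-edits the prescription
machine.fire_beam()                 # stale interlock: the DANGER branch is reached
```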
It turned out that if the technician set the machine for one type of dosage (a low-level electron beam) and then changed the setting to another type (a high-level electron beam with a metal target interposed to convert the beam to low-level X-rays), the machine would switch to the high-level electron beam but not interpose the target -- thus directly irradiating the patient with lethal levels of the electron beam. When this happened, the computer screen would simply indicate "malfunction 54" for incorrect dosage. But since "malfunction 54" occurred up to 40 times a day for entirely innocuous reasons (e.g., the beam was slightly "out of tune"), technicians learned to ignore it. When a malfunction occurs this regularly in a busy medical facility, it is no wonder the technicians ignored it rather than stopping for the day to recalibrate (and thereby making patients wait).

Had the designers thought carefully about the conditions under which their product would be used, they could have made a better attempt to avoid the delivery of lethal or harmful dosages. Had they considered how widespread the use of their product could become, they might have designed a feedback process that could have sent "fixes" to these widespread sites. They did not do these things. And, in part, they neglected to do them because they had construed their job narrowly as technically proficient design (Leveson & Turner, 1992).

Levels of Constraint in Software Engineering

But isn't the job of the software engineer simply to be technically proficient and to make the best technical decisions he or she can, on time and within budget? Certainly technical proficiency is crucial. And because technical proficiency is so difficult to achieve and to maintain, we often think it should be the only criterion by which work is measured. At least life would be simpler if this were so. But there are very few technical decisions that are entirely constrained by math and physics. Those that are (like some issues in queuing theory) are still likely to be only portions of a larger project that has additional constraints.

What are these additional constraints? Table 1 lists some of them. These constraints on design range from thoroughly tested design standards to concerns about the "worth" of computing to society. Many are clearly in the domain of "engineering" popularly conceived. Some are clearly far from engineering. Level 1 constraints are commonly covered in training programs that serve as gateways into the field. But even at this level value judgments abound: Which standards? How to resolve tradeoffs? Often these decisions are not based on mathematical proofs or on physical constraints. They are based instead on criteria whose applicability is at least a matter for debate. And these debates are based on disagreements about what things we should value more. Thus, value judgments.

Table 1: Constraints on the Design of Systems

Level 1: Technical systems design issues, standards, tradeoffs in design and performance
Level 2: Company policies, specifications, budgets, project timelines
Level 3: Anticipated uses & effects; interactions with other technologies & systems
Level 4: Larger "impact on society" issues (e.g., privacy, property, power, equity)

For example, the decision to implement all the safety interlocks on the Therac-25 in the software -- and none in the hardware -- was based (at least in part) on the value of having reprogrammable drivers for the machine. Easy reprogrammability is of value because it reduces the cost of upgrades. It did not have to be done this way, but if you value flexibility and easy upgrading, you should design this way. In this instance, the designers mistakenly underestimated other, equally important values, such as those associated with not causing injury to patients. They also underestimated the difficulty of controlling the dosage levels in a busy hospital radiation therapy room. Their neglect of this difficulty was based on their ignorance of the hurried and hectic conditions that occur in these settings. In a very important sense, these designers had it "within their power" to design a safe product, had they only inquired about the conditions under which their product would be used. Subsequent designs now take these difficulties into account (Jacky, 1991).

Another, less catastrophic example may help. There is widespread concern about the gender imbalance in computer science in the United States (Martin & Murchie-Beyma, 1992; Camp, 1997). The large majority of graduate students in computing (and an even larger majority of professors) are men. Fewer undergraduate women are enrolling in computer science courses today, so it seems as though this imbalance is likely to continue. There is some consensus among researchers that a major reason women do not pursue careers in computer science or in related fields is that, from early adolescence onward, computing is defined as a male field and most uses of computing are portrayed as interesting only to males (Martin & Murchie-Beyma, 1992).

We were interested in this claim, and we designed a study to determine the extent to which this portrayal of computing as a male domain had found its way into the software that students might use in school. We had teachers design educational software for either boys, girls, or (gender unspecified) "children" (Huff & Cooper, 1987). We then had the designers and independent raters rate those programs on characteristics like time pressure, verbal interaction required, control given to the user, and so on. We found that programs designed for boys looked like games (with time pressure, eye-hand coordination, and competition most important), while programs designed for girls looked like "tools" for learning (with conversation and goal-based learning). So far, this is unremarkable. However, programs designed for "children" looked just like those designed for boys. Thus, programs designed for "students in general" were really programs designed for boys. Interestingly, 80% of the designers of our programs were female, and many of them expressed concern that educational software was male-biased. Thus, the portrayal of computing as a male domain can be subtly and strongly woven into the software itself, even by well-meaning female educators (see Friedman & Nissenbaum (1996) for a detailed exposition of how bias is incorporated into computer systems).

This point bears repeating at a more general level: the design of the software itself was affected by the social expectations of the designers. This is as true for the more subtle effect of gender bias in design as it is for the clearly mistaken (and clearly deadly) expectations of the designers of the Therac-25.

However, even if value judgments show up in the lower levels of system engineering, perhaps it is still possible to limit the work of the software designer to only those relatively simple value judgments that occur when choosing among algorithms or standards. Unfortunately, even a little knowledge about how computing systems are used will increase the sorrow of those who hope this. Look again at the two examples we have covered. In both cases, issues from higher levels in the table of constraints worked their way down to the system design level. Choices about where to implement safety checks should have been made based upon better knowledge of the hectic work environment of radiation therapy. Choices about basic program characteristics should have been made based upon broader assumptions about who the users of the software would be.
Thus, though it might be desirable to cleanly delineate the work of the software engineer, it is clearly impossible. Good engineering at the level of basic system design requires attention to many levels of constraint. So, if you limit yourself to considering only those constraints that are clearly at the "engineering" level, you can have an effect on the world that you would much prefer to have avoided. This is true, to a greater or lesser extent, for both of the examples we have seen here. And thus, to a greater or lesser extent, the designers of these products had power over the users of their products. They were unaware of the effects their designs might have. They certainly did not intend any negative effects from their designs. But those effects occurred nonetheless.

Unintentional Power

I call this power to harm[1] others in ways that are difficult to predict "unintentional power." Obviously, the design decisions that software engineers make will affect the performance of a product, and thereby affect the users of that product. They have these effects because computing systems instrument human action, in much the same way that hammers instrument human action (Johnson, 1998). There are some things one can do with a hammer, and other things best not attempted. Thus designers of tools structure the way the users of those tools can behave, and thereby affect the users, and possibly others. Many of these effects are intentional: the product works faster, is easier to maintain, has increased capability. Some are unintentional: the product is more difficult to maintain, it confuses or frustrates users, it kills users.

In the same way that a large man may unintentionally thrust people aside as he carries an awkward package down a street, a software designer may unintentionally harm users of a product that she has designed for a good purpose. Both are exercising power. Both the software designer and the package carrier are affecting others intentionally and unintentionally. One cannot conveniently claim credit for the positive effects and deny responsibility for the negative ones.

To understand the issue of unintentional power, we must first get the most useful definition of this sort of power firmly in hand. To begin with, it is similar to both the physical-science definition of power (the potential to do work) and our usual social definition of power (the ability to influence others). Both definitions make it clear that intention is not essential: in the one case it is simply irrelevant, and in the other, we recognize that one can have both intentional and unintentional influence on others (remember the package carrier?). Thus, we all have unintentional power associated with our actions whenever those actions have unintended consequences. The issue then becomes: should we have been aware of the likelihood that those consequences would occur? Could we have foreseen them? Might we have been more careful? Here is an important principle about unintentional power: it carries with it a concomitant responsibility to be as aware as reasonably possible of the consequences of our actions.

The Problems of Unintentional Power

One difficulty with unintentional power in computing systems is that the designers are often far removed from the situations in which their power (now carried by the software they have designed) has its effect. Software designed in Chicago might be used in Calcutta. Software, or bits of software, might be reused for purposes other than those its original designer envisioned. Software might be used in environments that are more complex or more dangerous than those for which it was initially designed. Thus, the person or persons with the power are removed from those who are affected. This makes it difficult for the designer to foresee possible consequences.

[1] In this paper, I am only looking at the negative effects of unintentional power. Obviously, positive effects are possible. One of the claims of the Open Source movement is that positive but unintentional effects can come from the decision to make source code publicly available (Raymond, 2001).


This distancing also makes it difficult for the user to recognize that it was, in fact, the designer who affected them. People are likely to blame themselves for the difficulties the software produces rather than to see them as an issue of bad design (Huff, Fleming & Cooper, 1991). After all, they are the ones closest to the harm, and it often does not occur to them that their software was designed by someone. Responsibility for harm, then, becomes difficult to assign and easy to avoid.

Another effect of distancing is that the response to the problem cannot be standardized for any particular domain of applications; the effects change too much from implementation to implementation. In response to a similar problem, computer professionals in human-computer interaction have taken to field testing, iterative reviews, user testing, and other methods to improve the odds that their products will fit in the particular domains for which they are designed (Landauer, 1995; Shneiderman, 1992; Borenstein, 1991). There are simply far too many possible consequences to catch them all (on time and within budget).

Coping With Unintentional Power

Here, then, comes the rub: what is a "reasonable" attempt on the part of a designer to avoid the negative consequences of unintentional power? Clearly, anyone who now designs software for radiation therapy should take into account the conditions under which their product will be used. We know this now because several people have died. I would submit, and hope you agree, that this sort of "user testing" is to be avoided. A general lesson we might draw from this is that designers of safety-critical systems need to take more constraints into account than simply low-level design constraints. But surely we cannot expect a wide-ranging inquiry, looking at all the levels of constraint listed in Table 1, before we decide to build any system?

Still, there are some things we can do, and that we should be expected to do. Research on the broader effects of computing has advanced enough to have some clear things to say about the dangers of ignoring the constraints listed earlier in this article. Software design is no longer in its infancy, and it should be expected to develop methods to deal with these constraints without bankrupting designers or their employers. Here are some beginning suggestions as to how we might go about addressing these issues.

Recognize the problem and attempt to limit its domain. Clearly we cannot address a problem we prefer to ignore. Some designers prefer to inflate the costs of looking at these issues (e.g., "you mean we have to look at every possible implementation?"), declare the problem too large and frightening to approach, and then ignore it. Ignoring it will not make it go away. A better approach is to recognize the parameters of the problem, attempt to limit the domains where it can cause harm, and then address it within those limited domains. Developing standards (for safety-critical computing, for computing interfaces, for data exchange, etc.) is one way to limit the domain.

Use developing methods to inform yourself of those effects worth predicting. Methods for quality-based (TQM) software design are now becoming available (Dunn, 1994; Arthur, 1992). In addition to these methods, the use of a social impact statement, or SIS (Huff, 1996; Shneiderman, 1990; Shneiderman & Rose, 1996), can help you both to determine what sorts of effects you should care about and to investigate the constraints that will guide your solution.
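To make this concrete, here is one hedged sketch of how such a review might be instrumented; it is not the SIS or SoDIS format itself, and the class names and questions are invented for illustration. The point is only that questions keyed to the levels in Table 1 can be recorded, tracked, and checked alongside other release criteria.

```python
# A hypothetical sketch of a social-impact checklist gate keyed to the levels in
# Table 1. It is not the SIS or SoDIS format; names and questions are invented.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ImpactQuestion:
    level: int            # which constraint level from Table 1 the question probes
    text: str             # the question the design team must answer
    answered: bool = False
    notes: str = ""

@dataclass
class ImpactReview:
    questions: List[ImpactQuestion] = field(default_factory=list)

    def unanswered(self) -> List[ImpactQuestion]:
        return [q for q in self.questions if not q.answered]

    def ready_for_release(self) -> bool:
        # A simple gate: every question must at least have been considered
        # and documented before the build is approved.
        return not self.unanswered()

review = ImpactReview(questions=[
    ImpactQuestion(3, "Under what workplace conditions will operators actually use this system?"),
    ImpactQuestion(3, "What happens when a warning appears dozens of times a day?"),
    ImpactQuestion(4, "Which users does the 'general' version of the design implicitly assume?"),
])

if not review.ready_for_release():
    for q in review.unanswered():
        print(f"Level {q.level} question still open: {q.text}")
```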
Rogerson & Gotterbarn (2001) make credible claims that carrying out this kind of impact analysis is not an additional burden on the software design process; it is simply part of the process, and it results in good software design and in financial savings to clients. They have designed a method called SoDIS (Software Development Impact Statement) to investigate the sorts of questions we are concerned with here, along with supporting software to help instrument the method. But even with all this help, TQM methods or SIS approaches will not make the decisions for you. Making these decisions depends upon the computing professional's judgment for a particular project in a particular setting. This is what professionalism is about.

Make provision in the life-cycle of software to look for the effects. You simply cannot identify all the possible effects of a computing system ahead of release. For this reason, you should be ready to identify them after release, and as soon as possible after release. Software design methods currently incorporate a life-cycle design philosophy, and it is relatively easy to incorporate some of the methods from social impact statements or SoDIS into this life-cycle model.

Conclusion

The approaches I recommend are not a sea change in software engineering standards but an evolutionary step. The standards are already designed to take account of late-occurring effects and to make designers aware of interactions between the software and some environmental issues. Quality design requires that we broaden our vision of the constraints we should consider in our designs. You cannot make all designs safe under all conditions, but you can make them more safe, or more usable, or more equitable, under more conditions. Software engineers should take responsibility where emerging methods allow them to, and should be humble about their ability to guarantee perfect functioning where they cannot measure or test performance in real conditions.

By increasing knowledge about the social effects of software, and by adopting methods that allow us to anticipate these effects, we may be able to decrease sorrow, and thus confound the prophet's prediction. But we will do so at the expense of our own more simplistic approaches to software design.


References

Arthur, L. J. (1992). Improving software quality: An insider's guide to TQM. New York: John Wiley.

Borenstein, N. S. (1991). Programming as if people mattered: Friendly programs, software engineering, and other noble delusions. Princeton, NJ: Princeton University Press.

Camp, T. (1997). The incredible shrinking pipeline. Communications of the ACM, 40(2), 103-110.

Dunn, R. H. (1994). TQM for computer software. New York: McGraw-Hill.

Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3).

Huff, C. W., & Cooper, J. (1987). Sex bias in educational software: The effects of designers' stereotypes on the software they design. Journal of Applied Social Psychology, 17, 519-532.

Huff, C. W. (1996, February). Practical guidance for teaching the Social Impact Statement (SIS). In C. Huff (Ed.), Computers and the quality of life: The proceedings of the Symposium on Computers and the Quality of Life (pp. 86-89). New York: ACM Press.

Huff, C. W., Fleming, J. F., & Cooper, J. (1991). The social basis of gender differences in human-computer interaction. In C. D. Martin & E. Murchie-Beyma (Eds.), In search of gender-free paradigms for computer science education (pp. 19-32). Eugene, OR: ISTE Research Monographs.

Jacky, J. (1991). Safety-critical computing: Hazards, practices, standards, and regulation. In C. Dunlop & R. Kling (Eds.), Computerization and controversy (pp. 612-631). New York: Academic Press.

Johnson, D. (2000). The future of computer ethics. In G. Collste (Ed.), Ethics in the age of information technology (pp. 17-31). Linköping, Sweden: Linköping University Press.

Landauer, T. K. (1995). The trouble with computers: Usefulness, usability, and productivity. Cambridge, MA: MIT Press.

Leveson, N. G., & Turner, C. S. (1992). An investigation of the Therac-25 accidents. Technical Report #92-108, Department of Information and Computer Science, University of California, Irvine.

Martin, C. D., & Murchie-Beyma, E. (Eds.) (1992). In search of gender-free paradigms for computer science education. Eugene, OR: International Society for Technology in Education.

Raymond, E. S. (2001). The cathedral & the bazaar: Musings on Linux and open source by an accidental revolutionary. New York: O'Reilly & Associates.

Rogerson, S., & Gotterbarn, D. (2001). The ethics of software project management. In G. Collste (Ed.), Ethics in the age of information technology (pp. 278-300). Linköping, Sweden: Linköping University Press.

Shneiderman, B. (1990). Human values and the future of technology: A declaration of empowerment. Computers & Society, 20(3), 1-6.

Shneiderman, B. (1992). Designing the user interface: Strategies for effective human-computer interaction. New York: Addison-Wesley.

Shneiderman, B., & Rose, A. (1996). Social impact statements: Engaging public participation in information technology design. In C. Huff (Ed.), Computers and the quality of life: The proceedings of the Symposium on Computers and the Quality of Life (pp. 90-96). New York: ACM Press.
