Computerized Performance Monitors as Multidimensional Systems: Derivation and Application

REBECCA A. GRANT and CHRIS A. HIGGINS
University of Victoria and University of Western Ontario
An increasing number of companies are introducing computer technology into more aspects of work. Effective use of information systems to support office and service work can improve staff productivity, broaden a company's market, or dramatically change its business. It can also increase the extent to which work is computer mediated and thus within the reach of software known as Computerized Performance Monitoring and Control Systems (CPMCSs). Virtually all research has studied CPMCSs as unidimensional systems. Employees are described as "monitored" or "unmonitored" or as subject to "high," "moderate," or "low" levels of monitoring. Research that does not clearly distinguish among possible monitor designs cannot explain how designs may differ in effect. Nor can it suggest how to design better monitors. A multidimensional view of CPMCSs describes monitor designs in terms of object of measurement, tasks measured, recipient of data, reporting period, and message content. This view is derived from literature in control systems, organizational behavior, and management information systems. The multidimensional view can then be incorporated into causal models to explain contradictory results of earlier CPMCS research.

Categories and Subject Descriptors: H.4.2 [Information Systems Applications]: Types of Systems - employee monitoring; work measurement; K.4.3 [Computers and Society]: Organizational Impacts - employee productivity; customer service

General Terms: Management, Theory

Additional Key Words and Phrases: Computerized performance evaluation, computerized work monitoring, work monitoring system design
1. INTRODUCTION

The on-going computerization of the office can help organizations streamline their operations, improve efficiency, and enhance effectiveness. It can also increase the extent to which work is within the reach of Computerized Performance Monitoring and Control Systems (CPMCSs). These monitors

Authors' addresses: R. A. Grant, Department of Information Systems, Faculty of Business, University of Victoria, Victoria, British Columbia V8W 3P1, Canada; email: rgrant@business.uvic.ca; C. A. Higgins, Department of Information Systems, School of Business Administration, University of Western Ontario, London, Ontario N6A 3K7, Canada; email: chiggins@novell.business.uwo.ca.
Permission to make digital/hard copy of part or all of this work for personal or classroom use is granted without fee provided that the copies are not made or distributed for profit or commercial advantage, the copyright notice, the title of the publication, and its date appear, and notice is given that copying is by permission of the ACM, Inc. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee.
© 1996 ACM 1046-8188/96/0400-0212 $03.50
ACM Transactions on Information Systems, Vol. 14, No. 2, April 1996, Pages 212-235.
track the performance of employees doing computer-mediated work. They range in complexity from simple monitors that count transactions to sophisticated software that tracks performance on a variety of measures.

Supporters of monitoring systems point out that human supervisors cannot avoid being subjective when observing, measuring, and recording an employee's performance [Ferderber 1981; Olson and Lucas 1982; Young 1980]. Whether by intention or simple physical proximity, a human supervisor may not observe the work of all employees equally. Computers, on the other hand, can gather precise data about performance from all employees with equal frequency. Furthermore, the software can store quantitative measurement data in a standard format.1 This means that monitors have the potential to improve measurement, feedback, the work environment, and thus productivity. This potential has made monitors attractive to many service firms, where labor is intensive and direct supervision often difficult.

Opponents of monitoring argue that the systems are dehumanizing, degrading, and detrimental to the quality of work life (see Gregory and Nussbaum [1982] and Nussbaum [1984]). They suggest that monitors alter the balance of power in the office by making the measurement system less visible to the workers, even as they make the workers more visible to the measurement system [Walton and Vittori 1982]. The trade and popular press report that increased stress, deteriorating health, poorer working relationships with peers and supervisors, and a lower quality of work life accompany the use of computerized monitors [Carey 1985; Garson 1988; Koepp 1986; Oreskovich 1985]. Finally, a major complaint is that the systems are inflexible and cannot consider the context in which performance occurs: only a human can consider contextual factors that might affect performance statistics.

Studies that examine monitoring have produced contradictory and often inconclusive results.
Most research has concentrated on health and privacy (e.g., Garson [1988], Nussbaum [1984], OTA [1987], and Westin [1986]). Controlled studies of the effects of monitoring on role perceptions or actual performance are a recent phenomenon [Eisenman 1986; George 1993; Grant et al. 1988; Griffith 1988; Tamuz 1987]. Finally, much of the literature on monitoring is exploratory, anecdotal, or atheoretical (e.g., Garson [1988], Danaan [1990], and Smith et al. [1986]). As a result, the impact of computerized monitoring systems is still unclear and subject to wide debate. Businesses are poorly equipped to anticipate their potential effect. Carefully designed research is needed to determine the full range of effects and their causes, but it must be based on strong theory.

This article describes a multidimensional view of monitor design that has been used successfully to study work monitoring. Using control systems as a foundation, we derive five dimensions of monitor design we believe are

1 It is common to refer to quantitative performance data as "objective." However, the decisions regarding which performance to measure, how to encode it, and who can assess it reflect value-laden judgments. Even when these decisions are applied uniformly to a group of employees, they potentially introduce bias or subjectivity into the data. This subjectivity is discussed at greater length later in this article.
Table I. Monitoring Techniques and Their Targets (adapted from OTA [1987])

Performance:
  Output (keystrokes, etc.)
  Use of Resources (computer time, call accounting)
  Communications Content (service monitoring, eavesdropping)

Behaviors:
  Location (cards, beepers, television cameras)
  Concentration, Mental Activity (brainwaves)

Personal Characteristics:
  Predisposition to Error (drug testing)
  Predisposition to Health Risk (genetic screening, pregnancy testing)
  Truthfulness (polygraph, brainwave)
important independent constructs in studying system impact. This article does not advocate a particular causal model to explain impact. Instead, it focuses on identifying the crucial dimensions of monitor system design. These dimensions can then replace the monolithic "monitor" construct in a variety of models. The article concludes by demonstrating the research applications of the proposed dimensions.

2. BACKGROUND

Organizations use many different monitoring systems. The Office of Technology Assessment (OTA) describes three broad targets of monitoring (performance, behaviors, and personal characteristics) and eight techniques for computer-mediated monitoring, ranging from keystroke counting to polygraphs [OTA 1987]. These are outlined in Table I. In this article, "monitor" or "CPMCS" refers only to computer software that automatically tracks and measures the performance of individuals using a computer terminal. This deliberately excludes audio and video monitoring, which also capture behavior or personal characteristics, so we can concentrate on systems primarily aimed at performance.

To date, researchers have studied a variety of monitoring issues. The majority of the research has concentrated on its effects on health and stress. Nussbaum's [1984] survey results indicated a significant correlation between electronic monitoring, increased stress, and reduced employee health. Garson [1988] argued that workplace automation, and monitoring in particular, threatened to turn offices and service firms into information sweatshops. Smith et al. [1990] reported on the relationship between electronic performance monitoring and tension, anxiety, and depression, as it pertained to telephone operators. These findings are not without contradiction. Griffith's [1993a] lab experiments demonstrated no significant correlation between monitoring and perceived stress. Galletta and Grant [1993] reported similar results from experimental and survey research.
Nebeker and Tatum [1991] also reported on experiments in which they found little evidence that monitoring affected stress. Instead, they concluded that rewards and quotas, rather than the data collected by computer, led to increased stress. DiTecco and Andre [1987, p. iv] surveyed Bell Canada employees and concluded that average wait time targets, not the monitoring, were the "most important influence on experienced work stress." The OTA acknowledged that monitoring seemed to contribute to stress-related illness [OTA 1987]. Its report cautioned, however, that none of the studies up to that point proved that monitoring was the cause. That is, none separated the effects of monitoring from other aspects of computer-based office work, such as lighting, job redesign, and quotas.

Researchers have also explored CPMCS use as a privacy issue (see Marx and Sherizen [1986], Westin [1986], and OTA [1987]). These works concluded that monitors had the capacity to significantly threaten privacy and the quality of work life. At the same time, they suggested that the effects of monitoring were not consistent. Instead, the impact seemed to depend on the purpose, design, and use of the systems.

A few researchers have looked at how monitoring affects performance. Companies that sell monitors promise significant improvements in employee output, but research has generally produced inconsistent results. Walton and Vittori [1982] studied one insurance company in depth and concluded that the combined use of CPMCSs and quotas increased stress to the point that it reduced productivity. Griffith [1993a] demonstrated no significant differences in productivity for monitored, supervised, or unsupervised workers. Nebeker and Tatum [1991], on the other hand, showed that data entry personnel who were aware of monitoring and received feedback from the computer significantly outperformed staff who were not aware of being monitored. Tamuz [1987] examined the effect of monitoring airline pilots' performance.
She used longitudinal data to compare the frequency of self-reported incidents of safety violations before and after a system was installed to track violations. She found that self-reports of both monitored and unmonitored activities increased over time. These findings contradicted literature that said monitored employees would pay less attention to unmonitored activities. CPMCS use also appears to have inconsistent effects on supervision. Eisenman [1986] found no increase in perceived closeness of supervision with monitoring, while Irving et al. [1986] reported that monitored employees felt more closely supervised than unmonitored employees.

What could explain these widespread inconsistencies? One factor might be timing. Many companies install monitors as part of a new or greatly modified information system; they are rarely introduced after the effects of other computer systems have stabilized. A second factor might be novelty. Some monitors are adopted by companies that have never used extensive quantitative measurement of performance. Are the employees reacting to monitoring, or simply to being measured quantitatively? A third factor could be differences between electronic monitors and human supervision.
Walton and Vittori [1982] suggested that the monitor they studied had less "legitimacy" than the system it replaced. The monitor increased the visibility of individual performance while decreasing the visibility of supervisors and managers monitoring performance. Thus, monitoring shifted the balance of power between the employee and the supervisor or manager. Interaction theory [Markus 1983] and segmented institutionalism [Kling 1980] say that individuals react when automation causes power shifts. Thus, the belief that a monitor increased the power of the supervisor at the expense of the worker could contribute to its impact.

Fourth, the researchers and the subjects may not have had the same picture of the monitor and its use. Eisenman [1986] categorized all computer-mediated work as monitored to some degree, but few supervisors in her study used all of the monitoring capacity available. The actual use of monitoring was not equivalent to its capacity. Other research [Grant et al. 1988] has shown that employee perceptions of the monitor may be very different from its actual use.

A fifth and largely unexplored factor is the design of the monitor itself. Many of the studies discussed above suggest that different monitoring methods might have different effects. Yet, almost no research has considered explicitly the design of the monitor being studied. Most often, researchers divided their subjects into "monitored" and "unmonitored" groups (see Nussbaum [1984], Irving et al. [1986], Tamuz [1987], and Walton and Vittori [1982]). Occasionally, a study distinguished among "high," "moderate," or "low" levels of monitoring (e.g., Eisenman [1986]). To date, only two studies [George 1993; Grant 1990] collected data on the features of different monitoring systems and incorporated the data into their findings. While many studies have described or predicted the effects of monitors, few have explained them. We believe one answer lies in studying monitors as multidimensional systems.
Nussbaum [1989] said:

    Monitoring isn't simply the benign use of computers to collect data. It is different in three important ways: it monitors not just the work, but the worker; it measures work in real time; and it is constant.

This monolithic view of monitors surfaces in almost every study of CPMCS impact discussed above. Except for George [1993] and Grant [1990], none compared monitor designs. Researchers asked whether or not a monitor was used, not specifically how it was designed or operated. This is a major problem because many studies concluded that the design and purpose of the monitor probably contributed to its impact. Explanatory models of CPMCS impact should include independent constructs reflecting each critical dimension of monitor design. By building such models, researchers provide richer explanations of impact. They also provide the means to test assertions made in earlier research that monitors may be helpful or harmful, depending on their design and use [Long 1989; Smith et al. 1986]. The following sections examine the theories, constructs, and relationships appropriate for developing such models.
3. DERIVATION: DIMENSIONS OF COMPUTERIZED MONITORING
To build multidimensional models of monitoring, one should first use an appropriate theory to derive relevant dimensions of monitoring. In this section, we discuss theories that might be used to study monitor impact. We explain our choice of a theoretical foundation and examine its potential contribution to CPMCS research. Then, building on that foundation, we derive five core dimensions of monitor design.
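To make the multidimensional view concrete, a monitor design can be treated as a point along the five dimensions named in the abstract: object of measurement, tasks measured, recipient of data, reporting period, and message content. The sketch below is ours, not the authors'; the field names and example values are illustrative assumptions only.

```python
from dataclasses import dataclass

# Illustrative sketch (not from the article): a monitor design described as a
# point along five independent dimensions, rather than a single
# "monitored / unmonitored" flag.
@dataclass(frozen=True)
class MonitorDesign:
    object_of_measurement: str   # e.g., "individual" or "work group"
    tasks_measured: frozenset    # which job tasks the monitor captures
    recipient_of_data: str       # e.g., "employee", "supervisor"
    reporting_period: str        # e.g., "continuous", "daily", "monthly"
    message_content: str         # e.g., "raw counts", "comparison to standard"

# Two designs that a unidimensional view would conflate as simply "monitored":
keystroke_log = MonitorDesign("individual", frozenset({"data entry"}),
                              "supervisor", "continuous", "raw counts")
team_summary = MonitorDesign("work group", frozenset({"data entry", "calls"}),
                             "employee", "monthly", "comparison to standard")

assert keystroke_log != team_summary  # distinct designs, both "monitored"
```

In a causal model, each field would enter as an independent construct, so two systems that differ on any dimension are no longer treated as the same treatment condition.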
3.1 Choosing a Theoretical Base
Computerized performance monitoring is a broad topic, and many theories promise insight into its effects. Attewell [1987] identified five: (1) corporate culture, (2) neo-Marxist, (3) product life-cycle, (4) contingency, and (5) industrial sociology. Others could also be considered. Tamuz [1987], for example, based her research on theories of salience, while Griffith [1988] used social distance theory. Kidwell and Bennett [1994] proposed procedural justice as an appropriate framework. Each theory implies a view of the reasons to monitor, as well as questions or relationships to study. Neo-Marxist theories look at power, the pace of work, and relationships between management and labor. Social distance theories examine how monitors mediate the relationship between supervisor and employee. Salience considers the impact of monitor data on behavior, and so on.

We have chosen control theory as the foundation for our work. We believe it is a strong organizing framework for three reasons. First, control is the process whereby systems or individuals restrict, constrain, or direct the freedom and discretion another individual can exercise in performing a job [Whisler 1970]. Whether it is used to plan staffing or evaluate employee performance, a monitor operates as part of an organization's performance control system. Second, computerized performance monitors demonstrate a wide range of designs. So do other control systems. Control theory allows us to compare monitors to each other and to other control systems. Using such a framework shows how various systems are alike and how they differ. It also indicates where theory and data from other fields can contribute to understanding monitors. Finally, control theory offers both static and process models with which to study monitoring. Static models can be used to describe and compare control system features.
For example, Landy and Farr [1983] classified performance measurement systems on the basis of (1) the specificity of measures employed, (2) the time span of the measures, and (3) the proximity of the measured component to organizational goals. Using these classifications, we could compare a monitor that collected continuous, specific measures of critical success factors to one that intermittently gathered data about group error rates. Other CPMCSs and manual systems could be compared to that design on Landy and Farr's three dimensions. Woodward [1970] and Slocum and Sims [1980] described control systems in terms of personal-versus-impersonal modes of control. Computerized monitoring is an impersonal control technique, one we could then compare to more personal, traditional evaluation systems.

Fig. 1. Process model of feedback [Ilgen et al. 1979].

Process models are also important in explaining how monitoring effects arise. Control theory offers many relevant process models. For example, Taylor et al. [1984] said behavior is triggered by a comparison process. The employee compares feedback about performance (the achieved state) to a desired state or performance standard. If the desired and achieved states do not match, the employee acts to bring the two more in line with one another. While useful, this model seems too general for CPMCS research. The comparison process could involve customers, supervisors, employees, or a combination of groups. Behavior could be determined autonomously or directed by the supervisor or the monitor. In each case, we would want a more detailed model to compare control system designs, uses, and effects.

Ilgen et al. [1979] synthesized the feedback literature into a process model (Figure 1). In it, control systems influence performance via the feedback they convey to employees and the way that feedback is perceived. The model concentrates on how the employee interprets and reacts to feedback, and it helps explain what happens once a control system has produced feedback. In CPMCS research, however, we are equally concerned about the role of the control system as a stimulus and source of feedback.

We have chosen a third model as the foundation for our dimensions: the thermostat system elaborated by Lawler and Rhode [1976]. It focuses on the components of the control system (Figure 2). The sensor (a control device) measures the performance level. It communicates that measurement to a discriminator (such as the supervisor), who compares it to a standard. The results of this comparison are then communicated to the effector (the employee) responsible for initiating activities, some or all of which produce measurable performance.
This performance is detected by the sensor, and the cycle begins again.

Fig. 2. Thermostat model of control [Lawler and Rhode 1976].

A computerized monitor can perform many different roles in the thermostat system. A basic monitor may simply sense performance, while sophisticated designs incorporate the sensor, discriminator, and effector roles. For example, word processing logs that count keystrokes merely sense performance. Systems that direct incoming telephone calls to the fastest operator sense, discriminate, and effect. The detailed roles specified by the thermostat model make it possible to study how different designs might fill these roles and to hypothesize effects. Furthermore, the concepts apply equally well to computerized and manual control systems.

At the same time, the thermostat is mechanistic, and its limitations must be considered. In general, it predicts simple responses to primarily negative feedback. It does not consider the motives or intentions behind the design of the control system. For example, a system used to police bathroom breaks will be perceived very differently from one used to even out workloads. Nor does it expressly consider reactions to those motives or intentions. Bureaucratic behavior, resistance, sabotage, and high turnover are all possible reactions [Lawler and Rhode 1976]. Yet, the thermostat model only accounts for desirable adjustment to measurable performance. Furthermore, it does not depict how information is communicated to the effector. Does the employee receive instantaneous computer messages, or does a supervisor sit down and discuss performance in a trusting atmosphere? Finally, thermostats are primarily rational models that do not account for complex power or political relationships found in organizations. Researchers must deal with these limitations, since responses to control systems are rarely simple.

One way to compensate for some of the limitations is to draw on other bodies of literature in deriving the monitor dimensions. Numerous characteristics of the feedback and its presentation are important, for example, as are lessons learned about information systems implementation and organizational behavior. Section 3.2 addresses many of these limitations by incorporating feedback and other literature with the basic concepts of the thermostat model. Another approach is to incorporate the dimensions into more complex, less mechanistic models of monitor impact. The "Research Applications" section of this article demonstrates such an approach. Thus, we both link our dimensions to the thermostat model and discuss their role in models that address complex organizational and interpersonal work environments.

3.2 Examining the Theoretical Base

This section uses the components of the thermostat model to examine the control theory and research relevant to monitoring. We begin by considering the sensor and the issues of human-versus-computerized performance sensing, before turning to performance discrimination and evaluation. Next, we discuss the types of feedback available in monitored environments and the information they may convey. We then look at the employee's response to the feedback, before concluding with a discussion of the measurable performance.

3.2.1 The Sensor: Who or What Performs the Measurement?

One important step in designing a control system is choosing the device(s) that will measure, evaluate, and report performance. Qualitative systems generally rely on humans to carry out these roles. Quantitative systems may use machines or humans in each role. An organization may use either or both types of systems to detect the performance it will measure. Ilgen et al. [1979] said that the effect of a sensor depended on the employee believing that (1) the sensor accurately sensed the performance, (2) the sensor accurately measured the performance it sensed, and (3) the performance being measured represented the important aspects of their job. There is strong evidence that these beliefs apply to monitoring systems [Albrecht 1979; Walton and Vittori 1982].

To understand what this might mean in a monitored environment, consider the example of customer service jobs. Direct customer service is characterized by high task or workflow uncertainty. CPMCSs are highly structured and facilitate remote (rather than direct or personal) sensing. One would expect workers to consider computers as being less able to sense and accurately measure customer service. Furthermore, employees might not believe that the tasks that could be monitored were important aspects of their jobs. Thus, the perception that monitors sensed and measured important work activities would be a significant factor in their impact.

3.2.2 The Discriminator: Who Performs the Comparison?

The sensor collects raw data; the discriminator interprets that data.
Thus, the discriminator adds information and creates a feedback message. Employees rarely evaluate feedback independent of its source. They judge the credibility of the discriminator, and their judgment influences their responses. A credible discriminator must (1) have the expertise to evaluate performance, (2) be a reliable evaluator, and (3) hold a view of the job consistent with that of the employee. A computerized control system allows for three possible discriminators: the employee, the supervisor, and the monitor. How these discriminators affect performance will depend on the employee's perception of their credibility and intentions [Ilgen et al. 1979]. Bannister [1986] wrote:

    When it comes time in an appraisal process to provide directive feedback, the individual's perceptions of the source's qualifications to provide that feedback become critical to their intentions to use the feedback.
The credibility of human supervisors influences acceptance of feedback in traditional control systems [Ilgen et al. 1979]. Does this suggest that the credibility of computers will influence acceptance in monitored environments? Three orientations influence one's answer to that question. They are technological determinism, technological neutrality, and the socio-technical perspective. Technological determinism is the belief that "technology itself has a determinative impact" [Turkle 1984]. Understanding the specific technology used in a given situation helps to predict its effects. Determinists argue that using computers, rather than humans, to sense performance can change reactions to the control process [Turkle 1984; Zuboff 1982]. Technological neutrality says that technology is merely a tool, and its effects result entirely from the applications for which it is used. It generally takes the form of implicit assumptions about systems implementation (e.g., Wright [1982], Young [1980], and Ferderber [1981]). Neutralists do not acknowledge that a computerized system might be perceived as being significantly different from its manual counterpart. Olson and Lucas [1982] exemplify the socio-technical view that social, task, structural, and technical factors interact to influence work. This perspective says that employees respond primarily to the application, not the technology. Negative outcomes are possible but can be minimized through good management. Few studies have tried to determine whether the credibility of computers influences reactions to monitoring, but research by Tamuz [1987] and Griffith [1988] assumes that it does make a difference. This is an important issue. If the credibility of the computer does not contribute to the impact of monitoring, explanatory models need not differentiate between manual and automated control systems. If it does contribute, a model explaining CPMCS impact should include perceptions of computer credibility.

A consistent view of the job is also important, but one cannot assume employees and employers define the job in the same way [Grant et al. 1988]. Employees may give different priorities to the components measured by the control system. For example, a travel agent may see his or her job as one of providing complete travel advice to clients (service), while the agency sees it as generating a dollar volume of ticket sales (production). If measurement criteria conflict with the employee's view of the work, feedback from the discriminator will have little effect on behavior [Taylor et al. 1984]. Thus, the feedback literature suggests that workers who see quantity as a relatively unimportant aspect of their job will have little concern for the monitor or its message. This expectation may be an oversimplification. Employees may still attend closely to such data if rewards or punishment are linked to its use.

Finally, the monitor may influence discrimination without being the discriminator. Neutral feedback, such as a simple transaction count, forces the employee to act as the discriminator. If the employee lacks information or holds inappropriate standards, he or she may be unable to discriminate accurately. Farh and Dobbins [1989] compared the accuracy of self-evaluations of employees who received comparative performance data to those who received only absolute data about their individual performance. The evaluations of those who received comparative data were significantly more accurate. Thus, monitors that perform the discrimination may help employees set standards and evaluate their performance
accurately. Monitors that do not discriminate would have less predictable effects on standards than they would if the monitor discriminated. In such cases, the supervisor's feedback message should have a much stronger effect on performance.
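The sensor, discriminator, and effector roles discussed in this and the preceding subsections can be sketched as a small control loop. The code is our own illustration, not the article's or Lawler and Rhode's formal model; the numeric values and the adjustment rule are assumptions made only to show the cycle, with the monitor filling the sensor and discriminator roles while the employee remains the effector.

```python
# Illustrative sketch of the thermostat cycle (our own code, not from the
# article): sensor -> discriminator -> effector -> performance -> sensor.
def run_cycle(performance, standard, cycles=5, step=0.5):
    """Simulate a control loop in which the effector (employee) moves
    performance toward the standard whenever the discriminator reports a gap."""
    history = [performance]
    for _ in range(cycles):
        measured = performance                 # sensor: observe current output
        gap = standard - measured              # discriminator: compare to standard
        performance = performance + step * gap # effector: adjust activity
        history.append(performance)
    return history

history = run_cycle(performance=40.0, standard=60.0)
# Each pass through the loop closes part of the gap to the standard.
assert abs(history[-1] - 60.0) < abs(history[0] - 60.0)
```

A monitor that only senses would stop after the `measured =` line and hand the comparison to a supervisor or to the employee; the design question is which roles the software fills.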
3.2.3 The Feedback: What is the Message?

Control systems influence employees when they feed back performance information. The thermostat depicts simple responses to explicit feedback. Individuals, however, actually respond to perceived feedback in a more complex manner. The raw data fed back is one factor in the response; the employee's acceptance of the message is another. Characteristics of the feedback message known to affect its acceptance are the following: (1) the perception of the message's accuracy, (2) the frequency of the feedback, and (3) the amount and importance of new information contained in the message [Ilgen et al. 1979; Taylor et al. 1984].
Messages must provide increased knowledge of performance to have an effect. Consider a system in which computer terminal operators receive their own production or accuracy counts each day. Reports of monthly average production provide little new information. Similarly, a monitor that reported monthly averages to employees who kept weekly, manual counts would be unlikely to affect productivity. Employees also tend to ignore feedback about job factors they consider unimportant or unrelated to rewards [Taylor et al. 1984]. For example, feedback about group output would probably have no effect on teamwork in a company that ties salary increases to individual performance.
Finally, employees are most likely to believe that feedback is accurate if it is provided frequently. In addition, the chance of interpreting feedback correctly improves if the feedback occurs as close to the activity as possible [Ilgen et al. 1979; Lawler and Rhode 1976]. Monitors make it practical to collect data more frequently and consistently [OTA 1986]. However, increasing feedback frequency may increase the sense of being controlled and decrease the desire to respond, particularly if it adds little new information [Ilgen et al. 1979]. Thus, one would expect the interaction of timing (frequency) and content to be a factor in CPMCS impact.

Acceptance of feedback, and its influence, also depends on the accuracy of the message. Automated systems have the potential to provide more accurate measurement. But to be considered "accurate," the message must also be consistent with the employee's self-perception. Ilgen et al. [1979] noted that the message must agree with the employee's belief about his or her performance, and that "whether or not that belief is itself correct is inconsequential to acceptance."

3.2.4 The Activity: What is the Employee's Response? Work typically comprises a number of different tasks, and employees generally have some latitude in the priority they give to each one. Therefore, the direction in which control systems motivate employees is significant. The difference between success and failure at a job may come down to the allocation of energy between relevant and irrelevant activities [Katerberg and Blau 1983].
Research suggests that employees direct most of their attention and efforts toward areas that are measured, particularly if the control system relies on extrinsic motivation [Cammann and Nadler 1977; Kerr 1975]. This is the tendency to do what is inspected more often than what is not [Irving et al. 1986]. Lawler [1976] called this "bureaucratic behavior." Monitors cannot adequately or economically capture all facets of a service job; even the most pervasive monitors focus on quantifiable aspects of work. This suggests that monitored employees will focus on production and other quantifiable forms of performance unless the company uses equally strong qualitative systems to measure service or interaction. Examples of such strong systems would include rewards visibly linked to quality-of-service data, or frequent, explicit comments by supervisors about sanctions for poor service.

3.2.5 The Performance: What is Measured? The final component of the thermostat model is performance. The choice of behaviors to measure and the ways they are measured contribute directly to the impact of a control system. Walton and Vittori [1982] emphasized the importance of having a system that supported attributes valued by the organization. They cautioned that:

The measurements will be resented if they . . . are seen as emphasizing quantifiable aspects of the work at the expense of equally important nonmeasurable aspects.

As early as 1960, Whisler and Shultz [1960] forecasted such dysfunctional effects when predicting the impact of computer-mediated control:

The techniques themselves feed on numbers and on systems, the elements of which are explicit and unambiguous. Wherever these techniques are adopted, they will bring with them increased emphasis on quantification about the firm's operations and environment. . . . But the important danger may result from overemphasis on the kind of information that can be made explicit in a quantitative sense.
Computerized monitors can only measure computer-mediated activity in quantitative terms. They must capture service roles via surrogate measures (e.g., with group production figures used as a measure of effective teamwork). This may or may not capture the major components of a job. Numerous authors have argued that employees believe the presence or absence of measures of performance indicates how important that particular performance is to their employer (e.g., Irving et al. [1986], Kerr [1975], Landy and Farr [1983], Lawler and Rhode [1976], and Walton and Vittori [1982]). In the absence of strong qualitative systems, for example, one would expect monitored employees to believe that their organizations valued output more than service or interaction. To summarize, the reaction to measuring performance depends upon:

(1) the perceived importance of the factor being measured;

(2) the perceived completeness, accuracy, and appropriateness of the measure;
(3) the degree to which the measure discriminates individual, controllable performance; and

(4) how closely rewards are linked to the measurement.

Table II. Criteria for Analyzing Privacy Effects [Marx and Sherizen 1986]
Criterion: Degree of Intrusiveness; Frequency; Relevance to Performance; Visibility; Focus; Targeting; Nature of Data Collected; Accuracy
3.3 Deriving Dimensions of Computerized Monitoring
Research incorporating multiple dimensions of monitor design is rare. However, the concept that monitors have many dimensions is not new. Marx and Sherizen [1986] proposed eight dimensions to determine the impact of monitoring on privacy (Table II). Westin [1986] suggested five elements relevant to socio-legal monitoring issues: (1) the site of monitoring, (2) the person and activity being monitored, (3) the observation technique(s), (4) the use of the information gathered, and (5) the privacy safeguards in effect. Most recently, George [1993] looked at four dimensions: variations in work environment, monitoring practices, the use of performance information, and employee perceptions of work and monitoring.

Researchers have also investigated dimensionality in other information systems. The "Minnesota Experiments" [Dickson et al. 1977] focused on format, time, and decision aids. Firth [1980] considered how aggregation, presentation style, and data collection period or frequency affected performance evaluation. These works concluded that different dimensions of IS design did affect the interpretation and perceived importance of data. Generally, these are empirically derived dimensions, often specific to a particular type of information system or application. Theory-based models require broader constructs with wider generality. Combining our discussion of the thermostat model with the work of Ilgen et al. [1979] on feedback enables us to identify five such constructs.

Ilgen et al.'s [1979] model considers feedback in the context of the more general communication process. In that context, traits of the source (or sender), the message, and the recipient of feedback are key to determining
the effect of feedback. To apply this model to monitoring, we must identify the important traits of the communication and relate them to monitor design and use. Five such traits influence the interpretation and outcome of feedback [Bannister 1986; Ilgen et al. 1979; Lawler 1976; Taylor et al. 1984]:

(1) credibility of the source: can the source sense and adequately measure the performance?

(2) accuracy and completeness: are the measures collected complete and accurate?

(3) perceived importance of the tasks: are the tasks for which data are gathered important to the job and rewards?

(4) relevance: is this information about performance the individual can control? and

(5) frequency and sign: how frequent is feedback, and is it positive or negative?
These traits are related to various dimensions of a monitor's design. First, the object (or target) of monitoring will affect the relevance of the message, as well as its accuracy and completeness. For example, systems that collect data about individuals may not provide high-quality feedback in tightly coupled jobs. Second, the period over which the monitor collects data will affect relevance and frequency of feedback. Third, the recipient of the monitor data will influence the credibility of feedback and its perceived importance. That is, if supervisors receive and interpret monitor data, they become another source of feedback for employees. Distributing the data to supervisors and management also provides the visible link between the monitor and the organization's system of rewards and punishments. In addition, extensive distribution of data may be interpreted as indicative of its importance or value to management. Fourth, the tasks monitored will contribute to the perceived importance of the tasks being measured, the accuracy and completeness of feedback, and the credibility of the source.

We can describe individual designs in terms of their features on each of these dimensions. We also need to compare different monitors to one another. The concept of pervasiveness permits such comparison. That is, monitor designs exist along a continuum of pervasiveness on each dimension. We have developed scales for these dimensions. Table III depicts the scales, each of which is explained as follows.

Object. The object can be thought of as the control system's unit of analysis. Conceptually, it indicates how precisely and directly the system targets individual performance. Systems might collect information about individual employees. Less pervasive systems might aggregate performance for an entire work group, while an even less pervasive design would focus on the business unit as the object of measurement.
Table III. Dimensions of Control System Design

In a retail chain, for example, point-of-sale systems can report the sales of individual clerks, the total sales by department, or the totals by store. The choice of object
determines the person or group to which performance is attributed and thus who will be held responsible for maintaining or improving that performance. The more directly a control system attributes performance to an individual, the more pervasive we would call its design. Period. Control systems also differ in terms of the period over which they capture and report information. A supervisor might be able to query the system at any time and obtain an up-to-the-minute report of the status of monitored performance. As an equally pervasive alternative, the system might automatically and immediately report unacceptable variances in performance. It could alert a supervisor when a telephone operator has been disconnected from the switchboard for more than five minutes. Or, it could send a message to a group leader, telling her that the group’s error rate has exceeded 3% for the morning. Less pervasive designs report on historical activity. They tell the recipient about performance at fixed intervals daily, weekly, or monthly. The shorter the period between performance and the availability of data about that performance, the more immediate the data. The more immediate the data, the more pervasive the system design. Recipient. The recipient of measurement data plays a critical role in the feedback process. Many parties might receive data, interpret it, and then transmit or act on it. For example, monitors can provide feedback directly to the employee. These designs help the employee track and evaluate personal performance without relaying that information to anyone responsible for overall performance evaluation. More pervasive (and common) designs make the data available to the immediate supervisor or area manager. The most pervasive designs would broadcast information, making it available to anyone in the workplace. 
Posting the results of all monitored individuals or groups in a central location, or allowing anyone to access the data via the computer, exemplifies pervasive designs. The wider the audience for the data, the more pervasive the monitoring system.
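To make the object and period dimensions concrete, the sketch below (in Python, with entirely hypothetical event data and names of our own invention) aggregates one stream of point-of-sale events at three object levels (clerk, department, store) and adds an exception-style report of the kind described above, flagging any department whose error rate for the period exceeds 3%. It illustrates the concepts only; it does not reconstruct any system the authors studied.

```python
from collections import defaultdict

# Hypothetical point-of-sale events: (clerk, department, amount, is_error)
EVENTS = [
    ("alice", "shoes", 40.0, False),
    ("alice", "shoes", 25.0, True),
    ("bob",   "shoes", 60.0, False),
    ("carol", "books", 15.0, False),
    ("carol", "books", 30.0, False),
]

def totals_by(events, key_index):
    """Aggregate sales at one 'object' level (0 = clerk, 1 = department)."""
    totals = defaultdict(float)
    for event in events:
        totals[event[key_index]] += event[2]
    return dict(totals)

def error_rate_alerts(events, threshold=0.03):
    """Exception-based 'period' reporting: flag departments whose error
    rate for the reporting period exceeds the threshold."""
    counts = defaultdict(lambda: [0, 0])  # department -> [errors, total]
    for _, dept, _, is_error in events:
        counts[dept][1] += 1
        if is_error:
            counts[dept][0] += 1
    return [dept for dept, (err, tot) in counts.items() if err / tot > threshold]

print(totals_by(EVENTS, 0))       # per-clerk totals (most pervasive object)
print(totals_by(EVENTS, 1))       # per-department totals (less pervasive)
print(sum(e[2] for e in EVENTS))  # store total (least pervasive)
print(error_rate_alerts(EVENTS))  # departments breaching the 3% error rate
```

The same raw events support every object level; the design choice is which aggregate is reported, to whom, and how promptly.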
Task. Software can be designed to monitor virtually any computer-mediated activity. A control system may capture the complete process or only the results of an employee's activities. In terms of the OTA taxonomy shown in Table I [OTA 1987], the monitor can capture data about behaviors (the process of doing one's job) or about performance (the results of doing one's job). For example, a monitor that counted completed transactions, error rates, or completion time would be said to monitor performance characteristics. On the other hand, systems can be designed to collect data about work in process or to permit supervisors to observe that work as it is being performed. Other designs direct work to particular employees at predetermined intervals. Such monitors can then track the nonresponse rate of individuals or groups who fail to act on work directed to their station. These systems would be said to monitor process. Thus, monitors can vary from capturing post facto data about work results to proactively directing and evaluating work processes. The more the monitor focuses on process, the more pervasive its design.

Example. To demonstrate the concept of pervasiveness and dimensionality, consider a system studied by Grant [1988]. She examined the monitoring system used by a large insurance company. Employees each had a batch of claims they managed, determining the sequence in which the claims would be processed and the disposition of each claim. The monitor accumulated daily counts of claims processed. Once a month, the system averaged the daily counts to produce a figure of "average daily claims" for each employee. Supervisors and managers then received these monthly figures. Supervisors discussed the figures with employees during each individual's annual performance review. The characteristics of this system are plotted in terms of their pervasiveness in Section A of Figure 3.
Such a system represents a relatively low level of monitoring pervasiveness on the task and period dimensions, moderate on the recipient dimension, and high on the object dimension. A more pervasive variation of this design (Section B, Figure 3) might take claims from a master file and direct each one to the next available processor (task), post performance on a public, electronic bulletin board (recipient), and provide daily reports of actual claims processed rather than monthly averages (period). In contrast, a less pervasive design (Section C, Figure 3) might calculate an overall monthly average of claims processed for each supervisor's group of employees (object) and make the group's average available only to individuals in that group (recipient).
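The three variations just described can be compared by coding each dimension's level on a simple ordinal scale. The sketch below is our own illustration: the 1-to-3 coding and the summed total are conveniences for comparison, not part of the authors' scales, which treat pervasiveness as a continuum along each dimension.

```python
# Ordinal pervasiveness levels for the four design dimensions (3 = most
# pervasive), following the levels named in Table III and Figure 3.
SCALES = {
    "object":    {"business unit": 1, "work group": 2, "individual": 3},
    "period":    {"infrequent": 1, "frequent": 2, "immediate": 3},
    "recipient": {"employee": 1, "supervisor/manager": 2, "public": 3},
    "task":      {"track outcomes": 1, "track process": 2, "assign/track process": 3},
}

def pervasiveness(design):
    """Score a monitor design dimension by dimension."""
    return {dim: SCALES[dim][level] for dim, level in design.items()}

# The three insurance-claims variations discussed in the text:
design_a = {"object": "individual", "period": "infrequent",
            "recipient": "supervisor/manager", "task": "track outcomes"}
design_b = {"object": "individual", "period": "frequent",
            "recipient": "public", "task": "assign/track process"}
design_c = {"object": "work group", "period": "infrequent",
            "recipient": "employee", "task": "track outcomes"}

for name, design in [("A", design_a), ("B", design_b), ("C", design_c)]:
    scores = pervasiveness(design)
    print(name, scores, "total:", sum(scores.values()))
```

Design B scores highest and design C lowest, matching the qualitative ordering in the text; summing across dimensions is a deliberate simplification, since the dimensions are not claimed to be commensurable.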
Fig. 3. Plotting the pervasiveness of monitor design: An example. [Sections A, B, and C each plot a design on four scales: Object (Individual / Work Group / Business Unit), Period (Immediate / Frequent / Infrequent), Recipient (Public / Supervisor-Manager / Employee), and Task (Assign and Track Process / Track Process / Track Outcomes).]

Message Content. There is a fifth variable aspect of monitor design: the content of the performance or monitoring messages produced by the system. "Message content" captures the verbal feedback (whether textual, graphic, or numeric) provided by the system. That message might be "your group processed 2000 claims this month." It might be a graph showing each employee's daily output and average response times, ranked from best to worst. Or it might be the statement, "that last transaction dragged your average for the hour below the minimum acceptable level." While the other
four dimensions represent individual traits that can be manipulated in the design of a monitor, message content is shaped largely by the choices vis-a-vis object, recipient, task, and period. A system that gathers individual, immediate, work-process data and distributes it widely can provide data for a more pervasive message than one that gathers only summary performance about a work group. While message content can vary in pervasiveness, thinking of it in terms of a simple continuum does not capture its essence. That is, unlike task, period, object, and recipient, message content is not unidimensional. Instead, it must itself be examined in terms of a complex design, the individual dimensions of which can be manipulated to be more or less pervasive. Section 3.2.3 describes many of the characteristics of the message content that one would expect to affect employee responses to monitoring.

The proposed dimensional view has four direct benefits to researchers trying to explain CPMCS impact. First, it is technology independent. Its dimensions apply equally well to any hardware and software. Second, it enables us to categorize monitor designs more precisely and to compare control systems in greater detail. Third, the dimensions also apply to control systems that do not incorporate computerized monitors. This means that they can be used to compare computerized designs to manual systems, as well as to one another. Fourth, the concept of pervasiveness makes it possible to talk about causal hypotheses and explanatory models. One can
Fig. 4. Thermostat elements affected by design dimensions.
hypothesize that more or less pervasiveness along a dimension (such as recipient) will lead to more or less of a particular dependent variable (such as stress). For example, one hypothesis could be: "Broad distribution of monitor data will result in increased stress."

3.4 Linking the Dimensions to the Thermostat Model
The design dimensions provide a way to categorize and compare systems. They can also be integrated into process models to explain impact. This section demonstrates that integration by linking the dimensions to the dynamic thermostat model discussed earlier. The first five elements of the thermostat model [Lawler and Rhode 1976] are each affected by one or more CPMCS design dimensions. As summarized in Figure 4, the "sensor" is defined largely by the choice of tasks being evaluated and whether they are evaluated manually or by computer. The character of "measured performance" will be a function of the tasks and reporting period chosen and whether the system measures individual or group performance. The "discriminator" may be the employee, the supervisor, or the system, depending upon the choice of monitored tasks and the recipient of the data. The tasks and period will determine whether the discriminator has adequate data to make a realistic evaluation. The "feedback" depends on the object of the evaluation, the recipient of the data, the measurement period, and the content of the message. It will also depend on the tasks monitored, since that determines whether the monitor reports raw data or discriminated evaluations. Finally, the "activity," or employee's response, is shaped by the sensor, performance, discriminator, and feedback, and hence by all five dimensions of the control system.

Linking the dimensions to the components of the thermostat model demonstrates how one can elaborate thermostats to reflect the design of
monitors. However, the thermostat process remains simple, mechanistic, and rational. Causal models that reflect the complex relationships among job, worker, management, and environment are necessary. The next section looks at three studies using such complex approaches.
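The linkage just described can be restated as a small lookup table. The sketch below is only a transcription into code of the mapping summarized in Figure 4 (Python used for illustration; element and dimension names are taken from the text):

```python
# Thermostat elements mapped to the CPMCS design dimensions that shape them:
# sensor <- task; measured performance <- task, period, object;
# discriminator <- task, recipient; feedback and activity <- all five.
THERMOSTAT_DIMENSIONS = {
    "sensor":               {"task"},
    "measured performance": {"task", "period", "object"},
    "discriminator":        {"task", "recipient"},
    "feedback":             {"task", "period", "object", "recipient", "message"},
    "activity":             {"task", "period", "object", "recipient", "message"},
}

def elements_affected_by(dimension):
    """List the thermostat elements a given design choice influences."""
    return sorted(e for e, dims in THERMOSTAT_DIMENSIONS.items() if dimension in dims)

print(elements_affected_by("period"))
print(elements_affected_by("task"))
```

Such a table makes the causal reasoning explicit: changing the reporting period, for instance, touches measured performance, feedback, and the employee's activity, while the choice of tasks touches every element.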
4. RESEARCH APPLICATIONS: USING MULTIPLE DIMENSIONS TO STUDY CPMCS IMPACT
The dimensions of CPMCS design were derived as potential independent constructs for causal modeling and theory-testing research. The thermostat is one explanatory model. It shows the role of design dimensions in shaping crucial processes of a control system. These processes can then be integrated with knowledge about the role of the individual, the job, and the organization to produce more powerful models of CPMCS impact. This section demonstrates how the five dimensions can be applied to study three different research issues. It begins with a study in which the dimensions were identified and measured as an explicit part of the research design, then shows how using the dimensions could strengthen the conclusions of two studies in other areas of monitoring.

The first study discussed here is by Grant [1990]. Grant proposed a causal model to explain the impact of monitoring on productivity and interaction roles of service workers. Monitor systems that increase production may be undesirable if employees neglect customers or other qualitative elements of their work as a result. Popular press articles suggest that this is a common result of CPMCS use [Carey 1985; Koepp 1986; Oreskovich 1985]. While researchers hesitate to make this generalization, their case studies indicate some support for the argument [Irving et al. 1986; Gregory and Nussbaum 1982; Walton and Vittori 1982].

Grant [1990] excluded message content to focus on the other dimensions of CPMCS design and their effect on employee attitudes. Her survey research combined four design dimensions with constructs related to qualitative evaluation systems. She analyzed how those factors affected acceptance of the evaluation systems and attitudes about the importance of production and interaction in customer service jobs. Other researchers, more interested in the role of system messages, could use a different model and research method.
Case studies or lab experiments could hold the other four CPMCS design dimensions constant while varying the content of monitor messages. Such research would use a model that excluded the object, task, recipient, and period of the control systems. It could combine design dimensions with constructs reflecting important aspects of motivation such as type and source. A complex model of this kind could help explain how the wording and presentation of evaluation contributes to its interpretation and impact. Still other studies might look at the impact of a single monitor design used with a variety of qualitative systems. Such studies would be represented by a submodel that dropped the quantitative system dimensions and concentrated on the qualitative system dimensions. Lab or field experiments would enable researchers to manipulate the qualitative control system
dimensions and compare the effects of those changes on the overall impact of control. Studies of this type could demonstrate effective and ineffective ways to combine a monitor with human supervision.

Researchers more interested in other dependent variables could use the dimensions in other models. Griffith [1993b], for example, discussed the impact of CPMCS on self-management and quality. She observed the implementation and impact of a monitor system at Hughes Aircraft. From that case observation, she concluded that managers who wished to use monitoring effectively should:

(1) use the CPMCS to gather data about performance, not social habits (elements of task and message);

(2) use the system to feed information back to all members of the work team (recipient);

(3) be flexible and adaptive in the data gathered (task);

(4) gather integrated data allowing for comparison between and within tasks (object, task); and

(5) realize that monitor designs need not serve noxious management practices (task, object, period, recipient, message).
While empirically derived, these guidelines are still propositions. They have not been tested beyond the original case site. Will they lead to higher-quality production processes and teamwork? The italicized dimensions indicate the link between Grifith’s propositions and the theory-based dimensions. Research using the five monitoring dimensions could now be designed to test the generality of the propositions via experiments, multiple case studies, or large-sample surveys. A third area ripe for study is the relationship between monitoring and job stress. As noted in Section 2, findings in this area are contradictory and inconclusive. One reason seems to be that different dimensions of monitoring contribute to different kinds and degrees of stress. Yet, researchers concentrate on comparing monitored employees to unmonitored employees without giving attention to the specific dimensions of monitor design. Smith et al. [1990], in an often cited survey of telephone workers, reported significant differences in feedback, workload, skill use, anxiety, depression, and anger between monitored and unmonitored employees. Respondents to this survey came from AT&T and seven “Baby Bell” operating companies. Presumably, the monitored employees worked (1) for different companies, (2) in different locations, or (3) in different jobs than the unmonitored respondents. Were the two groups then subject to control systems that differed in more than just the use of a computer to collect data? Would those different designs explain the survey findings? Smith et al. asserted that four factors contribute to work stress: (1) work standards (2)
reduced
(3) reduced
control
accompanying (message,
objectivity
monitoring
(task,
message,
object );
recipient);
(message,
task);
ACM Transactions
and on Information
Systems,
Vol. 14, No. 2,
April 1996.
(4) constant supervision (period, recipient).

Fig. 5. A multidimensional view of Smith et al.'s [1990] findings. [The figure links the Object, Task, Period, and Recipient dimensions to work standards, loss of control, reduced objectivity, and constant supervision, and links those stressors to stress.]
Replicating the survey or reanalyzing its responses using our proposed dimensions would provide a systematic way to compare responses and test the assertion that the stress is due to using a monitor, rather than to concomitant job or evaluation factors. For example, Figure 5 incorporates our dimensions into a causal model inferred from Smith et al.'s conclusions. Such a model extends those conclusions beyond correlations between work stressors and a one-dimensional view of monitoring.

5. CONCLUSION

Keen [1981] called for research that would "offer something that remains meaningful as technology changes." The dimensions derived in this article can be generalized to many system technologies. They accommodate manual and automated performance measurement and support models built on a variety of theoretical frameworks. Studies that consider monitors as unidimensional systems cannot effectively compare a wide range of CPMCS designs, nor can they do a good job of explaining different impacts arising from equally "high" or "low" levels of monitoring. The dimensions proposed here also provide a more detailed taxonomy of monitor design than has been commonly used, making it easier to collect and interpret large-sample data.

There may be almost as many questions about CPMCS impact as there are monitored employees. Answering so many questions requires cumulative research, which in turn depends on shared models and theoretical foundations. The dimensional view of CPMCS is one step toward building
frameworks that will support studies with different objectives, designs, and questions.
ACKNOWLEDGMENTS
The authors gratefully acknowledge the assistance of the University of Western Ontario, the University of Cincinnati, the Social Sciences and Humanities Research Council (Canada), the Natural Sciences and Engineering Research Council (Canada), and Labour Canada in funding and supporting this work. We also appreciate the guidance and suggestions by Rob Kling and the anonymous reviewers.

REFERENCES

ALBRECHT, G. L. 1979. Defusing technological change in juvenile courts. Soc. Work Occup. 6, 3, 259-282.

ATTEWELL, P. 1987. Big Brother and the sweatshop: Computer surveillance in the automated office. Soc. Theor. 5 (Spring), 87-99.

BANNISTER, B. D. 1986. Performance outcome feedback and attributional feedback: Interactive effects on recipient responses. J. Appl. Psychol. 71, 2, 203-210.

CAMMANN, C. AND NADLER, D. 1977. Making effective use of control systems. In Perspectives on Behavior in Organizations, J. R. Hackman, E. E. Lawler, III, and L. W. Porter, Eds. McGraw-Hill, New York.

CAREY, E. 1985. Big Brother is watching you work. The Toronto Star (Oct. 7), A1, A4.

DANANN, S. 1990. Stories of Mistrust and Manipulation: The Electronic Monitoring of the American Workforce. 9to5 Working Women Education Fund, Cleveland, Ohio.

DICKSON, G., SENN, J., AND CHERVANY, N. 1977. Research in management information systems: The Minnesota experiments. Manage. Sci. 23, 9, 913-923.

DITECCO, D. AND ANDRE, M. 1987. Operator stress survey: Report to the health and safety subcommittee on machine pacing and remote electronic monitoring. Management Sciences Consulting, Bell Canada, Montreal. Mar.

EIWWAN, E. J. 1987. Employee perceptions and supervisory behaviors in clerical VDT work performed on systems that allow electronic monitoring. In The Electronic Supervisor: New Technology, New Tensions