
Combining Qualitative and Quantitative Methods in Information Systems Research: A Case Study1

By: Bonnie Kaplan
Department of Quantitative Analysis and Information Systems
University of Cincinnati
Cincinnati, OH 45221-0130

Dennis Duchon
Division of Management and Marketing
College of Business
University of Texas at San Antonio
San Antonio, TX 78285-0634

Abstract

This article reports how quantitative and qualitative methods were combined in a longitudinal multidisciplinary study of interrelationships between perceptions of work and a computer information system. The article describes the problems and contributions stemming from different research perspectives and methodological approaches. It illustrates four methodological points: (1) the value of combining qualitative and quantitative methods; (2) the need for context-specific measures of job characteristics rather than exclusive reliance on standard context-independent instruments; (3) the importance of process measures when evaluating information systems; and (4) the need to explore the necessary relationships between a computer system and the perceptions of its users, rather than unidirectional assessment of computer system impacts on users or of users' characteristics on computer system implementation. Despite the normative nature of these points, the most important conclusion is the desirability of a variety of approaches to studying information systems. No one approach to information systems research can provide the richness that information systems, as a discipline, needs for further advancement.

Keywords: Methodology, research methods, research perspectives, qualitative methods, interpretivist perspective, computer system impacts, computer system evaluation, organizational impacts, work, medical and health care applications

ACM Categories: K.4.3, H.0, J.3

1 This paper was presented at the Ninth Annual International Conference on Information Systems, Minneapolis, MN, November 30-December 3, 1988.

Introduction

Information systems had its origins in a variety of reference disciplines with distinct theoretical research perspectives on the important issues to study and the methods to study them (Bariff and Ginzberg, 1982; Dickson, et al., 1982; Mendelson, et al., 1987). This article describes a study that combined some of these distinct perspectives and methods. The article discusses how limitations of one research perspective can be addressed by also using an alternative. This emphasis reflects the lesser familiarity information systems researchers might have with the perspective receiving more in-depth discussion and promotion. A key point of this article is the importance that both perspectives had for this study; no implication of preference in perspectives is intended. The discussion of perspectives provides background for understanding the process and methods of this research. Following the discussion, the article describes how this study evolved. The emphasis is on methods rather than on research findings.

The positivist perspective and quantitative methods

Despite the differences in reference disciplines and the debate over a paradigm for information systems, American information systems research generally is characterized by a methodology of formulating hypotheses that are tested through controlled experiment or statistical analysis. The assumption underlying this methodological approach is that research designs should be based on the positivist model of controlling (or at least measuring) variables and testing pre-specified hypotheses (Kauber, 1986), although alternative methods might be acceptable until research has reached this more advanced and "scientific" stage. Despite some recent recognition given different research perspectives and methods (e.g., Ives and Olson, 1984; Klein, 1986; Kling, 1980; Lyytinen, 1987; Markus and Robey, 1988; Weick, 1984), even those who argue for introducing doctoral students to such alternative approaches as field study and simulation methods nevertheless advocate research based primarily on the positivist tradition (e.g., Bariff and Ginzberg, 1982; Dickson, et al., 1982).

Exclusive reliance on statistical or experimental testing of hypotheses has been soundly criticized in the social sciences, where some of its major proponents have called its effects "disastrous" (Cook and Campbell, 1979, p. 92). There are two grounds on which the approach can be faulted. First, the assumption that only through statistical or experimental hypothesis testing will science progress has come under attack, particularly by psychologists, who perhaps have the dubious distinction of having practiced it the longest. Meehl (1978), for example, argues that science does not, and cannot, proceed by incremental gains achieved through statistical significance testing of hypotheses. Sociologists, too, have contributed to this debate, notably with Glaser and Strauss' (1967) influential argument for theory building through inductive qualitative research rather than through continual hypothesis testing.

A second fault with the approach is the reliance on experimental or statistical control as the defining feature of scientific research. This reliance stems from the admirable goal of controlling experimenter bias by striving for objective measures of phenomena. Achieving this goal has been assumed to require the use of quantifiable data and statistical analysis (Downey and Ireland, 1983; Kauber, 1986) and also removing the effects of context in order to produce generalizable, reproducible results. However, because the study of social systems involves so many uncontrolled - and unidentified - variables, methods for studying closed systems do not apply as well in natural settings as in controlled ones (Cook and Campbell, 1979; Manicas and Secord, 1983; Maxwell, et al., 1986). Moreover, the simplification and abstraction needed for good experimental design can remove enough features from the subject of study that only obvious results are possible. As illustrated in the next section, the stripping of context buys "objectivity" and testability at the cost of a deeper understanding of what actually is occurring.2

2 We would like to credit an anonymous reviewer with this point.

The interpretive perspective and qualitative methods

The need for context-dependent research has been remarked upon by researchers in a variety of disciplines that, like information systems, necessarily incorporate field research methods. Even such strong advocates of quantitative and experimental approaches in behavioral research as Cook and Campbell (1979) state, "Field experimentation should always include qualitative research to describe and illuminate the context and conditions under which research is conducted" (p. 93). Immersion in context is a hallmark of qualitative research methods and the interpretive perspective on the conduct of research. Interpretive researchers attempt to understand the way others construe, conceptualize, and understand events, concepts, and categories, in part because these are assumed to influence individuals' behavior. The researchers examine the social reality and intersubjective meanings held by subjects (Bredo and Feinberg, 1982a) by eliciting and observing what is significant and important to the subjects in situations where the behavior ordinarily occurs. Consequently, qualitative methods are characterized by (1) the detailed observation of, and involvement of the researcher in, the natural setting in which the study occurs, and (2) the attempt to avoid prior commitment to theoretical constructs or to hypotheses formulated before gathering any data (Yin, 1984).

Qualitative strategies emphasize an interpretive approach that uses data to both pose and resolve research questions. Researchers develop categories and meanings from the data through an iterative process that starts by developing an initial understanding of the perspectives of those being studied. That understanding is then tested and modified through cycles of additional data collection and analysis until a coherent interpretation is reached (Bredo and Feinberg, 1982a; Van Maanen, 1983b). Thus, although qualitative methods provide less explanation of variance in statistical terms than quantitative methods, they can yield data from which process theories and richer explanations of how and why processes and outcomes occur can be developed (Markus and Robey, 1988).

Research traditions in information systems

The growing recognition of the value of qualitative methods in social, behavioral, organizational, and evaluation research is manifest in studies and research methodology texts (Argyris, 1985; Bredo and Feinberg, 1982b; Lincoln and Guba, 1985; Miles and Huberman, 1984; Mintzberg, 1973; Patton, 1978; Van Maanen, 1983c). Van Maanen (1983a), for example, has long advanced and practiced these approaches in organizational research. However, despite the strong ties of information systems with organizational and behavioral research, the use of qualitative research, though practiced and advocated in information systems, has not been as visible in this field as in others. Instead, recently there has been greater reliance on laboratory studies and surveys (Goldstein, et al., 1986).

The dominant approach to information technology studies has been based on a positivistic experimental ideal of research. Using this approach, researchers examine the effects of one or more variables on another. These analyses tend to treat the research objects in one of two ways. Either they portray information technology as the determining factor and users as passive, or they view users or organizations as acting in rational consort to achieve particular outcomes through the use of information technology. In either case, the nature of the information technology and users is considered static in that they are assumed to have an essential character that is treated as unchanging over the course of the study (Bakos, 1987; Lyytinen, 1987).

Markus and Robey (1988) characterize such approaches as "variance theory formulations of logical structure and an imperative conception of causal agency." In variance theories, some elements are identified as antecedents, and these are conceived as necessary and sufficient conditions for the elements identified as outcomes to occur. According to Markus and Robey, much of the thinking about the consequences of information technology in organizations assumes that either technology ("the technological imperative") or human beings ("the organizational imperative") are the antecedents, or agents, of change rather than that change emerges from complex, indeterminate interactions between them ("the emergent perspective").

Most studies of computer systems are based on methods that measure quantitative outcomes. These outcomes can be grouped into technical, economic, and effectiveness and performance measures. Such studies treat organizational features, user features, technological features, and information needs as static, independent, and objective rather than as dynamic, interacting constructs, i.e., as concepts with attributes and meanings that may change over time and that may be defined differently according to how individual participants view and experience the relationships between them. Because such studies are restricted to readily measured static constructs, they neglect aspects of cultural environment and social interaction and negotiation that could affect not only the outcomes (Lyytinen, 1987), but also the constructs under study.

Indeed, most evaluations of computer information systems exhibit these characteristics. Published accounts of computing traditionally focus on selected technical and economic characteristics of the computer system that are assessed according to what Kling and Scacchi (1982) call the "discrete-entity model." Economic, physical, or information processing features are explicitly chosen for study under the assumption that the computer system can be broken down into relatively independent elements of equipment, people, organizational processes, and the like. These elements can then be evaluated independently and additively. Social or political issues often are ignored.


That these assumptions underlie much of the research on information technology is evident in the implementation and impacts literature. The effects of some intervention are studied with respect to implementation success or impacts on, for example, organizational structure, user attitudes, or job satisfaction (Bakos, 1987; Danziger, 1985; Ives and Olson, 1984; Kling, 1980; Markus and Robey, 1988). In these studies, information systems or computers are treated as having "impacts" (Danziger, 1985) rather than as socially constructed concepts with meanings that are affected by the "impacts" and that change from person to person or over time.

Few such impact studies are longitudinal - a design promoted in information systems research to track changes over time by collecting data as events occur rather than retrospectively (Franz and Robey, 1984; Vitalari, 1985). Often such studies do not proceed from an interactionist framework - one that focuses on the interaction between characteristics related to people or subunits affected by the computer system and characteristics related to the system itself (Markus and Robey, 1988). For example, they tend not to explore interrelationships between job-related issues and effectiveness of information technology. Jobs are often considered fixed, even though there are empirical and theoretical reasons to expect that computers change the amount and nature of work performed by system users (Brooks, et al., 1977; Fok, et al., 1987; Kemp and Clegg, 1987; Kling, 1980; Millman and Hartwick, 1987; Zuboff, 1982; 1988). Moreover, job-related issues are interpreted differently by different individuals within an organization, and these differences can affect what constitutes a technology's "effectiveness" (Kling, 1980; Lyytinen, 1987). Goodhue (1986), for example, advises assessing the "fit" of an information system with a task. However, one person performing the task may have a rather different view of it than another person performing ostensibly the same task, and thus the "fit" will differ for different users. Consequently, different individuals may have different responses to the same system. Interactionist theories would account for this response; theories that assume that the individual, the job or task, the organization, or the technology is fixed and that one of these determines outcomes would not (Markus and Robey, 1988).

Some argue that each type of research method has its appropriate uses (Markus and Robey, 1988; Rockart, 1984; Weick, 1984); different research perspectives focus on different research questions and analytical assumptions (Kling, 1980; Kling and Scacchi, 1982; Lyytinen, 1987; Markus, 1983; Markus and Robey, 1988). However, there is disagreement concerning the value and use of suggested alternative theoretical perspectives and practical approaches, such as critical social theory (Klein, 1986), structuration theory (Barley, 1986), case study (Benbasat, et al., 1987), and socio-technical design (Fok, et al., 1987; Mumford and Henshall, 1979). On the one hand, Mumford (1985) advocates a qualitative approach. She calls for studies of a "total" situation through action-centered, interdisciplinary, participatory research in which research questions and hypotheses evolve as new developments are introduced. Lyytinen (1987) makes a similar appeal for case studies and action research on the grounds that "this research strategy seems to be the only means of obtaining sufficiently rich data" and because the validity of such methods "is better than that of empirical studies." On the other hand, even some proponents of case study base their position on a research design that fits the quantitative or quasi-experimental approach rather than the qualitative one (Benbasat, et al., 1987; Campbell, 1984; Yin, 1984). Moreover, there has been strong sentiment that information systems researchers need to move beyond case study to more experimental laboratory or field tests (Benbasat, 1984).

Combining Methods

Although not the dominant paradigm, qualitative methods and interpretive perspectives have been used in a variety of ways in information systems research (Barley, 1986; Fok, et al., 1987; Franz and Robey, 1984; Goldstein, et al., 1986; Hirschheim, et al., 1987; Markus, 1983; Mumford and Henshall, 1979; Mumford, et al., 1985). Interpreting information technology in terms of social action and meanings is becoming more popular as evidence grows that information systems development and use is a social as well as technical process that includes problems related to social, organizational, and conceptual aspects of the system (Boland, 1978; Hirschheim, et al., 1987; Kling and Scacchi, 1982; Lyytinen, 1987; Markus, 1983). However, many information systems researchers who recognize the value of qualitative methods often portray these methods either as stand-alone or as a means of exploratory research preliminary to the "real" research of generating hypotheses to be tested using experimental or statistical techniques (Benbasat, 1984). Even papers in which qualitative and quantitative methods are combined rarely report the study's methodological rationale or details (Benbasat, et al., 1987). One result is the failure to discuss how qualitative methods can be combined productively with quantitative ones.

There has been a move in other fields toward combining qualitative and quantitative methods to provide a richer, contextual basis for interpreting and validating results (Cook and Reichardt, 1979; Light and Pillemer, 1982; Maxwell, 1986; Meyers, 1981; Van Maanen, et al., 1982; 1983a). These methods need not be viewed as polar opposites (Van Maanen, 1983b). It is possible to integrate quantitative and qualitative methods (Maxwell, et al., 1986). Combining these methods introduces both testability and context into the research. Collecting different kinds of data by different methods from different sources provides a wider range of coverage that may result in a fuller picture of the unit under study than would have been achieved otherwise (Bonoma, 1985). Moreover, using multiple methods increases the robustness of results because findings can be strengthened through triangulation - the cross-validation achieved when different kinds and sources of data converge and are found congruent (Benbasat, et al., 1987; Bonoma, 1985; Jick, 1983; Yin, 1984), or when an explanation is developed to account for all the data when they diverge (Trend, 1979).

This article describes how qualitative and quantitative methods were combined during the first phase of an ongoing multi-method longitudinal study. Detailed research results are reported elsewhere (Kaplan, 1986; 1987; Kaplan and Duchon, 1987a; 1987b; 1987c) and summarized here. This article has a methodological focus. It describes the development and process of the research and omits all but sketches of the data and analysis necessary to understand how the research evolved. Particular attention is given to how both quantitative and qualitative methods were used productively in a collaboration among investigators from different research perspectives.

Research Setting

Organizational and information context

Computers have been used more in clinical laboratories than in many other areas of medical practice (Paplanus, 1985). The clinical laboratory represents a microcosm of automation and information needs from throughout a medical center (Lincoln and Korpman, 1980). The laboratory is responsible for performing tests ordered by physicians to diagnose illness or track the course of therapy and disease. Laboratories meet this responsibility by receiving and processing physicians' test orders, i.e., collecting specimens (such as blood) from patients, performing the designated tests (such as blood sugar measurements or assessments of bacterial sensitivity to antibiotics), and reporting the test results for use by the physician and for inclusion in the patient's medical record. Laboratory technologists, who are specially trained, perform the laboratory tests. They may also report the results and discuss them with those treating the patient.

The principal function of computers in clinical laboratories involves data management. Computers can relieve the clerical burden of data acquisition and transcription while adding new data entry and computer-related tasks. In addition, they improve legibility, organization, and accuracy of laboratory results reports; increase productivity and efficiency; reduce transcription error; and change laboratory organization and turnaround time for test results (Brooks, et al., 1977; Flagle, 1974; Lewis, 1979; Nicol and Smith, 1986). Thus, such computer information systems affect both the process of work and the service product of a clinical laboratory.

The research site

Research was conducted within the Department of Pathology and Laboratory Medicine at a 650-bed midwestern metropolitan university medical center. A widely used and well-respected commercial laboratory computer information system was installed in April 1985 for use by all nine laboratories within the Division of Laboratory Medicine. These nine laboratories were responsible for laboratory work to support the care of patients admitted to the hospital and those treated in clinics and in the emergency unit. The laboratories also performed specialty tests for other institutions.

This research site was selected because a new computer information system was replacing a manual operation in a context representative of the entire institution's information systems needs. Another advantage was that multiple organizational units would use the same computer system. Consequently, a comparison between units would be possible under conditions where system variables would be reasonably constant. Thus, the study could include both macro and micro units of analysis: individuals, workers and managers, individual laboratories as a whole, the department in which the laboratories were situated, and the organization as a whole. Mixing levels of analysis made it possible to explore the interplay among units at each level and across levels (Markus and Robey, 1988).

The site became available because of the first author's prior contact with the director of laboratory medicine. The director arranged for entry into the laboratories and meetings and gave his support throughout the study's duration. He was available to the research team as needed.

Research team

The project was undertaken by four faculty members in the College of Business Administration at the University of Cincinnati. The researcher from the information systems (IS) area, Bonnie Kaplan, conceived the project, conducted the fieldwork, and provided knowledge of the research setting. She envisioned the purpose of the study as researching what happens when a computer information system is installed into a new setting. The other three researchers were from the organizational behavior area. Two of them left the study at an early stage; Dennis Duchon remained. Their primary interest was in testing preexisting theory in a new setting using questionnaires as the means of gathering quantitative data for statistical analysis. They viewed qualitative methods as a means for deriving quantitative measures, rather than as rich sources of research data useful for grounding theory and interpretation. Consequently, they approached the study differently from Kaplan in three ways: (1) they decided to research the impact of the computer information system on work in the laboratories; (2) they began the study intending to test theory through statistical analysis of quantitative survey data; and (3) they did not consider interviewing and observation as a means of data collection.

Methods

Research question

Each member of the original research team formulated different research questions. The three organizational behavior members viewed the new computer system as an opportunity to test existing theory concerning job characteristics and job satisfaction in a new setting. Although each of their research questions differed, these three wished to investigate how job characteristics varied in the clinical laboratories. Consequently, the study focused on laboratory technologists. The research questions for these three researchers were to investigate (1) the effects of the computer information system on job characteristics, (2) how departmental technology affected computer acceptance, (3) how leader-subordinate relationships affected computer acceptance, and (4) how job characteristics varied among laboratories and changed over time.

Working in the interpretive tradition, Kaplan expected to shuttle among questions, hypotheses, and data throughout the study. Because of the other team members' primary interest in the laboratory work, she also adopted this focus. Her research question was to identify and account for both similarities and differences among laboratory technologists and among laboratories in their responses to the computer information system.

Research design

Each research question reflected differences between a quantitative hypothesis-testing approach (where the effects of an intervention on dependent variables are statistically assessed) and a qualitative approach (where categories and theories are developed inductively from the data, generalizations are built from the ground up, and various interpretive schemes are tried and hypotheses are created and reformulated during the course of the study) (Glaser and Strauss, 1967; Van Maanen, 1983b). These differences resulted in a longitudinal multi-method case study that incorporated each team member's interests and skills. Because this initial case design included both qualitative and quantitative methods, both positivist and interpretive perspectives were incorporated in order to best link method to research question.3

3 Case study, an investigation using multiple sources of evidence to study a contemporary phenomenon within its real-life context (Bonoma, 1985; Yin, 1984), has been advanced for information systems research in order to understand the nature and complexity of the processes taking place (Benbasat, et al., 1987). Although they are often distinguished from quantitative or quasi-experimental research (Bonoma, 1985; George and McKeown, 1985), case studies may or may not exhibit the defining conditions of either quantitative or qualitative research (Campbell, 1984; Yin, 1984). There are a variety of recognized approaches to case study (Benbasat, et al., 1987; Bonoma, 1985).

The initial design was developed after considerable discussion and, as is common in qualitative research, was left open for modification and extension as necessary during the course of the study (Glaser and Strauss, 1967). Because research access to the site was not secured until shortly before the new system was installed, no pre-installation survey measures were taken. Consequently, as is often true in case studies, there could be no comparison of pre- and post-installation measures (Cook and Campbell, 1979, p. 96).

The first step in the design was to interview laboratory directors and selected hospital administrators prior to system installation. The second step was to observe in each laboratory after installation. The remaining steps were to administer questionnaires at several periods after the computer system was installed. The first wave of questionnaire data gathering occurred when a new routine had been established after the computer system was installed. The second wave was planned for approximately one year later, when the initial changes caused by the computer system became part of normal procedure. Future waves would be at intervals depending on initial results. Initially, regular participant observation at laboratory management meetings and staff meetings was not included in the design. This component was added when the meetings were instituted.

Data collection

Qualitative methods included open-ended interviewing, observation, participant observation, and analysis of responses to open-ended items on a survey questionnaire. Quantitative methods were employed to collect and analyze data from survey questionnaires. All participants were assured of confidentiality.

Although both qualitative and quantitative approaches were used, it quickly became apparent that they were viewed differently by research team members. Each team member conducted interviews and observations, but only the qualitatively-oriented member kept systematic field notes to be used for data analysis. Other team members viewed interviews and observations as providing "background" rather than "data." Consequently, Kaplan's field notes from each of these activities were used for analysis.

Interviews and Observations

The director of the laboratories, the chairman of the department, and the administrator of the hospital were interviewed early in the study. Teams of two or three researchers also interviewed the individual laboratory directors and some chief supervisory personnel during the week prior to computer system installation. Kaplan was present at all but two of these interviews. The purpose of the interviews was threefold: (1) to determine what interviewees expected the potential effects of the computer system to be on patient care, laboratory operations, and hospital operations; (2) to inquire about possible measures and focus of the study; and (3) to generate questionnaire items for a survey of laboratory technologists.

During the month prior to administering the survey, individual researchers were present in the laboratories to observe and talk with laboratory staff while they worked. These observations were intended to influence questionnaire item development.

Starting three months after the computer information system was installed, Kaplan was urged by one of the laboratory directors to attend weekly meetings where directors and head supervisors discussed laboratory management problems. These meetings were instituted as a result of system installation, and they became a regular feature of laboratory management even after system problems ceased to be discussed. Kaplan attended these meetings regularly throughout the study as an observer and occasional participant, and was a participant observer at other departmental meetings.

Survey Questionnaire

A survey instrument was developed for laboratory technologists, the primary direct users of the computer information system. It was composed of three parts. The first part consisted of measures adapted from the standard instruments that addressed job characteristics (Hackman and Oldham, 1976), role conflict and ambiguity (House and Rizzo, 1972), departmental technology (Withey, et al., 1983), and leader-member relationships (Dansereau, et al., 1975).

The second part of the questionnaire used Likert-scale measures of expectations, concerns, and perceived changes that may be related to the use of the computer system. These measures were developed by analyzing the interviews and observations to derive categories for questions that focused on the primary expectations expressed by interviewees, attendees at meetings, and the laboratory technologists who were observed. Additional questions concerning expectations were adapted from Kjerulff, et al. (1982).

The survey instrument concluded with four open-ended questions that assessed changes caused by the computer system and elicited suggestions for improved system use. These questions were also derived from the observations and interviews. They were intended to serve two purposes. The first was to ensure that important issues were addressed even if they had not been included in scaled-response questions. The second was to elicit information about impacts for which measures were difficult to develop.

All questionnaire items were pretested on a sample of laboratory personnel selected by head supervisors. After revision, the questionnaire was administered to all 248 members of the laboratory staff seven months after computer system installation. The staff knew about the study because of the prior laboratory observations and announcements at meetings. Each survey was accompanied by an explanatory letter. A research team member also explained the study during weekly staff meetings when the questionnaires were distributed. In some laboratories, this meeting was devoted to answering the questionnaire. In others, staff members were allowed to complete the survey during work hours provided that it not interfere with their job functions. A modified version of the questionnaire was distributed to all laboratory technologists for the second wave of data collection beginning nearly a year after the first. No further surveys were conducted.

Sample

Just before the system was installed, 11 interviews were conducted with 20 interviewees representing all the laboratories. These interviewees included laboratory directors at all levels, head supervisors, and an administrator of one unit of the hospital.

All 248 members of the laboratory staff were surveyed starting seven months after system implementation. Data from all 119 completed questionnaires (48%) were analyzed. Only seven of the respondents had been interviewed. Most respondents were technologists who had college degrees, worked first shift, and had not worked previously in a laboratory with a computer information system. As is typical of laboratory technologists, almost all were women.

Analysis and Results

Data are presented from interviews immediately prior to installation and from the first wave of survey questionnaires seven months later. The quantitative data were analyzed using a standard statistical software package. Interview notes and responses to open-ended questions on the questionnaire were analyzed by the constant comparative method (Glaser and Strauss, 1967). Using this method, categories reflecting computer system issues important to laboratory directors and technologists were derived systematically.

Open-ended questions

Kaplan first analyzed open-ended questions on the questionnaire. Three themes predominated in the answers: (1) changes in technologists' work load, (2) improvements in results reporting, and (3) the need for physicians and nurses to use computer system terminals rather than telephones for results inquiry.

Technologists expressed a general sense that their clerical duties and paperwork had increased and productivity had suffered. However, they credited the computer system with making laboratory test results available more quickly. They said that results reports were also more complete, more accurate, easier to read, and provided a picture of "the whole patient." Even though phone calls interrupted laboratory work, they felt that doctors and nurses expected to get test results by calling the laboratories rather than by using the computer system. In addition, respondents sensed they were being blamed by others in the medical center for problems caused by the computer system.

When responses were grouped by laboratory, marked differences were evident among some of the laboratories. Laboratories differed in their assessment of changes in their workload, number of telephone calls, improvement in reporting, and attitudes expressed toward the computer system.
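A minimal sketch of this grouped tabulation, for readers who want to see its shape in code: the laboratory names and theme labels below are illustrative assumptions, not the study's categories, and the study's actual coding was done by hand using the constant comparative method.

```python
# Sketch of grouping hand-coded open-ended responses by laboratory.
# Laboratory names and themes are hypothetical, for illustration only.
import pandas as pd

coded = pd.DataFrame({
    "laboratory": ["hematology", "hematology", "chemistry",
                   "chemistry", "microbiology", "microbiology"],
    "theme": ["increased workload", "increased workload",
              "improved reporting", "improved reporting",
              "improved reporting", "terminal vs. telephone use"],
})

# Cross-tabulating theme mentions by laboratory surfaces between-laboratory
# differences before any statistical testing.
print(pd.crosstab(coded["laboratory"], coded["theme"]))
```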

Scaled-response questions

Responses to the questionnaire items pertaining to job characteristics and to the scaled-response items assessing computer expectations and concerns were analyzed next by two researchers who intentionally remained unaware of the findings on open-ended questions. Having already analyzed the responses to open-ended questions, Kaplan assisted them in an initial Q-sort of computer system questionnaire items and later interpretation of a factor analysis of these items. A fourth researcher had left the study team. Another of the original team members left the study during data analysis, leaving Duchon and Kaplan to interpret the results of Duchon's statistical analysis.

A factor analysis of job items provided evidence of construct validity for the measures. Four factors were extracted: skill variety, task identity, autonomy, and feedback. These factors comprise a four-factor model of the core job dimensions (Birnbaum, et al., 1986; Ferris and Gilmore, 1985), and are also used to assess job complexity (Stone, 1974). Overall, no job characteristic differences due to environmental or individual factors were found. Respondents reported the same levels of job characteristics regardless of age, gender, job experience, etc.

Five computer system variables were extracted: external communications, service outcomes, personal intentions, personal hassles, and increased blame. Reliability for these five factors ranged between .53 and .87. Data on these variables indicated that respondents generally were positive about the computer system. They reported that the system had improved relations and aided communication between staff in the laboratories and the rest of the medical center (external communications), and that results reporting and service was better (service outcomes). Respondents were very positive about their own continued use of the computer system (personal intentions). They did not think that irritants of their jobs, such as the number of phone calls and the amount of work (personal hassles), had increased, or that laboratory staff was blamed more and cooperated with less by physicians and nurses (increased blame). There were no statistically significant correlations between job characteristic measures and computer system measures.

At this point, nothing reportable had been found in the quantitative data. This was surprising because the computer system variables were similar to themes identified in responses to the open-ended questions, where some laboratories differed markedly in their responses. Consequently, Kaplan continued to seek explanations for the differences among laboratories that were evident in the qualitative data.
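To make the shape of this workup concrete, the following Python sketch runs a comparable factor extraction and scale reliability check. It is not the study's code (the original analysis used a standard statistical package of the period); the file name, item-name prefixes, and the varimax rotation are assumptions made for illustration.

```python
# Sketch of the factor-analytic workup described above, on a hypothetical
# CSV of survey responses (one row per respondent, one column per item).
import pandas as pd
from sklearn.decomposition import FactorAnalysis

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal-consistency reliability of a set of scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

survey = pd.read_csv("survey_wave1.csv")  # hypothetical file name

# Four factors for the job characteristic items (the study reported skill
# variety, task identity, autonomy, and feedback).
job_items = survey.filter(like="job_")
job_fa = FactorAnalysis(n_components=4, rotation="varimax").fit(job_items)
print(pd.DataFrame(job_fa.components_.T, index=job_items.columns).round(2))

# Five factors for the computer system items (external communications,
# service outcomes, personal intentions, personal hassles, increased blame).
cs_items = survey.filter(like="cs_")
cs_fa = FactorAnalysis(n_components=5, rotation="varimax").fit(cs_items)
print(pd.DataFrame(cs_fa.components_.T, index=cs_items.columns).round(2))

# Reliability of one resulting scale (the study's alphas ranged .53 to .87).
hassle_items = [c for c in cs_items.columns if c.startswith("cs_hassle")]
print("alpha, personal hassles:", round(cronbach_alpha(survey[hassle_items]), 2))
```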

Interviews

Next, Kaplan again analyzed the interviews with laboratory directors, this time to determine the expectations of the directors prior to system implementation. She thought that perhaps different expectations among directors could have contributed to different responses within the laboratories. Different expectations were not found.

The analysis indicated that prior to implementation, directors generally agreed that there would be more work for laboratory technologists, but that technologists' jobs would not be changed by the computer system. This assessment was made with full knowledge that technologists would have to enter test results and some test orders, and that their paperwork and billing duties were expected to decrease.

Interviewees also expected that test results would be available more quickly. They thought that reports would have fewer transcription errors, would be more legible, and would provide useful cumulative information and patient profiles. They expected these improvements to reduce the number of unnecessary laboratory tests and stat (i.e., emergency) and duplicate test orders, and anticipated that the number of telephone inquiries concerning results would decrease.

Developing an Interpretive Theoretical Model

The five computer system variables that were developed from the questionnaire items reflected themes very similar to those found in the qualitative data from interviews and responses to open-ended questionnaires: changes in workload and changes in results reporting. This congruence of themes from independent sources of data strengthened confidence in their validity.

The analysis of qualitative data from the questionnaires suggested important differences among laboratories, even in the absence of statistical verification, because the differences were so striking. Knowledge obtained from observations in the laboratories and at meetings, as well as from comments made to her by individual laboratory technologists and directors, strengthened Kaplan's conviction that these differences were important. It remained to determine the nature of the differences and to explain the lack of reportable statistical results.

An impression gained from the qualitative data suggested an interpretation. Kaplan had noticed that laboratory technologists seemed to emphasize either the increased work or the improved results reporting in their answers to the open-ended questions. The repetition of these themes in all the data sources reinforced their importance. This repetition suggested that there were two groups of respondents corresponding to these two themes. The finding that directors did not expect technologists' jobs to change raised the question of what constituted a technologist's job. It seemed that directors and technologists might have different conceptions of the technologist's job and that these differences were reflected in their assessment of the computer system. Individual laboratory technologists might also differ in their views of their jobs, just as they differed in what they emphasized about the computer system in their comments on the questionnaires.

These insights led to a model depicting two groups of laboratory technologists. According to this model, one group saw their jobs in terms of producing results reports, the other in terms of the laboratory bench work necessary to produce those results reports. As shown in Figure 1, the group who saw its jobs in terms of bench work was oriented toward the process of producing laboratory results, whereas the group who viewed its work in terms of reporting results was oriented toward the outcomes of laboratory work; the members of this group saw themselves as providing a service.

Process orientation: view of job = bench work; responses to computer system = increased work load; computer system variables = personal hassles, increased blame.
Product orientation: view of job = results and service; responses to computer system = improved results reporting; computer system variables = external communications, service outcomes.
Orientation = (Communications + Service) - (Hassles + Blame)

Figure 1. A Model Depicting Different Orientations to Work and the Computer in Clinical Laboratories

The ways in which the computer affected the production work in the laboratories were assessed in those questionnaire items comprising personal hassles and increased blame, e.g., effects on paper work and telephone calls. Thus, the group who saw its jobs in terms of the bench work (i.e., the process of producing laboratory test results) would have responded to the computer system according to how it had affected what was assessed in these items. The product-oriented group of respondents who saw its jobs in terms of the service provided (i.e., the product, rather than the process, of laboratory work) would have assessed the computer system in terms reflected in its responses to external communications and service outcomes items, e.g., improved results reporting.

This interpretation indicated a reason why job characteristics measures did not depict differences among laboratories and why there was no correlation between job characteristics and computer system variables. The kinds of differences in job orientation depicted in the model would not have been measured by job characteristic measures.

After this model was proposed, two new variables were created to measure whether technologists' responses differed according to the computer system's impact on process versus product aspects of their jobs. As shown in Figure 1, one variable combined scores on external communications and service outcomes, and the other combined scores on personal hassles and increased blame. The variable personal intentions was omitted because there was no theoretical basis for including it; personal intentions did not assess the interaction between specific aspects of the computer system and the job. Moreover, this variable was not a good discriminator between laboratories or individuals because individual respondents' scores were all high.

There was a significant negative correlation between these two new variables, thus indicating that respondents tended to have high scores on one variable and low scores on the other, i.e., they were either product- or process-oriented. An orientation score for each respondent was then computed by subtracting the sum of that person's scores on personal hassles and increased blame from the sum of his or her scores on external communications and service outcomes. When the orientation score was regressed on laboratories, statistically significant differences in orientation were found across laboratories. Thus, some laboratories, like some technologists, were process-oriented while others were product-oriented. Moreover, respondents from the laboratories rating the strongest process orientation expressed the most hostility on the open-ended questions from the survey, whereas respondents from laboratories with the strongest product orientation expressed strong satisfaction with the computer system.

Thus, our results suggested that the interpretation was correct. Discussion with the laboratory director about the intermediate and final results and about the research papers produced from the study supported the interpretation.
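This composite-variable step lends itself to a short worked sketch. The fragment below, under assumed column names, computes the two composites and the orientation score defined in Figure 1, checks their correlation, and tests for laboratory differences; regressing a score on a single categorical predictor such as laboratory membership is equivalent to the one-way ANOVA used here. It illustrates the technique, not the study's actual code.

```python
# Sketch of the orientation-score analysis, assuming per-respondent scale
# scores in a hypothetical CSV with the column names used below.
import pandas as pd
from scipy import stats

df = pd.read_csv("scale_scores.csv")  # hypothetical: one row per respondent

# Composite variables from Figure 1.
df["product_side"] = df["external_communications"] + df["service_outcomes"]
df["process_side"] = df["personal_hassles"] + df["increased_blame"]

# The study found a significant negative correlation between the composites:
# respondents scored high on one and low on the other.
r, p = stats.pearsonr(df["product_side"], df["process_side"])
print(f"composite correlation: r = {r:.2f}, p = {p:.3f}")

# Orientation = (Communications + Service) - (Hassles + Blame); positive
# values lean product-oriented, negative values lean process-oriented.
df["orientation"] = df["product_side"] - df["process_side"]

# Regressing the score on laboratory membership alone is equivalent to a
# one-way ANOVA across the nine laboratories.
groups = [g["orientation"] for _, g in df.groupby("laboratory")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"orientation by laboratory: F = {f_stat:.2f}, p = {p_val:.3f}")
```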

Discussion

This study illustrates the difficulties as well as the strengths that occurred when research perspectives from different disciplines were combined. Although both authors agree that this collaboration enriched the study as well as their own understanding of research, it was rife with frustrations. Neither author initially realized the extent to which differing values, assumptions, and vocabularies would interfere with the project. Continued interaction was necessary to recognize that there were differences even in understanding the same words. Persistent effort was needed to identify and resolve these differences.

A key incident in the study stemmed from these differences. Because the incident is illustrative of the difficulties as well as the enrichment caused by the different perspectives of research team members, we recount it here. After the initial statistical analysis of data from the scaled-response questions, Duchon thought that there were no statistical results worth reporting. Kaplan thought he meant that there were no statistically significant differences among laboratories in their reaction to the computer system. However, she did not agree with these results because they did not fit with her observations and analysis. Duchon remained convinced that there were no results worth pursuing. Consequently, Kaplan began to analyze the remaining data from the interviews. This analysis strengthened her convictions that the qualitative data did indicate patterns worth investigating and that Duchon had to be convinced of this. That determination led to the development of the interpretive theoretical model.

The turning point in the study, and in the authors' collaboration, happened when Duchon reanalyzed the quantitative data as suggested by Kaplan. This new analysis supported her interpretation. When the initial analysis was repeated, there were statistically significant differences among laboratories for the computer system variables. We were not able to determine the reason for the apparent lack of reportable differences in the first statistical analysis. Because of continued difficulties in communication, such misunderstandings, in retrospect, were not surprising.

Some of the difficulties we experienced have been described so that others who participate on similar research teams may shorten the learning period and reduce the communication problems. It should be clear from this account that, as is common in research that includes qualitative approaches, the process is messy. Impressions, interpretations, propositions, and hypotheses were developed over the course of the study, a process that hardly fits the positivist ideal of objective collection of neutral or purely descriptive "facts" (Van Maanen, 1983b). This messiness should not dismay those who experience it.

Despite the methodological and communications problems, the collaboration was productive and friendly. Our tenacity in holding to our initial independent analyses of the different data - though a problem at the time - and the increased respect each of us developed for the other's approach, in fact, were positive aspects of our research. As has happened in other projects (Trend, 1979), a strong determination to accommodate each approach and to reconcile apparently conflicting data resulted in an interpretation that synthesized the evidence. We cannot state too strongly that the advantages of our collaboration outweighed the difficulties.

Conclusions

This article describes how qualitative and quantitative approaches were combined in a case study of a new information system. It illustrates four methodological points: (1) the value of combining qualitative and quantitative methods; (2) the need for context-specific measures of job characteristics; (3) the importance of process measures when evaluating information systems; and (4) the need to explore the necessary relationships between a computer system and the perceptions of its users.

Combining qualitative and quantitative methods proved especially valuable. First, the apparent inconsistency of results between the initial quantitative analysis and the qualitative data required exploration. The initial statistical analysis revealed only that certain well-recognized job characteristic measures were not capturing the differences in the sample and that the correlations between job characteristics and computer system variables were not significant. However, systematically different responses to the computer system were present in the qualitative data. Further analysis to resolve the discrepancy led to the use of new measures, developed from the quantitative questionnaire data, that captured job-related responses to the computer system and that supported conclusions drawn from the qualitative data. Thus, triangulation of data from different sources can alert researchers to potential analytical errors and omissions.

Mixing methods can also lead to new insights and modes of analysis that are unlikely to occur if one method is used alone. In the absence of qualitative data, the study would have concluded that there were no reportable statistically significant findings. However, on the strength of the qualitative data, a theoretical interpretive model was developed to drive further statistical analysis. The inclusion of qualitative data resulted in the generation of grounded theory.

The results also suggest the value of investigating specifically how a computer system affects users' jobs. In this study, as in others, no correlation was found between standard job characteristic measures and variables measuring responses to the computer system. Nevertheless, these responses were related to differences in job orientation. Studies that independently assess characteristics and attitudes ignore the specific impacts of a computer system on work. Standard job characteristic measures do not necessarily assess job-specific factors or capture the different job orientations held by workers with "the same job." Instead of sole reliance on these measures, instruments need to include context-specific measures of how a computer system supports work, as conceptualized by the worker. In this study, measures assessing these aspects were derived from knowledge of the setting gained by field experience, again suggesting the value of combining qualitative and quantitative methods.

The study further suggests the need to move beyond outcome evaluations to investigating how a computer system affects processes. By extending research to examine the specific effects of individual computer information systems in particular contexts, our general understanding of what affects the acceptance and use of computer information systems can be improved.

Lastly, there is a growing amount of literature that assesses the impacts of computer information systems. This study was intended, by part of the research team, as an assessment of the impacts of a computer information system on work. However, impact studies consider only one side of the important interaction between computer information system users and the computer information system: the effects of the system on users, the organization, or society. Research designs could just as well consider the impacts of users, the organization, or society on the computer information system. Sorting out the direction of causality is a difficult, if not meaningless, task that is made all the more difficult by the multitude of confounding factors common to any field setting. It also may be impossible because such designs do not account for changes due to the interrelationships between the "dependent" and "independent" variables. For example, in this study, the job orientation of the users could have mediated their response to the computer information system. Alternatively, the system itself could have caused the differences in job orientation. Rather than formulate the analysis in either of these ways, it seemed more fruitful to take an interactionist approach and consider the interrelationships between users' job orientations and their responses to the system. "Impacts" implies unidirectional causality, whereas, as has been found in other studies, investigations of interactions can be more productive and meaningful.

Despite the normative nature of the methodological points, the most important conclusion is the need for a variety of approaches to the study of information systems. For the same reasons that combining methods can be valuable in a particular study, a variety of approaches and perspectives can be valuable in the discipline as a whole. No one method can provide the richness that information systems, as a discipline, needs for further advancement.

Acknowledgements

Joseph A. Maxwell of the Harvard Graduate School of Education provided invaluable theoretical assistance and editorial advice. We are also grateful to the anonymous reviewers, whose comments helped to further clarify and strengthen the paper.


About the Authors

Bonnie Kaplan is an assistant professor of information systems at the University of Cincinnati's College of Business Administration. She is also an adjunct assistant professor of clinical pathology and laboratory medicine. Dr. Kaplan received her Ph.D. in history from the University of Chicago. She has had extensive practical experience in information systems development at major academic medical centers. Dr. Kaplan's research interests include behavioral and policy issues in information systems, implementation of technological innovations, information systems in medicine and health care, and the history and sociology of computing. Her publications have appeared in the Journal of Medical Systems, International Journal of Technology Assessment in Health Care, Journal of Health and Human Resources Administration, Journal of Clinical Engineering, and volumes on human-computer interaction and on medical computing.

Dennis Duchon is an assistant professor of organizational behavior in the College of Business at the University of Texas at San Antonio. Before joining the faculty there, he was an assistant professor of organizational behavior at the University of Cincinnati. He received a Ph.D. in organizational behavior from the University of Houston. He has worked as a manager both in the United States and abroad. Dr. Duchon's research interests include behavioral decision making and the implementation of new technologies. He has published in the Journal of Applied Psychology, IEEE Transactions on Engineering Management, and Decision Sciences.
