Proceedings of the 33rd Hawaii International Conference on System Sciences - 2000
Component-Based Information Systems: Toward a Framework for Evaluation

Mark Lycett and George M. Giaglis
Department of Information Systems and Computing
Brunel University, Uxbridge, Middlesex, UB8 3PH, United Kingdom
Email: {Mark.Lycett | George.Giaglis}@brunel.ac.uk

Abstract

Information systems evaluation is problematic. It is noted to be under-researched in relation to development, it is contentious in terms of the quantification, measurement and effort required, and there is a marked difference between theoretical development and practical use. Drawing on a critical review of 'tradition' in information systems evaluation, this paper applies the lessons learnt to the context of component-based development. In common with the development approaches that have preceded components, there is a danger that investment decisions fall prey to 'hype', and there is evidence to suggest that considerable risk is inherent in the migration to component-based systems. Such risk provides an opportunity to bridge the perceived chasm between information systems development and evaluation in three ways. Firstly, by integrating business-driven evaluation with the development approach at an early stage of the adoption process. Secondly, by using this integration to lessen the perceived effort of evaluation. Thirdly, by making evaluation a dynamic and ongoing process. With this in mind, the paper discusses the demands of evaluating component-based development in the context of a conceptual framework that concentrates on gaining a pluralistic understanding of the information needs of stakeholders.
1. Introduction

Information systems evaluation is an area noted to be somewhat under-researched [14, 45]. Indeed, whilst much intellectual effort has been devoted to the development of information systems in a technology-centric sense, the same does not seem to hold true for the evaluation of investment in such systems in a business-centric sense. Evidence suggests that the majority of investment decisions take place without a rigorous appraisal of the expected costs and organisational benefits and that they often represent an 'act of faith' based on competitive imperatives [12, 20, 41]. Similarly, there is little evidence of evaluation during the operational lifecycle of the system. This may be argued to be due largely to the problems of 'measurement', which can be described as follows. Firstly, the business costs and benefits associated with the development and use of an information system are inherently hard to understand and predict and, as a consequence, difficult to quantify and measure [2, 13, 44]. Secondly, business organisation is dynamic and changing and, as a consequence, business costs, benefits, risks and the like are relative concepts [24]. Measurement and evaluation thus need to be treated as an ongoing process. The pragmatic consequence of these points is that the perceived effort required for evaluation is assumed to be too great in the context of current business practice, despite the high levels of investment involved. Given current levels of information system 'failure' and the cost associated with ongoing system maintenance, this assumption may be questioned. Despite the adoption of a methodical approach to system development, there is considerable evidence to suggest that information systems continue to take too long to build, cost too much to implement and maintain, and fail to meet the needs of their environment in the long term [15]. The dynamic nature of business organisation may be argued to have much to do with these problems and, increasingly, the 'silver bullets' of system development attempt to address flexibility: the capability of the system to respond to the changing needs of the business environment in a timely and graceful manner. These approaches include modularity, object-orientation and, most recently, component-based development. The latter approach builds upon the former two and aims at the dynamic 'plug-and-play' composition of information systems from heterogeneous off-the-shelf software components. Though this has been an historical industry goal [26], the approach is now supported with the requisite
underlying technologies and is rapidly gaining in popularity. The implications of a component-based approach to systems development and evolution are significant and it may be argued that there is considerable risk involved, not least from the hype surrounding the technology. Such risk, however, provides an opportunity to bridge the perceived 'chasm' between information systems development and evaluation. Firstly, by integrating business-driven evaluation with the development approach at an early stage of the adoption process. Secondly, by using such integration to decrease the perceived effort of evaluation. Thirdly, by making evaluation a dynamic and ongoing process. In attempting to achieve these three aims, the paper begins by examining the tradition in information systems evaluation, looking both at process and at current methodical approaches. This review of tradition also provides the background for the critical discussion of the limitations of evaluation that follows, which further explores the difficulties of measurement. With these points made clear, the paper then considers the perceived benefits and implications of component-based development. This allows for a reconciliation of the demands of component-based development with the demands of business-driven evaluation, alongside an examination of the ways in which the effort associated with evaluation can be lessened. Reconciliation is explored through the development of a conceptual framework that concentrates on gaining a pluralistic understanding of the information needs of the stakeholders in component-based systems development and composition. The framework aims to address the noted limitations of evaluation and provide both the researcher and the practitioner with the foresight to make informed decisions related to costs, benefits, risks and the like.
2. The Process of Evaluation

The need to evaluate information systems investment is not new. The term 'software crisis' was coined in the late 1960s to articulate problems associated with information systems development [25]. In recent years, the changing role of information systems in business organisation has given added impetus to the problem of information systems evaluation [13]. The increasingly high level of expenditure on information systems, their increasing penetration of the core functions of business, together with their potential for changing the nature of business itself (witness e-business), have all served to raise the profile of evaluation. Practitioners and managers alike have expressed increasing concern with regard to their ability to evaluate information system investment. The most prevalent concerns have been found to be (a) measuring and improving information systems effectiveness/productivity and (b) aligning the information systems organisation with that of the enterprise [42]. Clearly, evaluation in such respects is important for several reasons. Firstly, organisations need to justify information system investments on the basis of the large sum of capital consumed and the need to prioritise between heterogeneous investment proposals competing for scarce organisational resources [37]. Secondly, managers need to have a better understanding of the impact of an information system on organisational performance to better deploy resources and improve the organisational position vis-à-vis its competitors [10]. Conversely, a lack of understanding in this respect may lead to inappropriate resource allocation and competitive disadvantage [12]. Thirdly, viewed in systems terms, evaluation provides a basic managerial feedback function as well as forming a fundamental component of the organisational learning process [36]. Lastly, evaluation provides benchmarks for what is to be achieved by the information system investment. Such benchmarks can later be used to provide a measure of the success of the implementation of development projects [12]. In conceptual terms evaluation may be argued to be 'omnipresent', in that it may be carried out pre-development, during development and post-development. It is the contention here that pre- and post-development evaluation are the most problematic. Pre-development evaluation is problematic for the reason that, as the information system does not exist at that point, no real data/information related to performance or organisational impact exists. Evaluation thus has to be based on assumptions, forecasts and judgement. This has long been considered a difficult and elusive domain and many reasons have been offered in support of this view. These are summarised in Table 1. This shows that the major difficulties with evaluation relate either to benefit measurement or to the methodical approach adopted, which is not surprising as the direct costs associated with developing a given information system are relatively easy to measure. Post-development evaluation is problematic in the sense that business organisation and the processes, structures and mechanisms that are seen to comprise it (which are modelled as part of the information system) are neither static nor invariant [24]. In evaluating costs and benefits immediately after development, and at given times in the longer term, there is no guarantee that the organisational situation is 'as-was' at the time that costs and benefits were set out. The pertinent question this raises in connection with both organisational practice and information systems evaluation is whether the focus should be placed on
measuring (a) a set of predictions against an historical baseline or (b) the degree of support in achieving dynamic and changing organisational goals. In similar fashion to pre-development evaluation, the latter point raises many of the issues noted in Table 1. In addition, it raises the issue of understanding the consequences of not meeting the ongoing requirements of business organisation.

Cost-related:
- Estimating the cost and time to develop new applications is difficult and unreliable
- Human and organisational costs are often neglected during evaluation

Benefit-related:
- Benefits may include intangible, indirect or strategic advantages that are inherently difficult to express in quantitative terms (especially monetary)
- Benefits are indirect to business and therefore indistinguishable from other confounding factors (people, process and strategy for example)
- Many applications are targeted at achieving second-order effects that are difficult to predict and measure
- Fractional savings cannot be aggregated to provide realistic savings on an organisation-wide scale
- The planning horizon (for which benefits must be assessed) may be longer than the forecasting horizon (for which benefits can be assessed)
- Organisations may simply be unaware of the potential benefits of innovative new systems

Risk-related:
- The life-span of an information system is uncertain (due to technological obsolescence or changing requirements)
- The impact of the information system depends on a number of external factors that may lie outside the sphere of organisational control

Method-related:
- Financial and accounting techniques may be inappropriate for evaluation
- The information system is usually part of a wider business reorganisation and hence the investment cannot be evaluated out of the context of the overall change
- Tasks left out of the scope of the information system must also be evaluated as they can contribute significantly to the overall costs

Politics-related:
- Project champions tend to underestimate costs and overestimate benefits

Table 1. The Difficulties of Pre-development Evaluation
3. Method in Evaluation

In the absence of a concrete theory of information system evaluation, a plethora of methods and techniques exists for aiding decisions related to the desirability and priority of investment. For reasons of space, an exhaustive review of these methods is beyond the scope of this paper; the interested reader is referred to [13, 19, 36]. The classical financial/accounting methods of investment evaluation are currently the most widely used methods for information systems evaluation. These methods originate from the notion of Discounted Cash Flows (DCF), which is based on estimating and comparing the outflows (costs) and inflows (benefits) of a proposed investment, using a given discount factor to compute the present value of future monetary estimates. Variants upon this theme include Net Present Value (NPV), Return on Investment (ROI) and Internal Rate of Return (IRR). These methods have the advantage of being widely used and tested in a variety of investment evaluation settings. Their major drawback is that they focus exclusively on the estimation of cash flows and consequently tend to be based on data that satisfy accounting criteria and that can be legitimised via appearance in financial statements [13]. In general, they are not suitable for evaluating investments that are expected to yield benefits that are primarily intangible, indirect or strategic in nature [8]. Cost Benefit Analysis (CBA) is a variant of DCF-based methods that attempts to overcome the problem of valuing intangibles. It does this by assigning a monetary value to each element contributing to the costs and benefits of an information system project, including intangibles. The drawback here is that, to achieve this, the method is necessarily based on surrogate measures for intangible costs and benefits, which may involve considerable controversy and debate [13]. SESAME is a variant of CBA in which the payback of the information systems project is derived by computing what the costs would have been if the same functionality had been delivered by non-computer-based methods [22]. Return on Management (ROM) provides another alternative, which seeks to provide an index of the contribution of Management Information Systems (MIS) to the enterprise [38]. Lastly, in formal terms, Information Economics (IE) provides a comprehensive evaluation method that is argued by its authors to be applicable to all evaluation situations [29]. The method extends CBA with three additional processes: 'value linking', 'value acceleration' and 'job enrichment'. From a 'softer' perspective, a second group of methods can be identified that are more qualitative in nature. These methods focus on involving a wide number of stakeholders in the evaluation process in an
effort to facilitate informed judgement on the expected value of the information system. Multi-objective, multi-criteria methods provide one example of this approach, recognising that there are measures of worth other than monetary values and that even direct measures may have different value for different people [9]. Value Analysis, in a similar vein, emphasises 'better information' and 'better decision-making' as the primary benefits of information systems and seeks to explore the value added to the organisation by such improvements [34]. Lastly, experimental methods such as prototyping and simulation have also been proposed as useful means of evaluating information systems [13, 36]. Prototyping can yield real data on which to estimate a system's potential organisational impact at a relatively early stage of development. Simulation offers the potential to allow experiments to be run with alternative system configurations and for 'what-if' and sensitivity analyses to take place. Despite these claims, however, few studies appear to have addressed the issue of evaluation via simulation in an explicit manner.
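To make the financial end of this spectrum concrete, the short sketch below (in Python, with purely illustrative cash flows and a 10% discount rate that are not drawn from the paper) shows how a DCF-style appraisal reduces an investment proposal to a handful of summary figures such as NPV and a simple ROI.

```python
# Illustrative DCF-style appraisal; the figures below are assumed, not data
# from the paper. Year 0 holds the development cost (outflow), later years
# hold the estimated net benefits (inflows).
cash_flows = [-100_000, 30_000, 40_000, 45_000, 35_000]
discount_rate = 0.10

def net_present_value(flows, rate):
    """Discount each year's net cash flow to present value and sum."""
    return sum(flow / (1 + rate) ** year for year, flow in enumerate(flows))

def simple_roi(flows):
    """Undiscounted return on investment: net benefit over total cost."""
    cost = -sum(f for f in flows if f < 0)
    benefit = sum(f for f in flows if f > 0)
    return (benefit - cost) / cost

if __name__ == "__main__":
    print(f"NPV at {discount_rate:.0%}: {net_present_value(cash_flows, discount_rate):,.0f}")
    print(f"Simple ROI: {simple_roi(cash_flows):.1%}")
```

Everything the critique in the next section is concerned with, such as intangible benefits, shifting organisational goals and confounding factors, sits outside a calculation of this kind, which is precisely the limitation noted above.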
4. A Critique of IS Evaluation

It may be argued that the problem with many of these approaches to evaluation is that they focus more on processing the relevant data during the decision-making process than on generating the data that will drive evaluation. In other words, they focus on carrying out and managing the process of evaluation and not on the actual measurement of costs, benefits, risks and the like. The softer approaches described above are a clear reflection of the difficulty associated with measurement itself but, despite theoretical legitimacy, they are very rarely used in practice. This lack of use is reflected in empirical surveys, which have consistently shown that most companies are using variants of a small number of methods, notably financial and accounting techniques such as ROI and CBA [8, 12]. These findings indicate that the requirement of managers and decision-makers is for simple, general-purpose measures of value that are widely understood and allow information system investments to be treated in the same manner as other capital expenditure proposals [32]. These criteria are satisfied by all standard accounting and financial methods and it is therefore unsurprising that they are currently the 'natural choice' for information systems evaluation, since they are already in widespread use for evaluating other types of capital expenditure. To use financial methods effectively, however, it may be argued that ways are needed of generating reliable and 'objective' estimates of both the costs and benefits of information systems in relation to business performance. Without such data, over-reliance on such methods can lead to an excessively conservative information systems portfolio and an associated loss of competitiveness. Despite acknowledging the need for measurement in theory, however, researchers in the information systems field have characteristically avoided addressing it in practice [1]. This, as has been noted previously, may be rooted in the problems of measurement itself. Analyses of information systems development show that positivism is the dominant underlying philosophy of many approaches [18]. This tends toward promoting meaning as some form of correspondence relationship between an entity in the real world and its representation and, further, assumes that the entity in question exists independently of that representation [18]. Thus, in evaluative terms, there is an implicit danger of defining a construct in terms of cost or benefit and taking that to be reality instead of an 'interpretation' of reality. The position taken here is an interpretive one, asserting that, in social arenas such as business organisation, objectivity translates to a consensus understanding and agreement between the stakeholders concerned in an organisational situation [24]. The constructs of measurement are thus shared interpretations of agreed aspects of reality that subsequently serve as a vehicle for communication. In this respect their output provides the data points for a map that can, importantly, act as the blueprint upon which organisational action can be based [43]. The dynamic aspect of business organisation serves to complicate matters further, as it means that these interpreted aspects of reality are volatile and will change over time. This has the consequence of strengthening the view that measurement should be based against an assessment of where an organisation wants to be as opposed to where it was. Similarly, the map analogy raises awareness that there may be varying levels and units of analysis to which different concepts, frames of reference and evaluation criteria will apply (different 'scales' of map). In this respect, in a comprehensive review of existing research in information systems evaluation, Smithson and Hirschheim [36] identify both 'levels' and 'zones' of evaluation. Levels range from the macro-economic down to individual stakeholders and zones include efficiency, effectiveness and understanding. Significantly, the authors note that, whilst many evaluation methods appear well founded academically, there is a marked gap between theory and practice in the understanding zone. This supports the earlier assertion here that evaluation is seen as effort intensive, but the authors use it to note an industrial concentration on effectiveness as opposed to understanding. The advantage of including the latter perspective is that it provides a means for explicitly recognising that it is not
the singular information systems investment that is to provide returns but the more holistic integration of technology with people and process. This reflects current wisdom related to the ability of information systems to ‘transformate’ and ‘informate’ as well as ‘automate’ [33].
5. A Component-Based Perspective

Whilst evaluation is important in the context of existing approaches to systems development and operation, it may be argued to be particularly pertinent to new and emerging approaches. Software development and maintenance have for some while held the mantle of being the major cost drivers in systems development [3, 15]. Accordingly, since the articulation of the 'software crisis' in the late 1960s, the major motivations of development approaches, methods, techniques and tools have been those of cost reduction, cycle-time compression and increased system flexibility. This is witnessed, for example, in general approaches such as object-orientation, software reuse and rapid application development. In broad terms, the evolution of systems development thinking is moving away from the 'develop-from-scratch' mentality that has traditionally dominated, toward one that emphasises construction from reusable building blocks. This reflects the following points. Firstly, the diminishing economic value of developing large-scale systems from scratch [6, 17]. Secondly, the need to make information systems more responsive to the dynamic change of the business environment [30]. Component-based development is the latest in a line of approaches that promise to minimise development cost, compress cycle-time and improve flexibility. It is an approach that strongly espouses the 'building block' perspective, as the software element of component-based information systems is viewed as a dynamic composition of reusable, pre-tested components that can be upgraded independently [27, 39]. Composition is enabled through a software architecture that allows components to be removed, replaced and reconfigured in a dynamic fashion, which provides the primary mechanism for flexibility in the face of change [27]. Components, as constituents of this architecture, represent units of independent production, acquisition and deployment that interact to form a functioning system [39]. As independent units of production, different people can develop components at different times, in complete ignorance of each other. As independent units of acquisition and deployment, organisations potentially benefit from the reduced cost and risk associated with commercial off-the-shelf software. In ideal terms, an organisation thus selects the functionality it requires, purchases or builds the appropriate component(s) and 'composes' a system from them. Component-based development thus aims to provide a software environment (a) where reuse and interoperability are the rule, as opposed to the exception, and (b) that is extensible, scalable and thus more flexible in the face of changing business needs. The potential benefits that are offered by the approach are seductive and it may be argued that innovative developers and early-adopter partner organisations are following the pattern of making investment decisions that represent an 'act of faith' based on competitive imperative [23]. Evidence from studies of software reuse provides fuel for such acts of faith, reporting returns on investment as high as four to one after two years, consistent cycle-time reductions of more than thirty percent and a significant increase in competitive edge [4]. The implications of the approach are significant for the following reasons. Firstly, the ability of components to act as independent units of development, acquisition and deployment allows for a sharper divorce between 'producers' and 'consumers', which may be argued to have ramifications both for the structure of organisation and the structure of industry. Secondly, the emphasis on architecture indicates that significant investment in infrastructure is required to enable the widespread reuse of common assets. In this respect it is increasingly recognised that architecture has to blend different perspectives that are context dependent and often evolutionary in nature (see [21] for example perspectives). Thirdly, it is generally more expensive to develop reusable components in the short term and significant organisational and cultural barriers need to be addressed [4]. Fourthly, following evidence related to Enterprise Resource Planning (ERP) systems for example, there is likely to be significant effort and cost involved in acquisition and assembly [11]. Though non-exhaustive, this list is adequate to illustrate that both the producer-consumer nature of the component approach and the effort profile across the development lifecycle are subject to changes in emphasis.
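As an illustration only (not a reference to any particular component technology or to the authors' framework), the Python sketch below mimics the 'plug-and-play' composition described above: components from different producers conform to a shared interface and can be removed or replaced without change to the composing system. The interface and component names are hypothetical.

```python
from typing import Protocol

class PaymentComponent(Protocol):
    """Hypothetical contract that any payment component must satisfy."""
    def charge(self, amount: float) -> str: ...

class InHousePayment:
    """Component produced internally by one 'producer'."""
    def charge(self, amount: float) -> str:
        return f"in-house payment of {amount:.2f} processed"

class VendorPayment:
    """Commercial off-the-shelf component from a different producer."""
    def charge(self, amount: float) -> str:
        return f"vendor payment of {amount:.2f} processed"

class OrderSystem:
    """The composed system depends only on the interface, so components
    remain independent units of production, acquisition and deployment."""
    def __init__(self, payment: PaymentComponent) -> None:
        self.payment = payment

    def checkout(self, amount: float) -> str:
        return self.payment.charge(amount)

# 'Composition': the same system assembled from either component.
print(OrderSystem(InHousePayment()).checkout(99.0))
print(OrderSystem(VendorPayment()).checkout(99.0))
```

The point of the sketch is simply that the cost, benefit and risk questions raised above attach to the architecture and to the acquisition of such components, not merely to code written in-house.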
6. Toward a Component-Based Evaluation Framework

Drawing together the threads of the previous argument, it has been noted (a) that evaluation is a necessary ongoing process, (b) that it is traditionally divorced from development, (c) that it requires a plurality of perspectives and (d) that its output is objective only in that it provides a vehicle for organisational action. Having articulated the implications of component-based development, the aim here is to reconcile that development with an enlightened view of evaluation. The vehicle for doing this is the
construction of a framework whose understanding is based on content, context and process (CCP) analysis [31, 40]. From an evaluation perspective the elements of this analysis may be described as follows. Content focuses on understanding the elements of evaluation including selection criteria and values. Context focuses on understanding why the evaluation is to be carried out, who undertakes it and the situational aspects of social relations. Process concentrates on understanding how the evaluation is to be carried out, the frequency of evaluation and the dissemination of results. Smithson and Hirschheim [36] note that CCP analysis allows the use of non-standard evaluation tools such as stakeholder maps. Such maps may be argued to provide a good start point in terms of addressing the plurality of concerns noted earlier; though, from a component-based perspective, a more generalised view can be facilitated by relating stakeholders to emerging component-based roles. This is more in line with a multi-actor approach [16], whose ‘political’ nature would suggest that it is wise to normalise evaluative measurements to account for implicit and/or explicit political bias. Smithson and Hirschheim [36] also note that content analysis is likely to extend beyond a narrow conceptualisation of costs and benefits, linking to notions of risk and organisational strategy for example. This provides an explicit link to context and it is posited here that it is the people embedded in a given context who are best suited to interpret that context on an ongoing basis, though this is not to say that external interpretation does not have value. One means of extracting content from context is by attempting to identify the information needs that may be associated with the component-based roles, an approach that is based on others’ observation of the information intensive nature of evaluation [35]. Accounting for the earlier differentiation between production and consumption it is accepted that the information needs of these roles, and the roles themselves, will vary across contexts. The advantage of identifying information needs, however, is that potential exists to align one or more measurement criteria with them. Consequently, information needs become the primary means for generating evaluation data. While information needs may vary across component-based roles, one purpose of evaluation is that of interpreting and communicating results both within and across roles. In this respect it is argued that the roles that can be discerned are actually different universes of discourse [18]. Similar professional language, values and beliefs may be shared between individuals with common roles, but barriers of professional language, values and beliefs will often exist between roles. For example, whilst technical architects may be interested in the effectiveness of
subjective heuristics for architectural infrastructure design, senior managers may only be interested in the return on investment for that infrastructure. In converting the former to the latter, it is argued that simplification is unavoidable. Given the view herein that measurement is a shared interpretation of reality that serves as a basis for communication, and thus as a map for organisational action, this is acceptable if it increases awareness on the part of decision-makers (following [5]). It does, however, suggest a dual role for evaluation and its associated measures. Firstly, to provide a means of generating and interpreting data for the purpose of increased understanding. Secondly, to provide a feedback mechanism for the efficiency and effectiveness of the measures themselves. Both are required for organisational learning. It also suggests that tracking mechanisms are required to provide an audit of the process of communication across universes of discourse. An example of the broad framework that results from the CCP analysis to date is shown in Figure 1. This is conceptualised as a role-based 'web' of information needs that generate contextually filtered data, which is communicated via a shared understanding of the perspectives of different universes of discourse. The framework is flexible in that it is not prescriptive with regard to either roles or information needs; both can be driven by and evolve within the organisational context. This flexibility is purposeful and relates to the earlier evaluative critique, where it was noted that interpreted aspects of reality are volatile and will change over time. Dual aspects of the process of interpretation, the remaining part of the CCP analysis to be addressed, are intrinsically interwoven here. At the level of a given component-based role, the information needs represent a first-order interpretation of context. To address the contemporary perception that too much effort is required for evaluation, it is argued that these needs should, where possible, be generated and collated as an implicit part of day-to-day operations. This points strongly to automated support. At the collective level, where different universes of discourse meet, current practice would indicate that interpretation is represented by a series of static snapshots that are taken pre-, during and post-development. For both first- and second-order levels of interpretation, a process model can be identified by modifying a model developed by Ward et al. [41]. This relates to the evaluation of IS/IT benefits and the contribution made here is to broaden the focus of the model by relating it to the 'expectations' of a component-based role. Expectation may be argued to be a more valuable concept in this context as it has the ability to relate to cost, benefit, risk or any other perceived element of interest. In addition, it has the ability to relate to both sets of predictions that
provide a baseline for evaluation and changing organisational goals. In the latter respect, evaluation is
oriented toward understanding the gap between an existing situation and a desired one.
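Before turning to the figures, the following sketch suggests one possible (and deliberately simplistic) representation of the role-based 'web' of information needs pictured in Figure 1; the role names echo the figure, while the specific needs, metrics and filter keyword are assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class InformationNeed:
    """A need expressed by a role, aligned with one or more candidate metrics.
    The needs and metrics used below are illustrative, not prescribed."""
    description: str
    metrics: list[str] = field(default_factory=list)

@dataclass
class Role:
    name: str
    needs: list[InformationNeed] = field(default_factory=list)

# Hypothetical 'producer' role web, loosely echoing Figure 1.
role_web = [
    Role("Senior Management",
         [InformationNeed("Return on architectural investment",
                          ["ROI", "payback period"])]),
    Role("Architectural Design and Management",
         [InformationNeed("Effectiveness of design heuristics",
                          ["defect density", "change impact"])]),
    Role("Project Management",
         [InformationNeed("Cycle time against plan",
                          ["elapsed time per release"])]),
]

def contextual_filter(web, keyword):
    """Crude stand-in for the organisational-context 'filter': select only
    the needs relevant to a given concern."""
    return [(role.name, need.description)
            for role in web for need in role.needs
            if keyword.lower() in need.description.lower()]

print(contextual_filter(role_web, "investment"))
```

The important property, reflected in the framework's flexibility, is that roles, needs and metrics are data rather than fixed structure, so they can be driven by and evolve within the organisational context.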
Figure 1. Example Role Web for 'Producer' (figure: component-based roles such as Senior Management, Implementation Management, Project Management, Architectural Design and Management, and Domain Analysis and Management, linked through an information needs 'filter' within the organisational context, with a conversion and audit trail between universes of discourse)

The revised model is illustrated in Figure 2 and its elements may be briefly described as follows. Firstly, the expectations related to information needs are identified and suitable measures developed. Given the plethora of methods and metrics that currently exist, the emphasis here is on understanding which measures are appropriate and their contextual limitations. Secondly, both the means of evaluating whether expectations are met and the constraints on variation
have to be agreed both within and across universes of discourse. Thirdly, the operational aspects of achieving expectations have to be executed. Fourthly, the data that is generated has to be reviewed and evaluated via the agreed means. Lastly, and most prevalent in the context of ongoing evaluation, expectations will be modified in the light of the organisational learning that should result from evaluation.
Figure 2. Process Model of Expectation Management (figure: a cycle of identifying and structuring expectations, agreeing expectation realisation, achieving expectation realisation, evaluating and reviewing generated data, and modification of expectations)
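A minimal sketch of the expectation-management cycle of Figure 2, assuming it is reasonable to treat each element as a stage in an ongoing loop; the stage names follow the figure and the surrounding prose, while the code itself is illustrative rather than part of the proposed CASE tool.

```python
from enum import Enum, auto

class Stage(Enum):
    """Stages of the expectation-management cycle in Figure 2."""
    IDENTIFY = auto()   # identifying and structuring expectations
    AGREE = auto()      # agreeing expectation realisation
    ACHIEVE = auto()    # achieving expectation realisation
    EVALUATE = auto()   # evaluating and reviewing generated data
    MODIFY = auto()     # modification of expectations

# Evaluation is ongoing: modification feeds back into identification.
NEXT_STAGE = {
    Stage.IDENTIFY: Stage.AGREE,
    Stage.AGREE: Stage.ACHIEVE,
    Stage.ACHIEVE: Stage.EVALUATE,
    Stage.EVALUATE: Stage.MODIFY,
    Stage.MODIFY: Stage.IDENTIFY,
}

def walk_cycle(start, steps):
    """Trace the ongoing cycle for a number of steps."""
    stage = start
    for _ in range(steps):
        yield stage
        stage = NEXT_STAGE[stage]

print([stage.name for stage in walk_cycle(Stage.IDENTIFY, 6)])
```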
7. From Concept to Practice

The outlined conceptual framework represents the foundation of ongoing research. In moving from concept to improved practice, a thirty-six-month research programme has been instigated that is divided into the phases of grounded conceptual development, CASE tool development and empirical testing. Following the preceding discussion, the primary activities of the grounded conceptual development phase are to identify the key (a) roles in component-based development, (b) information needs of those roles, (c) metrics for the generation of evaluative data and (d) metric combinations for the generation of evaluative data applicable across domains of discourse. This demands a mix of primary and secondary research methods, the latter providing the initial role webs (see Figure 1) that can evolve via empirical study. Given that several evaluation methods and metrics exist, secondary research is more a case of distilling their essential aspects than reinventing the wheel. Primary research in this phase takes the form of lightweight case studies undertaken in four organisations currently involved in component-based development. The data extracted will then be generalised and verified/validated via an online questionnaire administered across a number of roles in twenty other organisations involved in component-based development. The aim of the second phase is to develop a CASE tool to dynamically generate the data necessary to allow interpretation in the context of Figure 2. This will be populated initially with the role-based sets of data generation metrics and cross-role metric combinations. It is important to note, however, that extensibility is a key aim of the tool and that roles, information needs, metrics and metric combinations can be developed and tailored in context. Given that one programme aim is that of reducing the perceived effort in evaluation, a degree of implicit data generation will be achieved via interfacing with other common CASE tools, such as those used in object-oriented modelling and project management. Evaluative capability will also be extended via the addition of simulation and 'what-if' style decision features, meta-evaluation of like-minded data sets over time and audit trails. Two organisations are involved in the development of the CASE tool and interaction between industry and academia will be achieved via an action research approach [7]. A variant of action research, akin to clinical fieldwork, also provides the means of achieving the final phase of the programme, which is the empirical testing and evaluation of the framework via its CASE tool representation. The primary activities in this phase are to (a) install, train and use the CASE tool, (b) evaluate tool results, (c) evaluate the grounded concepts and (d) revise the conceptual framework and/or CASE tool as necessary. The evaluation of the tool itself can be viewed from internal and external perspectives. The latter perspective is the more important in the interpretive sense and time will be spent establishing existing evaluation practice as a baseline for comparison. In keeping with the discussion herein, the form of evaluation will be CCP analysis. Lastly, in order to maximise the worth of empirical testing, clinical fieldwork will be achieved via two consulting organisations who will each act as a proxy for a further two end-user organisations. The set of primary activities is summarised in Table 2.
Stage: Grounded conceptual development
Primary Activities: (a) Identify key roles; (b) Identify key information needs; (c) Identify key metrics; (d) Identify key metric combinations
Industrial Participation: Initial development aided by 4 organisations involved in component-based development, widening to 20 for verification and validation

Stage: CASE tool development
Primary Activities: (a) Tool design; (b) Tool implementation
Industrial Participation: 2 organisations involved in CASE tool development

Stage: Empirical testing and evaluation
Primary Activities: (a) Establish current evaluation practice; (b) Install, train and use CASE tool; (c) Evaluate tool results (in part in the context of current practice); (d) Evaluate grounded concepts; (e) Revise concepts and/or tool implementation
Industrial Participation: 2 organisations involved in CASE tool development; 2 consulting organisations, each acting as a proxy for 2 end-user test sites
Table 2. Outline of Research Programme
8. Conclusion

This paper has articulated the foundations of an ongoing research programme into the evaluation of component-based development. Through a review and critique of common approaches to information systems evaluation, three major themes were highlighted. Firstly, that many existing approaches to evaluation concentrate on processing the relevant data during the decision-making process rather than on generating the data that will drive evaluation. Secondly, that there is a chasm between the theoretical development of evaluation methods and their empirical use. Thirdly, that the 'objective' nature of evaluation is questionable and that the process is one that should be seen as a means for generating pluralistic understanding. With these themes clear, the paper then placed the focus of 'evaluative' attention on component-based information systems development. The component approach can be viewed as the latest in a line of potential 'silver bullets' for the problems of software development and, as a consequence, holds significant potential in terms of both benefit and risk. Given that the approach is currently in the 'innovative' and/or 'early' stage of organisational adoption, however, an opportunity is open to embed an evaluation philosophy at an early stage of the lifecycle. In applying the lessons learnt from the critique of evaluation, a conceptual framework was constructed that is argued to be capable of generating the data necessary for interpretive evaluation. This was based on an articulation of the roles of the stakeholders involved in component-based system development and their associated information needs. Implicit within the framework is an ongoing first- and second-order evaluation process related to the 'expectations' of a component-based role, which aims to maximise individual and organisational learning. In moving from the first to the second order it was noted that a simplification of interpretation is unavoidable, as the differences in professional language, values and beliefs between roles have to be negotiated. It was proposed that this is acceptable if (a) it increases awareness on the part of decision-makers and (b) an audit trail of how simplifications were arrived at is held as part of the framework. Given that the framework represents the output of an early stage of research, roles and information needs are only loosely defined and no attempt has been made to prescribe any means of measurement. This is the subject of ongoing research and the position taken in respect of the means of measurement is that they must be treated as a 'toolkit' that can be tailored to organisational context. This applies, to a lesser extent, to both roles and information needs. In addition, the
reduction of evaluation effort is argued to be key to the organisational acceptance of ongoing evaluation and implicit CASE tool support provides one means of achieving this. Consequently, the articulated framework is currently the basis for negotiation with several industrial organisations that are interested either in improving their evaluative understanding and capability, or providing CASE tool support. The purpose of long-term action research is (a) to refine and empirically test the framework, (b) to empirically evolve a toolkit of general information needs and appropriate metrics, (c) to collect substantial empirical evaluative data on both existing and component-based development projects and (d) to improve the long-term evaluative capability of organisations.
Acknowledgements

The authors' sincere gratitude is extended to David Sprott of Butler Group for his insight related to role-based analysis and his efforts in helping set up the industrial participation in the research.
References

[1] Bacon, C. J. (1992). The Use of Decision Criteria in Selecting Information Systems/Information Technology Investments. MIS Quarterly (September), pp. 335-353.
[2] Ballentine, J., Galliers, R., and Stray, S. (1994). Information Systems/Technology Investment Decisions: The Use of Capital Investment Appraisal Techniques in Organisations. First European Conference on Information Technology Investment Evaluation, Henley on Thames, UK, 13-14 September.
[3] Bansler, J. P., and Havn, E. (1996). Industrialised Information Systems Development. CTI Working Paper No. 22, Center for Tele-Information, Technical University of Denmark, Lyngby.
[4] Basili, V. R., Briand, L. C., and Melo, W. L. (1996). How Reuse Influences Productivity in Object-Oriented Systems. Communications of the ACM, 39 (10), pp. 104-116.
[5] Baskerville, R. (1991). Risk Analysis: An Interpretive Feasibility Tool in Justifying Information Systems Security. European Journal of Information Systems, 1 (2), pp. 121-130.
[6] Baskerville, R., Travis, J., and Truex, D. (1992). Systems without Method: The Impact of New Technologies on Information Systems Development Projects. In The Impact of Computer Supported Technologies on Information Systems Development. Edited by K. Kendall, J. DeGross, and K. Lyytinen, Elsevier Science Publishers, B.V., North Holland Press, Amsterdam, pp. 195-213.
[7] Baskerville, R., and Wood-Harper, A. T. (1998). Diversity in Information Systems Action Research Methods. European Journal of Information Systems, 7 (2), pp. 90-107.
[8] Brown, A. (1994). Appraising Intangible Benefits from Information Technology Investment. First European Conference on Information Technology Investment Evaluation, Henley on Thames, UK, 13-14 September.
[9] Chandler, J. S. (1982). A Multiple Criteria Approach for Evaluating Information Systems. MIS Quarterly, 6 (1), pp. 61-74.
[10] Clemons, E. (1991). Evaluation of Strategic Investments in Information Technology. Communications of the ACM, 34 (1), pp. 22-36.
[11] Davenport, T. H. (1998). Putting the Enterprise into the Enterprise System. Harvard Business Review, 76 (4), pp. 121-131.
[12] Farbey, B., Land, F. F., and Targett, D. (1992). Evaluating Investments in IT. Journal of Information Technology, 7 (2), pp. 109-122.
[13] Farbey, B., Land, F. F., and Targett, D. (1993). How to Assess Your IT Investment: A Study of Methods and Practice, Butterworth Heinemann, Oxford.
[14] Farhoomand, A. F. (1987). Scientific Progress of Management Information Systems. Data Base, 18 (3), pp. 48-56.
[15] Fitzgerald, G. (1990). Achieving Flexible Information Systems: The Case for Improved Analysis. Journal of Information Technology, 5 (1), pp. 5-11.
[16] Gregory, A. J., and Jackson, M. C. (1992). Evaluation Methodologies: A System for Use. Journal of the Operational Research Society, 43 (1), pp. 19-28.
[17] Grimes, J., and Potel, M. (1995). Software is Headed Toward Object-Oriented Components. Computer, 28 (8), pp. 24-25.
[18] Hirschheim, R., Klein, H. K., and Lyytinen, K. (1995). Information Systems Development and Data Modeling: Conceptual and Philosophical Foundations, Cambridge University Press, Cambridge.
[19] Hirschheim, R., and Smithson, S. (1988). A Critical Analysis of Information Systems Evaluation. In Information Systems Assessment: Issues and Challenges. Edited by N. Bjorn-Andersen and G. B. Davis, North Holland, Amsterdam, pp. 17-37.
[20] Hochstrasser, B. (1993). Quality Engineering: A New Framework Applied to Justifying and Prioritising IT Investments. European Journal of Information Systems, 2 (3), pp. 211-223.
[21] Kruchten, P. B. (1995). The 4 + 1 View Model of Architecture. IEEE Software, 12 (6), pp. 42-50.
[22] Lincoln, T. (1988). Retrospective Appraisal of Information Technology using SESAME. In Information Systems Assessment: Issues and Challenges. Edited by N. Bjorn-Andersen and G. B. Davis, North-Holland.
[23] Lycett, M. (1999). The Development of Component-Based Evolutionary Information Systems. Unpublished PhD dissertation, Brunel University, London.
[24] Lycett, M., and Paul, R. J. (1999). Information Systems Development: A Perspective on the Challenge of Evolutionary Complexity. European Journal of Information Systems, 8 (2), pp. 127-135.
[25] Mahmood, M. A. (1993). Associating Organisational Strategic Performance with Information Technology Investment: An Exploratory Research. European Journal of Information Systems, 2 (3), pp. 185-200.
[26] McIlroy, M. D. (1969). Mass Produced Software Components. In Software Engineering: Concepts and Techniques. Edited by P. Naur, B. Randell, and J. N. Buxton, Mason/Charter Publishers, NY, pp. 138-150.
[27] Nierstrasz, O., and Dami, L. (1995). Component-Oriented Software Technology. In Object-Oriented Software Composition. Edited by O. Nierstrasz and D. Tsichritzis, Prentice-Hall, Englewood Cliffs, pp. 3-28.
[28] Nierstrasz, O., and Meijler, T. D. (1995). Research Directions in Software Composition. ACM Computing Surveys, 27 (2), pp. 262-264.
[29] Parker, M. M., Benson, R. J., and Trainor, H. E. (1988). Information Economics: Linking Business Performance to Information Technology, Prentice Hall, Englewood Cliffs, NJ.
[30] Paul, R. J. (1994). Why Users Cannot 'Get What They Want'. International Journal of Manufacturing Systems Design, 1 (4), pp. 389-394.
[31] Pettigrew, A. M. (1985). The Awakening Giant: Continuity and Change in ICI, Blackwell, Oxford.
[32] Powell, P. (1992). Information Technology Evaluation: Is it Different? Journal of the Operational Research Society, 43 (1), pp. 29-42.
[33] Remenyi, D., Money, A., and Twite, A. (1993). A Guide to Measuring and Managing IT Benefits, NCC Blackwell, Manchester.
[34] Rivard, E., and Kaiser, K. (1989). The Benefits of Quality IS. Datamation (January), pp. 53-58.
[35] Serafeimidis, V., and Smithson, S. (1994). Evaluation of IS/IT Investments: Understanding and Support. First European Conference on Information Technology Investment Evaluation, Henley on Thames, UK.
[36] Smithson, S., and Hirschheim, R. (1998). Analysing Information Systems Evaluation: Another Look at an Old Problem. European Journal of Information Systems, 7 (3), pp. 158-174.
[37] Strassman, P. (1985). Information Payoff: The Transformation of Work in the Electronic Age, Free Press, New York.
[38] Strassman, P. (1990). The Business Value of Computers, Information Economics Press, New Canaan, CT.
[39] Szyperski, C. (1998). Component Software - Beyond Object-Oriented Programming, Addison-Wesley, Harlow, Essex.
[40] Walsham, G. (1993). Interpreting Information Systems in Organisations, John Wiley and Sons, Chichester.
[41] Ward, J., Taylor, P., and Bond, P. (1996). Evaluation and Realisation of IS/IT Benefits: An Empirical Study of Current Practice. European Journal of Information Systems, 4 (4), pp. 214-225.
[42] Watson, R. T., and Brancheau, J. C. (1991). Key Issues in Information Systems Management: An International Perspective. Information and Management, 20, pp. 213-233.
[43] Weick, K. E. (1990). Cartographic Myths in Organisations. In Mapping Strategic Thought. Edited by A. S. Huff, John Wiley and Sons, Chichester, pp. 1-10.
[44] Weill, P., and Olson, M. (1989). Managing Investment in IT. MIS Quarterly, 13 (1), pp. 3-17.
[45] Willcocks, L. (1992). IT Evaluation: Managing the Catch-22. European Management Journal, 10 (2), pp. 220-229.