Allocation of function: scenarios, context and the economics of effort

Andy Dearden, Michael Harrison & Peter Wright
Dept. of Computer Science, University of York, York YO1 5DD, UK
Email: {andyd,mdh,pcw}@cs.york.ac.uk
Tel: (+44) 1904 433376  Fax: (+44) 1904 432767

Abstract

In this paper, we describe an approach to allocation of function that makes use of scenarios as its basic unit of analysis. Our use of scenarios is driven by a desire to ensure that allocation decisions are sensitive to the context in which the system will be used and by insights from economic utility theory. We use the scenarios to focus the attention of decision makers on the relative financial costs of developing automated support for the activities of the scenario, the relative impact of functions on the performance of the operator’s primary role and on the relative demands placed on an operator within the scenario. By focussing on relative demands and relative costs, our method seeks to allocate the operator’s limited resources to the most important and most productive tasks within the work system, and to direct the effort of the design organisation to the development of automated support for those functions that deliver the greatest benefit for the effective operation of the integrated human-machine system.


1 Introduction

In this paper, we describe an approach to allocation of function that makes use of scenarios as its basic unit of analysis. Our use of scenarios is driven by a desire to ensure that allocation decisions are sensitive to the context in which the system will be used and by insights from economic utility theory. We use the scenarios to focus the attention of decision makers on the relative financial costs of developing automated support for the activities of the scenario, the relative impact of functions on the performance of the operator’s primary role and on the relative demands placed on an operator within the scenario. By focussing on relative demands and relative costs, our method seeks to allocate the operator’s limited resources to the most important and most productive tasks within the work system, and to direct the effort of the design organisation to the development of automated support for those functions that deliver the greatest benefit for the effective operation of the integrated human-machine system.

1.1 Structure of this paper

In the next section, we seek to scope our discussion by defining the problem of ‘function allocation’ as we understand it. In section 3, we review previous work on allocation of function. In section 4, we argue that the practice of considering the allocation of individual functions in isolation from each other is unlikely to identify optimal allocations. This argument rests on results from recent studies of the impact of technology on work practice and from economic utility theory. In section 5, we propose an alternative method for allocation of function based around the use of scenarios. Our use of scenarios is intended to support designers in considering contextual issues in allocation decisions, and to provide a method that respects the insights of economic utility theory. In section 6, we illustrate the application of the method using a case study of allocation of function for a hypothetical future civil airliner. In section 7, we discuss the relation of the method to our own critique of allocation methods and to the significant critique previously presented by Fuld (1993, 1997). In section 8, we present our conclusions and discuss further work.

2 Defining the allocation problem

In this paper we define allocation of function by the following statement.


‘Allocation of function is an early stage of the design of a human-machine system. The input to allocation of function is a specification of the functions that the human-machine system must deliver. The output from allocation of function is a specification, at an appropriate level of abstraction, of the functionality of the automated subsystems that will be required. The goal of allocation of function is to design a system for which: the performance (including considerations of safety and reliability) is high; the tasks of the operator are achievable and appropriate to the operator’s role; and the development of the system is technically and economically feasible.’

Some of the terms of this definition require elaboration.

Functions

We take the system ‘functions’ to be activities that the integrated human-machine system is required to be capable of performing. The description of the system at a functional level should not assume any particular division of work between a human and a machine. Our interpretation of functions as device-independent inputs to the allocation process is consistent with the view propounded by Cook & Corbridge (1997). For instance, ‘monitor fuel levels’ would be an acceptable function required of the human-machine system for an aircraft, but ‘monitor fuel display’ would not. To keep the distinction between functions that are requirements of the whole human-machine system and human activities that are consequences of design decisions, we shall refer to work eventually assigned to a human operator as a set of ‘activities’ (our ‘activities’ are thus equivalent to ‘tasks’ in Cook & Corbridge (1997)). Some authors prefer to use the term ‘task’ rather than ‘function’ when discussing the inputs to allocation; when reviewing their work we retain their nomenclature.

Early stage of design

UK Ministry of Defence standards (MOD, 1989) place allocation explicitly as an element of preliminary design, prior to the detailed design of equipment and construction of prototypes. A proposal for allocation of function may be necessary when initially tendering for a large development contract, to demonstrate that the proposed concept is feasible. Goom (1996) points out that the ‘gestation period’ for a complex system can be up to 15 years, hence allocation decisions may need to be made more than 10 years before delivery of the final system.


An appropriate level of abstraction

Given the very early stage of design that we are considering, the specification output by the allocation of function process necessarily describes the requirements for the automation at a high level of abstraction. However, the specification must be sufficiently precise to allow a design team to compare and contrast: the relative difficulty of developing the proposed automated system; the relative demands upon the human that would follow from different allocation decisions; and the relative performance that might be achieved by a human-machine system based on alternative allocation decisions.

3 Existing approaches to allocation

In reviewing existing methods of allocation we shall focus on three specific aspects of the process: 1) the way in which alternative allocations are characterised; 2) the criteria that are advocated as relevant to the allocation decision; and 3) the procedures that are used to explore the space of possible allocations.

3.1 Characterising different allocation alternatives

In their extensive review of the function allocation literature, Older et al. (1997) draw a distinction between techniques that restrict the allocation alternatives for a function to either allocation to the human or allocation to the machine, and those techniques that permit an allocation in which the function is shared between the human and the machine. The early work of Fitts (1951) and allocation approaches using MABA-MABA (‘Men are better at …, Machines are better at …’) lists are typically restricted to considering either the human or the machine performing each individual function. In such schemes, the space of allocation alternatives can be viewed as a vector of binary values (human or machine), whose dimension is the number of functions identified for the system. Many recently developed techniques for dynamic allocation of functions [1] treat the space of alternative allocations in this way. Typically such techniques deal with a small number of functions, each of which can be decomposed into multiple recurring instances of specific tasks. For example, Rencken & Durrant-Whyte (1993) consider a system whose functions are to identify objects as either humans or robots and to track the movement of the objects within a room. These two functions are decomposed into multiple instances of three subfunctions: identifying new objects entering the room, tracking normal movement of the objects, and ‘re-capturing’ objects which have previously been tracked but where the system has lost the track as a result of the object making some unexpected manoeuvre. The computer is assumed to perform the tracking subfunction for all objects. The allocation problem is to assign the subfunctions of identifying or re-capturing individual objects to either the human or the machine.

[1] Dynamic allocation of functions occurs when the allocation of a function changes during the operation of the system. Dynamic allocation may be achieved by the operator delegating functions to an automated system (adaptable automation), or by the automated system changing its mode of operation based on monitoring of the operator and/or the environment (adaptive automation).

Since Fitts’ early work, it has become apparent that many functions in complex systems require sharing of the function between the human and the machine. Such sharing may take place in many different ways. Billings (1996) identifies 5 different levels of automation between totally manual operation by a human and fully autonomous operation by a machine. Sheridan & Verplanck (1978) offer a set of 9 intermediary levels that can occur in a supervisory control system. Goom (1996) observes that: “In modern defence systems, the majority of functions need both humans and machines to undertake complementary tasks” [p53]. Given this observation, any method that is restricted to considering the allocation of functions to either a human or to a machine will be inadequate. Indeed, a technique that does not discriminate between alternative ways in which a function might be shared is also unlikely to be useful. Allocation techniques developed more recently recognise the possibility of sharing a function and consider the different ways such a sharing could be configured. Ip et al. (1990) use ‘Task Allocation Charts’ to break down each function into a series of subtasks that can then be allocated to either a machine or a human. Task Allocation Charts can also describe allocation of elements between different human operators. Meister’s (1985) method uses a similar scheme, configuring a set of subtasks required to implement a function and then considering alternative ways these could be shared between a human and a machine. Some techniques offer generic decompositions of a function, allowing each element within the model to be allocated to either a human or a machine. For example, Shoval et al. (1993) use a three-stage model of perception, cognition and action for each function.
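To make the contrast concrete, the following minimal Python sketch (ours, not drawn from any of the cited methods) represents the two characterisations of the allocation space: a binary human-or-machine vector per function, versus a per-function level of automation. The function names and the size of the level scale are invented for illustration.

```python
from itertools import product

# Binary characterisation (Fitts-style): an allocation is a vector of
# human/machine choices, one per function, so k functions give 2^k
# candidate allocations. Function names are invented for illustration.
functions = ["monitor fuel levels", "navigate", "communicate"]
binary_allocations = list(product(["human", "machine"], repeat=len(functions)))
print(len(binary_allocations))  # 8 candidates for k = 3

# Shared characterisation: each function instead takes a level of
# automation between fully manual and fully autonomous (cf. Billings,
# 1996; Sheridan & Verplanck, 1978). The six-point scale is arbitrary.
LEVELS = range(6)  # 0 = fully manual ... 5 = fully autonomous
shared_allocations = list(product(LEVELS, repeat=len(functions)))
print(len(shared_allocations))  # 216 candidates: the space grows rapidly
```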


3.2 Selecting and combining criteria in allocation decisions

Having identified some alternative options for allocating functions, a method needs to offer some guidance as to the criteria that are relevant to the decision process. In the historical development of allocation methods we see a gradual enrichment of the set of criteria applied. Traditional MABA-MABA lists focus exclusively on the relative performance, with respect to each function, that may be achieved by either a human or a machine performing it. More recent work has identified multiple criteria that might be applied in allocation decisions. Meister (1985) suggests seven criteria that should be considered, although he does not claim that the list is exhaustive. Meister’s list covers: performance, reliability, number of personnel, workload, cost, maintainability and safety. In a review of techniques, Older et al. (1997) list seventeen different criteria that have been applied. Papantonopoulos & Salvendy (1998) provide a list of twenty-six criteria suggested by other authors.

Meister (1985) also provides a technique to support multi-criteria decision making in allocation decisions, based on the construction of a weighted linear preference function. One difficulty with combining a flat list of criteria using a weighted linear function is that this method of combination assumes that the criteria are independent (Vincke, 1992). The result of including closely related criteria such as maintainability and reliability in Meister’s method may be that these criteria are given too much weight within the decision. The Analytic Hierarchy Process (Saaty, 1980) is a multi-criteria decision aiding technique that seeks to overcome this difficulty by organising criteria into a hierarchy. Papantonopoulos & Salvendy (1998) propose a method for allocating cognitive tasks using the Analytic Hierarchy Process. Their hierarchical organisation of criteria considers: ‘operational requirements’, which are requirements that emanate from the operational environment in which a cognitive task must be performed (i.e. requirements that derive from the task context); and ‘performance requirements’, which are requirements in terms of the cognitive abilities needed to perform the task. Clegg et al. (1996) also organise the criteria that they recommend into a hierarchy that covers: technology characteristics (feasibility, cost, reliability, constraints that automation might impose on other aspects of the system); environmental characteristics (health & safety factors, data security, legal considerations, additional constraints that might rule out automation); task characteristics (criticality of the task, performance requirements, physical demands of the task, cognitive demands of the task, and overall workload demands of the job); and people characteristics (knowledge, skills and education required, individual differences).

More recent work has identified the need to consider the overall structure of the work that the human has been assigned. In the development of military systems Goom (1996) states: “One of the principle lessons that has emerged from the application of MANPRINT has been the need to identify the jobs of the users” [p.48]. In the method proposed by Clegg et al. (1996), the allocation of tasks is preceded by an initial selection of an ‘organisation of work’ which defines, amongst other elements, the various ‘roles’ that the humans will have within the work system. This understanding of the humans’ role is then a criterion to be considered when allocating tasks.
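Returning to Meister’s weighted linear preference function, a minimal sketch may help to make the independence problem concrete. The criterion names below come from Meister’s list quoted above; the scores and weights are invented for illustration.

```python
# Ratings (0-1) of one candidate allocation against Meister's (1985)
# seven criteria; the scores and weights are invented for illustration.
scores = {"performance": 0.8, "reliability": 0.7, "personnel": 0.5,
          "workload": 0.6, "cost": 0.4, "maintainability": 0.7, "safety": 0.9}
weights = {"performance": 0.25, "reliability": 0.15, "personnel": 0.05,
           "workload": 0.15, "cost": 0.10, "maintainability": 0.10, "safety": 0.20}

# The weighted linear preference function: a simple weighted sum.
preference = sum(weights[c] * scores[c] for c in scores)
print(round(preference, 3))  # 0.71 (higher is preferred)

# The independence problem (Vincke, 1992): 'reliability' and
# 'maintainability' are closely related, so their combined weight
# (0.15 + 0.10) acts roughly as one criterion weighted 0.25,
# over-weighting that concern relative to the rest.
```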

3.3 Searching the space of alternative allocations

Given a characterisation of the option space for allocation and a set of criteria for determining the quality of an allocation decision, a method requires some procedure for exploring a range of allocation possibilities in order to find ‘good’ or ‘optimal’ allocations. Two basic strategies are evident in the literature as surveyed by Older et al. (1997). We consider these two strategies below, and then discuss the problems associated with them.

3.3.1 Divide and conquer (the reductionist approach)

The majority of techniques for allocation during systems design address this problem by considering each function in turn and attempting to identify an allocation for that function that is in some sense ideal. Once a hypothetical allocation for each function has been generated, this hypothesis is examined to check whether it meets the design constraints. In the rest of this paper, we shall refer to this as the ‘reductionist’ approach, because it seeks to reduce the complexity of the global allocation decision by decomposing it into a set of (relatively) independent decisions identifying hypothetical allocations for each function.

One example of this type of procedure is described in detail by Pulliam & Price (1984). The procedure takes one function at a time and considers a set of alternative engineering concepts that might be possible to implement the function. Each of these concepts is then analysed by applying a series of qualitative questions to the design proposal, such as: ‘is automation mandatory?’, ‘if mandatory, is automation feasible?’, ‘is the function too broadly defined to allocate?’, ‘is automation technically preferable?’, ‘should control tasks be allocated to man to keep him occupied, interested and informed?’. This process generates a hypothetical allocation for each function. In the later stages of the procedure the set of hypothetical allocations is tested ‘deductively’ by asking the following questions about the design: 1) Does man meet core performance requirements? 2) Does man meet human performance requirements? 3) Are cost trade-offs acceptable? 4) Is the human factors structure adequate? 5) Is cognitive support adequate? 6) Is job satisfaction optimal?

Meister (1985) follows the same basic structure, considering each function in turn, generating design options for how that function could be distributed between humans and machines, and then applying multi-criteria decision techniques to trade off between the different options. When all the individual functions have been given a hypothetical allocation, the global design constraints are considered and any misfits identified lead to re-design of problematic functions. The same iterative pattern of hypothesising allocations for individual functions, collating these hypotheses, examining the emergent consequences and then iterating to correct difficulties, is also apparent in the methods outlined by Gramopadhye et al. (1992) and by Corbridge & Cook (1997).

Clegg et al. (1996) describe a procedure that is slightly less reductionist than some of those listed above. This is so for two reasons:

• Firstly, prior to the allocation of individual tasks, a basic organisational design concept is selected using a holistic overview of the entire system. This initial selection is then used to guide the decomposition of the system into tasks.

• Secondly, the method considers, for each task, two questions that might be used to relate the current task to other tasks in the system. The questions are: ‘Would the automation of this task constrain other choices?’ (this question is related to technology characteristics); and, under task characteristics, ‘Are there any aspects of the task that make it critical to the performance of the system?’

However, with these two qualifications, Clegg et al.’s method still falls within our definition of the reductionist approach, since it works by examining each task in turn then iterating if the initial allocation decisions do not satisfy global system constraints.

Of all the allocation techniques for early systems design reviewed by Older et al. (1997) or appearing in Fallon et al. (1997), only one short paper (Denley, 1989) considers allocating multiple functions concurrently. Denley’s approach, which he describes as a preliminary attempt at a method, allows the designers to group together proposed allocations for a set of functions as an ‘option scenario’. The designer can then choose between different ‘option scenarios’ for the set of functions in the same way as selecting between alternative allocations of a single function. Thus Denley’s approach, whilst ‘reductionist’, may not force the reduction to the same level of granularity as the methods listed above.
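The iterative pattern common to these methods can be caricatured in a few lines of Python. This skeleton is ours rather than any cited author’s, and the three callbacks stand in for whatever local selection, constraint-checking and re-design procedures a particular method prescribes.

```python
# A schematic of the reductionist search pattern: hypothesise a locally
# 'ideal' allocation for each function in isolation, then test the collated
# hypothesis against global constraints and iterate on any misfits.

def reductionist_allocate(functions, local_best, find_misfits, redesign,
                          max_iterations=10):
    # local_best(f): allocation hypothesised for f considered in isolation
    allocation = {f: local_best(f) for f in functions}
    for _ in range(max_iterations):
        misfits = find_misfits(allocation)  # functions violating global constraints
        if not misfits:
            return allocation               # all global constraints satisfied
        for f in misfits:
            allocation[f] = redesign(f, allocation)  # revise and try again
    raise RuntimeError("no allocation satisfied the global constraints")
```

Note that nothing in the loop guarantees convergence: each re-design can create new misfits elsewhere, which is one way of reading the ‘predictive weakness’ discussed in section 3.3.3.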

3.3.2 Global optimisation techniques

An alternative search strategy is to apply optimisation techniques to search for the best global allocation (i.e. the allocation of all the functions in the system). This approach is evident within work on dynamic allocation of functions. The strategy is applied either to a small set of functions, or to a large set of instances of the same basic function (or task). For example, Shoval et al. (1993) describe the development of a navigational aid for blind users that offers five possible modes of operation. Each mode involves a different global allocation of three tasks, ‘perception’, ‘cognition’ and ‘action’, between the human and the machine. Each mode is modelled as a path through a graph of points in a three-dimensional space (task, agent, control state space). The system monitors the performance of the user and the complexity of the environment and uses this information to modify weights associated with edges in the graph. The mode (i.e. global allocation) selected is determined by finding the optimal path through the graph.

In the system considered by Rencken & Durrant-Whyte (1993) there are, at any one time, a large set of functions to be performed, which may be drawn from one of two categories: either (a) identifying a new object or (b) re-capturing an existing object. Each of these functions may be allocated to a machine, allocated to a human, or left in the queue. The allocation algorithm considers the capacity available from the human and from the machine, the cost of starting the machine if it is not already switched on, the cost of failure to use the available capacity of the human or the machine, and the cost associated with failing to perform the function (i.e. leaving the function in the queue). The algorithm then searches for an allocation of all the functions in the system that minimises this global cost.
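The flavour of such a global search can be conveyed by a brute-force sketch loosely following the description of Rencken & Durrant-Whyte’s scheme above. The capacities, cost figures and start-up penalty are invented, and their real algorithm is more sophisticated than exhaustive enumeration.

```python
from itertools import product

pending = ["identify-1", "identify-2", "recapture-1"]   # functions awaiting allocation
CAPACITY = {"human": 2, "machine": 2}                   # functions each agent can take on
COST = {"human": 1.0, "machine": 1.5, "queue": 4.0}     # per-function cost of each choice
IDLE = 0.5        # cost of each unit of unused human/machine capacity
STARTUP = 0.8     # one-off cost of switching the machine on

def global_cost(assignment):
    load = {"human": 0, "machine": 0}
    cost = 0.0
    for agent in assignment:
        cost += COST[agent]                 # queued functions pay the failure cost
        if agent != "queue":
            load[agent] += 1
    if any(load[a] > CAPACITY[a] for a in load):
        return float("inf")                 # infeasible: agent over capacity
    if load["machine"] > 0:
        cost += STARTUP
    return cost + IDLE * sum(CAPACITY[a] - load[a] for a in load)

# Search all 3^n assignments of {human, machine, queue} for the lowest global cost.
best = min(product(COST, repeat=len(pending)), key=global_cost)
print(dict(zip(pending, best)), global_cost(best))
```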

3.3.3 Iteration versus optimisation

A basic problem with the reductionist approach is that it seems to require a number of iterations of the allocation process before a solution can be found that meets all of the known design constraints. Beevis & Essens (1997), reporting on a NATO workshop devoted to function allocation, touch on the question of why an iteration step is recommended by almost all authors who apply what we have termed reductionist approaches: “This may suggest predictive weakness in available function allocation techniques;…” Unfortunately, they immediately retreat from exploring this issue by continuing: “… more likely it reflects the many criteria that are involved in the decision” [p8, ibid.].

A natural question to ask is whether the global optimisation techniques used in dynamic function allocation (which avoid the iteration step) could be scaled to support the design of large complex systems involving many heterogeneous functions, and considering the wide range of criteria suggested by authors such as Clegg et al. (1996), Meister (1985) and Papantonopoulos & Salvendy (1998). Unfortunately, it is unlikely that such scaling would be successful. The reason for this is that the ‘cost’ or ‘quality’ functions used in generating the global optimisation require parameters to be set for each function. For example, in Rencken & Durrant-Whyte’s system, the number of functions (either identifications or re-captures) that the human can process per unit time is input as a parameter (and varied as a result of feedback on performance); the relative cost associated with failing to perform an ‘identify’ or a ‘re-capture’ function is also entered as a parameter. In Shoval et al.’s (1993) system the input parameters are concerned with the relative cost of transferring different types of information between the human and the machine. In a system involving a large number of heterogeneous functions, and considering multiple criteria associated with allocation decisions, a very large number of such parameters would have to be defined and set. The setting of these parameters would then determine the global allocation decision. It is not at all clear that such quantitative parameters could be set in any valid way. For example, such a method would require, for each design alternative suggested for each function:

• a validated, quantitative measure of the workload that should be associated with the human activities involved in the function;

• a validated, quantitative measure of the relative priority that should be assigned to the function at any time during systems operation;

• a validated, quantitative measure of the risks associated with the design option; etc.

Hence, attempting to scale the approach of global optimisation merely decomposes the problem to a set of parameter-setting operations, many of which seem to be impossible given our current state of knowledge. In the next section, we shall argue that the predictive weakness of reductionist allocation approaches, identified by Beevis & Essens (1997), may be a direct result of the attempt to analyse each function individually, rather than working with a perspective that aims to keep the whole system level in view.

4 Two fundamental flaws of reductionist allocation

In this section, we shall argue that previous techniques for allocation of function based on the principle of considering individual functions in isolation are fundamentally unsound. The problems that we shall identify provide a compelling causal explanation for the predictive weakness of allocation methods noted by Beevis & Essens (1997).

4.1 Reductionist allocation fails to consider context

In recent years, there has been a growing awareness of the importance of contextual factors in interactive systems design. This trend is evidenced by the current interest in alternative theoretical perspectives such as Activity Theory (Nardi, 1995), distributed cognition (Hutchins, 1995; Palmer et al., 1993), ethnomethodology and situated action (Suchman, 1987) and in new design approaches such as Contextual Design (Holtzblatt & Beyer, 1997) and Scenario-Based Design (Carroll, 1995; Kuutti, 1995). Proponents of contextual approaches highlight the complex dependencies and interactions between work activities and argue that traditional, task-based, approaches to interactive systems design may fail to take these interacting factors into account. In this section, we examine what elements of context might receive inadequate treatment from existing allocation methods, and examine how a scenario-based approach might overcome these limitations.

Fields et al. (1997) consider the work that may be necessary in an aircraft flight-deck following a bird-strike incident. At the start of the scenario the aircraft is flying low over water, using three engines (this is a normal context for the particular aircraft under consideration). When the bird strike occurs, engine fires are detected in both engines on one wing, leaving only one engine operating. In the formal description of the work there are defined emergency procedures for: shutting down the burning engines; extinguishing the fires once the engines are shut down; re-starting the non-working engine; and reconfiguring hydraulic systems that are pressurised by the engines. Other activities such as altering the aircraft trim and reducing drag are also required but are not so formally defined. In many modern aircraft, centralised warning systems can present electronic checklists to the pilot which describe the ‘standard operating procedure’ that should be used to deal with each problem. However, in analysing this scenario Wright et al. (1998) highlight the fact that although formal procedures are provided for each of the failures, an important aspect of the pilots’ work involves switching between the different procedures and prioritising some activities over others. Two points are apparent from this scenario:

• Firstly, the scenario identifies a number of tasks that must be performed by the pilot that are not directly associated with any of the original list of functions to be allocated. For example, the two pilots must co-ordinate their work, they must schedule their activities and they must prioritise various activities. These scheduling and prioritisation tasks are not part of any of the ‘functions’ at the system level, but must be taken into account in determining allocation decisions. The scenario shows that allocation is not a ‘zero-sum game’: changing the allocation of a function from the human to the machine does not necessarily result in a reduction in workload equivalent to the human work necessary for that function. Rather, re-allocating functions between the human and the machine ‘transforms the work’ (Dekker & Wright, 1997), introducing emergent activities. These emergent activities can only be identified by considering the set of functions that must be delivered concurrently within the scenario.

• Secondly, the scenario emphasises the fact that allocation decisions should take into account the role assigned to the operator. If allocation decisions are made on a function-by-function basis, there may be strong arguments for automating the function of shutting down the engine. The function is highly proceduralised and therefore easy to automate (automation is low cost); if a human has to perform the function it may impose high workload (automation will reduce workload); an automated system is less likely to make an error (automation would improve reliability) during execution of this safety-critical function (automation could improve safety). However, if we assume that the pilot has been given a role as the ultimate authority for safe and successful operation of the system, then he or she must be involved to some degree in the functions of extinguishing the fire and managing engine thrust. Fully automating the engine shut-down could risk the loss of the aircraft in this scenario. Hence, the allocation recommendation that is generated when the function is considered in isolation is recognised as a poor recommendation when the function is considered in the context of a specific scenario.

These two observations provide one explanation for the predictive weakness of allocation methods that are based on reduction of the system to the level of single functions. As Woods et al. (1995) conclude in discussing failures of complex human-machine systems: ‘the proper unit of analysis is not the device or the human. “Cause” should not be attributed to either the design or to the people. Rather, the proper unit of analysis is the distributed cognitive system…’ [p123]. Our second piece of evidence is drawn from a very different source, namely economic utility theory.

4.2 Reductionist allocation is economically unsound

4.2.1 Allocation and relative value

A major issue within the bird-strike scenario above is the relative priority and relative urgency of different tasks. For instance, some actions on the various task checklists are regarded as ‘tidying up’ actions that can be deferred until the immediate threats have been dealt with. Whilst all the functions identified for the system must be achieved (eventually and to some level of satisfaction), some functions require more immediate responses than others, and the quality of performance for some functions may be sacrificed to achieve higher quality of performance in others. The pilots’ work includes deciding priorities from moment to moment, and allocating their limited workload capacities to those functions that are most important or most urgent.

From the perspective of the design organisation an analogous problem must be solved. In designing the system, the organisation seeks to provide the best possible system for a competitive price. To achieve this objective the organisation’s financial resources must be directed at those developments that are expected to deliver the greatest benefit per unit cost. Thus, for the design organisation, the decision of whether or not to automate a function will depend not only on whether automation results in higher performance, but also on whether there are other functions that could be automated for the same price that give rise to greater performance benefits. In economic terms this is a question of the ‘opportunity cost’ of automating the function.

The following simple example illustrates why any method that treats functions in isolation is unlikely to generate sound recommendations for allocation. A system must be designed to deliver 3 functions, A, B & C. We assume a (very) simple model of the overall performance of the system as being measurable as a linear sum of some performance measure associated with each function. Any acceptable system must deliver all three functions with a minimum performance of 1 unit. The goal is to generate a system that has the highest possible performance per unit cost. Two design options are generated for each function, as shown in Table 1.

FUNCTION      Option 1 – highly automated      Option 2 – limited automated assistance

Function A    Cost: £2 million                 Cost: £1 million
              Performance: 3 units             Performance: 2 units

Function B    Cost: £2 million                 Cost: £1 million
              Performance: 3 units             Performance: 2 units

Function C    Cost: £6 million                 Cost: £4 million
              Performance: 2 units             Performance: 1 unit

Table 1: an example allocation problem

Considering function C in isolation, we might think that option 1 is superior to option 2 because the ratio of performance to price (in £ millions) is 2/6 = 1/3 for option 1, and only 1/4 for option 2. Conversely, considering either function A or B in isolation, option 1 offers a performance/price ratio of 3/2 vs. a ratio of 2/1 for option 2, hence we might expect that option 2 should be preferred. In fact, when the whole system is considered, the highest performance per unit price that can be obtained uses option 1 for functions A and B, and option 2 for function C. This combination provides a total performance of 3 + 3 + 1 = 7 units for a cost of £8 million. This result, however, can only be obtained by considering all the functions together and comparing the alternatives of spending money on the more expensive options for functions A and B, and spending less money on function C.
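Since the option space here is only 2^3 = 8 candidate systems, the whole-system comparison can be checked exhaustively. The following sketch uses exactly the costs and performances of Table 1.

```python
from itertools import product

# (cost in £ millions, performance in units) for options 1 and 2, from Table 1.
options = {"A": {1: (2, 3), 2: (1, 2)},
           "B": {1: (2, 3), 2: (1, 2)},
           "C": {1: (6, 2), 2: (4, 1)}}

best = None
for choice in product([1, 2], repeat=3):       # the 2^3 candidate systems
    picked = dict(zip("ABC", choice))
    cost = sum(options[f][o][0] for f, o in picked.items())
    perf = sum(options[f][o][1] for f, o in picked.items())
    if best is None or perf / cost > best[0]:
        best = (perf / cost, picked)

print(best)  # (0.875, {'A': 1, 'B': 1, 'C': 2}): option 1 for A and B, option 2 for C
```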

4.2.2 An economic model of a simple allocation decision

Generalising from the example, we construct the following simple model. A system is required to deliver a set of functions $f_i$, $i = 1 \ldots k$. For each function, two possible design options are generated. Option A for each function, denoted $f_{iA}$, delivers adequate performance $P_{iA}$ for a cost $C_{iA}$. A second option for each function, option B, denoted by $f_{iB}$, delivers a higher performance, $P_{iB}$, at a higher cost $C_{iB}$. The overall performance of the system is modelled as a weighted sum of some performance measure appropriate to each function, i.e.

\[
\mathrm{Performance}(Sys) = \sum_{\{i \,:\, \text{function } i \text{ uses option A}\}} w_i P_{iA} \;+\; \sum_{\{i \,:\, \text{function } i \text{ uses option B}\}} w_i P_{iB}
\]

The allocator must decide which option should be chosen for each function, given that the overall cost of the system will be given by:

\[
\mathrm{Cost}(Sys) = \sum_{\{i \,:\, \text{function } i \text{ uses option A}\}} C_{iA} \;+\; \sum_{\{i \,:\, \text{function } i \text{ uses option B}\}} C_{iB}
\]

Clearly the cheapest (and lowest performance) system that can be constructed uses option A for all functions, and the most expensive, highest performance system uses option B for all functions. However, the problem for the allocator is to find the system that provides the highest performance per unit cost. The performance per unit cost for a system is given by the equation:

\[
\frac{\mathrm{Performance}}{\mathrm{Price}} =
\frac{\sum_{\{i \,:\, \text{uses A}\}} w_i P_{iA} \;+\; \sum_{\{i \,:\, \text{uses B}\}} w_i P_{iB}}
     {\sum_{\{i \,:\, \text{uses A}\}} C_{iA} \;+\; \sum_{\{i \,:\, \text{uses B}\}} C_{iB}}
\]


Now consider the choice between two alternative systems that differ only with respect to the option chosen for a single function $f_j$, i.e. consider the question of which option (A or B) should be chosen for function $j$. Let:

\[
Sys_2 = \{ f_{iA} \mid i = 1 \ldots j-1 \} \cup \{ f_{iB} \mid i = j \ldots k \}
\]
\[
Sys_1 = \{ f_{iA} \mid i = 1 \ldots j \} \cup \{ f_{iB} \mid i = j+1 \ldots k \}
\]

i.e. system 2 uses the more expensive option B for function $j$. $Sys_2$ provides a better performance to cost ratio than $Sys_1$ if and only if:

\[
\frac{\sum_{i=1}^{j-1} w_i P_{iA} + \sum_{i=j}^{k} w_i P_{iB}}
     {\sum_{i=1}^{j-1} C_{iA} + \sum_{i=j}^{k} C_{iB}}
\;>\;
\frac{\sum_{i=1}^{j} w_i P_{iA} + \sum_{i=j+1}^{k} w_i P_{iB}}
     {\sum_{i=1}^{j} C_{iA} + \sum_{i=j+1}^{k} C_{iB}}
\]

This is equivalent to:

\[
\frac{w_j P_{jB} - w_j P_{jA}}{C_{jB} - C_{jA}}
\;>\;
\frac{\sum_{i=1}^{j} w_i P_{iA} + \sum_{i=j+1}^{k} w_i P_{iB}}
     {\sum_{i=1}^{j} C_{iA} + \sum_{i=j+1}^{k} C_{iB}}
\]

That is, choosing the more expensive option B for function $j$ improves the system’s performance per unit cost exactly when the marginal performance gained per unit of additional cost exceeds the performance to cost ratio of the system that retains option A for function $j$.
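A minimal sketch of this model, assuming the marginal condition derived above: given weights and the option A/B performances and costs for each function, it computes a system’s performance per unit cost and tests whether upgrading a single function $j$ from option A to option B improves it. The numbers reuse the Table 1 example with unit weights; the helper name is ours.

```python
def perf_per_cost(choices, w, PA, CA, PB, CB):
    # choices[i] is "A" or "B" for function i
    perf = sum(w[i] * (PA[i] if c == "A" else PB[i]) for i, c in enumerate(choices))
    cost = sum(CA[i] if c == "A" else CB[i] for i, c in enumerate(choices))
    return perf / cost

w = [1.0, 1.0, 1.0]                  # unit weights, as in the Table 1 example
PA, CA = [2, 2, 1], [1, 1, 4]        # option A: adequate performance, lower cost
PB, CB = [3, 3, 2], [2, 2, 6]        # option B: higher performance, higher cost

# Upgrading function j from A to B pays off exactly when the marginal gain
# per unit of extra cost beats the ratio of the system that keeps option A.
j, choices = 2, ["B", "B", "A"]
marginal = w[j] * (PB[j] - PA[j]) / (CB[j] - CA[j])   # 0.5
current = perf_per_cost(choices, w, PA, CA, PB, CB)   # 7/8 = 0.875
print(marginal > current)  # False: keep option A for function C
```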