A Domain-Specific Software Architecture for a Class of Intelligent Patient Monitoring Agents¹

Barbara Hayes-Roth
Computer Science Department
Stanford University
March 1994

1This research was sponsored by NASA contract NAG 2-581 under ARPA Order 6822 and by Teknowledge Corporation, Contract 71715-1 under ARPA contract DAAA21-92-C-0028. Serdar Uckun provided input and comments on earlier drafts of the paper. The Guardian project has benefited from the contributions of many other present and past project members, including David Ash, Vlad Dabija, John Drakopoulos, Jan Eric Larsson, Richard Washington, Michael Hewett, Rattikorn Hewett, Lee Brownston, and Alex Macalalad, and from the contributions of present and past medical collaborators, including David Gaba, Juliana Barr, Hane Chien, Ida Sim, Garry Gold, and Adam Seiver.


Abstract

We present a domain-specific software architecture (DSSA) that supports development of a variety of intelligent patient monitoring (IPM) agents through component reuse and reconfiguration. Specifically, the IPM DSSA comprises: (a) a reference architecture that supports the shared functional requirements of IPM agents and provides a framework for configuring diverse application-specific sets of components; (b) principles for decomposing IPM expertise into highly reusable components, along with a growing library of such components; and (c) an interactive application configuration tool that helps a user to select application-relevant components from the library and automatically configures selected components within the architecture. We demonstrate the efficacy of the IPM DSSA with results from the Guardian project, a series of experimental agents for monitoring intensive care patients.


1. Introduction

The need for improved methods of software engineering is widely recognized. One prominent idea is to synthesize new application systems by configuring appropriate sets of reusable software components [Biggerstaff, 1991; Biggerstaff and Perlis, 1989; Biggerstaff and Richter, 1987; Boehm and Scherlis, 1992; Freeman, 1987; IEEE Transactions on Software Engineering, 1994; Tracz, 1988; 1991; 1992]. Despite its appeal, the current "theory" of application synthesis from reusable software components is incomplete. As Will Tracz wryly observes: "In order to reuse software, there needs to be software to reuse" [Tracz, 1988], but in fact, most existing software components are "more or less re-useless" [Tracz, 1992].

While challenging, this state of affairs is not surprising and should not be discouraging. It is unrealistic to expect software components designed specifically for particular applications to have, coincidentally, the features necessary for reusability. To realize the promise of reusable software, we need to engineer highly reusable software components from the start. Toward this end, domain-specific software architectures (DSSAs) [Tracz, Coglianese, and Young, 1993; Hayes-Roth, 1994; Mettala, 1990; Mettala and Graham, 1992] have been advanced as a methodology for factoring large software systems into components that have high reuse potential within a particular application domain.

We have been developing a DSSA for a class of intelligent patient monitoring (IPM) agents: agents for monitoring medical patients in different monitoring contexts (e.g., the intensive care unit (ICU) or operating room (OR)) and with a variety of specific histories. Following the general DSSA paradigm, the IPM DSSA has three major elements.
First, an IPM reference architecture embodies both an appropriate computational foundation for the class of IPM applications and a congenial framework for compile-time and run-time configuration of different application-specific sets of components. Second, a library of reusable components of IPM expertise provides a set of building blocks that can be selected and configured within the architecture to create a variety of IPM agents. Third, an application configuration tool assists an IPM application builder in selecting components from the library and configuring them within the architecture in order to meet particular application requirements. Sections 2-4 below describe these elements of the IPM DSSA in more detail.

The IPM DSSA supports synthesis and modification of IPM agents through configuration of reusable components throughout the application life-cycle:
• new IPM application development through the selection and automatic configuration of application-relevant components from the library;
• tailoring of IPM applications, for example, to the specific history of the monitored patient or to the preferred monitoring strategies of attending clinicians;


• reconfiguration of IPM applications in light of changed run-time circumstances, such as changes in the patient's condition, in the availability of monitoring devices, or in the availability of new software components;
• multi-person evolutionary development of complex IPM applications.

Section 5 below demonstrates these capabilities in our experiments with a series of demonstration agents for monitoring intensive care patients, collectively referred to as Guardian.² Section 6 discusses conclusions and continuing research.

2. The IPM Reference Architecture

2.1 Overview of the Architecture

[Figure 1 (diagram): two stacked levels, a COGNITIVE LEVEL and a PHYSICAL LEVEL, each containing a current plan, a set of behaviors, an information base and world model, and a meta-controller that selects the executed behavior(s). The physical level sends perception and plan-execution feedback up to the cognitive level; the cognitive level sends plans down to the physical level; the physical level interacts directly with the ENVIRONMENT.]

Figure 1. Reference Architecture for Intelligent Patient Monitoring Systems.

2Although we do not discuss them in this paper, we also have been experimenting with our DSSA in other domains, including intelligent monitoring of materials processing [Pardee, Schaff, and Hayes-Roth, 1991] and semiconductor manufacturing [Murdock and Hayes-Roth, 1991], and intelligent control of autonomous office robots performing surveillance and delivery jobs [Hayes-Roth et al, 1993].


The proposed two-level reference architecture (shown in Figure 1) supports concurrent cognitive and physical behaviors. Physical behaviors include perception and action in the external environment, specifically perception of the monitored patient's condition (via sensors) and perception of clinicians' communications, along with action to set closed-loop control parameters (e.g., device settings) and action to communicate with clinicians. Cognitive behaviors include a variety of reasoning activities such as condition monitoring, fault detection, diagnosis, planning, and explanation. As shown in Figure 1, the physical level sends perceived information and feedback from action execution to the cognitive level, while the cognitive level sends control plans to the physical level.

The two architectural levels share an underlying "dynamic control model" of their own operations. At the cognitive level, the model is implemented as the BB1 blackboard architecture [Garvey, et al, 1986; Hayes-Roth, 1985; 1990]. At the physical level, the model is implemented in a simpler, but analogous form. We have discussed this model, its implementation in BB1, and its use in Guardian in detail in other publications [Hayes-Roth, 1985, 1990; Hayes-Roth, et al, 1992]. The remainder of this section characterizes important features of the architecture in its role as a DSSA reference architecture.

2.2 Behaviors

Behaviors embody the potential application of particular methods to particular tasks in particular contexts. For example, one cognitive behavior might apply a deductive method to plan a sequence of therapeutic actions in order to correct a monitored patient's low blood pressure. One physical behavior might apply a graphical display method to present the explanation of a particular diagnosis to the clinician. (In BB1, behaviors are implemented as knowledge sources or sets of knowledge sources.)
Each behavior has a set of triggering conditions that can be satisfied by particular events, each signifying changes to the working memory or "blackboard," as it is called in BB1 and throughout this paper. Different events may result from perceptual inputs or previously executed behaviors. For example, the deductive planning method mentioned above might be triggered whenever any problem is diagnosed. The graphical display method for explaining diagnoses would be triggered whenever a clinician requested explanation of a diagnosis. When an event satisfies a behavior's triggering conditions, the behavior is enabled and its parameters bound to variable values from the triggering situation. A given behavior will be enabled and, therefore, executable, whenever events satisfying its triggering conditions occur—regardless of whether the behavior is the best available behavior or even useful in achieving the agent's current goals. Conversely, at each point in time, many competing behaviors will be enabled and the agent must choose among them to control its own goal-directed behavior. (In BB1, the set of enabled behaviors is called the agenda.)
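The event-driven enabling cycle just described can be sketched as follows. This is a minimal illustration, not the BB1 implementation; all names, fields, and the event encoding are assumptions introduced for exposition:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass(frozen=True)
class Event:
    """A change to the blackboard, e.g. a newly posted diagnosis."""
    kind: str
    data: dict

@dataclass
class Behavior:
    """A (task, method) pairing with a triggering condition over events."""
    name: str
    task: str
    trigger: Callable[[Event], bool]        # enabling condition
    bindings: dict = field(default_factory=dict)

def update_agenda(agenda: list, behaviors: list, event: Event) -> None:
    """Enable every behavior whose trigger the event satisfies, binding
    its parameters to values from the triggering situation."""
    for b in behaviors:
        if b.trigger(event):
            # BB1 calls this set of enabled behaviors the agenda.
            agenda.append(Behavior(b.name, b.task, b.trigger, dict(event.data)))

# Example: a planning behavior triggered by any diagnosed problem.
plan_treatment = Behavior("deductive-planner", "plan-treatment",
                          trigger=lambda e: e.kind == "problem-diagnosed")
agenda: list = []
update_agenda(agenda, [plan_treatment],
              Event("problem-diagnosed", {"fault": "hypovolemia"}))
assert [b.name for b in agenda] == ["deductive-planner"]
```

Note that enabling is unconditional on usefulness, exactly as the text says: the trigger fires whether or not the behavior serves the agent's current goals; choosing among enabled behaviors is left to the meta-controller.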


To support these control decisions, each behavior has an interface that describes the kinds of events that enable it, the variables to be bound in its enabling context, the task it performs, the type of method it applies, its required resources (e.g., computation, perceptual data, effectors), its execution properties (e.g., speed, complexity, use of resources), and its result properties (e.g., certainty, precision, completeness).

2.3 Control Plans

Control plans describe an IPM agent's intended cognitive or physical behavior as a temporal pattern of activities, each of which comprises a start condition, a stop condition, and an intended activity in the form: (task, parameters, constraints). For example, here is a partial control plan for the cognitive behavior of an IPM system that has detected a patient's suddenly high blood pressure:

(Diagnose high blood pressure, ({BP, t}, f), ({speed = prompt}))
(Plan treatment for underlying fault, (f, p), ({speed = prompt}))
(Monitor treatment plan, (p, o), ({vigilance = high}))

The agent intends to diagnose the patient's blood pressure at time t, using its fastest applicable method. Then it will plan a treatment for the underlying fault, f, again using its fastest applicable method. Finally, it will monitor execution of the treatment plan, p, again using its most vigilant method. As this example illustrates, task parameters can be specified as variables that are bound to values as a result of behaviors performed under the current plan step or earlier plan steps.

In general, within the IPM reference architecture, control plans are data structures that the agent generates and modifies through cognitive behavior. Note that plans do not refer explicitly to any particular behavioral method in the agent's repertoire. Unlike a simple list of machine instructions or program subroutines, they are not directly executable.
Instead, plans only describe intended behaviors in terms of the desired tasks, parameter values, and constraints on the performance of tasks. Thus, at each point in time, an agent has a plan of intended action, which implicitly allows a set of acceptable behaviors and for which its currently enabled behaviors may be more or less appropriate. As shown below, this approach produces control plans that are both flexible and robust, not only under a variety of run-time conditions, but also in their ability to control the behavior of a variety of agents having different configurations of specific behavioral components. (The architecture's representation and use of plans to control behavior are discussed in detail in [Hayes-Roth, 1993; Hayes-Roth, et al, 1993].)

2.4 Meta-Controllers

A meta-controller follows an active control plan by executing, at each point in time, the enabled behavior that best matches the plan, namely the single behavior that: (a) performs the currently planned task with the specified parameterization; and (b) has an


interface description that satisfies the specified constraints better than any other enabled behaviors that meet condition (a). When multiple constraints are specified in the control plan (e.g., that a method be both fast and explainable), a user-specified integration function differentially weights their contributions to the ratings of competing behaviors. (In BB1, the meta-controller is called the scheduler.)

For example, the first step in the cognitive control plan above is to diagnose the patient's high blood pressure promptly. Assume that the agent has two diagnosis methods, a fast case-based method that is enabled only for familiar problems and a slow model-based method that is enabled for any observed problem. If high blood pressure is a familiar problem to the agent, both methods will be enabled and the agent will execute the faster case-based method. However, if sudden high blood pressure is unfamiliar to the agent, only the model-based method will be enabled and the agent will execute it. In both cases, the agent will ignore other enabled behaviors that are irrelevant to the diagnosis task. The agent will make similar choices among competing enabled behaviors for each step of its cognitive and physical control plans. Thus, an agent continuously improvises its specific course of behavior, following intended plans as well as possible, given the behaviors that happen to be enabled along the way.

2.5 Knowledge Base / Working Memory

The diverse cognitive and physical behaviors an IPM agent performs interact with one another via changes they make to its working memory or knowledge base, which are jointly called the "blackboard" in BB1. All executed behaviors, including perceptual and cognitive behaviors, produce changes to the contents of the blackboard, which in turn may enable other potential behaviors by satisfying their preconditions.
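Sections 2.2-2.5 together describe one recognize-act loop: events enable behaviors, the meta-controller executes the best plan-match, and execution posts new events. A compressed sketch of the selection step, in which the interface fields, the weighted-sum integration function, and all weights are illustrative assumptions rather than the BB1 scheduler's actual rating scheme:

```python
def meta_control_cycle(agenda, plan_step, weights):
    """Return the enabled behavior that performs the planned task and best
    satisfies the plan's constraints, rated by a weighted integration of
    its advertised interface properties."""
    task, constraints = plan_step
    candidates = [b for b in agenda if b["task"] == task]
    if not candidates:
        return None
    return max(candidates,
               key=lambda b: sum(weights[c] * b["interface"][c]
                                 for c in constraints))

agenda = [
    {"name": "case-based-dx",  "task": "diagnose",
     "interface": {"speed": 0.9, "explainability": 0.4}},
    {"name": "model-based-dx", "task": "diagnose",
     "interface": {"speed": 0.3, "explainability": 0.8}},
    {"name": "explain-ui",     "task": "explain",   # irrelevant to this step
     "interface": {"speed": 0.5, "explainability": 0.9}},
]
# Plan step: diagnose promptly -> weight speed heavily.
step = ("diagnose", ["speed", "explainability"])
weights = {"speed": 0.8, "explainability": 0.2}
chosen = meta_control_cycle(agenda, step, weights)
assert chosen["name"] == "case-based-dx"   # the fast familiar-case method wins
```

If only the model-based behavior were enabled (an unfamiliar problem), it would win by default, mirroring the example in the text.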
The blackboard is represented as an integrated conceptual graph that includes all of the agent's declarative knowledge (including both factual knowledge and descriptions of potential behaviors) and a temporally organized account of its run-time perception, reasoning, and action results. It provides a skeletal conceptual graph to which type hierarchies of application-relevant task, method, and domain concepts (discussed in section 3 below) can be attached at compile time and accessed at run time. The blackboard representation is further specified with an IPM shared ontology that defines the basic types of domain objects and relations of interest to the several tasks typically performed by IPM agents. These too can be instantiated for a particular application at compile time and accessed at run time. (Figure 2 shows a small excerpt from the IPM shared ontology.)

At run time, each executed behavior makes new instances of object types defined in the shared ontology or makes changes to the attributes or relations among previously created instances. Depending on the task performed by the behavior, these changes are recorded in the temporally organized working memory as occurrences (e.g., by a


condition tracking task), expectations (e.g., by a prediction task), or intentions (e.g., by a planning task) regarding the patient's condition. In patient monitoring, consistencies and inconsistencies between corresponding entities on different timelines are of particular interest. For example, during a particular time interval an IPM agent may observe a particular sign, slowly rising blood pressure, that contradicts a previously generated expectation, quickly rising blood pressure, but is not too inconsistent with its goal, stable blood pressure. This set of relations, also recognized as events, may trigger a variety of potential behaviors, including alternative methods for performing each of the following tasks: to modify the expectations, to diagnose the source of the faulty expectation, to verify the measurements underlying the observation, or to change current therapy plans for lowering blood pressure.
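The timeline comparison just described can be sketched as follows. This is a minimal illustration; the episode fields and the tolerance threshold are assumptions, not the Guardian representation:

```python
from dataclasses import dataclass

@dataclass
class Episode:
    """A value for one parameter over a time interval, on one timeline."""
    timeline: str          # "observed", "expected", or "intended"
    parm: str
    trend: float           # e.g. rate of change of blood pressure
    t0: float
    t1: float

def mismatch(a: Episode, b: Episode, tol: float = 0.5) -> bool:
    """Corresponding episodes (same parameter, overlapping interval)
    mismatch when their trends differ by more than a tolerance."""
    overlap = a.t0 < b.t1 and b.t0 < a.t1
    return a.parm == b.parm and overlap and abs(a.trend - b.trend) > tol

observed = Episode("observed", "BP", trend=+0.2, t0=0, t1=10)  # slowly rising
expected = Episode("expected", "BP", trend=+2.0, t0=0, t1=10)  # quickly rising
intended = Episode("intended", "BP", trend=0.0,  t0=0, t1=10)  # stable (goal)

assert mismatch(observed, expected)        # O::E mismatch -> revise or diagnose
assert not mismatch(observed, intended)    # O::I roughly consistent
```

The detected O::E mismatch is itself an event on the blackboard, so it can trigger whichever of the revision, diagnosis, verification, or replanning behaviors happen to be configured.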

[Figure 2 (diagram): on the left, abstract tasks coordinated by the ontology: recognize pattern of parameter values / predict parameter values for pattern; recognize sign suggested by pattern / predict sign-confirming patterns; diagnose fault causing sign / predict effects of fault; plan therapy to correct fault or sign / predict effect of therapy on fault or sign. On the right, the corresponding ontology excerpt: Parm contributes to Pattern; Pattern suggests Sign; Fault causes Sign; Therapy corrects Fault or Sign.]

Figure 2. Excerpt from the IPM shared ontology.

The IPM shared ontology is a key mechanism for coordinating the performance of complementary tasks and especially for readily accommodating alternative methods for performing those tasks [Hayes-Roth et al, 1986b; 1986c]. For example, as discussed in section 3 below, IPM agents perform tasks such as pattern recognition, sign interpretation, diagnosis, and treatment planning. To accommodate these tasks and their interactions (via their shared interest in particular conceptual entities), the IPM shared ontology defines the types of objects they take as inputs and outputs and the potential relations among them (e.g., pattern suggests sign, fault causes sign). Figure 2 shows a small excerpt of the current IPM shared ontology. All of the concepts (parameter, pattern, sign, fault, therapy) and relations (contributes to, suggests, corrects, causes) and their attributes (e.g., pattern attributes include name and possibility; the cause relation is


transitive) are specified in the IPM type hierarchy. Thus, the IPM shared ontology provides a standard interface for interoperation among the application-specific components configured within an IPM agent: any method for performing any task can interoperate appropriately with any method for performing a complementary task, so long as all methods use the IPM shared ontology.

2.6 Global Properties of the Dynamic Control Model

As demonstrated in previous publications, our reference architecture's underlying dynamic control model provides the considerable flexibility required of a large class of IPM agents. An agent can have in its knowledge base many different behavioral methods for performing diverse cognitive and physical tasks. It can plan particular sequences of particular methods for performing particular tasks in order to achieve goals. But it also can make more abstract plans that constrain its behavior without specifically determining it. In either case, the agent's meta-controllers will choose to execute whichever enabled behaviors best match its current plans. Because control plans for the IPM agent's own behavior also are represented as data structures in the blackboard, an agent can develop and modify them dynamically and by means of whatever control planning methods are enabled in its run-time situation. This control model provides extreme flexibility and robustness in the agent's ability to adapt its goal-directed behavior to a variety of run-time conditions.

Of primary interest in the present paper, our reference architecture's underlying control model also provides a congenial framework in which appropriate sets of reusable cognitive and physical components can be configured at both compile time and run time. Whatever components are configured within the architecture will automatically be enabled by relevant run-time events.
Because control plans describe, rather than name, intended behaviors, they can be used to choose the best available behaviors from whatever set happens to be configured at compile time and enabled at run time.

3. The Library of Reusable Components of IPM Expertise

3.1 Principles of Orthogonal Knowledge Decomposition

To provide a highly reusable set of software components for configuration within our reference architecture, we provide a framework for decomposing domain expertise along the three orthogonal dimensions shown in Figure 3. The presumption is that each of the three components produced by decomposition of a given competency may be reused in combination with alternative sub-components on each of the other two dimensions. (See also [Hayes-Roth, et al, 1986b, 1986c].)

Tasks are classes of jobs an IPM agent might perform, defined by their abstract input/output specifications, independent of method and domain. For example, the task of diagnosis takes as input a patient condition and produces as output a hypothesized cause of that condition. Diagnosis can be performed by a number of different methods and in


many specific medical (and other) domains. Tasks may be further specified by imposing resource limitations (e.g., time limits) or performance requirements (e.g., precision, reliability). For example, an agent may need to perform a diagnosis task very quickly, but not necessarily need to identify the most specific diagnosis.

[Figure 3 (diagram): three orthogonal dimensions of domain expertise. Task = I/O specifications, resource parameters, performance parameters (e.g., track condition, detect fault, diagnose fault, plan therapy, monitor therapy, explain reasoning, explain condition, summarize interval). Method = operations and strategies, resource requirements, performance properties (e.g., model-based, case-based, associative). Subject domain = ontology, semantics, factual knowledge, metric knowledge (e.g., anesthesia: cardiac, pulmonary; critical care: cardiac).]

Figure 3. Definition of Orthogonal Sub-Components of Domain Expertise, with Illustrative Examples of Each.

Methods are classes of computational approaches an agent exploits for a variety of cognitive or physical tasks, independent of domain. They are defined in terms of sets of abstract component operations, each of which may be enabled by run-time events, along with abstract strategies for selecting and sequencing enabled operations at run time in order to achieve goals. For example, model-based reasoning and case-based reasoning are two different cognitive methods an IPM agent might apply to diagnosis, therapy planning, or other tasks. Case-based diagnosis might, for example, comprise abstract operations such as "find a similar case" and abstract strategies such as "find the n most similar cases, then map the n cases onto the present situation, then ..." Methods for a given task may differ in their resource requirements (e.g., real time, computation time, sensor utilization, domain knowledge), run-time properties (e.g., interruptibility, intermediate results, incremental solution improvement), or their characteristic results (e.g., precision, reliability, qualitative contents of conclusions). As a result, different methods that are equivalent in their logical applicability to an abstract task may be more appropriate for different task instances. For example, case-based reasoning may be more


appropriate than model-based reasoning for a diagnosis task that has a hard deadline or for patient populations for which the IPM agent has a good distribution of relevant cases.

Subject domains comprise the different kinds of knowledge (e.g., ontology, facts, relations) an IPM agent might have regarding its monitored patient, including, for example, knowledge of particular organ systems, disease conditions, therapy protocols, etc. Representing this knowledge declaratively and organizing it in terms of the IPM shared ontology allows it to be used to support various methods for various tasks and situations. For example, an IPM agent can use a standard model of the cardiovascular system for monitoring the cardiovascular condition of any ICU or OR patient.

Our three-dimensional decomposition of domain expertise combines the complementary decompositions practiced in software engineering and knowledge engineering. Software engineers typically distinguish a software module's interface (combining our task and subject domain sub-components) and its implementation (our method sub-components). Knowledge engineers typically distinguish a module's knowledge (our subject domain sub-components) and its inference engine (combining our task and method sub-components). Each of these two-way decompositions promotes reuse; combining them expands opportunities for reuse. As mentioned above, orthogonal decomposition opens the possibility of reusing any sub-component along one dimension in combination with alternative sub-components from each of the other two dimensions, producing a potentially large number of distinctive competencies.

3.2 The IPM Component Library

3.2.1 IPM Task Components

Table 1 characterizes the IPM library, including currently implemented tasks and some envisioned tasks.


Table 1. Cognitive and Physical Tasks Implemented or Envisioned for the IPM Library.

Cognitive Tasks                 Input                              Output
Recognize pattern               New obs patient data + history     Clinically important pattern(s)
Interpret sign                  New possible pattern               Classify: (ab)normal sign
Predict sign                    New sign, diagnosis, plan          New expected sign(s)
Diagnose abnormal sign          New abnormal sign                  Hypothesized cause
Plan treatment                  New diagnosis                      New intended treatment plan
Implement treatment             New treatment plan                 Physical commands
Focus attention                 New cognitive activity             Perception commands
Detect mismatch                 New obs, exp, or int sign          Mismatch? O::E, O::I, E::I
Validate mismatch               New mismatch                       Erroneous O or E?
Revise erroneous observation    Erroneous observation              Corrected observation
Revise erroneous expectation    Erroneous expectation              Corrected expectation
Revise faulty plan              (Valid) O::I or E::I mismatch      Corrected int plan + exp
Explain condition               New request: Explain condition     Content & phys commands
Explain reasoning               New request: Explain reasoning     Content & phys commands
Explain monitoring strategy     New request: Explain strategy      Content & phys commands
Summarize patient status        New request: Patient status        Content & phys commands

Physical Tasks                  Input                              Output
Perceive patient data           Monitored parameter values         Selected interpretations
Manage device controls          New device control command         Implement new command
Perceive clinician input        New clinician input                Interpretation
Present explanation or advice   Content & phys commands            Execute commands on content
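Each cognitive task above is an abstract I/O specification that any number of methods may implement against declarative subject-domain knowledge, per the three-way decomposition of section 3.1. A minimal sketch with hypothetical names and toy, non-clinical knowledge:

```python
from typing import Protocol

class DiagnosisMethod(Protocol):
    """Task dimension: diagnose maps a condition to a hypothesized cause."""
    def diagnose(self, condition: str, domain: dict) -> str: ...

class CaseBased:
    """Method dimension: fast lookup, applicable only to familiar problems."""
    def diagnose(self, condition, domain):
        return domain["cases"].get(condition, "unknown")

class ModelBased:
    """Method dimension: slower, derives causes from a causal model."""
    def diagnose(self, condition, domain):
        causes = [f for f, signs in domain["model"].items() if condition in signs]
        return causes[0] if causes else "unknown"

# Subject-domain dimension: declarative knowledge usable by either method.
# (Contents are purely illustrative, not medical knowledge.)
cardio = {
    "cases": {"high BP": "vasoconstriction"},
    "model": {"vasoconstriction": ["high BP"], "hypovolemia": ["low BP"]},
}
assert CaseBased().diagnose("high BP", cardio) == "vasoconstriction"
assert ModelBased().diagnose("low BP", cardio) == "hypovolemia"
```

Because the task signature and the domain knowledge are held fixed, either method can be swapped in without touching the other two dimensions, which is the reuse the orthogonal decomposition is after.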

3.2.2 IPM Method Components

As summarized in Table 2, the IPM library contains at least one domain-independent method component for performing each of the implemented cognitive and physical tasks characterized in Table 1. Some of these methods were designed and implemented by members of our research group, while others were developed by and imported from other groups. We plan to develop or import similar methods for the envisioned tasks in Table 1.

Table 2 actually lists software modules, each of which packages together one or more method components; in the latter case, the components embody similar methods for performing complementary tasks. For example, the ReAct module packages together quick, associative methods for several tasks: interpretation, diagnosis, prediction, planning, etc. This packaging reflects the module creators' interest in completely addressing a set of related tasks with a consistent method of reasoning. Typically, an application builder will make one decision regarding whether or not to incorporate the entire set of such methods in an IPM agent, so this packaging is convenient. On the other hand, the application builder cannot independently select arbitrary methods from within a module and, to that degree, we have not yet succeeded in achieving that particular benefit of the orthogonal decomposition prescribed in section 3.1 above.

Even when methods are packaged within a module, however, different methods for different tasks are implemented by different sets of BB1 knowledge sources, which are triggered independently and interact with one another only via the information they read or post on the blackboard using the IPM shared ontology. Therefore, even when packaged up for loading as a single module, methods for performing different tasks can be invoked independently of one another at run time, and they can interact with other components in addition to those with which they are packaged.
Thus, for example, ReAct's diagnosis method produces hypothesized faults that trigger its own treatment planning method and all other treatment planning methods known to the agent. Conversely, ReAct's treatment planning method is triggered by hypothesized faults posted by its own diagnosis method and by all other diagnosis methods known to the agent. Given this style of implementation, it is straightforward to decouple the methods within a component in order to achieve both the compile-time and run-time independence prescribed in our knowledge decomposition framework.
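The blackboard-mediated decoupling just described might be sketched like this. The module and event names are hypothetical; the point is only that a posted fault triggers every configured treatment planner, regardless of which module each planner was packaged in:

```python
# Methods interact only through what they post on the blackboard, so a
# diagnosis method in one module triggers planners from any other module.
blackboard: list = []
subscribers: dict = {}   # event kind -> methods triggered by that kind

def post(kind, data):
    """Record a blackboard change and trigger every subscribed method."""
    blackboard.append((kind, data))
    for method in subscribers.get(kind, []):
        method(data)

plans = []
def react_planner(fault):   # packaged with the (hypothetical) ReAct module
    plans.append(("react-plan", fault))
def other_planner(fault):   # packaged in a different module
    plans.append(("other-plan", fault))

subscribers["fault-hypothesized"] = [react_planner, other_planner]

# A diagnosis method posts a hypothesized fault; both planners fire.
post("fault-hypothesized", "hypovolemia")
assert plans == [("react-plan", "hypovolemia"),
                 ("other-plan", "hypovolemia")]
```

Neither planner names the diagnoser, and the diagnoser names neither planner; the shared ontology's event vocabulary is the only coupling between them.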


Table 2. Software Modules Containing One or More Method Components Implemented for the IPM Library.

Method Component Modules   Tasks Performed
Focus v1-3                 Selectively perceive
Device controller          Control device parameters
FPR, tFPR                  Recognize sign
Mismatch Detector          Detect mismatch: O::E, O::I, E::I
TLM v1-3                   Manage temporal episodes: O, E, I
CSFS                       Focus attention
Glovie, ReAct v1-2         Interpret sign, Diagnose sign, Predict, Plan treatment, Implement treatment, Focus attention, Explain reasoning
PCT                        Diagnose sign, Focus attention
MFM                        Interpret sign, Diagnose sign, Predict condition, Explain condition
ICE v1-2                   Interpret sign, Diagnose sign, Predict condition, Explain condition
SPI                        Plan treatment, Predict condition, Implement treatment, Focus attention
RTP                        Plan treatment, Predict condition, Implement treatment, Focus attention
TPI                        Implement treatment plan
Special-purpose GUIs       Perceive clinician input, Present information to clinician


Note also that many of the modules in Table 2 contain methods for performing tasks that also can be performed by means of other methods contained in other modules. For example, the ReAct, PCT, MFM, and ICE modules all contain methods for performing diagnosis. However, they differ in their resource requirements (e.g., domain knowledge, patient data, computation time, real time) and in the quality of their results (e.g., certainty, precision, multiple-fault conditions, explainability).

One of the strengths of our IPM DSSA is that it permits an application builder to choose among alternative methods for performing a required task based on application-specific resource availability and response requirements. One of the strengths of our IPM reference architecture is that it permits multiple methods for a given task to be configured within an IPM agent that faces variable resource availability and response requirements and permits the agent to make run-time decisions about which method to apply. Thus, we view alternative methods for a given task as complementary resources that can be exploited at compile time in the construction of an agent or at run time in the behavior of the agent. In the future, we plan to develop additional alternative methods for the IPM library in order to give application developers greater flexibility in tailoring their choices of methods to the resources and requirements of specific applications.

Method components contained in the modules listed in Table 2 are described briefly below.

Focus (versions 1-3) [Washington and Hayes-Roth, 1989] operates at the physical level. It selectively perceives, interprets, and passes to the cognitive level values of continuously sensed patient data variables (e.g., pulse, breathing rate, peak inspiratory pressure) and intermittently sensed variables (e.g., lab tests, x-ray interpretation).
Focus monitors the IPM agent's dynamic rate of processing recent perceptual inputs at the cognitive level in order to continuously adapt the global data rate at which it sends new perceptual inputs. Thus, it protects an agent's cognitive processing from perceptual overload, while maintaining its maximum feasible level of vigilance to changes in the patient's condition. Focus also accepts asynchronous instructions from the cognitive level regarding how it should interpret data on individual parameters (e.g., value classes, thresholds, running averages) and regarding the context-specific relevance of individual parameters to the agent's current cognitive activities. By distributing its global data rate among variables in proportion to their absolute importance and context-specific relevance, Focus allows an IPM agent to attend to the most useful currently available data, while maintaining some level of awareness of all potentially interesting data. (Versions 1-3 of Focus embody successive improvements in the power and efficiency of its selective perception and interpretation of patient data.)

Device controller operates at the physical level. It manages device parameters by simply setting a specified control parameter (e.g., breathing rate) to a setting specified in asynchronously arriving commands from the IPM agent's cognitive level. In the future,


we plan to expand this capability to include feedback control of variables to maintain specified set points [Vina and Hayes-Roth, 1990] and perhaps execution of rules or simple conditional plans.

tFPR (and its predecessor FPR) [Drakopoulos and Hayes-Roth, 1993] embodies a temporal fuzzy pattern recognition method for the cognitive task, recognize sign. Given a knowledge base of clinically interesting data patterns, declared as combinations of static values, ranges, or rates of change on a set of parameters, tFPR calculates the "possibility" that each pattern exists in the world at each point in time, given the data that have been observed so far. (The earlier FPR embodied the same method restricted to static data snapshots.) tFPR allows fuzzy boundaries between neighboring patterns, such as normal heart rate vs. tachycardia. tFPR also allows the specification of contextual information in patterns in order to estimate pattern possibilities based on factors such as postoperative state and patient history. For example, the definition of "normal cardiac output" or "normal heart rate" will differ among different patient populations and even during the normal postoperative course of the same patient, and patterns may be declared to take such contextual information into account.

Mismatch detector embodies a method to detect discrepancies between corresponding episodes on the observed, expected, and intended timelines. Discrepancies between corresponding episodes on the observed and expected timelines indicate either a perceptual (sensor, measurement, or interpretation) error or an error in generating expectations regarding the patient's condition. Discrepancies between corresponding episodes on the observed and intended timelines indicate either a perceptual error or that the agent is failing to achieve currently planned therapeutic objectives.
Discrepancies between corresponding episodes on the expected and intended timelines indicate either an error in generating expectations regarding the patient's condition or that the agent will fail to achieve planned therapeutic objectives.

TLM (timeline manager, versions 1-3) creates and evaluates new episodes on the observed, expected, and intended timelines of a deductive temporal database. The observed timeline records the history of parameter values, sign evaluations, diagnostic hypotheses, and confirmations of executed actions. The expected timeline records expectations regarding: (a) the persistence of observed conditions where appropriate; and (b) the time courses of parameter and sign values generated using simulation, rule-based, pattern-based, or other techniques. The intended timeline records therapeutic plans and the desired time course for parameter and sign values, for example for a stable patient or following a given treatment protocol. (Versions 1-3 embody successive improvements in the expressive power and efficiency of the timeline manager.)

CSFS (Context-Specific Fault Selector) [Dabija and Hayes-Roth, 1994] determines which faults from the complete known set warrant prepared reactive responses during particular run-time contexts. The CSFS method is based on two observations: (a) that a


resource-bounded IPM agent cannot be prepared to react effectively to every possible contingency; and (b) that each potential fault condition for a given patient has different criticality in different monitoring contexts, such as different phases of the postoperative course. The method formalizes context-dependent criticality for a fault in terms of its relative likelihood, its consequences if untreated, the side effects of available treatments, and the response time required for effective treatment. An agent applies the CSFS method in anticipation of particular monitoring contexts, selecting the most critical context-specific faults for which to prepare reactive responses.

ReAct (versions 1-2, and its predecessor, Glovie) [Ash, Gold, Seiver, and Hayes-Roth, 1993; Ash and Hayes-Roth, 1993] embodies a decision-theoretic method for diagnosis and therapy planning. Unlike other decision-theoretic approaches, it organizes diagnostic knowledge in an action-based hierarchy of disease classes. Higher-order nodes represent large classes of diseases that have similar symptoms and are amenable to similar nonspecific treatments. Leaf nodes represent specific disease categories that are amenable to specific treatments. Given an observed abnormal sign, ReAct attempts to refine its current best hypothesis by performing tests that distinguish among its children. When a deadline occurs, ReAct plans to perform the therapeutic actions that cover all diagnoses in the set corresponding to its current best hypotheses and posts expectations about the results of executing those actions. Thus, it provides the "anytime" property [Dean and Boddy, 1989] of improving its diagnosis with more time while being prepared to recommend a positive-value therapy at any time. (ReAct version 1 is restricted to a single fault hierarchy defined at compile time. It combines sign recognition, diagnosis, and treatment planning in a single task.
ReAct version 2 works with multiple fault hierarchies defined at run time and decouples sign recognition, diagnosis, and treatment planning into separate tasks. Glovie, which preceded ReAct version 1, is a rudimentary treatment of the basic approach, hard-wired to a small set of specific faults and symptoms.)

PCT (Parsimonious Covering Theory) [Peng and Reggia, 1990] embodies an associative method for diagnosis in which bipartite graphs connect signs to diseases. Given prior probabilities of specified diseases, causal strengths of links to specified signs, and truth information on the signs, PCT calculates relative likelihood scores and ranks any number of multiple-disorder hypotheses. The strength of the PCT method lies in its wide coverage of the problem domain and in its ability to generate multiple-disorder hypotheses. The weakness of the method (as we have implemented it) lies in its limitation to bipartite graphs and, therefore, its inability to compute probabilities for intermediate nodes representing pathophysiological states [Pearl, 1988].

MFM (Multi-level Flow Model) [Lind, 1990; Larsson, 1992] embodies a model-based method for diagnosis and prediction. Its causal diagrams are organized around part-whole models of structure (anatomy) and behavior (physiology) and means-ends models of the functions achieved by those structures and behaviors (i.e., goals representing desired


physiological dynamic conditions, and faults representing pathophysiological conditions). MFM is faster than any of the other diagnostic or predictive methods and can localize and explain the manifestations of a fault within an intuitive physical model of the patient.

ICE (versions 1-2) [Hewett and Hayes-Roth, 1990] embodies another model-based method for diagnosis, prediction, and explanation of observed or hypothesized faults. Starting with structure-function models similar to those of MFM, ICE more completely annotates those models with topological, geometric, part-whole, causal, and other relations, but not the goal-oriented means-ends relations of MFM. It also exploits a set of similarly represented abstract models of generic systems that can occur at a number of specific loci within a number of different organ systems (e.g., flow systems, diffusion systems, delivery systems), along with causal models of their potential faults. By instantiating generic models within particular reasoning contexts, ICE can diagnose, predict, and explain a large number of specific faults from qualitative first principles. For example, ICE can use its generic flow model to reason about the flow of gases in the pulmonary system or in the ventilator, the flow of blood in the cardiac system, or the flow of nutrients in the digestive system. (ICE version 1 has only a limited knowledge base of generic models and can apply only one type of model (but with potentially many instantiations) in a given causal chain. ICE version 2 has an expanded range of generic models and can instantiate multiple types of models within a single causal chain.)

SPI (Skeletal Plan Instantiator) generates patient-specific treatment plans from skeletal plans.
Following [Friedland and Iwasaki, 1985], it selects the appropriate skeletal plan for a diagnosed fault; instantiates variables such as drug dosage and time course; operationalizes the plan based on patient context (e.g., history, possible drug interactions, risks, allergies, pre-existing conditions, and previous treatment attempts); and posts intentions and expectations regarding the effects of executing the instantiated plan.

RTP (Real-Time Planner) [Washington, 1994] is a forward-search-based, multi-level planning method for generating and modifying both therapy plans and expectations regarding the effects of executing those plans under dynamically changing perceived patient conditions. Although it is computationally complex, RTP has the traditional strength of weak methods: broad applicability without specialized knowledge. In addition, it readily adapts to changes in the patient's condition (or in exogenous factors, such as the availability of certain treatment options), and it permits initiation of plan execution before plan construction is complete when necessary.

TPI (Plan Implementer) steps through an established treatment plan. It sends action commands to the physical level under appropriate real-time or contextual conditions,


monitors the expected effects in perceptual observations, and, if necessary, determines the need for replanning.

3.3.3 IPM Domain Components

We have implemented domain components for six organ systems: cardiovascular, pulmonary, renal, neurological, hematologic, and metabolic/endocrine. Given our currently implemented task and method components, we need the following kinds of knowledge about each organ system:
• relevant parameters, their measurements, and their value ranges;
• clinically interesting signs and their corresponding patterns of parameter values;
• faults (i.e., diseases and complications) that can occur;
• possible therapies (i.e., relevant treatment plans and actions);
• structure, behavior, and function of the organ system.
At this time, we have implemented all five kinds of knowledge only for the pulmonary and cardiovascular systems. For the other systems, we have implemented only knowledge of relevant parameters, clinically interesting signs, and possible diseases. Overall, the IPM library currently contains descriptions of approximately 120 parameters, 200 signs and patterns, 70 diseases and complications, and 100 actions (monitoring, diagnostic, and therapeutic). The cardiovascular and pulmonary knowledge components, which we have been working on the longest, are more detailed and more complete than the others. The cardiovascular components include most hemodynamic parameters, ECG interpretations, physical exam findings, most signs associated with these parameters, and approximately 20 disease descriptions pertaining to the postoperative care of cardiac surgery patients. The pulmonary components contain arterial blood gas results, ventilator settings, various respiratory parameters, chest tube output, physical and radiological findings, most related signs, and approximately 20 diseases, including complications related to breathing circuit problems.

4. An Interactive Application Configuration Tool

We are developing an interactive application configuration tool to automatically build IPM agents. As illustrated in Figure 4, the IPM DSSA enables us to configure a variety of IPM agents, each of which instantiates a different, application-relevant subset of the available cognitive and physical task, method, and subject domain components within an instance of the reference architecture. We also can reconfigure agents at run time by adding or removing components individually. With our current tool, both compile-time and run-time configuration require manual selection of the desired components by the user, with automatic loading and configuration of the selected components within the architecture. Many IPM agents will configure an application-specific set of required tasks, optimal methods for performing each of those tasks under application-specific conditions, and the subject domain knowledge required to apply each of those methods. In cases where a given task must be performed under variable circumstances, suitable alternative methods and their required domain knowledge can be configured and then selectively enabled and executed by the agent at run time. In either case, if the configuration of components within an IPM agent is conceptually complete (that is, it includes the required subject domain knowledge to apply one or more methods to an application-sufficient set of specified tasks), the agent runs immediately and makes appropriate use of available components, depending on run-time conditions.
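For illustration, the conceptual-completeness test just described can be sketched as follows. This is a Python sketch, not the actual configuration tool; the component names and knowledge kinds are hypothetical stand-ins for IPM library contents.

```python
# A Python sketch of the "conceptually complete" test (component names and
# knowledge kinds are hypothetical stand-ins for IPM library contents).

def conceptually_complete(required_tasks, methods, loaded_knowledge):
    """A configuration is conceptually complete if every required task has
    at least one configured method whose domain-knowledge needs are loaded."""
    return all(
        any(m["task"] == t and m["needs"] <= loaded_knowledge for m in methods)
        for t in required_tasks
    )

methods = [
    {"name": "tFPR",     "task": "recognize-sign", "needs": {"signs"}},
    {"name": "ReAct v2", "task": "diagnose",       "needs": {"signs", "faults"}},
]
loaded = {"parameters", "signs", "faults"}   # e.g., pulmonary components
assert conceptually_complete({"recognize-sign", "diagnose"}, methods, loaded)
assert not conceptually_complete({"plan-therapy"}, methods, loaded)
```

A check of this kind lets the tool tell the user which tasks still lack a runnable method before the agent is launched.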

[Figure 4 occupies this area. Its panels depict the IPM Reference Architecture (cognitive and physical levels, each with an information base and world model, current plan, behaviors, and a meta-controller, exchanging perception and plan execution feedback with the environment), the IPM Component Library (cognitive and physical task, method, and domain components), the Interactive Application Configuration Tool, and a variety of individual IPM agents configured from the library.]
Figure 4. Building a variety of IPM agents by configuring application-relevant components within the IPM reference architecture.

We plan to automate the process of selecting application-relevant components as well, by allowing users to describe the IPM application in more general terms, for example by specifying ICU versus OR monitoring and supplying the patient history. An intelligent configuration tool could then determine what kinds of monitoring tasks are required, under what kinds of constraints the agent must perform those tasks, what sorts of methods it must have to perform them under those constraints, and, finally, what kinds of domain knowledge will be required to perform the required tasks with the available methods. By instantiating the appropriate IPM "schema," the configuration tool can present an initial prototype IPM configuration to the user for approval or modification.
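The proposed selection chain (application description to required tasks, tasks to candidate methods, methods to required domain knowledge) can be sketched as a simple derivation. The schema and mapping contents below are invented placeholders, not the actual IPM schemas.

```python
# A Python sketch of the proposed selection chain: application description
# -> required tasks -> candidate methods -> required domain knowledge.
# The schema and mapping contents are invented placeholders.

SCHEMAS = {
    "ICU": ["recognize-sign", "diagnose", "plan-therapy"],
    "OR":  ["recognize-sign", "diagnose"],
}
METHODS_FOR = {
    "recognize-sign": ["tFPR"],
    "diagnose":       ["ReAct v2", "PCT"],
    "plan-therapy":   ["SPI"],
}
KNOWLEDGE_FOR = {"tFPR": ["signs"], "ReAct v2": ["faults"],
                 "PCT": ["faults"], "SPI": ["therapies"]}

def propose_configuration(setting):
    """Derive a prototype configuration from a high-level setting."""
    tasks = SCHEMAS[setting]
    methods = [m for t in tasks for m in METHODS_FOR[t]]
    domains = sorted({k for m in methods for k in KNOWLEDGE_FOR[m]})
    return {"tasks": tasks, "methods": methods, "domains": domains}

config = propose_configuration("ICU")
assert "SPI" in config["methods"] and "therapies" in config["domains"]
```

The derived configuration is only a starting point; as described above, the user would then approve or modify it.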


5. Experiments in Component Reuse and Reconfiguration

5.1 Overview of the Guardian Experimental Agents

Table 3. Evolutionary Development of Guardian Experimental Agents: Reconfiguring Reusable Task, Method, and Domain Components

Component     Agent 1    Agent 2    Agent 3     Agent 4     Agent 5      Agent 6
Focus         v1         v1         v2          v2          v3           v3
Device Ctlr   -          -          yes         yes         yes          yes
TLM           -          -          v1          v2          v3           v3
Mm Detector   -          -          -           yes         yes          yes
Glovie/ReAct  Glovie     ReAct v1   ReAct v1    ReAct v1    ReAct v2     ReAct v2
ICE           v1         v1         v2          v2          -            -
FPR/tFPR      -          -          -           -           FPR          tFPR
CSFS          -          -          -           -           yes          yes
PCT           -          -          -           -           yes          yes
RTP           -          -          -           -           -            yes
MFM, SPI, TPI -          -          -           -           -            yes

Knowledge     Pulmonary  Pulmonary  Pulmonary   Pulmonary   Pulmonary    Pulmonary
                                    Cardiovasc  Cardiovasc  Cardiovasc   Cardiovasc
                                                Hematologic Hematologic  Hematologic
                                                            Renal        Renal
                                                            Neurological Neurological
                                                            Metabolic    Metabolic
                                                            Endocrine    Endocrine

Knowledge available for the specified organ systems: for the pulmonary and cardiovascular systems, parameters, signs, diseases, therapies, and structure/function; for the other systems, parameters, signs, and diseases only.

The Guardian project aims to develop effective IPM agents for intensive care unit (ICU) patient monitoring. In previous publications, we have discussed the ICU monitoring problem, Guardian's overall approach to it, and the details of Guardian's performance on specific monitoring scenarios [Hayes-Roth, 1990, 1994; Hayes-Roth, et al., 1989; Hayes-Roth, et al., 1992]; we do not repeat that discussion here. Instead, we focus on the software engineering advantages we have enjoyed by developing these different Guardian agents within the framework of our IPM DSSA, especially advantages due to its support for component reuse. The discussion is organized around the six experimental Guardian agents we have developed, as characterized in Table 3. Each of these agents configures a different subset of the IPM task, method, and domain components described in section 3 within the IPM reference architecture described in section 2.

5.2 New Application Synthesis through Architecture and Component Reuse

The IPM DSSA facilitates the development of new applications by providing a suitable reference architecture to serve as a foundation for application-relevant components and a library of reusable components to be used as ready-made or modifiable building blocks. Thus, new applications can be developed largely out of existing software entities; application builders can restrict their efforts to necessary new components or modifications. Demonstrating architecture reuse, all of the agents in Table 3 incorporate the same underlying IPM reference architecture. In fact, the original development of, and many improvements to, this architecture preceded all of the agents [Hayes-Roth, 1985; 1990], so even agent 1 benefited from substantial savings in time and cost (on the order of tens of person-years) due to the availability of an appropriate reusable reference architecture.
Demonstrating component reuse, each of agents 2-6 reuses some components developed for a predecessor, along with one or more new or improved components. For example, agent 2 reuses agent 1's Focus v1 and ICE v1 components, as well as its pulmonary knowledge component. Agent 2 replaces agent 1's Glovie components with ReAct v1 components. Note that substituting ReAct v1 for Glovie requires no additional changes because their methods perform the same generic tasks as defined within the IPM shared ontology; in agent 2, ReAct v1 components run wherever Glovie components run in agent 1. In a more complex case, agent 6 incorporates all of agent 5's method and domain components, but adds three new components: MFM, SPI, and TPI. Here, MFM performs tasks already defined within the IPM ontology (e.g., diagnosis, prediction), for which other alternative methods (ReAct and PCT) also exist within agent 6. It is necessary to augment agent 6's control plans to ensure that it uses appropriate constraints to choose among these three competing methods at run time. In all cases,


development of a new agent is accelerated by an amount roughly proportional to the total time and cost of developing the reused components.

Although all of our experimental work to date has been on Guardian ICU monitoring agents, most of the IPM components are independent of the ICU monitoring domain. Certainly the component tasks and many of the methods could be used in a variety of medical applications. In addition, domain knowledge of parameters, signs, and patterns, as well as the structure, behavior, and function of organ systems, generalizes to many other medical application domains. Disease and treatment knowledge is more application-specific. However, even parts of this knowledge may be transferred to other applications. For example, consider emergency room monitoring of patients with hypovolemic shock. First principles and parameter/sign information on all organ systems from the IPM library could be reused, since most of this information would remain relevant. Disease and complication information for the cardiovascular and respiratory systems would be partially replaced with information relevant in the hypovolemic shock context. However, many of the disorders in the hematological knowledge base are relevant in the shock context and therefore would be retained. Thus, depending on the similarity between the base and target application domains, reuse of existing knowledge bases could provide significant savings in expert and knowledge-engineer effort.

5.3 Tailoring of Applications

The IPM DSSA facilitates tailoring of applications. For example, a number of ICU monitoring agents might be configured for different patients with the same set of task and method components, but with different subject domain components, depending on the specific organ systems that need to be monitored for each patient.
Similarly, two different agents might be configured for monitoring the same patient in the OR and the ICU with the same task and subject domain modules, but with different method modules, reflecting the different resources available and time constraints imposed on monitoring in the OR versus the ICU. Tailoring of applications might reflect other factors as well. For example, a monitoring agent for a particular patient in a particular setting might be configured with different task and method components, depending on the attending physician's preferences regarding which tasks to assign to the agent (as opposed to other human members of the monitoring team), which monitoring strategies are most effective, etc.

5.4 Run-Time Reconfiguration of Agents

The IPM DSSA facilitates reconfiguration of applications in light of changed run-time circumstances. For example, superior components (e.g., a new and better planning method) may become available after initial system configuration. Alternatively, different or additional components may be required to deal with unexpected changes in the patient's condition or changes in the availability of monitoring devices. In any of these situations, new components can be substituted for the old ones or added to the knowledge base alongside the old ones, without interrupting system operation. The agent's event-


based enabling of behavioral methods, its plan-based meta-control choices among competing methods, and its efforts to retrieve necessary knowledge from the blackboard are not preprogrammed to handle any particular tasks, methods, or domain facts; they operate on whatever task, method, and domain knowledge are available at run time and match the constraints of operative control plans.

5.5 Multi-Person Evolutionary Development

Our experiments provide their strongest support for the IPM DSSA's facilitation of multi-person evolutionary system development. Although the Guardian agents are experimental systems, the Guardian project faces many of the same challenges faced by "real" software system development projects:
• Large Software System. Guardian agents, including architecture and components, comprise on the order of 100K lines of Lisp code. The IPM DSSA provides a natural decomposition of the Guardian software into architecture and task, method, and domain components that can be developed, tested, and maintained nearly independently.
• Heterogeneous System Components. Guardian agents integrate the performance of several complementary tasks, in many cases by several different methods, including both internally developed and imported components. The IPM DSSA readily accommodates diverse components through its event-based triggering, its plan-based scheduling, and its shared blackboard and IPM ontology for integrating results.
• Changing Project Members. About a dozen computer science students and researchers have worked on different parts of Guardian, averaging three or four project members in any given year. Half a dozen physicians and medical students have consulted on the project, averaging one or two in any given year. The IPM DSSA facilitates cooperative system development by preserving the decomposition of domain expertise into task, method, and domain components and allowing individual team members to focus on just those components of interest.
• Evolutionary Development. The Guardian agents have been developed incrementally over the last six years in response to an expanding (and occasionally contracting) set of application requirements and, in our case, changing student research interests. By providing a stable architectural platform and reusable components, the IPM DSSA explicitly enables incremental development and modification. Guardian benefits from its ability to incorporate whatever components become available. Students and other project members benefit because they can exploit the availability of existing components and because they can demonstrate and evaluate their contributions in the context of a more complex Guardian agent. Finally, the IPM DSSA provides a controlled software environment for comparatively evaluating the performance of Guardian agents that differ systematically in their component configurations.
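Returning to the run-time behavior discussed in section 5.4, a minimal sketch of choosing among competing methods for one task (e.g., MFM, PCT, and ReAct for diagnosis) might look as follows. The time costs and result qualities below are invented for illustration and are not measurements from Guardian.

```python
# A Python sketch of run-time choice among competing methods for one task,
# in the spirit of plan-based meta-control. The time costs and result
# qualities below are invented for illustration.

METHODS = [
    {"name": "MFM",   "task": "diagnose", "time": 1,  "quality": 0.6},
    {"name": "PCT",   "task": "diagnose", "time": 5,  "quality": 0.8},
    {"name": "ReAct", "task": "diagnose", "time": 10, "quality": 0.9},
]

def choose_method(task, deadline, methods=METHODS):
    """Among configured methods for the task that fit the deadline,
    choose the one promising the highest-quality result."""
    feasible = [m for m in methods if m["task"] == task and m["time"] <= deadline]
    if not feasible:
        return None
    return max(feasible, key=lambda m: m["quality"])["name"]

assert choose_method("diagnose", deadline=2) == "MFM"     # tight deadline
assert choose_method("diagnose", deadline=30) == "ReAct"  # ample time
```

Because the candidate set is just data, substituting or adding a method at run time requires no change to the selection logic, which is the property the preceding sections rely on.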


5.6 Informal Cost/Benefit Analysis

The IPM DSSA provides a framework for software synthesis out of reusable components for the class of intelligent patient monitoring agents. But reusability is achieved at a cost, namely, the marginal cost of developing software that meets the interface requirements of the architecture and shared ontology in addition to performing its primary function. This cost must be weighed against the benefits of reusability. Barnes and Bollinger [1991] propose a "quality-of-investment" measure, Q = B/R, for a given software entity, where B is the total cost savings accumulated over all subsequent reuse applications and R is the total initial investment in reuse. Although we have only limited experience so far with Guardian, we can estimate values of Q for the IPM reference architecture and components by estimating the corresponding values of B and R.

Based on our experience with six Guardian agents, the IPM reference architecture gives a high expected value of total cost savings (B) by offering a large domain of past and prospective reuse opportunities and substantial cost savings for each reuse application. Although the IPM reference architecture is itself an object of research and likely to undergo modifications in the future, we have not experienced any significant architectural limitations so far and expect the essential IPM reference architecture to provide adequate support for foreseeable Guardian agents and other IPM agents. Although the initial development cost for the architecture was substantial, most of that cost was a necessary investment; the marginal cost of making the architecture domain independent (R) was minimal. Based on these estimates of total cost savings and total reuse investment, the expected value of Q, the quality of investment for the architecture, is quite high.

The IPM components we have developed so far vary widely in the expected value of total cost savings (B).
For the six Guardian agents reported in this paper, the number of reuse opportunities so far varies from 1 to 6, and the cost savings for each reuse application range from a few person-weeks to a person-year. Moreover, because ours is a research project and most of the software components are investigations of new ideas, we believe these numbers underestimate both the number of reuse opportunities and the reuse savings for the best components that might come out of our project. By contrast, the marginal cost (R) of making components that follow our prescribed decomposition of tasks, methods, and subject domains seems minimal. Members of our project team simply combine (a) good software engineering practice, decoupling an application's interface (subject domain and tasks) from its implementation (method), with (b) good knowledge engineering practice, decoupling an application's knowledge (subject domain) from its inference engine (task and method). Based on these estimates of total cost savings and total reuse investment, the expected quality-of-investment (Q) value for components developed for the IPM library ranges from modest to moderately high.
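As a worked example of the quality-of-investment measure discussed above, Q = B/R can be computed directly; the person-week figures below are hypothetical, not project measurements.

```python
# Worked example of the Barnes and Bollinger quality-of-investment measure
# Q = B / R; the person-week figures below are hypothetical.

def quality_of_investment(savings_per_reuse, reuse_count, reuse_investment):
    """Q = B / R: B is the total savings over all reuse applications,
    R is the initial investment in making the component reusable."""
    return (savings_per_reuse * reuse_count) / reuse_investment

# A component that cost 2 extra person-weeks to make reusable and saved
# about 4 person-weeks in each of 5 later agents:
assert quality_of_investment(4.0, 5, 2.0) == 10.0   # Q >> 1: reuse paid off
```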


6. Conclusion

Our results suggest that the IPM DSSA provides an effective means of enabling software reuse in an important class of intelligent patient monitoring applications. It facilitates new application development, tailoring of applications to patient histories and physician monitoring strategies, and any necessary run-time reconfiguration. The greatest cumulative payoff accrues in long-term, evolutionary system development efforts. We hypothesize that similar DSSA approaches can be developed for other medical applications, with potential reuse of components across different medical application domains. In fact, the blackboard model [Erman, et al., 1980] underlying the IPM reference architecture was originally designed as a development environment for the Hearsay-II speech-understanding system, to address similar software engineering challenges within a complex "single-problem" application. With the IPM DSSA, we have expanded on the inherent strengths of the blackboard architecture with capabilities for plan-based meta-control and with our knowledge decomposition framework, and we have shown its utility as a development environment for complex "multi-problem" applications such as IPM agents.


References

Ash, D., Gold, G., Seiver, A., and Hayes-Roth, B. Guaranteeing real-time response with limited resources. Journal of Artificial Intelligence in Medicine, 5 (1), 1993, 49-66.

Ash, D., and Hayes-Roth, B. A comparison of action-based hierarchies and decision trees for real-time performance. Proc. of the National Conference on Artificial Intelligence, Washington, D.C., 1993.

Barnes, B.H., and Bollinger, T.B. Making reuse cost-effective. IEEE Software, January, 1991, pp. 13-24.

Biggerstaff, T.J. Software reusability promise: Hyperbole and reality. Proceedings ICSE 13, IEEE Computer Society Press, pp. 52-54, May, 1991.

Biggerstaff, T.J., and Perlis, A.J. Software reusability. ACM Press, 1989.

Biggerstaff, T., and Richter, C. Reusability framework, assessment, and directions. IEEE Software, March, 1987, pp. 41-49.

Boehm, B.W., and Scherlis, W.L. Megaprogramming. In Proceedings of the DARPA Software Technology Conference, Los Angeles, 1992.

Dabija, V., and Hayes-Roth, B. A framework for deciding when to plan to react. Submitted to the National Conference on Artificial Intelligence, 1994.

Dean, T., and Boddy, M.

Drakopoulos, J., and Hayes-Roth, B. A sigmoidal based fuzzy pattern recognition system for multi-variate time-dependent patterns. Submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence, 1993.

Erman, L., Hayes-Roth, F., Lesser, V., and Reddy, R. The Hearsay-II speech-understanding system: Integrating knowledge to reduce uncertainty. Computing Surveys, 12, 213-253, 1980.

Freeman, P. (ed.) Software Reusability. Los Alamitos, Ca.: IEEE Computer Society Press, 1991.

Friedland, P., and Iwasaki, Y. The concept and implementation of skeletal plans. Automated Reasoning, Vol. 1, No. 2, 1985.

Garvey, A., Hewett, M., Johnson, M.V., Schulman, R., and Hayes-Roth, B. BB1 User Manual. Stanford University, Knowledge Systems Laboratory Technical Report KSL-86-61, 1986.

Hayes-Roth, B. An architecture for adaptive intelligent systems.
Artificial Intelligence, Special Issue on Agents and Interactivity, accepted for publication, to appear in 1994.


Hayes-Roth, B. A Blackboard for control. Artificial Intelligence, 26, 251-321, 1985.

Hayes-Roth, B. Architectural foundations for real-time performance in intelligent agents. Real-Time Systems: The International Journal of Time-Critical Computing Systems, 2, 99-125, 1990.

Hayes-Roth, B. Opportunistic control of action in intelligent agents. IEEE Transactions on Systems, Man, and Cybernetics, in press, 1993.

Hayes-Roth, B., Buchanan, B.G., Lichtarge, O., Hewett, M., Altman, R., Brinkley, J., Cornelius, C., Duncan, B., and Jardetzky, O. Protean: Deriving protein structure from constraints. Proceedings of the National Conference on Artificial Intelligence, 1986a.

Hayes-Roth, B., Garvey, A., Johnson, M.V., and Hewett, M. A modular and layered environment for reasoning about action. Stanford University: Technical Report No. KSL 86-38, 1986b.

Hayes-Roth, B., Johnson, M.V., Garvey, A., and Hewett, M. Applications of BB1 to arrangement-assembly tasks. Journal of Artificial Intelligence in Engineering, 1, 85-94, 1986c.

Hayes-Roth, B., Lalanda, P., Morignot, P., Pfleger, K., and Balabanovic, M. Plans and behavior in intelligent agents. KSL Technical Report, 1993.

Hayes-Roth, B., Pfleger, K., Lalanda, P., Morignot, P., and Balabanovic, M. A domain-specific software architecture for a class of adaptive intelligent systems. Submitted to IEEE Transactions on Software Engineering, 1994.

Hayes-Roth, B., Washington, R., Hewett, R., Hewett, M., and Seiver, A. Intelligent monitoring and control. Proc. of the International Joint Conference on Artificial Intelligence, Detroit, Mi., 1989.

Hayes-Roth, B., Washington, R., Ash, D., Hewett, R., Collinot, A., Vina, A., and Seiver, A. Guardian: A prototype intelligent agent for intensive-care monitoring. Journal of Artificial Intelligence in Medicine, 4, 165-185, 1992.

Hewett, R., and Hayes-Roth, B. Representing and reasoning about physical systems using prime models. In J. Sowa, S. Shapiro, and R. Brachman (Eds.)
Formal Aspects of Semantic Networks, Morgan Kaufmann, 1990. Hayes-Roth, F. Architecture-based acquisition and development of software: Guidelines and recommendations from the ARPA Domain-Specific Software Architecture (DSSA) Program, Technical Report, Teknowledge, Inc., 1994. IEEE Transactions on Software Engineering, Special Issue on Software Reusability, September, 1984.


Larsson, J.E. Multi-Level-Flow models for intelligent monitoring, submitted to Artificial Intelligence, 1994.
Lind, M. Representing goals and functions of complex systems: An introduction to multilevel flow modeling. Technical Report, Institute of Automatic Control Systems, Technical University of Denmark, Lyngby, Denmark.
Mettala, E. Domain specific software architectures. Presentation at ISTO Software Technology Community Meeting, June, 1990.
Mettala, E., and Graham, M.H. The domain-specific software architecture program. CMU/SEI Report CMU/SEI-92-SR-9, June, 1992.
Murdock, J., and Hayes-Roth, B. Intelligent monitoring of semiconductor manufacturing. IEEE Expert, 6, 19-31, 1991.
Pardee, W.J., Schaff, M.A., and Hayes-Roth, B. Intelligent control of complex materials processes. Journal of Artificial Intelligence in Engineering, Design, Automation, and Manufacturing, 4, 55-65, 1990.
Pearl, J. Probabilistic reasoning in intelligent systems. Morgan Kaufmann, San Mateo, CA, 1988.
Peng, Y., and Reggia, J.A. Abductive inference models for diagnostic problem-solving. Springer-Verlag, New York, NY, 1990.
Prieto-Diaz, R., and Arango, G. (eds.) Domain Analysis and Software Systems Modeling. Los Alamitos, Ca.: IEEE Computer Society Press, 1991.
Tommelein, I.D., Hayes-Roth, B., and Levitt, R.E. Altering the SightPlan knowledge-based systems. Journal of Artificial Intelligence in Engineering, Automation, and Manufacturing, 6, 19-37, 1992.
Tracz, W. (ed.) Software Reuse: Emerging Technology. Los Alamitos, Ca.: IEEE Computer Society Press, 1991.
Tracz, W. Software reuse technical opportunities. Paper prepared for DARPA Software Program PI Meeting, 1992.
Tracz, W. Software reuse maxims. ACM SIGSOFT Software Engineering Notes, 13, 1988, pp. 28-31.
Tracz, W., Coglianese, L., and Young, P. A domain-specific software architecture engineering process outline. ACM SIGSOFT Software Engineering Notes, 18, 40-49, 1993.


Vina, A., Ash, D., and Hayes-Roth, B. Engineering reactive agents for real-time control. Proc. of Avignon '91: Expert Systems and their Applications, Avignon, 1991.
Washington, R. Real-time abstraction planning. PhD Thesis, Stanford University, 1994.
Washington, R., and Hayes-Roth, B. Input data management in real-time AI systems. Proc. of the International Joint Conference on Artificial Intelligence, Detroit, Mi., 1989.
Winograd, T. Beyond programming languages. In D. Partridge (ed.), Artificial Intelligence and Software Engineering. Norwood, N.J.: Ablex Publishing Corp., 1991.
