Integrated Design and Process Technology, IDPT-2002, June 2002. Printed in the United States of America. © 2002 Society for Design and Process Science
QUALITY AND SAFETY OF CLINICAL DECISION SUPPORT TECHNOLOGIES: A DISCUSSION OF THE ROLE OF FORMAL METHODS

John Fox
Cancer Research UK, Lincoln's Inn Fields, London WC2A 3PX, UK

ABSTRACT

All new medical technologies have the potential to introduce new hazards despite the strongest efforts to avoid them (c.f. unanticipated side-effects of new drugs despite rigorous testing in clinical trials). This applies to medical software as much as to conventional clinical interventions, so software developers clearly have a responsibility to prevent systems contributing to avoidable patient risks, and to ensure that unavoidable hazards are properly detected and managed should they occur. Even with our best efforts, however, we may never be able to completely avoid the possibility that someone will be harmed in circumstances where a system such as a decision support or workflow system is involved. This position paper observes that software designers have a "duty of care" and reviews current techniques for ensuring safe development and deployment of technologies and applications. Formal methods may have a particular role in offering certain clinical safety guarantees, but they represent only one of many techniques that are needed in a design culture in which patient safety is paramount.

INTRODUCTION

"..we must systematically design safety into processes of care" Institute of Medicine, 2000
There is now good evidence that clinical decision support technologies such as patient monitoring and reminder systems, prescribing, treatment management and workflow systems can make a significant contribution to improved quality and consistency of patient care. In light of the recognition that human error in the delivery of patient care is a major source of avoidable mortality and morbidity (Kohn et al, 2000), interest in the use of such technologies is now growing rapidly. My group has a long commitment to developing technologies for decision support (see Elkin et al, 2000) and a number of applications have been successfully fielded (e.g. Fox and Thomson, 1998). We have also been concerned about the safety-critical nature of many such applications and have sought to apply rigorous software engineering methods to achieving quality,
including formal specification and verification techniques (Fox, 1993; Krause et al, 1993). Much of this work has recently come together in the PROforma specification language, which was designed to formalise clinical processes and tasks, and the associated methodology and tools for building applications such as point of care decision support, guidelines, care pathways, workflow systems and the like (Fox and Das, 2000). PROforma has provided a useful test-bed for investigating quality and safety issues in developing these new services. It suggests that a formal approach can take us a long way in addressing these issues, and I strongly support the aims of this workshop in promoting the adoption of formal methods. However, as a supporter, I would also like to discuss how our experience, and that in other safety-critical industries, suggests that formal methods are only part of the story in achieving high levels of quality and safety of software systems.

In this paper I consider the role of formal methods from the perspective of the designer's responsibilities in building clinical software, focusing on clinical decision support systems (CDSSs) and the quality, safety and legal liability issues that they raise. I argue that formal methods are valuable in addressing some questions, but only within the larger context of a culture in which they are integrated with a range of informal quality management procedures, proper hazard and safety analysis and testing, and monitoring of systems in use.¹

THE PROFORMA LANGUAGE AND DEVELOPMENT METHODOLOGY

PROforma is an executable specification language for describing clinical processes. To facilitate this, processes are described in terms of a small set of tasks that a clinician needs to carry out to achieve clinical objectives (Fox and Das, 2000). The language is based on
¹ Parts of this discussion are based on a "Green Paper" currently in discussion among members of OpenClinical (www.openclinical.org). One of OpenClinical's objectives is to promote improved methodologies for ensuring quality and safety of clinical knowledge management systems and we invite workshop participants to join OpenClinical to support its aims.
a temporal logic language called R2L (Das et al, 1997) that has been translated into an object-oriented format that can be supported by modern software engineering methods. PROforma has proved to be a versatile system for capturing many types of clinical process, including decisions, protocols and care pathways. Software tools are available for authoring applications in the language, including a commercial toolset (Arezzo; see www.infermed.com) and the Tallis release that we are developing to support web based publishing of interactive clinical guidelines (Fox et al, 2001).
PROforma is defined around four general classes of task:

Decisions. A decision is any kind of choice. It is an abstraction from patient diagnosis, treatment, prognosis, risk assessment, referral and other decisions. PROforma supports a general procedure for making decisions under uncertainty.

Actions. An action is any external service that needs to be carried out as part of a patient's care (e.g. giving an injection). A PROforma action can be enacted by issuing a request to clinical staff or by direct control of a medical device.

Enquiries. An enquiry is an abstraction of any task that acquires data. The implementation of an enquiry may entail a manual process (e.g. a user enters data via a form on a screen) or an automatic process (e.g. acquiring data from a patient record or a device).

Plans. Collections of tasks that operate together to achieve some objective are called plans. Any of the above tasks can be included in a plan, and the tasks in a plan may be scheduled over time (e.g. the steps in a chemotherapy protocol). PROforma is defined recursively over plans, so an application can be a hierarchical task structure of any complexity.

The language combines features of a formal specification language as developed in software engineering with those of knowledge representation languages as developed in AI. It can be viewed as a hybrid of a logic programming language (it supports inference in propositional and predicate logics, together with certain non-classical logics) and an object-oriented language, in which the objects are tasks designed to achieve clinical goals.

A simple PROforma application is ERA (Early Referrals Application), which is designed to assist British General Practitioners in deciding whether patients with suspected cancer should be referred for urgent investigation. A publicly accessible demonstration of 12 ERA guidelines can be found at www.infermed.com/wap/era.² It is based on a standard text published by the UK Department of Health ("Referral Guidelines for Suspected Cancer", see http://www.doh.gov.uk/cancer/referral.htm). A graphical representation of an ERA guideline is shown in figure 1 and the PROforma specification is reproduced in the Appendix. In the PROforma development method a task's behaviour is specified by populating a collection of attributes using a graphical design environment (figure 1).
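The four task classes and the recursive plan structure described above can be made concrete with a minimal sketch. The following Python fragment is purely illustrative: the class and attribute names are invented for exposition and do not correspond to the actual PROforma or Tallis implementation.

from dataclasses import dataclass, field

# Minimal, hypothetical sketch of the PROforma task ontology.
@dataclass
class Task:
    name: str
    caption: str = ""

@dataclass
class Enquiry(Task):
    sources: list[str] = field(default_factory=list)   # data items to acquire

@dataclass
class Action(Task):
    procedure: str = ""    # external service to request (e.g. an injection)

@dataclass
class Decision(Task):
    candidates: list[str] = field(default_factory=list)  # options to choose between

@dataclass
class Plan(Task):
    components: list[Task] = field(default_factory=list)  # plans nest recursively

# Plans may contain sub-plans to any depth, mirroring the recursive
# definition of PROforma over plans:
breast = Plan("Breast", components=[
    Enquiry("Clinical_information", sources=["age", "tissue_changes"]),
    Decision("Referral_decision",
             candidates=["Two_week_referral", "Non_urgent_referral", "No_referral"]),
    Action("Two_week_referral", procedure="refer urgently to specialist"),
])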
Figure 1: Tallis authoring and testing environment for developing an executable PROforma specification of clinical tasks. The top window shows the main plan, here an ERA guideline for assisting in the decision whether or not to refer a patient for suspected breast cancer (see text). This guideline consists of an enquiry which requests information about the patient’s clinical history (green diamond) followed by a decision (circle) which recommends whether a patient should be seen urgently by a specialist. The decision is followed by three possible actions (squares), any of which could be recommended by the decision. The left panel shows a tree view of the whole plan (the plan is represented by a pink ellipse at the top). The tree view is particularly useful for complex applications e.g. when plans contain many sub-plans. The panels at the bottom right are CASE tools for populating the specification of any selected task.
QUALITY, SAFETY AND LEGAL LIABILITY

In many medical applications such as ERA there is a risk that clinical errors may occur due to some sort of failure.
² ERA is a mixture of simple HTML pages and active decision support. To start one of the decision support processes, select a specific cancer by clicking on the appropriate link in the left-hand column. A patient data form will be displayed; once this has been completed, click on the OK button to submit the data for processing by a PROforma server on the ICRF web site. ERA will return its recommendations on the basis of the data provided.
For example, the data available about a patient could be inaccurate, or the conclusions drawn from it may be incorrect because the system's inference procedures are unsound in some way or because the reasoning does not cover unusual contingencies that have not been considered by the designers. There are many ways in which errors can creep into such applications, even one as simple as ERA. For this reason PROforma technology is designed to support a logical specification of clinical procedures and of the knowledge used in such procedures. The authoring tools also include CASE tools that support a limited form of syntax-directed analysis of this specification.

There is also a longstanding and unanswered issue concerning the legal liability of CDSSs and other such systems: if a decision support system gives bad advice, then who is responsible? Is it the software designers? The providers of the medical knowledge that it makes use of? Or the end-users, the healthcare professionals who are responsible for the final clinical decision? No-one knows since, so far as we can establish, there is no case law setting the relevant precedents, either in the UK or elsewhere.

All guideline developers will of course wish to minimise the chances of patient harm, for the patient's sake and to anticipate possible legal liabilities that might result from the use of technologies such as PROforma. In pursuing these objectives we have reviewed a range of current practices in software and safety engineering with a view to establishing a quality methodology that is appropriate for CDSS technology (Fox and Das, 2000). In addition, colleagues in our organisation have carried out a risk assessment to establish circumstances in which legal liability issues might arise for our organisation, and consulted independent legal opinion regarding the likely exposure of any supplier should patients come to harm in situations where its technology has been used.

From the first study we have concluded that we have much to learn from established quality practices in software engineering, particularly software safety engineering. Software is increasingly developed according to systematic development lifecycles, of course, which cover the design, implementation and ongoing maintenance of software that is intended for use in safety-critical applications (Leveson, 1995). Quality methodologies are supported by internationally accepted standards, such as the International Standards Organisation 9000 quality standard [see references section for URL]. Furthermore, the software industry appears to be adopting the recently published International Electrotechnical Commission 61508 standard as a basis for establishing best practice in the design and development of safety-critical software [see references section for URL]. However, neither the ISO nor the IEC has the authority or resources to enforce
their standards (e.g. by any audit or certification process), so the current position is that industry must police itself.

The main conclusion from the risk and liability assessment is that the legal position regarding liability of CDSSs is unclear. Major organisations such as the US FDA and its counterparts in the EU have not published policies or standards; they appear to be waiting for legal cases to arise so that the courts can clarify the position for them. Given this lack of clarity it is common practice for suppliers of decision support products and services to place disclaimers on applications that attempt to limit their liability by restricting "proper" use of CDSSs. Typical examples are "In providing this expert system, [the company] does not make any warranty, or assume any legal liability or responsibility for its accuracy, completeness, or usefulness, nor does it represent that its use would not infringe upon private rights" and "The Software is provided "AS IS", without any warranty as to quality, fitness for any purpose, completeness, accuracy or freedom from errors". Given existing consumer protection legislation in many countries, legal opinion seems to be that such disclaimers actually offer limited protection if there are design faults with the product. The legal risks of CDSS suppliers can be reduced by taking out insurance, though the degree of protection this affords is likely to vary from country to country.

Despite the absence of case law in this area, a supplier of CDSSs would almost certainly be viewed as having a legal duty of care to patients who might be adversely affected by the technology and to professionals who may use it in good faith in their clinical practice. Consequently suppliers would be expected to follow what is considered to be best practice in the design, development, verification, validation, testing and marketing of such products. The goal of those developing CDSSs must of course be not simply to protect themselves through insurance, but to maximise the quality, safety and ethical use of their products, thereby minimising the risk of adverse clinical events and exposure to legal action. It is interesting to note that neither risk management professionals nor lawyers experienced in biomedical liability issues seem to be aware of the existence of formal design methods, or of their potential role in satisfying demands for quality and safety guarantees.

FORMAL METHODS

"Most standards and technical approaches to safety involve just 'getting the software right' or attempting to increase software reliability to ultrahigh levels. Although in the future it may be possible to construct perfect software, the current reality is that we cannot accomplish this goal for anything but the simplest systems." N Leveson, Safeware, 1995, ix
Formal methods appear to offer an attractive approach to improving the safety guarantees that a developer can offer. Unfortunately, the wholesale adoption of such procedures for promoting quality and safety is somewhat problematic. The use of formal design and verification in software engineering is demanding, and even after 20 years of research and development in formal methods the necessary skills are not widely available in the software industry. Demands for the adoption of formal methods could also rebound: the increased costs to the supplier would reduce the commercial incentive for clinical development and so discourage the building of such systems.
Furthermore, no current standard can absolutely guarantee the safety of medical software; there are just too many possible clinical contingencies that can arise, many of which cannot be foreseen or controlled by the designers. In reality all that a developer can do is commit reasonable effort to achieving acceptable levels of quality and safety. The problem is that the meanings of the terms "reasonable" and "acceptable" are vague, and an organisation could commit indefinite resources in return for ever-diminishing benefit. Consequently it is generally accepted in safety engineering that developers should not always be required to use rigorous development and verification techniques; their duty of care only entails a responsibility to bring the level of risk associated with the use of a system to a level that is "as low as reasonably practicable" (ALARP).
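As an illustration of how the ALARP principle might translate into development practice, the following sketch maps an assessed risk level to a proportionate set of assurance activities. The risk tiers and method assignments are assumptions invented for this example (the Q, C and S labels refer to the methods enumerated in the following sections); this is not a prescription from any standard.

# Illustrative sketch of ALARP-style, risk-proportional method selection.
# The tiers and the methods assigned to them are assumptions, not a standard.

BASELINE = ["quality plan (Q2)", "recorded testing (Q3)", "peer review (C1)"]

TIERED = {
    "low":    [],
    "medium": ["detailed HAZOP (S2a)", "safety-obligation testing (S2b)"],
    "high":   ["detailed HAZOP (S2a)", "safety-obligation testing (S2b)",
               "formal design and verification (Q1/C3)", "safety case (S2c)",
               "active hazard management (S2d)"],
}

def required_methods(risk_level: str) -> list[str]:
    """Return assurance methods proportional to the assessed clinical risk."""
    return BASELINE + TIERED[risk_level]

if __name__ == "__main__":
    for level in ("low", "medium", "high"):
        print(level, "->", required_methods(level))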
Ensuring that the medical content of a decision support or guideline system is of high quality raises additional problems. Medical knowledge is subject to frequent change and new research can often demonstrate that past clinical practices are ineffective or even hazardous. Furthermore, knowledge quality will often be a professional judgement, by an expert individual or group, and cannot always be based on objective scientific evidence as to efficacy and safety. Even when there is evidence it may be limited, open to different interpretations, and subject to change as medical knowledge advances.
In the remainder of this discussion we consider a range of available options and propose an outline strategy, based on the ALARP principle, for deciding when to adopt those options, and particularly when to use formal design. We abandon any idea that a particular approach to quality and safety (such as formal specification and verification) is a panacea for all applications. Rather, we propose a more flexible framework based on the ALARP principle that places a duty of care on CDSS developers and suppliers while permitting them to follow reasonable rules in determining the resources to commit to formal and rigorous development methods. The approach we propose is to assess the level of clinical risk that may be associated with a specific application and to adopt quality and safety procedures whose rigour is proportional to the level of risk.

METHODS FOR ASSURING QUALITY OF CDSSs

The quality of a decision support system needs to be considered at two levels: the level of the CDSS technology platform (the software which is used to build a clinical application) and the level of the specific clinical application (the medical knowledge content). The following standard quality methods are applicable to both:

Q1. Software should be designed, implemented, tested and documented using generally recognised methods, notably clear development life-cycles and, where there are high levels of risk, formal design and verification.

Q2. An explicit quality plan should be developed covering all phases of implementation, testing and maintenance of the system.

Q3. Testing should be carried out following accepted practices, with all tests and their results recorded for review.
Even a formal representation of medical knowledge cannot therefore be proved to be clinically comprehensive or objectively valid; it can only attempt to capture unambiguously the current state of professional and scientific opinion. Nevertheless current verification techniques make it possible to demonstrate automatically that the medical knowledge used in a CDSS satisfies certain technical requirements such as consistency and completeness. However, for the foreseeable future the medical content will need to be approved by professional clinicians or other authorities, so it should also be a requirement that formal content is humanly legible, so far as possible, and can be effectively assessed by appropriate reviewers and end-users. The developers of decision support systems should therefore seek to achieve at least the level of quality assurance that is applied to more traditional knowledge sources, such as medical journals and reference texts, augmented with methods that are appropriate for the new types of knowledge technology. Methods for quality control of the medical content of an application (rather than its technical properties) should include:

C1. Use of peer review by competent individuals. Peer review may include static assessment of content (reading the knowledge base) and dynamic assessment (testing the performance of the application against example patient data).

C2. All content should be available in a legible form for review by the end users of the system, both in static form
(e.g. as text) and dynamic form (e.g. as explanations of any decision or recommendation that the system makes for a specific patient).

C3. In certain circumstances where risks are high, formal methods may have a particular role. Formal analysis can be carried out to identify internal inconsistencies, gaps, redundancy, ambiguity, violations of integrity constraints etc. Ideally these analyses will be carried out with automated techniques, such as syntax-directed verification and model-checking, though this may not always be practical.

METHODS FOR ASSURING SAFETY OF CDSSs

Safety is more than quality. A CDSS that is designed and implemented to high quality standards, and is working exactly as intended, can still give bad clinical advice. For example, the advice may not take into account unusual patient circumstances (e.g. unusual combinations of conditions; local lack of resources). In software systems in which there are significant safety considerations, therefore, an explicit protocol should be adopted which provides some assurance that the design and implementation of such systems minimises avoidable hazards to patients or others. This protocol may include:

S1. A basic Hazards and Operability Analysis at the start of development in order to classify the potential level of clinical risk associated with the application.³

S2. If the HAZOP suggests a high level of risk then development should include a separate "safety stream" (a sketch of an obligation-tracking record for this stream follows the list), including:

a. A detailed HAZOP carried out alongside the software requirements specification phase, to identify clinical situations or events that could be associated with increased patient mortality or morbidity. Each such situation represents an obligation on system developers to make appropriate design changes which will prevent the anticipated hazard.

b. Testing should explicitly include procedures to demonstrate that all safety obligations have been discharged.

c. At completion of the application a "safety case" should be prepared which documents the principal hazards, management options, design choices and associated safety arguments that have been considered in developing the CDSS.

d. The application may include active safety management during operation, such as hazard monitoring and amelioration (Fox and Das, 2000).
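The following is a minimal sketch of the obligation-tracking record mentioned under S2. All names are hypothetical and no PROforma facility is implied; it simply records, for each hazard identified by the HAZOP, the design change adopted and the test demonstrating that the obligation has been discharged.

from dataclasses import dataclass

# Hypothetical safety-obligation log for the safety stream (S2).
@dataclass
class SafetyObligation:
    hazard: str              # clinical situation identified by the HAZOP (S2a)
    design_change: str       # mitigation adopted to prevent the hazard
    test_id: str             # test demonstrating the mitigation works (S2b)
    test_passed: bool = False

def undischarged(obligations: list[SafetyObligation]) -> list[SafetyObligation]:
    """Obligations still open; the safety case (S2c) should show this list empty."""
    return [o for o in obligations if not o.test_passed]

log = [
    SafetyObligation(
        hazard="urgent referral not recommended for a discrete lump at age >= 30",
        design_change="added 'for' argument to Two_week_referral candidate",
        test_id="T-017", test_passed=True),
]
assert not undischarged(log)  # all recorded hazards discharged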
DISCUSSION

Documented compliance with a clear quality and safety procedure will provide the best practical demonstration that a developer has taken the duty of care seriously: any faults, accidents or other mishaps that subsequently occur are probably unavoidable given the current state of clinical and scientific knowledge, and do not represent negligence by the developer. Formal specification and verification techniques will have a role in achieving quality, and eventually an important one in ensuring safety, but in their most rigorous form they are currently demanding and expensive to use, and the need for them is a matter of judgement; their use should not always be mandatory and may indeed be quite unusual. Formal thinking is important, however, particularly if specification and verification can be incorporated into a lightweight development lifecycle (Robertson and Agusti, 1999).

We have proposed such a lightweight methodology for the PROforma technology, illustrated in Figure 2. The methodology has two streams. The stream on the left is a standard lifecycle that is primarily concerned with the quality of application content, formalised as a set of PROforma decisions, plans and other tasks. The first step in the quality stream is conceptual design, which sets out the basic medical objectives and clinical methods which the application is designed to support; it will usually include (though is not the same thing as) a requirements analysis. Steps two and three, task analysis and knowledge specification, produce a declarative specification of the intended clinical procedure: the first sketches the basic tasks informally, and the second captures the details of the required process, including scheduling and timing constraints, decision criteria, termination and abort conditions etc. CASE tools generate a declarative specification of the process, such as that illustrated in the Appendix. This can be verified in step four through a mixture of syntax-directed code checking and conventional testing. The final stage is deployment, operational testing and the feeding back of fault reports and other experience into the maintenance cycle.
³ HAZOP is a methodical investigation of the hazards and operational problems to which a technological system can give rise and "is particularly effective for new systems or novel technologies" (Redmill et al, 1999).
[Figure 2: The PROforma quality and safety lifecycle. The diagram shows two parallel streams. Quality stream (left): Conceptual design, Task analysis, Knowledge specification, Knowledge verification, Fault removal, Operational testing and use monitoring. Safety stream (right): Generalized hazard analysis, Application-specific hazard analysis and prevention, Active hazard management.]

The right-hand stream of the method is focused on safety. It includes a HAZOP analysis and, conditional on the results of this, the detailed hazard analysis discussed in the text. It is also possible to include runtime management of hazards and risks (Fox and Das, 2000). The safety stream can also include formal methods. For example, we can use manual or automatic checking of integrity and safety constraints (e.g. Hammond, 1996) based on a suitable safety formalism. Lsafe is a logic for reasoning about safety which has been developed for use in applications that are required to make recommendations or take decisions about actions that may be hazardous (Fox and Das, 2000). Lsafe assumes that medical knowledge is described in terms of the static properties of objects or situations and the dynamic consequences of actions and events in a clinical domain. It introduces a number of specialized modalities together with a temporal operator of the form [t1,t2]:

safe: action α is safe
authorized: action α is authorized
preferred: action α is preferred to action β
permitted: preconditions of action α are satisfied
obligatory: action α is obligatory
[t1,t2]: property ϕ is true in interval t1 to t2

Lsafe is intended for two main purposes. First, we aim to support semantic model checking, to determine whether or not an application can potentially enter states that are unsafe, and to identify points where authorization is needed or where modifications to planned actions may be required due to violation of safety constraints (as discussed by Hammond, 1996). Second, for some applications a safety logic may provide a foundation for designing processes that can make run-time recommendations or autonomous decisions about patient care. For example, we can introduce the additional modalities recommended and autonomously permitted:

α is recommended iff action α is permitted & action α is preferred to action ¬α & authorization of α is obligatory
α is autonomously permitted iff action α is permitted & authorization of α is not obligatory

Das has developed a model theory for Lsafe and proved its soundness and completeness. These are presented in Fox and Das (op cit, chapter 16), with an example of its use in the management of acute asthma.
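As a purely illustrative reading of the two derived modalities (Lsafe itself is a modal logic with a formal model theory, not executable code; the tuple encoding of modal facts below is an assumption made for exposition), the definitions can be evaluated over a set of atomic facts as follows:

# Illustrative reading of the derived Lsafe modalities over atomic modal
# facts. The tuple encoding is an assumption, not the Lsafe formalism.

def recommended(action: str, facts: set) -> bool:
    """alpha is recommended iff alpha is permitted, alpha is preferred to
    not-alpha, and authorization of alpha is obligatory."""
    return (("permitted", action) in facts
            and ("preferred", action, "not " + action) in facts
            and ("authorization_obligatory", action) in facts)

def autonomously_permitted(action: str, facts: set) -> bool:
    """alpha is autonomously permitted iff alpha is permitted and
    authorization of alpha is not obligatory."""
    return (("permitted", action) in facts
            and ("authorization_obligatory", action) not in facts)

facts = {
    ("permitted", "give_bronchodilator"),
    ("preferred", "give_bronchodilator", "not give_bronchodilator"),
    ("authorization_obligatory", "give_bronchodilator"),
    ("permitted", "record_peak_flow"),
}
assert recommended("give_bronchodilator", facts)
assert autonomously_permitted("record_peak_flow", facts)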
Although formal research into safety is an important direction, there are still fundamental limitations on how far this kind of approach can take us. This is primarily because the idea of a safe condition depends critically on an understanding of what it means to be safe; after all, a patient can come to harm in an indefinite number of ways. We do not of course have a general set of axioms for this, and therefore designers will depend on the judgement of medical specialists and clinical users for the foreseeable future. For this reason alone formal methods must be seen as an important but currently modest component of a safety strategy for software design in healthcare.

It is generally accepted that the safety and reliability of complex systems are socio-technical problems, not just technical ones. Quality and safety of design depend on the presence of a safety culture in which all individuals in the design, development and implementation team take their responsibilities seriously and carefully select and apply the appropriate level of rigour for the product they are constructing. Similar cultural factors have a major effect on the ability of user organisations, such as hospitals, to use the systems effectively. This is as much dependent on training, attitudes and commitment to safe clinical practice as it is on formal methods.

CONCLUSIONS

Management of the quality and safety of clinical decision support systems is an important but difficult challenge, potentially requiring technical, professional and organisational commitment. A policy that is overly lax could lead to patient harm, while one that is overly stringent will be a disincentive to developing such technologies and achieving the full potential for
improved patient care that they offer. In this paper I have set out a variety of options for improving quality and safety and for discharging our duty of care. It is not intended that all such options should be used in all applications, but that the level of investment in managing quality and safety should match the potential level of clinical risk associated with design faults or operational failures.

REFERENCES

Das S, Fox J, Elsdon D, Hammond P, "A flexible architecture for a general intelligent agent", Journal of Experimental and Theoretical Artificial Intelligence, 9, 407-440, 1997.

Elkin, Peleg, Lacson, Tu, Boxwala, Greenes and Shortliffe, "Toward Standardization of Electronic Guidelines", MD Computing, 17 (6), 39-44, 2000.

ISO 9000: see http://www.iso.ch/iso/en/iso900014000/index.html

IEC 61508: see http://www.iec.ch/61508/

Fox J, "On the soundness and safety of expert systems", Artificial Intelligence in Medicine, 5, 159-179, 1993.

Fox J, "Designing safety into medical decisions and clinical processes", invited lecture, in Udo Voges (ed.), Computer Safety, Reliability and Security: 20th International Conference, SAFECOMP 2001, Budapest, Hungary, September 26-28, 2001, Proceedings, Lecture Notes in Computer Science 2187, Berlin: Springer Verlag.

Fox J and Bury J, "A quality and safety framework for "intelligent" guideline systems", Proc. Annual Symposium of the American Medical Informatics Association, Los Angeles.

Fox J, Bury J, Humber M, Rahmanzadeh A, Thomson R, "Publets: Clinical Judgement on the Web", Proc. Annual Symposium of the American Medical Informatics Association, Washington DC, 2001.

Fox J and Das S, Safe and Sound: Artificial Intelligence in Hazardous Applications, AAAI and MIT Press, July 2000.

Fox J and Thomson R, "Decision support and disease management: a logic engineering approach", IEEE Transactions on Information Technology in Biomedicine, 2 (4), 217-228, 1998.

Hammond P, "Computer support for protocol-based treatments of cancer", Journal of Logic Programming, 26 (2), 93-111, 1996.

Kohn L T, Corrigan J M, Donaldson M S (eds), To Err Is Human: Building a Safer Health System, Committee on Quality of Health Care in America, Institute of Medicine, 2000.

Krause P J, Fox J, O'Neill M and Glowinski A, "Can we formally specify a medical expert system?", IEEE Expert, 1993.

Leveson N, Safeware: System Safety and Computers, Reading, Mass.: Addison-Wesley, 1995.

Redmill F, Chudleigh M, Catmur J, System Safety: HAZOP and Software HAZOP, Chichester: John Wiley, 1999.

Robertson D and Agusti J, Software Blueprints: Lightweight Uses of Logic in Conceptual Modelling, ACM Press: Addison-Wesley, 1999.
Appendix

PROforma specification of the ERA breast cancer referral guideline (excluding data definitions to improve clarity). The specification consists of a plan that contains a collection of tasks: an enquiry to collect patient data, followed by a decision to determine whether or not referral is required, and three alternative actions, which depend on the result of the decision.

/** PROforma Guideline: Suspected breast cancer **/
/** 25/10/2000, simplified by author for presentation, 1/3/2002 **/

plan :: Breast ;
  caption :: 'Breast' ;
  component :: Clinical_information ;
  component :: Referral_decision ;
    schedule_constraint :: completed(Clinical_information) ;
  component :: No_two_week_referral ;
    schedule_constraint :: completed(Referral_decision) ;
  component :: Two_week_referral ;
    schedule_constraint :: completed(Referral_decision) ;
  component :: Non_urgent_referral ;
    schedule_constraint :: completed(Referral_decision) ;
end plan .

enquiry :: Clinical_information ;
  caption :: 'Clinical information' ;
  source :: age ; mandatory :: yes ;
  source :: intractable_pain ; mandatory :: yes ;
  source :: nipple_changes ; mandatory :: yes ;
  source :: nipple_disc_features ; mandatory :: yes ;
  source :: skin_changes ; mandatory :: yes ;
  source :: tissue_changes ; mandatory :: yes ;
  source :: Sex ; mandatory :: yes ;
end enquiry .

decision :: Referral_decision ;
  caption :: 'Referral decision' ;
  choice_mode :: single ;
  support_mode :: symbolic ;
  candidate :: Two_week_referral ;
    argument :: for, ( tissue_changes includes 'Discrete lump' and age >= 30 ) ;
    argument :: for, ( skin_changes includes Ulceration ) ;
    argument :: for, ( skin_changes includes Nodule ) ;
    argument :: for, ( skin_changes includes Distortion ) ;
    argument :: for, ( nipple_changes includes Eczema ) ;
    argument :: for, ( nipple_changes includes 'Retraction or distortion' ) ;
    recommendation :: netsupport( Referral_decision, Two_week_referral ) >= 1 ;
  candidate :: Non_urgent_referral ;
    argument :: for, ( tissue_changes includes 'Abscess' ) ;
    argument :: for, ( tissue_changes includes 'Cyst' ) ;
    argument :: for, ( intractable_pain = Yes ) ;
    argument :: for, ( nipple_changes includes Discharge and nipple_disc_features includes 'Large volume' and nipple_disc_features includes 'Bilateral' ) ;
    argument :: for, ( nipple_changes includes Discharge and nipple_disc_features includes Bloodstained ) ;
    argument :: for, ( tissue_changes includes 'Discrete lump' and age < 30 ) ;
    argument :: for, ( nipple_changes includes Discharge and age >= 50 ) ;
    argument :: for, ( tissue_changes includes 'Asymmetrical nodularity' ) ;
    recommendation :: netsupport( Referral_decision, Non_urgent_referral ) >= 1 and netsupport( Referral_decision, Two_week_referral ) < 1 ;
  candidate :: No_referral ;
    recommendation :: netsupport( Referral_decision, Two_week_referral ) < 1 and netsupport( Referral_decision, Non_urgent_referral ) < 1 ;
end decision .
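The recommendation rules in this specification combine arguments by net support, roughly the number of satisfied 'for' arguments minus the number of satisfied 'against' arguments for a candidate. As an illustrative sketch only (the argumentation semantics of PROforma is defined in Fox and Das, 2000; the patient-data encoding below is an assumption), the referral decision can be read as:

# Illustrative reading of the symbolic argument/netsupport scheme used in
# Referral_decision above. The patient-data encoding is an assumption.

def netsupport(arguments, patient):
    """Sum of satisfied 'for' arguments minus satisfied 'against' arguments."""
    return sum(sign for sign, cond in arguments if cond(patient))

two_week_args = [
    (+1, lambda p: "Discrete lump" in p["tissue_changes"] and p["age"] >= 30),
    (+1, lambda p: "Ulceration" in p["skin_changes"]),
    (+1, lambda p: "Eczema" in p["nipple_changes"]),
    # ... remaining 'for' arguments from the specification
]

patient = {"age": 45, "tissue_changes": ["Discrete lump"],
           "skin_changes": [], "nipple_changes": []}

# Mirrors: recommendation :: netsupport(Referral_decision, Two_week_referral) >= 1
if netsupport(two_week_args, patient) >= 1:
    print("Recommend: Two_week_referral")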