International Journal of Control, Automation, and Systems (2010) 8(4):782-792 DOI 10.1007/s12555-010-0410-6
http://www.springer.com/12555
Intelligent Interaction based on a Surgery Task Model for a Surgical Assistant Robot: Awareness of Current Surgical Stages based on a Surgical Procedure Model

Seong Young Ko, Woo-Jung Lee, and Dong-Soo Kwon*

Abstract: This paper deals with providing a surgical robot with awareness of the current surgical stage. The awareness of the surgical stage is the first step toward a natural interaction between a surgeon and a surgical robot, the ultimate goal of which is to help the surgeon perform surgery with a minimum control burden. For this purpose, a surgery task model was defined as a structured form of surgical knowledge, which can be understood by both the surgeon and the robot. The model consists of three components: a surgical procedure model, input information, and an action strategy at each surgical stage. This paper focuses on the awareness of current surgical stages based on the surgical procedure model. The surgical procedure model represents the sequential information of the surgery and is arranged around key surgical stages. To implement the surgical procedure model of a cholecystectomy, 21 human cholecystectomies were decomposed into surgical stages and their relations were then analyzed. To deal with uncertainty, interaction functions are introduced to the model. While further experiments are necessary, it was shown that the key stages-based surgical procedure model could estimate the key surgical stages correctly during one in vivo porcine cholecystectomy.

Keywords: Cholecystectomy, human-robot interaction, laparoscopic assistant robot, surgery task model, surgical procedure model.
Manuscript received September 2, 2008; revised September 1, 2009 and January 20, 2010; accepted February 12, 2010. Recommended by Editorial Board member Sangdeok Park under the direction of Editor Jae-Bok Song. This work was supported by the SRC/ERC program of MOST/KOSEF (grant #R11-1999-008). The contributions of Jonathan Kim and Won-Ho Shin are gratefully acknowledged.

Seong Young Ko is with the Department of Mechanical Engineering, Imperial College London, SW7 2AZ, UK, and was with the Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, 373-1 Guseong-dong, Yuseong-gu, Daejeon, Korea (e-mail: [email protected]).
Woo-Jung Lee is with the College of Medicine, Yonsei University, 134 Sinchon-dong, Seodaemun-gu, Seoul, Korea (e-mail: [email protected]).
Dong-Soo Kwon is with the Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, 373-1 Guseong-dong, Yuseong-gu, Daejeon, Korea (e-mail: [email protected]).
* Corresponding author.
© ICROS, KIEE and Springer 2010

1. INTRODUCTION

During the past decade, many researchers have demonstrated the importance of surgical robots in improving the surgical outcomes of various types of surgery [1-3]. One class of such robots is medical assistant robots, whose objective is to assist a surgeon on behalf of medical staff by taking over responsibilities such as maneuvering a laparoscope [4-9] or assisting the surgeon as a scrub nurse [10,11]. Laparoscopic assistant robot systems have been successfully utilized to replace medical staff responsible for maneuvering the
laparoscope, but at the expense of placing an extraneous control burden on the operating surgeon. Whereas only a few voice commands are needed for intermittent corrections when a cholecystectomy is performed with a well-trained human assistant, considerably more voice commands are needed with a robotic assistant. Although another modality, such as the surgeon's head motion, can be utilized, this does not eliminate the need for continuous commands and may introduce physical stress. Surgery requires delicate handling of tissues at the surgical site, so any disruption of the surgeon's concentration may lead to an undesirable outcome.

Since a surgeon can perform surgery with a smaller control burden when aided by a human assistant, a skilled human assistant provides a good example of how a surgical assistant robot should behave. A key difference between the human assistant and the assistant robot lies in the degree of preliminary knowledge of the surgery. Therefore, for the robot to become an intelligent assistant rather than a motorized surgical tool, it should have preliminary surgical knowledge similar to that of a well-trained human assistant.

The ultimate objective of this study is to develop an interaction scheme with which a robotic assistant can completely substitute for a skillful human assistant. The ideal method may be to develop a complete, human-like robot with both human-level artificial intelligence and interaction capability. However, considering current technical limitations, this remains a distant goal. Since the laparoscopic assistant robot's task domain, i.e., surgery, is very specific, we believe that the
most realistic approach is to formulate the skillful human assistant's knowledge into a structured form that the robotic system can understand. This structured knowledge is defined as a surgery task model (STM). Fig. 1 illustrates the basic concept of our interaction scheme using a STM. This concept can be considered a specific form of the general human-robot interaction (HRI) proposed by Yoon et al. [12,13]. Their model was proposed for an ultimate robot system with relatively high cognition. In the case of a surgical assistant robot, however, since its work domain is specific and its behavior is restricted, we believe that interaction based on only a task model and a restricted interaction model is applicable. Unlike previous assistant robots that simply follow the surgeon's direct commands, this assistant robot can estimate the current surgical task using the STM and the status of the surgical environment. It can then suggest appropriate actions, such as maneuvering the camera view.

Fig. 1. Basic concept of an interaction scheme between a surgeon and a surgical assistant robot: the surgeon and the robot exchange commands, responses, suggestions, and state information through the surgery task model.

In this paper, we focus on the first component of the STM, the surgical procedure model. The surgical procedure model is extracted from real human cholecystectomies and is modeled with a state-transition diagram.

2. INTERACTION SCHEME BASED ON SURGERY TASK MODEL

2.1. Related works

Many methods for task analysis and task modeling have been developed. To analyze tasks efficiently, Goals, Operators, Methods, and Selection rules (GOMS), Task Action Grammar, and similar frameworks have been studied by human-computer interaction research groups [14]. To describe discrete event systems, modeling methods using state-transition models such as automata and Petri nets have been studied [15,16]. Recently, these task analysis methodologies have been applied to the medical robotics field. MacKenzie et al.
constructed a hierarchical decomposition of Nissen fundoplication, a type of laparoscopic surgery for stopping the reflux of stomach acid, using a hierarchical task analysis [17]. The procedure was broken down into a sequence of surgical steps and these steps were further broken down into surgical sub-steps, tasks, sub-tasks, and finally primitive motions. Their work showed that a surgical procedure can be expressed with a sequence of
decomposed surgical steps, and this decomposition provides an analytic methodology for evaluating the effectiveness of new techniques, such as new tools. However, this model considered neither the variance of surgery nor the role of a surgical robot.

Rosen et al. proposed a modeling method for minimally invasive surgery using a discrete Markov model to assess surgical performance [18]. Their modeling method is a wholly bottom-up approach; that is, they constructed the discrete Markov model from a sequence of tool motions, classified by the position and orientation of the tool and the force and torque exerted on it. Their approach provides a generalized method for decomposing a surgical operation into motion primitives, and it has been integrated with a laparoscopic simulator to evaluate trainee surgeons' skill improvement. However, their work focused on evaluation of surgical performance, and further discussion is required to apply it to surgeon-robot interaction.

Sakai et al. constructed a patient response model, which was integrated with an endoscopic sinus surgery simulator [19]. The response model was obtained from real surgery to express the relations between the surgeon's treatment and the patient's status; these relations were represented through a Bayesian network.

Some researchers have utilized task models to control assistant robots. Fukuda et al. utilized a Petri-net based task model to control a meal assistant robot [20]. In their research, the recognition rate of commands from an electromyogram signal was improved by restricting the possible commands in each situation according to the task model; the recognition rate was substantially improved compared to that without a task model. Ohnuma et al. utilized timed automata to model an operating scenario, a surgical task, a surgeon, a scrub nurse or a scrub nurse robot, a patient, and their interaction [21].
This model was integrated with a scrub nurse robot to control the robot. Their approach is well-organized for natural HRI and is similar to ours. However, in order to apply it to the control of a laparoscope, the surgery needs to be analyzed in more detail.

2.2. Definition of surgery task model

In our previous work [22], we defined a surgery task model (STM) as a structured form of the surgical knowledge required for a surgeon to perform a specific surgery, comprising a surgical procedure model (SPM), input information for identifying the current surgical state, and an action strategy at each surgical state. While it would be an onerous task to standardize or quantify every step of an arbitrary surgery, we showed that some simple surgical procedures, such as a cholecystectomy (surgical removal of the gallbladder), can be easily decomposed into discrete steps. In addition, since a cholecystectomy is one of the most widely performed laparoscopic procedures, we chose it as our first application. Although this paper explains our approach using a cholecystectomy, we believe that the method for realizing the surgical assistant robot's awareness, as well as the interaction scheme, can be
applied to other, more complex surgical procedures.

Fig. 2. Overall configuration of the proposed interaction scheme based on the surgery task model.

To take the variance of the surgical procedure into account, the SPM is modeled with a state-transition diagram. States are defined as sub-procedures, and transitions are defined as changes of state. The transitions are triggered by events such as information pertaining to the operating room, the surgeon's behavior, or information regarding the patient's abdomen. In this paper, considering that the model will be integrated with a laparoscopic assistant robot, the surgical view captured by the laparoscope and the surgeon's commands were chosen as the major input modalities.

Fig. 2 shows the overall configuration of the integration of the STM with our laparoscopic assistant robot, KaLAR [9,23]. First, the current surgical stage is estimated by using the SPM and events extracted from the laparoscopic view and the surgeon's voice commands. Where the estimation is uncertain, verbal interaction between the surgeon and the SPM can be used to eliminate the uncertainty. A viewing strategy corresponding to the current surgical stage is then selected from a predefined action strategy map. Next, a desired view direction is generated based on the viewing strategy and the input information. Finally, KaLAR moves to provide the determined view.

2.3. Mathematical expression of surgery task model and surgical procedure model

One of the popular state-transition formalisms used to describe a discrete event system is the automaton. A deterministic automaton with input/output mapping is called a Moore automaton [16]. Although they may not meet all of an automaton's characteristics, both the STM and the SPM can be expressed as an extended form of the Moore automaton, because a surgical procedure can be decomposed into surgical stages, which are defined as meaningful surgical treatments.
First, the SPM is expressed with eight components, as in (1), by adding interaction functions and intermediate variables (surgical stages) to the Moore automaton:

SPM = (X, Y, E, f, I, g, X0, Xf)    (1)
- X is a set of states.
- Y is a set of surgical stages. The surgical stages are intermediate variables used to establish common ground between the assistant robot and the surgeon.
- E is a set of events associated with transitions.
- f: X×E→X is a transition function: f(x,e) = x´ means that there is a transition labeled by the event e from the state x to the state x´; f can be a partial function.
- I: X×E→X* is an interaction function: I(x,e) = [x0, x1, …, xk] means that there exist several transitions labeled by the same event e from x to x0, x1, …, or xk, and that the most probable destination state is x0, the second most probable is x1, and so forth.
- g: X→Y is an output function: g(x) = y means that the surgical stage y is assigned to the state x; g is also a partial function.
- X0 is a set of initial states: this denotes the "start" state.
- Xf is a set of final states: this denotes the "end" state and may include "emergency" states.

Since the SPM contains most characteristics of the surgical procedure, its components occupy the greatest portion of the STM. Two additional components are necessary to express the STM, as in (2): a set of action strategies Z and an action function h.
STM = (SPM, Z, h) = (X, Y, Z, E, f, I, g, h, X0, Xf)    (2)
- Z is a set of action strategies. The action strategies are the final output of the STM; each action strategy determines how the robot should behave.
- h: Y→Z is an action function: h(y) = z means that the action strategy z is assigned to the surgical stage y; h is also a partial function.

One could define a new function mapping the state x directly to the action strategy z, but we did not merge these functions, in order to preserve the surgical stages as common ground.

2.4. Contributions

A major contribution of this paper is a key stages-based SPM that provides a surgical assistant robot with awareness of the surgical stage currently being performed. Such awareness is a fundamental requirement for natural interaction with a surgeon. For this purpose, a SPM consisting of key stages and peripheral stages is proposed and its implementation is explained. In addition, this paper presents the basic concept of natural interaction between a surgical (assistant) robot and a surgeon using a STM that includes the proposed SPM. During the implementation of the SPM, an interaction function is proposed to resolve the model's uncertainty, which originates from a lack of input information or from vagueness of the surgical procedure itself. As the most feasible and efficient approach, the interaction function is implemented through verbal communication with the surgeon. The proposed SPM was integrated with a compact laparoscopic assistant robot, and its feasibility was demonstrated during a porcine cholecystectomy.
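To make the definitions in (1) and (2) concrete, the following minimal Python sketch models the SPM as a Moore-style automaton with a transition function f, an interaction function I, and an output function g. The state indices and stage names mirror the I(7, DI&CT) = [11, 8] example discussed later; the `step`/`confirm` interface and the tiny model are our own illustration, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class SPM:
    f: dict   # transition function: (state, event) -> next state
    I: dict   # interaction function: (state, event) -> [states in priority order]
    g: dict   # output function: state -> surgical stage name
    x0: int   # initial state

    def step(self, x, e, confirm=None):
        """Advance one transition. For uncertain transitions the candidates
        are proposed in priority order via the confirm(candidate) callback,
        which stands in for the surgeon's yes/no answer."""
        if (x, e) in self.I:                      # uncertain transition
            for cand in self.I[(x, e)]:
                if confirm is None or confirm(cand):
                    return cand
            return self.I[(x, e)][0]              # fall back to most probable
        return self.f.get((x, e), x)              # stay put on unknown events

# Toy model: state 7 may go to state 11 or 8 on event "DI&CT" (cf. Table 2)
spm = SPM(
    f={(11, "CA"): 12},
    I={(7, "DI&CT"): [11, 8]},
    g={7: "Lifting/Fixing Gallbladder", 11: "Exposing Artery/Duct",
       8: "Dealing with area around Gallbladder"},
    x0=7,
)
x = spm.step(7, "DI&CT", confirm=lambda s: s == 11)  # surgeon answers "Yes"
print(x, spm.g[x])  # 11 Exposing Artery/Duct
```

The partiality of f and g is captured by plain dictionaries: a missing key simply means the function is undefined there.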
3. SURGICAL PROCEDURE MODEL BASED ON KEY SURGICAL STAGES

In our previous work, we constructed two different representations of the SPM: a fully connected SPM and a normal SPM. These SPMs, however, revealed several weaknesses. The normal SPM cannot cope with variations of the surgery, because it represents only the most frequent sequence. The fully connected SPM, meanwhile, may lose the sequential flow of the surgery due to unmodeled sequences or mistakes by the surgeon. In this paper, we propose a SPM based on the key surgical stages. The proposed model (a) can accommodate variations of the surgical procedure and (b) maintains its basic flow. This section also explains how to obtain the model from real human-assisted human cholecystectomies.

The surgical stages are the units of surgical treatment, and the objectives of the different stages vary. These stages should be determined carefully so that the surgeon can understand them easily, because they establish common ground between the robot and the surgeon. Note that the surgical stages differ from the states: the states are used to distinguish the same treatment performed in different situations, so several states may correspond to the same surgical stage. Key surgical stages are defined as the stages indispensable to the surgery. The sequence of these stages is defined as the normal SPM, but the method for obtaining the normal SPM is more systematic than in our previous work. The remaining surgical stages, i.e., those other than the key stages, are defined as peripheral surgical stages.

3.1. General approach to obtain surgical procedure model

The process of obtaining a SPM based on key surgical stages is explained in Table 1. Since the SPM represents all characteristics of the specific surgery, the number and types of key surgical stages depend upon the type of surgery.
Table 1. Process to obtain a key stages-based SPM.
- Step A: Decompose the surgery into surgical stages that have objectives distinct from those of other stages.
- Step B: Identify the events (transition conditions) that distinguish surgical stages, and determine an action strategy at each surgical stage. If one surgical stage includes several action strategies, divide it into several stages; consider several similar successive stages as one surgical stage.
- Step C: Determine the key surgical stages. Find the longest sequence of surgical stages common to all cases, and define all surgical stages belonging to it as key surgical stages.
- Step D: Compose a normal SPM. Remove successive repeated stages; if there is more than one longest sequence, merge them by connecting the differing parts to each other.
- Step E: Construct the whole SPM. Classify the stages into key stages and peripheral stages (stages located in a different order from the normal SPM obtained in Step D are considered peripheral stages); construct local procedures between the key stages using the peripheral stages; then construct the whole SPM by connecting the local surgical procedures.
- Step F: Assign a different state index to each surgical stage. If there are several identical stages, assign different state indexes to them.
- Step G: Determine the transition functions f. If there exists a transition from state x to state x´ requiring event e, define x´ = f(x, e).
- Step H: Determine the interaction functions I, considering the relations between the states. If there is more than one transition from state x triggered by the same event e, define I(x, e) = [x0, …, xi], ordering the destination states by their transition probability.

Sections 3.2 to 3.5 outline the steps in detail using a cholecystectomy and the KaLAR system as an example.

3.2. Definition of surgical stages and transition conditions - steps A and B

Operations performed by skilled surgeons may not include several patterns that developing surgeons carry out, because skilled surgeons perform the operation semi-automatically or can skip several steps owing to their wealth of experience. In order to eliminate the possibility of missing any stages, we used cases performed by a developing surgeon as well as by an expert surgeon to construct the SPM. In addition, a semi-expert surgeon's cases were used to assess whether another surgeon's cases are applicable to the modeled SPM. Thus, a total of 21 human-assisted human cholecystectomies - 10 cases by a skilled surgeon, 7 by a developing surgeon, and 4 by a semi-expert surgeon - were analyzed. All cases were recorded in the laparoscopic view, and four of them were also recorded from an external view. Among these cases, nine - 4 involving the skilled surgeon, 1 with the developing surgeon, and 4 with the semi-expert surgeon - were randomly chosen for evaluation of the SPM. The other twelve cases were used to construct the SPM.

First, each surgical procedure was decomposed into meaningful surgical stages by analyzing the recorded views. The primary criterion for decomposition is the goal of each stage, and the secondary criterion is the primary surgical tools in use. As a result, twenty-nine surgical stages were considered in our study. The transition conditions constitute the set of events E in (1); the events indicate how the surgical stages are distinguished, and eighteen events were used in this paper. For the full list of the surgical stages and events, please see [24].
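The transition probabilities that later order destination states (Step H) and annotate the normal SPM come from counting stage-to-stage transitions across the recorded cases. A hedged sketch of that bookkeeping, with made-up stage sequences in place of the real 12 cases:

```python
from collections import Counter, defaultdict

# Illustrative stage sequences; the real model uses 12 annotated cases
# with 29 stage types and 18 events.
cases = [
    [0, 1, 2, 3, 4, 5, 6, 7],
    [0, 1, 2, 3, 4, 5, 17, 5, 6, 7],
    [0, 1, 2, 3, 4, 5, 6, 7],
]

# Count each observed stage-to-stage transition
counts = Counter()
for seq in cases:
    for a, b in zip(seq, seq[1:]):
        counts[(a, b)] += 1

# Normalize by the total number of outgoing transitions per stage
totals = defaultdict(int)
for (a, b), n in counts.items():
    totals[a] += n
probs = {(a, b): n / totals[a] for (a, b), n in counts.items()}
print(probs[(5, 6)])  # 3 of 4 outgoing transitions from stage 5 -> 0.75
```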
3.3. Key surgical stages and normal surgical procedure model - steps C and D

The key surgical stages are the indispensable surgical stages that occur in a specific order in all observed cases. To obtain the key stages, the sequences common to the 12 modeling cases need to be found; among these, the longest was defined as the normal SPM. The normal SPM defined in this paper is expected to be the same as the normal SPM defined in our previous paper, where we chose the transitions with the highest probabilities [22]. However, the previous method could not guarantee a sequence that starts from the start stage and arrives at the end stage. Thus, a more systematic method is proposed here.

Fig. 3 shows the flow chart of the algorithm to find the longest sequence of key stages. First, all real sequences of the 12 cases, obtained in Step B, are loaded. Then, candidate sequences are generated as combinations of the stages appearing in the shortest of the 12 cases. In our study, the number of stages in the shortest case was 27. In addition, since some stages, such as stages 0-3, 15, and 26-28, are obviously included in the common sequence and their positions are known, these stages were excluded from the search for simplification; the maximum search length was thus reduced to 19. The generated candidates are then compared with all sequences of the real surgical procedures, and it is checked whether at least one candidate is common to all of them. If there is no common sequence, candidates with one fewer stage are generated and the comparison is repeated. If at least one common sequence is found, the algorithm terminates. The algorithm guarantees at least one common sequence, even if the length of the obtained sequence is only one.

Fig. 3. Flow chart to obtain the sequence of key surgical stages.

After obtaining the longest common sequence with the search algorithm outlined in Fig. 3, the sequence was refined to obtain the normal SPM. This refinement is Step D in Table 1. In our case, since there were successive identical key surgical stages 9, one stage 9 was deleted. Fig. 4 shows the final normal SPM of a human cholecystectomy. It was verified that this normal SPM is identical to the SPM obtained by choosing the highest-probability transitions (or the second highest in the case of a revisit).

Fig. 4. Normal surgical procedure model of a cholecystectomy: Start → Inserting RH Trocar → Inserting LH Trocar → Inserting AST Trocar → Waiting Start → Lifting/Fixing Gallbladder → Exposing Artery/Duct → Clipping Artery/Duct → Cutting Artery/Duct → Separating GB from Liver → Extracting and Re-inserting Lap. → Collecting GB → Extracting AST trocar → Extracting LH trocar → Extracting RH trocar → Extracting Laparoscope → Extracting GB → Suturing Ports → End, with transition probabilities annotated on each arc.

3.4. Surgical procedure model based on key surgical stages - steps E, F, and G

Step E is to construct a whole SPM based on the key stages. First, all surgical stages are classified into the key stages and the peripheral stages. If surgical stages are included in the normal SPM but are located in a different position, they are considered peripheral stages. To construct the whole SPM, the local state-transition diagrams from one key stage to the next key stage are obtained by connecting the peripheral stages. This step produces a more complex state-transition diagram, as shown in Fig. 5.

Fig. 5. Key stages-based surgical procedure model for a cholecystectomy (states labeled with surgical stages; transitions labeled with events such as DI, CT, HO, GR, EL, IR, CA, and SC).

Note that the surgical stages do not represent the states X, because several states are identified with the same surgical stage. Instead, the surgical stages are highly related to the action strategies, because the action strategies are defined based on the surgical stages. Thus, it is necessary to reassign a state index value for each surgical stage. Fig. 6 shows the key surgical stages-based SPM, which includes 52 states and 97 transition functions. Note that different state index values can be assigned to the same surgical stage, as shown in Fig. 6. This is the result of Steps F and G.

Fig. 6. Key stages-based SPM with reassigned state indexes (52 states, 97 transition functions).
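The key-stage search of Section 3.3 (Fig. 3) amounts to testing ever-shorter candidate subsequences of the shortest case until one is contained, in order, in every recorded sequence. A brute-force sketch under that reading (the case data and stage indices are illustrative, not the real 12 cases):

```python
from itertools import combinations

def is_subsequence(cand, seq):
    """True if cand appears in seq in order (not necessarily contiguously)."""
    it = iter(seq)
    return all(s in it for s in cand)   # 'in' consumes the iterator

def longest_common_key_sequence(cases):
    """Candidates are drawn from the shortest case, longest first, as in
    Fig. 3; the first candidate common to all cases is returned. This is
    exponential in the worst case, which is why the paper prunes known
    stages to cap the search length at 19."""
    shortest = min(cases, key=len)
    for length in range(len(shortest), 0, -1):
        for cand in combinations(shortest, length):
            if all(is_subsequence(cand, c) for c in cases):
                return list(cand)
    return []

cases = [
    [0, 1, 2, 5, 6, 9, 9, 15],
    [0, 1, 2, 17, 5, 6, 9, 15],
    [0, 1, 5, 2, 5, 6, 9, 15],
]
print(longest_common_key_sequence(cases))  # [0, 1, 2, 5, 6, 9, 15]
```

Because candidates are generated longest first, termination at the first hit guarantees a longest common sequence, matching the guarantee stated for the Fig. 3 algorithm.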
3.5. Model's uncertainty and interaction functions - step H

We define the model uncertainty as in (3). This value indicates the possibility that the SPM cannot estimate a unique current state: the model uncertainty grows with the number of uncertain transitions, i.e., transitions that start at the same state and require the same event but arrive at different states.

U_m = (No. of uncertain transitions) / (No. of all transitions)    (3)
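Equation (3) can be computed directly from the transition table: group transitions by their (state, event) key and count those keys with more than one destination. A small sketch with an illustrative transition list:

```python
from collections import defaultdict

# Illustrative (state, event, next_state) triples; the real model has 97.
transitions = [
    (7, "DI&CT", 11), (7, "DI&CT", 8),   # same key, two destinations -> uncertain
    (11, "CA", 12), (12, "DI", 13),
]

by_key = defaultdict(list)
for x, e, nxt in transitions:
    by_key[(x, e)].append(nxt)

# Every transition whose (state, event) key has multiple destinations counts
uncertain = sum(len(v) for v in by_key.values() if len(v) > 1)
u_m = uncertain / len(transitions)
print(f"U_m = {u_m:.1%}")  # 2 of 4 transitions are uncertain -> U_m = 50.0%
```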
In many cases, it is difficult to eliminate the model uncertainty, as doing so would require complex and cumbersome additional sensors or methods to distinguish between two transitions. It is therefore more efficient to utilize the surgeon's intelligence. As mentioned before, the interaction functions are in charge of dealing with uncertain transitions by asking the surgeon about the next surgical stage. In order to prioritize the uncertain transitions, the interaction functions are designed to include the order of priority of the possible subsequent stages. The priority values were obtained from their occurrences per case. The SPM shown in Fig. 6 has four pairs of uncertain transitions, as listed in Table 2, and the corresponding interaction functions are given in the rightmost column of Table 2. Note that, since the surgical stages no longer coincide with the states, the interaction functions are defined in terms of the reassigned state index values. For example, consider I(7, DI&CT) = [11, 8]. In this interaction function, the numbers 7, 8, and 11 are the reassigned state index values shown in Fig. 6. When the event DI&CT is detected at state 7, the assistant robot system first notifies the surgeon that the current stage is expected to be state 11 by explaining the stage with a comment such as, "The current surgical stage is Exposing Artery/Duct." If the answer is "Yes" or the surgeon does not respond, the next state is confirmed to be state 11. Otherwise, if the answer is "No", the robot system considers the second possible subsequent stage. That is, since the next state is then state 8, which is associated with stage 19, the assistant robot explains the newly estimated current stage by saying, "The current stage is Dealing with the area around the Gallbladder." If there are more than two possible next stages, this verbal interaction is repeated several times. The number of interaction functions and the model uncertainty defined by equation (3) are listed in Table 3.

Table 3. Uncertainty of surgical procedure model.
Model                | No. of Interaction Functions | No. of Uncertain Transitions | No. of All Transitions | Model Uncertainty
Key stages-based SPM | 4                            | 8                            | 97                     | 8/97 ≈ 8.2%

3.6. Evaluation of surgical procedure model
The robustness of the SPM was analyzed by applying the obtained model to other cholecystectomy cases. The expert surgeon's four cases, the developing surgeon's single case, and the semi-expert surgeon's four cases that were not used during the modeling process were utilized in the evaluation. Based on the same sets of surgical stages and events, these nine cases were decomposed into sequences of surgical stages and applied to the SPM in Fig. 6. The number of verbal interactions required to determine the subsequent steps, the number of failures in estimation of the peripheral stages, and the number of failures in estimation of the key stages were obtained as shown in Fig. 7. Here, failure refers to the situation in which the subsequent state or stage was not estimated correctly. In order to check whether the remaining sequence is reachable even after a failure occurs, the state was maintained at the wrongly estimated state upon the occurrence of a failure. This evaluation showed that the sequence of the surgical stages could be estimated even in instances of failure in estimation of the states. Fig. 7 also shows that all key surgical stages can be estimated correctly.

[Fig. 7: bar chart (y-axis 0-12) comparing, for each evaluation case (ES1-ES4, DS1, SS1-SS4), the use of interaction functions, failures to estimate peripheral stages, and failures to estimate key stages.]
Fig. 7. Results of awareness of current stages based on surgical procedure model.
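The model uncertainty of equation (3), the ratio of uncertain transitions to all transitions, is straightforward to compute once the transition structure is tabulated. The sketch below is illustrative only: it assumes a hypothetical dictionary encoding in which each (state, event) pair maps to the list of its possible next states, so a list longer than one marks an uncertain transition.

```python
def model_uncertainty(transitions):
    """Ratio of uncertain transitions to all transitions (cf. equation (3)).

    transitions: dict mapping (state, event) -> list of possible next
    states. Every list entry counts as one transition; all entries of a
    multi-valued list count as uncertain transitions.
    """
    uncertain = sum(len(nxt) for nxt in transitions.values() if len(nxt) > 1)
    total = sum(len(nxt) for nxt in transitions.values())
    return uncertain / total

# Toy fragment (not the full 52-state model): one uncertain pair out of
# four transitions gives an uncertainty of 2/4 = 50%.
toy = {
    (7, "DI&CT"): [11, 8],  # uncertain: two candidate next states
    (11, "CL"): [12],       # deterministic
    (12, "SC"): [13],       # deterministic
}
```

For the key stages-based SPM, 8 uncertain transitions out of 97 give 8/97 ≈ 8.2%, as listed in Table 3.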
Table 2. Interaction functions of key stages-based surgical procedure model.
Key stage:         5     5     10    10    11    11    12    12
Current stage:     5     5     10    10    10    10    12    12
Next stage:        6     19    11    23    23    24    11    25
Current state ID:  7     7     25    25    34    34    39    39
Next state ID:     11    8     30    26    35    38    40    44
Occurrences of transition (Tr1, Tr2): (9, 0)  (2, 1)  (1, 11)  (1, 0)  (2, 0)  (1, 2)  (1, 0)  (1, 3)
Transition conditions (Tr1, Tr2): DI&CT DI&CT, HO GR, DI DI, DI GR, DI DI, GR DI
Interaction functions: I(7, DI&CT) = [11, 8]; I(25, DI) = [30, 26]; I(34, DI) = [38, 35]; I(39, DI) = [44, 40]
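The priority ordering inside each interaction function follows from the occurrence counts in Table 2: the more frequently observed next state is proposed to the surgeon first. A minimal sketch of that construction (the list-of-pairs encoding is an assumption, not the paper's implementation):

```python
def build_interaction_function(candidates):
    """Order the candidate next states of one uncertain (state, event)
    pair by observed occurrence count, most frequent first.

    candidates: list of (next_state, occurrence_count) pairs.
    """
    return [state for state, _ in
            sorted(candidates, key=lambda pair: pair[1], reverse=True)]

# I(7, DI&CT) = [11, 8]: the transition to state 11 was observed 9
# times and the transition to state 8 twice, so 11 is proposed first.
i_7_dict = build_interaction_function([(8, 2), (11, 9)])
```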
[Fig. 8: bar chart of voice commands in cholecystectomy (main surgery vs. start/end portion): Human/Human 124.0, Robot/Porcine 11.3, Robot with STM (Ideal) 3.8.]
Fig. 8. Comparison of number of voice commands.

Fig. 8 presents a comparison of the number of necessary voice commands. The number of voice commands in the human-assisted human cholecystectomy was measured by video analysis. The number for the robot-assisted porcine cholecystectomy was obtained from the in vivo experiments performed in our previous work [9]. In the case of the robot-assisted cholecystectomy with the STM, the number of voice commands was predicted from Fig. 7. As shown in Fig. 8, the necessary voice commands are expected to be reduced by using the interaction based on the SPM. However, since the values in Fig. 8 were obtained for an ideal situation, in which the action strategies are assumed to be perfectly extracted, evaluation through an in vivo experiment is required for a more practical assessment.

4. INTEGRATION WITH LAPAROSCOPIC ASSISTANT ROBOT

4.1. Compact laparoscopic assistant robot
In order to verify its performance, the SPM was integrated with our compact laparoscopic assistant robot, KaLAR, shown in Fig. 9 [9,23,25]. The direction of the view can be altered by changing the alignment of the bendable articulated joints, while magnification/reduction of the view can be achieved by moving closer to or away from the surgical site using a linear actuator. Since KaLAR itself functions as a laparoscope, a CCD camera module and a bundle of optical fibers are installed on the tip of the bending section, as shown in Fig. 9. Please refer to our previous papers for the tool identification and voice interaction.
4.2. Implementation of surgical procedure model
The SPM described in Section 3 was implemented and integrated with the KaLAR system. The figures and tables that determine the eight components of the SPM in (1) are listed in Table 4. The set of states X is related to the index values reassigned in Fig. 6. The set of surgical stages Y and the set of events E are not listed here; for the full lists, see [24]. The transition functions, the initial state, and the final state are easily extracted from Fig. 6. The interaction functions are listed in Table 2, and the output function can be extracted from the relation between Figs. 5 and 6. Fig. 10 shows the integrated control software for the KaLAR system, which displays (a) the laparoscopic view captured by the camera on the tip of KaLAR at the left side, (b) the processed image that detects the tip position of surgical tools at the bottom left side, and (c) the estimated surgical stages at the right side. In this test, an image of the inside of the abdomen was placed in front of KaLAR to simulate the environment inside the abdomen. The control software also reports the type of detected surgical tool and the estimated surgical stage. Fig. 10 indicates that the detected tool is a dissector and the estimated surgical stage is stage 5, "Lifting/Fixing Liver".

4.3. Implementation of interaction functions
In this section, implementation of the interaction function is described in detail. Fig. 11 shows the flow

Table 4. Relations between components of SPM and figures and tables.
Symbol    | Meaning                | Corresponding Figures and Tables
X         | Set of states          | Fig. 6
Y         | Set of surgical stages | 29 surgical stages [24]
E         | Set of events          | 18 events [24]
f: X×E→X  | Transition functions   | Fig. 6
I: X×E→X  | Interaction functions  | Table 2
g: X→Y    | Output function        | Relations b/w Figs. 5 and 6
X0        | Initial state          | < state 1 > in Fig. 6
Xf        | Final state            | < state 52 > in Fig. 6
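The eight components of Table 4 map directly onto a small data structure. The sketch below is a hypothetical encoding (dictionaries for f, I, and g; only a fragment of the 52-state model around state 7), not the KaLAR implementation itself:

```python
from dataclasses import dataclass

@dataclass
class SurgicalProcedureModel:
    """Key stages-based SPM as the 8-tuple (X, Y, E, f, I, g, X0, Xf)."""
    X: set    # states
    Y: set    # surgical stages
    E: set    # events
    f: dict   # transition functions: (state, event) -> next state
    I: dict   # interaction functions: (state, event) -> prioritized next states
    g: dict   # output function: state -> surgical stage
    x0: int   # initial state
    xf: int   # final state

    def step(self, state, event):
        """Return (next_state, alternatives). The alternatives list is
        non-empty only for uncertain transitions covered by I."""
        if (state, event) in self.I:
            first, *rest = self.I[(state, event)]
            return first, rest
        return self.f[(state, event)], []

# Illustrative fragment: stage names for states 8 and 11 follow the
# paper; the remaining entries are assumptions for the example.
spm = SurgicalProcedureModel(
    X={7, 8, 11},
    Y={"Exposing Artery/Duct",
       "Dealing with the area around the Gallbladder"},
    E={"DI&CT"},
    f={},
    I={(7, "DI&CT"): [11, 8]},   # I(7, DI&CT) = [11, 8] from Table 2
    g={11: "Exposing Artery/Duct",
       8: "Dealing with the area around the Gallbladder"},
    x0=7, xf=11,
)
```

With this fragment, `spm.step(7, "DI&CT")` yields state 11 with state 8 saved as the alternative, mirroring the uncertain transition discussed in Section 3.5.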
[Fig. 9 labels: bendable mechanism, actuators for bending, CCD camera, linear actuator, attaching mechanism, passive holder, optical fibers.]
Fig. 9. Developed compact laparoscopic assistant robot.
Fig. 10. Integrated control software of KaLAR system.
[Fig. 11 flowchart summary: from the vision image and voice command, extract the transition conditions; if the transition is deterministic, choose the current state via the transition functions; if uncertain, choose the first state given by the interaction functions, save the alternative states, and inform the surgeon of the current state; on a deny command, change the current state among the alternative states (or maintain the state if there is no alternative) and inform the surgeon; finally, determine the current surgical stage and the action strategy.]
(a) Stage 4: Waiting start. (b) Stage 6: Exposing artery/duct.
Fig. 11. Chart to correct the estimated state according to the surgeon's response.
chart to correct the estimated state according to the surgeon's response. First, the control software of KaLAR checks whether any transition condition is detected from the laparoscopic view or the surgeon's voice commands at every sampling time. If a transition condition is found, it is extracted and converted to an ID value. If the detected condition is not the deny command "No", the next step is to check whether the pair of the current state and the current transition condition causes an uncertain estimation by looking up the interaction functions. If a suitable interaction function exists, the first state in the interaction function is set as the current state and the others are saved as the alternative states. The surgeon is then informed of the current state. Finally, the current surgical stage and the action strategy are determined from the current state, in turn. If no suitable interaction function exists, the next state can be chosen deterministically through the transition function; the current surgical stage and the action strategy are likewise obtained deterministically. In the first checking process, if the detected condition is the deny command, the next step is to check whether there are saved alternative states. If alternative states exist, the current state is discarded and the first probable state among them is chosen as the current state, because the surgeon issued the deny command to reject the current stage. KaLAR then informs the surgeon of the new current state. If the detected condition is a deny command but there is no alternative state, the condition has been obtained improperly, and the command is therefore ignored. The implemented interaction functions were integrated with KaLAR and verified in a lab environment where interaction with KaLAR could be performed naturally.
4.4.
In vivo Porcine Experiment and Discussion
In order to evaluate the performance of the estimation of current surgical stages, an expert surgeon performed a porcine cholecystectomy. This evaluation was carried out in accordance with the guidelines enforced by the local ethics committees. In this experiment, we utilized KaLAR integrated with the proposed SPM. Fig. 12 shows several still images of the control software during the porcine cholecystectomy. The black solid arrow and
(c) Stage 7: Clipping artery/duct. (d) Stage 11: Collecting GB.
Fig. 12. Still images of control software, extracted from video clips.
[Fig. 13: traces versus time (0-2500 s); legend: estimated state/stage (small spots) and next probable state/stage (larger circles).]
(a) Trace of the estimated states.
(b) Trace of the estimated surgical stages.
Fig. 13. Traces of estimated states and surgical stages during in vivo porcine experiment.
the white arrow on the still images indicate the estimated current surgical stage and the alternative stage, respectively. Even though the porcine cholecystectomy is slightly simpler than a human cholecystectomy, it was shown that the surgical stages could be estimated well from the 'start' stage to the 'collecting GB' stage, as shown in Fig. 13. Fig. 13(a) shows the trace of the estimated states, where small spots indicate the primarily estimated state and the larger circles indicate the next probable state, that is, the alternative state. Fig. 13(b) shows the trace of the estimated surgical stages, which were determined by the output function g.
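The correction flow of Fig. 11, which produced the traces in Fig. 13, can be sketched as a small estimator: an uncertain transition stores its alternative states so that a later deny command ("No") promotes the next candidate, while a deny with no saved alternative is ignored. Again, this is an illustrative sketch under assumed dictionary encodings, not the KaLAR control software:

```python
class StageEstimator:
    """Tracks the current state and corrects it from the surgeon's response."""

    def __init__(self, f, I, g, x0):
        self.f, self.I, self.g = f, I, g   # transition, interaction, output
        self.state = x0
        self.alternatives = []             # saved for uncertain transitions

    def on_condition(self, event):
        """Process one detected transition condition; return the surgical
        stage that would be announced to the surgeon."""
        if event == "No":                  # deny command
            if self.alternatives:          # promote the next probable state
                self.state = self.alternatives.pop(0)
            # no alternative: condition obtained improperly, ignore it
        elif (self.state, event) in self.I:    # uncertain transition
            first, *rest = self.I[(self.state, event)]
            self.state, self.alternatives = first, rest
        elif (self.state, event) in self.f:    # deterministic transition
            self.state, self.alternatives = self.f[(self.state, event)], []
        return self.g[self.state]

# Fragment around state 7; stage names follow the paper where available.
est = StageEstimator(
    f={},
    I={(7, "DI&CT"): [11, 8]},
    g={7: "Lifting/Fixing Liver", 11: "Exposing Artery/Duct",
       8: "Dealing with the area around the Gallbladder"},
    x0=7)
```

Here `est.on_condition("DI&CT")` first announces "Exposing Artery/Duct"; a subsequent "No" switches the estimate to "Dealing with the area around the Gallbladder", and a further "No" with no remaining alternative leaves the state unchanged.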
5. CONCLUSIONS AND FUTURE WORK
In this paper, a methodology to construct a surgical procedure model based on key surgical stages was proposed in order to enable an assistant robot to be aware of the current surgical stages being performed. In addition, the SPM was integrated into a compact laparoscopic assistant robot, KaLAR. Our ultimate goal is to develop a surgical assistant robot that can naturally interact with a surgeon with a minimum control burden. Considering feasibility, we proposed a surgery task model defined as structured knowledge about surgery. Among the components of the STM, the SPM is the most important, because it contains the most information about the surgery. In this paper, the concept of a key surgical stages-based SPM was proposed and a model of the cholecystectomy was extracted from recorded views of human cholecystectomies. Although only one porcine cholecystectomy has been carried out, the experiment showed that it is possible to make the assistant robot aware of the current surgical stages. We believe that the interaction scheme based on an STM is the most feasible way to realize an intelligent surgical assistant robot that can interact with a surgeon naturally. However, many challenging problems remain. First, the main weakness of the STM is that each new type of surgery must be modeled with effort similar to the modeling process in this paper, because the STM and SPM depend highly on the type of surgery. Thus, it is necessary to generalize the modeling process. Second, it is necessary either to verify that the key surgical stages are always estimated correctly or to consider how to deal with failures in estimation. Finally, in-depth research on an efficient method to utilize the interaction function is required. In the current study, the interaction function is quite simple, as shown in Table 2, and thus produces only a small number of verbal interactions, as shown in Fig. 7.
In the case of more complex surgery, however, the number of interaction functions will increase and each function can have more than two destination states. In this case, the verbal communication could bother the surgeon. It is necessary to reduce the model's uncertainty using as little additional information as possible and to find a more efficient way to deal with the model's uncertainty than verbal communication alone. In order to develop a complete STM, the action strategies and the necessary input information should be considered further. In the case of the action strategies, we developed measurement devices and measured all trajectories of two tools and a laparoscope during several in vivo animal experiments. Currently, we are analyzing the view's characteristics at each surgical stage to determine the preferred view and viewing mode. In the case of necessary input information, we utilized the laparoscopic view and the voice interface to minimize the necessity of additional sensors. The voice interface requires a more reliable engine, because there is considerable noise in the operating room and the surgeon is apt to be disturbed when recognition fails. In the case
of visual servoing, it is necessary to eliminate the possibility of misperceiving the inserted tool, especially under strong reflected light.

REFERENCES
[1] R. H. Taylor and D. Stoianovici, "Medical robotics in computer-integrated surgery," IEEE Trans. on Robotics and Automation, vol. 19, no. 5, pp. 765-781, October 2003.
[2] J.-H. Chung, S.-Y. Ko, D.-S. Kwon, J.-J. Lee, Y.-S. Yoon, and C.-H. Won, "Robot-assisted femoral stem implantation using an intramedulla gauge," IEEE Trans. on Robotics and Automation, vol. 19, no. 5, pp. 885-892, October 2003.
[3] P. P. Pott, H.-P. Scharf, and M. L. R. Schwarz, "Today's state of the art in surgical robotics," Computer Aided Surgery, vol. 10, no. 2, pp. 101-132, 2005.
[4] E. Kobayashi, K. Masamune, I. Sakuma, T. Dohi, and D. Hashimoto, "A new safe laparoscopic manipulator system with a five-bar linkage mechanism and an optical zoom," Computer Aided Surgery, vol. 4, no. 4, pp. 182-192, 1999.
[5] P. Berkelman, P. Cinquin, J. Troccaz, J. Ayoubi, C. Letoublon, and F. Bouchard, "A compact, compliant laparoscopic endoscope manipulator," Proc. of IEEE International Conference on Robotics and Automation, pp. 1870-1875, 2002.
[6] S. Aiono, J. M. Gilbert, B. Soin, P. A. Finlay, and A. Gordan, "Controlled trial of the introduction of a robotic camera assistant (EndoAssist) for laparoscopic cholecystectomy," Surgical Endoscopy and Other Interventional Techniques, vol. 16, pp. 1267-1270, 2002.
[7] A. Casals, J. Amat, and E. Laporte, "Automatic guidance of an assistant robot in laparoscopic surgery," Proc. of IEEE International Conference on Robotics and Automation, pp. 895-900, 1996.
[8] Y.-F. Wang, D. R. Uecker, and Y. Wang, "Choreographed scope maneuvering in robotically-assisted laparoscopy with active vision guidance," Proc. of IEEE Workshop on Applications of Computer Vision, pp. 187-192, 1996.
[9] S.-Y. Ko, J. Kim, W.-J. Lee, and D.-S. Kwon, "Compact laparoscopic assistant robot using a bending mechanism," Advanced Robotics, vol. 21, no. 5-6, pp. 689-709, May 2007.
[10] A. Kochan, "Scalpel please, robot: Penelope's debut in the operating room," Industrial Robot: An International Journal, vol. 32, no. 6, pp. 449-451, 2005.
[11] F. Miyawaki, K. Masamune, S. Suzuki, K. Yoshimitsu, and J. Vain, "Scrub nurse robot system: intraoperative motion analysis of a scrub nurse and timed-automata-based model for surgery," IEEE Trans. on Industrial Electronics, vol. 52, no. 5, pp. 1227-1235, October 2005.
[12] K. Lee, H.-R. Kim, W. C. Yoon, Y.-S. Yoon, and D.-S. Kwon, "Designing a human-robot interaction framework for home service robot," Proc. of IEEE International Workshop on Robot and Human Interactive Communication, pp. 286-293, 2005.
[13] W. C. Yoon, "Cognitive human-robot interaction: groping for dialogue intelligence," Robot and Human: Korea Robotics Society Review, vol. 2, no. 3, pp. 29-43, July 2005.
[14] P. Johnson, Human-Computer Interaction: Psychology, Task Analysis and Software Engineering, McGraw-Hill Book Company Europe, UK, 1992.
[15] F. Harashima and S. Suzuki, "Human-machine system design considering mutual interaction: modeling of human behavior by eye gaze measurement," Proc. of the 2nd COE Workshop on Human Adaptive Mechatronics, pp. 13-18, 2005.
[16] C. G. Cassandras and S. Lafortune, Introduction to Discrete Event Systems, Kluwer Academic Publishers, Norwell, MA, 1999.
[17] C. L. MacKenzie, J. A. Ibbotson, C. G. L. Cao, and A. J. Lomax, "Hierarchical decomposition of laparoscopic surgery: a human factors approach to investigating the operating room environment," Minimally Invasive Therapy and Allied Technologies, vol. 10, no. 3, pp. 121-127, 2001.
[18] J. Rosen, J. D. Brown, L. Chang, M. N. Sinanan, and B. Hannaford, "Generalized approach for modeling minimally invasive surgery as a stochastic process using a discrete Markov model," IEEE Trans. on Biomedical Engineering, vol. 53, no. 3, pp. 399-413, March 2006.
[19] K. Sakai, M. Masaaki, and K. Yokoyama, "Modelling patient responses to surgical procedures during endoscopic sinus surgery using local anesthesia," Proc. of IEEE International Conference on Systems, Man and Cybernetics, pp. 364-369, 2004.
[20] O. Fukuda, T. Tsuji, K. Takahashi, and M. Kaneko, "Skill assistance for myoelectric control using an event-driven task model," Proc. of IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1445-1450, 2002.
[21] K. Ohnuma, K. Masamune, K. Yoshimitsu, K. Shinohara, J. Vain, Y. Fukui, and F.
Miyawaki, "Surgical scenario for laparoscopic surgery with timed automata and development of scrub nurse robot: application of human adaptive mechatronics to surgical support system," Proc. of the 2nd COE Workshop on Human Adaptive Mechatronics, pp. 163-166, 2005.
[22] S.-Y. Ko, J. Kim, W.-J. Lee, and D.-S. Kwon, "Surgery task model for intelligent interaction between surgeon and laparoscopic assistant robot," International Journal of Assistive Robotics and Mechatronics, vol. 8, no. 1, pp. 38-46, 2007.
[23] Y.-J. Lee, Development of a Compact Laparoscopic Assistant Robot: KaLAR, Master's Thesis, Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Korea, 2004.
[24] S.-Y. Ko, Interaction Scheme of an Intelligent Laparoscopic Assistant Robot Based on a Surgery Task Model, Doctoral Thesis, Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Korea, 2008.
[25] J. Kim, Y.-J. Lee, S.-Y. Ko, D.-S. Kwon, and W.-J. Lee, "Compact camera assistant robot for minimally invasive surgery: KaLAR," Proc. of IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2587-2592, 2004.

Seong Young Ko received his B.S., M.S., and Ph.D. degrees in Mechanical Engineering from Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 2000, 2002, and 2008, respectively. He was a visiting researcher in the Department of Electrical Engineering, University of Washington, USA for six months from 2005 to 2006. In 2008, he was a post-doctoral researcher in the Department of Electrical Engineering, KAIST, Korea, and since 2009 he has been a research associate in the Mechatronics-In-Medicine Laboratory, Department of Mechanical Engineering, Imperial College London, UK. His research interests include medical robotics, human-robot interaction, and intelligent control.

Woo-Jung Lee received his Medical degree from the College of Medicine at Yonsei University, Seoul, Korea in 1982, the Diploma from the Korean Board of General Surgery in 1987, a Master of Medicine from Yonsei University, Seoul, Korea in 1992, and a Ph.D. degree in surgery from Korea University, Seoul, Korea in 1996. He served in the Korean Army as a captain from 1987 to 1990. From 1997 to 1999, he was a research fellow at Vanderbilt Medical School, Tennessee, USA. He is currently a Professor in the Department of Surgery and the director of the Yonsei Medical Minimally Invasive/Robotic Surgery Center at Yonsei University, Seoul, Korea. His current research interests include minimally invasive surgery and robotic surgery. He is a member of KMA, KAGS, HBPS, and SLS.

Dong-Soo Kwon received his B.S. degree in Mechanical Engineering from Seoul National University, Korea in 1980, an M.S.
degree in Mechanical Engineering from Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea in 1982, and a Ph.D. degree in Mechanical Engineering from the Georgia Institute of Technology (GeorgiaTech), Atlanta, Georgia, USA, in 1991. From 1991 to 1995, he was a research staff member at Oak Ridge National Laboratory, USA. From 2008 to 2009, he was a visiting professor in the School of Mechanical Engineering at GeorgiaTech. He is currently a Professor of Mechanical Engineering at KAIST, the director of the Human-Robot Interaction Research Center, and the director of the Center for Future Medical Robotics, Korea. He is also the president of the Korea Haptic Community, a vice president of the Korea Robotics Society, a member of the board of directors of The Korean Society of Mechanical Engineers, and an international cooperation director of the Korean Society of Medical Robotics. His current research interests include human-robot interaction, telerobotics, haptics, medical robotics, and entertainment robots.