
Real-time 3D image-guided navigation system based on integral videography

Hongen Liao*a, Susumu Nakajimab, Makoto Iwaharac, Nobuhiko Hatac, Ichiro Sakumad, Takeyoshi Dohic

a Graduate School of Engineering, The University of Tokyo; b Department of Orthopedic Surgery, The University of Tokyo; c Graduate School of Information Science and Technology, The University of Tokyo; d Graduate School of Frontier Sciences, The University of Tokyo

ABSTRACT

A surgical navigation system that utilizes real-time three-dimensional (3-D) images was developed. It superimposes a real, intuitive 3-D image on the patient for medical diagnosis and operation. This system creates 3-D images based on the principle of integral photography (IP), which can display geometrically accurate 3-D autostereoscopic images and reproduce motion parallax without any special devices. We developed a new method for creating 3-D autostereoscopic images, named "Integral Videography (IV)", which can also display moving 3-D objects. The displayed image can thus be updated to follow the changes in the surgeon's field of vision during the operation. The 3-D image is superimposed on the surgical field in the patient via a half-silvered mirror, as if it could be seen through the body. In addition, a real-time Integral Videography algorithm for calculating the 3-D image of surgical instruments was used for registration between the location of the surgical instruments and the organ during the operation. We analyzed the errors of our system by using a phantom. The registration error between the multiple markers and their IP images had a mean value of 1.13 mm and a standard deviation of 0.79 mm. The experimental results of targeting a point location and avoiding a critical area showed that the errors of this navigation system were in the range of 2-3 mm. This limitation is mainly determined by the pixel density of the LCD used in the system; by introducing a display device with higher pixel density, the accuracy of the system can be improved. Because of the simplicity and the accuracy of real-time projected point location, this system will be practically usable in the medical field.

Keywords: 3-D image, integral videography, real-time, surgical navigation, registration, integral photography

1. INTRODUCTION

The effective use of diagnostic images and image-guided surgical navigation has been a focus of discussion with the advancement of medical imaging and computer technology. These systems provide guidance to the surgeon based on preoperative tomographic images, such as computed tomography (CT) and magnetic resonance (MR) images. Most systems present the 3-D information in pre-operative images to surgeons as a set of 2-D sectional images displayed away from the surgical area. Surgeons therefore have to follow some extra procedures to perceive 3-D information about the patient during surgery. First, they need to turn their gaze to the computer display showing the navigation images. Next, they must reconstruct the 3-D information in their minds, with the help of experience and anatomical knowledge. Lastly, they must turn their eyes back to the surgical area and register the 3-D information reconstructed in their minds with the actual patient's body. These procedures interrupt the flow of surgery, and the reconstructed 3-D information sometimes differs between individual surgeons. To solve these problems, medical images should be displayed three-dimensionally in the space that coincides with the patient's body.

To superimpose images over the observer's direct view of an object, many computer display techniques have been presented. One of them is image overlay, which is a form of "Augmented Reality" in that it merges computer-generated information with real-world images. Blackwell [1] used a binocular stereoscopic vision display system to show 3-D images with this method. This display method reproduces the depth of projected objects by the fixed binocular disparity of the images. However, this does not always give the observer a sense of depth, and the motion parallax cannot be reproduced without wearing a tracking device. As well as limiting the number of observers, this also causes visual fatigue. Consequently, systems based on binocular stereoscopic vision are not suitable for surgical display.

In this study, we used a 3-D imaging method named Integral Photography (IP), which was first proposed by M. G. Lippmann [2] in 1908. IP creates a 3-D autostereoscopic image that can be seen without the use of special viewing glasses. As IP facilitates a continuous change of viewpoint from left to right or up to down, it is regarded as a promising method for creating 3-D autostereoscopic images. IP records and reproduces a 3-D image using a "fly's eye" lens array and photographic film. Masutani et al. recorded an IP of medical 3-D objects on film and applied it to clinical diagnosis and surgical planning [3]. Igarashi et al. proposed a computer-generated IP method that constructs an image by transforming the 3-D information about the object into 2-D coordinates on the computer display, instead of taking a photograph [4]. However, the computing time for creating IP was so long that the system could not be practical for rendering medical objects. To overcome these limitations, we devised a new rendering algorithm for computer-generated IP, developed a practical 3-D display system [5], and applied it to the clinical field of orthopedic surgery [6]. Moreover, in clinical application, the 3-D image of a surgical instrument inserted into the organ should also show its real-time location. Consequently, a real-time rendering algorithm for intra-operative image information should be developed, especially for displaying the accurate location of the intra-operative instrument.

In this paper, we present a real-time 3-D surgical navigation system that superimposes a real, intuitive 3-D image on the patient for medical diagnosis and operation. This system creates 3-D images based on the principle of integral photography, with a method named "Integral Videography (IV)", which can be updated following the changes in the operator's field of vision. The 3-D image is superimposed on the surgical field in the patient via a half-silvered mirror, as if it could be seen through the body. In addition, a real-time Integral Videography algorithm for calculating the 3-D image of surgical instruments is used for registration between the location of the surgical instruments and the organ during the operation. The accuracy of the navigation system was evaluated through in vitro experiments based on phantoms.

*[email protected]; phone 81-3-5841-7913; fax: 81-3-5841-6391; http://www.atre.t.u-tokyo.ac.jp; Department of Precision Machinery Engineering, Graduate School of Engineering, The University of Tokyo; 7-3-1 Hongo, Bunkyo-Ku, Tokyo 113, Japan

2. METHOD

2.1 System configuration
The 3-D real-time surgical navigation system developed in this study consists of the following components:
• Position tracking system (POLARIS™ Optical Tracking System, Northern Digital Inc., Ontario, Canada)
• 3-D data source collection equipment (such as MRI, CT)
• IP display (consisting of an XGA LCD, a lens array, a half-silvered mirror, and a supporting stage)
• Personal computer (Pentium III 800 MHz dual CPU)

Fig.1 System configuration; instrumentation required for surgical navigation based on integral videography. The 3-D voxel data (from MRI etc.) are processed on the PC, which drives the IP display via RGB and communicates with the optical tracking system and the probe control equipment via RS232C.


The source image shown on the IP display is generated from the 3-D data by the algorithm described later. Any kind of 3-D data source, such as magnetic resonance (MR) or computed tomography (CT) images, can be processed. We use an optical 3-D tracking system to track the positions of the surgical instrument and the organ during the operation. The position and orientation of the IP display can also be measured for image registration. With these data, we can calculate the position of the surgical instrument relative to the organ. Both the patient organ model/target and the surgical instrument are represented in the form of IP. The resultant IPs are displayed in real time on the IP display.


Fig.2 Principle of Integral Videography. The figure shows how a 3-D object is generated and reproduced by IV. Light rays reflected at the first point seen from the observer's side on the 3-D object pass through the centers of all lenses on the array and are recorded on the flat display. When the image is lit from behind, the light rays intersect at the original point and become a new light point.

In the proposed integral videography, each point shown in 3-D space is reconstructed by the convergence of rays from pixels on the computer display, passing through the lenses in the array. The observer can see any point in the display from various directions as if it were fixed in 3-D space. Each point appears as a new light source (Fig.2b), and a 3-D object can thus be constructed as an assembly of such reconstructed light sources. The coordinates of those points in the 3-D object that correspond to each pixel of the flat display (for example, an LCD) must be calculated for each pixel on the display. Fig.2a shows the rendering method for a 3-D IP image, which calculates the coordinates of one point for each pixel of the flat display. The fundamental layout of this system includes the operator, the 3-D image on the IP display, the half-silvered mirror, the reflected image, and the patient; their relations are shown in Fig.3.


Fig.3 Fundamental layout for surgical registration, and a photograph of the registration between the IP image and the patient
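To make the per-pixel calculation of Fig.2a concrete, the following minimal sketch renders a single 3-D point into the elemental images behind a lens array. It is an illustration only: the lens grid is treated as square rather than hexagonal, the lenses are treated as pinholes, and all names and dimensions (loosely based on Table 1) are our own assumptions, not the system's actual implementation.

```python
import numpy as np

# --- hypothetical display geometry (values loosely based on Table 1) ---
LENS_PITCH = 2.32      # mm, lens spacing (square grid here for simplicity;
                       # the actual array in the paper is hexagonal)
PIXEL_PITCH = 0.120    # mm, LCD pixel pitch
GAP = 3.0              # mm, lens-to-LCD distance (~ the focal length)
N_LENS = 64            # lenses per side

def render_point(point, frame):
    """Draw one 3-D point (x, y, z in mm, z > 0 in front of the lens
    plane) into the elemental-image frame behind every lens."""
    px, py, pz = point
    for i in range(N_LENS):
        for j in range(N_LENS):
            # lens center on the lens plane (z = 0)
            lx = (i - N_LENS / 2) * LENS_PITCH
            ly = (j - N_LENS / 2) * LENS_PITCH
            # ray from the point through the lens center, extended to the LCD
            qx = lx + (GAP / pz) * (lx - px)
            qy = ly + (GAP / pz) * (ly - py)
            # keep the pixel inside this lens's elemental image, otherwise
            # it would be seen through the wrong lens
            if abs(qx - lx) > LENS_PITCH / 2 or abs(qy - ly) > LENS_PITCH / 2:
                continue
            u = int(round(qx / PIXEL_PITCH)) + frame.shape[1] // 2
            v = int(round(qy / PIXEL_PITCH)) + frame.shape[0] // 2
            if 0 <= u < frame.shape[1] and 0 <= v < frame.shape[0]:
                frame[v, u] = 255

frame = np.zeros((768, 1024), dtype=np.uint8)
render_point((0.0, 0.0, 20.0), frame)   # a point 20 mm in front of the array
```

Lighting such a frame from behind the lens array would make the recorded rays re-converge 20 mm in front of the display, which is exactly the reproduction step of Fig.2b.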

2.2 Generating a real-time IP of the organ and surgical instrument

(1) Method of generating IP for the organ and surgical instrument


We can create a clear and detailed stationary IP image by using the fundamental method shown in Fig.2. However, the calculation time of this method, which searches for the first point seen by the observer, was too long for real-time IP creation. Moreover, the intra-operative situation of the surgical instrument changes frequently compared with that of the organ. It is very difficult to create a real-time intra-operative IP when the surgical instrument is calculated together with the organ. Consequently, we present a new method that first creates the IPs of the organ and the surgical instrument separately [6] and then combines them. An integrated IP image of a brain and an inserted surgical instrument created with this method is shown in Fig.4.


Fig.4 Method for dealing with the organ and the surgical instrument when making a real-time IP; a cross section and the foci of the brain IP
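Because the organ IP is static and only the instrument IP changes, the combination step can be a cheap per-pixel overlay. The sketch below shows one plausible way to merge the two elemental-image buffers; the function and buffer names are hypothetical, and the paper does not specify the exact compositing rule.

```python
import numpy as np

def compose_ip(organ_ip, instrument_ip, background=0):
    """Overlay the (frequently updated) instrument IP on the (static,
    precomputed) organ IP. Wherever the instrument image has content,
    it replaces the organ pixels; elsewhere the organ IP shows through."""
    mask = instrument_ip != background
    combined = organ_ip.copy()
    combined[mask] = instrument_ip[mask]
    return combined

# organ_ip is rendered once from the pre-operative voxel data; only
# instrument_ip is re-rendered each time the tracker reports motion.
```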

(2) Fast IP rendering algorithm for the 3-D object and surgical instrument
Although the fundamental method for IV can capture all of the image information without omission, the rendering time for every pixel on the flat display is too long for real-time IP creation. We developed a special rendering method for medical IP, named the jump-rendering method (Fig.5). It searches the image voxel data every 2^n (n = 3 or more) voxels along the searching line. When the searched position (a point on the searching line, such as A) enters the area of the organ, the search returns to the previous step and then re-searches the voxel data one voxel at a time; a sketch of this coarse-to-fine search is given after Fig.6. In addition, we only calculate the IP in the necessary area of the lens array (Fig.6). The calculation time with this method is shortened to below 1 second, which satisfies the requirements of real-time IP.


Fig.5 Fast rendering algorithm for the IP of the organ

Fig.6 Real-time IP rendering algorithm for the surgical instrument
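The jump-rendering search of Fig.5 can be summarized in a few lines. The sketch below is our reading of the algorithm, with hypothetical names: it strides along the searching line in steps of 2^n samples and, at the first sample found inside the organ, backs up one stride and re-searches that interval sample by sample.

```python
def first_hit(n_samples, inside, n=3):
    """Coarse-to-fine search for the first occupied voxel along one ray.

    n_samples -- number of sample positions along the searching line
    inside(k) -- True if sample k lies inside the organ's voxel data
    n         -- coarse stride is 2**n samples (n = 3 or more in the paper)
    Returns the index of the first occupied sample, or None if the ray
    misses the organ. Structures thinner than the stride can be skipped,
    which is why n must not be chosen too large.
    """
    stride = 2 ** n
    k = 0
    while k < n_samples:
        if inside(k):
            # entered the organ somewhere in (k - stride, k]: back up one
            # stride and re-search that interval one sample at a time
            for fine in range(max(k - stride, 0), k + 1):
                if inside(fine):
                    return fine
        k += stride
    return None
```

For the surgical instrument (Fig.6), the same search only needs to run for the lenses whose elemental images the instrument's line segment can actually reach, which is what keeps the update below one second.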

2.3 Registration method
Because the relationship between the location of the projected 3-D image and that of the IP display can be found by calculation, registration between the 3-D image and the patient's body can be performed by registration between the IP display and the patient's body. The relation between the surgical instrument and the organ can also be determined by using optical tracking probes. In this study, we use an optical tracking probe to measure the intra-operative situation of the IP display, from which we obtain the display coordinate system Σdis through the transformation between Σdis and the standard coordinate system Σstd. The organ and surgical instrument coordinate system Σorg can be obtained in the same way. With the registration of Σdis and Σorg, we can combine the 3-D image of the intra-operative surgical instrument and organ with the patient's body (Fig.7). Once the displayed objects have all been transformed into the same coordinate system via these calibration and registration transformations, they appear to the operator's eyes exactly as if they existed in their virtual spatial locations.

Fig.7 Transformations between coordinate systems (Σstd: standard coordinates of the 3-D optical tracking system; Σdis: IP display coordinates, measured with probe I; Σorg: organ and surgical instrument coordinates, measured with probe II)
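The transformations of Fig.7 compose in the usual homogeneous-coordinate way. The sketch below, with made-up pose values, shows how the instrument frame would be expressed in display coordinates once the tracker has reported both probes in Σstd; the paper does not give its actual matrix conventions, so this is an assumption-laden illustration.

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and
    translation t (3-vector)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses reported by the optical tracker, both expressed in
# the tracker's standard frame Sigma_std:
T_std_dis = pose(np.eye(3), [100.0, 0.0, 300.0])    # IP display (probe I)
T_std_org = pose(np.eye(3), [150.0, 50.0, 280.0])   # organ/instrument (probe II)

# Instrument/organ frame expressed in display coordinates: the rendering
# step needs every object in Sigma_dis before it is drawn on the IP display.
T_dis_org = np.linalg.inv(T_std_dis) @ T_std_org

tip_in_org = np.array([0.0, 0.0, 120.0, 1.0])       # calibrated tip offset
tip_in_dis = T_dis_org @ tip_in_org                 # where to draw the tip
```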

3. EXPERIMENTS

3.1 Experimental equipment
Using the registration method described in section 2.3, we can theoretically make the position of the 3-D image coincide with that of the object. However, some errors in the recognized positions of points in the image are inevitable. We measured the accuracy of the recognized location of the registered image relative to the real object by using a phantom. The phantom used in this study consists of a plastic cubic container (80 mm on a side) filled with agar. Within the phantom, we embedded a clay sphere and two poles (Fig.8). The clay sphere is 15 mm in diameter and is used to quantitatively measure the puncture error in the target-point location experiment. The poles are 2-4 mm in diameter and are regarded as vessels in the avoidance experiment. We also attached a set of markers around the circumference of the phantom. These markers, also shown in Fig.8, are used for registering the IP image with the actual space of the phantom. The MR image data of the phantom consisted of one set of coronal images with 0.39 mm × 0.39 mm in-plane resolution and 1-mm slice thickness.

Fig.8 Scheme of the cubic phantom (80 mm on a side, agar filling; ball diameter 15 mm; pole diameter 2-4 mm; markers M1(0,1,1), M2(1,1,1), M3(1,1/4,1/4), M4(1/4,0,1/2))


The quality of the IP image depends primarily on the pixel density of the display (LCD), i.e., the number of pixels per unit display area, and on the lens pitch of the lens array. The primary specifications of the IP display are shown in Table 1.

Table 1. Equipment specification of the IP display
Display size: 6.0 inch
Pixel number: 1024 × 768 (XGA)
Pixel pitch: 0.120 mm
Lens pitch: 2.32 mm (hexagonal)
Lens focal length: 3 mm
Lens refractive index: Nd = 1.49
Weight: 1.8 kg
Horizontal resolution: 1.2 mm
Depth resolution: 3.6 mm
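As a rough sanity check on Table 1 (our arithmetic, not the paper's), the number of display pixels under each lens sets how many viewing directions a lens can reproduce:

```python
lens_pitch = 2.32    # mm (Table 1)
pixel_pitch = 0.120  # mm (Table 1)

views_per_lens = lens_pitch / pixel_pitch
print(f"about {views_per_lens:.0f} pixels (viewing directions) per lens")
# -> about 19; the lateral resolution of the projected image is on the
# order of the lens pitch, consistent with the 1.2 mm horizontal
# resolution listed in Table 1.
```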

The target placed in the phantom was larger than the resolution of the IP display, and the diameters of the poles were near the resolution of the IP display. We examined the validity of this system for real-time image-guided surgery through a group of experiments, including puncturing a target with a small needle and avoiding critical areas.

Fig.9 Registration result with the IP display (Mi, i = 1-4: markers)

The 3-D image generated from MRI was superimposed on the phantom with the registration method described above. The positions corresponding to the markers were displayed in the 3-D image, and their coordinates were measured three times by the optical tracking system (Fig.9). The mean value of the differences between the measured and the true coordinates of the four markers Mi (i = 1-4) is 1.13 mm and the standard deviation is 0.79 mm, as shown in Table 2.

Table 2. The differences between the measured and the true coordinates of the markers
Marker:           M1    M2    M3    M4
Difference (mm):  0.9   1.3   0.2   2.1
Mean value (mm): 1.13
S.D. (mm): 0.79
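The summary statistics in Table 2 can be reproduced from the four per-marker differences, assuming the standard deviation is the sample (n-1) form:

```python
import statistics

diffs = [0.9, 1.3, 0.2, 2.1]               # per-marker differences (mm)
print(round(statistics.mean(diffs), 2))     # 1.12, ~ the reported 1.13 mm
print(round(statistics.stdev(diffs), 2))    # 0.79 mm (sample S.D.)
```

The 0.01 mm gap in the mean is presumably rounding of the per-marker values.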

3.2 Real-time puncture and avoidance experiments
(1) Real-time puncture experiment
We measured the spatial location of the surgical instrument (a small needle) by tracking the probe fixed on the instrument with the optical tracking system. The tip position and insertion point of the needle can be calculated from these measurements. These position data are used for creating an intra-operative IP image of the surgical instrument. Fig.10(a) shows an instantaneous view of the inserted part of the surgical instrument as its tip arrives at the target in the phantom. After finishing the puncture experiment, in order to examine the accuracy, we withdrew the surgical instrument from the phantom and injected colored ink into the puncture trace from the entrance point of the needle. The puncture error, measured from the trace remaining in the clay, was 2.6 mm, as shown in Fig.10(b).


Fig.10 (a): An instantaneous view of the IP image when the tip of the surgical instrument arrives at the target. (b): The result of the puncture experiment (error: 2.6 mm).
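The instrument IP needs only two endpoints per frame: the tracked tip and the fixed entry point. Below is a minimal sketch, with a hypothetical tip-offset calibration, of how those endpoints could be derived from the probe pose:

```python
import numpy as np

def inserted_segment(probe_T, tip_offset, entry_point):
    """Return the endpoints of the inserted part of the needle.

    probe_T     -- 4x4 pose of the tracked probe fixed on the needle
    tip_offset  -- calibrated tip position in the probe frame (homogeneous)
    entry_point -- insertion point on the phantom surface (world frame)
    The two endpoints are what the IV algorithm needs to draw the needle's
    hidden part as a line segment in the voxel volume.
    """
    tip = (probe_T @ tip_offset)[:3]
    return entry_point, tip

probe_T = np.eye(4)
probe_T[:3, 3] = [10.0, 20.0, 5.0]              # hypothetical probe pose
tip_offset = np.array([0.0, 0.0, 150.0, 1.0])   # hypothetical calibration
entry = np.array([10.0, 20.0, 80.0])
start, tip = inserted_segment(probe_T, tip_offset, entry)
```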

(2) Real-time avoidance experiment
In this system, we added feedback information about the surgical instrument when it was inserted into the organ. The color of the surgical instrument's 3-D image changes automatically from green to red when the tip of the instrument approaches a critical area, as shown in Fig.11(a). The purpose of this experiment was to examine the accuracy of the navigation system by avoiding a set of poles with different diameters. The avoidance experiment proceeded as follows. First, we inserted the surgical instrument toward the 3-D image of the pole until the color of the IP image of the tip changed. When the color turned red, we considered that the surgical instrument had approached the critical area. To confirm this, we inserted the instrument about 2 mm further in the same direction. If we perceived the instrument touching the pole, the trial was judged a success. Conversely, if the touch could not be perceived or the color did not change during the trial, the avoidance was judged a failure. We conducted this procedure four times, at both shallow and deep locations, for phantom 1 and phantom 2 respectively. We succeeded in avoiding the poles larger than 3 mm at both shallow and deep locations in all six trials, and succeeded in avoiding the 2-mm pole at the shallow location but failed at the deep location (Table 3). After avoiding the critical area, we continued to insert the surgical instrument, as shown in Fig.11(b).


Fig.11 (a): The color of the tip of the instrument changes to red when the instrument approaches the critical area. (b): Continuing the insertion after avoiding the critical area.
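The color feedback of Fig.11(a) amounts to a distance test between the tracked tip and the pole model. The following sketch assumes the pole is modeled as a cylinder and uses a hypothetical 1-mm warning margin; the paper does not state its actual threshold.

```python
import numpy as np

def instrument_color(tip, pole_p0, pole_p1, pole_radius, margin=1.0):
    """Green while the needle tip is clear of the pole, red once the tip
    comes within `margin` mm of the pole surface (hypothetical threshold).

    All positions are numpy 3-vectors in the same frame; the pole is a
    cylinder of radius pole_radius between pole_p0 and pole_p1.
    """
    axis = pole_p1 - pole_p0
    t = np.clip(np.dot(tip - pole_p0, axis) / np.dot(axis, axis), 0.0, 1.0)
    closest = pole_p0 + t * axis            # nearest point on the pole axis
    dist = np.linalg.norm(tip - closest) - pole_radius
    return "red" if dist < margin else "green"
```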

Table 3. The results of the critical-area avoidance experiment (poles a, b in phantom 1; poles c, d in phantom 2)
Pole                              a        b        c        d
Diameter of pole (mm)             4        4        3        2
Shallow location (depth 30 mm)    Success  Success  Success  Success
Deep location (depth 70 mm)       Success  Success  Success  Fail


3.3 Time for calculating the IP of a moving surgical instrument
The calculation time needed to renew the IP image of a moving surgical instrument differs with the length of the inserted part and the direction of insertion. In this experiment, a 100 × 100 × 100 mm cube, corresponding to voxel data of 256 × 256 × 256 voxels, was used to examine the relation between the calculation time and the length of the inserted part of the surgical instrument. We inserted the instrument along a diagonal line of the cube (Fig.12) and recorded the time for creating a new IP image. The mean values of three measurements are shown in Fig.13.

Fig.12 (a): The relation between the inserting direction of the surgical instrument and the test model. (b): The corresponding IP image created by the IV algorithm. These IP images are displayed on the LCD behind the lens array.

[Fig.13 plot: IP calculation time (s), 0 to 3.5 s, versus ratio of the inserted instrument part, 0 to 16/16]

Fig.13 The relation between the IP calculation time and the inserted part of the surgical instrument
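The measurement behind Fig.13 could be scripted as below. `render_instrument_ip` stands in for the fast rendering routine of section 2.2 and is assumed, not given by the paper; the 173.2 mm diagonal is that of the 100-mm cube.

```python
import time

def measure_ip_times(render_instrument_ip, diagonal=173.2, steps=16, runs=3):
    """Time the IP update as the needle advances along the cube diagonal,
    averaging `runs` repetitions at each of `steps` insertion ratios."""
    results = []
    for k in range(1, steps + 1):
        inserted_mm = diagonal * k / steps
        t0 = time.perf_counter()
        for _ in range(runs):
            render_instrument_ip(inserted_mm)
        results.append((f"{k}/16", (time.perf_counter() - t0) / runs))
    return results
```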

4. DISCUSSION AND CONCLUSION

The navigation system developed in this study gains accuracy by using a real-time 3-D guide image for the intra-operative surgical instrument. The image-guided surgical system we developed has two primary merits of IP for minimally invasive surgery:
• It is not necessary for the surgeon to look away from the patient to receive guidance. Moreover, it avoids the need for extra devices, such as special glasses;
• It offers geometrical accuracy of the projected objects (especially depth perspective), visibility of motion parallax over a wide area, and simultaneous observation by many people.

Based on the requirements above, we created the organ's IP and offered guidance for the surgical instrument by showing the intra-operative location of the part of the instrument that cannot be seen by the observer once it enters the organ. Because soft tissues may deform during surgery, a method to sense intra-operative organ deformation and to display the organ's shape and position accurately will be required. Although updating the organ's IP still takes time, and the IP image is not very clear owing to the limited pixel density of the presently available display device, with the advancement of display technology and computer hardware it will become possible to register the intra-operative organ configuration with the surgical instrument in real time, based on real-time organ data from open MRI or on deformation calculation.

The resolution of a flat display such as the LCD in this system is limited to about 200 dpi. The spatial resolution of the projected 3-D image is proportional to the ratio of the lens diameter in the lens array to the pixel pitch of the display. Thus, both the lens diameter and the pixel pitch need to be made much smaller. One possible solution for realizing higher pixel density is the use of multiple projectors to create one image, with reduction projection of the resultant image onto a small screen through special lens optics. Large computational power is also needed for such a multiple-projector system. We are currently developing such a multiple-projector system and a parallel-calculation method to accelerate the computation.

In conclusion, our research has shown that the real-time 3-D surgical navigation system can superimpose a real, intuitive 3-D image on the patient for medical diagnosis and operation. The newly developed real-time IV algorithm for calculating the 3-D image of surgical instruments was effective in presenting the real-time locations of the surgical instruments and the organ during the operation. The experimental results show that the errors of this navigation system were in the range of 2-3 mm. With the simplicity and accuracy of real-time projected point location, and by introducing a display device with higher pixel density, this system will be practically usable in the medical field.

ACKNOWLEDGEMENTS
This study was partly supported by the Grant-in-Aid for the Development of Innovative Technology (12113) from the Ministry of Education, Culture, Sports, Science and Technology of Japan, and by the Research for the Future Program (JSPS-RFTF 99I00904) of the Japan Society for the Promotion of Science.

REFERENCES
1. M. Blackwell, C. Nikou, A. M. DiGioia, T. Kanade, "An Image Overlay System for Medical Data Visualization," Medical Image Computing and Computer-Assisted Intervention MICCAI'98, pp.232-240, 1998.
2. M. G. Lippmann, "Epreuves reversibles donnant la sensation du relief," J. de Phys. Vol.7, 4th series, pp.821-825, 1908.
3. Y. Masutani, M. Iwahara, O. Samuta, Y. Nishi, N. Suzuki, M. Suzuki, T. Dohi, H. Iseki, T. Takakura, "Development of integral photography-based enhanced reality visualization system for surgical support," Proc. of ISCAS'95, pp.1617, 1995.
4. Y. Igarashi, H. Murata, M. Ueda, "3-D display system using a computer generated integral photography," Japan J. Appl. Phys. 17, pp.1683-1684, 1978.
5. S. Nakajima, K. Masamune, I. Sakuma, T. Dohi, "Three-dimensional display system for medical imaging with computer-generated integral photography," Proc. of SPIE Vol.3957, pp.60-67, 2000.
6. S. Nakajima, K. Nakamura, K. Masamune, I. Sakuma, T. Dohi, "Three-dimensional medical display with computer-generated integral photography," Computerized Medical Imaging and Graphics, 25, pp.235-241, 2001.
7. H. Liao, S. Nakajima, M. Iwahara, E. Kobayashi, I. Sakuma, N. Yahagi, T. Dohi, "Intra-operative Real-Time 3-D Information Display System based on Integral Videography," Medical Image Computing and Computer-Assisted Intervention MICCAI2001, LNCS 2208, pp.392-400, 2001.
