The Integration of a Neurosurgical Microscope as an Interface to a Medical Training Simulator

Florian Beier* (e-mail: [email protected])
Institute for Computational Medicine, University of Heidelberg

ABSTRACT

We present the integration of a neurosurgical microscope as an interface to a medical training simulator. The simulator is based on Virtual Reality and combines real-time simulation algorithms with a native interface consisting of a real surgical microscope and original instruments. The position and pose of the head of the microscope are tracked with an optical tracking system that is mounted on the microscope. The camera tracks three infrared light-emitting diodes (LEDs) that are integrated in a phantom of the patient's head. The status of the microscope, such as zoom and focus, is read out via controller area network (CAN). The oculars have been replaced by a stereoscopic display.

Index Terms: I.3.1 [Computer Graphics]: Hardware Architecture - Input Devices; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Virtual Reality

1 INTRODUCTION

Neurosurgical operations are complex and potentially dangerous procedures. Only well-trained and experienced neurosurgeons are able to perform these interventions successfully. Consequently, there is an urgent need for an efficient training environment that is realistic but does not depend on human beings or animals. Virtual Reality (VR) can be used to implement such a training system. Although some research groups have developed neurosurgical simulators [8, 2], we are not aware of any project that combines the native interface of a movable surgical microscope with original instruments (see figure 1).

Figure 1: NeuroSim.

In 2011 we introduced NeuroSim [1], a VR-based simulator that uses original instruments and a real surgical microscope (see figure 1). NeuroSim has been developed by the University of Heidelberg in cooperation with the neurosurgical clinic Mannheim of the University of Heidelberg and VRmagic GmbH in Mannheim. The surgical microscope is an important tool for neurosurgeons, and its use has to be trained. Position and pose of the head of the microscope are constantly changed manually in order to get a suitable view of the operation area. To adapt the virtual scenario to the position and pose of the head of the microscope and to the levels of zoom and focus, these data have to be available. Consequently, the head of the microscope has to be tracked, and the levels of focus and zoom have to be determined.

2 RELATED WORK

Several neurosurgical training simulators based on Virtual Reality exist, e.g. [7, 10]. The interface of these and of most other systems consists of a haptic force-feedback device positioned below a half-silvered mirror that reflects the rendering of the virtual scene. Shutter glasses are needed to see the virtual scene in 3D.

In the area of optical tracking, several commercially available products offer marker-based tracking systems, e.g. [9, 5]. Most of them need a lot of space for the camera rigs or require a given static and complex marker configuration. Our intention was to attach the tracking system to the head of the microscope in order to minimize the setup by realizing inside-out tracking. We are not aware of any system that is small enough to meet these demands.

3 METHODS

Displays

The oculars of the surgical microscope were removed and substituted by the microdisplays of an eMagin Z800 3DVisor [3]. The displays have a resolution of 800 × 600 pixels. We modified the visor so that each display can be addressed as a separate monitor. By doing so, we realized a stereoscopic view without the need for the original driver. As with the original oculars, the angle of the visor as well as the distance between the two displays can be adjusted to the user's preferences. Different oculars with varying magnification levels can be represented by changing the settings of the OpenGL camera in the virtual environment. Figure 2 shows the display mounted on top of the head of the microscope.

Figure 2: Micro displays.
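The paper leaves the rendering side at the statement above. As a rough illustration, the following C++ sketch shows how ocular magnification and stereo separation could drive two OpenGL cameras, one per microdisplay; the reciprocal field-of-view mapping, the base values, and all identifiers are our own assumptions, not the simulator's actual implementation.

```cpp
// Minimal sketch (all names and values assumed): each eMagin microdisplay
// is addressed as its own monitor, so the scene is rendered twice per
// frame with a per-eye camera.
struct EyeSetup {
    double fovYDeg;   // vertical field of view for this magnification
    double eyeShiftX; // lateral offset of the virtual eye [m]
};

// Different oculars / zoom levels are emulated purely in software by
// narrowing the virtual camera's field of view; a simple reciprocal
// mapping is assumed here, a real system would use a calibrated table.
EyeSetup eyeSetup(int eye /* 0 = left, 1 = right */, double magnification) {
    const double baseFovDeg    = 40.0;  // assumed FOV at magnification 1
    const double eyeSeparation = 0.065; // assumed 65 mm virtual baseline
    EyeSetup s;
    s.fovYDeg   = baseFovDeg / magnification;
    s.eyeShiftX = (eye == 0 ? -0.5 : 0.5) * eyeSeparation;
    return s;
}

// Render loop (outline): for each eye, select that display's viewport,
// build the OpenGL projection matrix from fovYDeg, shift the view
// matrix laterally by eyeShiftX, and draw the virtual scene.
```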



Optical tracking of the microscope

The optical tracking system consists of one multi-sensor camera made by our cooperation partner VRmagic GmbH. The camera is equipped with one field-programmable gate array (FPGA) and four black-and-white sensors with a resolution of 752 × 480 pixels. Infrared LEDs are used because they can easily be tracked in an open volume that is subject to changing light conditions. Visible light is filtered out by equipping the sensors with black filter glasses that are only permeable to infrared wavelengths. We use 880 nm LEDs with a 160 degree emission angle to guarantee maximum visibility.

All four sensors stream their data pixel-synchronously to the FPGA, which preprocesses the data according to a preset illumination range. The resulting indexed images can be compressed efficiently with lossless run-length encoding, reducing the amount of data that has to be transferred via USB to the host computer. Software on the host computer decompresses the data and calculates the 2D markers from connected pixels. The 2D coordinates of a marker are defined by the center of mass of its connected pixels, resulting in subpixel accuracy.

The phantom of the patient's head is equipped with three infrared LEDs that are positioned close to the opening of the head (see figure 3). The multi-sensor camera is mounted on the head of the microscope (see figure 4).
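As an illustration of the marker extraction step described above, the sketch below reduces each connected component of above-threshold pixels to its center of mass, which yields non-integer, i.e. subpixel, marker coordinates. The intensity weighting and all identifiers are our assumptions; the original host software is not published.

```cpp
// Sketch of 2D marker extraction (details assumed): flood-fill connected
// bright pixels and return the center of mass of each component.
#include <cstdint>
#include <queue>
#include <utility>
#include <vector>

struct Marker2D { double x, y; };

std::vector<Marker2D> extractMarkers(const std::vector<uint8_t>& img,
                                     int width, int height, uint8_t threshold) {
    std::vector<bool> visited(img.size(), false);
    std::vector<Marker2D> markers;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            int idx = y * width + x;
            if (visited[idx] || img[idx] < threshold) continue;
            // Flood-fill one connected component of bright pixels.
            double sumX = 0, sumY = 0, sumW = 0;
            std::queue<std::pair<int, int>> q;
            q.push({x, y});
            visited[idx] = true;
            while (!q.empty()) {
                auto [px, py] = q.front();
                q.pop();
                double w = img[py * width + px]; // intensity weight (assumed)
                sumX += w * px; sumY += w * py; sumW += w;
                static const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
                for (int k = 0; k < 4; ++k) {
                    int nx = px + dx[k], ny = py + dy[k];
                    if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                    int nidx = ny * width + nx;
                    if (!visited[nidx] && img[nidx] >= threshold) {
                        visited[nidx] = true;
                        q.push({nx, ny});
                    }
                }
            }
            markers.push_back({sumX / sumW, sumY / sumW});
        }
    return markers;
}
```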



Figure 3: Phantom of the head.
Figure 4: Optical tracking system.

The cameras are adjusted in such a way that the tracking volume covers the area that would normally be seen by the real optical system of the microscope. This ensures that the infrared LEDs remain visible as long as the microscope is positioned in a way that is reasonable for a real operation. Position and pose of the phantom head relative to the microscope can be calculated as long as the three infrared LEDs are seen by at least two sensors. Although two sensors would be sufficient, we favor using all four sensors, so that either the tracking volume or the stability of the reconstructed positions can be increased.

In order to reconstruct the three infrared LEDs in 3D, the cameras have to be calibrated. Calibration is done with a chessboard pattern and algorithms based on Zhang [11]. As we use infrared LEDs, several unicolor 2D markers are seen by each camera. Consequently, the correspondence problem has to be solved in order to find matching 2D points and reconstruct them to one 3D position. This is done using epipolar geometry [4] and a relational tracking method described in [6]. Corresponding 2D points of two cameras are triangulated to one 3D position. As four sensors result in $\binom{4}{2} = 6$ virtual pairs of images that reconstruct the same point, we merge the calculated points into one final 3D position per marker.
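The paper does not state which triangulation method is used or how the six pairwise results are merged, so the sketch below fills both gaps with common choices: midpoint triangulation of two back-projected viewing rays, and a plain average over all camera pairs.

```cpp
// Sketch (methods assumed): triangulate one LED from every camera pair
// and merge the C(4,2) = 6 pairwise results into one 3D position.
#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Back-projected viewing ray of one 2D marker in one calibrated camera.
struct Ray { Vec3 origin, dir; };

// Midpoint of the shortest segment between two viewing rays.
Vec3 triangulate(const Ray& r1, const Ray& r2) {
    Vec3 w = r1.origin - r2.origin;
    double a = dot(r1.dir, r1.dir), b = dot(r1.dir, r2.dir),
           c = dot(r2.dir, r2.dir);
    double d = dot(r1.dir, w), e = dot(r2.dir, w);
    double den = a * c - b * b; // vanishes only for parallel rays
    double t = (b * e - c * d) / den;
    double s = (a * e - b * d) / den;
    return 0.5 * ((r1.origin + t * r1.dir) + (r2.origin + s * r2.dir));
}

// Merge all pairwise reconstructions of the same LED (averaging assumed).
Vec3 mergePairs(const std::vector<Ray>& rays) {
    Vec3 sum{0, 0, 0};
    int n = 0;
    for (std::size_t i = 0; i < rays.size(); ++i)
        for (std::size_t j = i + 1; j < rays.size(); ++j) {
            sum = sum + triangulate(rays[i], rays[j]);
            ++n;
        }
    return (1.0 / n) * sum;
}
```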

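The chessboard calibration mentioned above follows Zhang [11], the same method implemented by OpenCV's calibrateCamera. A per-sensor calibration could therefore look roughly like the following sketch; the board size, the helper names, and the use of OpenCV itself are our choices, not necessarily the authors'.

```cpp
// Sketch: intrinsic calibration of one sensor from several chessboard
// views using OpenCV's implementation of Zhang's method.
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

bool calibrateSensor(const std::vector<cv::Mat>& grayViews,
                     cv::Size boardSize,   // inner corners, e.g. {9, 6}
                     float squareSize,     // edge length of one square [mm]
                     cv::Mat& cameraMatrix, cv::Mat& distCoeffs) {
    // Reference corner positions on the planar board (z = 0).
    std::vector<cv::Point3f> board;
    for (int y = 0; y < boardSize.height; ++y)
        for (int x = 0; x < boardSize.width; ++x)
            board.emplace_back(x * squareSize, y * squareSize, 0.0f);

    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    for (const cv::Mat& view : grayViews) {
        std::vector<cv::Point2f> corners;
        if (!cv::findChessboardCorners(view, boardSize, corners)) continue;
        cv::cornerSubPix(view, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS +
                                          cv::TermCriteria::COUNT, 30, 0.01));
        imagePoints.push_back(corners);
        objectPoints.push_back(board);
    }
    if (imagePoints.size() < 3) return false; // need several usable views

    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(objectPoints, imagePoints, grayViews[0].size(),
                        cameraMatrix, distCoeffs, rvecs, tvecs);
    return true;
}
```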

CAN bus connection

The microscope uses a CAN bus to exchange messages between its components, and an external port to the bus is available. By connecting the simulation computer to the microscope via a USB-to-CAN adapter, we are able to exchange messages with it. Messages regarding the motor positions of the optical system, which are relevant for the levels of zoom and focus, are received. These values are translated into real zoom and focus values via a microscope-specific look-up table. In the default configuration, the buttons on the pistol grips are programmed to change the levels of zoom and focus. Furthermore, they can be individually reprogrammed via the touchscreen of the microscope. Depending on the configuration, either the result of a button push (e.g. a change of motor positions) or the button push itself can be recognized through the CAN bus. Consequently, the system is ready for further features to be implemented in the simulator.
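As an illustration of this read-out path, the sketch below receives frames via Linux SocketCAN (one common way to use a USB-to-CAN adapter) and maps a motor position to a zoom factor through a look-up table with linear interpolation. The CAN ID, payload layout, and table entries are hypothetical; the real mapping is microscope specific.

```cpp
// Sketch (CAN ID, payload layout and table values assumed).
#include <linux/can.h>
#include <linux/can/raw.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cstring>
#include <iterator>
#include <utility>

int openCan(const char* ifname) {            // e.g. "can0"
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    ifreq ifr{};
    std::strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ioctl(s, SIOCGIFINDEX, &ifr);
    sockaddr_can addr{};
    addr.can_family  = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    return s;
}

// Microscope-specific look-up table (values assumed): motor position of
// the zoom optics -> zoom factor, linearly interpolated in between.
double zoomFromMotor(uint16_t pos) {
    static const std::pair<uint16_t, double> lut[] = {
        {0, 1.0}, {1000, 2.5}, {2000, 6.0}, {4000, 12.0}};
    if (pos <= lut[0].first) return lut[0].second;
    for (std::size_t i = 1; i < std::size(lut); ++i)
        if (pos <= lut[i].first) {
            double f = double(pos - lut[i - 1].first) /
                       double(lut[i].first - lut[i - 1].first);
            return lut[i - 1].second + f * (lut[i].second - lut[i - 1].second);
        }
    return lut[std::size(lut) - 1].second;
}

void pollZoom(int s) {
    can_frame frame;
    while (read(s, &frame, sizeof(frame)) == sizeof(frame)) {
        if (frame.can_id == 0x123 && frame.can_dlc >= 2) { // ID assumed
            uint16_t pos = frame.data[0] | (frame.data[1] << 8);
            double zoom = zoomFromMotor(pos);
            (void)zoom; // feed into the OpenGL camera of the simulator
        }
    }
}
```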

4 RESULTS

The mounted display shows the generated scene in 3D. The scene is adapted according to the position of the microscope head, which is tracked with the optical tracking system. Different levels of zoom and focus are realized according to the messages received from the microscope via the CAN adapter. Timing measurements of the optical tracking system show that it achieves a frame rate of 69 Hz, limited only by the time needed to read out the sensor data. The overall latency is below 20 ms. Although the absolute accuracy of the position and pose of the microscope is not critical, as the microscope is repositioned according to the displayed position of the virtual patient, measurements of tracked test objects indicate that the accuracy of the optical system is below 1 mm. Figure 5 shows the virtual scenario displayed with different levels of zoom and focus and different positions of the microscope head.

Figure 5: The virtual scenario with different levels of zoom/focus and two different positions of the microscope.

5 CONCLUSIONS

We presented the integration of a real surgical microscope into a VR-based medical training simulator. It can be used as a native input and output device to train surgical procedures as well as the use of the microscope itself. Neurosurgeons confirmed the usefulness of the system; they stated that it is suitable as an interface to the simulator and offers a good way to train the handling of the microscope. The adaptations to the real microscope are minor and completely reversible. Consequently, the microscope does not have to be a dedicated simulator tool but can still be used in real operations. The tracking system is modular and small enough to be used in any training simulator that involves a movable microscope. Moreover, only one standard PC is used for the whole simulator.

ACKNOWLEDGEMENTS

This work is kindly supported by Leica Microsystems GmbH (http://www.leica-microsystems.com) and VRmagic GmbH (http://www.vrmagic.com).

REFERENCES
[1] F. Beier, S. Diederich, K. Schmieder, and R. Männer. NeuroSim - The Prototype of a Neurosurgical Training Simulator. Studies in Health Technology and Informatics, 163:51-56, 2011.
[2] A. de Mauro, J. Raczkowsky, M. Halatsch, and H. Wörn. Virtual Reality Training Embedded in Neurosurgical Microscope. Proceedings of the 2009 IEEE Virtual Reality Conference, pages 233-234, 2009.
[3] eMagin. eMagin Z800 3DVisor. http://www.3dvisor.com/.
[4] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2nd edition, Apr. 2004.
[5] Imagination Computer Services Ges.m.b.H. iotracker. http://www.iotracker.com.
[6] A. Köpfle, F. Beier, C. Wagner, and R. Männer. Real-time Marker-based Tracking of a Non-rigid Object. Studies in Health Technology and Informatics, 125:232-234, 2007.
[7] O. V. Larsen, J. Haase, L. R. Østergaard, K. V. Hansen, and H. Nielsen. The Virtual Brain Project - development of a neurosurgical simulator. Studies in Health Technology and Informatics, 81:256-262, 2001.
[8] D. Lobel. Frontiers in Neurosurgery: Simulation in Resident Education. The Future of Neurosurgical Education, 2011.
[9] Northern Digital Inc. Medical Products. http://www.ndigital.com/medical.
[10] NeuroTouch. http://www.nrc-cnrc.gc.ca/eng/dimensions/issue2/virtual_surgery.html.
[11] Z. Zhang. A Flexible New Technique for Camera Calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330-1334, 2000.
