Multimodal and multi-informational neuronavigation







P. Jannin, O. J. Fleig, E. Seigneuret, X. Morandi, M. Raimbault and J.-M. Scarabin



Laboratoire SIM, Faculté de Médecine, Université de Rennes 1, 2 Avenue du Pr. Léon Bernard, 35043 Rennes Cedex (France) 

Carl Zeiss, 60 Route de Sartrouville, 78230 Le Pecq (France) 

Neurosurgery Department, University Hospital Rennes, 2 Avenue du Pr. Léon Bernard, 35043 Rennes Cedex (France)

Contact: [email protected]
http://sim3.univ-rennes1.fr

This paper presents the principle of multimodal and multi-informational neuronavigation as well as the data fusion environment we have developed for these techniques. Our fusion environment supports the planning step and includes registration, segmentation, 3D visualisation as well as data browsing and interaction tools. We provide a neuronavigation system with relevant information for the performance of the surgical act. The information is extracted from multimodal data such as magnetic resonance imaging (MRI), magneto-encephalography (MEG) and functional magnetic resonance imaging (fMRI). We demonstrate multimodal neuronavigation on a clinical case with multimodal information injected into the overlay (head-up) display of a neurosurgical microscope (Surgical Microscope Navigator by Carl Zeiss, Oberkochen/Germany). Our first experiences show that access to pre-operative functional and anatomical data within the neuronavigation system adds valuable support for the surgical act in many surgical contexts (cavernoma, low-grade tumours, etc.).

Keywords: Neuronavigation, multimodal imaging, computer-assisted planning.

1. Introduction

Planning in neurosurgery consists in determining or defining target areas, functional and anatomical high-risk zones, landmark areas and trajectories. All these components and the anticipated stages of the intervention are part of what we call the surgical script. Today neurosurgeons have access to multimodal medical imaging to support the definition of their surgical script. Contours of a lesion, sulci segmented from MRI, or equivalent current dipoles from magneto-encephalography (MEG) might determine entities for this surgical script.

Registration tools that establish the correspondence between the coordinate systems of different imaging modalities make it possible to take advantage of their complementary nature. To execute the prepared scenario, the neurosurgeon needs to retrieve certain preselected entities within the surgical environment. Three-dimensional neuronavigation systems establish the correspondence between patient and world coordinate systems and register the patient in the operating theatre to the pre-operative multimodal data. We define multimodal and multi-informational neuronavigation as the retrieval and matching, in the operating theatre, of relevant information selected from pre-operative multimodal data. "Multi-informational" refers to the fact that different kinds of information can be extracted from a single modality: from anatomical MRI, for example, lesions, vessels, ventricles and sulci might be extracted for the definition of target areas, areas to be avoided, or reference landmarks. We have integrated and developed a software and hardware environment (fig. 1) allowing the acquisition and retrieval of multimodal data, the definition of the surgical script in a planning application, and the transfer of the selected multimodal information from the planning workstation to the neuronavigation system (SMN, Carl Zeiss). Information selected during planning can be displayed during surgery on the workstation of the navigation system as well as on the overlay (also referred to as head-up display) in the right ocular of the surgical microscope. We have automated the procedures from acquisition to surgery as much as possible, keeping user intervention to a minimum. First clinical results outline the interest and limits of our approach.

2. Materials and methods

Information required for the planning of the surgical act is extracted from different modalities. Each modality is registered to a 3D anatomical MRI data set. During the pre-operative planning step, the neurosurgeon selects relevant information for the surgical act from the multimodal data. The selected information is transformed and stored in a file normally used by the neuronavigation software (STP 3.4, Leibinger, Freiburg/Germany) to store hand-drawn target contours (fig. 2). The file contains the contours of the objects to be injected into the microscope's graphical overlay.

2.1. Acquisition and segmentation

We have developed segmentation tools for the automatic extraction of anatomical objects from an anatomical MRI data set. Morphological operators are used for brain and lesional area segmentation. Classification methods are used for the segmentation of white matter, gray matter and ventricles [1]. We have also developed a method for the segmentation of cortical sulci based on active contours and curvature analysis [2]. This method computes a compact numerical description of a sulcus by modelling its median surface; each extracted median surface is modelled by a B-spline. Depending on the anatomical localisation of the lesion, motor, somato-sensory or language zones are stimulated for both MEG and fMRI recordings. After reconstruction of the signal, MEG acquisitions provide either the coordinates of the MEG equivalent current dipoles or a three-dimensional map of the correlation indexes. fMRI data are analysed using the Functool software package (General Electric Medical Systems). A correlation volume is generated, and significant volumes of interest with cross-correlation values above 0.5 are selected by a radiologist. The results are stored in the patient's database as a list of points or as a three-dimensional texture volume.
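The fMRI selection step above can be sketched as a threshold followed by connected-component grouping. This is a hypothetical illustration, assuming the 0.5 cross-correlation cut-off mentioned in the text; in the actual workflow a radiologist performs this selection with GE's Functool package, and the cluster-size floor `min_voxels` is an invented parameter.

```python
import numpy as np

def select_fmri_vois(correlation_volume, threshold=0.5, min_voxels=5):
    """Group supra-threshold voxels into 6-connected clusters and return
    each cluster's voxel list and centroid. Illustrative sketch only."""
    mask = correlation_volume > threshold
    labels = np.zeros(mask.shape, dtype=int)  # 0 = unvisited
    current = 0
    vois = []
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                          # already assigned to a cluster
        current += 1
        labels[seed] = current
        stack, voxels = [seed], []
        while stack:                          # flood fill from the seed voxel
            z, y, x = stack.pop()
            voxels.append((z, y, x))
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < mask.shape[i] for i in range(3)) \
                        and mask[n] and not labels[n]:
                    labels[n] = current
                    stack.append(n)
        if len(voxels) >= min_voxels:         # drop tiny, likely-noise clusters
            vois.append({"voxels": voxels,
                         "centroid": tuple(np.mean(voxels, axis=0))})
    return vois
```

The centroid of each volume of interest is what would be stored in the patient database as a point entity; the voxel list supports the texture-volume representation.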

Figure 1. Data fusion hardware and software environment.
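The correspondences between image, planning and patient coordinate systems that this environment relies on reduce, for rigid registration, to 4x4 homogeneous transforms applied to every point. A minimal sketch with hypothetical helper names (not part of the SMN/STP software):

```python
import numpy as np

def rigid_transform(rotation_deg, translation_mm):
    """Build a 4x4 homogeneous matrix from Euler angles (degrees, applied
    in z-y-x order) and a translation in mm."""
    rz, ry, rx = np.deg2rad(rotation_deg)
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz),  np.cos(rz), 0],
                   [0,           0,          1]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [ 0,          1, 0         ],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rx = np.array([[1, 0,          0          ],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = translation_mm
    return T

def map_points(T, points_mm):
    """Map an (N, 3) array of points from one coordinate system to another."""
    homogeneous = np.c_[points_mm, np.ones(len(points_mm))]
    return (homogeneous @ T.T)[:, :3]
```

Chaining such transforms (modality-to-MRI, then MRI-to-patient) is what lets an entity selected during planning be drawn at the correct position in the operating theatre.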

Figure 2. Data representation and encoding. Each entity is converted to the proprietary STP file format (STP VOI).
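Point entities such as MEG dipoles carry no surface of their own, so before encoding as STP VOIs they must be given one. One plausible approach is to tessellate a small sphere around each dipole position; in this sketch the radius and mesh resolution are illustrative assumptions, not the values used by the system.

```python
import numpy as np

def dipole_to_sphere_mesh(center_mm, radius_mm=3.0, n_theta=8, n_phi=16):
    """Return (vertices, quads) of a latitude-longitude sphere quad-mesh
    centred on a dipole position, usable as a surface representation."""
    verts = []
    for i in range(n_theta + 1):
        theta = np.pi * i / n_theta           # polar angle, pole to pole
        for j in range(n_phi):
            phi = 2 * np.pi * j / n_phi       # azimuth around the sphere
            verts.append([
                center_mm[0] + radius_mm * np.sin(theta) * np.cos(phi),
                center_mm[1] + radius_mm * np.sin(theta) * np.sin(phi),
                center_mm[2] + radius_mm * np.cos(theta),
            ])
    quads = []
    for i in range(n_theta):                  # connect adjacent latitude rings
        for j in range(n_phi):
            a = i * n_phi + j
            b = i * n_phi + (j + 1) % n_phi   # wrap around in azimuth
            quads.append((a, b, b + n_phi, a + n_phi))
    return np.array(verts), quads
```

Volume entities (lesions, VOIs) instead go through an isosurface or alpha-shape extraction step, as described in section 2.2.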

2.2. Planning

Each modality and information is registered to the anatomical 3D SPGR MRI volume, which is considered the reference volume. The MEG/MRI registration is computed by the STA/R software (Biomagnetic Technologies Inc.) and is based on a head-shape surface registration [3]. The co-registration of fMRI and MRI data sets is based on mutual information [4,5]. For each modality, the user manually selects the entities required for surgery. Due to the specifications of the SMN system, each selected entity has to be transformed from its initial representation (points in three-dimensional space, or a volume) to a surface-based representation and stored in the proprietary SMN file format (fig. 2). Suitable 3D surface representations (spheres, quad-meshes, polygons) and generation algorithms (alpha shapes [6], marching cubes [7]) are used for the different modalities (fig. 4).
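The mutual-information criterion behind the fMRI/MRI co-registration [4,5] can be illustrated from a joint intensity histogram; the registration then searches for the rigid transform that maximises it. This is a didactic sketch of the criterion only, not the STA/R or clinical implementation.

```python
import numpy as np

def mutual_information(fixed, moving, bins=32):
    """Mutual information I(X;Y) between two intensity volumes, estimated
    from their joint histogram. Higher values mean better alignment."""
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)       # marginal of the fixed image
    py = pxy.sum(axis=0, keepdims=True)       # marginal of the moving image
    nz = pxy > 0                              # skip empty bins: 0 * log 0 = 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Identical (perfectly registered) volumes give the maximum, the marginal entropy; statistically independent volumes give a value near zero, which is why maximising this quantity over candidate transforms drives the two modalities into alignment.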

2.3. Surgical step

Selected information stored as 3D contours is displayed on the workstation of the neuronavigation system as 2D coloured outlines in the MRI cut planes (fig. 5). The surgeon can also choose to superimpose information, as computer-generated graphics, over the real view of the operating field seen through the right ocular of the microscope. The overlay graphics consist of monochrome contours: the sections of the selected objects with the focal plane (fig. 6).

3. Results

We have been using the SMN system for neurosurgery since 1995. We began manually integrating functional information (MEG) into the neuronavigation system in 1997 [8]. Since

Figure 3. MRI volume rendering showing the superior frontal, precentral and central sulci, somato-sensory MEG and motor fMRI.

Figure 4. 3D surface-based representation of the same entities as in fig. 3.

the beginning of 1999 we have been developing an environment allowing the automatic insertion of multimodal information into the SMN system. This environment has been used on 15 patients for epilepsy and tumour surgery. The example in figs. 3, 4, 5 and 6 shows a patient with an oligo-astrocytoma. The functional information consists of finger-tapping fMRI recordings and somato-sensory MEG recordings of the right thumb.

4. Discussion

Making multimodal information available in the neuronavigation system proved useful for the following reasons. Lesional areas are used to define and follow the surgical path or trajectory. Sulci proved very helpful for defining a trajectory to a cavernoma and the limits of resection, and they supported the recognition of the anatomical environment during the surgical act. Sulcal cartography performed after segmentation helps the surgeon mentally build a road map of the patient's anatomy. This "mental image" enables the surgeon, even in the case of anatomical deformation, to orientate herself in the surgical field. Vessels also support the identification of the anatomical environment of the surgical field. Finally, MEG and fMRI functional information outlines the position of functional high-risk areas.

Few papers have been published on the interest of sulcal information during neuronavigation. In [9] the advantages are not explained in detail and intra-operative use of the sulci is only a prospective issue. Although automatic sulci segmentation methods are still subject to performance limitations, manual segmentation is not applicable in clinical routine. Some work on the use of functional information in connection with neuronavigation systems has previously been published [10-14]. The authors outlined several benefits from this

Figure 5. STP 3.4 software environment during surgery showing entity contours in a cut plane perpendicular to the optical axis.

Figure 6. Intra-operative microscope view: Overlay with precentral and central sulcus, fMRI (motor) and MEG (somato-sensory).

integration:

- no need to perform invasive intra-operative electrical mapping procedures;
- surgery might be replaced by biopsy evaluation prior to radiotherapy and/or chemotherapy [14];
- smaller craniotomy and a safer trajectory by decreasing the risk of functional morbidity [13];
- identification of critical structures that are not visible via the usual surgical exposure [12];
- improvement of functional outcome for surgery around eloquent brain areas [10,11].

However, in contrast to our automatic approach, these papers always describe a manual integration of multimodal (especially functional) information into the neuronavigation system. There are still limitations of our system to overcome:

- anatomical deformation during surgery is neither tracked nor corrected for;
- the automatic sulci segmentation procedure is sometimes unstable on highly distorted or abnormal anatomy;
- the spatial accuracy of functional imaging information, and its validation, is still subject to research;
- the quality of the overlay is limited (fig. 6); to take full advantage of the augmented-reality feature, a coloured 3D stereo-vision overlay seems indispensable.

5. Conclusion

We propose a system for multimodal and multi-informational neuronavigation. The system permits the surgeon to access information defined during the planning step at the moment when it is crucial: in

the operating theatre. Still, the pre-planning, planning and surgical steps are designed as separate tasks, which allows our approach to be used independently of any particular neuronavigation system, and new modalities can easily be integrated. Our system proves very useful for guidance during surgical procedures, especially due to the display of sulci and/or functional information in the ocular of the surgical microscope.

REFERENCES

1. F. Lachmann and C. Barillot. Brain tissue classification from MRI data by means of texture analysis. Proceedings of SPIE Medical Imaging, 1652:72-83, 1992.
2. G. Le Goualher, C. Barillot, and Y. Bizais. Modelling cortical sulci with active ribbons. International Journal of Pattern Recognition and Artificial Intelligence, 8:1295-1315, 1997.
3. D. Schwartz, D. Lemoine, E. Poiseau, and C. Barillot. Registration of MEG/EEG data with 3D MRI: methodology and precision issues. Brain Topography, 9(2):101-116, 1996.
4. F. Maes, A. Collignon, D. Vandermeulen, G. Marchal, and P. Suetens. Multimodality image registration by maximisation of mutual information. IEEE Transactions on Medical Imaging, 16(2):187-198, Apr 1997.
5. W. Wells III, P. Viola, H. Atsumi, S. Nakajima, and R. Kikinis. Multi-modal volume registration by maximisation of mutual information. Medical Image Analysis, 1:35-51, Mar 1996.
6. Herbert Edelsbrunner and Ernst P. Mücke. Three-dimensional alpha shapes. ACM Transactions on Graphics, 13(1):43-72, 1994.
7. Tim Poston, Tien-Tsin Wong, and Pheng-Ann Heng. Multiresolution isosurface extraction with adaptive skeleton climbing. EUROGRAPHICS, 17(3), 1998.
8. J.-M. Scarabin, P. Jannin, D. Schwartz, and X. Morandi. MEG and 3D navigation in image guided neurosurgery. In Proceedings of Computer Assisted Radiology and Surgery, 1997.
9. K. Niemann, U. Spetzer, V. A. Coenen, B. O. Hütter, W. Küker, and D. von Keyserling. Minimally Invasive Techniques for Neurosurgery, chapter Anatomically Guided Neuronavigation: First Experience with the SulcusEditor. Springer, 1998.
10. O. Ganslandt, R. Fahlbusch, C. Nimsky, H. Kober, M. Moller, R. Steinmeier, J. Romstock, and J. Vieth. Functional neuronavigation with magnetoencephalography: outcome in 50 patients with lesions around the motor cortex. Journal of Neurosurgery, 91(1):73-79, Jul 1999.
11. M. Hardenack, N. Bucher, A. Falk, and A. Harders. Preoperative planning and intraoperative navigation: status quo and perspectives. Computer Aided Surgery, 3(4), 1998.
12. J. A. Maldjian, M. Schulder, W. C. Liu, I. K. Mun, D. Hirschorn, R. Murthy, P. Carmel, and A. Kalnin. Intraoperative functional MRI using a real-time neurosurgical navigation system. Journal of Computer Assisted Tomography, 21(6):910-912, Nov-Dec 1997.
13. A. R. Rezai, M. Hund, E. Kronberg, M. Zonenshayn, J. Cappell, U. Ribary, B. Kall, R. Llinas, and P. J. Kelly. The interactive use of magnetoencephalography in stereotactic image-guided neurosurgery. Neurosurgery, 39(1):91-102, Jul 1996.
14. T. P. Roberts, E. Zusman, M. McDermott, N. Barbaro, and H. A. Rowley. Correlation of functional magnetic source imaging with intraoperative cortical stimulation in neurosurgical patients. Journal of Image Guided Surgery, 1(6):339-347, 1995.
