Development of 3D, real-time microscope with touch interaction
X. Asay-Davis, T. Foley, J. Squier, K. Wilson
University of California, San Diego, Department of Chemistry, Urey Hall Addition, Mail Code 0399, La Jolla, CA 92093-0339, U.S.A.

Abstract
We describe the development of a three-dimensional, real-time environment that enables seamless interaction with the microscopic world.
Key words: microscope, touch interaction
1. Introduction
To date, microscopes are used primarily as instruments for viewing phenomena on the spatial scale of micrometers. The development of tools such as laser tweezers [1] has made it possible to interact with these phenomena mechanically as well as optically. The goal of this work is to combine the optical and mechanical aspects of these two tools and create a unique environment in which they are implemented. In this environment, the user will be able to perform sophisticated mechanical tasks at the microscopic level without the need for in-depth knowledge of how such mechanical interactions are accomplished. What you see in the image, you can touch and manipulate.
The challenges in creating this environment are many. It requires real-time data acquisition, instantaneous visualization in three dimensions (visual feedback), and control of the microscope using touch interaction (mechanical feedback). In the following, we describe how these challenges are presently being met.
2. Description of three-dimensional visualization
To create three-dimensional images from our microscope we are working to implement newly developed high-speed multiphoton, multifoci imaging techniques such as those described by Buist [2] and Bewersdorf [3]. The novel multifoci geometry developed by our group for this application is described elsewhere [4]. In this paper we concentrate on the manipulation of the cross-sectional data sets produced by the microscope, and how they are rendered into three-dimensional images in real time.
Normally, real-time three-dimensional graphics involve polygon rendering. This method is good for displaying precomputed surfaces at high speed, given the abundance of inexpensive hardware designed specifically for rendering polygons. However, this approach proved impractical in our application for the following reasons. Our digital camera gives us two-dimensional pixel cross sections and we do not have time to precompute a polygon representation for display. In addition, our data sets tend to be complex enough that a high-fidelity representation would require too many polygons for real-time display. For these reasons, we have chosen to display our data sets using volume rendering.
In volume rendering the cross sections of the data are stored directly in memory with no preprocessing. The data becomes a three-dimensional volume, a grid of cubes called voxels. Rendering is done by casting rays through the volume which accumulate color and opacity from the voxels they pass through. If done in software, however, this technique is too slow to be interactive. Therefore, we employ a hardware card to speed up the process.
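Purely as an illustration of what each ray does, here is a minimal software sketch (the volume size, transfer function and unit step size are assumptions; this naive loop is precisely the part that is too slow to be interactive in software):

    #include <array>
    #include <cstdint>
    #include <vector>

    constexpr int DIM = 256;                       // assumed cubic volume size

    struct RGBA { float r, g, b, a; };

    // Assumed transfer function: map a raw voxel value to color and opacity.
    RGBA classify(std::uint8_t v) {
        float x = v / 255.0f;
        return { x, x, x, x * 0.05f };
    }

    std::uint8_t sample(const std::vector<std::uint8_t>& vol, int x, int y, int z) {
        if (x < 0 || y < 0 || z < 0 || x >= DIM || y >= DIM || z >= DIM) return 0;
        return vol[(z * DIM + y) * DIM + x];       // slices stored contiguously, as acquired
    }

    // Cast one ray front to back, accumulating color and opacity from the voxels
    // it passes through, stopping early once the ray is effectively opaque.
    RGBA castRay(const std::vector<std::uint8_t>& vol,
                 std::array<float, 3> origin, std::array<float, 3> dir) {
        RGBA acc = {0, 0, 0, 0};
        for (float t = 0; t < 1.8f * DIM && acc.a < 0.98f; t += 1.0f) {
            int x = int(origin[0] + t * dir[0]);
            int y = int(origin[1] + t * dir[1]);
            int z = int(origin[2] + t * dir[2]);
            RGBA s = classify(sample(vol, x, y, z));
            float w = s.a * (1.0f - acc.a);        // front-to-back "over" compositing
            acc.r += s.r * w;  acc.g += s.g * w;  acc.b += s.b * w;
            acc.a += w;
        }
        return acc;
    }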
The hardware card we use is the VolumePro 500 made by Mitsubishi [5]. This card is capable of rendering a 256 by 256 by 256 volume in real time (30 Hz). It is, in fact, the first single-chip real-time volume rendering engine. The card has 128 MB of volume memory, enough to store 512 by 512 by 256 voxels. The VolumePro is able to render volumes made of stacked cross-sectional images without any precomputation on the data. The card allows us to change rotation, scaling, material classification, colors and lighting instantly. The speed of the VolumePro means that we can visualize a sequence captured in real time through a frame grabber or digital camera that, in turn, is a live feed from the microscope. Together, these features will give us the capability for both real-time display and real-time update of microscopic data.
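The card is programmed through its vendor library, which we do not reproduce here; the sketch below uses hypothetical grabFrame, uploadSlice, setViewing and renderVolume stubs (illustrative names only, not the card's actual interface) simply to show the update loop this enables:

    #include <cstdint>

    // Hypothetical wrappers; these names are illustrative stand-ins, not the
    // card's actual programming interface.
    struct Frame { const std::uint8_t* pixels; int width, height; };
    Frame grabFrame()                          { return {nullptr, 0, 0}; } // live camera image (stub)
    void  uploadSlice(int, const Frame&)       {}  // write one cross section into volume memory (stub)
    void  setViewing(const float[3], float)    {}  // rotation and scaling (stub)
    void  renderVolume()                       {}  // one real-time rendering pass (stub)

    // Update loop we rely on: each new cross section is written straight into
    // volume memory (no precomputation) and the next frame is rendered with the
    // current viewing parameters.
    void acquisitionLoop(int numSlices) {
        int z = 0;
        float rot[3] = {0.0f, 0.0f, 0.0f};
        for (;;) {
            uploadSlice(z, grabFrame());   // newest slice replaces the oldest
            z = (z + 1) % numSlices;
            setViewing(rot, 1.0f);         // viewing parameters can change every frame
            renderVolume();
        }
    }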
Figure one shows a confocal data set as rendered by the VolumePro. The letters UCSD have been etched into glass by a high-power laser. The surface shown is a region of constant data value, or an isosurface. Our software automatically found the isosurface of maximum gradient, which is the most noticeable boundary in the data. In this case that surface is the material boundary between glass and air pockets. We can add other isosurfaces freely. In addition, we have given the volume physical properties including mass and stiffness, which are important when we integrate touch interaction.

Figure 1. Image from a real-time three-dimensional rendering of a confocal data set.
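One simple way to make that choice, sketched here as an illustration rather than our exact implementation (the per-value histogram is an assumption), is to pick the data value whose voxels have the largest average gradient magnitude:

    #include <cmath>
    #include <cstdint>
    #include <vector>

    // For every interior voxel, accumulate its central-difference gradient
    // magnitude into a bin keyed by the voxel's value, then return the value
    // with the highest average gradient, i.e. the most noticeable boundary.
    std::uint8_t isovalueOfMaxGradient(const std::vector<std::uint8_t>& vol, int dim) {
        std::vector<double> gradSum(256, 0.0);
        std::vector<long>   count(256, 0);
        auto at = [&](int x, int y, int z) { return vol[(z * dim + y) * dim + x]; };

        for (int z = 1; z < dim - 1; ++z)
            for (int y = 1; y < dim - 1; ++y)
                for (int x = 1; x < dim - 1; ++x) {
                    double gx = at(x + 1, y, z) - at(x - 1, y, z);
                    double gy = at(x, y + 1, z) - at(x, y - 1, z);
                    double gz = at(x, y, z + 1) - at(x, y, z - 1);
                    std::uint8_t v = at(x, y, z);
                    gradSum[v] += std::sqrt(gx * gx + gy * gy + gz * gz);
                    count[v]   += 1;
                }

        int best = 0;
        double bestAvg = 0.0;
        for (int v = 0; v < 256; ++v)
            if (count[v] > 0 && gradSum[v] / count[v] > bestAvg) {
                bestAvg = gradSum[v] / count[v];
                best = v;
            }
        return std::uint8_t(best);
    }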
3. Touch Interaction
We choose to allow exploration of confocal data using haptics, also known as touch interaction, for the following reasons. It allows users to explore details of the data that do not come across visually (including small surface details, and features that are hidden from view or in the interior of an object). We can use touch as both a means of input and output. Haptics are able to respond immediately to the user's actions, forming a tight feedback loop. This is not possible with the other senses. It is truly three-dimensional manipulation and display, as opposed to mice and joysticks. For controlling objects, it was also important that the haptics be intuitive, allowing the user to directly control the position of an object. This is in contrast to the use of a joystick, in which the rotation of the joystick is often used to control an object's velocity.
We chose the Phantom system from SensAble Technologies for our haptics [6]. The Phantom reads both position and rotation (six degrees of freedom) and applies forces along the x, y and z axes. It possesses a positional accuracy of 20 μm, which is better than our ability to hold the Phantom still. In addition, the haptics feature an update rate of 1 kHz, fast enough to fool the mind into feeling solid, realistic surfaces. Finally, the Phantom simplifies computation (over force-feedback gloves, for instance) by only simulating one point of contact: the user holds a stylus (which represents a virtual tool) and uses it to feel simulated surfaces.
To allow a user to explore confocal data we note the data value of the isosurface they see. This isosurface is what they will "feel." A traditional technique is to let the user probe into the surface and apply a "penalty" force to them based on their depth. Since we do not have time to find a mathematically precise surface, we use the difference in data values between the surface and the probe location to estimate distance, and the local gradient to estimate a surface normal. When the data is discontinuous or the stiffness coefficient is too high, this does not work, and the haptic device vibrates.
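A minimal sketch of this estimate follows (the stiffness k and the sign convention that larger data values lie inside the surface are assumptions; sample and grad stand in for whatever interpolated lookups the data set provides):

    #include <cmath>
    #include <functional>

    struct Vec3 { float x, y, z; };

    // Penalty force at the probe position: depth is estimated from data values,
    // the surface normal from the local gradient, and k is the stiffness coefficient.
    Vec3 penaltyForce(const Vec3& probe, float isovalue, float k,
                      const std::function<float(const Vec3&)>& sample,
                      const std::function<Vec3(const Vec3&)>& grad) {
        float value = sample(probe);
        Vec3  g     = grad(probe);
        float gmag  = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
        if (gmag < 1e-6f) return {0, 0, 0};          // flat region: no reliable normal

        float depth = (value - isovalue) / gmag;     // assumes larger values lie inside
        if (depth <= 0.0f) return {0, 0, 0};         // probe is outside the surface

        float s = -k * depth / gmag;                 // F = -k * depth along the outward normal
        return { s * g.x, s * g.y, s * g.z };
    }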
Figure 2. Schematic of forces used in the spring-damper model (haptic user location, penalty force F = -kx, virtual mass in contact with the surface).

To solve this problem and provide the most realistic "feel" to our object we use a spring-damper model commonly applied to these problems. The spring-damper model adds a level of indirection to the force calculations. The user no longer directly interacts with the surface. Instead they pull around a virtual mass, attached to the end of the stylus by a spring. The mass interacts with the volume using our penalty force model. The stiffness is set high so that the mass gets "caught" on the surface. Since the user does not directly feel forces applied to the mass, they do not notice any resonance. Since the mass is caught on the surface, the length of the stretched spring is an accurate measure of penetration distance, and the spring force is similar to a penalty force.
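Below is a minimal sketch of one step of this indirection (reusing the Vec3 struct from the penalty-force sketch above; the explicit Euler integration, the damping term, and all constants are simplifying assumptions): the stylus drags the virtual mass through a spring, the mass alone feels the stiff penalty force, and only the spring reaction is returned to the user.

    // One simulation step of the spring-damper model. The stylus pulls the
    // virtual mass through a spring (stiffness kSpring) and damper (bDamp);
    // the volume pushes back on the mass through the stiff penalty force.
    struct ProxyState { Vec3 pos, vel; };

    Vec3 springDamperStep(ProxyState& mass, const Vec3& stylus,
                          const Vec3& penaltyOnMass,
                          float kSpring, float bDamp, float m, float dt) {
        Vec3 stretch = { stylus.x - mass.pos.x,
                         stylus.y - mass.pos.y,
                         stylus.z - mass.pos.z };

        // Spring-damper force pulling the mass toward the stylus.
        Vec3 springOnMass = { kSpring * stretch.x - bDamp * mass.vel.x,
                              kSpring * stretch.y - bDamp * mass.vel.y,
                              kSpring * stretch.z - bDamp * mass.vel.z };

        // Integrate the mass under the spring-damper and penalty forces.
        mass.vel.x += (springOnMass.x + penaltyOnMass.x) / m * dt;
        mass.vel.y += (springOnMass.y + penaltyOnMass.y) / m * dt;
        mass.vel.z += (springOnMass.z + penaltyOnMass.z) / m * dt;
        mass.pos.x += mass.vel.x * dt;
        mass.pos.y += mass.vel.y * dt;
        mass.pos.z += mass.vel.z * dt;

        // Reaction felt by the user: the spring pulling the hand back toward the mass,
        // so any resonance at the mass never reaches the hand directly.
        return { -kSpring * stretch.x, -kSpring * stretch.y, -kSpring * stretch.z };
    }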
Figure two shows what happens when the user has pulled the virtual mass into a surface. We apply a penalty force to the mass to hold it anchored at the boundary. The user is located a distance d away from the mass, and the spring force pulls them back towards the surface. Taking the spring force on the mass to be the applied force and the direction of the penalty force to be a normal vector, we can calculate realistic static and kinetic friction and make the force feedback very convincing. It is, however, just feedback. Although the user can apply forces to the data they see on screen, we need something that will translate these macroscopic forces into microscopic ones, and apply them to the specimen.
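One way to realize this, sketched under the assumption of a simple Coulomb friction model with illustrative coefficients (again reusing Vec3), is to split the spring force into components normal and tangential to the penalty-force direction and clamp the tangential part by the friction cone:

    #include <cmath>

    // Below muStatic * |normal| the surface holds (static friction);
    // above it the contact slides and resists with muKinetic * |normal|.
    Vec3 frictionClampedForce(const Vec3& spring, const Vec3& penalty,
                              float muStatic, float muKinetic) {
        float pmag = std::sqrt(penalty.x * penalty.x + penalty.y * penalty.y +
                               penalty.z * penalty.z);
        if (pmag < 1e-6f) return spring;                    // no contact, nothing to clamp

        Vec3 n = { penalty.x / pmag, penalty.y / pmag, penalty.z / pmag };
        float fn = spring.x * n.x + spring.y * n.y + spring.z * n.z;   // normal component
        Vec3 ft = { spring.x - fn * n.x, spring.y - fn * n.y, spring.z - fn * n.z };
        float ftMag = std::sqrt(ft.x * ft.x + ft.y * ft.y + ft.z * ft.z);

        float limit = muStatic * std::fabs(fn);
        if (ftMag <= limit || ftMag < 1e-6f) return spring; // sticks: static friction holds

        float scale = muKinetic * std::fabs(fn) / ftMag;    // slides: kinetic friction
        return { fn * n.x + ft.x * scale,
                 fn * n.y + ft.y * scale,
                 fn * n.z + ft.z * scale };
    }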
4. Touch interaction with the Microscope
We want to allow the user to become an active participant in the system, rather than just an observer. Haptics provides an intuitive interface for manipulating macroscopic objects: we can push, pull, lift and probe them. It can allow us to do the same with our microscopic specimens. In order to use haptics in conjunction with a microscope, we need a tool that can apply push-and-pull forces at the microscopic level. Laser tweezers are an obvious choice: they provide non-damaging physical forces that can easily be controlled. The laser has the added advantage that the environment does not affect it and it
can safely penetrate most kinds of surfaces. For example, laser tweezers might be used to manipulate organelles without damaging the cell membrane that surrounds them.
Figure 3. Microscope schematic (diode laser, anamorphic optics, X-Y galvanometric scan head, objective, specimen, white-light source, camera).

In order to integrate laser tweezers into our microscope, we need a basic understanding of the principles on which they operate. For the purposes of demonstrating how they work, we assume that we are working with polystyrene microbeads: perfect spheres with a high index of refraction with respect to the surrounding medium (water). As the laser moves to the left
of a bead, refraction causes a change in photon momentum to the right. As a result, the bead moves left toward the focus of the laser. Similarly, as the focus of the laser moves above the bead, the photons experience a downward change in momentum and the bead moves up. In this way, the bead can be kept at the focus of the laser regardless of the direction in which we move the laser.
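For small displacements of the bead from the focus, this momentum argument behaves like a linear restoring force, the same form we assume later for force feedback (the trap stiffness k_trap is a sample- and power-dependent constant, treated here purely as an assumption):

    \[
    \mathbf{F}_{\mathrm{trap}} \approx -\,k_{\mathrm{trap}}\left(\mathbf{r}_{\mathrm{bead}} - \mathbf{r}_{\mathrm{focus}}\right),
    \qquad \left|\mathbf{r}_{\mathrm{bead}} - \mathbf{r}_{\mathrm{focus}}\right| \lesssim R_{\mathrm{bead}},
    \]

with the force falling to zero once the focus moves farther than about one bead radius from the bead center.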
To build a microscope that uses laser tweezers, we start with a diode laser. We pass the beam through anamorphic optics to correct the beam shape. X-Y galvanometric scanners are employed to allow computer control of the laser beam position. A 0.6 NA 40x objective magnifies the specimen and focuses the laser into the specimen. A white-light source illuminates the specimen from below. The white-light image is projected onto a digital camera. We can optionally filter out the laser wavelength before the image reaches the camera, producing a cleaner picture.
A diode laser was chosen for the laser tweezers since it is an inexpensive, easily manipulated laser. In particular, we chose an SDL-5402-H1 for its wavelength (852 nm, therefore low absorption in biological samples), power (up to 200 mW continuous wave), and focusing capability (single spatial mode, near diffraction-limited). Galvanometric scanners are used for manipulating the beam position. They are achromatic, and can be used at many different wavelengths. They also feature convenient feedback on position, fast access time, and provide an imperceptible delay between haptic movement and laser movement (GSI Lumonics VM2000).
To allow Phantom control of the galvanometric scanners, we calibrate two (x, y) positions in the scanner range to the boundaries of the Phantom workspace. This has two benefits: First, we can easily limit the range of motion of the laser to an area of interest within the specimen, for instance the visible portion of the specimen. Second, we can know programmatically where the laser is focused in the specimen. This lets us filter the laser wavelength out of the image (producing a clean image with no oversaturation) and instead superimpose a digital cursor that gives accurate feedback on the laser position.
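A minimal sketch of this mapping, assuming the two calibrated scanner positions bound a rectangle and that Phantom coordinates arrive already projected onto its x-y workspace (both assumptions):

    struct XY { float x, y; };

    // Linear map from the Phantom workspace onto the galvanometric scanner range.
    // (pMin, pMax) are the Phantom workspace corners and (sMin, sMax) the two
    // calibrated scanner positions; clamping keeps the laser inside the region
    // of interest in the specimen. The inverse of this map tells us where the
    // laser is focused in the image, which is where we draw the digital cursor.
    XY phantomToScanner(XY p, XY pMin, XY pMax, XY sMin, XY sMax) {
        auto clamp01 = [](float t) { return t < 0.f ? 0.f : (t > 1.f ? 1.f : t); };
        float tx = clamp01((p.x - pMin.x) / (pMax.x - pMin.x));
        float ty = clamp01((p.y - pMin.y) / (pMax.y - pMin.y));
        return { sMin.x + tx * (sMax.x - sMin.x),
                 sMin.y + ty * (sMax.y - sMin.y) };
    }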
Figure four is a picture of the Phantom apparatus in use, while figure five shows an image of the specimen (10 μm polystyrene microspheres) as displayed on the computer screen. The 'X' marks the location of the laser beam under control of the Phantom. Through the Phantom interface the user moves the laser tweezers, which in turn drag the bead in the desired direction.

Figure 5. Image of the specimen as seen on the computer. The X marks the position of the laser and the bead that is presently being manipulated with the laser tweezer-haptic interface.
To make trapping easier, we apply a force to the user opposing the forces applied to the beads by the laser tweezers. To achieve this, we have to know the effect that the laser has on the beads. As an initial estimate, we assume that the force of the
laser on a bead is proportional to the distance between their centers up to the radius of the bead. Outside this radius the force is zero. Thus, our force feedback takes the form of a spring force pulling the laser toward the center of nearby beads. To calculate the spring force, all we need to know is the position of the beads and of the laser. The latter is already determined
by the above-mentioned scanner calibration, while the former is much harder to determine. Currently we simply find the brightest spots in the image and take them to be the centers of beads. However, this technique depends on the focus of the image and the contrast between the beads and the background. We are investigating the use of pattern matching to find the centers of the beads more accurately and robustly. Even with our current methods, we have found that haptic feedback aids users in trapping and manipulating beads.
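The following sketch illustrates the current approach under illustrative assumptions (an 8-bit grayscale image, a fixed brightness threshold, and a linear trap stiffness kTrap are all placeholders): the brightest local maxima are taken to be bead centers, and the feedback is a spring force pulling the laser toward any center within one bead radius.

    #include <cmath>
    #include <cstdint>
    #include <vector>

    struct Pt { float x, y; };

    // Very simple bead finder: every pixel above `threshold` that is at least as
    // bright as its four neighbors is taken to be a bead center. (A placeholder
    // for the pattern-matching approach under investigation.)
    std::vector<Pt> findBeadCenters(const std::vector<std::uint8_t>& img,
                                    int w, int h, std::uint8_t threshold) {
        std::vector<Pt> centers;
        for (int y = 1; y < h - 1; ++y)
            for (int x = 1; x < w - 1; ++x) {
                std::uint8_t v = img[y * w + x];
                if (v >= threshold &&
                    v >= img[y * w + x - 1] && v >= img[y * w + x + 1] &&
                    v >= img[(y - 1) * w + x] && v >= img[(y + 1) * w + x])
                    centers.push_back({ float(x), float(y) });
            }
        return centers;
    }

    // Haptic feedback: a spring pulling the laser toward the nearest bead center,
    // active only within one bead radius and zero outside, matching the assumed
    // linear trap force.
    Pt trapFeedbackForce(Pt laser, const std::vector<Pt>& centers,
                         float beadRadius, float kTrap) {
        for (const Pt& c : centers) {
            float dx = c.x - laser.x, dy = c.y - laser.y;
            if (std::sqrt(dx * dx + dy * dy) <= beadRadius)
                return { kTrap * dx, kTrap * dy };   // pulls toward the bead center
        }
        return { 0.f, 0.f };
    }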
5. Conclusion
We have demonstrated real-time, three-dimensional rendering of and haptic interaction with confocal data sets. This capability will be joined with our multiphoton, multifoci microscope to produce three-dimensional images in real time. In addition, we have demonstrated haptic control of laser tweezers for the manipulation of microbeads. The haptic/laser tweezer interface will be combined with our confocal data display to allow real-time touch interaction with the specimen observed in our 3D microscope.
6. References
1. M. Berns, "Laser Scissors and Tweezers," Scientific American, 62-67, April 1998.
2. A.H. Buist, M. Muller, J. Squier, G.J. Brakenhoff, "Real time two-photon absorption microscopy using multipoint excitation," J. Microsc. 192, 217-226 (1998).
3. J. Bewersdorf, R. Pick, S. Hell, "Multifocal multiphoton microscopy," Opt. Lett. 23, 655-657 (1998).
4. D. Fittinghoff, J. Squier, to be submitted for publication, Optics Letters.
5. Real Time Visualization, http://www.rtviz.com
6. SensAble Technologies, http://www.sensable.com