Proceedings of the 2005 IEEE International Conference on Robotics and Automation Barcelona, Spain, April 2005

Realistic Visual and Haptic Rendering for Biological-Cell Injection

Mehdi Ammi and Antoine Ferreira
Laboratoire Vision et Robotique, Université d'Orléans
10 bd Lahitolle, 18020 Bourges Cedex, France
Email: mehdi.ammi, [email protected]

Abstract— This paper presents a new biomicromanipulation system for biological objects such as embryos, cells, or oocytes. Because the cell is very small, kept in liquid, and observed through a microscope, the two-dimensional visual feedback makes accurate manipulation in the 3-D world difficult. To improve the manipulation work, we propose an augmented human-machine interface. Three-dimensional visual information is provided to the operator through a 3-D reconstruction method using vision-based tracking of the embryo's deformations. To perform stable injection tasks, the operator needs force feedback and haptic assistance while penetrating the cell envelope, the chorion. The proposed human-machine interface allows real-time, realistic visual and haptic control strategies for constrained motion in image coordinates. Virtual haptic rendering makes it possible to constrain the path of insertion and removal in the 3-D scene and to avoid cell destruction by adequately controlling position, velocity, and force parameters.

I. INTRODUCTION

Biological cell manipulation is of great interest for the analysis, diagnosis, and manipulation of single biological cells, e.g., gene or cell injection, membrane capacitance measurement using patch-clamping techniques, or intracytoplasmic sperm injection. Such biomanipulation tasks are performed by experienced manual operators relying only on the visual information from the optical microscope. Because of eyestrain, the efficiency and the survival rate are rather low. Furthermore, as biological cells are irregular in shape and easily deformable, they can be seriously damaged during manipulation and treatment by excessive force or hand tremor. Moreover, the liquid flow induced by micropipette motion in the petri dish acts as a disturbance to the cell manipulation. Because of these difficulties, cell manipulation tasks require much attention, yet the success rate is quite low even for skillful operators. Various cell injection systems have been developed to provide more controllable manipulation of biological cells. Prior published works focus on using teleoperated micromanipulators in combination with vision methods to improve guidance of the injection tasks [1][2][3]. In order to realize successful autonomous injection, image-based segmentation and visual tracking of cells have been attempted by several groups [4][5]. While segmentation of the egg and the holding pipette is relatively easy, the injecting needle is rarely completely in focus prior to the micro-injection and may be completely occluded by the biomembrane during insertion. This considerably complicates the segmentation procedure, leading to unstable visual tracking of the automatic microinjection task. An alternative to the previous works is the development of intelligent human-machine interfaces for 3-D sensing, visualization, and interaction adapted to the user's intention and skills. Few works have been carried out until now, mainly using virtual reality [6] or augmented reality [7] interfaces. However, such user interfaces are not truly multimodal and intuitive. This paper focuses on a virtualized-reality human-machine interface integrating force- and vision-based methods for augmented operator interaction. To improve the manipulation of the biological cell, we propose a three-dimensional (3-D) biomicromanipulation system based on virtual reality (VR) technology. The proposed user interface provides augmented visual and haptic feedback in order to guide and assist the teleoperator during cell injection. The biological medium and the cell deformations are reconstructed in real time in order to facilitate more uniform microinjection of embryos by selecting the appropriate 3-D free viewpoint in the VR space. By adding virtual haptic rendering modalities such as the cellular forces and the viscosity of the medium, the stability of the cell puncture is improved during teleoperation. Finally, the fusion of these different sensing modalities in the virtual environment allows realistic cell injection tasks to be performed. The paper is organized as follows. Section 2 presents the three-dimensional micromanipulation system. Section 3 presents the micro-world data extraction based on visual pattern detection, and the 3-D reconstruction in a virtual scene is presented in Section 4. Finally, Section 5 presents the fusion of visual and haptic rendering techniques provided to the operator.


II. 3-D BIO-MICROMANIPULATION SYSTEM

A. Experimental Setup

An immersive supervisory control system allowing complex human-machine interaction for 3-D biomicromanipulation is shown in Fig. 1. It is basically composed of VR input/output devices (force feedback, visual servoing, sound feedback), a multi-modal operator interface, and a microrobotic cell injection system. Different visual feedback techniques are provided, i.e., multiple-view monitoring, vision-based control, or 3-D visual immersion through a head-mounted display (HMD) system. To enhance the feeling of immersion, the operator can use a haptic interface (PHANToM, SensAble Technologies) with 6-DOF positional input and 3-DOF force output for force-feedback interaction. The microrobotic cell manipulation system is composed of a piezoelectric cellular force sensor, a cell holding unit, a vacuum unit, and a Nikon TE-300 inverted microscope, as shown in Fig. 2. The holding pipette was mounted on an oil hydraulic micromanipulator, and the injection pipette was mounted on an Eppendorf 5171 piezoelectric micromanipulator. The viewing camera is a Nikon Coolpix 990.

Fig. 1. Immersive 3-D bio-micromanipulation system.

Fig. 2. Microrobotic cell manipulation setup.

B. Experimental Method

Figure 3 shows the structure of the egg cell envelope and the plasma membranes (outer and inner) studied in this work. An embryo was manually selected and captured with the holding pipette, moved to the injection portion of the slide, and brought into focus. The main experimental steps are defined as follows: 1) keep the cell fixed relative to the micromanipulator using the holding pipette; 2) guide the injecting pipette to the edge of the embryo; 3) insert the injection pipette to puncture the biomembrane, then hold and deposit the substance; and 4) remove the injection pipette while constraining it to move back only along the direction of insertion.

Fig. 3. Structure of the egg cell envelope, the plasma membranes (outer and inner), the holding pipette, and the injection pipette.



III. MICRO-WORLD DATA EXTRACTION

The setup of the vision processing block consists of four parts: the optical microscope, the CCD camera, the image grabbing and processing board, and the image processing software. In this study, the total magnification of the microscope is fixed at 10. At this magnification, the valid area of the grabbed image is about 600 × 400 microns, with a resolution of 0.825 micron per pixel. The position information is then transferred to the main 3-D virtual engine module through Ethernet to realize the 3-D reconstruction of the biological environment. The speed of the vision system therefore depends strongly on the speed of the localization and recognition procedure: the faster the localization performs, the faster the system runs. The recognition of the cell and the micropipette can be considered as a matching problem. Two kinds of matching methods are often used: pattern matching and feature matching. Such methods have been applied taking into account the real-time constraints imposed by the micro-world data extraction.

A. Localization of the Pipettes

1) Injection Pipette Localization: The injection micropipette is mounted on a controllable micromanipulator with a high positioning precision. After a calibration procedure, the positioning of the micropipette is performed through a closed-loop controller with a repeatability of a few micrometers.

2) Holding Pipette Localization: As the holding micropipette is mounted on a manual micromanipulator, the vision system must find the accurate position of the pipette in the grabbed image. We use correlation-based template matching to localize the pipette in real time. First, we define a rectangular image block of the pipette. Then, we calculate the correlation between this pattern and the corresponding image region at every pixel of the grabbed image. The correlation represents the likelihood between the pattern and the corresponding region, and the point where it reaches its local maximum is taken as the matching point. In order to make the correlation insensitive to global luminosity variations, we use the normalized correlation coefficient:

NCC = Σ_{u,v} M(u,v) · I(x+u, y+v) / √( Σ_{u,v} M²(u,v) · Σ_{u,v} I²(x+u, y+v) )    (1)

where I is the grabbed image and M is the reference template. To optimize computation time, we limit the correlation to a preset window. The center of this window corresponds to the position of the pipette in the previous image, and its dimension is defined experimentally (according to the pipette's velocity).
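For illustration, the sketch below implements Eq. (1) restricted to such a search window. It is a minimal NumPy version: the function name, the window half-size, and the small epsilon guard against division by zero are our own choices, and an optimized library routine (e.g., OpenCV's normalized template matching) would typically be used in practice.

import numpy as np

def ncc_match(image, template, prev_xy, win=40):
    # Normalized correlation coefficient of Eq. (1), evaluated only inside a
    # search window centred on the pipette position found in the previous
    # image; prev_xy is that (x, y) position, win the window half-size.
    th, tw = template.shape
    px, py = prev_xy
    x0, y0 = max(px - win, 0), max(py - win, 0)
    x1 = min(px + win, image.shape[1] - tw)
    y1 = min(py + win, image.shape[0] - th)
    m = template.astype(float)
    m_norm = np.sqrt((m * m).sum())
    best, best_xy = -np.inf, prev_xy
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            patch = image[y:y + th, x:x + tw].astype(float)
            score = (m * patch).sum() / (m_norm * np.sqrt((patch * patch).sum()) + 1e-12)
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy, best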

B. Biological Cell Segmentation

In order to reconstruct the biomembrane geometry deformations in a three-dimensional virtual space, a robust deformable visual tracking algorithm is required. Contour extraction has been widely studied in image processing, and a number of methods have been proposed. The most commonly used edge-finding techniques are the gradient-based Prewitt, Sobel, and Laplace detectors [8]. Other contour-finding methods, such as the second-derivative zero-crossing detector [9] or the computational approach based on the Canny criteria [10], have also been reported. However, due to common image features like texture, noise, image blur, or other anomalies such as non-uniform scene illumination, edge-finding techniques frequently fail to produce confident results. Continuous image boundaries present in the source image may be represented by broken edge fragments or may not be detected at all. In some cases, further use of the edge information is hindered by the fact that the edges are composed of only a few pixels. Finally, all these techniques usually need post-processing steps to obtain contours that are connected and closed. The technique of active contours (namely, snakes), first presented by Kass et al. [11], is used in many applications, including edge detection, shape modelling, segmentation, pattern recognition, and object tracking. These techniques always produce closed contours and are well adapted to the segmentation of biological images.

1) Outer and Inner Biomembrane Segmentation: In the active contour approach, the contour is defined in the grabbed image as a parametric deformable curve v(s, t), where the parameter s is the spatial index and t is the time index. This deformable curve is a function of the coordinate variables x and y:

v(s, t) = (x(s, t), y(s, t)), s ∈ Ω, t ∈ T    (2)

where Ω and T are defined as open intervals. Each configuration of the contour is associated with a finite energy expressed by the following functional form:

E_total(v) = ∫_Ω [E_Elastic(v) + E_Bending(v) + E_External(v)]    (3)

E_Elastic and E_Bending are respectively the elastic and the bending energy: the elastic energy allows the active contour to shrink or expand [12], and the bending energy causes the active contour to be a smooth curve. The term E_External represents the energy modeling the external constraints; in the considered case, we choose the Sobel filter as the image gradient. The different energies can be expressed as:

E_Elastic(v) = (1/2) ∫_Ω α(s) |v_s|² ds
E_Bending(v) = (1/2) ∫_Ω β(s) |v_ss|² ds
E_External(v) = −|∇(v(s))|²

with v_s = dv(s)/ds and v_ss = d²v(s)/ds². The terms α, β, and γ represent scaling parameters, assumed to be constant throughout the length of the contour.

The adopted energy minimization algorithm is based on the Williams method. In this approach, the minimization is carried out in turn on each node of the snake, independently of the other nodes. If u_i is the current node, its energy value is evaluated by Eq. (3). The same operation is carried out for all positions in the close vicinity of u_i, as shown in Fig. 4. The minimum-energy position is chosen as the new position of u_i before processing the next node.

Fig. 4. Williams' node vicinity.
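A minimal Python sketch of this greedy, node-by-node minimization is given below, assuming a closed contour stored as an (n, 2) float array and a precomputed gradient-magnitude image. The discrete forms of the elastic (deviation from the mean node spacing), bending (second difference), and external (negative squared gradient) terms are standard discretizations rather than the paper's exact ones, and all parameter values are placeholders.

import numpy as np

def greedy_snake_pass(nodes, grad_mag, alpha=1.0, beta=1.0, gamma=1.0, r=1):
    # One pass of the Williams-style greedy minimization: each node u_i is
    # moved, in turn, to the minimum-energy position of its (2r+1)x(2r+1)
    # vicinity, independently of the other nodes.
    h, w = grad_mag.shape
    n = len(nodes)
    spacing = np.mean(np.linalg.norm(np.roll(nodes, -1, axis=0) - nodes, axis=1))
    for i in range(n):
        prev_pt, next_pt = nodes[i - 1], nodes[(i + 1) % n]
        best_e, best_p = np.inf, nodes[i].copy()
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                p = nodes[i] + np.array([dx, dy], dtype=float)
                x = int(np.clip(p[0], 0, w - 1))
                y = int(np.clip(p[1], 0, h - 1))
                e_elastic = alpha * (np.linalg.norm(p - prev_pt) - spacing) ** 2
                e_bending = beta * np.linalg.norm(prev_pt - 2 * p + next_pt) ** 2
                e_external = -gamma * grad_mag[y, x] ** 2
                e = e_elastic + e_bending + e_external
                if e < best_e:
                    best_e, best_p = e, p
        nodes[i] = best_p
    return nodes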

The presence of other objects, such as the holding pipette, the injection pipette, or the impurities of the medium, acts as a disturbance and makes the straightforward use of snakes impossible for contour tracking of the biomembrane deformations. For that reason, a succession of pre-processing steps is carried out first. The first step consists in erasing the vacuum holder and the injection micropipette from the image. As the positions of both components have been localized previously in the image space, the erasing operation is carried out by approximating the holding and injection pipettes as polygons (segmented connected components). The pipette/membrane contact points define the boundaries of these segments; these points are obtained by testing the gray level of the image along the pipettes' edges (see Fig. 5). The second step consists in locating the impurities and removing them from the image. Once the holding and injection pipettes are removed from the scene, we apply a Sobel filter to enhance the cell and the impurities against the biological medium. Finally, we apply a threshold filter in order to keep only the large gradient variations before labeling the connected sets. Once the biological cell is isolated, the use of snakes for the outer biomembrane becomes much easier and faster (see Fig. 6).

Fig. 5. Segmented connected contours for the injection pipette and the holding pipette, for the outer (left image) and inner (right image) biomembrane.

Fig. 6. Application of a threshold to recover the outer (left image) and inner (right image) cytoplasm.
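The following Python sketch, using scipy.ndimage, strings these pre-processing steps together. The rasterized pipette polygon masks, the background gray level, and the gradient threshold are assumed inputs, and taking the largest connected component as the cell is our own simplification of the labeling step.

import numpy as np
from scipy import ndimage

def isolate_cell(image, pipette_masks, bg_level, grad_thresh):
    # Erase the pipettes, enhance edges with a Sobel filter, threshold the
    # gradient magnitude and label the connected sets; the cell is then
    # taken as the largest remaining component, smaller ones being
    # treated as impurities.
    work = image.astype(float).copy()
    for mask in pipette_masks:        # rasterized polygon masks of the pipettes
        work[mask] = bg_level
    grad = np.hypot(ndimage.sobel(work, axis=1), ndimage.sobel(work, axis=0))
    binary = grad > grad_thresh       # keep only the large gradient variations
    labels, n = ndimage.label(binary)
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))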

2) Initialization of the Control Nodes: The free-form algorithm of the snakes has the generality that allows it to be used with a wide variety of object shapes. However, this generality also makes the gradient-based algorithm susceptible to noise and local minima, since no assumptions are made about the shape of the tracked object. In order to avoid these problems, the implementation of the snake contours includes two types of control node initialization. The first one is an initialization step performed off-line and based on the circular Hough transform: we assume that, before the micro-injection task, the biomembrane has a circular form. For the implementation of the Hough transform [13], we define a three-dimensional accumulator (the Hough space) whose first two dimensions represent the center of the circle and whose third dimension represents its radius. The second initialization is performed online and occurs before the energy minimization process: it gives the snake's control nodes an initial position close to the final one. In this way, it is possible to considerably reduce the computation time of the gradient-based algorithm. For each iteration, we take the last configuration of the snake as the starting position of the control nodes. As shown in Fig. 7, robust tracking of the cell deformation has been achieved. The experimental data of Fig. 8 show the linear dependence of the computation time on the number of control nodes. As a trade-off between computation time and contour tracking error, we choose 16 control nodes for real-time operation, which leads to a success rate of 80 percent.

Fig. 7. Robust deformable visual tracking algorithm: initialization (left image) and deformation (right image) of the snakes.

Fig. 8. Computational time for one snake.
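A possible NumPy implementation of this 3-D accumulator is sketched below; the angular sampling, the accumulator quantization step, and the radius range are illustrative assumptions, not values from the paper.

import numpy as np

def hough_circle(edge_points, shape, r_min, r_max, step=2):
    # 3-D accumulator over (cx, cy, r): each edge point votes for every
    # centre lying at distance r from it; the accumulator maximum gives
    # the initial circle for the snake's control nodes.
    h, w = shape
    radii = np.arange(r_min, r_max, step)
    acc = np.zeros((h // step, w // step, len(radii)), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    for (x, y) in edge_points:
        for k, r in enumerate(radii):
            cx = ((x - r * np.cos(thetas)) / step).astype(int)
            cy = ((y - r * np.sin(thetas)) / step).astype(int)
            ok = (cx >= 0) & (cx < acc.shape[1]) & (cy >= 0) & (cy < acc.shape[0])
            np.add.at(acc, (cy[ok], cx[ok], k), 1)
    iy, ix, ik = np.unravel_index(np.argmax(acc), acc.shape)
    return ix * step, iy * step, radii[ik]   # (cx, cy, radius)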

C. Determination of Contact and Penetration Phases

During the injection task, different time-dependent operating phases have to be considered: the no-contact phase, the contact phase, and the puncture phase. In order to precisely detect the contact instant between the injection pipette and the cell, we start by dilating the binary image with a vertical structuring element (4 vertical pixels). This structure dilates the pixels in the vertical direction, and thus homogenizes the image's components without altering their horizontal dimensions. Once the image is dilated, we label the connected pixels so as to form only two object classes: (1) the cell and the holding pipette, and (2) the injection pipette. Contact occurs when these two classes merge together.
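This test is compact enough to sketch directly. A possible scipy.ndimage version, under the assumption that the binary image already contains exactly these two object classes, is:

import numpy as np
from scipy import ndimage

def contact_detected(binary_img):
    # Dilate with a 4-pixel vertical structuring element, then label the
    # connected pixels: two components (cell + holding pipette, injection
    # pipette) mean no contact; a single merged component means contact.
    vertical = np.ones((4, 1), dtype=bool)
    dilated = ndimage.binary_dilation(binary_img, structure=vertical)
    _, n_components = ndimage.label(dilated)
    return n_components == 1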

For the detection of the penetration instant, our approach is based on a simple experimental observation. During the injection, the displacement of the cell's membrane (in the vicinity of the contact point) is divided into two phases. The first phase occurs before the puncturing operation and is characterized by a displacement of the membrane in the same direction as the pipette motion. The second phase occurs after the puncture, when the pipette is penetrating into the nucleus; it is characterized by a displacement of the membrane in the direction opposite to the pipette motion. Thus, the puncture moment corresponds to the transition between these two phases (see Fig. 10). In our implementation, the detection of both transition periods is achieved by computing the relative velocity between the injection pipette and the two boundary contact points.

Fig. 9. Velocity of the contact points at the pipette/cell interface before (left image) and after puncture (right image).
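A hedged sketch of this transition test is shown below. The frame-to-frame finite differencing and the eps tolerance on the sign of the velocity projection are our own choices, since the paper does not give the exact decision rule.

import numpy as np

def puncture_detected(tip_prev, tip_now, mem_prev, mem_now, dt, eps=0.05):
    # Relative-velocity test at a boundary contact point: before puncture the
    # membrane moves with the pipette; after puncture it recoils in the
    # opposite direction, so the projection of the membrane velocity onto
    # the pipette velocity changes sign.
    v_tip = (np.asarray(tip_now, float) - np.asarray(tip_prev, float)) / dt
    v_mem = (np.asarray(mem_now, float) - np.asarray(mem_prev, float)) / dt
    proj = float(np.dot(v_mem, v_tip))
    # eps adds a small hysteresis so image noise does not trigger the test.
    return proj < -eps * float(np.dot(v_tip, v_tip))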

IV. 3D RECONSTRUCTION

In this section, we describe the procedure that generates the inner and outer biomembrane surfaces. The problem reduces to interpolating a third-degree non-uniform rational B-spline (NURBS) surface over a quadrilateral grid of p × q points. NURBS are widely used in 3-D computer graphics to describe three-dimensional surfaces [14].

During the injection task, we assume that the cell is symmetric with respect to the vertical plane. This assumption makes it possible to recover the 3-D surface of the biomembranes with a good approximation [15]. The control points used for the NURBS interpolation are the control nodes returned by the snakes; from them, we create a 3-D skeleton of control points before interpolating the surface with the NURBS. As an example, Fig. 10 shows the original image grabbed from the microscope and the 3-D image reconstructed in the virtual environment. It should be noted that the errors due to image noise and modeling uncertainty are less than 1 micrometer with the proposed optimized visual tracking algorithms. Furthermore, Table I shows the mean computation time of each process.

TABLE I
COMPUTATION TIME

Reconstruction step         Computation time (ms)
Inner membrane's snake      28.89
Outer membrane's snake      26.23
Holding pipette tracking    25.24
3-D scene updating          38.34
Total computation time      118.70

Fig. 10. Respectively, the real cell and its reconstruction before the contact instant.
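The paper does not detail how the 3-D skeleton is built from the planar control nodes. One plausible construction consistent with the vertical-plane symmetry assumption, sketched below, sweeps the snake nodes about an estimated symmetry axis to produce the p × q control grid, which would then be interpolated with a NURBS surface as in [14]; the axis estimate and the slice count are our assumptions.

import numpy as np

def control_point_skeleton(nodes_2d, n_slices=12):
    # Sweep the snake's control nodes about an estimated vertical symmetry
    # axis to form a p x q grid of 3-D control points (p = n_slices,
    # q = number of nodes), to be interpolated with a NURBS surface [14].
    nodes = np.asarray(nodes_2d, dtype=float)
    cx = nodes[:, 0].mean()                       # symmetry axis x = cx (assumed)
    grid = np.empty((n_slices, len(nodes), 3))
    for i, phi in enumerate(np.linspace(0.0, np.pi, n_slices)):
        r = nodes[:, 0] - cx                      # signed radius about the axis
        grid[i, :, 0] = cx + r * np.cos(phi)      # x (image horizontal)
        grid[i, :, 1] = nodes[:, 1]               # y (image vertical)
        grid[i, :, 2] = r * np.sin(phi)           # z (out-of-plane depth)
    return grid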

V. PSEUDO-HAPTIC RENDERING

A. Biomembrane Model for Haptic Feedback

The pseudo-haptic rendering is based on the biomembrane point-load model developed by Sun and Nelson [16]. If the biomembrane elastic modulus is determined, the applied forces can be estimated from visual feedback alone, a technique referred to as vision-based biomembrane pseudo-force rendering. In this technique, we consider the biomembrane as a thin film and assume that the inner cytoplasm exerts a hydrostatic pressure on the membrane. The three geometric parameters describing an indented biomembrane shape are a, w_d, and R, as shown in Fig. 11. A pipette of radius c exerts a force F on the membrane, creating a dimple of radius a and depth w_d, and a semicircular curved surface of radius R. The resultant membrane shape is a reasonable approximation. At equilibrium, the force balance equation is expressed as follows:

F = π r² p + σ_d sin(θ) · 2π r h    (4)

Fig. 11. Cell membrane.

Here, the term π r² p expresses the force produced by the internal pressure p that counterbalances the applied force F. The second term, σ_d sin(θ) · 2π r h, is the force due to the membrane stress σ_d. These two forces together balance the applied force. Following [16], the force as a function of the two geometrical parameters w_d and a is expressed as:

F = [2π E h w_d² / (a² (1 − ν))] · [(3 − 4ζ² + ζ⁴ + 2 ln(ζ²)) / ((1 − ζ²)(1 − ζ² + ln(ζ²))³)]    (5)

where E is the membrane elastic modulus, ν is the Poisson ratio, and ζ = c/a.

Equation (5) allows the force to be estimated from the experimental geometrical deformation of the membrane shown in Fig. 12. The two main parameters a and w_d are returned by the 3-D virtual engine module. The Poisson ratio ν of the biomembrane is typically assumed to be 0.5. In our application, the biomembrane thickness h is 4.6 micrometers and the micropipette radius c is 3.8 micrometers. The membrane's elastic modulus was determined experimentally with a pre-calibrated piezoresistive sensor (E = 35.3 kPa). The experimental values are similar to those obtained by Sun and Nelson in [16]. The force is rendered in the 3-D virtual scene as potential force vectors distributed along the contact points at the pipette-cell interface (Fig. 12). The forces experienced during the puncture event were observed to be less than 10 mN.

Fig. 12. Original grabbed image of cell deformations during pipette insertion (left side) and 3-D reconstructed cell deformations based on visual tracking of cell deformations (right side).
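A direct Python transcription of Eq. (5), as reconstructed above, is given below. The dimple radius a and depth w_d in the usage line are hypothetical sample values standing in for what the 3-D virtual engine module returns.

import numpy as np

def membrane_force(E, h, w_d, a, c, nu=0.5):
    # Vision-based pseudo force, Eq. (5): point-load membrane model of
    # Sun and Nelson [16] with dimple radius a, dimple depth w_d,
    # pipette radius c, membrane thickness h and elastic modulus E.
    zeta2 = (c / a) ** 2
    num = 3.0 - 4.0 * zeta2 + zeta2 ** 2 + 2.0 * np.log(zeta2)
    den = (1.0 - zeta2) * (1.0 - zeta2 + np.log(zeta2)) ** 3
    return (2.0 * np.pi * E * h * w_d ** 2) / (a ** 2 * (1.0 - nu)) * num / den

# Paper values: E = 35.3 kPa, h = 4.6 um, c = 3.8 um, nu = 0.5.
# The dimple geometry (a, w_d) below is a hypothetical sample measurement.
F = membrane_force(E=35.3e3, h=4.6e-6, w_d=5e-6, a=40e-6, c=3.8e-6)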

B. Constrained Motion

The task outlined above suggests several remarks. In order to realize an injection task with minimal damage, the needle must be withdrawn as quickly and as safely as possible. Furthermore, the alignment of the pipette and the puncture of the cell membrane require dexterous manipulation. In order to address these problems, the user interface provides the operator with an efficient haptic-based guidance tool [17]. It can constrain the path of insertion and removal of the pipette by imposing virtual fixtures based on potential fields (attractive or repulsive) or different types of forces (viscosity, friction, vibration). All these forces are virtual constraints implemented in the master's haptic control interface (the PHANToM device). In order to assist the operator in navigating into the embryo, we implemented a haptic guide based on a cone-shaped attractive force. Figure 13 gives a representation of the proposed method.

Fig. 13. Representation of the force (left side) and virtual guide based on a cone-shaped attractive potential field for user guidance (right side).

The projection of the pipette effector's Cartesian position V on the desired trajectory of direction B is calculated by:

V_projection = (V · B / ||B||²) B    (6)

The attractive potential field has an amplitude that increases with the distance between the end effector and the projected point. The force vector is then calculated as F = V_projection − V. In order to provide a graduated and smooth force along the axis, we multiply the resulting force by a function f inversely proportional to the distance d from the external membrane, such that F_p = f(d) F. The corresponding attractive force on the haptic device's end effector can be multiplied by a constant coefficient K in order to adjust the effects of this force: F_m = K F_p. With this constraint, it is easier for the operator to carry out the injection without risking destroying the cell. Once the membrane is punctured, this force accompanies the operator's gesture until the pipette is withdrawn from the cell.
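The sketch below combines Eq. (6) with the force shaping described above. The specific form of f(d) is an assumption (the paper only states that it is inversely proportional to the distance), as are the axis parameterization and the gain values.

import numpy as np

def guidance_force(pos, axis_origin, axis_dir, d, K=1.0, f0=1.0):
    # Cone-axis haptic guide: project the effector position onto the desired
    # insertion axis (Eq. 6) and pull it back toward the axis.
    B = np.asarray(axis_dir, dtype=float)           # axis direction vector
    V = np.asarray(pos, dtype=float) - np.asarray(axis_origin, dtype=float)
    V_proj = (np.dot(V, B) / np.dot(B, B)) * B      # Eq. (6)
    F = V_proj - V                                  # attractive, toward the axis
    f_d = f0 / (1.0 + d)                            # one possible f(d), inversely
                                                    # proportional to the membrane distance
    return K * f_d * F                              # F_m = K * F_p, with F_p = f(d) * F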

C. Haptic Perception

The main idea is to generate mechanical effects, such as viscosity effects, on the master arm during operation. The generated counteracting forces ensure smooth motion by filtering out abrupt gestures of the operator. The counteracting viscous force is given by F_v = B v, where v is the velocity and B is the damping constant. In our implementation we use two damping constants: B_1 when the pipette is outside the cell, and B_2 when the pipette is inside the cell. We choose B_2 > B_1 to simulate the cell's viscosity.
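This two-regime damping reduces to a few lines of code. The sketch below uses placeholder values for B_1 and B_2, since the paper does not report the constants, keeping B_2 > B_1.

def viscous_force(velocity, inside_cell, B1=0.2, B2=0.8):
    # Counteracting viscous force F_v = B * v rendered on the master arm;
    # the damping is larger inside the cell (B2 > B1) to simulate the
    # cytoplasm's viscosity. B1 and B2 are placeholder values.
    B = B2 if inside_cell else B1
    return [-B * v for v in velocity]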

VI. CONCLUSION

This paper has reported preliminary experiments to improve the realism of the visual and haptic modalities perceived by the operator during cell micro-injection. To improve the manipulation work, we first developed an augmented 3-D bio-micromanipulation system with a virtual reality environment. We proposed a 3-D modeling methodology based on visual tracking of cell deformations for 3-D visualization, using gradient-based tracking algorithms (snakes) for real-time environment reconstruction. Then, in order to guide the operator's gesture during insertion and removal tasks, we introduced virtual potential fields to haptically assist the operator.

REFERENCES

[1] S. Yu, B.J. Nelson, Microrobotic Cell Injection, IEEE Int. Conf. on Robotics and Automation, Korea, 2001, pp. 620-625.
[2] Y. Sun, K-T. Wan, K.P. Roberts, J.C. Bischof, B.J. Nelson, Mechanical Property Characterization of Mouse Zona Pellucida, IEEE Trans. on Nanobioscience, vol. 2, no. 4, 2003, pp. 279-286.
[3] X. Li, G. Zong, S. Bi, Development of Global Vision System for Biological Automatic Micro-Manipulation System, IEEE Int. Conf. on Robotics and Automation, Korea, 2001, pp. 127-132.
[4] J. Koshwanez, M. Holl, M. McMurray, D. Gottschling, D. Meldrum, Automation of Yeast Pedigree Analysis, IEEE Int. Conf. on Robotics and Automation, New Orleans, LA, 2004, pp. 1475-1480.
[5] R. Kumar, A. Kapoor, R.H. Taylor, Preliminary Experiments in Robot-Human Cooperative Microinjection, IEEE Int. Conf. on Intelligent Robots and Systems, Las Vegas, Nevada, 2003, pp. 3186-3191.
[6] F. Arai, A. Kawaji, P. Luangjarmekorn, T. Fukuda, K. Itoigawa, Three-Dimensional Bio-Micromanipulation under the Microscope, IEEE Int. Conf. on Robotics and Automation, Korea, 2001, pp. 601-609.
[7] R. Taylor, P. Jensen, L. Whitcomb, A. Barnes, R. Kumar, D. Stoianovici, P. Gupta, Z. Wang, E. De Juan, L. Kavoussi, A Steady-Hand Robotic System for Micro-Surgical Augmentation, Int. Journal of Robotics Research, vol. 18, no. 12, 1999, pp. 1201-1210.
[8] L.S. Davis, A survey of edge detection techniques, Computer Graphics and Image Processing, vol. 4, pp. 248-270, 1975.
[9] D. Marr, E. Hildreth, Theory of edge detection, Proc. of the Royal Society of London B, vol. 207, pp. 187-217, 1980.
[10] J. Canny, A computational approach to edge detection, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679-698, 1986.
[11] M. Kass, A. Witkin, D. Terzopoulos, Snakes: Active Contour Models, Int. Journal of Computer Vision, vol. 1, pp. 321-331, 1987.
[12] V. Caselles, F. Catté, T. Coll, F. Dibos, A geometric model for active contours in image processing, Numerische Mathematik, vol. 66, no. 1, pp. 1-31, 1993.
[13] R. Yip, P. Tam, D. Leung, Modification of Hough transform for circles and ellipses detection using a 2-dimensional array, Pattern Recognition, vol. 25, no. 9, pp. 1007-1022, 1992.
[14] L. Piegl, W. Tiller, The NURBS Book, Springer, Berlin, 1996.
[15] M.S. Floater, M. Reimers, Meshless parameterization and surface reconstruction, Computer Aided Geometric Design, vol. 18, no. 2, pp. 77-92, 2001.
[16] Y. Sun, K-T. Wan, K.P. Roberts, J.C. Bischof, B.J. Nelson, Mechanical Property Characterization of Mouse Zona Pellucida, IEEE Trans. on Nanobioscience, vol. 2, no. 4, 2003, pp. 279-286.
[17] N. Pernalete, W. Yu, R. Dubey, W. Moreno, Development of a Robotic Haptic Interface to Assist the Performance of Vocational Tasks by People with Disabilities, IEEE Int. Conf. on Robotics and Automation, USA, 2002, pp. 1269-1274.