Vision-based haptic feedback for capsule endoscopy navigation: a proof of concept
Marco Mura, Yasmeen Abu-Kheil, Gastone Ciuti, Marco Visentini-Scarzanella, Arianna Menciassi, Paolo Dario, Jorge Dias, Lakmal Seneviratne
Journal of Micro-Bio Robotics, ISSN 2194-6418, DOI 10.1007/s12213-016-0090-2
RESEARCH PAPER
Vision-based haptic feedback for capsule endoscopy navigation: a proof of concept

Marco Mura¹ · Yasmeen Abu-Kheil² · Gastone Ciuti¹ · Marco Visentini-Scarzanella³ · Arianna Menciassi¹ · Paolo Dario¹ · Jorge Dias² · Lakmal Seneviratne²
Received: 20 January 2016 / Revised: 8 May 2016 / Accepted: 16 May 2016 © Springer-Verlag Berlin Heidelberg 2016
Abstract In this paper, a vision-based haptic feedback system is proposed to assist the movement of an endoscopic device during capsule endoscopy (CE) procedures. We present a general system architecture consisting of three modules responsible for vision, haptic guidance and movement control. The vision module generates 3D local maps as well as a local navigation trajectory for endoluminal navigation. The haptic guidance module consists of a haptic device that allows the user to control the movement of the capsule along the generated path; it also assists the operator by transforming the 3D maps and the relative paths into a guiding virtual force. By measuring the current relative distance between the user input and the map boundaries, the haptic guidance module checks whether the user is moving away from or toward the colonic walls and generates a feedback force to assist the operator during the navigation procedure. The user also senses an attractive virtual feedback force toward the generated path that helps with navigation. Finally, the movement control module is the interface between the haptics module and the chosen manipulator. The final goal is to develop a complete active CE robotic platform with haptic feedback in order to enhance safety, to reduce cost (using the same system as a training simulator as well as a real endoscopic platform) and to help the operator during the navigation by combining all 3D local maps into a fully reconstructed 3D colon.

Keywords Colonoscopy · Haptic guidance · Robotic endoscopic capsule · 3D reconstruction · Vision-based haptic feedback

Marco Mura, [email protected]

1 The BioRobotics Institute, Scuola Superiore Sant'Anna, Pontedera, Pisa, 56025, Italy
2 Khalifa University Robotics Institute, Khalifa University of Science, Technology and Research, Abu Dhabi, UAE
3 Computer Vision and Graphics Laboratory, Kagoshima University, Kagoshima 890-8580, Japan
1 Introduction

Colorectal diseases, and in particular colorectal cancer (CRC), affect a large number of people worldwide, with a strong impact on healthcare services. CRC ranks fourth in incidence among all cancers in high-income countries, and accounted for 694,000 deaths worldwide in 2012 [1, 2]. The survival rate of CRC patients can reach 90 % with early diagnosis (an asymptomatic, earlier-stage diagnosis could save more than 60,000 European citizens per year), falling to less than 7 % for patients with advanced disease. For this reason, regular screening is highly recommended for patients older than 50 years or with a family history of CRC [3]. CE is a diagnostic procedure that allows medical doctors to examine digestive districts without the need for sedation. CE also offers a non-invasive and painless investigation of the gastro-intestinal (GI) tract [4]. However, current CE practice has two main limitations: (i) a high pathological miss rate, due to poor image quality and the limited portion of the GI tract visually covered, caused by the irregularity and unpredictability of the peristaltic contractions moving the capsule along the tract [5]; (ii) the long time required to evaluate the large body of images (more than 50,000) generated during a single run. Therefore, there is a need for an active capsule navigation system that would
allow physicians to clearly visualize the GI tract and guide the capsule movement to enable detailed investigation of specific regions of interest. These main issues in CE could be mitigated by a teleoperated active navigation system integrating 3D environmental reconstruction and haptic feedback to improve the navigation process. Various methodologies have been proposed in the literature to perform locomotion and control for CE [4, 6]. CE can be divided into two main families based on the nature of the locomotion: (i) passive and (ii) active locomotion. Passive locomotion exploits the peristaltic waves that transport indigestible objects through the GI tract. Given Imaging Ltd. (Yokneam, Israel) was the first company to bring CE technology to market. Other commercially available capsules include the MiroCam (Intromedic Co., Seoul, Korea), the EndoCapsule (Olympus Inc., Tokyo, Japan), the OMOM (ChongQing JinShan Science and Technology Co., Ltd., Chongqing, China) and the CapsoCam (CapsoVision Inc., Saratoga, CA, USA). Regarding active-locomotion CE, there are many different examples: Sliker et al. developed a wired CE for colonoscopy using micro-patterned treads [7]; Valdastri et al. developed a legged capsule robot [8]. One of the most common active locomotion schemes for CE uses a magnetic driving approach [9, 10]. In this scheme, the capsule is equipped with an internal magnetic source, while the external teleoperated platform holds and moves an external magnetic source. The capsule movement is controlled by varying the magnetic field (in the case of an electromagnet) [11, 12] or by moving an external permanent magnet to obtain the desired capsule position [13]. While these technologies are explicitly aimed at automatic active robot locomotion, a human operator would normally be required to enable interactive examination of potential pathology sites.
In this paper, we address this shortcoming by describing a "human-in-the-loop" navigation paradigm that gives the human operator adequate guidance and an intuitive control interface to perform the best procedure possible. The aim of this paper is to describe the general architecture and report the preliminary testing of a novel robotic, visual- and haptic-based capsule navigation system. While in our tests we apply the paradigm in the context of magnetic locomotion, the architecture of the proposed system is modular and can be applied to any driving system regardless of its underlying technology (e.g., robotic endoscopes, legged capsules, etc.). Particular care was taken in designing a non-invasive system to control the capsule in 3D space while providing not only monocular 2D images to the operator, as the limited field of view and lack of depth perception can make navigation difficult, but also a virtual environment showing the 3D local map of the colonic district. This is achieved by using a robotic manipulator coupled with a
computer vision module able to infer the 3D structure of the environment on a frame-by-frame basis: based on the user input and the estimated scene structure, the control system generates gentle forces guiding the user along the centerline of the GI tract. The focus of this paper is on local navigation exploiting haptic feedback based on the generated 3D local maps, which can later be combined to reconstruct the full 3D colon structure. The paper is organized as follows: related work is reviewed in Section 2 and the overall system architecture is presented in Section 3, while the haptic guidance, vision and manipulator control modules are described in Sections 4, 5 and 6, respectively. Finally, the results of a preliminary experimental phase are shown in Section 7, and we draw our conclusions and outline future steps in Section 8.
2 Related work

The use of haptic guidance was introduced in endoscopic and surgical applications to improve physicians' performance and enhance training on medical simulators [14]. Reilink et al. [15] evaluated the use of haptic control to steer the tip of flexible endoscopes toward the lumen. They extracted the lumen's position and its center from endoscopic images using adaptive thresholding techniques. Then, they used the difference between the location of the lumen center and the location of the haptic device to generate a force feedback to the user: if the difference is large, the user is far from the center of the lumen and a large force is generated, and vice versa. However, in this implementation only a 2D displacement is used to create the haptic feedback, instead of a 3D map of the environment. The use of haptic feedback to improve operator performance was also introduced for micro-robotic cell injection. Haptic virtual fixtures not only provide real-time assistance to the user, but are also used to facilitate offline operator training via a virtual training environment. For example, Ghanbari et al. [16] proposed integrating position-to-position kinematic mapping and haptic virtual fixtures to improve operator performance during cell injection. Haptic virtual fixtures were represented by potential fields of three different geometries: neiloid, cone, and paraboloid. The researchers found that paraboloid virtual fixtures provided the highest success rates in the simulation environment. Other researchers used haptic feedback to study the interaction between the environment and a controlled robot. For example, Pacchierotti et al. [17] used haptic force feedback in a tele-operation system for steering self-propelled micro-motors. The end effector of the haptic interface is used to control the micro-robots and to provide haptic feedback about the forces exerted in the remote
environment. The steering capabilities of their proposed system were tested in both structured and unstructured remote environments with twenty-six subjects; all users were able to complete the tasks with improved performance. In [18], Mehrtash et al. proposed a virtual reality interface for a magnetic-haptic micro-manipulation platform (MHMP). The interface consists of three main elements: (i) a haptic station that not only provides the user with force/torque information from virtual or remote environments but also identifies the operator's hand motion commands; (ii) a simulation engine that computes the forces applied to the operator's hands as well as the micro-robot's position; (iii) a display unit that shows the micro-manipulation tasks and environments using 3D graphics. Different optical techniques for 3D surface reconstruction in laparoscopy and endoscopy have been proposed in the literature, and the robustness of these methods has been extensively reported in [19, 20]. Preliminary work towards vision-based capsule navigation was introduced in [21], where a photometric calibration technique was used to estimate the tissue reflectance of the GI tract and obtain a metric 3D reconstruction enabling fully automatic magnetic navigation. However, that system was not designed to aid interactive navigation; rather, it was meant to close the loop with magnetic locomotion by providing dynamic trajectory information to the magnetic control module. Our work focuses on providing the human operator with additional haptic feedback based on the generated 3D maps, in order to aid interactive navigation and provide an intuitive control interface to safely move the capsule inside the colon. The user interacts with both a real environment, by tele-operating the manipulator arm, and a virtual environment implemented in the Gazebo simulator using the generated 3D maps.
The haptic feedback provides two cues: (i) path following, based on attractive virtual forces, and (ii) colon wall avoidance and lesion detection, based on repulsive virtual forces.

Fig. 1 Overall system architecture
3 System architecture

The general capsule visual-haptic navigation architecture is shown in Fig. 1. The proposed capsule navigation system consists of three main modules: (i) vision navigation, (ii) haptic guidance and (iii) capsule controller. The full navigation system is developed to assist the navigation of a capsule-based endoscope inside the human colon. In this architecture, the capsule position is controlled by the movement of an external magnet attached to the end effector of a six-degree-of-freedom (DoF) manipulator. The capsule includes a vision module used to collect 2D colon images. In the final endoscopic platform configuration, the link between the capsule and the external magnet attached to the end effector of the manipulator is called the virtual capsule magnetic link (VCML). In this work, this link is assumed to be rigid. This assumption was made in order to avoid accounting for possible position mismatches between the external magnet and the capsule, and to simplify the 3D virtual navigation visualization. The capsule movement is controlled by controlling the robotic arm position using the manipulator's embedded control system. The capsule camera captures colon images, which are fed into the vision navigation module. In this module, 3D local maps are generated and the center of each map is estimated. Based on the map, the vision navigation module generates a navigation path. The generated path and the maps are fed into the haptics system. The haptic guidance module consists of a haptic device that allows the user to control the movement of the camera along the generated path. The haptic guidance module also allows the user to sense a virtual feedback force if the user gets close to the boundaries indicated by the map. Throughout this paper, the following variables and notation are used:

• B: body 3D coordinate frame (inertial);
• C: camera 3D coordinate frame;
• cP(X, Y, Z): 3D image point;
• cp(x, y): 2D image point;
• (fx, fy): focal distances;
• (x0, y0): camera principal point offset;
• s: camera pixel skew;
• n: surface normal;
• l: light source vector;
• ρ: surface albedo;
• I: image intensity;
• 1/r²: light attenuation term depending on the distance r between light source and surface;
• Map: generated maps;
• Pref: generated reference path;
• Pcurrent: calculated user path;
• ΔP: difference between the reference path (Pref) and the current user path (Pcurrent);
• Fr = (frx, fry, frz): virtual repulsive force;
• Fa = (fax, fay, faz): virtual attractive force.
4 Haptic guidance module

Haptic interfaces are commonly used to improve the touch experience, allowing the user to perceive and interact with a real environment through mechatronic devices and computer control. A standard haptic interface consists of a haptic device and computer software relating the human inputs to the displayed haptic information [22]. Our haptic rendering block diagram is shown in Fig. 2. The haptic guidance module consists of a haptic device that allows the operator to control the movement of the capsule along the generated path. In our implementation, the operator interacts with a haptic device (Phantom Omni, Sensable Technologies, Wilmington, MA, USA) to give a reference command to the manipulator (RV-6SDL, Mitsubishi
Fig. 2 Haptic force feedback rendering block
Electric, Japan). The same approach could be used for training purposes, where instead of moving a real manipulator the operator inputs are visualized in a 3D virtual environment such as the one presented in [23]. As previously mentioned, haptic rendering is used to compute the response of the environment (virtual forces generated from the colon maps) that is fed back to the human operator. Haptic rendering is defined as the process of calculating the forces required by contacts with virtual environment objects according to the operator's motion parameters [22]. In this framework, forces are computed using a spring model, where the force is a function of a spring stiffness and a distance (as illustrated in Fig. 3). Our general haptic loop is shown in Fig. 4. First, the haptic device senses the human operator's input, and the system applies this input to both a virtual and a teleoperated environment. The response of the environment to be fed back to the operator is computed using haptic rendering, which creates a pseudo-force acting on the operator's hand. Finally, actuators on the haptic device generate the corresponding force feedback. These forces allow the human operator to feel a direct interaction with the real environment. Based on the haptic feedback, the operator position is modified and another cycle of the haptic loop starts. In this module, the force feedback is generated depending on the user position with respect to the ideal path, which provides a feeling of the environment based on the generated local maps. The information stored in the path is used to build a 3D virtual force field applied to the haptic device: if the user deviates from the generated path, an attractive virtual force is generated.
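The sense-render-actuate cycle of the haptic loop described above can be sketched as follows. This is a minimal illustration only: the device class is a hypothetical stand-in for a real Phantom Omni driver (whose API is not described in the paper), and the spring force pulls toward the origin as a placeholder for the reference path.

```python
import numpy as np

class FakeHapticDevice:
    """Hypothetical stand-in for a haptic device driver; the simulated
    operator slowly drifts along y so the loop has something to correct."""
    def __init__(self):
        self.pos = np.zeros(3)
        self.force = np.zeros(3)

    def read_position(self):
        self.pos = self.pos + np.array([0.0, 0.01, 0.0])  # operator drift
        return self.pos.copy()

    def write_force(self, f):
        self.force = np.asarray(f, dtype=float)

def spring_force(pos, k=1.0, f_max=3.3):
    """Spring model pulling the tip back toward the reference (here the
    origin), clamped at an illustrative maximum device force."""
    f = -k * pos
    n = np.linalg.norm(f)
    return f if n <= f_max else f / n * f_max

def haptic_loop(device, n_cycles):
    """Each cycle: sense operator input, render the force, actuate it."""
    for _ in range(n_cycles):
        p = device.read_position()   # sense
        f = spring_force(p)          # haptic rendering
        device.write_force(f)        # actuate feedback on the device
    return device.force
```

After a few cycles the rendered force opposes the operator's drift, which is exactly the corrective behavior the loop in Fig. 4 is meant to produce.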
At each sampling time, the difference (ΔP) between the generated reference path and the current user path is converted into a virtual attractive 3D force vector (Fa), as described in Eq. 1 under the conditions in Eq. 2. The haptic device also provides a repulsive force (Fr) that prevents motion outside the colon boundaries.

Fig. 3 (a) Repulsive force spring model acting between the capsule and the colon boundary; (b) attractive force spring model acting between the capsule and the generated path

$$F_a = K_f \cdot \Delta i \tag{1}$$

$$F_a = \begin{cases} F_a, & \text{if } \|F_a\| \le F_m \\[4pt] \dfrac{\Delta i}{\|\Delta i\|} \cdot F_m, & \text{if } \|F_a\| > F_m \end{cases} \tag{2}$$

where:

• Kf : the parameter that describes the stiffness;
• Δi: the 3D displacement between the target and the capsule position;
• Fm : the maximum value of the force.

Fig. 4 The haptic loop of the capsule haptic interface

Fr and Fa are computed using the same model described in Eq. 2; to compute Fr, the relative position difference between the capsule and the map boundaries is used instead of ΔP. Figure 3 illustrates both the repulsive force spring model between the capsule and the colon boundary and the attractive force spring model acting between the capsule and the generated path. It is worth mentioning that this haptic approach, with small modifications, could be used not only in CE but also in combination with robotic endoscopes [15].
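The clamped spring force of Eqs. 1 and 2 can be sketched in a few lines. The gain and saturation values below are illustrative defaults, not the tuned parameters of the actual system.

```python
import numpy as np

def attractive_force(delta_i, k_f=10.0, f_max=3.3):
    """Eq. 1: F_a = K_f * delta_i; Eq. 2: if ||F_a|| exceeds F_m, saturate
    to F_m while keeping the direction of the displacement delta_i.
    k_f and f_max are illustrative values, not the paper's parameters."""
    delta_i = np.asarray(delta_i, dtype=float)
    f_a = k_f * delta_i                            # Eq. 1
    if np.linalg.norm(f_a) > f_max:                # Eq. 2: saturate
        f_a = delta_i / np.linalg.norm(delta_i) * f_max
    return f_a
```

Small deviations from the path produce a proportional pull toward it, while large deviations are capped so the haptic device never commands more than the maximum force.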
5 Vision module

The vision navigation module consists of a camera (500 x 582 CCD camera with a 120-degree field of view, Karl Storz GmbH, Tuttlingen, Germany) equipped with a LED-based light source (NESW007BT, Nichia Corp., Tokushima, Japan) embedded in the capsule prototype. The camera intrinsic and extrinsic parameters were estimated with a standard checkerboard using the Matlab Camera Calibration Toolbox [24]. The main function of the vision navigation module is to generate local 3D maps of the colon from 2D capsule images. The generated maps are then used to estimate the navigation path. Maps and trajectories are fed into the haptic guidance system to generate force feedback for the user in case of any deviation from the path or any attempt to exceed the map boundaries. Generally, the map and path generation thread executes slower than real time depending on the computing platform, and the map and path to follow are refreshed every N frames depending on the chosen execution rate. This is possible because the path to follow is generally locally straight given an initial direction, and it allows relaxing the computational constraints to suit the computing platform. Similar strategies of slower-than-real-time 3D reconstruction coupled with real-time tracking and movement are common in state-of-the-art SLAM architectures such as LSD-SLAM [25] and ORB-SLAM [26].
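The calibrated intrinsics feed the pinhole projection model of Eq. 3, which maps 3D map points to pixels. A minimal sketch follows; the intrinsic values are illustrative placeholders, not the calibrated parameters of the Storz camera, which are not reported here.

```python
import numpy as np

def project(K, X_cam):
    """Pinhole projection (cf. Eq. 3): multiply by the intrinsic matrix,
    then apply the perspective division by the depth component."""
    x = K @ np.asarray(X_cam, dtype=float)  # homogeneous image coordinates
    return x[:2] / x[2]                     # perspective division

# Illustrative intrinsics (fx, fy on the diagonal, principal point x0, y0,
# zero skew) -- placeholder values, not the calibrated ones.
K = np.array([[420.0,   0.0, 250.0],
              [  0.0, 430.0, 291.0],
              [  0.0,   0.0,   1.0]])

p = project(K, [0.1, -0.05, 0.5])  # a point 0.5 units in front of the camera
```

A point on the optical axis projects onto the principal point, a quick sanity check on the matrix layout.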
5.1 Map generation

The local navigation maps for the following N frames are generated using the Shape-from-Shading (SFS) formulation proposed in [27]. The authors chose SFS as a tool for recovering 3D maps from endoscopic images because the 3D structure of the visualized scene is reconstructed from a single monocular camera without any further information required. Moreover, this specific SFS formulation was chosen because it takes into account the effect of a light source placed away from the optical center.

Fig. 5 The mapping of the magnetic capsule degrees of freedom onto the haptic device

While in the original formulation the light source position was used to scale the reconstructed volume to metric dimensions by triangulating the position in space of specular highlights, in our implementation we skip this step, so our maps are reconstructed up to an unknown scale factor. This is acceptable because the path is followed for N frames and then updated, so a direction is sufficient for providing haptic feedback without an absolute metric reconstruction. While in this work we use SFS as our method for 3D reconstruction, our strategy may employ any type of 3D reconstruction method and use the resulting output maps.

Local 3D map reconstruction with SFS is achieved through an inversion of the image irradiance equation. Essentially, given an a priori reflectance model describing the physical formation process behind the captured image, its mathematical representation is inverted in order to recover dense surface distance and normal information. The camera model used in the local map generation is the pinhole projection model, which describes the mapping of a 3D point onto an image point given known calibrated camera intrinsics, using the camera center as the world's frame of reference:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \sim \begin{bmatrix} f_x & s & x_0 & 0 \\ 0 & f_y & y_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{3}$$

In its general formulation for Lambertian surfaces, SFS can be seen as an inverse problem attempting to reconstruct the scene shape from a single monocular frame by solving the following irradiance equation:

$$I = \rho \, \frac{1}{r^2} \, \frac{n \cdot l}{\|n\| \, \|l\|} \tag{4}$$

In Eq. 4, the SFS problem is expressed as recovering the unknown structure, in the form of surface normals and the absolute distance between the light source and the surface, from knowledge of the light source position and the image intensity at each pixel. Despite this being an under-constrained problem, well-posedness has been shown under configurations where the light source is close to the target surface (e.g., endoscopic images) and the distance attenuation term between the light source and the tissue surface is not ignored. The SFS nonlinear partial differential equation resulting from the inversion of Eq. 4 is solved through a Lax-Friedrichs sweeping scheme, as shown in the work of Visentini-Scarzanella et al. [27]. The calculated depth is then projected using the calibrated camera information to obtain realistic 3D maps that can be used for local path generation and haptic feedback.

5.2 Path generation

Local path generation is used to assist the operator in navigating an endoluminal device by generating an estimated path given the current local maps. As explained in the previous section, the path consists of a general direction that is considered valid for the following N frames. To generate the path, the local 3D map is divided into slices along the z-axis (the optical axis of the camera), and for each slice the geometric center of mass (CoM) is computed. Then, all the extracted CoMs are provided to a RANSAC algorithm [28] to find the best-fitting 3D line through the CoMs. The path extraction procedure is described in pseudo-code in Algorithm 1.

Fig. 6 Experimental ex-vivo test bench for the data collection

Fig. 7 Samples of the SFS local maps from ex-vivo samples

Fig. 8 The navigation path. The blue dots represent the CoM derived for each slice of the map and the green line represents the path generated using RANSAC. The planned capsule path goes through the reconstructed section of the colon. The reconstructed map is rendered semi-transparently for clarity

Algorithm 1 Path Generation Algorithm
Input: 3D local map (Map)
Output: optimal path for navigation (Path)
  Load the 3D Map
  Extract all the X, Y, Z coordinates of the vertex positions of the Map
  Divide the 3D Map into N slices
  for i = 1 to N do
    Find the X, Y, Z vertices within slice i
    Calculate the geometric CoM of slice i
  end for
  Use RANSAC to compute the best Path from the calculated geometric CoMs
  return Path

The difference ΔP between the position imposed by the user and the optimal path is used to activate the haptic system and guide the operator close to the target path (Pref).
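The slicing and line-fitting steps of the path generation can be sketched as below. This is a simplified stand-in, not the authors' implementation: uniform z-slices, two-point line hypotheses, and a fixed inlier tolerance.

```python
import numpy as np

def slice_coms(points, n_slices):
    """Divide a 3D map point cloud into slices along z (the optical axis)
    and return each slice's geometric center of mass."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    coms = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = points[(z >= lo) & (z <= hi)]
        if len(sel) > 0:
            coms.append(sel.mean(axis=0))
    return np.array(coms)

def ransac_line(pts, n_iter=200, tol=1.0, seed=0):
    """Fit a 3D line (anchor point, unit direction) with a basic RANSAC:
    sample two points, count inliers by point-to-line distance, keep the
    hypothesis with the most inliers."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(n_iter):
        a, b = pts[rng.choice(len(pts), size=2, replace=False)]
        d = b - a
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        v = pts - a
        # distance of every point from the candidate line through a with direction d
        dist = np.linalg.norm(v - np.outer(v @ d, d), axis=1)
        n_in = int((dist < tol).sum())
        if n_in > best_inliers:
            best, best_inliers = (a, d), n_in
    return best
```

On CoMs that mostly lie along the lumen direction with a few outliers, the consensus step recovers the dominant direction, which is all the haptic module needs for the next N frames.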
6 Manipulator control module

As stated earlier, in the current version of our architecture the link between the capsule and the external magnet attached to the end effector (VCML) is assumed to be rigid. The manipulator control module controls the capsule movements by controlling the manipulator end-effector position with respect to the manipulator base. The controller commands are received from the user through the haptic interface according to the following mapping:

• the translational forward/backward motion of the capsule is performed by moving the haptic tip along the X-direction of the haptic device;
• the rotational pitch movement is performed by imposing a pitch rotation on the haptic tip;
• the rotational yaw movement is performed by imposing a yaw rotation on the haptic tip.

These movements are reported in Fig. 5. In order to test the robotic platform, a dedicated software framework was implemented to integrate the haptic force feedback and the manipulator control. The manipulator was controlled using the ROS [29] and MoveIt! [30] libraries. The haptic device outputs were remapped in order to comply with the DoF of the magnetic capsule. Moreover, the control strategy was designed to provide a user-friendly interface, so all the user motion inputs are referred to the capsule frame rather than to the robotic arm base coordinates.

7 Experimental evaluation

To test the proposed solution, we qualitatively evaluated each block of the system individually: (i) the map generation block, (ii) the path generation block, and (iii) the manipulator control and haptic force feedback block. The complete experimental setup is shown in Fig. 6.

7.1 Map generation block
Fig. 9 3D maps for navigation and haptic feedback. (a) The dense textured local map used by the operator for navigation and diagnosis; (b) the sparse collision local map used to compute the repulsive forces
The detailed experimental setup, scenarios and results of the map generation block were published in previous work [21]. The camera (500 x 582 CCD camera with a 120-degree field of view, Karl Storz GmbH, Tuttlingen, Germany) was first calibrated to characterize its radiometric response and internal parameters. The endoscopic capsule was then inserted into an explanted ex-vivo porcine colon for a total length of 150 mm. The images were then pre-processed to remove deviations from the assumed Lambertian reflectance model and fed to the SFS algorithm for surface reconstruction. Samples of the generated local maps are shown in Fig. 7.

7.2 Path generation block

The resulting extracted path is shown in Fig. 8, which reports the geometric CoMs of the local map slices as blue dots and the path generated using RANSAC as a green line.

Fig. 10 The repulsive forces based on the relative distance between the capsule and the map boundaries. As the capsule approaches the boundaries of the colon, stronger forces are generated and provided to the haptic device

7.3 Manipulator control and haptic force feedback block

The visualization of the local maps was achieved using the Gazebo robotic simulator [23]. Figure 9 shows the dense textured local map and the corresponding sparse collision local map used to provide a repulsive force Fr that prevents motion outside the colon boundaries. During the test, the operator was able to navigate inside a virtual colon simulator. The test consisted in moving along the simulated colon lumen exploiting the haptic force feedback. Figure 10 shows how the forces are calculated based on the relative distance between the capsule and the colon boundaries. The repulsive forces are graphically represented as red arrows, while the resultant force, virtually applied on the capsule and actually applied on the Phantom Omni, is represented as a blue arrow. When the capsule is not close to the colon wall, no repulsive forces are computed and the operator can navigate freely. As the capsule approaches the boundaries, repulsive forces are calculated as described in Eq. 2, and the total repulsive force is applied on the haptic device. The integration of the local 3D maps and the local path allows the operator to navigate with both visual and haptic feedback. Moreover, the haptic force feedback allows a safer CE magnetic locomotion, as the operator cannot exceed the boundaries of the colonic walls.
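The boundary-distance force rendering just described can be approximated by querying the nearest vertex of the sparse collision map, as sketched below. The gain, activation distance and force cap are illustrative assumptions, not the values used in the actual system.

```python
import numpy as np

def repulsive_force(capsule_pos, boundary_pts, k_f=2.0, f_max=3.3, d_act=5.0):
    """Repulsive force from the nearest colon-wall vertex of the sparse
    collision map. Zero beyond the activation distance d_act; otherwise a
    spring force pushing away from the wall, clamped at f_max as in Eq. 2.
    k_f, f_max and d_act are illustrative values."""
    diff = capsule_pos - boundary_pts
    dists = np.linalg.norm(diff, axis=1)
    i = int(np.argmin(dists))
    d = dists[i]
    if d >= d_act:
        return np.zeros(3)                       # far from the wall: free motion
    direction = diff[i] / max(d, 1e-9)           # push away from nearest vertex
    f = k_f * (d_act - d) * direction            # spring compressed by (d_act - d)
    n = np.linalg.norm(f)
    return f if n <= f_max else f / n * f_max
```

At the lumen center no force is rendered, and the force grows (up to the cap) as the capsule approaches the wall, reproducing the behavior shown in Fig. 10.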
8 Conclusions and future work

The general architecture of a vision-based haptic feedback system for capsule navigation has been described, and the various system components were tested. The described system can be used to explore the gastrointestinal tract and provide force feedback to the operator in order to assist the navigation procedure. The reconstructed surfaces can be exploited for augmented reality, accurate trajectory planning and, ultimately, to close the control loop and enable automatic active wireless capsule endoscopy navigation. Possible directions for future work include the integration of all subsystems to assess the proposed robotic platform, and real-time map and path generation. Furthermore, in order to better understand clinicians' needs regarding the proposed haptic feedback, an extensive series of tests (enrolling medical staff) will be planned. The final aim is to develop a complete active CE locomotion robotic platform with haptic feedback in order to enhance safety and help the operator during the navigation stage. In this work, the magnetic link between the capsule and the external magnet attached to the end effector of the manipulator was considered rigid. In a real scenario, in order to reliably match the magnetically dragged capsule motion to the generated ideal path, sensors (e.g., accelerometers, gyroscopes, magnetic sensors, RFID, etc.) will be integrated on board to provide closed-loop feedback, thus enabling accurate motion control and the possibility of using other reconstruction techniques such as the one reported in [31].

Acknowledgment The authors would like to thank Reem Ashour (Khalifa University Robotics Institute, Abu Dhabi, UAE) for her suggestions and support in the development of the Gazebo environment and of the haptic force field generation.
References
1. World Health Organization. [Online]. Available: http://www.who.int/mediacentre/factsheets/fs297/en/ [May 20, 2016]
2. International Agency for Research on Cancer - estimated incidence, mortality and prevalence worldwide in 2012. [Online]. Available: http://globocan.iarc.fr/Pages/fact-sheets-cancer.aspx/ [May 20, 2016]
3. Cancer Research UK. [Online]. Available: http://www.cancerresearchuk.org/cancer-info/cancerstats/types/bowel/survival/stage/ [May 20, 2016]
4. Sliker LJ, Ciuti G (2014) Flexible and capsule endoscopy for screening, diagnosis and treatment. Expert Rev Med Dev 11(6):649–666
5. Fan Y, Meng M-H, Li B (2010) 3D reconstruction of wireless capsule endoscopy images. In: Annual international conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp 5149–5152
6. Sliker L, Ciuti G, Rentschler M, Menciassi A (2015) Magnetically driven medical devices: a review. Expert Rev Med Dev 12(6):737–752
7. Sliker LJ, Kern MD, Schoen JA, Rentschler ME (2012) Surgical evaluation of a novel tethered robotic capsule endoscope using micro-patterned treads. Surg Endosc 26(10):2862–2869
8. Valdastri P, Webster RJ III, Quaglia C, Quirini M, Menciassi A, Dario P (2009) A new mechanism for mesoscale legged locomotion in compliant tubular environments. IEEE Trans Robot 25(5):1047–1057
9. Carpi F, Pappone C (2009) Magnetic maneuvering of endoscopic capsules by means of a robotic navigation system. IEEE Trans Biomed Eng 56(5):1482–1490
10. Ciuti G, Valdastri P, Menciassi A, Dario P (2010) Robotic magnetic steering and locomotion of capsule endoscope for diagnostic and surgical endoluminal procedures. Robotica 28(2):199–207
11. Lucarini G, Mura M, Ciuti G, Rizzo R, Menciassi A (2015) Electromagnetic control system for capsule navigation: novel concept for magnetic capsule maneuvering and preliminary study. J Med Biol Eng 35(4):428–436
12. Lucarini G, Ciuti G, Mura M, Rizzo R, Menciassi A (2015) A new concept for magnetic capsule colonoscopy based on an electromagnetic system. Int J Adv Robot Syst 12:25
13. Ciuti G, Donlin R, Valdastri P, Arezzo A, Menciassi A, Morino M, Dario P et al (2010) Robotic versus manual control in magnetic steering of an endoscopic capsule. Endoscopy 42(2):148
14. Okamura AM (2004) Methods for haptic feedback in teleoperated robot-assisted surgery. Ind Robot Int J 31(6):499–508
15. Reilink R, Stramigioli S, Kappers AM, Misra S (2011) Evaluation of flexible endoscope steering using haptic guidance. Int J Med Robot Comput Assist Surg 7(2):178–186
16. Ghanbari A, Horan B, Nahavandi S, Chen X, Wang W (2014) Haptic microrobotic cell injection system. IEEE Syst J 8(2):371–383
17. Pacchierotti C, Magdanz V, Medina-Sánchez M, Schmidt OG, Prattichizzo D, Misra S (2015) Intuitive control of self-propelled microjets with haptic feedback. J Micro-Bio Robot 10(1–4):37–53
18. Mehrtash M, Khamesee MB, Tarao S, Tsuda N, Chang J-Y (2012) Human-assisted virtual reality for a magnetic-haptic micromanipulation platform. Microsyst Technol 18(9–10):1407–1415
19. Maier-Hein L, Groch A, Bartoli A, Bodenstedt S, Boissonnat G, Chang P-L, Clancy N, Elson DS, Haase S, Heim E et al (2014) Comparative validation of single-shot optical techniques for laparoscopic 3-D surface reconstruction. IEEE Trans Med Imaging 33(10):1913–1930
20. Maier-Hein L, Mountney P, Bartoli A, Elhawary H, Elson D, Groch A, Kolb A, Rodrigues M, Sorger J, Speidel S et al (2013) Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery. Med Image Anal 17(8):974–996
21. Ciuti G, Visentini-Scarzanella M, Dore A, Menciassi A, Dario P, Yang G-Z (2012) Intra-operative monocular 3D reconstruction for image-guided navigation in active locomotion capsule endoscopy. In: 4th IEEE RAS & EMBS international conference on biomedical robotics and biomechatronics (BioRob), pp 768–774
22. Siciliano B, Khatib O (2008) Springer handbook of robotics. Springer, Berlin. doi:10.1007/978-3-540-30301-5
23. Linner T, Shrikathiresan A, Vetrenko M, Ellmann B (2011) Modeling and operating robotic environments using Gazebo/ROS. In: Proceedings of the 28th international symposium on automation and robotics in construction (ISARC 2011), pp 957–962
24. Bouguet JY (2004) Camera calibration toolbox for Matlab
25. Engel J, Stückler J, Cremers D (2015) Large-scale direct SLAM with stereo cameras. In: Proceedings of the IEEE international conference on intelligent robots and systems (IROS)
26. Mur-Artal R, Montiel J, Tardos J (2015) ORB-SLAM: a versatile and accurate monocular SLAM system. IEEE Trans Robot 31(5):1147–1163
27. Visentini-Scarzanella M, Stoyanov D, Yang G-Z (2012) Metric depth recovery from monocular images using shape-from-shading and specularities. In: 19th IEEE international conference on image processing (ICIP), pp 25–28
28. Hast A, Nysjö J, Marchetti A (2013) Optimal RANSAC - towards a repeatable algorithm for finding the optimal set
29. Quigley M, Conley K, Gerkey B, Faust J, Foote T, Leibs J, Wheeler R, Ng AY (2009) ROS: an open-source robot operating system. ICRA Workshop on Open Source Software 3(3.2):5
30. Chitta S, Sucan I, Cousins S (2012) MoveIt! [ROS Topics]. IEEE Robot Autom Mag 19(1):18–19
31. Abu-Kheil Y, Ciuti G, Mura M, Dias J, Dario P, Seneviratne L (2015) Vision and inertial-based image mapping for capsule endoscopy. In: 2015 international conference on information and communication technology research (ICTRC), pp 84–87