Robotic Handling of Surgical Instruments in a Cluttered Tray

Yi Xu, Ying Mao, Xianqiao Tong, Huan Tan, Weston B. Griffin, Balajee Kannan, and Lynn A. DeRose

Abstract—We developed a unique robotic manipulation system that accurately singulates surgical instruments in a cluttered environment. A novel single-view computer vision algorithm identifies the next instrument to grip from a cluttered pile, and a compliant electromagnetic gripper picks up the identified instrument. The system is validated through extensive experiments.

Note to Practitioners—This research was motivated by the challenges of today's perioperative process in hospitals. The current process of instrument counting, sorting, and sterilization is highly labor intensive, and improperly sterilized instruments have resulted in many cases of infection. To address these challenges, an integrated robotic system for sorting instruments in a cluttered tray is designed and implemented. A digital camera captures an image of a cluttered tray, a novel single-view vision algorithm detects the instruments and determines the top instrument, and the position and orientation of the top instrument are transferred to a robot. A compliant electromagnetic gripper is developed to complete the grip. Experiments have demonstrated a high success rate for both instrument recognition and manipulation. In the future, error handling needs to be further reinforced under various exceptions for better robustness.

Index Terms—Bin picking, end-effector, image analysis, robot vision systems.

I. INTRODUCTION

AUTOMATING the perioperative process has the potential to significantly address current safety and efficiency concerns in a hospital. An enabling technology for such automation is a robotic system that picks and places instruments into bins. Existing approaches to automating the sorting process are expensive and limited in their capabilities. The state-of-the-art solution, RST's PenelopeCS, is designed to automate several key functions on the clean side of the sterile supply [1]. A human operator first separates the instruments from a container and places them on a conveyor belt one at a time.

Manuscript received December 17, 2014; accepted January 03, 2015. Date of publication March 03, 2015; date of current version April 03, 2015. This work was supported by the U.S. Department of Veteran Affairs under Grant VA11812-C-0051. This paper was recommended for publication by Associate Editor Y. Zhang and Editor D. Tilbury upon evaluation of the reviewers' comments. Y. Xu was with GE Global Research, Niskayuna, NY 12309 USA. He is now with CapsoVision, Inc., Saratoga, CA 95070-6615 USA (e-mail: [email protected]). Y. Mao, H. Tan, W. B. Griffin, B. Kannan, and L. A. DeRose are with GE Global Research, Niskayuna, NY 12309 USA (e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]). X. Tong was with GE Global Research, Niskayuna, NY 12309 USA. He is now with NVIDIA Corporation, Santa Clara, CA 95050 USA (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. This paper has supplementary downloadable multimedia material available at http://ieeexplore.ieee.org, provided by the authors. The supplementary video shows two robots performing surgical instrument pick-and-place using our method: the first sequence is performed by an Adept arm and the second by a Baxter. This material is 25 MB in size. Digital Object Identifier 10.1109/TASE.2015.2396041

Fig. 1. Pipeline of our robot manipulator.

Then, a robotic arm fitted with a magnetic gripper picks up a single instrument from the belt. A machine vision system or a barcode scanner is used to identify the instruments and sort them into stacks. The system requires additional infrastructure as well as human operation at critical junctures, and is therefore of limited use in an unstructured environment where instruments are cluttered in a container.

To handle surgical instruments cluttered in a container, several challenges must be addressed. First, surgical instruments often have very similar characteristics, which makes it difficult for vision-based algorithms to recognize them in an unordered pile. Second, instruments are made of shiny metal. Optical effects such as specularities and interreflections pose problems for many pose estimation algorithms, including multiview stereo and 3D scanning. Further, without a six degrees-of-freedom (DOF) pose, standard finger or vacuum grippers will have trouble executing the grip.

The proposed solution effectively addresses all of these challenges and has two key elements: 1) a vision algorithm that robustly identifies the instruments in a clutter, infers the occlusion relationships among the instruments, and provides visual guidance for the robot manipulator; and 2) a compliant end-effector design, which can execute precise instrument gripping in a cluttered environment with only a 2D reference point as the picking location. The flexibility of the end-effector is important because determining a weak 4-DOF pose (i.e., 2D location, orientation, and scale) in 2D space is more robust and potentially faster than computing an accurate full 6-DOF pose, given the optically challenging nature of the surgical instruments. To our knowledge, this is the first instance of an automated sorting solution that is robust enough to handle a varied instrument suite. Fig. 1 shows the workflow of the proposed robotic system.

Our contributions include:
• An integrated vision-guided robotic system for singulating surgical instruments from a cluttered environment.
• A vision algorithm that identifies instruments, estimates 4-DOF poses, and determines the top objects in a pile.
• A custom electromagnetic gripper with multi-axis compliance that grips surgical instruments with only a 2D location as reference.

This paper is an extended version of our previous conference publication [2].



II. RELATED WORK

A. Vision-Guided Bin-Picking

Picking items from an unordered pile or in unstructured environments has been a popular research topic for decades [3]. Some recent developments use active sensing, e.g., a Kinect sensor, to acquire a depth map of the scene [4], [5]; 3D object models are then matched to the acquired point clouds. Choi et al. [6] use a structured-light scanner to capture 3D models of objects in a box and use a Hough-voting approach based on oriented surface points to estimate object pose. PR2 uses a 3D laser scanner for mobile manipulation [7].

Bin-picking of shiny objects has also been studied in industrial settings. Shroff et al. [8] extract high-curvature regions on specular objects using a multi-flash camera; object pose is obtained by triangulating these features. Rodrigues et al. [9] use a single camera and multiple light sources, and train a random FERN classifier to map the appearance of small patches to object poses. Liu et al. [10] use a multi-flash camera to obtain good depth-edge information and use fast directional chamfer matching (FDCM) to match templates onto the input image. Pretto et al. [11] use a single camera and a large diffuse light source mounted over the container; they estimate a 6-DOF pose for planar objects based on a candidate selection and refinement scheme, although in their experiments each container contains only one object type. Data-driven approaches (e.g., [9]) are also popular among robotics researchers. Collet et al. [12] develop a multiview approach that recognizes all objects in the scene and estimates their full poses; their approach relies on models of the objects learned using the SIFT descriptor.

1) Contributions of the Proposed Vision Algorithm: Most 3D sensors have difficulty capturing specular objects, and the strong similarities among surgical instruments pose serious challenges for data-driven approaches. Compared to template matching methods, the proposed approach has several advantages. 1) Template matching minimizes a cost function over a parameter space of transformations for each template, so its computation time scales linearly with the number of templates. The computation time of the proposed approach depends on the number of actual objects instead of templates, making it suitable for handling a large variety of surgical instruments. 2) It is problematic to pick the object with the smallest cost, as most template matching methods do, because increasing the number of templates increases the proportion of false matches, especially when many of the instruments are very similar; the matching cost also depends heavily on the shape and size of the objects and on occlusions. We applied both FDCM [10] and our algorithm to an image with five instruments (Fig. 2). FDCM identifies an object that is still occluded but has a simple shape, while our method identifies a good candidate for picking. 3) The proposed approach only uses a DSLR camera, without the need for multiple or special lights or multiple views, and is thus cheaper and easier to implement.

Fig. 2. FDCM algorithm [10] (left) and the proposed approach (right) applied to surgical instrument data. FDCM identifies an object that is still occluded. Our method identifies a good candidate for picking.

B. Occlusion Reasoning

Occlusion reasoning can be addressed by modeling local inconsistency around the occluders [13]. However, for surgical instruments, the occluders have an appearance similar to that of the instruments being occluded. Alternative approaches focus on learning the structure of occlusions from data [14]. Hsiao and Hebert [15] propose a multicamera approach to model the interactions among 3D objects. Compared to these two methods, the proposed approach does not require training data and only uses one camera.

C. End-Effector Design

Pick-and-place robots are a mature class of robots. Despite significant advances, a key technical challenge is the need for low-cost, adaptive, and precise end-effectors. Common and commercially available grippers include vacuum, parallel, and magnetic grippers. Vacuum grippers are simple and offer gentle manipulation, but they are limited in handling irregularly shaped objects with uneven surfaces. Parallel grippers have high precision and repeatability; most of them are custom made and tailored to specific applications. Recent advances in dexterous manipulation have made it possible to grip a large variety of objects with a single gripper design [16]. However, handling surgical instruments in cluttered environments is still very challenging, and sophisticated vision guidance is needed to ensure a reliable grip and avoid collisions.

Magnetic grippers can generate an enormous gripping force in a compact form factor and can grip irregularly shaped objects. Because most metallic surgical instruments that require sterilization are ferromagnetic [17], PenelopeCS uses a magnetic gripper for surgical instrument manipulation [1]. This gripper has an electromagnet mounted on a 1-DOF spring-loaded base to allow firm contact between the electromagnet and the surgical instrument. Such a design works well for a single instrument on a flat surface. However, its utility is limited when used with instruments in a pile, since 6-DOF pose estimation is required to obtain the orientation of the contact surface. We address this issue by designing an electromagnetic gripper that can passively reorient and conform to the instrument surface.

III. DESIGNING ROBOTS FOR INSTRUMENT HANDLING

Typically, a sterile processing unit has a dirty side and a clean side. On the dirty side, instruments are washed and disinfected manually by a sterile processing nurse, and debris and biological material are removed. The job of the dirty-side robot is to pick up instruments from a pile and sort them into several empty containers based on instrument type (e.g., scissors, tweezers). The nurse then brings the containers to a sink, washes the instruments, and returns the empty containers to the robot's working table for more instruments. In this research, we use a Baxter Research Robot on the dirty side, whose compliant design allows it to work well with people around.

After decontamination, instruments are moved to the clean side for further processing. The job of the clean-side robot is to count the instruments, pick them up, and place them into predefined locations of a surgical kit. We therefore use a six-axis Adept® Viper S650 arm, since it is clean-room certified and has high accuracy.

IV. VISION-BASED INSTRUMENT LOCALIZATION

Our vision system is designed to operate on both the dirty side and the clean side. We use a high-resolution DSLR camera that looks down at a predefined region within the robot's workspace. Fig. 3 shows a flowchart of our vision system. The vision algorithm finds one candidate instrument, determines a 2D gripping point, and waits for the robot to pick it up. The process repeats until all instruments are removed from the container.

Fig. 3. Flowchart of our computer vision algorithm.

A. Surgical Instrument Identification

In recent years, automated tracking of surgical instruments has been gaining popularity (e.g., Key Surgical® KeyDot and Censitrac™). Individual instrument tracking beyond the tray level improves infection control and provides a mechanism for root-cause analysis. Typically, each surgical instrument is equipped with a small 2D data matrix barcode, which ranges from 1/8 to 1/4 inch in diameter and encodes a unique ID for the tool. We use a commercial barcode reading module [18] to locate and read all visible barcodes from a high-resolution image. We place barcodes on both sides of an instrument, assuming instruments have only two possible stable placements in a pile. For non-flat instruments (e.g., forceps), a cap is used to close the tips, reducing the potential for alternate stable orientations that could limit barcode visibility. On the dirty side, barcode visibility might be affected by biological remains; in such a case, the sterile processing nurse manually removes the instrument from the pile. Fig. 4(a) shows an image of the instruments in a tray and the detected barcodes (green boxes).

In a preprocessing step, a library is populated with templates, each of which is an image of an instrument captured against a black background. Each instrument generates two templates in the library, one for each side. The templates are indexed using the barcode IDs.
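As a rough illustration of this indexing scheme, the sketch below builds the template library as a dictionary keyed by barcode ID. The barcode decoder is passed in as a function, since the commercial SDK used in the paper is not reproduced here; the field names `id` and `corners` are assumptions.

```python
# A minimal sketch of the template library, assuming a caller-supplied
# decode_fn(image) that returns objects with `id` and `corners` fields
# (hypothetical stand-ins for the commercial barcode-reading module).
import cv2

def build_template_library(template_files, decode_fn):
    """Map barcode ID -> template image and the barcode's four corner points.

    Each instrument contributes two entries, one per stable side, so the
    visible barcode also tells us which side of the instrument is facing up.
    """
    library = {}
    for path in template_files:
        image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        for code in decode_fn(image):          # hypothetical decoder interface
            library[code.id] = {"image": image, "corners": code.corners}
    return library
```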

B. Localization and Pose Estimation

We estimate a 4-DOF pose for each instrument in an input image. For each detected barcode, we retrieve its counterpart template from the library. The barcode reading module not only reads the ID, but also detects the four corner points of the data matrix. Using the four corners in both the input image and the template, we compute an affine transformation that brings the template into alignment with the instrument in the input image. Fig. 4(b) shows a visualization of the instrument localization and pose estimation step: we superimpose the randomly colored edges of the transformed templates onto the input image to show the effectiveness of the alignment. Pose estimation could be improved by using a projective transformation or a subsequent nonlinear optimization that minimizes reprojection errors; we simply use an affine transform for computational efficiency. As we discuss later, thanks to our robust occlusion reasoning algorithm and our compliant end-effector design, perfect pose estimation is not essential for a successful grip.

Fig. 4. (a) Decoded barcodes shown in green boxes. There are missed detections due to occlusion (yellow circles). (b) Pose estimation using the four corners of the data matrices from both the templates and the input image. Transformed template edge maps are superimposed onto the input image.
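The alignment step amounts to a least-squares affine fit between two sets of barcode corners. The sketch below shows one way to compute it with OpenCV; the function and argument names are illustrative, not the paper's implementation.

```python
# A minimal sketch of the 4-DOF pose step, assuming OpenCV/NumPy and that the
# barcode reader provides the four data-matrix corners in both the template
# and the input image.
import cv2
import numpy as np

def align_template(template_corners, image_corners, template_edges, image_shape):
    """Fit an affine transform from barcode-corner correspondences and warp
    the template edge map into the coordinate frame of the input image."""
    src = np.asarray(template_corners, dtype=np.float32)   # (4, 2) template points
    dst = np.asarray(image_corners, dtype=np.float32)      # (4, 2) image points
    affine, _ = cv2.estimateAffine2D(src, dst)              # least-squares 2x3 fit
    h, w = image_shape[:2]
    warped_edges = cv2.warpAffine(template_edges, affine, (w, h))
    return affine, warped_edges
```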

C. Occlusion Reasoning

Fig. 5. (Left) An occupancy map of the instruments. Instruments assigned to a lower bit have lower intensity. (Right) Several binary masks showing pairwise intersection regions between instruments.

Given the poses of the instruments, the vision algorithm infers the occlusion relationship between each pair of intersecting instruments, which determines which instruments are not occluded by any other. We first compute a single-channel occupancy map in which each bit is assigned to one instrument: each template is transformed using its computed affine transformation, and its binary mask is stored in the designated bit of the occupancy map. The intersecting region between two instruments can then be computed by finding the occupancy-map pixels in which both corresponding bits are set. Fig. 5 shows an occupancy map and the intersection-region masks for a few pairs of instruments.
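A compact way to realize this bit-encoded map is with integer bit operations, as in the sketch below. It assumes each transformed template is already available as a boolean mask in image coordinates; the names and the dilation size are illustrative.

```python
# A sketch of the bit-encoded occupancy map and pairwise intersection masks,
# assuming the transformed template masks are boolean arrays in image coordinates.
import cv2
import numpy as np

def build_occupancy_map(instrument_masks):
    """Pack one bit per instrument into a single integer image."""
    occupancy = np.zeros(instrument_masks[0].shape, dtype=np.uint32)
    for bit, mask in enumerate(instrument_masks):   # supports up to 32 instruments
        occupancy |= mask.astype(np.uint32) << bit
    return occupancy

def intersection_mask(occupancy, bit_i, bit_j, dilate_px=5):
    """Pixels covered by both instrument i and instrument j, slightly dilated
    to absorb small pose-estimation errors."""
    both = (((occupancy >> bit_i) & 1) & ((occupancy >> bit_j) & 1)).astype(np.uint8)
    kernel = np.ones((dilate_px, dilate_px), np.uint8)
    return cv2.dilate(both, kernel).astype(bool)
```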

For every pair of intersecting instruments A and B, there are only two hypothetical occlusion relationships: A occludes B, or B occludes A. We synthesize two images corresponding to the two hypotheses by rendering the two templates in the two different orders. The occlusion relationship can then be inferred by comparing the actual input image against the synthesized images. Since the two hypothesis images differ only in the intersecting regions and are identical otherwise, we compare them against the input only within the intersection-region masks. We dilate the masks by a small amount to account for inaccuracy in the estimated poses.

To compare images, we use a descriptor called Edge Orientation Histograms (EOH) [19], which is contrast invariant and uses only edges instead of appearance information. It can handle large appearance changes due to varying lighting conditions, specularities, and interreflections among the surgical instruments. To reduce noise and focus on important contour edges, the input images are first blurred with a Gaussian kernel (e.g., 7 × 7). We compute masked EOH (meoh) descriptors for the query image and the two hypothesis images, and then compute Euclidean distances between the histograms. We use 2 × 2 overlapping blocks and 9 bins per block, resulting in a 36-dimensional descriptor. The numbers of blocks and bins were determined empirically; a thorough study to determine the best descriptor parameters is left for future work. The hypothesis with the smaller histogram distance to the input image is selected:

select h* = arg min over h in {A occludes B, B occludes A} of || meoh(query) − meoh(h) ||_2,   (1)

where || · ||_2 denotes the Euclidean distance between the masked descriptors.
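The descriptor comparison can be sketched as follows. This is a simplified illustration: it uses non-overlapping 2 × 2 blocks over the whole image rather than the paper's overlapping blocks over the intersection region, and the gradient-based edge-orientation extraction is an assumed implementation detail.

```python
# A simplified sketch of the masked EOH comparison, assuming grayscale images.
# Non-overlapping blocks and Sobel-based orientations are used for brevity;
# the paper's exact edge extraction and block layout may differ.
import cv2
import numpy as np

def masked_eoh(gray, mask, blocks=2, bins=9):
    """36-D descriptor: per-block histograms of edge orientations inside mask."""
    gray = cv2.GaussianBlur(gray, (7, 7), 0)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # orientation in [0, pi)
    mag = np.where(mask, mag, 0.0)               # ignore pixels outside the mask
    h, w = gray.shape
    desc = []
    for by in range(blocks):
        for bx in range(blocks):
            ys = slice(by * h // blocks, (by + 1) * h // blocks)
            xs = slice(bx * w // blocks, (bx + 1) * w // blocks)
            hist, _ = np.histogram(ang[ys, xs], bins=bins, range=(0, np.pi),
                                   weights=mag[ys, xs])
            desc.append(hist / (hist.sum() + 1e-8))   # contrast invariance
    return np.concatenate(desc)

def select_hypothesis(query, hyp_ab, hyp_ba, mask):
    """Return True if 'A occludes B' matches the query better than 'B occludes A'."""
    q = masked_eoh(query, mask)
    d_ab = np.linalg.norm(q - masked_eoh(hyp_ab, mask))
    d_ba = np.linalg.norm(q - masked_eoh(hyp_ba, mask))
    return d_ab < d_ba
```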


Fig. 6. (a) An input image (top) and a query image generated by Canny edge detection (bottom). (b) and (c) The two synthesized hypotheses.

Fig. 6 shows an actual input image, the two synthesized hypotheses, and their edge maps. Due to different lighting conditions, the appearance of the instruments changes considerably between the query image and the hypothesis images, especially the highlights and shadows. The proposed method is still able to select the correct hypothesis because the EOH descriptor is robust against such optical challenges.

If the intersection region of two instruments A and B is itself occluded by a third instrument C, applying (1) to A and B can lead to an incorrect decision due to the lack of edge features. This is not a problem in practice, because only one unoccluded instrument is picked at a time rather than the entire tray being sorted in one step; in this case, C will be picked first. Because the pile of instruments may shift during each grip, we recapture and process the scene before each grasp.

D. Picking an Instrument

Once all the occlusion relationships are determined, the algorithm finds the non-occluded instruments, one of which is randomly selected for picking. Occasionally, all instruments are occluded by others due to an occlusion cycle (e.g., A occludes B, B occludes C, and C occludes A); in such a case, we pick the least occluded instrument.

The picking location for each surgical instrument can be determined empirically by a user in the template-creation stage. Since the camera's image plane is parallel to the robot's working surface, the picking location in image space is easily translated to the robot's coordinate system. Because of the compliant end-effector design, imperfections in this transformation do not affect the gripping accuracy much.

Alternatively, an optimal picking location can be determined for the target instrument. For every point on the instrument, we compute the area of the target instrument within a bounding box the size of the gripper, as well as the sum of the areas of all other instruments within the same bounding box. We then compute a score (2) that increases with the former and decreases with the latter; the point with the largest score is the optimal picking location. In this way, the contact area between the gripper and the instrument is maximized while the chance of other instruments interfering with the grip is minimized. Both areas can be computed efficiently using integral images on the occupancy map (a sketch is given at the end of this subsection).

When the robot fails to grip an instrument, the system will attempt to pick the same instrument twice before moving on to the next candidate instrument.
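The box sums behind this score can be evaluated in constant time per pixel with integral images. The sketch below assumes the score is simply the target area minus the other-instrument area in a gripper-sized window; the exact weighting in (2) is not reproduced here, so treat it as illustrative.

```python
# A sketch of the optimal picking-point search using integral images. The
# score (target area minus other-instrument area in a gripper-sized window)
# is an illustrative stand-in for Eq. (2), not the paper's exact form.
import numpy as np

def box_sums(mask, win):
    """Sum of a binary mask over a win x win window centered at every pixel."""
    ii = np.pad(mask.astype(np.int64), ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, w = mask.shape
    r = win // 2
    y0 = np.clip(np.arange(h) - r, 0, h); y1 = np.clip(np.arange(h) + r + 1, 0, h)
    x0 = np.clip(np.arange(w) - r, 0, w); x1 = np.clip(np.arange(w) + r + 1, 0, w)
    return (ii[np.ix_(y1, x1)] - ii[np.ix_(y0, x1)]
            - ii[np.ix_(y1, x0)] + ii[np.ix_(y0, x0)])

def best_picking_point(target_mask, others_mask, win=80):
    """Return (row, col) maximizing target coverage while avoiding other instruments."""
    score = box_sums(target_mask, win) - box_sums(others_mask, win)
    score = np.where(target_mask, score, -np.inf)   # grip point must lie on the target
    return np.unravel_index(np.argmax(score), score.shape)
```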

V. END-EFFECTOR DESIGN

In this study, a compliant electromagnetic gripper is designed to grip a surgical instrument in a cluttered environment given only a 2D reference point as the picking location.

Fig. 7. (a) Design: 1: spring for the vertical-axis compliance; 2: rotary damper; 3: torsion spring; 4: electromagnet; 5: adapter; 6: load cell. (b) Workflow of gripping. The red/blue lines approximate the surface orientations of the gripper and instrument before and during contact.

As shown in Fig. 7(a), the electromagnetic gripper has three passive joints: a prismatic joint along the vertical axis and two revolute joints about the horizontal axes. Each joint has a coil spring attached to the shaft, making the joints compliant. A rotary damper (ACE Controls® RTG2) is coupled to the revolute joint shafts through spur gears to prevent instrument oscillation after pickup and during transport. A load cell (Futek® LSB200, 5 lb) is used to measure the contact force between the electromagnet and the instrument. We use a threshold force (e.g., 7 N) to determine when the electromagnet is in proper contact with an instrument (i.e., no gap between the electromagnet and the instrument). An electromagnet (APW® EM137S) is attached at the bottom of the gripper. To reduce potential adherence between the target instrument and adjacent instruments, the current of the electromagnet is modulated by a motor drive to generate a gripping force just large enough to pick up the target. For each instrument, the required magnet current is calibrated manually and stored in a lookup table indexed by instrument ID.

An illustration of the instrument pickup workflow is shown in Fig. 7(b). Given a 2D gripping location, a pick-up maneuver is completed in three steps: 1) the robot arm moves the gripper to the gripping location at a height above the instrument that is experimentally determined (e.g., 8 cm) to accommodate various pile heights; 2) the robot arm approaches the target along the vertical axis; when the electromagnet comes into contact with the target, it reorients itself to align with the instrument surface, and the robot controller monitors the contact force until the threshold is reached, indicating full contact with the target; 3) the electromagnet is energized with the pre-calibrated current to pick up the instrument.

One challenge with an electromagnetic gripper is that surgical instruments may become magnetized over time. Residual magnetism not only causes difficulty during surgery, but also leads to drop failures when an instrument adheres to the gripper after the current is set to zero. Our solution is to control the electromagnet current so that it oscillates in polarity and decays over time from the picking current down to zero; the number of oscillation steps is chosen to be 20 for a balanced operation time and instrument-release success rate. A standard demagnetizer (e.g., Neutrolator®) is also used to demagnetize a tray of instruments after all processing is done. A control sketch of the pick-and-release sequence is given below.
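```python
# A minimal control sketch of the pick-and-release sequence. `arm`,
# `load_cell`, and `magnet` are hypothetical wrappers for the robot arm, the
# load cell, and the electromagnet driver; the linear alternating decay used
# for demagnetization is an illustrative choice, not the paper's exact waveform.
import time

F_CONTACT = 7.0          # N, contact-force threshold from the text
HOVER_HEIGHT = 0.08      # m, hover height above the pile (8 cm)
N_DEMAG_STEPS = 20       # number of alternating, decaying current steps

def pick_instrument(arm, load_cell, magnet, grip_xy, pick_current):
    """Hover over the 2D grip point, approach until contact, then energize."""
    arm.move_to(grip_xy[0], grip_xy[1], z=HOVER_HEIGHT)     # step 1: hover
    while load_cell.read_force() < F_CONTACT:               # step 2: approach
        arm.step_down(0.002)                                 # descend in 2 mm steps
    magnet.set_current(pick_current)                         # step 3: grip
    arm.lift()

def release_instrument(magnet, pick_current):
    """Drop the instrument while limiting residual magnetism by driving the
    magnet with an alternating current that decays to zero."""
    for n in range(N_DEMAG_STEPS):
        level = pick_current * (1.0 - n / N_DEMAG_STEPS)
        magnet.set_current(level if n % 2 == 0 else -level)
        time.sleep(0.02)
    magnet.set_current(0.0)
```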

Both the Baxter and the Viper S650 robots are equipped with our electromagnetic gripper. For Baxter, an Arduino® Due microcontroller is used as a bridge between the gripper electronics and ROS. For the Viper, an Adept® SmartController is used to control the gripper. Communication between the vision system and the robots is via TCP/IP messages.

VI. EXPERIMENTS

In the experiments, the vision algorithm runs on a PC with a 3.2 GHz CPU and 8 GB of RAM.


Fig. 8. Camera views from an experimental run of our robot manipulator. Each image shows the container before a grip. The target instrument is highlighted in green, and the 2D reference point for gripping is marked by a red "+". The final four images are omitted.

TABLE I
RESULTS USING BAXTER AND ADEPT

To detect the small 2D barcodes, we use a DSLR camera with 6000 × 4000 pixel resolution (Nikon® D5200). We found that 70 × 70 pixels are necessary to decode a 1/8-inch data matrix. The occlusion reasoning algorithm is performed at a much lower resolution of 1500 × 1000 pixels.

Fifteen experiments were performed on each robot, and each run started with 12–15 instruments. In total, 421 images were captured and processed. Barcode detection took from 1.21 to 1.82 s depending on the number of barcodes in the view, and it took another 0.48 to 2.93 s to compute the gripping location depending on the clutter. Table I shows quantitative results of the experiments. The vision algorithm successfully detected the topmost instrument, or the one that was least occluded, 93% of the time with Baxter and 95% of the time with Adept. When the candidate instrument was correctly identified, the success rate for singulation was 98% with the Adept arm and 92% with Baxter. Baxter's lower success rate was due to two reasons: 1) it is compliant, so the magnet tends to slide along the instrument surface by up to 1 cm during contact, and 2) it has lower positioning accuracy. Fig. 8 shows an image sequence using the Adept arm; the next instrument to be removed is highlighted in green with a 2D reference gripping location (small red "+").

Fig. 9. (a) The vision system recommended an instrument incorrectly. (b) After a failed attempt, the instruments shifted, resulting in a new configuration in which the vision system successfully found the topmost instrument.

A. Gripping Failure and Recovery

Three kinds of failure lead to an unsuccessful grip.

1) The vision system incorrectly identified a target instrument (26 out of 421 images). Most of the time this was due to an occluded barcode; occasionally the vision algorithm made an incorrect prediction (3 out of 26). When a vision error happened, the robot manipulator was able to recover from the failure. Fig. 9(a) illustrates such a case. The algorithm decided that an instrument was not occluded because the barcode of the instrument occluding it was not visible. The robot attempted to grip it, but failed due to insufficient gripping force. However, the instruments shifted during the attempt and the occluding instrument's barcode became visible (Fig. 9(b)); the robot manipulator then removed that instrument from the pile.

2) Even when the target instrument was correctly identified, gripping still failed 22 times out of 395 attempts. The majority of these failures (20) occurred because the gripper came into contact with an instrument that was not the target, which happened more often on the dirty-side Baxter due to the potential sliding between the magnet and the instrument surface. This resulted in gripping nothing (18 times) or gripping a wrong instrument (2 times). In Fig. 10(a), two instruments were aligned closely; even though one of them was the target, the electromagnetic gripper made contact with the adjacent scissors first and was not able to lift them due to insufficient gripping force. After failing to grip twice, the system decided to ignore this instrument and move on to the next candidate.

Fig. 10. (a) A failed attempt when trying to remove an instrument. (b) After the failed attempt, the instrument determined to be occluded the least. (c) The instrument now on top of the pile.

When a wrong instrument is gripped and placed in a wrong bin, a sterile processing nurse has to correct the error manually.

3) Occasionally, there can be an occlusion cycle among the instruments, in which case the proposed algorithm picks the least occluded instrument. Fig. 10(b) shows such a cycle: since one instrument was the least occluded, the system decided to singulate it, and after removing it another instrument sat on top of the pile (Fig. 10(c)). In rare cases (only twice), such a cycle put too much weight on the target instrument and led to nothing being gripped. However, the instruments may shift during the failed attempts, as in Fig. 9, and our system is able to recover from the failure.

VII. CONCLUSION AND DISCUSSION

We developed a flexible robotic manipulation system that is able to identify and singulate surgical instruments from a cluttered container.


The vision algorithm is robust against changing lighting conditions. The compliant electromagnetic gripper allows us to solve for a 2D pose instead of a more challenging 3D pose. The compliant design of the gripper also makes it possible to recover from certain gripping failures, and the approach can be extended to applications involving other mostly planar objects, such as certain industrial parts.

In the future, we will first work on error handling. For the scenario in Fig. 9, a vision-based object verification algorithm could determine whether the topmost object is indeed unoccluded, and a verification step using an additional camera could be incorporated to check that a singulated instrument is indeed the one determined by the vision algorithm. Currently, we require that all instruments with a pivot (e.g., scissors, clamps) be in the closed position. In the future, we would like to extend our algorithm to handle opened instruments; this involves identifying the instruments, finding the pivot location, and performing template matching on the two portions separately. We would also like to extend the system to picking up nonplanar objects, which can be achieved by placing a barcode and creating a template for each stable pose of the object. Finally, instead of randomly selecting one of the non-occluded objects to pick, we want to optimize the picking sequence to achieve minimal cycle time.

REFERENCES

[1] RST PenelopeCS, 2015. [Online]. Available: http://www.roboticsystech.com/
[2] Y. Xu, X. Tong, Y. Mao, W. Griffin, B. Kannan, and L. DeRose, "A vision-guided robot manipulator for surgical instrument singulation in a cluttered environment," in Proc. IEEE Int. Conf. Robot. Autom., 2014, pp. 3517–3523.
[3] K. Rahardja and A. Kosaka, "Vision-based bin-picking: Recognition and localization of multiple complex objects using simple visual cues," in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst., 1996, pp. 1448–1457.
[4] C. Papazov, S. Haddadin, S. Parusel, K. Krieger, and D. Burschka, "Rigid 3D geometry matching for grasping of known objects in cluttered scenes," Int. J. Robot. Res., vol. 31, no. 4, pp. 538–553, Apr. 2012.
[5] M. Nieuwenhuisen, D. Droeschel, D. Holz, J. Stückler, A. Berner, J. Li, R. Klein, and S. Behnke, "Mobile bin picking with an anthropomorphic service robot," in Proc. IEEE Int. Conf. Robot. Autom., 2013, pp. 2327–2334.

[6] C. Choi, Y. Taguchi, O. Tuzel, M.-Y. Liu, and S. Ramalingam, "Voting-based pose estimation for robotic assembly using a 3D sensor," in Proc. IEEE Int. Conf. Robot. Autom., 2012, pp. 1724–1731.
[7] S. Chitta, E. G. Jones, M. Ciocarlie, and K. Hsiao, "Mobile manipulation in unstructured environments: Perception, planning, and execution," IEEE Robot. Autom. Mag., vol. 19, no. 2, pp. 58–71, Jun. 2012.
[8] N. Shroff, Y. Taguchi, O. Tuzel, A. Veeraraghavan, S. Ramalingam, and H. Okuda, "Finding a needle in a specular haystack," in Proc. IEEE Int. Conf. Robot. Autom., 2011, pp. 5963–5970.
[9] J. J. Rodrigues, J.-S. Kim, M. Furukawa, J. Xavier, P. Aguiar, and T. Kanade, "6D pose estimation of textureless shiny objects using random ferns for bin-picking," in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst., 2012, pp. 3334–3341.
[10] M.-Y. Liu, O. Tuzel, A. Veeraraghavan, Y. Taguchi, T. K. Marks, and R. Chellappa, "Fast object localization and pose estimation in heavy clutter for robotic bin-picking," Int. J. Robot. Res., vol. 31, no. 8, pp. 951–973, Jul. 2012.
[11] A. Pretto, S. Tonello, and E. Menegatti, "Flexible 3D localization of planar objects for industrial bin-picking with monocamera vision system," in Proc. IEEE Int. Conf. Autom. Sci. Eng., 2013, pp. 168–175.
[12] A. Collet, M. Martinez, and S. Srinivasa, "The MOPED framework: Object recognition and pose estimation for manipulation," Int. J. Robot. Res., vol. 30, no. 10, pp. 1284–1306, Sep. 2011.
[13] X. Wang, T. Han, and S. Yan, "An HOG-LBP human detector with partial occlusion handling," in Proc. IEEE Int. Conf. Comput. Vision, 2009, pp. 32–39.
[14] S. Kwak, W. Nam, B. Han, and J. H. Han, "Learn occlusion with likelihoods for visual tracking," in Proc. IEEE Int. Conf. Comput. Vision, 2011, pp. 1551–1558.
[15] E. Hsiao and M. Hebert, "Occlusion reasoning for object detection under arbitrary viewpoint," in Proc. IEEE Conf. Comput. Vision Pattern Recogn., 2012, pp. 3146–3153.
[16] A. M. Dollar and R. D. Howe, "The highly adaptive SDM hand: Design and performance evaluation," Int. J. Robot. Res., vol. 29, no. 5, pp. 585–597, Feb. 2010.
[17] Surgical Instruments – Metallic Materials – Part 1: Stainless Steel, ISO 7153-1 Standard, 1991.
[18] 2DTG DataMatrix SDK icEveryCode™, 2015. [Online]. Available: http://www.2dtg.com/
[19] B. Alefs, G. Eschemann, H. Ramoser, and C. Beleznai, "Road sign detection from edge orientation histograms," in Proc. IEEE Intell. Vehicles Symp., 2007, pp. 993–998.
