Robotic Grasping and Manipulation

Yu Sun · Joe Falco (Eds.)


Communications in Computer and Information Science
Commenced Publication in 2007

Founding and Former Series Editors:
Phoebe Chen, Alfredo Cuzzocrea, Xiaoyong Du, Orhun Kara, Ting Liu, Dominik Ślęzak, and Xiaokang Yang

Editorial Board

Simone Diniz Junqueira Barbosa, Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Rio de Janeiro, Brazil
Joaquim Filipe, Polytechnic Institute of Setúbal, Setúbal, Portugal
Igor Kotenko, St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, St. Petersburg, Russia
Krishna M. Sivalingam, Indian Institute of Technology Madras, Chennai, India
Takashi Washio, Osaka University, Osaka, Japan
Junsong Yuan, University at Buffalo, The State University of New York, Buffalo, USA
Lizhu Zhou, Tsinghua University, Beijing, China

816

More information about this series at http://www.springer.com/series/7899

Yu Sun · Joe Falco (Eds.)

Robotic Grasping and Manipulation First Robotic Grasping and Manipulation Challenge, RGMC 2016 Held in Conjunction with IROS 2016 Daejeon, South Korea, October 10–12, 2016 Revised Papers


Editors Yu Sun University of South Florida Tampa, FL USA

Joe Falco National Institute of Standards and Technology Gaithersburg, MD USA

ISSN 1865-0929; ISSN 1865-0937 (electronic)
Communications in Computer and Information Science
ISBN 978-3-319-94567-5; ISBN 978-3-319-94568-2 (eBook)
https://doi.org/10.1007/978-3-319-94568-2
Library of Congress Control Number: 2018948382

© Springer International Publishing AG, part of Springer Nature 2018

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

This edited book, Robotic Grasping and Manipulation Challenge 2016, is a collection of papers describing the rules, results, competitor systems, and future directions of the inaugural competition held at IROS 2016 in Daejeon, South Korea. This competition was designed to allow researchers focused on the application of robot systems to compare the performance of hand designs as well as autonomous grasping and manipulation solutions across a common set of tasks. The competition comprised three tracks that included hand-in-hand grasping, fully autonomous grasping, and simulation. The hand-in-hand and fully autonomous tracks used 18 predefined manipulation tasks and 20 objects. The first chapter of this book provides an overview of the task pool as well as the selection of tasks to support the various stages of the competition. Chapters 2–11 present the competitors’ robotic system designs as well as their strategies and results in completing the tasks. Chapter 12 covers competitor feedback as well as an analysis of lessons learned toward improvements and advancements for the next competition at IROS 2017. In the final chapter a strategy is presented for a series of grasping and manipulation competitions that facilitate objective performance benchmarking of robotic assembly solutions with an emphasis on the mechanical assembly application space. We would like to acknowledge our sponsors, the International Conference on Intelligent Robots and Systems (IROS) 2016 Organizing Committee and the Institute of Electrical and Electronics Engineers (IEEE) Robotic and Automation Society (RAS) Technical Committee (TC) on Robotic Hands Grasping and Manipulation (RHGM). We would also like to convey our appreciation to all contributors to this book as well as the technical advisors and volunteers who helped to make this competition successful. 
We thank Optoforce, Right Hand Robotics, and Robotous for donating components from their commercial lines of robot products as awards to top performers, which will surely inspire future research efforts. Finally, a special thanks to Volha Shaparava from Springer for her diligent efforts to bring this book to fruition.

March 2018

Yu Sun Joe Falco

Organization

Organizing Committee

Yu Sun, University of South Florida, USA
Nadia Cheng, RightHand Robotics, USA
Hyouk Ryeol Choi, Sungkyunkwan University, South Korea
Zoe Doulgeri, Aristotle University of Thessaloniki, Greece
Erik D. Engeberg, Florida Atlantic University, USA
Kris Hauser, Duke University, USA
Joe Falco, National Institute of Standards and Technology, USA
Nancy Pollard, Carnegie Mellon University, USA
Maximo Roa, DLR German Aerospace Center – Institute of Robotics and Mechatronics, Germany
Zeyang Xia, Shenzhen Institutes of Advanced Technology, China

Contents

Robotic Grasping and Manipulation Competition: Task Pool . . . . . 1
Yu Sun, Joe Falco, Nadia Cheng, Hyouk Ryeol Choi, Erik D. Engeberg, Nancy Pollard, Maximo Roa, and Zeyang Xia

Advanced Grasping with the Pisa/IIT SoftHand . . . . . 19
Manuel Bonilla, Cosimo Della Santina, Alessio Rocchi, Emanuele Luberto, Gaspare Santaera, Edoardo Farnioli, Cristina Piazza, Fabio Bonomo, Alberto Brando, Alessandro Raugi, Manuel G. Catalano, Matteo Bianchi, Manolo Garabini, Giorgio Grioli, and Antonio Bicchi

Design of Modular Humanoid Robotic Hand Driven by SMA Actuator . . . . . 39
Yang Chen, Shaofei Guo, Hui Yang, and Lina Hao

The TU Hand: Using Compliant Connections to Modulate Grasping Behavior . . . . . 57
Dipayan Das, Nathanael J. Rake, and Joshua A. Schultz

Design and Application of Dorabot-hand2 System . . . . . 84
Zhikang Wang, Shuo Liu, and Hao Zhang

Manipulation Using the "Utah" Prosthetic Hand: The Role of Stiffness in Manipulation . . . . . 107
Radhen Patel, Jacob Segil, and Nikolaus Correll

SKKU Hand Arm System: Hardware and Control Scheme . . . . . 117
Dongmin Choi, Byung-jin Jung, and Hyungpil Moon

A Robotic System for Autonomous Grasping and Manipulation . . . . . 136
Mingu Kwon, Dandan Zhou, Shuo Liu, and Hao Zhang

Improving Grasp Performance Using In-Hand Proximity and Contact Sensing . . . . . 146
Radhen Patel, Rebeca Curtis, Branden Romero, and Nikolaus Correll

Robotic Grasping and Manipulation Competition @IROS2016: Team Tsinghua . . . . . 161
Fuchun Sun, Huaping Liu, Bin Fang, Di Guo, Tao Kong, Chao Yang, Yao Huang, Mingxuan Jing, and Junyi Che

Complete Robotic Systems for the IROS Grasping and Manipulation Challenge . . . . . 172
Eadom Dessalene and Daniel Lofaro

Robotic Grasping and Manipulation Competition: Competitor Feedback and Lessons Learned . . . . . 180
Joe Falco, Yu Sun, and Maximo Roa

Robotic Grasping and Manipulation Competition: Future Tasks to Support the Development of Assembly Robotics . . . . . 190
Karl Van Wyk, Joe Falco, and Elena Messina

Author Index . . . . . 201

Robotic Grasping and Manipulation Competition: Task Pool

Yu Sun 1, Joe Falco 2(B), Nadia Cheng 3, Hyouk Ryeol Choi 4, Erik D. Engeberg 5, Nancy Pollard 6, Maximo Roa 7, and Zeyang Xia 8

1 University of South Florida, Tampa, USA
2 National Institute of Standards and Technology (NIST), Gaithersburg, USA
[email protected]
3 RightHand Robotics, Somerville, USA
4 Sungkyunkwan University, Seoul, South Korea
5 Florida Atlantic University, Boca Raton, USA
6 Carnegie Mellon University, Pittsburgh, USA
7 German Aerospace Center (DLR), Cologne, Germany
8 Shenzhen Institutes of Advanced Technology, Shenzhen, China

Abstract. A Robot Grasping and Manipulation Competition was held during IROS 2016. The competition provided a common set of robot tasks for researchers focused on the application of robot systems to compare the performance of hand designs as well as autonomous grasping and manipulation solutions. Tracks one and two of the competition were supported by tasks chosen from a predefined pool of tasks. This task pool was assembled by the authors based on the challenges faced in developing robot systems that have the flexibility to grasp and manipulate a wide range of object geometries. This paper provides an overview of the task pool as well as the selection of tasks to support the various stages of the competition.

Keywords: Robot · Grasping · Manipulation · Competition

1 Introduction

[This is a U.S. government work and its text is not subject to copyright protection in the United States; however, its text may be subject to foreign copyright protection. Y. Sun and J. Falco (Eds.): RGMC 2016, CCIS 816, pp. 1–18, 2018. https://doi.org/10.1007/978-3-319-94568-2_1]

The first Robot Grasping and Manipulation Competition, held during the 2016 International Conference on Intelligent Robots and Systems (IROS) in Daejeon, South Korea, was sponsored by the Institute of Electrical and Electronics Engineers (IEEE) Robotic and Automation Society (RAS) Technical Committee (TC) on Robotic Hands Grasping and Manipulation (RHGM) [1]. The goal of the competition was to bring together researchers to benchmark the performance of autonomous grasping and manipulation solutions across a variety of application spaces including service, health care, and manufacturing robotics. This competition was the first of a planned series in the area of grasping and manipulation. It was designed to evaluate the performance of robot solutions


that include grasp planning, end-effector design, perception, and manipulation control. The competition comprised three tracks: (1) hand-in-hand grasping, (2) fully autonomous grasping and manipulation, and (3) simulation. This chapter presents the task pool used to support tracks one and two, which incorporated 18 predefined manipulation tasks and 20 objects that were readily obtainable through on-line retailers. In order to help teams prepare their systems for the competitions, the rules, along with 10 randomly chosen tasks and associated objects, were provided prior to the competition. A competition task set was then selected from the pool and provided to teams just prior to the competition. This task pool was developed by the authors based on the challenges faced in developing robot systems that have the flexibility to grasp and manipulate a wide range of object geometries across diverse application spaces. The task sets were selected to support the various stages of the competition based on tasks used in previous manipulation data collections [2], with emphasis placed on the physically interactive requirements in those manipulations [3,4]. The tasks were designed to use items from the Yale-Carnegie Mellon-Berkeley (YCB) Object and Model Set [5] and the 2015 Amazon Picking Challenge (APC2015) [6] object datasets¹. This paper provides an overview of the task pool as well as the selection of tasks to support the various stages of the competition.
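The pool described above has a simple structure (a task number, a name, a difficulty level, and a point value tied to the level), and subsets of it were drawn for release before the competition. The sketch below models that structure and the random draw of a practice subset; the class name, fields, and the four sample tasks are illustrative only, not taken from any competition software.

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    number: int   # task index in the pool (1-18)
    name: str
    level: int    # difficulty level, 1-4

    @property
    def points(self) -> int:
        # Points are commensurate with difficulty: level n is worth 10*n points.
        return 10 * self.level

# Tiny illustrative pool (the real pool has 18 tasks across 4 levels).
pool = [
    Task(1, "Pick up peas with a spoon", 1),
    Task(8, "Hammer a nail", 2),
    Task(12, "Drive a bolt with a nut driver", 3),
    Task(18, "Use scissors to cut paper", 4),
]

# Randomly draw a practice subset, as was done before the competition.
practice = random.sample(pool, k=2)
print(sorted(t.number for t in practice))
```

A frozen dataclass keeps each task immutable, so a drawn subset cannot accidentally modify the pool.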

2

Task Designs

Eighteen competition tasks were designed and separated into four levels of difficulty. For the competition, the possible points awarded were commensurate with the level of difficulty of the tasks. A task in level 1 is worth 10 points, a task in level 2 is worth 20 points, a task in level 3 is worth 30 points, and a task in level 4 is worth 40 points.

2.1 Pick Up Peas with a Spoon (Level 1–10 Points)

Items
1. Twenty green peas
2. Spoon, bowl, plate and cup (Fig. 1)

Setup
1. A 15.2 cm (6 in.) bowl is half-full with green peas on the table (Fig. 2)
2. A 25.4 cm (10 in.) plate is located 25.4 cm (10 in.) to the right of the bowl
3. A spoon is placed in a cup with the spoon's handle out. The location of the cup can be defined by the contestants

[Footnote 1: Certain commercial entities and items are identified in this paper to foster understanding. Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the materials or equipment identified are necessarily the best available for the purpose.]


Steps
1. The robot system grasps the spoon from the cup
2. The robot system uses the spoon to pick up peas from the bowl
3. The robot system uses the spoon to transfer the peas to the plate
4. Repeat steps 1 to 3 as needed

Rules and Scoring
1. Successfully transferring each pea is worth 2 points until reaching 10 points
2. The competition is terminated if the robot knocks over any object
3. Dropping peas outside of the plate is allowed
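The pea-transfer rule is a capped linear score: 2 points per pea, up to the task's 10-point maximum. A minimal sketch (the function name is illustrative, not from the competition's scoring software):

```python
def pea_task_points(peas_transferred: int) -> int:
    """Task 2.1 score: 2 points per pea transferred to the plate,
    capped at the task's 10-point maximum."""
    return min(2 * peas_transferred, 10)

print(pea_task_points(3))   # 6
print(pea_task_points(20))  # 10 (cap reached after the fifth pea)
```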

Fig. 1. Kit used for competition includes cups, bowls, plates and eating utensils.

2.2 Grasp a Towel on the Table and Hang It onto a Support (Level 1–10 Points)

Items
1. Towel
2. Towel hanger

Setup
1. A towel is placed flat on the table
2. A towel hanger is placed on the table 20 cm away from the towel


Fig. 2. A half-full bowl with peas and an empty plate and a cup are placed on a table.

Steps
1. The robot system picks up the towel from the table
2. The robot system places the towel on the hanger

Rules and Scoring
1. The competition is terminated if the towel hanger falls
2. Ten points are received if the towel is placed on the hanger
3. Three tries are allowed
4. If the towel falls off the hanger the try is counted as a failure

2.3 Use a Spoon to Stir Water in a Cup (Level 1–10 Points)

Items
1. Spoon
2. Two cups

Setup
1. A cup half-full of water is placed on a table
2. A spoon is placed in a second, empty cup with the spoon's handle out
3. The location of the cup holding the spoon can be defined by the contestants

Steps
1. The robot system picks up the spoon from the empty cup
2. The robot system submerges the bowl of the spoon in the other cup, which is half-full of water
3. The robot system stirs the water using the bowl of the spoon for two cycles


Rules and Scoring
1. Five points for each of the two stir cycles completed
2. The competition is terminated if the cup is tipped over
3. Three point reduction if any water is spilled

2.4 Shake Out Salt from a Salt Shaker to a Defined Location (Level 1–10 Points)

Items
1. Salt shaker
2. Salt
3. Measuring cup (1 L or 4 cup capacity)

Setup
1. Salt shaker is filled to approximately 90% capacity with salt
2. Salt shaker is placed upright on the table surface
3. The contestant can decide the location of the plate

Steps
1. The robot system performs a grasp of the salt shaker
2. The robot system shakes the salt shaker above the plate, with the salt shaker remaining completely intact during the task

Rules and Scoring
1. Dispensing 14.8 mL (1/16 cup) of salt on the plate earns full points
2. The salt on the plate will be poured into the measuring cup to measure
3. Salt is allowed to be dispensed outside of the plate but will not be included in the measurements
4. The salt shaker must remain intact during the entire test (teams are not allowed to take it apart)
5. Dropping the shaker is allowed. However, if the robot cannot pick it back up, the competition is terminated

2.5 Use a Brush to Brush-Off Sand (Level 1–10 Points)

Items
1. A flat piece of cardboard measuring approximately 50.8 cm × 50.8 cm (20 in. × 20 in.)
2. Plate as in Task 1
3. Cleaning brush
4. Sand
5. Measuring cup (1 L or 4 cup capacity)


Setup
1. Cut a hole in the cardboard center slightly smaller than the plate diameter and place the cardboard centered on the plate rim
2. Support the cardboard on the table top so the plate clears the table top
3. Spread 2 cups of sand evenly on the cardboard surface, keeping the sand approximately 5.1 cm (2 in.) from the plate

Steps
1. The robot system grasps the brush
2. The robot system uses the brush to sweep sand from the cardboard surface into the plate

Rules and Scoring
1. Scoring is based on the amount of sand brushed into the plate, as measured in the measuring cup
2. Two points are awarded for each 30 mL (1/8 cup), with a maximum score of 10 points

2.6 Pick Up Items from a Shopping Basket and Put Them into an Open Plastic Shopping Bag (Level 1–10 Points)

Items
1. Shopping basket
2. YCB food item: Coffee can
3. YCB food item: Box of sugar
4. YCB food item: Potted meat can
5. YCB food item: Mustard container
6. YCB food item: Plastic fruits
7. Plastic shopping bags
8. Wood suit hanger

Setup
1. Fix the shopping basket, and put all five items randomly into the shopping basket (all items with random poses, as natural as shopping by a human)
2. Mount the plastic shopping bag using the wood suit hanger, to keep the bag open
3. Find and pick up the coffee can and put it into the shopping bag
4. Find and pick up the box of sugar and put it into the shopping bag
5. Find and pick up the potted meat can and put it into the shopping bag
6. Find and pick up the mustard container and put it into the shopping bag
7. Find and pick up the fruits and put them into the shopping bag


Steps
1. The robot system localizes an object
2. The robot system grasps an object
3. The robot system transfers the object to a shopping bag
4. Repeat for each of the five items

Rules and Scoring
1. Successful transfer of each of the five items earns two points, for a total of ten points

2.7 Grasp a Plug and Insert It into a Socket (Level 2–20 Points)

Items
1. Power strip (with 2 alternating current (AC) outlets and 4 universal serial bus (USB) charging ports)
2. Industrial VELCRO adhesive-backed tape
3. One AC night light
4. One USB night light

Setup
1. Mount the power strip to a surface using the VELCRO tape and plug it into an AC power source
2. Plug one AC night light into one outlet of the power strip
3. Plug one USB light into one USB port of the power strip

Steps
1. The robot system extracts a light completely from the outlet. (If the plug cannot be removed, consider loosening the plug for extraction in order to proceed to the next step)
2. The robot system plugs a light into the socket to the minimum depth for electrical contact (the light turns on)
3. Repeat for each of the two lights

Rules and Scoring
1. Five points awarded per light extracted
2. Five points awarded per light inserted (light on)
3. No points awarded if the plug is manually loosened for extraction
4. No points awarded if the light doesn't turn on for insertion

2.8 Hammer a Nail (Level 2–20 Points)

Items
1. Smooth foam, approx. 5 cm × 10 cm × 30 cm (2 in. × 4 in. × 12 in.)
2. Five 10D 3 in. common nails (or metric equivalent)
3. YCB hammer

Setup
1. Set the hammer at a predefined table location with the handle overhanging the table
2. Fix the smooth foam to the table top at a defined location with the 5 cm × 30 cm surface on the table
3. Mark nail locations on the foam as shown in Fig. 3
4. Mark the nails at 2.5 cm (1 in.) and 5.1 cm (2 in.) depths
5. Set a nail in the next location per test, pushed to the 2.5 cm (1 in.) depth

Steps
1. The robot system grasps the hammer at the predefined location
2. The robot system positions the hammer at the nail location and drives the nail
3. Repeat for four nails

Rules and Scoring
1. Drive the nail with the hammer to the 5.1 cm (2 in.) depth for two points per nail
2. Or drive the nail to full depth (head flush with foam) for five points per nail

2.9 Grasp and Cleanly Tear Away Piece of Toilet Paper (Level 2–20 Points)

Items
1. Toilet paper roll holder
2. Bath tissue

Setup
1. Mount the roll holder on the edge of the table
2. Place a full roll of toilet paper on the roll holder with the roll parallel to the ground
3. Leave a small amount of paper hanging down

Steps
1. The robot holds on to the loose toilet paper
2. The robot tears off a sheet of toilet paper along the perforated squares


Fig. 3. Hammer a nail setup

Rules and Scoring
1. 20 points are earned for removing a single square of toilet paper
2. The robot is allowed to try five times

2.10 Transfer Straw into a To-Go Cup with Lid (Level 2–20 Points)

Items
1. Straws
2. Cup with lid
3. A cup to hold straws

Setup
1. Several straws are placed into one upright cup
2. Another cup has a lid on it, placed upright and 30 cm to the side of the cup holding the straws
3. The cup with the lid is full of water

Steps
1. The robot system picks up one straw from the cup of straws
2. The robot system places the straw into the straw hole while keeping the cup upright


Rules and Scoring
1. The location of the cup can be defined by the contestant
2. 20 points are earned for successful insertion with the cup remaining upright
3. The competition is terminated if the cup with the lid is tipped
4. Dropping straws is allowed

2.11 Pick Up and Place Using Tongs (Level 2–20 Points)

Items
1. Forceps (tongs)
2. Five objects

Setup
1. Tongs placed on a planar surface
2. Define a goal placement zone with a 5 cm radius circle
3. Each of the five objects is presented to the contestant for self placement 20 cm from the defined goal zone

Steps
1. The robotic system grasps the tongs
2. The robotic system uses the tongs to pick up objects and relocate them into the goal zone
3. Repeat for each object

Rules and Scoring
1. One point for grasping and lifting the tongs
2. One point for each object grasped and lifted above the plane
3. Two points for successful transfer of each object into the goal zone
4. Only the tongs can contact each object
5. If an object is dropped, the contestant must move to the next object (no object pushing to achieve the goal zone)

2.12 Putting on or Removing Bolts from Nuts with a Nut Driver (Level 3–30 Points)

Items
1. 3.8 cm × 8.9 cm × 15.2 cm (1.5 in. × 3.5 in. × 6 in.) length wood stud
2. Threaded inserts and installation kit
3. Hex bolt
4. Nut driver


Setup
1. Drill holes per Fig. 4 and install the threaded insert (this process could be a step in a future competition)
2. Fix the wood stud at a predefined table location
3. Locate the nut driver in the locating hole on the wood stud per Fig. 4
4. Place a mark on the bolt threads 1/4 in. (or 6 mm) from the bottom surface of the head
5. Start the bolt thread and apply two full turns

Steps
1. The robot system grasps the nut driver from the predefined location in the wood stud
2. The robot system uses the nut driver to drive the bolt until fully seated

Rules and Scoring
1. Five points to engage the nut driver with the bolt
2. Ten points to drive the bolt to the mark, or 25 points to drive the bolt to full depth

2.13 Clip an Artificial Nail with Nail Clippers (Level 3–30 Points)

Items
1. White, opaque acrylic
2. Vise
3. Permanent marker
4. Nail file

Setup
1. A piece of acrylic is cut (laser-cutting recommended) and clamped with an approximately 7.5 cm × 7.5 cm (or 3 in. × 3 in.) piece exposed (Fig. 5)
2. A 1 cm long line is drawn (using the permanent marker) along the center of one of the edges of the acrylic piece
3. The acrylic piece is clamped using the vise such that the drawn line is exposed on the top

Steps
1. The robot system performs a grasp of the nail file
2. The robot system uses the nail file to remove the edge so that the permanent marker line is also removed

Rules and Scoring
1. 10 points for partial removal of the 1 cm line
2. 30 points for complete removal of the 1 cm line


Fig. 4. Drive bolt setup

2.14 Use a Saw to Cut Cardboard Along a Line (Level 3–30 Points)

Items
1. Hand saw
2. Clear plastic sign holders
3. Cardboard inserts
4. Binder clips


Fig. 5. File nail setup

Setup
1. Align the two sign holders with space between them to at least clear the saw kerf, and fix them to the table top
2. Mark a 30.5 cm (12 in.) cardboard filler insert with a cut line at center, and mark scoring at 1/3 intervals, 10.2 cm (4 in.), along the cut line (Fig. 6)
3. Insert the cardboard filler insert into the sign holder with the cut line centered and clamp using binder clips
4. Use the left side of the sign holder to stage the saw for grasping

Steps
1. The robot system grasps the saw by the handle from the staged location
2. The robot system uses the saw to cut the cardboard along the cut line

Rules and Scoring
1. Ten points for cutting through each 1/3 section of cardboard along the cut line, for a total of 30 possible points

2.15 Fully Extend Syringe and Then Fully Press Syringe (Level 4–40 Points)

Items 1. 30 cm3 syringe


Fig. 6. Cut Cardboard setup

Setup
1. Set the syringe in a fully compressed state
2. Place the syringe on a flat surface

Steps
1. The robot system grasps the syringe
2. The robot system extends the syringe to at least 30 cm3 (without removing the plunger)
3. The robot system returns the syringe to the fully compressed state

Rules and Scoring
1. 15 points to fully extend
2. 15 points to fully compress
3. The syringe base can be constrained by a means independent of the robot system

2.16 Open a Bottle with a Locking Safety Cap Using Push down and Turn Caps (Level 4–40 Points)

Items
1. 3.8 cm × 8.9 cm × 15.2 cm (1.5 in. × 3.5 in. × 6 in.) length wood stud
2. Pharmacy vials, 40 dram (snap caps with safety push and turn cap)


Setup
1. Attach the bottle securely to the wood stud so that the attachment can withstand substantial twisting forces
2. Place the cap on the bottle and twist to lock
3. Fix the wood stud at a predetermined location on the table

Steps
1. The robot system grasps the cap
2. The robot system applies a push and turn motion to the cap of the bottle to unlock it
3. The robot system removes the cap from the bottle

Rules and Scoring
1. Unlock the cap for 20 points
2. Separate the cap from the bottle for 20 points

2.17 Peel a Potato (Level 4–40 Points)

Items
1. One large potato
2. One potato peeling device
3. 3.8 cm × 8.9 cm × 15.2 cm (1.5 in. × 3.5 in. × 6 in.) wood board
4. Two 10D 7.6 cm (3 in.) common nails

Setup
1. Mark two nail locations centered on the board and 7.5 cm (3 in.) apart
2. Hammer the two nails fully into the board so that the pointed ends of the nails protrude through the board
3. Clamp the board with the nail points protruding upward
4. Push the potato onto the nails until the potato sits on the board

Steps
1. The robot system grasps the potato peeling device
2. The robot system uses the peeling device to remove the skin from the potato

Rules and Scoring
1. 10 points for each potato skin shaving
2. Up to 40 points total
3. Each shaving must be at least 2.5 cm (1 in.) in length to achieve points


2.18 Use Scissors to Cut a Piece of Paper (Level 4–40 Points)

Items
1. YCB tool item: scissors
2. A4 paper

Setup
1. Prepare four pieces of A4 paper and draw one of the lines in Fig. 7 on each paper (straight line, polyline, curve, and shape line). The shapes shown in Fig. 7 should be drawn to fully utilize the A4 paper size
2. Each line divides the paper surface into two parts

Fig. 7. Setup for cut pieces of paper

Steps
1. The robot system grasps the scissors
2. The robot system uses the scissors to cut a piece of paper into two along the straight line
3. The robot system uses the scissors to cut a piece of paper into two along the polyline
4. The robot system uses the scissors to cut a piece of paper into two along the curve
5. The robot system uses the scissors to cut a shape from a piece of paper

Rules and Scoring
1. Scoring is based on the accuracy of the cutting trace, where full points are awarded for all cuts that stay within 5 mm of the cutting trace
2. Two points each for fully cutting the straight line, polyline and curve
3. Four points for shape line cutting
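The 5 mm accuracy rule amounts to a simple geometric check: every point of the executed cut must lie within 5 mm of the drawn trace. A rough sketch of such a check, assuming (hypothetically) that both the cut and the trace are available as densely sampled 2-D point lists in millimetres:

```python
import math

def max_deviation(cut_points, trace_points):
    """Largest distance (mm) from any cut point to its nearest trace sample.

    Both inputs are lists of (x, y) tuples in millimetres. With a densely
    sampled trace, nearest-sample distance approximates the distance to the
    drawn line; a sparse trace would overestimate the deviation.
    """
    return max(
        min(math.hypot(cx - tx, cy - ty) for tx, ty in trace_points)
        for cx, cy in cut_points
    )

def cut_within_tolerance(cut_points, trace_points, tol_mm=5.0):
    return max_deviation(cut_points, trace_points) <= tol_mm

# A straight trace along the x-axis, sampled every millimetre.
trace = [(x, 0.0) for x in range(0, 101)]
good_cut = [(10.0, 3.0), (50.0, -4.5), (90.0, 2.0)]  # worst deviation 4.5 mm
bad_cut = [(10.0, 3.0), (50.0, -6.0)]                # worst deviation 6.0 mm
print(cut_within_tolerance(good_cut, trace))  # True
print(cut_within_tolerance(bad_cut, trace))   # False
```

The brute-force nearest-point scan is quadratic in the number of samples; for long traces a spatial index (e.g. a k-d tree) would be the usual replacement.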


3


Conclusions

Nine of the 18 tasks were released three months before the competition. Two weeks before the competition, 10 tasks were randomly selected from the 18 tasks and released as the competition tasks. These included tasks 1, 2, 3, 4, 7, 8, 10, 12, 15, and 18. Four of the tasks were chosen from level 1, three from level 2, two from level 3, and one from level 4, for a total of 200 possible points. The chosen tasks are shown in Fig. 8.

Fig. 8. Listing of tasks chosen for the competition.
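The 200-point total follows directly from the per-level counts stated above and the per-level point values from Sect. 2; a quick arithmetic check:

```python
# Points per difficulty level, as defined in Sect. 2.
level_points = {1: 10, 2: 20, 3: 30, 4: 40}

# Number of chosen competition tasks per level: four from level 1,
# three from level 2, two from level 3, and one from level 4.
chosen_per_level = {1: 4, 2: 3, 3: 2, 4: 1}

total = sum(level_points[lvl] * n for lvl, n in chosen_per_level.items())
print(total)  # 200
```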

The tasks proved to be quite challenging for the teams that competed. Allowing teams access to a subset of the tasks during the registration process, during the months of preparation prior to the competition, and during practice days at IROS led to a successful competition day, where teams were prepared yet still challenged by the final tasks presented. This pool of tasks will be used to support the next competition, and the pool is expected to grow as new tasks are developed by the organizing committee.

References

1. IEEE RAS TC Robotic Hands Grasping and Manipulation. http://www.rhgm.org. Accessed 30 Jun 2017
2. Huang, Y., Bianchi, M., Liarokapis, M., Sun, Y.: Recent data sets on object manipulation: a survey. Big Data 4(4), 197–216 (2016)
3. Lin, Y., Sun, Y.: Grasp planning to maximize task coverage. Int. J. Robot. Res. 34(9), 1195–1210 (2015)
4. Lin, Y., Sun, Y.: Task-oriented grasp planning based on disturbance distribution. In: Inaba, M., Corke, P. (eds.) Robotics Research. STAR, vol. 114, pp. 577–592. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-28872-7_33
5. Yale-CMU-Berkeley Object Dataset. http://rll.eecs.berkeley.edu/ycb/. Accessed 30 Jun 2017
6. Amazon Picking Challenge. http://rll.berkeley.edu/amazon_picking_challenge/. Accessed 30 Jun 2017

Advanced Grasping with the Pisa/IIT SoftHand

Manuel Bonilla 1, Cosimo Della Santina 2, Alessio Rocchi 1, Emanuele Luberto 2, Gaspare Santaera 1, Edoardo Farnioli 1, Cristina Piazza 2, Fabio Bonomo 3, Alberto Brando 3, Alessandro Raugi 3, Manuel G. Catalano 1, Matteo Bianchi 2,4(B), Manolo Garabini 2, Giorgio Grioli 1, and Antonio Bicchi 1,2

1 Soft Robotics for Human Cooperation and Rehabilitation, Istituto Italiano di Tecnologia, Genoa, Italy
2 Centro E. Piaggio, Università di Pisa, Pisa, Italy
[email protected]
3 QB Robotics, Pisa, Italy
4 Department of Information Engineering, Università di Pisa, Pisa, Italy

Abstract. This chapter presents the hardware, software, and overall strategy used by the team UNIPI-IIT-QB to participate in the Robotic Grasping and Manipulation Competition. It relies on the Pisa/IIT SoftHand, an underactuated soft robotic hand that can adapt to the shape of the grasped object and is compliant with the environment. The hand was used for the hand-in-hand and simulation tracks, where the team reached first and third place, respectively.

Keywords: Grasping · Grasp simulation · Grasp planning

1 Introduction

Despite continuous advancements in the field, the design and realization of dexterous robotic hands remains a major challenge in robotics. Over the years, several hand designs have been proposed that try to match the level of dexterity of the human hand. These hands typically resort to very complex and articulated designs that cleverly integrate many actuators, sensors, and joints, trying to approach the richness and complexity of the sensory and motor functions of the human hand; nevertheless, this remains a distant goal on the horizon (e.g. [19]). An alternative and promising trend in robot hand design is simplification: sidestepping some of the limitations of an overly complex mechanical system by removing some of its components can introduce more advantages than drawbacks, if done with the right criteria. One of the most interesting simplification criteria is that of embedding part of the control intelligence in the physical structure of the system itself; the main tool to achieve this goal is under-actuation [4]. Thanks to under-actuation, engineers can reduce the number of degrees of actuation (DOAs) of robotic hands and thus simplify their design.

[© Springer International Publishing AG, part of Springer Nature 2018. Y. Sun and J. Falco (Eds.): RGMC 2016, CCIS 816, pp. 19–38, 2018. https://doi.org/10.1007/978-3-319-94568-2_2]


M. Bonilla et al.

While in a traditional robot hand a reduction of DOAs would imply fewer DOFs and, in turn, a considerable reduction of shape adaptation capabilities, the theory of under-actuation provides principles to design hands that retain a large number of DOFs and, as a consequence, adaptability. To achieve this, under-actuation resorts to differential transmissions, realized in various forms, e.g. gears [26] or tendons and pulleys [23]. Another idea, coming from the field of motor control [2], has recently generated great interest in the field of robotic hands: postural synergies. Synergies can be defined [27] as a set of variables that show correlated changes in time. Postural synergies can be seen as a basis of the subspace of effective human hand configurations among all those made possible by the kinematics of the body. Moreover, synergies can be ordered in terms of the statistical variance of the total amount of motion that they explain. This makes them a successful tool for artificial hand analysis, control, and design simplification [3]. Notable examples are [6,10]. The basic concept of postural synergies later evolved on both the human motor control and robotic sides. In particular, the soft synergies theory emerged [3]: it assumes that synergies exist in an ideal domain and define a virtual reference movement toward which the physical system is elastically attracted. The manipulated environment, in turn, opposes the ideal hand motion through its own compliance. The two actions concur in generating an equilibrium (e.g. [37,38]). Based upon the theory of soft synergies, adaptive synergies [20] integrated synergies with under-actuation, yielding a simple implementation of the former with a series of considerable advantages, such as control and design simplification. The Pisa/IIT SoftHand [9] is a recent outcome of this research (see Fig. 1): it implements one soft synergy, actuated with a transmission system that uses one tendon, pulleys, and one motor.
The SoftHand demonstrated excellent grasping skills in many different situations, combined with robustness and a simple control interface (videos available at https://goo.gl/8zYDWs).
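The extraction of postural synergies by PCA, as described above, can be sketched as follows. The posture data here is synthetic (an assumption for illustration, not a real human grasp dataset): 100 postures of a 19-DoF hand generated mostly along one dominant closing direction plus noise.

```python
import numpy as np

# Synthetic hand-posture dataset (assumed, for illustration only):
# postures concentrated along one dominant closing direction.
rng = np.random.default_rng(0)
n_dofs, n_grasps = 19, 100
closing_direction = rng.standard_normal(n_dofs)
closing_direction /= np.linalg.norm(closing_direction)
amplitudes = rng.uniform(0.0, 1.5, n_grasps)
postures = (np.outer(amplitudes, closing_direction)
            + 0.03 * rng.standard_normal((n_grasps, n_dofs)))

# PCA: center the data and take the SVD; the right singular vectors are the
# synergies, ordered by the statistical variance they explain.
centered = postures - postures.mean(axis=0)
_, singular_values, synergies = np.linalg.svd(centered, full_matrices=False)
explained = singular_values**2 / np.sum(singular_values**2)

print(f"variance explained by first synergy: {explained[0]:.2f}")
```

On data of this kind most of the variance concentrates in the first synergy, which is the property exploited by synergy-based hand designs.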

2 Organization

The rest of this chapter is organized as follows. Section 3 briefly introduces the Pisa/IIT SoftHand. In Sect. 4 we state the problem arising from the use of soft hands for grasping. Sections 5 and 6 then present the simulation tool and the grasping strategy used during the challenge. Section 7 presents the strategy of the UNIPI-IIT-QB team in both the hand-in-hand and simulation tracks of the grasping challenge. Finally, Sect. 8 presents our results and conclusions.

3 The Pisa/IIT SoftHand

The Pisa/IIT SoftHand was designed according to a few specifications. On the functional side, the requirement is to grasp as wide a variety of objects and tools

Advanced Grasping with the Pisa/IIT SoftHand


as possible, among those commonly used by humans in everyday tasks. The hand should primarily be able to effect a whole-hand grasp of tools, properly and firmly enough to operate them under arm and wrist control, but it should also be able to achieve tip grasps. No in-hand dexterous manipulation is required for this prototype; for further evolutions in this sense, please refer to [14]. The main non-functional requirements were resilience against force overexertion and impacts, safety in interactions with humans, and cost effectiveness. The hand was also designed to be lightweight and self-contained, to avoid encumbering the forearm and wrist with motors, batteries and cabling. To meet the first functional requirement, the hand was designed anthropomorphically, with 19 DOFs arranged in four fingers and an opposable thumb, Fig. 1(a). To maximize simplicity and usability, however, the hand uses only one actuator. The hand assembly design is shown in Fig. 1(b). Each finger has four phalanges, while the thumb has three.

Fig. 1. The Pisa/IIT SoftHand (a) is an anthropomorphic soft hand, with 19 DoFs actuated through a single tendon (b).

In the rest position, with fingers stretched out at a relative angle of about 15° in the dorsal plane, the hand spans approximately 230 mm from thumb to little fingertip, is 235 mm long from the wrist base to the middle fingertip, and has a maximum thickness of 40 mm at the palm. The requirement on power grasp implies that the hand must be able to generate a high enough grasping force and to distribute it evenly through all contacts, be they at the fingertips, the inner phalanges, or the palm. We adopted a non-conventional "soft robotics" design for the mechanics of the hand, composed of rolling joints and elastic ligaments, at a very low cost. Rolling contact articulations replace standard revolute joints. Our design is inspired by a class of joints known as COmpliant Rolling-contact Elements (CORE) [8]. Among these, the Hillberry rolling joint [22] was a source of inspiration for our design. A Hillberry joint consists of a pair of cylinders in rolling contact with each other, held together by metallic bands which wrap around the cylinders on opposite sides, as schematically shown in Fig. 2(b).


(a) CORE (b) Hillberry (c) Pisa/IIT SoftHand

Fig. 2. Rolling contact articulations can be employed instead of standard revolute joints to obtain robust human-like articulations.

Figure 2(c) shows how we used CORE joints in the design of the Pisa/IIT SoftHand. In particular, we adopted CORE joints for all the interphalangeal flexion/extension articulations, while conventional revolute joints were used for the metacarpophalangeal abduction/adduction articulations. With respect to classic Hillberry joints, the metallic bands were replaced with elastic ligaments fixed across the joint with an offset in the dorsal direction. Suitable pretensioning of the ligaments, together with a carefully designed profile of the two cylinders, introduces a desirable passive stability behavior, with an attractive equilibrium at the rest configuration (when the fingers are stretched). These features are particularly important for the system to behave softly and safely in contact, and to recover from force overexertion due, e.g., to impacts or jamming of the hand, making the hand automatically return to its correct assembly configuration. The joint can withstand severe disarticulations and violent impacts (Fig. 3). Actuation of the hand is effected through a single Dyneema tendon routed through all joints using passive anti-derailment pulleys. The tendon action flexes and adducts the fingers and thumb, counteracting the elastic force of the ligaments and implementing adaptive underactuation without the need for differential gears. Following the design approach proposed in [20], the motor actuates the adaptive synergy as derived from a human postural database [36]. The mechanical implementation of the first soft synergy through shape-adaptive underactuation was obtained via numerical optimization of the pulley radii and joint stiffnesses (Fig. 4).
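The design step just mentioned, choosing pulley radii and joint stiffnesses so that closing the tendon reproduces a target synergy, can be illustrated with a toy calculation. All numerical values are assumed for illustration; the real optimization in [20] is considerably more involved. In free closure (no contact), the equilibrium posture of a tendon-driven hand with one actuator is proportional to E⁻¹Rᵀ, so each joint moves as rᵢ/kᵢ, and picking rᵢ ∝ kᵢeᵢ realizes a desired synergy direction e.

```python
import numpy as np

# Toy 4-joint "hand" (all values assumed, not the real SoftHand parameters).
k = np.array([1.0, 1.2, 0.8, 1.5])      # joint stiffnesses [Nm/rad]
e = np.array([0.6, 0.5, 0.4, 0.48])     # target synergy direction
e = e / np.linalg.norm(e)

r = k * e                               # pulley radii, up to a common scale
R = r[np.newaxis, :]                    # 1 x n transmission matrix
E = np.diag(k)

# Free-closure equilibrium posture for tendon displacement s:
# q = E^{-1} R^T (R E^{-1} R^T)^{-1} s   (no contact forces)
s = 0.7
Einv = np.linalg.inv(E)
q = (Einv @ R.T @ np.linalg.inv(R @ Einv @ R.T) * s).ravel()

# The resulting posture is parallel to the target synergy direction.
cos_angle = q @ e / np.linalg.norm(q)
print(f"alignment with target synergy: {cos_angle:.4f}")
```

The point of the sketch is the proportionality rᵢ ∝ kᵢeᵢ: stiffer joints need larger pulleys to follow the same synergy.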

4 Problem Definition

Being actuated through just one motor, the Pisa/IIT SoftHand's free closure movement is limited to just one direction. However, its overall compliance enables the natural production of complex behaviors through interactions with the environment and the object to be grasped. This makes the Pisa/IIT SoftHand very intuitive for humans to use, because they use it in the same way they would use their own hands. However, making a robot use soft hands in the same way


that humans do is challenging. The main problem comes from the difficulty of giving robots the same level of intelligence and experience that humans have. It is the authors' opinion that the reduced dimensionality of the actuation space, combined with the human inspiration, makes this underactuated soft hand a valuable candidate not only for use on autonomous robotic manipulators (e.g. [1]) but also for applications where a human user is an active part of the planning and control loop, spanning from tele-operation to prosthetics, human grasp studies, and rehabilitation robotics. Preliminary results in this sense with the Pisa/IIT SoftHand are provided in [7,18]. While direct control of the Pisa/IIT SoftHand is an easy problem, automatic motion planning is not, since hand-environment interactions are difficult to model. Classical motion planning algorithms are designed to place specific parts of the hand (mainly the fingertips) on specific parts of the object to grasp it [29]. This is not possible with soft hands, since each joint of the hand cannot be moved independently, which introduces a paradigm shift in motion planning for robotic hands. Recently, in [5], we introduced what we call "daring grasping", which consists in exploiting the environment to help the grasping process. In that paper we discussed the problem of grasping with soft hands and proposed the idea of taking advantage of the interactions of the robotic hand with the environment to shape the hand while it is grasping. In order to plan motions for soft hands we need to model environment interactions, which is not an easy task. In this chapter we use the Pisa/IIT SoftHand together with the same idea as in [5] to grasp objects, but with a more efficient simulator and grasping algorithm.

Fig. 3. The Pisa/IIT SoftHand joints can withstand severe force overexertion in all directions, automatically returning to the correct assembly configuration


Fig. 4. Some experimental grasps performed with the Pisa/IIT hand, with the object placed in the hand by a human operator.

5 A Generic Simulator for Compliant Hands

The Klamp't generic compliant underactuated hand emulator (CUHE) was used to simulate the Pisa/IIT SoftHand [9]. The emulator makes use of the adaptive synergy concept, which generalizes to a wide array of compliant underactuated hands (CUHs) beyond the SoftHand for which it was first implemented. Examples of compliant underactuated hands include the iHY [30], the Reflex Hand [31], the Pisa/IIT SoftHand, the Robotiq 3-Fingered Gripper [32], the RBO [12] and RBO2 [13] hands, and the Yale Hand [28]. Model-based predictions of motions and forces can offer powerful insights for grasp planning [24,25], and are an invaluable tool for testing high-level grasp controllers before their application to physical hardware. Simulation tools help predict the outcomes of controllers that make deliberate contact with the environment [15] as well as with the object. They may also be useful for computational exploration of gripper designs and control strategies, e.g. in reinforcement learning. Motivated by the increasing interest in CUHs and recent advances in simulation software [34], and exploiting the framework that extends the concept of soft synergies into that of adaptive synergies [20] to model robotic hands with a known transmission distribution matrix and linear joint stiffness, a generic compliant underactuated hand emulator was presented in [33]. It is built upon the open source Klamp't simulator (http://klampt.org) and provides a common tool for simulating a large set of CUHs. Klamp't can simulate interaction with rigid objects taking into account the full dynamics of the hand, object, and environment. It integrates models of compliant joints with recent contributions in robust mesh-mesh contact generation [21], including contact point clustering, and uses adaptive time stepping to reduce penetration artifacts when simulating contact compliance.
While Gazebo plugins have implemented physics engines with the recent boundary-layer expanded meshes (BLEM) [21] technique for stable contact


generation [34], and there is existing work on simulating the Pisa/IIT SoftHand in Gazebo [35], there are no generic plugins which allow simulating different CUHs flexibly, or quickly and seamlessly tuning a hand model. On the other hand, in [5] a dynamic simulation of the Pisa/IIT SoftHand was implemented in the multi-body dynamics simulator MSC Adams [11] and used to validate the proposed methods for generating pre-grasp palm configurations with respect to the object pose. That simulator demonstrated moderate fidelity to the experimental scenario, with some difficulties regarding hand-object penetrations and the estimation of contact normals, as well as a level of performance orders of magnitude slower than the method presented here. Recent developments of the Klamp't simulator provide a uniform interface for specifying emulators for custom sensors and actuators/transmission systems, through a flexible API and the Python language. These features enable a lean architecture for developing a generic compliant hand emulator: a new CUH can be simulated by writing a minimal amount of code, simulations are fast, and the simulation pipeline can be quickly customized for grasp planning and learning purposes.

5.1 An Emulator for the Pisa/IIT SoftHand

An underactuated hand is modeled as a set of rigid links articulated by $n$ joints, with $n_a$ degrees of actuation and $n_a < n$. The state of the fingers is denoted as $q \in \mathbb{R}^n$. A control $u \in \mathbb{R}^{n_a}$ gives rise to a net torque on the joints $\tau \equiv \tau(q, u) \in \mathbb{R}^n$, which summarizes the sum of internal torques including gearing, stiffness, damping, joint stops, and friction. Thus, the dynamics of the robot in contact are given by

$$B(q)\ddot{q} + C(q, \dot{q})\dot{q} + G(q) = \tau(q, u) + \tau_c \tag{1}$$

where $B(q)$ is the robot's mass matrix, $C(q, \dot{q})$ is the Coriolis force matrix, $G(q)$ is the generalized gravity vector, and $\tau_c = J^T f_c$ are the joint torques resulting from external contact forces. Given an initial state $(q_0, \dot{q}_0)$, a control trajectory $u(t)$, and a final time $T$, a simulator will generate a trajectory of the robot $q(t) : [0, T] \to \mathbb{R}^n$ as well as the motions of other objects $O_1, \ldots, O_m$, taking the dynamics (1) into account. A fundamental part of simulating grippers, and in particular CUHs, lies in implementing a model of underactuation, to emulate the underactuated transmission system regardless of the particular hand model and its kinematic and dynamic properties. The proposed emulator provides such functionality and, as described in Sect. 5.2, is loaded by Klamp't together with the hand model providing its dynamic and kinematic properties, in order to dynamically simulate a CUH.

Modeling Underactuation and Compliance. Underactuated and compliant hands are linked with transmission mechanisms, e.g., tendons or mechanical linkages, that distribute actuator effort across multiple joints. They also include spring mechanisms that restore the hand to a consistent rest state once gripping effort is removed; that is, they restore deterministic behavior to the hand in spite of underactuation (for a given value of the actuated variables, infinitely many configurations of the underactuated joints exist). Simulations must allow actuators to drive multiple links forward, but also allow forces on one link to affect the distribution of effort across the other links. We model these effects with a formulation based on the adaptive synergies framework [20]. First, we use a general constraint model that relates actuator displacements $s$ to configuration displacements $q$:

$$s = Rq \tag{2}$$

where the reference configuration is chosen so that the zero actuator and joint positions correspond. The $n_a \times n$ transmission matrix $R$ determines how joint movements pull on each actuator. We assume the drive mechanism generates torques on each joint in order to maintain these constraints. Denote the tensional force at the tendons generated by the actuators as $f \in \mathbb{R}^{n_a}$, and let the torques generated by the drive mechanism be denoted $\tau_d \in \mathbb{R}^n$. By the principle of virtual work, we have $\tau_d = R^T f$. Let us also define the joint torques produced by the spring mechanisms as $\tau_s = -Eq$, where $E$ is an $n \times n$ joint stiffness matrix (usually diagonal). Neglecting friction effects, the resultant vector of joint torques is

$$\tilde{\tau} = R^T f - Eq \tag{3}$$

where $\tilde{\tau} = \tau_c$ at equilibrium. We then solve for the $f$ and $q$ that satisfy constraints (2) and (3) by solving a system of linear equations, which has a closed-form solution in terms of $s$, $R$, $E$, and $\tau$ [20] for a constant transmission distribution matrix $R$ and joint stiffness matrix $E$:

$$f = AJ^T f_c + Bs \tag{4}$$

$$q = CJ^T f_c + Ds \tag{5}$$

with $A$, $B$, $C$, $D$ properly defined matrices, as in [20]. Given the solved $f$ from Eq. (4) and substituting the result into (3), we obtain the joint torques to apply at every joint given a position command $s$ on the actuators. Given (1) and imposing quasi-static equilibrium conditions, we then compute the gravity compensation terms and define the actuation torque as

$$\tau = \tilde{\tau} + G(q). \tag{6}$$
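As a numerical sketch of how Eqs. (2)-(3) determine the tendon force and posture, the equilibrium can also be solved directly as one linear system in (q, f), without forming the closed-form matrices A, B, C, D of [20]. All numerical values below are illustrative, not the real SoftHand parameters.

```python
import numpy as np

# Illustrative 4-joint, 1-actuator hand (values assumed for this sketch).
n, na = 4, 1
R = np.array([[0.5, 0.4, 0.3, 0.2]])       # transmission matrix (pulley radii)
E = np.diag([1.0, 1.0, 0.8, 0.8])          # joint stiffness matrix
s = np.array([0.6])                        # actuator (tendon) displacement
tau_c = np.array([0.0, -0.05, 0.1, 0.0])   # external contact torques J^T f_c

# Stack the two constraints as one linear system in the unknowns (q, f):
#   R q            = s       (Eq. 2)
#   -E q + R^T f   = tau_c   (Eq. 3 at equilibrium, tau~ = tau_c)
A_blk = np.block([[R, np.zeros((na, na))],
                  [-E, R.T]])
b = np.concatenate([s, tau_c])
sol = np.linalg.solve(A_blk, b)
q, f = sol[:n], sol[n:]
print("q =", q, " tendon force f =", f)
```

The solution satisfies both constraints simultaneously, which is exactly what the closed-form expressions (4)-(5) encode.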

This procedure needs to iterate over multiple time steps to achieve equilibrium between mechanism torques and contact forces, which may cause chattering if the contact forces are nonsmooth. At rest, the driven hand will converge to the configuration given by Eq. (5) when the underactuated joints are modeled without friction. For hands with a configuration-dependent transmission distribution matrix $R$, we can still use the formula by assuming that small displacements in $q$ imply little variation in $R$, and thus setting a number of simulation substeps $\gg 1$. Moreover, while Eqs. (2)-(5) assume proper offsets in the actuator configuration and a zero rest position for the joint elastic elements, this will not necessarily be the case for the hand model. When emulating the underactuated transmission we therefore resort to commanding a position command $\sigma \in [0, 1]$ such that

$$\sigma = \sigma_{\mathrm{scaling}}(s - \sigma_{\mathrm{offset}}) \tag{7}$$

and Eq. (3) becomes

$$\tilde{\tau} = R^T f - E(q - q_{\mathrm{rest}}). \tag{8}$$

Also, not all joints in the hand are necessarily part of the underactuated transmission, so the previous formulas apply not to all joints, but to the subset of $n_u$ underactuated joints. More details on this aspect will be given in the following section.

5.2 Compliant Hand Emulator

Compliant underactuated hands that can be modeled with adaptive synergies are implemented by subclassing the CompliantHandEmulator class. Each outer simulation step may be composed of several substeps, during which the simulated world state evolves and the sensor and actuator emulators are updated; the controller is updated once per outer step, while forces and dynamics are evolved at a higher rate. To set the parameters describing the structure of the hand, the subclass must implement the loadHandParameters() function. These parameters define the set of underactuated joints, the synergy joints (i.e. the joints related to the underactuated joints of the gripper through a non-trivial transmission), and the actuated joints with a standard transmission (i.e. joints with the Normal or Affine transmission types, which are the defaults currently implemented in Klamp't) (Fig. 5). Contact point and force information is obtained thanks to the BLEM [21] algorithm, allowing for a stable emulation of the adaptive synergy [34]; thanks to adaptive time-stepping, mesh interpenetrations are avoided even in the case of large contact forces, where finite contact stiffness can cause the contact to happen below the mesh boundary layer. The relative smoothness of the BLEM contact force predictions is crucial to synergy emulation, since the external force component $f_c$ in (4) directly influences the actuated torques. The CompliantHandEmulator accepts commands either directly from the user (man-in-the-loop) or from a high-level controller. It simulates simple joint transmission constraints (present in the SoftHand emulator due to the roll-articular joints [9]) using a simple PID control with a setpoint updated to the reference (possibly underactuated) joint position at every simulation substep, while all the underactuated joints are commanded using Eq. (4).
The simulator obtains a real-time factor of ∼0.5 in simple contact scenarios and ∼0.24 in a typical grasp simulation on an i7-6500U @ 2.50 GHz processor, with 10 substeps (10 ms per time step, 1 ms per physics substep), and with the adaptive


timestep scheme impacting performance for more complex contact scenarios or less stiff objects.

Fig. 5. A sequence showing the CUHE being used to simulate the SoftHand grasping and lifting the object "play go rainbow stakin cups 9 red" from the YCB dataset, using the simple controller provided as an example base controller in the simulation framework package for the Robotic Grasping and Manipulation Competition

The emulator for the SoftHand is implemented by extending the CompliantHandEmulator class, in order to define the underactuated joints, the joint stiffness, and the transmission distribution matrix. The Pisa/IIT SoftHand model was loaded in the URDF (Unified Robot Description Format) format, available in [35]. In particular, a SoftHandLoader class was created which automatically loads the transmission and stiffness parameters from the URDF. The loader is thus usable as a reference for hands whose parameters are encoded in the model's URDF. Virtual joints are used to implement the synergistic actuators, in particular the joint driving the invisible wire link in the SoftHand URDF model. This is a design choice enforced in the emulator code, and it should be kept in new hand models whose transmission is simulated through the emulator. For the five-fingered SoftHand, $n_a = 1$ and $n_u = 19$, with one abduction/adduction joint for each finger plus 3 rolling-contact DoFs per finger (2 for the thumb) [9]. Hence, to the 19 underactuated joints correspond 14 additional joints, since each rolling-contact (Hillberry) joint is modeled using one underactuated joint and one matched joint constrained to follow its parent joint's movements [5].
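The subclassing pattern described above can be sketched as follows. CompliantHandEmulator and loadHandParameters() are named in the text, but this stand-in base class and all parameter values are assumptions for illustration, not the actual Klamp't emulator API.

```python
import numpy as np

# Stand-in for the Klamp't CompliantHandEmulator base class (assumed shape,
# for illustration only): subclasses declare the hand structure by
# implementing loadHandParameters().
class CompliantHandEmulator:
    def __init__(self):
        self.u_dofs = []      # underactuated joints
        self.R = None         # transmission distribution matrix
        self.E = None         # joint stiffness matrix
        self.loadHandParameters()

    def loadHandParameters(self):
        raise NotImplementedError

class ToySoftHandEmulator(CompliantHandEmulator):
    """A reduced 4-joint 'finger' instead of the full 19-DoF SoftHand."""
    def loadHandParameters(self):
        self.u_dofs = ["abd", "mcp", "pip", "dip"]
        self.R = np.array([[0.0, 0.5, 0.4, 0.3]])  # one synergy actuator
        self.E = np.diag([2.0, 1.0, 1.0, 0.8])

hand = ToySoftHandEmulator()
print(len(hand.u_dofs), "underactuated joints, n_a =", hand.R.shape[0])
```

In the real emulator these parameters would instead be read from the model's URDF by a loader such as the SoftHandLoader described above.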

6 Grasp Planning: Minimum Volume Bounding Box Approach

In this section, we present a method to plan grasps for soft hands. Since soft hands can easily conform to the shape of an object, with a preference for certain basic geometries and dimensions, we decompose the object into one such geometry, namely Minimum Volume Bounding Boxes (MVBBs), which the hand we use has proven able to grasp efficiently. A set of candidate hand poses is then generated using geometric information extracted from these MVBBs. All candidates are validated by means of dynamic simulation, using a SoftHand CUHE, to build a database of grasps for each object.


6.1 Bounding Box Decomposition

Algorithm 1 was presented in [17]. The idea is to decompose the object into MVBBs, minimizing the volume of the boxes that fit partial point clouds. The algorithm takes a point cloud of an object (points3D) and approximates it with MVBBs. This is performed by first projecting the points onto three planes, which are the non-opposite faces of the box. Then, using Algorithm 2, the points are split into two sets p1 and p2. The split is tried for each of the projected faces and for each of the two projection axes. The points of each set are approximated with a box and the areas a1 and a2 of the two boxes are computed. At the end, the algorithm returns the point and split direction minimizing the total area. The split is then performed on the set of all 3D points, resulting in two boxes p and q, each with its own set of 3D points. The volume reduction of the two new boxes with respect to the original is then compared with a user-given parameter t to judge whether the split was useful. If it is, the split is performed and the points in each of the boxes are treated as separate new point clouds to repeat the procedure; otherwise the algorithm stops.

Algorithm 1. Approximate the object with MVBBs
1: procedure BoxApproximate(points3D)
2:   box ← FindBoundingBox(points3D)
3:   faces ← nonOppositeFaces(box)
4:   (p, q) ← split(FindBestSplit(faces, points3D))
5:   if percentualVolume(p + q, box) < t then    ▷ t is a stop criterion
6:     BoxApproximate(p)
7:     BoxApproximate(q)
8:   end if
9: end procedure
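A simplified, runnable sketch of the decomposition logic of Algorithms 1 and 2 follows. Axis-aligned bounding boxes stand in for true minimum-volume boxes (no PCA, no face projection), which is an intentional simplification, but the recursion is the same: try every candidate split, keep the one minimizing the children's total volume, and recurse only while the split shrinks the volume enough (the threshold t plays the role of the stop criterion).

```python
import numpy as np

def bbox_volume(points):
    # Volume of the axis-aligned bounding box (extents clamped to avoid zero).
    extent = points.max(axis=0) - points.min(axis=0)
    return float(np.prod(np.maximum(extent, 1e-9)))

def find_best_split(points):
    # Try every axis-aligned cut; keep the one minimizing total child volume.
    best = (np.inf, None, None)
    for axis in range(points.shape[1]):
        for cut in np.unique(points[:, axis])[1:]:
            left = points[points[:, axis] < cut]
            right = points[points[:, axis] >= cut]
            vol = bbox_volume(left) + bbox_volume(right)
            if vol < best[0]:
                best = (vol, left, right)
    return best

def box_approximate(points, t, boxes):
    # Recurse while the best split reduces the volume below the fraction t.
    vol_split, left, right = find_best_split(points)
    if vol_split / bbox_volume(points) < t and left is not None:
        box_approximate(left, t, boxes)
        box_approximate(right, t, boxes)
    else:
        boxes.append(points)

# Two well-separated point clusters decompose into two boxes.
rng = np.random.default_rng(1)
cloud = np.vstack([rng.uniform(0, 1, (50, 3)),
                   rng.uniform(5, 6, (50, 3))])
boxes = []
box_approximate(cloud, t=0.5, boxes=boxes)
print(len(boxes), "boxes")
```

As in the text, a larger t stops the recursion earlier (coarser boxes), while a smaller t explores the geometry more deeply.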

Figure 6 shows a comparison of different values of t. Depending on the task we want to perform, this parameter can assume different values. For example, if we want to grasp objects by their handles, as in cups or pots, these can be

(a) $t = 10^{-4}$ (b) $t = 10^{-5}$ (c) $t = 2.5 \cdot 10^{-8}$

Fig. 6. Comparison of the MVBBs generated by Algorithm 1 using different values of t.


Algorithm 2. Split the point cloud
1: procedure FindBestSplit(faces, points3D)
2:   for i ← 1 to 3 do
3:     p2D ← project(points3D, faces[i])
4:     for x ← 1 to width(faces[i]) do
5:       (p1, p2) ← verticalSplit(p2D, x)
6:       a1 ← boundArea2D(p1)
7:       a2 ← boundArea2D(p2)
8:       if a1 + a2 < minArea then
9:         minArea ← a1 + a2
10:        bestSplit ← (i, x)
11:       end if
12:     end for
13:     for y ← 1 to height(faces[i]) do
14:       (p1, p2) ← horizontalSplit(p2D, y)
15:       a1 ← boundArea2D(p1)
16:       a2 ← boundArea2D(p2)
17:       if a1 + a2 < minArea then
18:         minArea ← a1 + a2
19:         bestSplit ← (i, y)
20:       end if
21:     end for
22:   end for
23:   return bestSplit
24: end procedure

isolated using a large value of t, see Fig. 6(a). Similar values can be used, for example, to grasp a cup from above. On the other hand, if we want to explore the geometry of the object more deeply (e.g. to grasp edges in pinch-grasp configurations), the parameter t must be decreased, see Fig. 6(b) and (c). In practice, the selection of the parameter t is related to the translation of high-level task specifications into low-level grasp actions. Consider an example in which a robot has to pick up a pot to pour its content into a glass. In this case, a convenient choice is to grasp the pot by the handle, as shown in Fig. 7(a), so a very fine object decomposition is not necessary. On the other hand, if we consider the task of passing an object from one hand to another in a bi-manual manipulation setting, the selected box to be grasped could be one at the border of the cooker body, see Fig. 7(b), so that the second arm has more options for where to grasp the object without colliding with the first hand. In this case, a finer object decomposition is beneficial.

6.2 Proposing Grasp Poses

The aim of this section is to explain how we align the hand with respect to the object in order to grasp it. We consider the orientation of each of the MVBBs and the orientation of the object itself. The orientation of the boxes comes from the principal axes determined through Principal Component Analysis (PCA)


Fig. 7. The selection of the box to grasp depends on the high level task specification.

Fig. 8. Graphical explanation of the procedure performed to align the hand with each bounding box.

performed in the FindBoundingBox function. The inclusion of PCA is one of the differences with respect to the original algorithm in [17], and it makes the algorithm invariant to the reference frame of the point cloud. Once the object is decomposed into MVBBs, the next step is to select a box to grasp. There are many possible criteria for this, the most promising and useful depending on the task the robot has to perform once the object is grasped. One possibility is to start generating hand poses from the outermost box. This choice is driven by our first priority of simply grasping the object in a robust, successful manner (most probably in a power grasp configuration, as the hand is just closed to a certain extent), for, e.g., clearing a table. Once an MVBB is selected, the procedure followed to find the transformation $T_O^H$ describing the pose of the hand with respect to the object is the following:

1. Align the x axis of the hand parallel to the longest side of the MVBB.
2. Align the z axis of the hand with the axis of the box which has the smallest angle with respect to the z axis of the hand.
3. Compute the orientation of the y axis to form a right-handed frame.


From this procedure, we can generate the rotation matrix $R_O^H$ defining the orientation of the hand frame H with respect to the object frame O. The frame H is placed 5 mm out of the MVBB, in the negative direction of the z axis defined previously. The procedure is illustrated graphically in Fig. 8. Note that, in steps (1) and (2), the selected axes (the longest axis of the box and the one with the smallest angle to the z axis of the hand) can coincide. In this case, we still align the x axis with the longest side of the MVBB, but the z axis is aligned with the axis of the box which has the smallest angle with respect to the vector connecting the centroid of the MVBB with the object centroid. If those axes are also parallel, we pick a different axis randomly from the point cloud.
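The alignment procedure of steps (1)-(3) can be sketched as follows. The box axes, side lengths, and the approach direction used to disambiguate the sign of the z axis are illustrative assumptions.

```python
import numpy as np

# Oriented-box data (assumed for this sketch): axes as columns, side lengths.
box_axes = np.eye(3)
side_lengths = np.array([0.12, 0.05, 0.03])

# (1) Hand x axis parallel to the longest side of the box.
longest = int(np.argmax(side_lengths))
x_hand = box_axes[:, longest]

# (2) Hand z axis: among the remaining box axes, the one with the smallest
#     angle with respect to an approach direction (here world -z, assumed).
approach = np.array([0.0, 0.0, -1.0])
candidates = [i for i in range(3) if i != longest]
z_idx = max(candidates, key=lambda i: abs(box_axes[:, i] @ approach))
z_hand = box_axes[:, z_idx] * np.sign(box_axes[:, z_idx] @ approach)

# (3) y axis completes a right-handed frame.
y_hand = np.cross(z_hand, x_hand)
R_OH = np.column_stack([x_hand, y_hand, z_hand])

print("det(R) =", round(float(np.linalg.det(R_OH)), 3))
```

The result is a proper rotation matrix (orthonormal, determinant +1), so it can be combined directly with the 5 mm offset along the negative z axis to form the full transformation.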

6.3 Pose Variations

The previous procedure generates just a single hand configuration. However, once an MVBB is generated, there is a large number of possibilities to grasp it. In order to generate more variations for a box, we first set the range of motion in which we can move the hand, translating by a distance $x_t$ and rotating by an angle $\alpha_t$, both along the longest axis of the box, while still not colliding with the object. Figure 9 shows the random variations created for the cup. The variables $x_t$ and $\alpha_t$ generate a 2D space, with high probability of being collision free, from which we pick a random point with uniform distribution and then check for collisions with the hand. If the configuration is collision free, it is a candidate pose to grasp the object. In this work, we generate 40 random configurations for each box and consider the first 5 boxes of the object, so for each object there are 200 candidate poses. This procedure constitutes one of the differences with respect to the method proposed in [5], where the authors did not perform collision checking prior to the simulations. As they explained in the paper, they had a high percentage of failures simply because, from the very beginning of the simulations, there were many collisions with the object. In this work we performed hand-object collision checking before any simulation and discarded the poses in collision.
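The pose-variation sampling described above can be sketched as follows: draw (x_t, α_t) uniformly and compose a transform that slides and rotates the base hand pose along the box's longest (x) axis. The ranges, the base pose, and the collision predicate are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)

def rot_x(a):
    # Rotation about the x axis by angle a.
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def sample_variations(T_base, x_range, alpha_range, n=40,
                      collides=lambda T: False):
    # Draw n (x_t, alpha_t) pairs uniformly and keep collision-free poses.
    poses = []
    for _ in range(n):
        x_t = rng.uniform(*x_range)
        alpha_t = rng.uniform(*alpha_range)
        T = np.eye(4)
        T[:3, :3] = rot_x(alpha_t)   # rotate about the box x axis
        T[0, 3] = x_t                # translate along the box x axis
        T_cand = T_base @ T          # perturbation expressed in the box frame
        if not collides(T_cand):     # stub for the hand-object collision check
            poses.append(T_cand)
    return poses

poses = sample_variations(np.eye(4), x_range=(-0.04, 0.04),
                          alpha_range=(-np.pi / 2, np.pi / 2))
print(len(poses), "candidate poses")
```

With a real collision checker in place of the stub predicate, the surviving poses are exactly the candidate grasps that enter the simulation stage.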

7 The Competition

7.1 Hand-in-Hand Track

The hand-in-hand competition was divided into two sessions: the pick-and-place session and the manipulation session. The tasks of the pick-and-place session were the simplest: the attendant had to pick ten different objects from a basket and place them into four different virtual boxes. In particular, the bottle of water, the scissors, and the hammer had to be placed in their boxes with a defined orientation. For these tasks the Pisa/IIT SoftHand showed its total simplicity of use. Combined with a custom-made handle, it allowed the user to move the hand inside the basket with his own wrist and successfully pick and place all objects (Fig. 10).


Fig. 9. In order to generate more poses to grasp each box, the hand is rotated and translated along the x axis of the box.

Fig. 10. Pisa/IIT SoftHand before and after the pick and place session

The manipulation session was composed of ten different tasks, divided into four levels of difficulty; during the challenge the user had to accomplish four tasks, earning 10, 20, 30, or 40 points depending on the level to which the task belonged. Most tasks in the first level consisted of grasping simple objects, such as a spoon or a salt shaker, and then using the object to move a third object from one bowl to another. In the second level the manipulation was still simple, but the objects to be grasped were more difficult because of their geometry (e.g. a plug or a straw) or their physical features (e.g. the hammer). In the third level the manipulation itself also became difficult; for example, in one task the user had to grasp the plunger of a fully closed syringe and open the syringe without removing it. Finally, the last level focused on the repeatability of the manipulation tasks: for example, in one task the user had to grasp a pair of scissors (a difficult object that many grippers were not able to grasp) and then

34

M. Bonilla et al.

cut a paper following a line, repeating several cut motions always in the same manner, following the line. Our strategy for this session was to show videos to the user to let him know that the Pisa/IIT SoftHand can be used in a simple and natural way, just like he uses his own hands (Fig. 11).

Fig. 11. Pisa/IIT SoftHand while cutting a sheet with the scissors

7.2 Simulation Track

A strategy for the second task, picking irregular objects from a cluttered shelf, was devised that makes use of the elements already presented in the architecture summarized in Fig. 12. In an offline phase, we proceed to:

Fig. 12. For each mesh in the set an MVBB decomposition is created, and for each obtained pose a simulation is run. Both successful and unsuccessful simulations are stored in a database, which is filtered once the grasp scenario is known, leaving only grasps that do not cause collisions with the environment. The figure shows an example of grasping the large black spring clamp from the YCB object set.

– extract MVBBs from every object of the YCB and Amazon datasets; for objects whose meshes had too many vertices, an automatic quadric-error edge-collapse simplification [16] was applied;
– generate candidate grasps from each MVBB as explained in Sect. 6.2;


– programmatically simulate each grasp candidate to flag the successful and unsuccessful grasps and store them in a database. The objects are placed on a plane and subject to gravity, and a scoring procedure is used that consists of closing the hand, lifting it by 0.2 m over one second, and comparing the (world-aligned) z-coordinate of the object CoM against the CoM at rest: if the difference exceeds a predefined threshold, the grasp is marked successful.

During the run (online phase) we then:

– check which of the successful grasps from the offline phase are collision free. Only collisions between the hand and the environment are considered, not collisions between the hand and the objects, nor collisions during the approach phase;
– rotate the hand to orient it according to the goal grasp pose, align it with the pose slightly raised, then perform a simple linear approach;
– lower the hand, grasp, lift, and move the hand on top of the box.

The state machine responsible for the second phase attempts this procedure automatically for all objects on the shelf.
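The offline lift test above amounts to a one-line success predicate. The sketch below is an illustration of that check, not the authors' simulator code; the threshold value is an assumption, as the paper does not state it.

```python
def grasp_succeeded(com_rest_z, com_lifted_z, lift_height=0.2,
                    threshold=0.15):
    """Score a simulated grasp by how far the object's CoM rose.

    After closing the hand and lifting it by `lift_height` metres over
    one second, the world-aligned z-coordinate of the object's centre of
    mass is compared against its value at rest; the grasp counts as
    successful if the object rose by more than `threshold`. The 0.15 m
    threshold here is an assumed value.
    """
    return (com_lifted_z - com_rest_z) > threshold

# Object rose almost the full 0.2 m -> success; object slipped -> failure.
print(grasp_succeeded(0.05, 0.24))  # True
print(grasp_succeeded(0.05, 0.07))  # False
```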

8 Results and Conclusions

For the hand-in-hand track, the ten tasks were completed by the user in 69 s, obtaining the maximum score and showing not only the simplicity of using the handle to control the SoftHand but also the soundness of the idea behind the hand's design. In fact, the SoftHand was the only five-fingered hand design to participate in the competition, while the other teams used simpler grippers composed of two or three fingers, whose geometry differs substantially from that of a human hand. During the competition we realized how important the design of the handle that drives the Pisa/IIT SoftHand is. The one we used was designed to be simple and lightweight, but a deeper analysis of its design is warranted to make it more intuitive. Teleoperating the robotic hand with a glove, possibly combined with EMG signals, is envisioned as future research. The Pisa/IIT SoftHand was the fastest device to complete both sessions. In the authors' opinion this is probably because, for a generic user, it is more intuitive and natural to move, manipulate, and grasp objects with a robotic hand that has a shape and function similar to the human hand. In particular, we suspect that when users try to grasp or manipulate an object, they tend to leverage their own knowledge of the wrist movements that succeed for a given manipulation task. The manipulation session likewise showed the power of the idea behind the SoftHand's design: the user was able to operate it in a simple and natural way, completing all the tasks in about 22 min, while the runner-up took about 32 min. As in the pick-and-place session, using a grasping tool similar to their own hand allowed the user to leverage this knowledge to place the wrist properly.


In the case of the simulation track, the competition proved to be a good testing ground for the grasp algorithms and emulator presented in the previous sections. Unfortunately, for the first task the balls turned out to be too big for the hand to grasp, as the task was tailored more to grippers than to human-like hands. On the second task, though, we were surprised by how efficiently the devised algorithms picked the objects from the shelf. A simple state machine was developed to perform the task, with the addition of a "shaking" procedure at the end of the placing phase, together with the hand-opening sequence, to make sure that objects would fall into the box without remaining stuck to the hand. The competition was also an opportunity to tune the hand model in the simulator to better match the design of the latest physical prototypes. In particular, by tweaking the transmission matrix R, the closing behavior of the thumb was tuned, qualitatively matching the physical hand both kinematically and with respect to grasp performance. While the CUHE allows tweaking a synergy scaling parameter (σ_scaling in Eq. (7)), normalizing R row-wise ensured a consistent value for synergy scaling while tweaking the hand parameters.

Acknowledgements. This research received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No. 645599 (SOMA).

References

1. Ajoudani, A., et al.: A manipulation framework for compliant humanoid COMAN: application to a valve turning task. In: IEEE-RAS International Conference on Humanoid Robots (Humanoids), pp. 664–670. IEEE (2014)
2. Bernstein, N.A.: The Co-ordination and Regulation of Movements, 1st edn. Pergamon Press Ltd., New York (1967)
3. Bicchi, A., Gabiccini, M., Santello, M.: Modelling natural and artificial hands with synergies. Philos. Trans. R. Soc. B: Biol. Sci. 366(1581), 3153–3161 (2011)
4. Birglen, L., Gosselin, C.M., Laliberté, T.: Underactuated Robotic Hands, vol. 40. Springer, Heidelberg (2008)
5. Bonilla, M., et al.: Grasping with soft hands. In: IEEE-RAS International Conference on Humanoid Robots (Humanoids), Madrid, Spain, 18–20 November 2014
6. Brown, C.Y., Asada, H.H.: Inter-finger coordination and postural synergies in robot hands via mechanical implementation of principal components analysis. In: IEEE International Conference on Intelligent Robots and Systems (IROS), pp. 2877–2882. IEEE (2007)
7. Brygo, A., et al.: Synergy-based interface for bilateral tele-manipulations of a master-slave system with large asymmetries. In: International Conference on Robotics and Automation (2016)
8. Cannon, J.R., Howell, L.L.: A compliant contact-aided revolute joint. Mech. Mach. Theory 40(11), 1273–1293 (2005)
9. Catalano, M.G., et al.: Adaptive synergies for the design and control of the Pisa/IIT SoftHand. Int. J. Robot. Res. (IJRR) 33, 768–782 (2014). https://doi.org/10.1177/0278364913518998


10. Ciocarlie, M., Goldfeder, C., Allen, P.: Dexterous grasping via eigengrasps: a low-dimensional approach to a high-complexity problem. In: Robotics: Science and Systems Manipulation Workshop on Sensing and Adapting to the Real World. Citeseer (2007)
11. MSC Software Corp.: Adams. http://www.mscsoftware.com/product/adams. Accessed 26 Aug 2015
12. Deimel, R., Brock, O.: A compliant hand based on a novel pneumatic actuator. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 2047–2053 (2013). https://doi.org/10.1109/ICRA.2013.6630851
13. Deimel, R., Brock, O.: A novel type of compliant, underactuated robotic hand for dexterous grasping. In: Robotics: Science and Systems, Berkeley, CA, pp. 1687–1692 (2014)
14. Santina, C.D., et al.: Dexterity augmentation on a synergistic hand: the Pisa/IIT SoftHand+. In: IEEE-RAS International Conference on Humanoid Robots (Humanoids), pp. 497–503. IEEE (2015)
15. Eppner, C., Brock, O.: Planning grasp strategies that exploit environmental constraints. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 4947–4952 (2015). https://doi.org/10.1109/ICRA.2015.7139886
16. Garland, M., Heckbert, P.S.: Surface simplification using quadric error metrics. In: Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1997, pp. 209–216. ACM Press/Addison-Wesley Publishing Co., New York (1997)
17. Geidenstam, S., et al.: Learning of 2D grasping strategies from box-based 3D object approximations. In: Robotics: Science and Systems (RSS), Seattle, USA (2009)
18. Godfrey, S.B., et al.: A synergy-driven approach to a myoelectric hand. In: 2013 IEEE International Conference on Rehabilitation Robotics (ICORR), pp. 1–6. IEEE (2013)
19. Grebenstein, M., et al.: The DLR hand arm system. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 3175–3182. IEEE (2011)
20. Grioli, G., et al.: Adaptive synergies: an approach to the design of under-actuated robotic hands. In: IEEE International Conference on Intelligent Robots and Systems (IROS), pp. 1251–1256. IEEE (2012)
21. Hauser, K.: Robust contact generation for robot simulation with unstructured meshes. In: International Symposium on Robotics Research, Singapore (2013)
22. Hillberry, B.M., Hall Jr., A.S.: Rolling contact joint. US Patent 3,932,045 (1976)
23. Hirose, S.: Connected differential mechanism and its applications. In: Proceedings of 2nd ICAR, pp. 319–326 (1985)
24. Kappler, D., Bohg, J., Schaal, S.: Leveraging big data for grasp planning. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 4304–4311. IEEE (2015)
25. Kim, J., et al.: Physically based grasp quality evaluation under pose uncertainty. IEEE Trans. Robot. 29(6), 1424–1439 (2013). https://doi.org/10.1109/TRO.2013.2273846
26. Laliberté, T., Birglen, L., Gosselin, C.: Underactuation in robotic grasping hands. Mach. Intell. Robot. Control 4(3), 1–11 (2002)
27. Latash, M.L.: Fundamentals of Motor Control. Academic Press, New York (2012)
28. Ma, R.R., Odhner, L.U., Dollar, A.M.: A modular, open-source 3D printed underactuated hand. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 2737–2743 (2013). https://doi.org/10.1109/ICRA.2013.6630954

38

M. Bonilla et al.

29. Miller, A.T., Allen, P.K.: Examples of 3D grasp quality computations. In: IEEE International Conference on Robotics and Automation, pp. 1240–1246. IEEE (1999)
30. Odhner, L.U., et al.: A compliant, underactuated hand for robust manipulation. Int. J. Robot. Res. (IJRR) 33(5), 736–752 (2014). https://doi.org/10.1177/0278364913514466
31. RightHand Robotics: Reflex SF spec sheet. http://www.righthandrobotics.com/main:reflex. Accessed 26 Aug 2015
32. Robotiq: 3-finger adaptive robot gripper spec sheet. http://robotiq.com/products/industrial-robot-hand/. Accessed 26 Aug 2015
33. Rocchi, A., Hauser, K.: A generic simulator for underactuated compliant hands. In: 2016 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR) (2016)
34. Rocchi, A., et al.: Stable simulation of underactuated compliant hands. In: IEEE International Conference on Robotics and Automation (ICRA) (2016)
35. Rosales, C.J.: Pisa/IIT Soft Hand. https://github.com/CentroEPiaggio/pisa-iit-soft-hand. Accessed 26 Aug 2015
36. Santello, M., Flanders, M., Soechting, J.F.: Postural hand synergies for tool use. J. Neurosci. 18(23), 10105–10115 (1998)
37. Wimboeck, T., Ott, C., Hirzinger, G.: Passivity-based object-level impedance control for a multifingered hand. In: IEEE International Conference on Intelligent Robots and Systems (IROS), pp. 4621–4627. IEEE (2006)
38. Kai, X., et al.: Design of an underactuated anthropomorphic hand with mechanically implemented postural synergies. Adv. Robot. 28(21), 1459–1474 (2014)

Design of Modular Humanoid Robotic Hand Driven by SMA Actuator

Yang Chen, Shaofei Guo, Hui Yang, and Lina Hao
School of Mechanical Engineering and Automation, Northeastern University, Shenyang, China
[email protected]

Abstract. With their excellent properties of light weight, low energy consumption, and high power-to-weight ratio, shape memory alloy (SMA) actuators are now widely applied in robotic hands. A novel modular humanoid robotic hand driven by SMA actuators, fabricated with a 3D printer, is proposed in this paper. An innovative displacement amplification pulley is designed to increase the output displacement of the SMA actuator, and a novel permanent magnet restoring structure generates a larger output force during grasping than a conventional spring restoring structure. In addition, the modular design makes the hand easy to assemble and disassemble, and its underactuated grasping form simplifies the control system. Grasping tests on objects of different shapes and dimensions show that the hand performs well: the grasping diameter of objects is less than 120 mm, and the maximum grasping weight is 500 g.

Keywords: Modular humanoid robotic hand · SMA actuator · Modular design · Permanent magnet restoring structure · Displacement amplification pulley

1 Introduction

Humanoid dexterous robotic hands have been extensively researched and have become a significant development direction in robotics. As an increasing number of scholars focus on robotic hands, more and more novel dexterous hands with powerful grasping functions have been proposed and fabricated, such as the Shadow Dexterous Hand [1], the Festo Hand [2], the Ionic Polymer Metal Composite (IPMC) artificial finger [3], the dielectric elastomer (DE) robotic finger [4], a robotic hand driven by a shape memory alloy (SMA) actuator [5], and a nylon-muscle-actuated robotic finger [6]. All of these hands are driven by artificial intelligent material actuators instead of conventional motor or hydraulic drives. With their light weight, low energy consumption, and high power-to-weight ratio, artificial intelligent material actuators have been widely used in robotic systems. In our research we are interested in the SMA actuator: SMA combines driving and sensing properties and has a high power-to-weight ratio and a low driving voltage. Many hand designs driven by SMA actuators exist, such as a multifunctional prosthetic hand [7], a three-finger prosthetic hand [8], a robotic hand [9], and a soft robotic hand with antagonistic SMA strips [10]. However, the strain of SMA wire is always less than 8%; in industrial applications a tensile machine is employed to make the SMA wire reach its maximum strain of about 8%, which is infeasible in a robotic hand application, so generating enough displacement from an SMA actuator requires mounting it over a large area of the hand or even the forearm to drive finger flexion. Moreover, the restoring force of a spring, which grows as the finger flexes, restrains the flexing motion during grasping and decreases the work efficiency of the SMA actuator. SMAs also exhibit strong inherent nonlinearity, requiring complex control systems to compensate for it.

To overcome the problems associated with a robotic hand driven by SMA actuators, a modular humanoid robotic hand is designed and fabricated using a 3D printer. The hand is assembled from seven modules: fingers, thumb, palm, displacement amplification pulley, permanent magnet restoring structure, fan, and control system hardware. The modular design simplifies assembly and disassembly, and the displacement amplification pulley increases the output displacement of the SMA actuators. Additionally, a permanent magnet restoring structure and a fan improve the work efficiency of the SMA actuator, and the underactuated enveloping grasp adapts to the shape of the object, which simplifies the control system. All of these features enhance the performance and efficiency of the hand.

This paper is organized as follows. Section 2 describes the modular components and their functionality in the hand design. Section 3 derives the kinematics, while Sect. 4 derives the drive parameters of the hand. Section 5 proposes the control system scheme and selects the corresponding hardware. Section 6 presents grasping test results, including the performance parameters used, and Sect. 7 gives the summary and conclusion.

© Springer International Publishing AG, part of Springer Nature 2018
Y. Sun and J. Falco (Eds.): RGMC 2016, CCIS 816, pp. 39–56, 2018. https://doi.org/10.1007/978-3-319-94568-2_3

2 Design of Hand

The humanoid robotic hand is designed using modular theory, and each part of the hand is fabricated individually so that it is easy to assemble. The shape and dimensions of the hand are similar to a human hand. The hand has five fingers: an index finger, a middle finger, a ring finger, a little finger identical to the ring finger, and a thumb. The entire hand body is modeled in SolidWorks and fabricated using a 3D printer. We use an SMA actuator (MigaOne-15 Linear Shape Memory Alloy Actuator) purchased from Miga Motor Company to drive each finger by generating rotational motion [11]. In this section, we describe all parts of the hand in detail.

2.1 Finger

The finger has three parts, the Distal Interphalangeal (DIP), Proximal Interphalangeal (PIP), and Metacarpophalangeal (MCP) joints [12], each simplified to one degree of freedom (DOF). The three rotational joints of the finger are driven by a single SMA actuator, so the finger generates an underactuated motion. This grasping form can adapt to the shape of the object, enabling a strong and stable grasp. To enhance the friction between the finger surface and the object, non-slip mats are stuck to the surfaces of the DIP, PIP, and MCP. Two ropes are connected to the oriented pulley in each finger: one is connected to the SMA actuator to generate the grasping motion, and the other is connected to the permanent magnet restoring structure to generate the restoring force of the finger. The structure diagram of the finger is shown in Fig. 1.

Fig. 1. The structure diagram of finger

2.2 Thumb

In human hand manipulation, the thumb plays a very important role in most grasping motions. The human thumb has an Interphalangeal (IP) joint with one DOF, an MCP joint with two DOFs, and a Trapeziometacarpal (TM) joint with two DOFs [12]. To ensure that the thumb can perform grasps using a single SMA linear actuator, the thumb of this humanoid robotic hand is designed with an IP, MCP, and TM joint, all with one DOF. The thumb is mounted on the edge of the palm. The thumb also has two ropes that wrap around and terminate on the IP pulley for transmitting the grasping force and the restoring force.

2.3 Palm

To make the humanoid robotic hand modular and easy to assemble, the palm is assembled from four fixed shells, which are used to mount the SMA actuator, the displacement amplification pulley, and the permanent magnet restoring structure. Each fixed shell corresponds to one finger. The mounting positions of all the components in the fixed shell are shown in Fig. 2.

Fig. 2. The structure diagram of fixed shell

2.4 Displacement Amplification Pulley

In this paper, the maximum strain of the SMA wire is about 5%. In general, a long enough SMA wire is needed to flex and extend one finger fully; other SMA-based robotic hand designs use a large portion of the wrist space to mount the SMA wire actuators. We instead design a highly integrated, modular humanoid robotic hand in which the limited space of the palm forces us to minimize the length of the SMA wire actuator while maximizing linear displacement. To this end, a novel displacement amplification pulley is designed and fabricated with a 3D printer, as shown in Fig. 3. This variable-diameter pulley consists of two halves: one with a diameter of 12 mm and the other with a diameter of 20 mm. The output rope of the SMA actuator is wound on the edge of the small-diameter half, and the driving rope of the finger on the edge of the large-diameter half, so the variable-diameter pulley amplifies the output displacement of the finger's driving rope by a factor of 20/12 ≈ 1.67. That is, an SMA actuator output displacement of 8.255 mm becomes a driving-rope displacement of 13.758 mm. This design not only saves SMA wire but also makes the hand compact and elaborate.
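The amplification ratio above follows directly from the two pulley radii: both halves turn through the same angle, so rope displacements scale with the radii. A minimal sketch of this arithmetic:

```python
def amplified_displacement(input_disp_mm, r_in_mm=6.0, r_out_mm=10.0):
    """Displacement amplification of a variable-diameter pulley.

    Both halves rotate through the same angle, so the rope displacements
    scale with the radii: out = in * (r_out / r_in). Radii of 6 mm and
    10 mm correspond to the 12 mm and 20 mm diameters given in the text.
    """
    return input_disp_mm * (r_out_mm / r_in_mm)

out = amplified_displacement(8.255)
print(round(out, 3))  # -> 13.758 mm, matching the value in the text
```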

Fig. 3. The variable diameter pulley

2.5 Permanent Magnet Restoring Structure

The SMA actuator used in this paper contains a one-way SMA wire, whose restoring force is so small that the finger cannot fully regain its initial state on its own; a restoring structure is therefore required for this SMA-driven robotic hand. While a spring seems a viable component to provide the restoring force, as the output displacement of the SMA actuator grows, a spring would generate an increasing opposing force that counteracts the grasping force of the robotic hand. To avoid this force-canceling problem, an innovative permanent magnet restoring structure is proposed and fabricated with a 3D printer, as shown in Fig. 4. The structure consists of a mounting base and a pair of opposite-polarity permanent magnets. When the output displacement of the SMA actuator becomes larger, the magnets generate less force; when the output displacement becomes smaller, the magnet force increases, making the finger return to its initial state rapidly. As evidenced by grasping tests, this method of actuation increases the overall grasping forces of the humanoid robotic hand.

Fig. 4. The permanent magnet restoring structure

2.6 The Humanoid Robotic Hand

All the components of the humanoid robotic hand have been designed as described above. The hand consists mainly of five fingers and a palm. The physical prototype of the humanoid robotic hand is shown in Fig. 5, and its dimension parameters are listed in Table 1.
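The force-canceling argument for preferring magnets over a spring can be illustrated numerically. This is an illustrative model only: the text gives no force law or constants, so the linear spring, the inverse-square magnet decay, and the constants `k`, `c`, and `x0` below are all assumptions.

```python
def spring_force(x, k=0.5):
    """Linear spring: the opposing force grows with SMA displacement x (mm).
    The stiffness k is an assumed value."""
    return k * x

def magnet_force(x, c=50.0, x0=2.0):
    """Magnet pair: attraction decays roughly with the square of the gap,
    which widens as the SMA displacement x grows. c and x0 are assumed."""
    return c / (x0 + x) ** 2

# As the finger flexes (x grows), the spring opposes ever harder,
# while the magnet's opposition fades away.
for x in (0.0, 4.0, 8.0):
    print(f"x={x:4.1f} mm  spring={spring_force(x):5.2f}  "
          f"magnet={magnet_force(x):5.2f}")
```

The opposite trends of the two columns are the point: near the initial state the magnets pull the finger back strongly, while during a deep grasp they barely oppose the SMA actuator.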

Fig. 5. The physical diagram of humanoid robotic hand

Table 1. The dimension parameters of the robotic hand

              Length/mm  Width/mm  Thickness/mm  Rotation angle/°
  DIP         23         16        10            40
  PIP         34         16        13            60
  MCP         35         16        14            80
  Fixed shell 114        22        25            –

3 Kinematics Analysis of Hand

3.1 Forward Kinematics

The structure of the modular humanoid finger can be simplified as an open-loop kinematic chain of three links in series. When the index, middle, ring, and little fingers extend completely, we define the hand state as the palm plane. The plane of the TM joint is 10 mm above the palm plane. The coordinate system of the robotic hand shown in Fig. 6 is built by the D-H method. The base coordinate system of each finger is built at the rotation axis of its MCP (or TM) joint. $o\text{-}x_p y_p z_p$ denotes the palm coordinate system, and $o\text{-}x_{wi} y_{wi} z_{wi}$ denotes the coordinate system of each finger joint, where $w$ identifies the finger and $i$ the joint index. We use $t$, $f$, $m$, $r$, and $l$ to represent the thumb, index finger, middle finger, ring finger, and little finger, respectively.

Fig. 6. The robotic hand coordinate system

Based on the physical dimensions of the robotic hand, we can determine the origin of each finger's base coordinate system in the palm coordinate system. The D-H parameters of the finger are shown in Table 2, where $L_i$ is the distance between the axes of two adjacent joints, $d_i$ is the distance between the common perpendiculars of $L_i$ and $L_{i-1}$, $\alpha_i$ is the angle between the axes of two adjacent joints, and $\theta_i$ is the rotational angle of the link. The homogeneous coordinate transformation matrix of each finger joint is obtained from the D-H parameters as follows; note that $c$ is short for the cosine function and $s$ for the sine function.

Table 2. D-H parameters of the finger

  Joint number  L_i/mm  d_i/mm  α_i/°  θ_i
  1             L_1     0       0      θ_1
  2             L_2     0       0      θ_2
  3             L_3     0       0      θ_3

$$T_1 = \begin{bmatrix} c\theta_1 & -s\theta_1 & 0 & L_1 c\theta_1 \\ s\theta_1 & c\theta_1 & 0 & L_1 s\theta_1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (1)$$

$$T_2 = \begin{bmatrix} c\theta_2 & -s\theta_2 & 0 & L_2 c\theta_2 \\ s\theta_2 & c\theta_2 & 0 & L_2 s\theta_2 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (2)$$

$$T_3 = \begin{bmatrix} c\theta_3 & -s\theta_3 & 0 & L_3 c\theta_3 \\ s\theta_3 & c\theta_3 & 0 & L_3 s\theta_3 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (3)$$

The transformation from the fingertip coordinate system to the base coordinate system of the finger is then

$$T_3^0 = T_1 T_2 T_3 = \begin{bmatrix} c_{123} & -s_{123} & 0 & L_1 c_1 + L_2 c_{12} + L_3 c_{123} \\ s_{123} & c_{123} & 0 & L_1 s_1 + L_2 s_{12} + L_3 s_{123} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}. \quad (4)$$

The fingertip position in the base coordinate system of the finger is then obtained as

$$P_{f0} = T_3^0 P_{f3} = \begin{bmatrix} L_1 c_1 + L_2 c_{12} + L_3 c_{123} \\ L_1 s_1 + L_2 s_{12} + L_3 s_{123} \\ 0 \\ 1 \end{bmatrix}, \quad (5)$$

where $P_{f3} = [\,0\ \ 0\ \ 0\ \ 1\,]^{T}$. From the above homogeneous coordinate transformation matrices, each fingertip position in the palm coordinate system can be expressed as

$$\begin{cases} T_p^{w_p} = T_p^{w_0}\, T_{w_0}^{w_p} \\ P_p^{w_p} = T_p^{w_p} P_{w_i} = T_p^{w_0} P_{w_0} \end{cases} \quad (6)$$


where $T_p^{w_p}$ is the transformation matrix from the fingertip coordinate system to the palm coordinate system, $P_p^{w_p}$ is the fingertip position in the palm coordinate system, $T_p^{w_0}$ is the homogeneous transformation matrix from the finger base coordinate system to the palm coordinate system, and $P_{w_0}$ is the fingertip position in the finger coordinate system. According to Eq. (6), the fingertip position of the index finger in the palm coordinate system is

$$P_p^{f_p} = \begin{bmatrix} 27 \\ 35 c_1 + 34 c_{12} + 23 c_{123} \\ 35 s_1 + 34 s_{12} + 23 s_{123} + 55 \\ 1 \end{bmatrix}$$

the fingertip position of the middle finger is

$$P_p^{m_p} = \begin{bmatrix} 0 \\ 35 c_1 + 34 c_{12} + 23 c_{123} \\ 35 s_1 + 34 s_{12} + 23 s_{123} + 60 \\ 1 \end{bmatrix}$$

the fingertip position of the ring finger is

$$P_p^{r_p} = \begin{bmatrix} 5 \\ 35 c_1 + 34 c_{12} + 23 c_{123} \\ 35 s_1 + 34 s_{12} + 23 s_{123} + 55 \\ 1 \end{bmatrix}$$

and the fingertip position of the little finger is

$$P_p^{l_p} = \begin{bmatrix} 54 \\ 35 c_1 + 34 c_{12} + 23 c_{123} \\ 35 s_1 + 34 s_{12} + 23 s_{123} + 50 \\ 1 \end{bmatrix}$$
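As a quick check of Eq. (5), a minimal Python sketch of the planar forward kinematics, using the link lengths of Table 1 (L1 = 35 mm, L2 = 34 mm, L3 = 23 mm). This is an illustration, not the authors' MATLAB code.

```python
from math import cos, sin, radians

# Link lengths from Table 1 (MCP, PIP, DIP phalanges) in mm.
L1, L2, L3 = 35.0, 34.0, 23.0

def fingertip_in_base(t1, t2, t3):
    """Planar forward kinematics of one finger, per Eq. (5).

    t1..t3 are the three joint angles in degrees; returns the fingertip
    (x, y) position in the finger's base frame, in mm.
    """
    a1, a12, a123 = radians(t1), radians(t1 + t2), radians(t1 + t2 + t3)
    x = L1 * cos(a1) + L2 * cos(a12) + L3 * cos(a123)
    y = L1 * sin(a1) + L2 * sin(a12) + L3 * sin(a123)
    return x, y

# Fully extended finger: the tip lies L1 + L2 + L3 = 92 mm along x.
print(fingertip_in_base(0, 0, 0))  # -> (92.0, 0.0)
```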

3.2 Inverse Kinematics

The inverse kinematics problem of the hand is to solve for the angle of each finger joint given the fingertip position. The pose of the fingertip in the base coordinate system of the finger can be calculated from Eq. (6):

$$T_{w_0} = (T_p^{w_0})^{-1}\, T_p^{w_p} \quad (7)$$

Take the index finger as an example, and let $T_p^{f_p}$ be the pose of the index fingertip in the palm coordinate system. From it we obtain the pose $T_{f_0}$:

$$T_{f_0} = \begin{bmatrix} c\phi & -s\phi & 0 & x_1 \\ s\phi & c\phi & 0 & y_1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (8)$$

where

$$\begin{cases} c\phi = c_{123} \\ s\phi = s_{123} \\ x_1 = L_1 c_1 + L_2 c_{12} \\ y_1 = L_1 s_1 + L_2 s_{12} \end{cases}$$

and

$$\begin{cases} c_{12} = c_1 c_2 - s_1 s_2 \\ s_{12} = c_1 s_2 + s_1 c_2 \\ c_2 = \dfrac{x_1^2 + y_1^2 - L_1^2 - L_2^2}{2 L_1 L_2} \\ s_2 = \pm\sqrt{1 - c_2^2} \end{cases}$$

The joint angles can then be calculated by the algebraic method:

$$\begin{cases} \theta_1 = \arctan2(y_1, x_1) - \arctan2(k_2, k_1) \\ \theta_2 = \arctan2(\sin\theta_2, \cos\theta_2) \end{cases}$$

where

$$\begin{cases} k_1 = L_1 + L_2 \cos\theta_2 \\ k_2 = L_2 \sin\theta_2 \end{cases}$$

Finally, $\phi$ satisfies

$$\theta_1 + \theta_2 + \theta_3 = \arctan2(s\phi, c\phi) = \phi,$$

from which $\theta_3$ follows.
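The steps above can be sketched as a short planar 3R inverse-kinematics routine, verified by a round trip through the forward kinematics of Eq. (5). This is an illustrative implementation with an assumed elbow-down branch choice, not the authors' code.

```python
from math import atan2, cos, sin, sqrt, radians, degrees

L1, L2, L3 = 35.0, 34.0, 23.0  # link lengths in mm (Table 1)

def finger_ik(x, y, phi):
    """Planar 3R inverse kinematics following Eqs. (7)-(8).

    (x, y) is the fingertip position and phi its orientation, both in
    the finger base frame (mm, radians). Returns the joint angles
    (theta1, theta2, theta3) in radians, taking the elbow-down branch.
    """
    # Subtract the distal link to get the wrist point (x1, y1) of Eq. (8).
    x1, y1 = x - L3 * cos(phi), y - L3 * sin(phi)
    c2 = (x1**2 + y1**2 - L1**2 - L2**2) / (2 * L1 * L2)
    s2 = sqrt(max(0.0, 1 - c2**2))           # + branch of s2 = ±sqrt(1 - c2^2)
    t2 = atan2(s2, c2)
    t1 = atan2(y1, x1) - atan2(L2 * s2, L1 + L2 * c2)   # k2 = L2 s2, k1 = L1 + L2 c2
    t3 = phi - t1 - t2                        # from theta1 + theta2 + theta3 = phi
    return t1, t2, t3

# Round trip against the forward kinematics of Eq. (5).
t = [radians(a) for a in (20, 30, 10)]
phi = sum(t)
x = L1 * cos(t[0]) + L2 * cos(t[0] + t[1]) + L3 * cos(phi)
y = L1 * sin(t[0]) + L2 * sin(t[0] + t[1]) + L3 * sin(phi)
sol = finger_ik(x, y, phi)
print([round(degrees(a), 6) for a in sol])  # -> [20.0, 30.0, 10.0]
```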

3.3 Grasping Space

Using the coordinate relationship between the fingertips and the palm, the grasping space of the humanoid robotic hand can be determined in MATLAB. The simulation results are shown in Fig. 7 (yz-plane view) and Fig. 8 (axial view). The simulation shows that the robotic hand has a large grasping space, with a theoretical grasping diameter of objects ranging from 0 to 166 mm.
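The kind of workspace sweep performed in MATLAB can be sketched for a single finger by Monte Carlo sampling of the joint angles within the rotation ranges of Table 1. This is a simplified single-finger illustration, not the paper's whole-hand study.

```python
from math import cos, sin, radians
from random import Random

L1, L2, L3 = 35.0, 34.0, 23.0                 # link lengths in mm (Table 1)
LIMITS = [(0, 80), (0, 60), (0, 40)]          # MCP, PIP, DIP ranges in degrees

def sample_workspace(n=20000, seed=1):
    """Monte Carlo sketch of one finger's reachable (x, y) points.

    Joint angles are drawn uniformly within the Table 1 rotation ranges
    and pushed through the forward kinematics of Eq. (5).
    """
    rng = Random(seed)
    pts = []
    for _ in range(n):
        t1, t2, t3 = (radians(rng.uniform(lo, hi)) for lo, hi in LIMITS)
        a12, a123 = t1 + t2, t1 + t2 + t3
        pts.append((L1 * cos(t1) + L2 * cos(a12) + L3 * cos(a123),
                    L1 * sin(t1) + L2 * sin(a12) + L3 * sin(a123)))
    return pts

pts = sample_workspace()
print("max reach (mm):", round(max((x * x + y * y) ** 0.5 for x, y in pts), 1))
```

The maximum sampled reach approaches the fully extended length of 92 mm; sweeping all fingers and both thumb-opposition sides is what yields the 166 mm theoretical grasping diameter quoted above.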


Fig. 7. The yz plane view of grasping space

Fig. 8. The axial view of grasping space

4 Transmission Performance Analysis of Finger

4.1 Joint Driving Torque

The grasping motion of a finger is affected by the restoring force generated by the restoring structure, as well as by the efficiency of the transmission mechanism. The position and dimensions of the guiding shaft and pulley directly influence the driving torque of each joint and the output force of the finger. The schematic diagram of the finger transmission is given in Fig. 9. Since the grasping motion is transmitted in a rope-driven fashion, for the same SMA actuator output force an optimal transmission mechanism achieves the maximum driving torque of the finger. To analyze the rationality of the finger structure designed in this paper, the transmission mechanism of the finger is simplified as a combination of pulley blocks, each idealized as a mass point. This simplified model of the transmission mechanism is shown in Fig. 10.

Fig. 9. The schematic diagram of finger transmission

From Fig. 10 we obtain

$$M = 2cT \cos\beta \cos\beta' \quad (9)$$

where $M$ is the driving torque of the joint, $c$ is the force arm of the joint, $T$ is the driving force in the rope generated by the SMA actuator, $G$ is the resultant force acting on the pulley, $G_e$ is one component of $G$, $\beta$ is the angle between $G$ and $G_e$, and $\beta'$ is the complementary angle of $\beta$. Since the physical dimensions of the transmission components cannot be neglected in practice, the diagram of the actual driving force analysis is given in Fig. 11. The geometric relations of the transmission structure are

$$\begin{cases} c = \sqrt{d_1^2 + d_2^2} \\ \alpha = \alpha' + \alpha'' \\ \tan\alpha' = \dfrac{d_1}{d_2} \\ \sin\alpha'' = \dfrac{r_1 + r_2}{c} \\ \beta'' = \dfrac{\pi}{2} - \alpha - \beta' \\ \beta' = \dfrac{\pi}{2} - \alpha'' - \beta \end{cases}$$

Substituting these relations into Eq. (9), the joint driving torque becomes

$$M = 2T \sqrt{d_1^2 + d_2^2}\; \cos\!\left(\frac{\alpha' - \alpha'' + \beta''}{2}\right) \cos\!\left(\frac{\pi - \alpha - \beta''}{2}\right) \quad (10)$$


Fig. 10. Simplified model of transmission mechanism

Fig. 11. The diagram of actual driving force analysis of transmission structure

Here $d_1$ is the horizontal distance between pulleys, $d_2$ the vertical distance between pulleys, $r_i$ the radius of a pulley, $\alpha'$ the angle between the horizontal line and the line connecting the mass points of adjacent pulleys, $\alpha''$ the angle between the rope and that connecting line after simplification, and $\beta''$ the angle between the horizontal line and the rope wound at the fingertip. From Eq. (10), the joint driving torque becomes larger as the pulley radius and the distance between pulleys increase.

4.2 Driving Displacement

Driving displacement is another important performance index of the grasping motion. The driving displacement of the DIP joint is

$$l_{DIP} = \frac{\theta_1}{360} \cdot 2\pi r_2 \quad (11)$$

Due to the small dimensions of the guiding shaft, its influence on the calculation of the driving displacement can be neglected. The driving displacements of the PIP and MCP joints can therefore be expressed as Eqs. (12) and (13):

$$l_{PIP} = \frac{\theta_2}{360} \cdot 2\pi r_3 \quad (12)$$

$$l_{MCP} = \frac{\theta_3}{360} \cdot 2\pi r_4 \quad (13)$$

where $\theta_1$, $\theta_2$, and $\theta_3$ are the rotational angles of the DIP, PIP, and MCP joints. The driving displacement of the finger is the sum of all the joint displacements:

$$L_{total} = l_{DIP} + l_{PIP} + l_{MCP} \quad (14)$$

From Eq. (14), the driving displacement required for given joint rotations becomes smaller as the pulley radii decrease. The transmission performance analysis therefore suggests decreasing the pulley dimensions, so that the available driving displacement produces larger total joint rotations, while appropriately increasing the dimensions of the guiding shaft to increase the joint driving torque.
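Equations (11)-(14) reduce to arc-length arithmetic. The sketch below evaluates them for the full joint rotations of Table 1; the pulley radii r2, r3, r4 are not given numerically in the text, so the 4 mm value used here is an assumption for illustration.

```python
from math import pi

def joint_rope_displacement(theta_deg, r_mm):
    """Rope displacement for one joint, per Eqs. (11)-(13): l = (θ/360)·2πr."""
    return theta_deg / 360.0 * 2 * pi * r_mm

# Full joint rotations from Table 1 (DIP 40°, PIP 60°, MCP 80°) with an
# assumed 4 mm radius for every joint pulley.
joints = {"DIP": (40, 4.0), "PIP": (60, 4.0), "MCP": (80, 4.0)}
L_total = sum(joint_rope_displacement(t, r) for t, r in joints.values())
print(f"L_total = {L_total:.3f} mm")  # Eq. (14): sum of the joint displacements
```

With these assumed radii, fully flexing the finger needs roughly 12.6 mm of rope travel, which is close to the 13.758 mm delivered by the amplified SMA actuator in Sect. 2.4.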

5 Control System of Hand 5.1

Control Mode

The control system is a vital component in humanoid robotic hand, as it determines the grasping performance of the system. The hand designed in this paper generates underactuated grasping movement, so we are not concerned with each joint rotation angle of the hand during the grasping process, and the hand can form enveloping grasping to adapt to object shape. The grasping control problem focuses on how to control the output displacement of SMA actuator. There are many factors influencing the output displacement of SMA actuator such as driving voltage, driving current, environment temperature, mounting position error and so on. In reality, we can only control the driving voltage and current of SMA actuator to obtain different grasping forms, while environment temperature and other uncontrollable factors also influence grasping effect. As a result, it is very difficult to control finger position precisely. Considering the SMA actuator working under a safety temperature threshold, the temperature interrupt protection control strategy is proposed for different grasping tasks. The flow diagram of the control system is shown in Fig. 12. The main programs of the control system, the program handling interrupt protection and the program that executes motion are presented in Figs. 13 and 14, respectively. The working process of the control system begins with checking whether the key has been pressed on the manual button. There are five key settings on the manual button, and each key has corresponding function such as stop operation, cool SMA actuator, grasping motion 1, grasping motion 2 and grasping motion 3. They represent the five different working modes of the humanoid robotic hand. When the system checks for any key pressed, the control system will execute corresponding program

52

Y. Chen et al.

Fig. 12. The flow diagram of control system

Fig. 13. The flow diagram of temperature interrupt protection

Design of Modular Humanoid Robotic Hand Driven by SMA Actuator

53

Fig. 14. The flow diagram of grasping motion

command. The temperature interrupt protection executes during all operations: if the temperature of the SMA actuator exceeds the safety temperature at any time, the temperature interrupt protection program starts.

5.2 Hardware

In this humanoid robotic hand, we selected the Arduino 2560 as the control unit. The control unit provides plenty of I/O ports: 54 digital I/O ports (16 of which can work as PWM outputs), 16 analog input ports, 6 external interrupt ports, and other communication ports. The Arduino 2560 is not only inexpensive but also has a large amount of open-source code available, which greatly simplifies project efforts. The temperature protection module used to monitor the SMA actuator temperature is commercially available; when the SMA actuator exceeds the pre-set safety temperature, the module sets a digital output value to zero. The SMA actuator is powered through a voltage regulation module whose output is adjusted by the PWM output of the Arduino 2560. The hardware setup of the control system is shown in Fig. 15.

Fig. 15. Hardware setup of the control system
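The control strategy of Sects. 5.1 and 5.2 can be summarized in a short simulation sketch; the mode table, threshold value, and duty cycle below are illustrative assumptions rather than values from the paper:

```python
# Minimal sketch of the control strategy described above: key-selected
# working modes plus temperature interrupt protection. The 80 degC threshold,
# mode names, and PWM duty cycle are illustrative assumptions only.
SAFETY_TEMP_C = 80.0  # assumed pre-set safety temperature

MODES = {
    1: "stop",
    2: "cool",      # cool the SMA actuator (power off, fan on)
    3: "grasp_1",   # five-finger grasping
    4: "grasp_2",   # three-finger grasping
    5: "grasp_3",   # two-finger grasping
}

def control_step(key_pressed, actuator_temp_c):
    """Return (mode, pwm_duty) for one pass through the main loop."""
    # Temperature interrupt protection runs during all operations:
    if actuator_temp_c >= SAFETY_TEMP_C:
        return "protect", 0.0          # cut SMA power until it cools
    mode = MODES.get(key_pressed, "idle")
    if mode.startswith("grasp"):
        return mode, 0.6               # drive SMA via PWM voltage regulation
    return mode, 0.0                   # stop / cool / idle: no drive voltage

print(control_step(3, 45.0))   # ('grasp_1', 0.6)
print(control_step(3, 85.0))   # ('protect', 0.0)
```

The interrupt check deliberately comes before mode dispatch, mirroring the flow diagrams in Figs. 12 and 13 where protection preempts every grasping motion.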

6 Grasping Tests

The control system hardware is packaged in a 3D-printed box. A human can operate the manual button to make the hand work, and we designed three grasping modes in advance: five-finger grasping, three-finger grasping, and two-finger grasping, which can achieve adaptive grasping of objects of different shapes and dimensions. In this paper, we use a variety of daily-life goods of varying shape and dimension to test the different grasping modes. The grasping tests performed with this humanoid robotic hand are shown in Fig. 16. From the test results, the daily-life objects can be grasped stably, and the humanoid robotic hand can complete pick-and-place tasks as well as manipulation tasks. During power-enhanced grasping tests, we found that the fan sometimes turned on, i.e., the temperature of the SMA actuator exceeded the safety temperature threshold, and the interrupt protection successfully protected the SMA actuator during grasping. All the tests demonstrate the good working performance of the control system and of the humanoid robotic hand itself. Table 3 shows some major performance parameters of the humanoid robotic hand.


Fig. 16. Grasping tests of robotic hand: (a) bottle, (b) scissors, (c) bowl, (d) strawberry.

Table 3. Performance parameters of humanoid dexterous hand

Item                     Parameter
Grasping diameter        ≤ 120 mm
Grasping frequency       10 times/min
Maximum grasping weight  500 g

7 Conclusion

A novel modular humanoid robotic hand driven by SMA actuators is proposed and fabricated by 3D printing in this paper. It is convenient to assemble and disassemble because of its modular design, and it takes little time for a researcher to design and fabricate a prototype. The hand has a shape and dimensions similar to those of a human hand, with five fingers consisting of four identical fingers and one thumb. The robotic hand is also lightweight and compact, and it is easy to design an interface at the end of the hand so that it can be affixed to the end joint of a robotic arm to test manipulation capabilities. The displacement amplification pulley and the permanent magnet restoring structure are original designs that improve the grasping of objects. The


innovative displacement amplification pulley can increase the output displacement of the SMA actuator, and the novel permanent magnet restoring structure can generate a larger restoring force during grasping than a conventional spring restoring structure. The daily-life goods grasping tests demonstrated the good grasping performance of the hand: according to the test results, the hand can grasp an object weighing up to 500 g with a diameter of no more than 120 mm, and the grasping frequency of the hand was tested to be 10 times/min.

Acknowledgments. This work was supported by the National High Technology Research and Development Program of China (863 Program) under Grant No. 2015AA042302 and the National Natural Science Foundation of China under Grant No. 61573093. The authors would like to thank the IROS 2016 committee for giving us the chance to participate in the Robotic Grasping and Manipulation Competition, primary contact person Yu Sun for his hard work, the Korean government for funding reimbursement of our team’s travel to IROS 2016, and all the volunteers at IROS 2016. Yang Chen designed the control system of the hand, carried out the grasping tests, and wrote the paper. Shaofei Guo designed the modular humanoid robotic hand and participated in the grasping tests. Hui Yang participated in debugging the control system program. Lina Hao inspired us to design a robotic hand driven by an artificial muscle actuator and reviewed this manuscript.


The TU Hand: Using Compliant Connections to Modulate Grasping Behavior

Dipayan Das, Nathanael J. Rake, and Joshua A. Schultz

The University of Tulsa, Tulsa, OK 74104, USA
[email protected]
http://personal.utulsa.edu/~jas019/index.html

Abstract. Guided by the notion that the five-fingered anthropomorphic hand is a good general purpose manipulator, Team Tulsa approached the hand-in-hand portion of the grasping and manipulation competition using a simplified anthropomorphic hand. The hand had a simplified thumb, fixed in the opposed position, and only two actuators. Motions of the fingers and thumb were coupled together using a “ties and skips” architecture, where thumb and finger tendons were tied to specific coils of a “mainspring” in a manner that produced the best behavior across the wide range of challenges. The actuators could move or deform the spring in common mode (which resulted in an enveloping grasp) or differential mode (which resulted in a pinch grasp), and could superimpose the two modes. The compliant nature of the hand allowed the fingers to conform to the object as the grasp was acquired. This strategy allowed the retrieval of all objects from the basket (all on the first or second attempt by the volunteer) and scooping peas from the dish, but could not operate the hammer (due to its weight), the syringe, or the scissors (as they required increased dexterity).

Keywords: Physical compliance · Grasping · Transmission mechanisms

1 Introduction

The grasping and manipulation competition at IROS represented an auspiciously timed opportunity to evaluate the newly developed University of Tulsa Anthropomorphic Robotic Hand (or TU Hand) at performing basic grasping and manipulation tasks in real-world environments. This was an excellent opportunity to evaluate whether a greatly simplified, moderately dexterous anthropomorphic hand is the right approach to completing tasks in unstructured environments.

Thanks to Fahad Ansari, University of Tulsa Junior, for serving as the unaffiliated volunteer for our simulated competition in Tulsa. This research was supported by NSF NRI 1427250.

© Springer International Publishing AG, part of Springer Nature 2018
Y. Sun and J. Falco (Eds.): RGMC 2016, CCIS 816, pp. 57–83, 2018. https://doi.org/10.1007/978-3-319-94568-2_4

58

D. Das et al.

The development process of the TU Hand has been a “bottom-up” approach, meaning that we begin with a very simple hand and increase complexity as we find it is needed, rather than a “top-down” approach, where one begins with a fully actuated (or even overactuated) hand and then simplifies the design by removing degrees-of-freedom, actuators, or other elements and assesses the resulting impact on the hand’s functionality. The “top-down” approach is certainly a valid one; it has been espoused by the designers of the Anatomically Correct Testbed Hand (ACT) [1–5] to great effect. However, a top-down approach is limited to high degree-of-actuation hands, and making modifications to simplify or economize such a hand in a research setting involves risk because it is costly and difficult to repair. With the “bottom-up” approach the risk is much less.

We chose to compete in the hand-in-hand competition because we did not have a robotic arm available to us, and because it gave us more opportunity to study the object-hand behavior as the task was completed, without being obscured by effects from the arm kinematics, dynamics, or other features of operation more proximal to the hand.

The TU Hand was therefore designed and built in the spirit of the iRobot-Harvard-Yale hand [6] (now commercialized by RightHand Robotics) [7], the idea being that with key rational design choices, a hand with a small number of actuators can be made with a great deal of dexterity. To this spirit of simplicity, we add anthropomorphism; namely, the hand contains two distinct types of entities—fingers and a thumb.

One might ask why the TU team decided to compete with an anthropomorphic hand. Anthropomorphic hands have been described by Cutkosky as “a good general purpose manipulator” [8]. They are also intuitive to operate—important in the hand-in-hand competition because the volunteer will naturally tend to acquire grasps as he or she would with his or her own hand.
Working under the assumption that humans have learned to acquire grasps of high grasp quality [9], the anthropomorphic morphology predisposes the hand to these types of grasps. With a non-anthropomorphic hand, grasps may be planned that are of equal or better grasp quality by GraspIt! [10], but the volunteer may not have knowledge of how to plan a high quality grasp due to the unfamiliar morphology. The tasks in the grasping and manipulation competition require prehensile grasps for nearly all tasks. A prehensile grasp in the anthropomorphic context means that one contact point is imposed by the thumb, and all others can be described relative to this contact point [11]; fingers can be arranged as a single “virtual finger” to produce a pencil grasp, or distributed over the object to produce a concurrent grasp [12]. From this standpoint, the thumb is a “base” for the grasp, and must be able to resist the contact forces of the individual fingers. One important realization regarding the design and construction of the robotic hand is that not all grasps need to be force closure. Looking at the tasks individually, it is not difficult to construct a rough estimate of the task wrench space [11] (6 dimensional force-torque space where each coordinate represents the magnitude of force or torque expected to be applied). Since the hand can be constructed to produce grasps that can cover each of these task wrench spaces,


rather than a grasp wrench space that extends out a significant distance in all directions, the design and construction of the hand can be greatly simplified, as will be described in the sections to follow. The hand that resulted is a modified version of the hand presented in [13]. Actuation and some automatic computer control were added. In addition, modifications were made to the thumb specific to completing the tasks of the competition. The new version of the thumb was actuated according to our underactuation strategy rather than being “poseable” by the contralateral hand as in the prior version. The earlier version of the thumb was referred to by the authors as the “GI Joe-version” of the thumb, because it had to be placed in position manually as one would the limbs of a GI Joe toy. This may seem like an odd choice, but can be surprisingly functional; the thumb of the BeBionic prosthetic hand also requires some positioning by the contralateral hand of the wearer for the abduction/adduction degree of freedom. Nevertheless, the “poseable” thumb was not suitable for the needs of the competition and actuation needed to be added. This was done without increasing the number of actuators.

2 What It Means to “Underactuate Compliantly”

Due to the small space available in a hand, it is technically challenging and expensive to place actuators directly along the joint axis. Even in a completely intrinsic hand (where all motors are distal to the wrist), actuators are still placed remotely from their joint axes (most often in the palm). This means that some sort of transmission mechanism is necessary to convert motion of the actuators into motion of the joints. Most often this is accomplished by linkages, as in the SARAH hand [14], or by tendons, as in Hirose’s seminal soft finger design [15]. The human hand is subject to the very same constraints as are found in robots. In the human body, the high-strength muscles (e.g. flexor digitorum profundus) are extrinsic to the hand, with tendons that pass through the carpal tunnel [16], and the lower-strength intrinsic muscles (e.g. interossei) act through an ingenious tendon network to modify the pose of the hand. Our long-term vision is to capture these features for a robust robotic hand design that captures the most pronounced aspects of human behavior.

Whether it be due to biological inspiration or the real-world challenges of fitting the necessary actuators into a hand, underactuation has been a fact of life since the very earliest robotic hands. Figure 1 shows a sampling of research robotic hands, chosen for either historical significance or because they were particularly influential on our work. Anthropomorphic¹ hands are shown by the closed circles, and non-anthropomorphic hands by the open circles. The figure shows the number of actuators vs. the number of joints (as each hand includes varying degrees of simplification of the hand kinematics). The solid line corresponds to n + 1, where n is the number of joints, due to the well-known relation that n + 1 actuators is the minimum number that can independently move n joints (this arises from the unilateral constraint that tendons can only pull) [18]². Tendon-driven n + 1 hands often require non-intuitive routings and complicated inverse joint-actuator mappings, so this configuration is generally avoided. The Belgrade-USC Hand is considered by many to be the first example of a robotic hand in the truest sense [19]; note that it is well below the n + 1 line. However, the early Utah-MIT dextrous hand [20,21] and Stanford-JPL hand [22] adopt 2n antagonist configurations, as does the more recent DLR hand [23]. The ACT hand [3] adopts a bio-mimetic tendon network. The remaining hands lie below the line and must accept some amount of within-finger joint coupling or even finger-to-finger coupling. Many different design choices are made within this figure to simplify the hand: eliminating abduction-adduction motions of the fingers [24,25], coupling all three joints of the finger for a curling motion [26,27], coupling the two interphalangeal joints but with independent metacarpophalangeal motion [23], reducing the number of fingers [21], making fingers “poseable” [26], and causing fingers to move together identically [25]. Dashed lines denote that the Schunk anthropomorphic hand [28] is a simplified version of the DLR hand and that the TU Hand, Version II (currently under construction) is based on the TU Hand, Version I (which was used in the competition).

¹ In this work, “anthropomorphic” can be taken to mean having a distinct “thumb” placed in opposition to the fingers, with distinct differences in morphology and kinematics from the remaining fingers, and with morphology of the fingers generally human in kinematics, appearance, and proportion. Not all are five-fingered. For a more quantitative measure of anthropomorphism, the reader is referred to Liarokapis [17].

Hands allow humans to interact with their environment in many possible ways. The human hand is capable of an array of diverse things such as powerful grasps and delicate manipulation.
This is despite the fact that humans do not have complete finger individuation [29]; they exhibit coupled motions of the fingers. This idea is very important, especially in the field of robotics, because it presents the possibility of creating robotic hands with human-like functionality without independently controlling each of the hand’s fingers. Thus underactuated systems were created: hands with more degrees of freedom than actuators [30]. This is useful in many ways, especially in humanoid robots, because underactuated systems usually weigh less and cost less than their fully actuated counterparts. Underactuated robotic hands often utilize differential mechanisms [31], because naturally conforming to a shape is an innate advantage of underactuated hands.

Fully actuated hands are able to precisely select contact points on a grasped object and precisely control contact forces based on the inverse kinematics and Jacobian of each finger (collectively called the hand Jacobian) [32]. This is wonderful if the system has an exact representation of the object, but it breaks down quickly in uncertain environments. Imagine that a robot has a representation of a glass of water that is 20% larger than the true glass. It² servos to the desired contact locations and then proceeds to drop the glass! Impedance control can be used to address this as in other contact problems, but object rotation during grasping can still cause problems. It is in this regard that underactuation can pose an advantage: the coupling in the fingers of the hand can be architected to conform to an unknown object. This is used to great effect in the Velo gripper [33] and the SARAH hand [31], where static indeterminacy in the fingers causes them to conform to the object. Other hands, such as the TBM hand [34] and the University of Pisa/IIT SoftHand [35], use built-in hardware compliance to “absorb” any uncertainties in the object. In essence, they convert displacement uncertainty to force uncertainty using the constitutive behavior of a spring. Uncertainty in the contact forces (particularly the component lying in the null space of the grasp matrix) is less likely to destabilize an object than a lost contact point due to position uncertainty.

² In principle, linkage-based systems could be driven bidirectionally and independently drive n joints with only n actuators, but the design of such a mechanism is extremely challenging. All linkage-based hands the authors are aware of are underactuated.

Fig. 1. Underactuation design choices made by several historic and research hands

The TU Hand was born out of the desire to think of a robotic hand as a unit, rather than a collection of individual manipulators that happen to be manipulating the same object. This perspective is central to how underactuation is accomplished in the hand. It uses hardware compliance to encode synergistic movements of the fingers. Unlike these prior underactuation solutions, where the compliance is an “add-on” to accommodate uncertainty, the compliance is
part of the operating principle of the hand. The TU Hand breaks the mold of underactuation being a decision about which fingers or joints “get their own actuator”. Ever since Santello, Soechting, and Flanders outlined the principle of synergies (namely, that the first two principal components account for 80% of the variance and the first three for ∼90% of the variance among grasps of 57 objects [36]), many groups have searched for a way to build a “synergistic” hand, where each actuator moves the fingers in a manner consistent with a basic human action. Notable works include a matrix formulation by Gabiccini et al. [37] and the use of differential mechanisms as in Brown and Asada [38]. The TU Hand ties this concept to the physical manifestation of the hand in a manner where the design tradeoffs can be easily selected.

The actuators do not pull the finger tendons; the actuators pull on the compliant transmission mechanism, and the compliant transmission mechanism, in turn, pulls on the finger tendons. In this way, a small number of actuator actions are mapped to a larger number of tendons, each and every actuator can have an effect on any or all fingers, and the effects of each actuator are additive by the principle of superposition in linearly elastic mechanics of materials. The synergistic actions are selected by setting the compliance characteristics of the transmission mechanism, which is encapsulated in the immittance matrices of its constituent parts [39]. The compliant transmission mechanism not only allows the hand to conform to unknown or uncertain objects, it also allows the designer to “program in” the hand’s behavior by selecting the characteristics of the compliant mechanism. This concept can be extended to as many actuators and as many tendons as there is space available.
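The principal-component result cited above can be illustrated with a small, self-contained sketch. The joint-angle data below are synthetic and purely illustrative, not Santello et al.'s measurements:

```python
# Illustrative sketch of extracting postural "synergies" by principal
# component analysis, in the spirit of Santello et al. [36]. The data here
# are SYNTHETIC: 57 "grasp postures" over 15 joint angles, constructed so
# that two underlying synergies explain most of the variance.
import numpy as np

rng = np.random.default_rng(0)
synergy_1 = rng.normal(size=15)
synergy_2 = rng.normal(size=15)
weights = rng.normal(size=(57, 2))
postures = weights @ np.vstack([synergy_1, synergy_2]) \
           + 0.1 * rng.normal(size=(57, 15))          # small residual noise

# PCA via eigendecomposition of the sample covariance matrix.
centered = postures - postures.mean(axis=0)
cov = centered.T @ centered / (len(postures) - 1)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]      # descending order
explained = eigvals / eigvals.sum()

# By construction, the first two components dominate the variance:
print(explained[:2].sum())  # close to 1.0 for this synthetic data
assert explained[:2].sum() > 0.8
```

A synergistic hand design effectively hard-wires a small number of such dominant components into the transmission, which is what the mainspring architecture described next accomplishes mechanically.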
In the context of the competition, the task for our team was to select the mechanism with the most suitable immittance matrix so that the resulting basic motions of the hand would be most conducive to completing the tasks. The physical manifestation of the compliant mechanism and how it was selected are described in Sect. 3.

A mechanism is a mechanical device used to transfer or transform motion, force, or energy. Traditionally, a rigid-body mechanism is made of rigid links connected via movable joints. Mechanisms like these are prone to friction and loss of energy, and are difficult both to make and to maintain. Compliant mechanisms also transform motion, force, or energy; unlike rigid-link mechanisms, however, they gain their mobility from the deflection of flexible parts rather than from joints that move. An advantage of compliant mechanisms is that they can dramatically reduce the total number of parts required to accomplish a specific task. Some mechanisms may be manufactured by injection molding as just one piece, which can reduce manufacturing and assembly time and cost. Compliant mechanisms usually have a smaller number of movable joints, such as pin (turning) and sliding joints, so they have reduced wear and less need for lubrication. If the mechanism is not easily accessible, or must operate in rough environments, a compliant mechanism may be the only option. The reduced number of joints may also increase mechanism precision because of reduced, or completely absent, backlash. Vibration and noise caused by the turning and sliding joints of rigid-body mechanisms are likewise not present in a compliant mechanism. Because a compliant mechanism is based on the deflection of flexible parts, energy is stored in the form of strain energy. This stored energy is similar to the potential energy in a deflected spring, and the effects of springs may be integrated into a compliant mechanism’s design. This can be useful for storing energy to be released later or in a different form.

Just as there are a number of advantages associated with compliant mechanisms, there are also disadvantages. One is that analyzing and designing compliant mechanisms requires specialized expertise: knowledge of mechanism analysis methods and of the deflection of flexible members is required. Another is that a compliant mechanism cannot produce continuous rotational motion such as that possible with a pin joint. In his work on compliant mechanisms, Howell [40] made all of the key observations mentioned above about how they are distinct from traditional rigid-body mechanisms. After reviewing the advantages and disadvantages, a compliant mechanism was chosen as the transmission mechanism mainly because of its simplicity, light weight, and easily replaceable nature.

3 Hand Design and Construction

The TU Hand uses two actuators to control the movements of either end of a central spring called the “mainspring”. The mainspring in turn produces excursions of the five individual finger tendons that actuate the five anthropomorphic digits of the hand. This design creates a compliant transmission between the hand’s actuators and fingers. Through this compliant transmission, the movement of each actuator has an effect on all of the hand’s fingers, allowing the hand to assume a variety of grasps by combining the movements of the actuators. While this strategy significantly simplifies the control of the hand, it reduces the number of poses the hand is able to assume. However, by judiciously selecting the points at which the finger tendons connect to the mainspring, it is possible to alter the range of possible poses.

3.1 Morphology and Anthropomorphism

The five digits of the TU Hand are comprised of four identical fingers and a single thumb. Each finger consists of four “bones”, or links, which are formed of 3D-printed ABS plastic. The lengths of these bones, as well as their human anatomy analogs, are given in Table 1. The most proximal finger bone is rigidly attached to the palm of the TU Hand (which is made of carbon fiber composite for light weight and rigidity). Bones are connected to one another by pin joints that are held in extension by internal torsion springs when the actuators are relaxed. The axes of these joints are parallel, allowing the finger to move in a single plane but omitting the adduction/abduction movements of which a human finger is capable.

Table 1. Links of the fingers of the TU Hand

Bone number        Bone 1      Bone 2            Bone 3          Bone 4
Biological analog  Metacarpal  Proximal Phalanx  Middle Phalanx  Distal Phalanx
Length (cm)        2.540       4.445             3.175           2.900

A finger is actuated by pulling a single “tendon” (analogous to the flexor digitorum profundus in the human finger) of braided Dyneema that is routed internally through the finger over a series of thin steel pins on the finger’s palmar side before being anchored to a steel pin on the most distal bone. These cylindrical pins, which are only 1.588 mm in diameter, provide discrete points of contact between the tendon and the bones. This was found to operate better than using pulleys, as the friction of the Dyneema tendon over the pins is minimal. The locations of these pins, as well as other finger dimensions, are shown in the annotated drawing of Fig. 2.

Fig. 2. Model of a finger of the TU Hand.

As a finger’s tendon is pulled by the hand’s actuation system, the force is distributed to each of the finger joints through the steel routing pins, and the joints flex, deforming their internal torsion springs. The degree to which each joint flexes is determined by the relative stiffness of its torsion spring compared to the stiffness of the springs at the other joints. Therefore, by selecting a certain ratio of joint stiffnesses, the free-space trajectory of the finger may also be selected. It was determined that a natural, free-space finger motion, such as that used in [41], was desirable for the presented challenges, and a trajectory was selected in which all finger joints open and close uniformly. Because the path of the flexor tendon over the steel routing pins is known, it was possible to determine the ratio of torsion spring stiffnesses required to produce the uniform finger trajectory. This ratio was determined to be 1:1:1, that is, all spring stiffnesses


Fig. 3. Flexion of a finger of the TU Hand. As the flexor tendon is pulled, the finger flexes uniformly

were identical. This unique result is due to the symmetric configuration of flexor routing pins about each joint. The springs selected for use in the fingers have a spring constant of 0.016 N-m/rad. Flexion of a finger of the TU Hand is shown graphically in Fig. 3.

Since the challenges do not require grasps that need thumb abduction-adduction, the thumb of the TU Hand used in the competition is greatly simplified from the one proposed in [42] (planned for future iterations of the hand). It has only three links, connected by two joints with parallel axes, the most proximal of which is rigidly connected to the palm. This is similar to a human thumb with a fused metacarpophalangeal joint (middle joint), as in a Steiger Arthrodesis, which usually leaves humans capable of doing most everyday tasks despite the loss of motion. Like the fingers, the thumb is actuated by a single flexor that works in opposition to torsion springs located at both of its joints. Flexion of the thumb is shown graphically in Fig. 4. The thumb is fixed in the opposed position. This effectively eliminates many of the grasps in Cutkosky’s taxonomy [43], but it can be constructed with a simple pin joint where it joins the palm of the hand, rather than a two degree-of-freedom condyloid or saddle joint that would be more difficult to control and more vulnerable to mechanical failure. For the purposes of this competition, this seemed an acceptable tradeoff.

3.2 Using a “Ties and Skips Pattern” to Produce a Pinch Grasp and Power Grasp

The transmission mechanism shown in Fig. 5 between the fingers and the actuators is one of the key components of the TU hand. The mechanism consists


Fig. 4. The thumb of the TU hand is actuated by a single flexor tendon that operates in opposition to torsion springs located internally at each joint

of two sliders and a spring, the “mainspring” [13]. Each finger’s tendon is tied to a different coil of this spring (given in Table 2), and the two actuators are connected to the ends of the spring. Because certain coils have a finger tendon tied to them and others are skipped, the pattern of occupied coils connecting the actuators and the fingers is referred to as a “ties and skips” pattern. Because nearly all objects specified in the competition can be held securely in either a two-finger pinch grasp or a power grasp, the ties and skips are chosen carefully so that power and pinch grasps can be produced naturally by simple combinations of the actuators. The pattern is as follows: the front end of the spring is connected to actuator 1, followed by the pinky finger, ring finger, middle finger, and index finger. The thumb is connected only a few coils away from the index finger, mainly because in our strategy for this competition the thumb is just a supporting base for grasping any object. To limit the total compliance in the crucial two-finger pinch grasp, the index finger and thumb were tied only a few coils apart on the spring, so that force and displacement are nearly the same for both digits, which is an advantage in a pinch grasp. Finally, the rear end of the spring is attached to actuator 2. If actuator 1 alone moves while actuator 2 holds its position, the spring relaxes and (in free space) finger displacement is distributed in increasing order from pinky to index finger. If actuator 2 alone moves while actuator 1 is held fixed, the spring extends and finger displacement is in decreasing order from pinky to index finger, which forms a pinch grasp. Because this moves the ends of

The TU Hand: Using Compliant Connections


Fig. 5. Transmission mechanism of the TU hand used in the competition; encircled is one of the 5 ties

Table 2. Ties and skips pattern used in competition. Coils are numbered in sequential order from motor 1

Finger   Coil to which it is attached
Pinky    16
Ring     27
Middle   44
Index    56
Thumb    59

the spring away from each other, we refer to this motion as differential mode. When actuator 1 and actuator 2 both move by the same amount, the spring (in free space) maintains the same length and a power grasp is formed (Fig. 8a). A multiport model of a compliant mechanism gives the following force-displacement relationship:

f = Sδ,  (1)

where f ∈ R^n is the force at each port, δ ∈ R^n is the displacement at each port, S ∈ R^{n×n} is a symmetric matrix with units of [linear] stiffness, and a port is a physical point of connection to the compliant mechanism [44]. Equation (2) shows the stiffness matrix for a multiport compliant mechanism with n ports. The physical compliance of the mainspring allows us to superimpose common and differential modes, and the multiport model allows us to predict its behavior. Choosing the ties and skips pattern that gives the optimal behavior


can be daunting for more than two or three tendons, but we can approximate the overall behavior of the hand by considering the angles that the eigenvectors associated with the two largest eigenvalues of the stiffness matrix make with the actuator subspace and the finger tendon subspace of R^n. As a general rule, for an effective hand these eigenvectors should lie at angles between 30° and 60° to the two subspaces; outside this region the actuator efforts are not transmitted well to the fingers [13]. The ties and skips pattern used for the competition (shown in Table 2) is modified from the one in this previous work, in which differential mode produced an “ice cream cone-shaped” grasp (similar to the third synergy identified by Santello, Flanders, and Soechting), to one producing a pinch grasp, which made the hand competition-ready. Both grasps are illustrated in Fig. 6.

S = \begin{pmatrix}
k_1 & -k_1 & 0 & \cdots & 0 & 0 \\
-k_1 & k_1 + k_2 + k_p & -k_2 & 0 & \cdots & 0 \\
0 & -k_2 & \ddots & \ddots & 0 & \vdots \\
\vdots & 0 & \ddots & \ddots & -k_{n-1} & 0 \\
0 & \cdots & 0 & -k_{n-1} & k_{n-1} + k_n + k_p & -k_n \\
0 & \cdots & 0 & 0 & -k_n & k_n
\end{pmatrix}  (2)
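As a sanity check on this model, the relation f = Sδ with the tridiagonal S of (2) can be sketched numerically. Everything numeric below is an illustrative assumption rather than a measured hand parameter: segment stiffnesses are taken inversely proportional to the number of coils they span, kp stands in for the finger return-spring stiffness reflected to the tendons, and the rear (actuator 2) attachment coil, which the text does not give, is assumed to be coil 62.

```python
def stiffness_matrix(k, kp):
    """Tridiagonal S of Eq. (2): len(k) spring segments -> len(k)+1 ports."""
    n = len(k)
    S = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i, ki in enumerate(k):          # each segment couples ports i and i+1
        S[i][i] += ki
        S[i + 1][i + 1] += ki
        S[i][i + 1] -= ki
        S[i + 1][i] -= ki
    for j in range(1, n):               # interior (tendon) ports see kp to ground
        S[j][j] += kp
    return S

def solve(A, b):
    """Gauss-Jordan elimination, adequate for the small dense systems used here."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Ties at coils 0, 16, 27, 44, 56, 59 (actuator 1, then Table 2) and an
# assumed rear attachment at coil 62 for actuator 2.
ties = [0, 16, 27, 44, 56, 59, 62]
k = [100.0 / (b - a) for a, b in zip(ties, ties[1:])]  # illustrative scale
S = stiffness_matrix(k, kp=2.0)

# Differential mode: actuator 1 (port 0) moves one unit, actuator 2 (port 6)
# held. Free-space tendon displacements satisfy (S d)_j = 0 at the tendon ports.
interior = range(1, 6)
A = [[S[i][j] for j in interior] for i in interior]
b = [-S[i][0] * 1.0 - S[i][6] * 0.0 for i in interior]
d_tendons = solve(A, b)
print([round(d, 3) for d in d_tendons])  # displacement decays along the chain
```

With the grounding stiffness kp present, the displacement decays monotonically from the moved actuator toward the held one, which is the qualitative behavior the text describes for the two actuation modes.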

The earlier version of the TU hand produced cylindrical and ice-cream-cone grasps, and superpositions of the two. The eigenvectors associated with the first two eigenvalues, v1 and v2, of the S used in that work form angles of 34.1° and 21.8° with the finger tendon space and 55.9° and 68.2° with the actuator space. For the competition, the strategy was to produce a pinch grasp using the differential mode; for that pattern, v1 and v2 form angles of 58.2° and 31.7° with the finger tendon space and 71.1° and 18.3° with the actuator space, which is not ideal but performed effectively in the competition. The ties and skips architecture cannot produce arbitrary stiffness matrices (note that (2) is tridiagonal), so while other choices of v1 and v2 might have been more efficient in terms of actuator effort, they may not have been achievable in this physical architecture because the individual elements of S are coupled through the physical parameters [44].

3.3 Common and Differential Modes

The TU hand can execute motions corresponding to two different synergies. Specifying the superimposed amounts of each of these two synergies is possible through the two modes of operation described in Sect. 3.2, common and differential modes (shown in Fig. 8). In common mode the total extension or contraction of the spring is zero, but the distance from mechanical ground to each node (tendon “tie”) increases equally, which produces the same free-space displacement for all fingers, or increases the contact forces in the grasp roughly proportionally with increased actuator excursion (though the actual amounts depend on the profile of the object and the hand pose).

Fig. 6. The TU hand showing two of its synergies

In differential mode the net extension or contraction of the spring varies, which makes the displacement of each finger different. Any hand using this compliant architecture will therefore have the same common-mode synergy, while its differential-mode synergy behaves differently according to the ties and skips pattern chosen for the hand. The advantage of having these two different synergies is that the TU hand can superimpose them, maintaining a good balance between force and grasp posture when grabbing objects. The user has the flexibility of choosing how much of each synergy to use at any given time. One such instance is shown in Fig. 7.

3.4 Actuation

The actuation system of the TU hand is driven by two STP-MTR-23055 stepper motors. These two-phase, bipolar stepper motors can each produce a holding torque of 1.186 N-m and have a resolution of 200 steps per revolution, or 400 half-steps per revolution, depending on the stepping mode used. The rotational motion of each motor is converted to linear motion through a ballscrew coupled to the motor shaft. The ballscrews have a pitch of 5 mm and drive carriages that connect to the actuation ports of the “mainspring” of the TU hand. It is possible to estimate the maximum linear force F that can be applied to each actuation port of the mainspring by using (3), where T is the motor torque, η is the efficiency of the ballscrew, and P is the ballscrew pitch. Assuming an efficiency of 90%, the maximum linear force is found to be 1341.33 N, which is


Fig. 7. The TU hand conforms to an ice cream cone and softly grasps it using the superposition of its two synergies

more than enough to overcome the torsion spring forces of the fingers and provide a stable grasp. There is a limitation to the actuation unit: the transmission mechanism, i.e. the spring, cannot pass through itself. This means that if one of the fingers is blocked, it is possible for an adjacent finger to be blocked too, depending on the order of the ties and skips. In the Conclusion and Future Directions section we discuss possible ways to overcome this.

F = \frac{2\pi \eta T}{P}  (3)
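Plugging the stated numbers into (3) reproduces the quoted force. A quick sketch, where the 90% efficiency is the assumption stated above:

```python
from math import pi

# Linear-force estimate of Eq. (3): F = 2*pi*eta*T / P
T = 1.186    # holding torque of each STP-MTR-23055 stepper, N*m
eta = 0.90   # assumed ballscrew efficiency
P = 0.005    # ballscrew pitch, m (5 mm)

F = 2 * pi * eta * T / P
print(round(F, 2))  # ≈ 1341.33 N, matching the value quoted in the text
```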

3.5 Firmware

An Arduino UNO microcontroller development board is used as the controller of the TU hand. C for Arduino, also known as Arduino C, is used as the firmware language of the hand. The firmware was written so that the entire operation would be simple to understand and fast to execute by the volunteer, to better suit the environment of the competition. It can be divided into three main operations: homing, grasping, and stopping. The routines also use the Stepper library, a default library in Arduino C. Because the hand is driven by stepper motors, there is no position feedback; actuator position can only be determined by counting steps. This means that the stepper motors must go through a homing move to establish the proper absolute position on startup. Homing is executed whenever the hand is powered


Fig. 8. (a) Common mode: the actuators translate the “mainspring” without changing its length, increasing the force on all finger tendons equally; (b) Differential mode: one actuator elongates the “mainspring” while the other remains stationary, altering the distribution of forces across the finger tendons; (c) Superposition of Common and Differential modes: the actuators translate and elongate the “mainspring”, increasing all finger tendon forces and altering their distribution

up. Both actuators move until each carriage trips an electromechanical flag sensor or limit switch (model no. 5e4 t85). This corresponds to the hand in the fully open pose. This automatic move was included to avoid having a human operator execute it manually, which would be time-consuming and prone to error. At any time after power-up, this routine can be executed by pressing ‘h’ on a computer connected to the Arduino by USB. It is also recommended to press ‘h’ to home the device after every grasping maneuver in order to open the hand. To bring the hand to the homing position, motors 1 and 2 are ramped up to a speed of 110 steps per second. Each motor then runs until an edge is seen on its flag sensor or limit switch, at which point the motor is shut off. The only thing precomputed is the homing position of both actuators, which ensures the actuators are correctly positioned; from then on, different types of grasp can be formed.
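The homing routine described above amounts to stepping toward the switch until its edge is seen, then treating that pose as the zero position. A minimal sketch of that loop, written in Python rather than the actual Arduino C firmware, with an assumed simulated carriage:

```python
# Sketch (assumed, not the firmware itself) of the homing loop: step the motor
# at a fixed rate until the limit-switch edge is seen, then zero the counter.
def home(read_switch, step, max_steps=10000):
    """read_switch() -> bool; step() advances the motor one step toward home."""
    for n in range(max_steps):
        if read_switch():          # edge: carriage has tripped the flag sensor
            return n               # steps taken; position counter resets here
        step()
    raise RuntimeError("homing failed: switch never tripped")

# Simulate a carriage that starts 120 steps away from its limit switch.
pos = [120]
taken = home(lambda: pos[0] == 0, lambda: pos.__setitem__(0, pos[0] - 1))
print(taken)  # 120
```

Because there is no encoder, every later grasp command is expressed as a step count relative to this home position.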


We obtained the synergies by observing which ties and skips combinations give which types of synergies. Grasps can be executed in like manner, by pressing certain keys on the connected computer: ‘p’ for a power grasp and ‘w’ for a pinch grasp. In either case the volunteer has to home the device before proceeding to grasp objects. Even though the volunteer has the option of moving the actuators to arbitrary locations, we determined that every object in the competition can be grabbed with these two grasps. It was easier to limit the options to ‘p’ and ‘w’ in the instructions than to manually set the actuator excursions, and the benefits of fine-tuning these amounts were not found to be significant in our practice sessions. The operating procedure was designed for ease of manual operation rather than separately defining a specific grasp for each object, reducing the chance of failure due to operator error. To execute a power grasp, motors A and B are ramped up to a speed of 110 steps per second and stop when an edge is detected from the flag sensor corresponding to each motor. To execute a pinch grasp, motor A is ramped up and stops when an edge is detected from its flag sensor. To open the hand, one simply executes the homing routine. Although it was not planned for this particular competition, common mode and differential mode can be superimposed by pressing ‘e’ and ‘i’ for any amount of time. Stopping the motors occurs when any key other than those listed above is struck. The stop routine is placed in the outermost block with full control, i.e., stop has the highest priority of all routines; it halts any operation and prevents damage to the hand. Any key press must be made while the serial window of the Arduino IDE is open and connected to the Arduino. Following each key press, ‘Enter’ has to be pressed to execute that command.
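The keypress protocol above can be summarized as a small dispatch table. The sketch below is a host-side Python model of that logic, not the actual Arduino C firmware, and the step targets are illustrative values rather than the hand's real excursions:

```python
# Illustrative actuator step targets (assumed values, not the real excursions).
HOME = (0, 0)
POWER = (550, 550)   # equal excursion of both motors -> common mode
PINCH = (550, 0)     # motor A only -> differential mode

def dispatch(key, state):
    """Map one keypress to (actuator step targets, log message).

    'h' homes both actuators, 'p' commands a power grasp, 'w' a pinch grasp;
    any other key is stop, which has priority and holds the current targets.
    """
    commands = {"h": (HOME, "homing"), "p": (POWER, "power grasp"),
                "w": (PINCH, "pinch grasp")}
    if key not in commands:       # stop: hold position, highest priority
        return state, "stop"
    return commands[key]

state = HOME
for key in ["h", "p", "h", "w", "x"]:
    state, msg = dispatch(key, state)
    print(key, "->", msg, state)
```

Keeping the mapping this small is what made the procedure teachable to an uninitiated volunteer in a single briefing document.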
Although the hand is capable of mixing any amounts of the two synergies to form a grasp, during the competition we found we could get good results with only two pre-programmed grasps, the “pinch” (using predominantly differential mode) and the “power” (using predominantly common mode) grasp. The volunteer simply had to invoke one grasp or the other depending on the object to be grasped, as listed in the instructions. We found that the simplification of the procedure for the volunteer vastly outweighed any fine-tuning that we could have achieved by encoding a larger number of grasps with different amounts of each synergy.

4 Grasping Objects with the Fingers of the Hand

Due to difficulties obtaining a travel visa, the representative from Team Tulsa was unable to travel to Korea for the competition during IROS; however, we simulated the competition at the University of Tulsa as closely as possible within a few days after the actual competition. A volunteer completely unaffiliated with our team and the Biological Robotics at Tulsa (BRAT) Research Group was recruited from among the mechanical engineering student body. The volunteer had only seen the hand during a brief lab visit and had never operated it before.


None of the trials in the competition involve dexterous in-hand manipulation (as defined by Okamura and Cutkosky [45]). The manipulation operations to be performed in the competition involve manipulating the environment with a tool that must be securely grasped in the hand. This may place uncertain wrenches on the object that the hand must withstand. Therefore, to succeed in the competition, the hand must be capable of producing a secure grasp that renders the tool in the appropriate pose. The underactuated nature of the TU Hand makes this possible with a minimum amount of programming: the operator simply adjusts the amount of common and differential mode actuator excursion until he or she is satisfied that a secure grasp has been attained, and then the stepper motors lock the actuator position. The tool is then able to undergo small motions, consistent with the fingertip rolling kinematics and the mechanism compliance, in response to environmental loads, with the hand providing a natural restoring force that either maintains contact or returns the tool to its original configuration when the environmental load is removed or reduced. These grasps can be selected intelligently by the volunteer by pushing buttons corresponding to increased or decreased common or differential mode, or be programmed into the firmware in advance and simply invoked by the volunteer according to each challenge. A showcase of the challenges being attempted is shown in Fig. 9.

4.1 Strategy

As might be gathered from the preceding sections, our team’s strategy was to create a robust, adaptable hardware platform and keep the operator, object representation, and programming complexity to a minimum. Given our “ties and skips pattern” described in Sect. 3 and the immittance matrix that followed from it, we had two modes that could be superimposed. One produced open-close motions of all fingers with the thumb in free space, with forces more or less evenly distributed among the four fingers when in contact; the other preferentially moved the index finger, followed by the middle finger, with only a minor effect on the ring and little fingers. Our hope was that the optimal grasp for the object or task required would at least lie close to one that could be produced by the superposition of these common and differential modes. With only two synergistic operations, the volunteer could produce the grasp needed by following very simple instructions and entering simple commands on the computer keyboard with the contralateral hand. If the instructions were not followed perfectly, the hardware compliance in the hand and its synergistic operation would ensure that a good grasp was obtained even if it deviated slightly from the one envisioned by our team during planning. As per the rules of the Hand-in-Hand competition, the volunteer has to operate throughout the whole challenge alone, and he or she must be someone who has not seen the hand before and is not familiar with the procedure at the start of the competition. Therefore, certain measures were planned to make the whole operation smooth for the volunteer. A passive wrist flexion-extension joint is introduced between the forearm and the palm of the hand. It is not actuated,



Fig. 9. Different moments of the challenges being attempted by the volunteer

but can be set using the contralateral hand to a convenient pose. A velcro strap attached to the end portion of the hand, close to the motor section, fastens the motor section to the volunteer’s forearm while allowing more flexible movement of the hand. This places the center of mass of the entire device near the volunteer’s elbow, reducing its moment arm and consequently operator fatigue.


Fig. 10. The volunteer (Fahad Ansari, University of Tulsa Mechanical Engineering Undergraduate) poses the TU hand after attempting all the challenges

Obtaining Optimal Results from an Uninitiated Volunteer. Since the competition would be simulated at The University of Tulsa, a volunteer with no affiliation with the authors’ research group was recruited from among the undergraduate student body in Mechanical Engineering. To get the best result from the volunteer, a document was prepared that described all the rules and keypresses used in operating the device. The volunteer was also permitted to see all the listed objects before starting the challenge. A camera was placed in a suitable location to capture the whole challenge. The basket was placed on the floor for greater visibility and ease of use of the hand. The volunteer was given the notes on the operating procedure and what needed to be done. He took the handle, put the strap around his wrist, and then proceeded to the basket. Figure 10 shows the volunteer Fahad Ansari posing with the TU Hand.

Exploiting In-Hand Compliance. In-hand compliance played a key role in making the challenge successful. Because the hand does not have any integrated force sensor, there is no way of knowing when to stop after starting a grasping maneuver. If too much force is applied, the object may deform or be ejected from the hand; for example, the strawberry can be squashed by excessive pinch force. In-hand compliance resolves this issue. Because of the compliant transmission, force is distributed among the fingers automatically: whenever one finger is blocked, instead of pressing further, the transmission distributes the excess force to the other fingers. Finger-to-finger compliance can also make for a stable grasp [46].

4.2 Operation of the Hand During the Challenge

The volunteer operates the hand by gripping a handle, with the velcro strapped to the end of his or her forearm. Because of the small size of the basket, the volunteer operator’s forearm must be near-vertical to avoid colliding with the edges of the basket. This requires using the contralateral hand to place the lockable wrist hinge in the hyperextended position. The palm has to bend to fit the hand inside the basket, which is done before the start of the challenge. The volunteer is given a document containing all the necessary commands and the kind of grasp required for each object. For instance, the chocolate bar is grasped with a power grasp, but a pinch grasp is required for grasping the strawberry.

5 Manipulating Objects in the Environment with Tools Grasped in the Hand

The current version of the hand cannot easily complete dexterous in-hand manipulation behaviors, but because the hand is capable of performing two types of grasps, it is possible to perform some operations besides pure grasping by mixing the two. For instance, the hand could cut paper with a pair of scissors, but could not cut the paper along the pattern with the current hand configuration; doing so would require subtle finger movements and wrist dexterity the hand does not have. Aspirating the syringe unimanually requires active thumb extension, a capability the hand also lacks. The following tasks were attempted: grabbing a hammer and driving bolts, grasping a spoon and picking up peas from a dish, and grabbing a hand saw and cutting with it.

5.1 Challenges Faced

Lack of Adduction and Abduction. The fingers of the TU hand have only three joints with parallel axes. While this arrangement does enable the fingers to conform to a wide range of objects, the fingers are somewhat limited by their lack of adduction and abduction. Abducting the fingers allows the fingertips to more appropriately close around rounded objects, such as strawberries or lemons. Abduction also serves to increase the grasp area, allowing for more stable grasps on larger objects. Mechanisms Within the Finger. As the flexor tendon of a finger is pulled, the finger flexes, decreasing the distance between subsequent steel routing pins that are located on either side of a finger joint. In free space, the torsion springs internal to each joint oppose the moments applied to each joint by the flexor tendon and ensure that each joint follows its prescribed trajectory. However, the addition of external forces can upset the balance between the flexor forces and the torsion springs, causing some joints to contract more quickly, resulting in changes in finger pose independent of the finger actuation system.


Fig. 11. Because they are underactuated, the fingers of the TU hand exhibit mechanisms when subjected to significant external forces. These mechanisms can cause the hand to shift from a favorable grasp or pose to an unfavorable or unstable one

This problem is most evident when grasping heavy objects. Such objects can cause some joints of the finger to flex (creating slack in the flexor tendon) as others are forced to extend by the weight of the object (removing the introduced slack from the tendon). This results in the finger individually, and the hand as a whole, assuming poses that were not anticipated and are often less stable than the desired grasp. An example of this pose indeterminacy (sometimes called “mechanisms”) is shown diagrammatically in Fig. 11. Finger mechanisms are also problematic, though less evident, when precision grasps are performed on smaller objects. When grasping a small object, the fingers and thumb of the hand must be moved with relative certainty so that they contact the object in a stable orientation. Unfortunately, as mentioned previously, external forces applied to a finger by a grasped object can cause the finger to deviate from its desired trajectory. For smaller objects these grasping forces are usually quite small, but they are often large enough to create noticeable deviations in a finger’s trajectory. While these deviations are much smaller than those caused by heavier objects, they are equally problematic because of the increased precision required to stably grasp small objects. To combat this problem, future iterations of the TU hand will use fingers that are fully actuated, and thus less susceptible to reconfiguration by external forces, such as those proposed in [47].

6 Results

The most important thing we learned from participating in the IROS 2016 grasping and manipulation competition is that the real challenge lies in transitioning from simulation to grasping and manipulating objects in the physical world. Producing, programming, planning, and executing plans on a physical hand that can cope with the uncertainty found in such a competition requires a substantial time investment, and one often has to improvise solutions (Table 3).

Despite its limited number of actuators, the TU hand was able to effectively grasp a wide range of objects over the course of the challenge. All objects were successfully retrieved from the basket by the volunteer. The number of tries and the amount of time taken (determined from the video footage, from when motion toward the basket begins until the object is released) are shown in Tables 3 and 4. This success was largely due to the intrinsic compliance of the system, which allowed the hand to conform to objects whose exact size and shape were not known or not constant, such as a curved banana or a bag of chips. The compliance of the hand also compensated for small errors in positioning and orientation introduced by the volunteer, allowing some objects (e.g., the lemon) to be grasped in various ways.

Table 3. Success rates of retrieving objects from the basket

Objects                     Attempts  Success  Time (approx.)
Strawberry                  2         Yes      5 s
Scotch Brite Dobie sponge   1         Yes      4 s
Lime                        1         Yes      4 s
Snickers                    1         Yes      7 s
Bag of Chips                1         Yes      7 s
Banana                      2         Yes      5 s

Table 4. Success rates of manipulation tasks

Tasks                   Attempts  Success  Time (approx.)
Picking up Peas         1         Yes      10 s
Hammering Nail          1         Yes      8 s
Push and pull Syringe   2         No       N/A

While the finger-to-finger compliance introduced by the “mainspring” architecture was crucial to the success of the hand, indeterminacy in joint angles caused by the compliance of individual fingers (i.e., finger mechanisms) was a


Fig. 12. Indeterminacy in the joint angles causes the hand to grasp the spoon using the dorsal side of the index finger instead of the “pulp” of the finger in an undesired yet stable grasp

consistent issue. The TU Hand was unable to achieve a stable grasp on the hammer because its weight generated significant forces on the fingers during grasping, causing the fingers to deflect. A similar issue was encountered when using a spoon to scoop objects from a bowl. As the tip of the index finger made contact with the spoon, the grasping force caused additional flexion in the most distal joint of the finger. This resulted in the spoon being grasped by the dorsal portion of the finger instead of the “pulp” of the finger, as shown in Fig. 12. While this was not the intended grasp, it was stable enough to allow the volunteer to complete the challenge. We found that when grasping slippery objects, such as the Snickers bar, the objects came close to being dislodged. The authors are working on a silicone skin to increase the coefficient of friction for future iterations of the hand. When grasping compliant objects, even those with slippery surfaces such as the bag of chips, no sign of slipping was noticed and the grasp was successful. The reason the bag of chips resulted in a more secure grasp than the Snickers bar is that more fingers were in contact with the bag of chips; the Snickers bar did not offer as much surface area, which made it difficult for the volunteer to acquire the grasp with as many contact points. In the future, to make the hand competition-ready, the orientation and surface area of objects will need to be considered.


For the manipulation tasks, the volunteer attempted all of them, but in the video we showed only those that were successful. Picking up peas using a spoon and hammering a nail were the two tasks achieved.

7 Conclusion and Future Directions

Team Tulsa competed in the Hand-in-Hand challenge because we were specifically interested in the physical interaction between the hand and the object, and this track allowed us to take advantage of the human intelligence provided by the volunteer. The University of Tulsa Anthropomorphic Robotic Hand is a simple hand with only two actuators, making it easy for a volunteer to operate. Actuation forces are transmitted to all fingers and the thumb through a compliant mechanism. Each actuator produces a force on the end of a coil spring, the “mainspring”. The characteristics of the finger-to-finger and actuator-to-finger compliance and the generated finger tendon efforts are produced by a method we call “ties and skips”: finger tendons are securely tied to individual coils of the mainspring while “skipping” others. By choosing which coils are “tied” and which are “skipped”, our team could embed the natural hand characteristics that would be most effective in the competition. This compliant mechanism superimposes the actions of two actuated synergies: common and differential mode. By superimposing different amounts of common and differential mode, the user can produce a wide variety of grasps. For this competition, our team configured the compliant transmission so that maximum common-mode actuation corresponded to a power grasp and maximum differential-mode actuation to a pinch grasp. The fingers of the hand were of a simple design, with a single flexor tendon and return springs in the joints to open the hand. Despite its simple design, the TU Hand is able to grasp objects that vary in mass, size, and shape by utilizing its two synergies (which allow some measure of preshaping) and exploiting the inherent compliance of the system to cope with an uncertain representation of the object.
The compliant finger-to-finger coupling in the TU Hand was instrumental in producing secure grasps on the objects in the shopping basket, allowing the volunteer to be very successful in retrieving the objects. All objects were successfully retrieved from the basket within three attempts; most were retrieved on the first attempt. For the non-grasping portion of the competition, there were varying levels of success: when interacting with lightweight objects, such as a spoon scooping peas from a dish, the hand performed well. For heavier objects, such as a hammer, the indeterminacy of the finger joint angles inherent in the simple finger design permitted the object to shift in the hand, making the attempt less successful. This competition represented the inaugural use of the TU Hand in performing everyday tasks and we continue to develop and improve the platform. The second version of the TU Hand will maintain the ease of operation and underactuated nature of the current version, but will use a more advanced version of the fingers. These biologically-inspired tendon-driven fingers will be capable of abduction-adduction motions, and will also resolve the indeterminacy of the


joints of the fingers. The thumb will also be modified to allow abduction-adduction motions, whether actuated or passive in nature. Version II will also include advances to the compliant coupling mechanism beyond the “mainspring” architecture, to achieve better control over its multiport stiffness matrix. From an ease-of-operation perspective, the compliant mechanism will be made intrinsic to the hand, greatly reducing the number of tendons crossing the wrist. There are also plans to make the hand less bulky for easier maneuverability, eliminating the large linear actuators in the forearm and replacing them with smaller brushless motors that drive the compliant mechanism through a rotary gear train. We believe that these changes will increase our team’s success in future competitions. It is our hope that this increase in functionality will not incur a drastic increase in the number of actuators, preserving the intuitive, simple-to-operate character of the hand.


Design and Application of Dorabot-hand2 System

Zhikang Wang¹, Shuo Liu², and Hao Zhang¹

¹ Dorabot Inc., Shenzhen, China
[email protected], [email protected]
² University of California Merced, Merced, USA
[email protected]

Abstract. We present Dorabot-hand2, a dexterous robot hand, and its design principles. The goal of this hand is the capability to handle everyday tasks. The hand is tendon-driven and based on a modular design. We focus on key aspects of the design, including strength, friction, cost, and maintainability. We conclude with a description of the hand's performance in the Robotic Grasping and Manipulation Competition at IROS 2016.

Keywords: Dexterous hands · Underactuated · Modular design · Tendon-driven

1 Introduction

Most conventional grippers are parallel grippers that can perform only a simple grasp, with many limitations. Parallel grippers are not universal, as they usually grasp objects via the friction force generated at the contact surface, and they often require specialized tooling to match part features. Robotic hands help people with hand disabilities regain grasping and manipulation functionality. Furthermore, robotic hands can greatly increase the versatility of an industrial robotic arm, with the potential to eliminate the costly tooling requirements associated with handling a variety of parts using a single system. In grasping research, one focus is the reconfiguration of actuators, which allows convenient customization of the gripper to strengthen or reduce grasping functions, similar to building blocks. Most commercially available dexterous hands are complex in design and cost prohibitive. Moreover, in most cases tactile sensors are added only to the fingertips, so feedback data is limited. This paper proposes a modular design using a bionics approach that incorporates a variety of sensors. From an analysis of the anatomy of the human hand, this paper provides an underactuated design concept. We use a modular design to accommodate a variety of hand configurations and to achieve rapid iteration. The use of a parameter-driven modeling method greatly accelerates serialization work. Inspired by the Yale OpenHand Project [1], our dexterous hand prototype is produced by additive manufacturing (3D printing), which greatly reduces both development time and cost.

© Springer International Publishing AG, part of Springer Nature 2018. Y. Sun and J. Falco (Eds.): RGMC 2016, CCIS 816, pp. 84–106, 2018. https://doi.org/10.1007/978-3-319-94568-2_5

This chapter is organized as follows. First, we introduce the design ideas, i.e., modular design, actuation and control, sensing, and so on. Second, we discuss the performance of Dorabot-hand2 in the manual track of the Robotic Grasping and Manipulation Competition at IROS 2016. Then, we present our thoughts about the manual track and compare different types of grippers. Finally, we conclude with a summary and suggestions for future work.

2 Determination of the Transmission Form

2.1 Review of Existing Robotic Hand Designs

This section reviews some typical existing robotic hand designs and their transmission forms. The general transmission mechanisms used in dexterous hands take the following forms: motor-joint, gear, linkage, and tendon. Each transmission form has its own characteristics.

Fig. 1. Transmission forms: (a) HIT/DLR hand; (b) MARS hand; (c) Handyman hand; (d) DLR hand.

HIT/DLR Hand: The motor-joint transmission form uses embedded motors to drive the joints directly (see Fig. 1a). It uses small harmonic gear reducers, synchronous belts, and disk motors to build the fingers. These flexible drive mechanisms provide shock absorption and security. Using this transmission form, dexterous hands can achieve compact structures; however, the disadvantages are high cost, excessive inertia, and other complications. The MARS Hand: This hand uses linkages to transmit power from the motors (see Fig. 1b). Hands of this kind usually use combined four-bar linkage mechanisms to drive the fingers, with the actuators located in the palm or the forearm. The advantage of this form is that a very large grasping force can be transmitted with minimal energy loss; however, it is difficult to achieve a compact hand structure with this kind of transmission, and a modular design of the phalanxes is also difficult to achieve.


Handyman Hand: This hand uses the gear transmission form, employing a series of gears to transmit the motion (see Fig. 1c). The actuators are usually located in the palm. This form can also transmit a large torque; however, the gears introduce significant inertia and make the fingers heavy, so this transmission form is rarely used for dexterous hands. DLR Hand: The DLR hand uses the tendon transmission form. Hands of this kind usually use tendons, sheaths, and pulleys to transmit the motion (see Fig. 1d). This method allows flexibility in actuator placement, simplifying and compacting the hand structure for lower inertia. The tendon transmission also allows the actuation to switch between the underactuated form and the fully actuated form by changing the number of actuated tendons. Moreover, modular design is easy to achieve with this transmission form. The redundancy of the tendinous system offers the possibility of co-contracting the tendons so as to optimally tune their stiffness and configure the limbs for different tasks (precision grasp, power grasp, etc.) [2]. It also has some shortcomings, such as deformation and wear of the tendons, high friction, and the lack of a tuning mechanism. This paper focuses on the design of Dorabot-hand2, a dexterous and modular hand incorporating this flexible transmission method to grasp and manipulate everyday objects.

2.2 Discussion of Full Tendon-Driven Actuation

Anatomical terms used in this paper:

1. HM: hamatometacarpal
2. MCP: metacarpophalangeal
3. PIP: proximal interphalangeal
4. DIP: distal interphalangeal
5. TM: trapezometacarpal
6. IP: interphalangeal
7. Distal: closer to the fingertip
8. Middle: between proximal and distal
9. Proximal: closer to the torso
10. Flex: flexion/extension motion
11. Abd: adduction/abduction motion
12. PL: lower pair
13. PH: higher pair
14. DOF: degree of freedom

2.2.1 The Conditions of Full Tendon-Driven Actuation
There are two forms of tendon transmission in robotic systems: close-ended and open-ended tendon-driven mechanisms [2]. To control an n-DOF (degree-of-freedom) manipulator with the close-ended tendon form, n close-ended tendons are needed. To control an n-DOF manipulator with the open-ended tendon form, at least n + 1 open-ended tendons are needed [3].
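These minimum tendon counts can be captured in a tiny helper. This is our own illustrative sketch (the function name is not from the text); the extra open-ended tendon is needed because each open-ended tendon can only pull:

```python
def min_tendons(dof, form):
    """Minimum number of tendons to fully control a dof-DOF manipulator.

    'closed': n close-ended tendons suffice.
    'open': at least n + 1 open-ended tendons, since each tendon can only
    pull and one extra is needed to antagonize the others.
    """
    if form == "closed":
        return dof
    if form == "open":
        return dof + 1
    raise ValueError("form must be 'closed' or 'open'")
```

For a three-DOF finger, the open-ended form therefore requires at least four tendons.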

Fig. 2. Structure synthesis: (a) DOF = 2; (b) DOF = 3; (c) DOF = 3.

2.2.2 Structural Synthesis of Full Tendon-Driven Actuation
The fingers of a dexterous hand are usually composed of two to three phalanxes. In this paper, we only illustrate the structures having two or three DOFs; isomorphic structures are removed [4]. The two-DOF system is illustrated in Fig. 2a, and the three-DOF systems are illustrated in Fig. 2b and c.

2.2.3 Overview of Underactuation
From the previous sections, we know that in the fully actuated form the number of actuators exceeds the number of DOFs. This makes the whole dexterous hand system complex and difficult to control. Moreover, the most expensive parts of a dexterous hand are the actuators. Since full actuation leads to higher cost, we decreased the number of actuators to lower the cost of the hand. In this section, we introduce the principle of underactuation.

Underactuation means having fewer actuators than DOFs. With this mechanism, the dexterous hand can adapt to the shape of the object automatically. We use an example to introduce this idea [5]. Figure 3 shows the mechanical diagram of an underactuated three-joint finger and its grasping process. The DOF of this multi-linkage mechanism, ignoring the influence of the springs, is:

DOF = 3n − 2PL − PH = 3 × 7 − 2 × 9 − 0 = 3 (1)

An elastic element with a certain stiffness, such as a torsion spring, can be added between AC and CD and between CG and FG, as shown in Fig. 3. With these springs suppressing the extra DOFs, we have:

DOF = 3n − 2PL − PH = 3 × 1 − 2 × 1 − 0 = 1 (2)

A driving force P is applied to linkage AB (see Fig. 3a). Before the finger touches any object, the whole finger rotates around the axis A (see Fig. 3b). If AC


Fig. 3. Underactuated grasping example: (a) mechanical diagram; (b) initial state; (c) process 1; (d) process 2; (e) process 3.

touches other objects and cannot move any further, the spring between AC and CD is stretched, and the DOF of the middle joint is unsuppressed (see Fig. 3c):

DOF = 3n − 2PL − PH = 3 × 4 − 2 × 5 − 1 = 1 (3)

Continuing to apply the force P, the middle phalanx and the distal phalanx keep turning as a whole around the axis C (see Fig. 3c). If CG touches the object and cannot move any further, the spring between CG and FG is stretched, and the DOF of the distal joint is unsuppressed (see Fig. 3d):

DOF = 3n − 2PL − PH = 3 × 7 − 2 × 9 − 2 = 1 (4)
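The three mobility counts above are all instances of the planar Grübler formula. As a quick sketch (the helper function is ours, not part of the original design), with the link and pair counts taken from the text:

```python
def planar_dof(n_links, lower_pairs, higher_pairs):
    """Planar Grübler/Kutzbach mobility: DOF = 3n - 2*P_L - P_H."""
    return 3 * n_links - 2 * lower_pairs - higher_pairs

# the three states of the underactuated finger, with the counts used in
# Eqs. (1), (3), and (4) of the text:
assert planar_dof(7, 9, 0) == 3  # free mechanism, springs ignored, Eq. (1)
assert planar_dof(4, 5, 1) == 1  # middle joint released, Eq. (3)
assert planar_dof(7, 9, 2) == 1  # distal joint released, Eq. (4)
```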

In this way, the force P makes the whole finger contact the object firmly. From the whole grasping process, we can see that the DOFs of the finger can be suppressed or unsuppressed under different conditions, so the fingers can adapt their poses to the objects being grasped.

2.3 Kinematics of the Human Hand

2.3.1 Structure of the Human Hand and Kinematic Constraint Conditions
As shown in Fig. 4, the structure of the thumb differs from that of the other fingers: it has a flexible metacarpal and one less phalanx. The movement of the fingers is achieved by muscle contraction; these muscles are located in the forearm (forearm muscles) and the hand (hand muscles). There are two phenomena relating muscle contraction to joint motion:

1. The contraction of each muscle causes multiple joints to move simultaneously and has different effects on different joints.
2. The motion of a specific joint is caused by multiple muscles.

In general, when we control a specific joint to move, the movement causes other joints to move simultaneously. By analyzing joint motion data, we have the following kinematic constraint conditions [6]:


Fig. 4. Human hand anatomy.

1. The flexion/extension kinematic constraint between the PIP and DIP joints of the fingers:

flex(θDIP) = (2/3) · flex(θPIP) (5)

2. The flexion/extension kinematic constraint between the MCP and PIP joints of the fingers:

flex(θMCP) = k · flex(θPIP), 0 ≤ k ≤ 1/2 (6)

3. The flexion/extension and adduction/abduction kinematic constraints between the TM and MCP joints of the thumb:

flex(θTM) = 2 · (flex(θMCP) − π/6) (7)

abd(θTM) = (5/7) · abd(θMCP) (8)

4. The adduction/abduction kinematic constraint between the IP and MCP joints of the thumb:

flex(θIP) = α · abd(θMCP), α ≥ 0 (9)
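These coupling relations are simple to encode. The sketch below (function names are ours; angles in radians) implements Eqs. (5)–(8):

```python
import math

def dip_from_pip(theta_pip):
    """Eq. (5): flex(DIP) = (2/3) * flex(PIP)."""
    return 2.0 / 3.0 * theta_pip

def mcp_from_pip(theta_pip, k):
    """Eq. (6): flex(MCP) = k * flex(PIP), with 0 <= k <= 1/2."""
    assert 0.0 <= k <= 0.5
    return k * theta_pip

def thumb_tm_flex(theta_mcp_flex):
    """Eq. (7): flex(TM) = 2 * (flex(MCP) - pi/6)."""
    return 2.0 * (theta_mcp_flex - math.pi / 6.0)

def thumb_tm_abd(theta_mcp_abd):
    """Eq. (8): abd(TM) = (5/7) * abd(MCP)."""
    return 5.0 / 7.0 * theta_mcp_abd
```

For example, a 90° PIP flexion implies a 60° DIP flexion under Eq. (5).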

2.3.2 The Simplification of Human Hand Kinematics
The hand is one of the most important organs with which humans interact with the world, and it is the most flexible part of our body. When designing dexterous hands, it is reasonable to imitate the human hand. From Sect. 2.3.1 we can see that the kinematic constraints of the human hand are complex, so we made some simplifications of the human hand kinematic relations, shown below:


1. Decouple the MCP from the kinematic relations and let it rotate independently.
2. Retain the kinematic constraint between the PIP and DIP.
3. Only consider the general kinematic relations of the fingers; the thumb's kinematic relation is not considered.

We decided to use two actuators per finger: one actuates the MCP, and the other actuates the PIP and DIP.

2.4 Analysis of Tendon-Sheath Transmission

Sheaths are used to support and guide the tendons, which requires high resistance to axial compression and high radial flexibility. Tendons are used to transmit the pulling force, which requires high tensile strength. These features allow the tendon-sheath system to transmit large forces along a flexible transmission route. When the tendon-sheath system transmits a force, the sheath is under compression and the tendon is under tension. During force transmission the tendon slips inside the sheath, which produces friction that requires additional attention. In this section we establish the static model of tendon-sheath transmission [7]. The tendon-sheath system can be regarded as a continuous spatial curve. The curvature and torsion of the sheath represent its degree of flexion and twist, respectively. In differential geometry, it has been proven that the geometry of a curve is uniquely determined by its curvature and torsion functions with the arc length as parameter; therefore, the static model of tendon-sheath transmission should include these two parameters. Figure 5a is the static model of the m-th tendon segment: Tm and Tm+1 are the tendon tensions at the two ends of the segment, θm is the wrap angle, and Rm is the radius of curvature. Figure 5b shows the force balance diagram of the i-th tendon-unit in the m-th tendon segment: T and T + ΔT are the tendon tensions at the two ends of this tendon-unit, Δθ is the wrap angle of this tendon-unit, and ΔN and ΔF are the normal pressure and the friction on this tendon-unit being

Fig. 5. Tendon-sheath transmission force analysis diagram: (a) tendon transmission model; (b) tendon-unit force balance diagram; (c) neighboring tendon-units with a twist angle.


supplied by the sheath. Figure 5c shows two neighboring tendon-units with a twist angle αi and indicates that the twist between the two tendon-units does not influence the force transmission; the output tension of the i-th tendon-unit equals the input tension of the (i+1)-th:

T2(i) = T1(i+1) (10)

Therefore, we only need to analyze the forces on the single tendon-unit shown in Fig. 5b. The force balance along and normal to the tendon-unit can be expressed as:

ΔF + (T + ΔT) cos(Δθ/2) − T cos(Δθ/2) = 0 (11)

T sin(Δθ/2) + (T + ΔT) sin(Δθ/2) − ΔN = 0 (12)

and then

ΔF = −ΔT cos(Δθ/2) (13)

ΔN = 2T sin(Δθ/2) + ΔT sin(Δθ/2) (14)

When the tendon-unit is small enough, the higher-order infinitesimal quantity ΔT sin(Δθ/2) can be omitted, and sin(Δθ/2) and cos(Δθ/2) can be replaced by their equivalent infinitesimals Δθ/2 and 1, respectively. Then

ΔF = −ΔT (15)

ΔN = T Δθ (16)

Because the tendon-unit is small enough, we replace ΔT, Δθ, ΔF, ΔN by the differentials dT, dθ, dF, dN. Combining (15) and (16) with the Coulomb friction law dF = μ dN then gives

dT/T = −μ dθ (17)

Integrating both sides of this formula, we obtain the tendon tension within the m-th circular segment at a given θ:

T(θ) = Tm · e^(−μθ), 0 ≤ θ ≤ θm (18)

The frictional force and the normal force within the m-th circular segment are then

F(θ) = Tm · (1 − e^(−μθ)), 0 ≤ θ ≤ θm (19)

N(θ) = F(θ)/μ, 0 ≤ θ ≤ θm (20)

where m = 1, 2, …, M. Letting θ = θm gives

Tm+1 = Tm · e^(−μθm) (21)

where Tm is the input tension, Tm+1 is the output tension, θm is the wrap angle, and μ is the friction coefficient. For a tendon routed through M circular segments, we obtain the output tension Tout by chaining (21):

T2 = T1 · e^(−μθ1) (22)

T3 = T2 · e^(−μθ2) (23)

⋮

TM = TM−1 · e^(−μθM−1) (24)

TM+1 = TM · e^(−μθM) (25)

and then

TM+1 = T1 · e^(−μθ1) · e^(−μθ2) ⋯ e^(−μθM) = T1 · e^(−μ Σ(m=1..M) θm) (26)

Let Tin = T1, Tout = TM+1, and Σ(m=1..M) θm = ϕ; then we have the result:

Tout = Tin · e^(−μϕ) (27)
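The attenuation law can be checked numerically. The sketch below (helper name and numeric values are ours, for illustration) chains Eq. (21) over the segments and confirms that it matches the lumped form of Eq. (27):

```python
import math

def output_tension(t_in, mu, wrap_angles):
    """Propagate tension through M circular segments, applying Eq. (21)
    once per segment."""
    t = t_in
    for theta in wrap_angles:
        t *= math.exp(-mu * theta)
    return t

# example values: 10 N input, friction coefficient 0.15, three segments
t_in, mu = 10.0, 0.15
angles = [math.pi / 2, math.pi / 3, math.pi / 4]
phi = sum(angles)

# segment-by-segment chaining equals the lumped form of Eq. (27)
assert math.isclose(output_tension(t_in, mu, angles), t_in * math.exp(-mu * phi))

# transmission loss grows with mu and with the total wrap angle
assert output_tension(t_in, 0.3, angles) < output_tension(t_in, 0.15, angles)
assert output_tension(t_in, mu, angles + [math.pi / 2]) < output_tension(t_in, mu, angles)
```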

where ϕ is the sum of all M circular wrap angles. From the results above, we can identify the parameters governing the loss of transmission energy. These parameters guide the design of the transmission structures in the following sections:

1. The transmission energy loss increases with the friction coefficient μ.
2. The transmission energy loss increases with the sum ϕ of all M wrap angles.

2.5 The Analysis of Static Configurations of the Finger

The static configuration of the finger refers to the motion rules of the joints when the finger is in a static state. This section analyzes the static configurations of the dexterous hand and derives the relations between the tendon tensions and the joint angles [8].

2.5.1 Schematic of the Underactuated Finger
The underactuated structure of a single finger is shown in Fig. 6. The dashed lines represent the tendons, and R is the radius of the pulleys. The torsion spring stiffnesses are k1, k2, k3; the initial angle of each torsion spring is θ0, and θ1, θ2, θ3 are the absolute rotation angles of the springs. T1 is the tension in tendon 1 and T2 is the tension in tendon 2. Tendon 1 wraps around the first pulley and is fixed on the proximal phalanx; tendon 2 wraps around the second pulley and is fixed on the distal phalanx.


Fig. 6. Actuation structure.

2.5.2 Analysis
First, analyze the static configuration of the MC joint. According to the structure, the actuated torque at the MC joint is τ1 = T1R.

Scenario 1: τ1 ≤ k1θ0, the MC joint does not rotate.
Scenario 2: τ1 > k1θ0, the MC joint rotates; then

M1 = T1R − k1(θ0 + θ1) (28)

Second, analyze the static configurations of the PIP and DIP joints. According to the structure, the actuated torque at the PIP joint is the same as at the DIP joint; both are τ2 = T2R.

Scenario 1: τ2 ≤ k3θ0 and τ2 ≤ k2θ0, neither the PIP joint nor the DIP joint rotates.
Scenario 2: τ2 ≤ k3θ0 and τ2 > k2θ0, only the PIP joint rotates; then

M2 = T2R − k2(θ0 + θ2) (29)

Scenario 3: τ2 > k3θ0 and τ2 > k2θ0, the PIP and DIP joints rotate together; then

M2 = T2R − k2(θ0 + θ2) + k3(θ0 + θ3) (30)

M3 = T2R − k3(θ0 + θ3) (31)

From the above analysis, we can see that different static configurations can be achieved by choosing different stiffnesses ki and different actuation forces Ti. In order to achieve the kinematic relations discussed in Sect. 2.3.2, we need to ensure the following:

1. The MC joint should be in Scenario 2;
2. The PIP and DIP joints should be in Scenario 3.
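The scenario conditions translate directly into code. A sketch with our own (hypothetical) helper names, where the thresholds are the spring preload torques k·θ0:

```python
def mc_scenario(t1, r, k1, theta0):
    """Static scenario of the MC joint, given tau1 = T1 * R."""
    return 2 if t1 * r > k1 * theta0 else 1

def pip_dip_scenario(t2, r, k2, k3, theta0):
    """Static scenario of the coupled PIP/DIP pair, given tau2 = T2 * R."""
    tau2 = t2 * r
    if tau2 > k3 * theta0 and tau2 > k2 * theta0:
        return 3  # PIP and DIP joints rotate together
    if tau2 > k2 * theta0:
        return 2  # only the PIP joint rotates
    return 1      # neither joint rotates
```

With R = 0.01 m, θ0 = 0.1 rad, k2 = 0.3, and k3 = 0.6 (example values), a tendon tension T2 of 10 N lands in Scenario 3, 4 N in Scenario 2, and 2 N in Scenario 1.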


Because the fingers are in a static state, the torque at each joint is balanced; therefore Mi = 0, resulting in:

θ1 = (T1R − k1θ0)/k1 (32)

θ2 = 2T2R/k2 − θ0 (33)

θ3 = (T2R − k3θ0)/k3 (34)

To achieve the flexion/extension kinematic constraint between the PIP and DIP joints (θ2/θ3 = 3/2) discussed in Sect. 2.3.2, we need:

(2T2R/k2 − θ0) / ((T2R − k3θ0)/k3) = 3/2 (35)
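As a numerical illustration, the sketch below (helper name and parameter values are ours) evaluates Eqs. (32)–(34) and verifies the ratio of Eq. (35) for one hypothetical parameter choice; with θ0 = 0, Eq. (35) reduces to 2k3/k2 = 3/2, i.e. k2 = (4/3)k3:

```python
def finger_static_angles(t1, t2, r, k1, k2, k3, theta0):
    """Joint angles at static equilibrium, Eqs. (32)-(34).

    Assumes the MC joint is in Scenario 2 and the PIP/DIP pair in Scenario 3.
    """
    theta1 = (t1 * r - k1 * theta0) / k1
    theta2 = 2.0 * t2 * r / k2 - theta0
    theta3 = (t2 * r - k3 * theta0) / k3
    return theta1, theta2, theta3

# hypothetical parameters with theta0 = 0 and k2 = (4/3) * k3:
k3 = 0.6
k2 = 4.0 / 3.0 * k3
_, theta2, theta3 = finger_static_angles(5.0, 5.0, 0.01, 0.5, k2, k3, 0.0)
assert abs(theta2 / theta3 - 1.5) < 1e-9  # the PIP/DIP ratio of Eq. (35) holds
```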

3 Hand Design

In this section, we present the design concepts applied to our hand, with a focus on its mechanical structure and actuation control. Moreover, we discuss the perception capabilities of the hand, which include tactile sensors and angle sensors. At the end of the section, we summarize the advantages and disadvantages of our design.

3.1 Modular Design

The key ideas of modular design are the standardization and division of the modules. Modular design should follow these principles:

1. The types of modules should be minimized.
2. The modules should make reconfiguration easier.

Fig. 7. Modular structure of Dorabot-hand2.


The division guidelines for Dorabot-hand2:

1. Each module should be replaceable.
2. The functions and structures of the modules should maintain a certain degree of independence and integrity.
3. The modules should be easy to connect and separate.
4. The modular division must not affect the overall functions of the system.

Based on the modular design principles, the division guidelines, and the application characteristics of this dexterous hand, the hand was divided into the following five modules: finger module, palm module, actuation module, connection module, and vision module, as depicted in Fig. 7. The following subsections explain the designs of these modules.

Fig. 8. Finger module.

3.1.1 Finger Module Design
We use a hinged mechanical structure to build the finger joints. Each finger consists of four identical phalanx parts and two different fingertip parts. Only three of the four phalanxes can rotate; the fixed phalanx is used to connect the finger module and the palm module (see Fig. 8). Next, we introduce the design of the finger structure in detail. The finger module can be divided into smaller modules: the phalanx module and the fingertip module. Phalanx module (see Fig. 9a): the phalanx consists of the tactile sensor A, the right baffle B, the left baffle C, the angle sensor D, the main body E, the spindle F, the torsion spring G, and the pulley H (see Fig. 9b). Fingertip module (see Fig. 10): each finger has two types of fingertip modules, and different fingertip modules can be designed for different tasks. The fingertip is fixed at both ends of the finger with slots and latches.


Fig. 9. Phalanx module: (a) assembly; (b) exploded view.

Fig. 10. Fingertip module: (a) fingertip 1; (b) fingertip 2.

The following explains the phalanx body in detail. The body is the main structure of the finger and carries the other parts; it has various features for mounting different parts.

Fig. 11. The main body part of the phalanx: (a) front view; (b) back view.


The design details (see Fig. 11): A fixes the left baffle; B is used to remove the shaft; C fixes the angle sensor; D is used to install the torsion spring; E fixes the tactile sensor; F fixes the right baffle; G fixes the actuation rope; H limits the position of the end of the spring sheath and guides the actuation rope.

Fig. 12. Palm module.

3.1.2 Palm Module Design
The palm module connects the finger modules with the actuation module, and it is also used to anchor the actuation tendons. The configuration of the dexterous hand depends on the palm module structure. The specific design details (see Fig. 12): A fixes the finger module; B strengthens the three side-by-side palm modules, which increases the structural stability of the entire dexterous hand; C limits the palm module in the lateral direction; D limits the position of the actuation rope; E fixes the palm module on the actuation module.

3.1.3 Actuation Module Design
The actuation module (see Fig. 13) contains the actuators for driving the fingers. It consists of the upper case A and the lower case B; the outside of the upper case is glued with guide pieces C (which guide the sheaths to reduce the resistance between the sheaths and the case). Two servomotors D and two corresponding tendon windings E are mounted inside the case. The servomotors and the sheaths are attached to the case (sheaths are not shown in the figure). The actuation module uses wedge slides to connect with the other modules. The design of the actuation module case, with cross-sectional views of the upper and lower cases, is shown below (see Fig. 14): A guides and fixes the transmission sheath; B fixes the palm module; C connects with the palm module; D cooperates with the lower case; E and M fix the servomotor; F limits the position of the transmission sheath; G and H are clearance holes for assembly; I is a hole for assembly; J connects with the flange module; K is used to thread the servomotor control tendons; L works with the upper case.


Fig. 13. Actuation module: (a) assembly; (b) exploded view.

Fig. 14. Cross-sectional view of the actuation module cases: (a) upper case; (b) lower case.

3.1.4 Connection Module Design
As shown in Fig. 15, the connection module consists of a single part. It connects the actuation module with the vision module. The specific design details are as follows: A restricts the connection module to the vision module; B fixes the connection module; C fixes the actuation module on the connection module; D fixes the lateral baffle.

3.1.5 Vision Sensor Module Design
The vision module is primarily used to provide spatial information for the dexterous hand system. As shown in Fig. 16, the vision module consists of the mounting flange A, the RealSense adapter PCB B, the RealSense case C, the RealSense module D, and the heat sink E. The design details of the mounting flange are shown in Fig. 17: A limits the position of the cable; B holds the nuts that fix the connection module; C fixes the RealSense adapter PCB; D fixes the vision module to the robot mounting flange; E fixes the RealSense case.


Fig. 15. Connector module.

Fig. 16. Vision module: (a) assembly; (b) exploded view.

Fig. 17. Mounting flange.


The design details of the RealSense case are shown in Fig. 18: A limits the location of the RealSense module; B fixes the RealSense case on the mounting flange; C limits the location of the RealSense case; D fixes the RealSense module in the RealSense case.

Fig. 18. RealSense case.

3.1.6 Reconfigurability
Reconfigurability can increase the grasping capabilities of the dexterous hand. For example, a three-finger dexterous hand has three common configurations (see Fig. 19): cylindrical, spherical, and parallel. The cylindrical configuration is good for grasping columnar items, the spherical configuration for spherical objects, and the parallel configuration for smaller objects. Reconfiguration is therefore an important ability for improving the dexterity of the robot hand.

(a) Cylindrical

(b) Spherical

(c) Parallel

Fig. 19. Configurations of three-finger dexterous hand.

The reconfiguration of the hand is achieved using different palm modules. The reconfigurable palm module comes in two forms: motor-driven and manually replaceable. The motor-driven form adjusts the configuration of the dexterous hand automatically, which is easy to use, but its structure is complex and its cost is relatively high. When using the manually replaceable option, users need to manually change the module to apply a new configuration;


however, dexterous hands have only a small number of typical configurations, so we only need to design a limited number of palm modules. The design cost is therefore lower, but the process of changing the module is cumbersome. Since the design of Dorabot-hand2 is still in the proof-of-concept phase, the design presented in this paper only provides a cylindrical configuration (see Fig. 20).

Fig. 20. Cylindrical configuration of Dorabot-hand2.

3.2

Actuation and Control

This section discusses the actuation principle and two simple control methods.

3.2.1 Actuation
Each finger has three rotational joints, defined as MC (metacarpophalangeal joint), PIP (proximal interphalangeal joint) and DIP (distal interphalangeal joint). The MC joint is controlled by a single tendon, while the PIP and DIP joints are controlled by the same tendon to achieve a coupled movement (see Fig. 6). As a result of the modular design, the actuation structure can be easily modified by other hand developers: by increasing or decreasing the number of tendons, the dexterous hand can be made underactuated or fully actuated.

3.2.2 Control
The actuation mechanism of the fingers is composed of tendons and torsion springs. We define the servo which controls the MC joint as Servo-I, and the servo which controls the PIP and DIP joints as Servo-II. When the torsion spring stiffness of the DIP joint is significantly greater than that of the PIP joint, the rotation of Servo-II causes the PIP joint


to move first. The DIP joint begins to rotate only when the PIP joint cannot rotate anymore because of external resistance. The following are two simple examples of the grasping control:

Fig. 21. Grasp a marker pen.

1. Grasp a marker pen (see Fig. 21). First, control the MC joint to rotate to a suitable angle. Then control the PIP joint to rotate until the distal phalanx touches the marker pen.
2. Grasp a mug (see Fig. 22). First, control the MC joint to rotate to a suitable angle. Then control the PIP joint to rotate until the middle phalanx touches the mug. Finally, continue to rotate the servo until the distal phalanx also touches the mug.

Fig. 22. Grasp a mug.
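The stiffness-ordered coupling behind these two examples can be sketched as a minimal kinematic model. This is an illustrative simplification, not the authors' controller: the `pip_stop_deg` parameter and the assumed 1:1 mapping from Servo-II travel to joint angle are hypothetical.

```python
def coupled_flexion(servo_deg, pip_stop_deg=90.0, dip_stop_deg=80.0):
    """Distribute Servo-II travel over the coupled PIP and DIP joints.

    Because the DIP torsion spring is much stiffer than the PIP spring,
    tendon travel flexes the PIP joint first; the DIP joint only starts
    to rotate once the PIP joint is blocked (by its mechanical stop or
    by contact with an object, modeled here by pip_stop_deg).
    """
    pip = min(servo_deg, pip_stop_deg)
    dip = min(max(servo_deg - pip, 0.0), dip_stop_deg)
    return pip, dip

# Free flexion: all servo travel goes to the PIP joint.
print(coupled_flexion(45.0))                     # (45.0, 0.0)
# Mug grasp: the middle phalanx meets the mug at 30 degrees, so further
# servo rotation wraps the distal phalanx around it.
print(coupled_flexion(60.0, pip_stop_deg=30.0))  # (30.0, 30.0)
```

The second call mirrors the mug example above: once the PIP joint is stopped by contact, the remaining tendon travel flexes the DIP joint.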

3.3

Sensing

3.3.1 Tactile Sensor
Each phalanx is equipped with tactile sensors to obtain information such as contact force, object shape, surface texture and sliding motion. The tactile sensors have a modular structure (see Fig. 23a) to facilitate easy integration: a tactile sensor module can be conveniently attached to different surfaces of a phalanx to further expand its sensing surface.


3.3.2 Joint Angle Sensor
Each phalanx is designed with mounting holes for an angle sensor. The angle of each finger joint is obtained by the angle sensor mounted on that joint, and the complete set of joint angles determines the configuration of the dexterous hand. The angle sensors we used are from Murata (see Fig. 23b) and feature small size, high precision, long life and good linearity.

(a) Tactile sensor

(b) Angle sensor

Fig. 23. Sensing.
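Since the joint angles fully determine the finger posture, the fingertip position can be recovered from the sensor readings with planar forward kinematics. A minimal sketch, assuming hypothetical phalanx lengths (not the actual Dorabot-hand2 dimensions) and treating the finger as a planar three-link chain:

```python
import math

def fingertip_position(mc, pip, dip, lengths=(0.040, 0.030, 0.025)):
    """Planar forward kinematics for one finger.

    mc, pip, dip are flexion angles in radians read from the joint angle
    sensors; lengths are phalanx lengths in metres (illustrative values).
    """
    x, y, total = 0.0, 0.0, 0.0
    for angle, length in zip((mc, pip, dip), lengths):
        total += angle                   # joint angles accumulate along the chain
        x += length * math.cos(total)
        y += length * math.sin(total)
    return x, y

# Fully extended finger: the fingertip lies on the x-axis,
# at the sum of the phalanx lengths.
print(fingertip_position(0.0, 0.0, 0.0))
```

The same routine, evaluated once per finger, gives the hand configuration used for grasp monitoring.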

3.4

Hand Design Pros and Cons

3.4.1 Pros
1. The modular design allows easy modification. The modules are relatively independent and can be combined into different structures/configurations according to different functional requirements.
2. Assembly is relatively easy and maintenance is convenient.
3. The actuation structure can be changed easily; therefore, it is easy to switch between the underactuated mode and the fully actuated mode.
4. All structural parts are produced using 3D printing, which is low-cost and makes them easy to produce.
5. A variety of sensors are integrated to enable future research.

3.4.2 Cons
1. Decoupling between the modules is not adequate, and the assembly process requires technical skill.
2. The 3D-printed parts are not strong enough; the 3D printing process is not stable and is relatively low in precision.
3. There is a large energy loss in tendon actuation, which requires high tensile strength and wear resistance from the tendon.
4. Tactile sensors can only be installed on flat surfaces.
5. It is not optimal for every phalanx to have the same length: there is a large gap in the palm when the hand is clenched, making it very difficult to hold thin objects.


6. The whole design lacks a mechanism for fine tuning. The length of the tendons changes over time due to fatigue, creating significant backlash which degrades control.

4

Implementation of the Manual Track

The manual track uses a human volunteer as the brain and body of the robotic system, sensing the world and controlling the hand. The volunteer carries the hand, places it at any desired position and then controls the fingers to perform manipulation tasks. The volunteer can be trained beforehand and given hints during the competition.

4.1

Manipulation

4.1.1 System Overview
During the volunteer training process, we decided to learn together with the volunteer. Since our team is more experienced with controlling the hand, we first showed the volunteer our ways of performing the tasks. The volunteer then tried to imitate our actions. To our surprise, after he learned how the hand works and what the key points of each task are, the volunteer started to develop his own methods of controlling the hand. After he successfully accomplished all the tasks, we made a video, which was then used to guide the volunteer during the competition.

4.1.2 Performance
The manually operated Dorabot-hand2 achieved a perfect score in the manipulation track of the competition. We had some difficulties using the scissors to cut paper: the fingers of Dorabot-hand2 were too large to fit into both handles of the scissors. In the dry run, we were told that using the environment to help manipulate objects and tools was not allowed. However, during the competition this was overruled, and the volunteer independently came up with a solution that uses the table edge to open the scissors and the table top to close them. Using these techniques, we were able to accomplish this task.

4.2

Pick and Place

4.2.1 System Overview
The situation in the pick-and-place segment was similar to that in the manipulation segment. After the volunteer learned to use the hand, we asked him to practice multiple rounds with random initial setups. By observing the volunteer's practice, we realized that bowls are easier to manipulate than hammers, and we gave him a rough sequence of actions based on this observation. We filmed ourselves performing the tasks in the order of steps we concluded was optimal, and used the video to guide the volunteer during the competition.


4.2.2 Performance
In the pick-and-place segment we were able to pick up all objects. However, during placement we had to reposition the scissors multiple times to fit them into the predefined rectangular area; we therefore lost 4 points and received 96 points. Picking and placing the hammer was difficult due to its weight: the hammer was too heavy for the hand, and the only feasible grasp was a full wrap around the head. However, the hammer lay flat on the bottom of the basket, where a full wrap was not possible. The solution we came up with was to first grasp it at the head, lift it by the bottom of the handle and lean it against the side of the basket with the head on top. This created clear space around the head, allowing the hand to wrap around it as desired.

5

Summary and Future Work

The manual track explores the potential of robotic hands. Today, humans are still more intelligent than computers in general. By having a human control the robot hand, we lose in precision but gain significantly in feedback: using only vision, a human can know precisely the status of the robot hand and of the target object to be grasped. In our understanding, the manual track is a competition that tests the potential of each robot hand. Although human behaviors and actions cannot yet be replicated by computers, knowing the capabilities of each hand is still valuable, and this understanding can guide us in developing more intelligent systems in the future. In industry and academia there exist numerous robotic hands that differ in size, materials, grasping methods and so on. Benchmarking and ranking each of these hands on its performance in each task and in grasping a series of objects is very meaningful work: the outcome can guide researchers to develop more powerful hands and help businesses quickly create robot hands that fit their needs. Based on the feedback from this competition, our future work includes:

1. Improve the independence of the modules.
2. Redesign the reconfigurable palm module.
3. Design a tendon tuning mechanism.
4. Improve the design of the tactile sensor.

As of now, Dorabot-hand2 is not convenient for users to restructure; changing the number of each finger's DOFs is not easy. To solve this, we can improve the independence of the modules: first change the actuation structures, and then use segmented wires to connect the tactile sensors mounted on the fingers. Our current palm module is not capable of dynamic reconfiguration, because each finger is mounted on the palm by screws; the fingers cannot change their relative angles the way human fingers can, which means Dorabot-hand2 can only grasp a limited variety of objects. The new design will overcome this problem with an additional DOF in the palm module. In our current


design, the tendons cannot be easily tuned. During this competition we noticed that the tendons loosen after actuating for some time, so we may want to add a tendon tuning mechanism. The current tactile sensor on the hand can only be used as a binary trigger with two states. However, our goal is a modular tactile sensor that outputs the force distribution on the contact surface; this type of output can help us accomplish more complicated grasping tasks.

References

1. Ma, R.R., Odhner, L.U., Dollar, A.M.: A modular, open-source 3D printed underactuated hand. In: 2013 IEEE International Conference on Robotics and Automation (ICRA), pp. 2737–2743. IEEE (2013)
2. Nazma, E., Mohd, S.: Tendon driven robotic hands: a review. Int. J. Mech. Eng. Robot. Res. 1, 1520–1532 (2012)
3. Morecki, A., Busko, Z., Gasztold, H., Jaworek, K.: Synthesis and control of the anthropomorphic two-handed manipulator. In: Proceedings of the 10th International Symposium on Industrial Robots, pp. 461–474 (1980)
4. Lee, J.-J.: Tendon-driven manipulators: analysis, synthesis, and control. Ph.D. thesis (1991)
5. Liang, L.J.: Study on integrated design and application of variable constraint and self-adaptive structure of machinery. Ph.D. thesis, Chongqing University, Chongqing (2008)
6. Jianping, Z., Yuanfeng, S., Qingqing, C.: Data glove correction method based on anatomical constraints and its application to robot arm control. Digital Commun. 40(4), 18–24 (2013)
7. Aihua, Y.: Upper limb rehabilitation robot tendon-sheath transmission characteristics and motion control. Ph.D. thesis, Donghua University, Shanghai (2015)
8. Fei, L.: Design optimization of the underactuated multi-fingered dextrous hand. Ph.D. thesis, North China University of Technology, Beijing (2010)

Manipulation Using the “Utah” Prosthetic Hand: The Role of Stiffness in Manipulation

Radhen Patel, Jacob Segil, and Nikolaus Correll(B)

University of Colorado Boulder, Boulder, CO 80309, USA
[email protected]

Abstract. We describe our approach to the IROS “Hand-in-Hand” manipulation challenge using a simple one degree-of-freedom prehensor, which is known to be highly effective in prosthetic applications. The claw consists of two prongs of which only one is mobile, requiring the user to first make contact with the immobile prong to create a constraint and then use the second prong to exert force on the object. Despite its simplicity, this design is able to grasp a wide variety of objects and reliably manipulate them. In particular, stiffness is advantageous both when manipulating very small objects, where force needs to be applied precisely, and when manipulating heavy ones, where force needs to be exerted without deforming the claw itself. This approach reaches its limitations during tasks that require more degrees of freedom, for example grasping and subsequently actuating scissors. These tasks instead highlight the benefits of compliance and underactuation, stimulating a discussion about trade-offs in hand designs.

1

Introduction

The human hand is a complex and robust system which allows for dexterous manipulation, advanced sensing, and durability that far surpasses any electromechanical system ever created. Engineers in both robotics and prosthetics have attempted to recreate aspects of the human hand over the past several decades [8,9,20,21]. This problem is difficult, as human hand control during grasping is dominated by trajectories in a configuration space of much smaller dimension than the degrees of freedom of the mechanism would suggest. This is due to intricate bio-mechanical joint coupling and the resulting inter-finger coordination. Mechanically, this is achieved by arranging tendons in a complex, branching structure, where each tendon has the potential to actuate multiple joints. The combination of tendon coupling and muscle activation patterns leads to so-called muscular and postural synergies [22]. Yet, it is not the degree of anthropomorphism that makes a “good” hand. Commercially available — and highly successful — prosthetic hands range from one degree-of-freedom prehensors [3,7] to hands with multiple degrees of freedom [2,4,6].

© Springer International Publishing AG, part of Springer Nature 2018
Y. Sun and J. Falco (Eds.): RGMC 2016, CCIS 816, pp. 107–116, 2018. https://doi.org/10.1007/978-3-319-94568-2_6

A thorough review of the electromechanical design of prosthetic hands is


provided in [8], where the various specifications of the prosthetic hands on the market today are given. All of these hands provide their users with reasonable functionality; the reasoning for a more anthropomorphic design over a simple claw may be weighted more toward aesthetics than functionality. Like prosthetic hands, robotic end-effectors also span a wide range of electromechanical complexity, ranging from simple one degree-of-freedom prehensors [1] and underactuated devices [14] to devices with multiple fingers [5]. Here, mechanisms are underactuated when some of their degrees of freedom are controlled by the environment instead of a dedicated motor. Designing robotic hands poses different challenges than designing prosthetic ones, as robots do not have the perception and planning abilities of humans. Here, compliance and underactuation are seen as making up for control errors, allowing the hand to conform to the object. An extreme variant of compliance and underactuation are “soft robotic” hands [13,16], which are made entirely out of soft and compliant materials or structures rather than of rigid parts. These hands possess many mechanical degrees of freedom and are able to implement complex deformations. Although proficient in employing a number of power grasps, obtaining stiffness through inflation is difficult, and additional embedded sensing and computation are required to precisely align the fingers with objects before closing in order to perform reliable pinch grasps [16]. More recently, different approaches in robotics have tried to take advantage of the idea of synergies, aiming to reproduce a similarly coordinated and ordered ensemble of human hand motions. One example of such a design is the Pisa/IIT SoftHand [11], which has 19 degrees of freedom, only one of which is actively controlled. Similarly, Farinha et al. describe the development of the ISR Alpha hand, which aims to improve dexterity over other underactuated systems while maintaining anthropomorphic dimensions, low mass and low production cost [15]. It is difficult to assess the capability of a specific hand due to the confounding effects of the various mechanical, electrical, and computational systems involved in a robotic or prosthetic hand. This paper describes our experiences using a simple prosthetic hand at the first Robotic Grasping and Manipulation Competition at the International Conference on Intelligent Robots and Systems (IROS) in Daejeon, South Korea, whose “hand-in-hand” track aimed at comparing the capabilities of different designs on a series of standardized tasks without requiring autonomous perception. This test is similar in spirit to standardized tests for prosthetic hands, for example the Southampton Hand Assessment Procedure (SHAP). The SHAP was developed as a clinical tool to assess pathologic or prosthetic hand function and was validated for reliability (test-retest, interrater) and validity (criterion, content). However, this test cannot separate the effect of the prosthetic hand from the ability of the user; in other words, prosthetic hand function is a coupled effect of both the device and the user. The “hand-in-hand” competition aims at alleviating this limitation by requiring the competing teams to provide detailed instructions, for example in the form of pictures, for solving a number of pre-defined tasks.


Here, we describe our experiences using a simple mechanical claw design with stiff prongs, which performed a variety of tasks that many compliant hands were not able to accomplish and compares well with much more sophisticated prosthetic designs. In the remainder of this chapter, we first describe the prosthetic claw and the grasping approach it requires (Sect. 2), then the grasping and manipulation tasks that we addressed (Sect. 3). Section 4 describes the specific solutions used to achieve the required tasks, and Sect. 5 discusses the limitations that we observed and the cases in which compliance is needed.

2

Hand Design and Grasping Approach

The Motion Control Electric Terminal Device (ETD) [3] or “Utah hand” is one of the simplest mechanical designs used in prosthetics and has a long history of success in the prosthetics community. It is a single degree-of-freedom prehensor shaped like a hook (Fig. 1) in order to provide multiple surfaces for grasping. A single DC motor actuates one prong while the other prong stays stationary. Expert users have described as many as 10 methods of grasping enabled by the unique geometry of the digits, including fine pinch, power grasp, lateral prehension, hook, and others. The ETD provides a wide opening, weighs only 14 oz, and can produce up to 25 lbs of pinch force. Finally, the ETD is a rigid system in which both prongs are made of steel and the actuator is non-backdrivable. This ensures that the device can grasp small delicate objects as well as larger, more compliant objects without advanced control techniques.

Fig. 1. The Utah prosthetic hand, the Motion Control ETD (Electric Terminal Device). Only one prong of the claw is actuated to open (left) and close (right).


To securely grasp objects with the two-pronged claw, we employ a two-step approach. We first approach the object with the stiff prong until contact is made, and then close the hand, exerting force on the object. Here, the initial touch creates an additional constraint, whereas the force provided by the second prong holds the object in place. This approach is illustrated in Fig. 2 and was first formally described in [17]. Many grasping problems, such as those of this competition, can indeed be reduced to a two-dimensional problem, in particular if additional constraints are provided by a table surface. Note that the initial touch needs to occur with minimal velocity to prevent the object from being pushed away by the impact. Also, the contact point must be carefully chosen so that the object is indeed fully constrained once the hand closes. Here, the diligence, perception and planning capabilities afforded by a human operator are desirable and possibly difficult to replicate in a robotic system. In order to provide the human operator with a simple interface, we mounted a small joystick on the wrist of the hand, which directs power from a USB port to the hand.

Fig. 2. Diagram of a robotic gripper grasping an object. Left: robot hand approaching the object. Middle: initial touch of the stiff (fixed) finger with the object, creating the initial constraint. Right: stable grasp after closure of the moving finger.
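The constraint-then-force sequence can be written down as a small state machine. This is our illustrative sketch of how an autonomous controller might sequence the phases, not part of the actual system: in the competition the contact events were judged by the human operator's eyes, since the ETD carries no contact sensors, so the two contact flags below are assumptions.

```python
from enum import Enum, auto

class Phase(Enum):
    APPROACH = auto()   # move slowly toward the object with the stiff prong
    CONSTRAIN = auto()  # stiff prong in contact: object is constrained
    CLOSE = auto()      # actuated prong closes, exerting force
    HOLD = auto()       # both prongs in contact: stable grasp

def advance(phase, fixed_contact, moving_contact):
    """Advance the two-step grasp given the two (hypothetical) contact flags."""
    if phase is Phase.APPROACH and fixed_contact:
        return Phase.CONSTRAIN
    if phase is Phase.CONSTRAIN:
        return Phase.CLOSE           # begin closing the actuated prong
    if phase is Phase.CLOSE and moving_contact:
        return Phase.HOLD
    return phase

# Walk through one grasp: touch with the stiff prong, then close.
p = Phase.APPROACH
p = advance(p, fixed_contact=False, moving_contact=False)  # still approaching
p = advance(p, fixed_contact=True, moving_contact=False)   # -> CONSTRAIN
p = advance(p, fixed_contact=True, moving_contact=False)   # -> CLOSE
p = advance(p, fixed_contact=True, moving_contact=True)    # -> HOLD
print(p)  # Phase.HOLD
```

Keeping the two contact events separate makes explicit that the stiff prong only ever provides the constraint, while all force comes from the moving prong.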

3

Task Specifications

The “hand-in-hand” track of the competition was divided into two stages, pick-and-place and manipulation. Both stages exclusively aimed at benchmarking the mechanical capabilities of a hand, independently of the perception and reasoning capabilities of its operator, who was provided with pictures showing how we envisioned the manipulation problems being solved. No human input or teleoperation from the team was allowed during grasping. The objects used in the competition were of varying sizes and weights and were chosen from the Yale-CMU-Berkeley (YCB) Object and Model Set [10] as well as other household items. Some were non-rigid (straw) or irregularly shaped (towel), some were heavy (hammer), and others were rigidly attached to sockets (USB light). Tasks ranged in complexity from simple grasping to performing complex motions or insertions.


The Utah hand performing some of the tasks from the manipulation stage is shown in Fig. 3, including picking up a spoon from a cup and using it to transfer peas from a bowl to another bowl, hanging a towel on a rack, stirring liquid in a cup, dispensing salt onto a plate, removing a USB lamp from a socket, hammering nails, extending and pressing a syringe, and cutting a piece of paper. Pictures shown in Fig. 3 are identical to those provided as instructions. Figure 5 shows sample sequences for the hammer/nail and straw insertion tasks.

Fig. 3. Utah hand performing various manipulation tasks part of the Hand-in-Hand track of the competition.

4

Results

Despite the simple kinematics with a single degree of freedom, the volunteer operator was able to perform all grasping and manipulation tasks successfully, with only a few challenges. We obtained a full score (300/300) for both the pick-and-place and manipulation tasks.

4.1

Stage One: Pick-and-Place

This stage tested the grasping capabilities of the robot hand with objects of varying sizes, stiffnesses and weights. It also challenged the ability of a hand to navigate a constrained environment in which objects touch each other, requiring the hand to be shoved between items. Approaching the objects presented in a basket (Fig. 4) and making the first point of contact with the stiff digit (finger) of the gripper was straightforward, as the Utah hand is small and the prongs are stiff and thin. Since a human operator was performing the task via the robot hand, the initial touch was gentle enough not to move the object around. The pinch force of the gripper was strong enough to lift objects of different sizes (bag of chips, scotch bar) and weights (hammer, small plastic water bottle) from the basket and place them at their predefined locations. All tasks were performed in less than 30 min and led to a full score.


Fig. 4. A shopping basket filled with items for the pick-and-place task.

4.2

Stage Two: Manipulation

Manipulation requires objects not only to be grasped in a specific conformation, but also to be held securely enough to allow precise control. Figure 3 shows a series of example tasks, which were straightforward to execute using our grasping approach. Here too, the small size of the hand was advantageous. For example, the prongs were thin enough to fit between the rim of a dish and the table. Similarly, they were thin enough to get underneath the handles of a nightlight stuck in an outlet and stiff enough to relay sufficient force from the operator to the nightlight. At the same time, the pinch force was large enough to hold heavier objects like the hammer securely and transfer force to the nails. The task that challenged the simple gripper design the most was picking up the scissors and cutting paper along predefined lines (Fig. 6). Unlike the Pisa/IIT SoftHand (the only other hand that obtained a full score), the Utah hand does not provide any compliance. Although we could insert the prongs of our claw into the handles of the scissors, the kinematics of the claw differ from those required to open and close the scissors, which made us lose control over the scissors during cutting. We were able to overcome this problem during the competition


Fig. 5. Instructions provided to the operator, illustrating the one-constraint plus oneforce approach for a large, heavy object (hammer) and a small, light-weight object (straw) using the Utah prosthetic hand.

by adding padding to the prongs (nail and scotch tape), which provided the necessary compliance for this task.

5

Discussion

We demonstrated that a very simple mechanism is indeed capable of performing a wide range of manipulation tasks if perception and planning are given. We also showed where a simple mechanism reaches its limitations and where compliance is needed. Such compliance can be achieved by adding soft materials or by using an underactuated approach, as demonstrated by the Pisa/IIT SoftHand. The question that remains, however, is what the minimal design is that still performs well on a wide variety of tasks. We believe that the simplicity of the Utah hand would serve as a good starting point to explore this space. Possibilities include combining stiff prongs with inflatable cushions, employing variable stiffness materials [18], or augmenting the hand with suction abilities, which are known to enable grasping a wide variety of objects [12]. We note that the success of the method described in [17] depends on the stiffness of the hand, and it only works if the dynamics of grasping are negligible. Only when a finger is completely stiff does it serve exclusively as a constraint without exerting any force. If a finger is flexible, it acts as a spring, providing not only a constraint but also exerting force on the object. This is not necessarily a problem in practice, as these undesired forces are often not enough to overcome surface friction, and if they were, they would simply drive the object in the direction of the opposite finger. Similarly, making contact with an object will also involve an impulse that depends on the velocity with which the object is approached. There exist situations, however, where even the slightest motion is undesirable


Fig. 6. Left: The Utah hand performing the scissor task using additional padding for compliance. Right: PISA/IIT Softhand performing the scissor task.

or when forces need to be exerted precisely. This is the case in scenarios where objects might fall, for example when constructing a tower [19], or when the required forces are very large, for example when removing a nightlight from an outlet. The Pisa/IIT SoftHand was not only able to perform the scissor task without modification of its design, but was also much faster at applying the constraint and, subsequently, the force. The mechanical coupling of the fingers, actuated by a single motor, provides the hand with degrees of freedom (synergies) similar to those of a human hand. This makes the hand compliant enough to grasp objects in a way that is similar to how a human hand grasps them (e.g. holding scissors and cutting paper). We believe that this made the hand more intuitive to use for a human operator and required less interpretation of the instructions. The success of the Utah hand and the Pisa/IIT SoftHand, which are both prosthetic hands, was in contrast with robotic hands that emphasize compliance in order to make a robotic system more forgiving with respect to inaccurate perception. Although this approach might indeed make those hands simpler to control within a robotic system, none of them were able to perform as many manipulation tasks as the prosthetic hands. Teams whose robot hand did not have a rigid (stiff) finger aligned the gripper such that the spoon was in the center of the gripper and then closed the gripper to grasp it successfully. Note that here a human operator was operating the robot hand, so tasks like centering the gripper or aligning it with the object correctly seem rather simple. However, in an autonomous system where the gripper is attached to a robot arm, vision alone might not be sufficient for centering the gripper [19]. Then, even a small misalignment might lead the finger that makes contact first to eject the spoon from the range of the gripper.
Here, an approach that first applies a constraint and then applies force together with appropriate contact sensing might be easier to implement and lead to more robust results.

6

Conclusion

Despite the intricacies of the sensing and mechanics of the human hand, a large variety of manipulation tasks can be solved using a simple, one degree-of-freedom, stiff mechanism that allows the user to provide both a constraint and a force on an object. In particular, this approach is able to perform a number of tasks that are impossible to accomplish with underactuated, compliant hands that were designed to make up for the perception and planning deficiencies of state-of-the-art autonomous robots. Yet, the perception and planning requirements for correctly using such a simple mechanism are non-trivial, as initial contact has to be made in such a way that closing the hand will fully stabilize the object. Here, underactuation and compliance do seem advantageous, in particular when they can be precisely controlled, as the human hand impressively demonstrates.

Acknowledgments. This research has been supported by the Air Force Office of Scientific Research (AFOSR) and the Korean government; we are grateful for this support.

References 1. Baxter Robot Grippers, Rethink Robotics. http://www.rethinkrobotics.com/ accessories/ 2. Bebionic v2 Brochure, RSL Steeper. http://www.rslsteeper.com/uploads/files/ 159/bebionic-ukrow-product-brochurersllit294-issue-21.pdf 3. Electric Terminal Device (ETD), Motion Control. http://www.utaharm.com/ ETD-Sales-Sheet.pdf 4. ILIMB User Manual, Touch Bionics. http://www.touchbionics.com 5. Jaco Arm Grippers, Kinova Robotics. http://www.kinovarobotics.com/assistiverobotics/?section=assistive 6. Michelangelo prosthetic hand. http://www.ottobockus.com/prosthetics/upperlimb-prosthetics/solution-overview/michelangelo-prosthetic-hand/ 7. System Electric Hand, Otto bock. https://professionals.ottobockus.com/ Prosthetics/Upper-Limb-Prosthetics/Myo-Hands-and-Components/MyoTerminal-Devices/Electric-Greifer-System/System-Electric-Greifer-DMCVariPlus/p/8E34∼59 8. Belter, J.T., Segil, J.L.: Mechanical design and performance specifications of anthropomorphic prosthetic hands: a review. J. Rehabil. Res. Dev. 50(5), 599 (2013) 9. Biagiotti, L., Lotti, F., Melchiorri, C., Vassura, G.: How Far is the Human Hand. A Review on Anthropomorphic Robotic end Effectors. University of Bologna, Bologna (2008) 10. Calli, B., Walsman, A., Singh, A., Srinivasa, S., Abbeel, P., Dollar, A.M.: Benchmarking in manipulation research: The YCB object and model set and benchmarking protocols. arXiv preprint arXiv:1502.03143 (2015) 11. Catalano, M.G., Grioli, G., Farnioli, E., Serio, A., Piazza, C., Bicchi, A.: Adaptive synergies for the design and control of the Pisa/IIT softhand. Int. J. Robot. Res. 33(5), 768–782 (2014)


R. Patel et al.


SKKU Hand Arm System: Hardware and Control Scheme

Dongmin Choi, Byung-jin Jung, and Hyungpil Moon(B)

Mechanical Engineering, Sungkyunkwan University, Suwon, Korea
{taredm,jbjsin,hyungpil}@skku.edu
http://ris.skku.edu

Abstract. In this work, we introduce the SKKU Hand Arm System I (SKKU-HAS-I), focusing on the hardware design and control scheme of its arm and hand. For the robot arm, a driving module unit is designed and a workspace analysis is performed. A Virtual Spring Damper based controller is applied to the arm for task control. The design of the robot hand mimics the human hand, and we perform an optimization over three design measures: workspace intersection volume, manipulability, and opposition angle. The developed robot hand is equipped with various sensors for contact information. Experimental results are provided for the evaluation of the developed robot hand.

Keywords: Hand Arm System · Anthropomorphic robotic hand · Lightweight arm · Robot joint module · Compliance control · Grasping

1 Introduction

In this paper, we introduce the hardware composition and the control scheme of the SKKU Hand Arm System I (SKKU-HAS-I). The main design concept of SKKU-HAS-I is the imitation of the dexterous manipulation ability of human hands. The hardware of the SKKU hand is designed based on an analysis of the human hand in terms of grasping quality. As the grasping quality measure, the intersection volume of the human fingers' workspaces is analyzed and adapted to the design of the robotic hand. Moreover, tactile and fingertip sensors are employed to collect grasping information from touch. The performance of the SKKU hand is evaluated with a grasp taxonomy comprising 18 grasp types. The design goal of the SKKU manipulator, on the other hand, is adaptability in field applications. For reliable operation, the manipulator is modularized into driving units that contain a motor, reducer, sensors, and drives. A workspace analysis for the manipulator determines the link lengths and the alignment of the driving units. Moreover, for intuitive robot teaching, a direct teaching method based on a Virtual Spring Damper (VSD) is applied in the control algorithm.

© Springer International Publishing AG, part of Springer Nature 2018
Y. Sun and J. Falco (Eds.): RGMC 2016, CCIS 816, pp. 117–135, 2018. https://doi.org/10.1007/978-3-319-94568-2_7


In the following, we describe the development of the SKKU arm in Sect. 2 and the arm control scheme in Sect. 3. In Sect. 4, we discuss the SKKU hand to complete the introduction of SKKU-HAS-I.

2 Development of SKKU Manipulator

2.1 Mechanical Design of SKKU Arm

The 6-DOF manipulator shown in Fig. 1 was designed and developed for the competition. A workspace analysis is performed to determine the link lengths of the manipulator. The main workspace for the task is defined as a cube with 550 mm edges, and the position limit of each joint is determined by the concept design of the joint structure, as shown in Table 1. The link lengths are determined by a numerical iteration over the overlap between the overall workspace region and the proposed cubic workspace. The kinematic relation between the joint positions and the tool center position of the 6-DOF manipulator with a roll-pitch-roll structure, with the link lengths left as unknowns, is used to generate a candidate overall workspace region at each iteration.

Fig. 1. SKKU Arm at the competition.

As a result, the determined kinematic parameters of the manipulator are described in Denavit-Hartenberg (DH) notation as shown in Table 2. A graphical representation of the overall workspace is illustrated in Fig. 2: the yellow transparent convex structure is the workspace of the manipulator, and the blue cube is the proposed collaboration area.
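The link-length iteration described above can be sketched as a Monte-Carlo coverage check: sample joint configurations, run the forward kinematics, and score how much of the 550 mm target cube the TCP reaches. The paper does not give this procedure explicitly, so the helper names, sample counts, and voxel resolution below are our own choices; the transform follows Craig's modified DH convention used in Table 2.

```python
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Link transform in Craig's modified DH convention."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([[ct,      -st,     0.0,   a],
                     [st * ca,  ct * ca, -sa, -sa * d],
                     [st * sa,  ct * sa,  ca,  ca * d],
                     [0.0,      0.0,     0.0,  1.0]])

def tcp_position(rows, q):
    """Forward kinematics: rows = [(a, alpha, d), ...], q = joint angles."""
    T = np.eye(4)
    for (a, alpha, d), qi in zip(rows, q):
        T = T @ dh_transform(a, alpha, d, qi)
    return T[:3, 3]

def cube_coverage(rows, limits, center, edge, bins=11, n=20000, seed=0):
    """Fraction of the target cube's voxels reached by sampled TCP poses."""
    rng = np.random.default_rng(seed)
    lo = np.array([l for l, _ in limits])
    hi = np.array([h for _, h in limits])
    corner = np.asarray(center, float) - edge / 2.0
    hit = np.zeros((bins, bins, bins), dtype=bool)
    for _ in range(n):
        p = tcp_position(rows, rng.uniform(lo, hi))
        u = (p - corner) / edge          # normalized cube coordinates
        if np.all((u >= 0.0) & (u < 1.0)):
            i, j, k = (u * bins).astype(int)
            hit[i, j, k] = True
    return hit.mean()
```

Candidate link lengths would then be scored by `cube_coverage` against the 550 mm cube and the best-covering set kept.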


Fig. 2. Workspace analysis of SKKU Arm. (Color figure online)

Table 1. Joint position limit of SKKU Arm

No. of joint   Position limit (rad)
1              ±π
2              ±π/2
3              ±π
4              ±(2/3)π
5              ±π
6              ±π/2

Table 2. DH parameters of SKKU Arm

Joint   a_{i−1} (m)   α_{i−1} (rad)   d_i (m)   θ_i (rad)
1       0             π/2             0.2617    θ1
2       0             π/2             0         θ2 − π/2
3       0             −π/2            0.3757    θ3
4       0             π/2             0         θ4
5       0             −π/2            0.3862    θ5
6       0             π/2             0         θ6
7       0             −π/2            0.1707    π/2


Proprioceptive sensors are selected for arm control and the task application. We use commercially available sensors to replicate the environment of a conventional manufacturing system. An encoder from US Digital is used to acquire the position and velocity of the motor, and a linear encoder from RSF is attached to the link side of the module to acquire the actual position and velocity of the link. For acquisition of joint torque information, a custom-built joint torque sensor is attached to the modules of the second and fourth joints; for the other joints, current-based joint torque estimation is applied. Moreover, a 3-axis accelerometer is attached at the tool center point (TCP) of the manipulator for acceleration tracking in Cartesian space.

Fig. 3. Structure of manipulator joint module.

Each joint is designed with a hollow-shaft structure for convenient wiring and to prevent faults caused by wire jamming. The structure of the module is illustrated in Fig. 3 (the numbers indicate the corresponding joint). The joint modules are aligned with the center line of the manipulator to preserve the symmetry of the dynamic characteristics. The alignment of the joint modules is illustrated in Fig. 4, and the overall developed manipulator is shown in Fig. 5.

Fig. 4. Alignment of manipulator joint module.

Fig. 5. SKKU Arm.

2.2 System Configuration

This section discusses the electrical hardware configuration for control of the developed manipulator. Because contact forces during task-related collisions are transferred within a few milliseconds, a control frequency approaching 1 kHz is required for robust performance. Moreover, because the wires of the system pass through the hollow shaft of the actuator, they must be isolated from noise. For this reason, an EtherCAT-based fieldbus network is used for communication between the controller and the control devices. The EtherCAT master consists of Ubuntu Linux with the EtherLab open-source EtherCAT controller; a Xenomai-based real-time kernel guarantees hard real-time control at up to a 1 kHz control frequency. The client for user input is located outside the EtherCAT network and is implemented as Qt-based software that connects to the EtherCAT master via TCP/IP socket communication.


The G-SOLWHI 20/100 from ELMO Motion Control Inc. is used as the motor drive of each joint. We use commercial motion controllers to guarantee robust thermal dissipation and to meet the following performance objectives:

1. position/current control of the actuator;
2. acquisition of the motor-side and link-side encoders;
3. acquisition and control of the motor current for torque estimation.

The acceleration data of the TCP (tool center point) is transferred to a data acquisition board as an analog signal. The data acquisition board, an NI-9144 from National Instruments Inc., translates the voltage level to the EtherCAT signal.
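Item 3 above, current-based torque estimation, amounts to scaling the measured motor current by the torque constant and gear ratio. A hedged sketch follows; the torque constant and efficiency used in the example are placeholder values, not the actual SKKU arm motor parameters.

```python
def joint_torque_from_current(i_motor, k_t, gear_ratio, efficiency=0.85):
    """Link-side torque estimate from motor current.

    i_motor:     measured motor current [A]
    k_t:         motor torque constant [Nm/A] (placeholder value in tests)
    gear_ratio:  reduction between motor and link
    efficiency:  assumed transmission efficiency (our assumption)
    """
    return k_t * i_motor * gear_ratio * efficiency
```

Such an estimate is only as good as the friction model hidden in the efficiency term, which is why the second and fourth joints carry dedicated torque sensors instead.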

3 Control System Integration of SKKU Arm

3.1 Integration of Motion Controller

Two types of joint motion controllers are integrated into the manipulator: a joint-space position controller and a Cartesian-space motion controller. For joint motion planning, a trajectory combining fifth-order polynomial and linear segments is used. The fifth-order polynomial segments provide the acceleration to the pre-defined velocity of the linear segment and the deceleration before the target position. The trajectory illustrated in Fig. 6 is one of the trajectories generated by this planning method.

Fig. 6. Trajectory for external torque estimation experiment (position, velocity, and acceleration vs. time).
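The blended trajectory described above can be sketched as follows. The function names, the blend-duration parameter, and the assumption that the goal lies above the start are our own; the quintic boundary conditions (matched position, velocity, and acceleration) are the standard construction.

```python
import numpy as np

def quintic(t_f, p0, pf, v0, vf, a0=0.0, af=0.0):
    """Coefficients (lowest order first) of a 5th-order polynomial matching
    position/velocity/acceleration at t = 0 and t = t_f."""
    A = np.array([[1, 0,    0,       0,        0,         0],
                  [0, 1,    0,       0,        0,         0],
                  [0, 0,    2,       0,        0,         0],
                  [1, t_f,  t_f**2,  t_f**3,   t_f**4,    t_f**5],
                  [0, 1,    2*t_f,   3*t_f**2, 4*t_f**3,  5*t_f**4],
                  [0, 0,    2,       6*t_f,    12*t_f**2, 20*t_f**3]], float)
    return np.linalg.solve(A, [p0, v0, a0, pf, vf, af])

def blended_profile(q0, qf, v_c, t_b, dt=1e-3):
    """Sampled position profile: quintic blend up to v_c, linear cruise,
    quintic blend down to rest.  Assumes qf > q0 and a move long enough
    that a cruise phase exists."""
    D = qf - q0
    d_b = v_c * t_b / 2.0           # distance covered in each blend
    t_c = (D - 2.0 * d_b) / v_c     # cruise duration
    acc = quintic(t_b, 0.0, d_b, 0.0, v_c)
    dec = quintic(t_b, D - d_b, D, v_c, 0.0)
    ts = np.arange(0.0, 2.0 * t_b + t_c + dt / 2.0, dt)
    q = np.empty_like(ts)
    for n, t in enumerate(ts):
        if t < t_b:                               # acceleration blend
            q[n] = np.polyval(acc[::-1], t)
        elif t < t_b + t_c:                       # constant-velocity cruise
            q[n] = d_b + v_c * (t - t_b)
        else:                                     # deceleration blend
            q[n] = np.polyval(dec[::-1], min(t - t_b - t_c, t_b))
    return ts, q0 + q
```

With these boundary conditions the velocity in each blend rises monotonically to the cruise value, so the profile never overshoots the commanded velocity.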

For Cartesian-space motion control, the VSD method is used [1–3]. VSD is a directional compliance control of a manipulator defined in Cartesian space, given by Eq. 1:

τ_des = J^T (P (x_des − x) − D ẋ + G(θ))    (1)

Here τ_des is a column vector containing the desired torques of the manipulator joints; x_des and x are column vectors containing the desired and current Cartesian positions of the TCP; P and D are diagonal matrices containing the Cartesian-space compliance parameters; and G(θ) is the gravity compensation term. Concisely, VSD is a projection of a Cartesian-space force into joint space. A control experiment based on VSD is shown in Fig. 7: the position of the TCP follows the trajectory indicated by the yellow arrow.

Fig. 7. VSD based position control. (Color figure online)
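A literal transcription of the VSD law in Eq. 1 is below. The function name and the gain values in the test are illustrative, and the gravity term is passed alongside the Cartesian spring-damper force exactly as Eq. 1 writes it.

```python
import numpy as np

def vsd_torque(J, x_des, x, x_dot, p_gains, d_gains, g):
    """Virtual Spring Damper law (Eq. 1):
    tau_des = J^T (P (x_des - x) - D x_dot + G(theta)).

    J:        task Jacobian (m x n)
    p_gains:  diagonal entries of the stiffness matrix P
    d_gains:  diagonal entries of the damping matrix D
    g:        gravity term, as written in Eq. 1
    """
    P, D = np.diag(p_gains), np.diag(d_gains)
    return J.T @ (P @ (x_des - x) - D @ x_dot + g)
```

For the 6-DOF arm, J would be the manipulator Jacobian at the TCP; stiffer task directions simply get larger entries in P.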

Moreover, for intuitive task application, a direct-teaching-based task planning method is applied. Initially, the manipulator remains in a free-floating mode in which only gravity compensation is applied. During motion planning, the human guides the manipulator to the operation points while the client monitors and saves these points. When position teaching is complete, the manipulator generates a fifth-order polynomial position trajectory between the Cartesian-space points and follows it under VSD control.

4 The Anthropomorphic Robotic Hand

Many robot hands have been developed with various mechanisms and shapes. Surpassing the functionality of the human hand is very difficult because common actuation systems have mechanical limitations, and most robot hands therefore fall into two main classes [19]. The first is the tendon-driven hand, in which the actuators are located outside the hand and connected to the joints by long wires. This approach makes the hand smaller, since the actuators are not inside it, and imposes fewer constraints on the kinematic configuration; however, the routing of the wires is complicated, and friction and wear of the wires are a disadvantage. The second is the direct motor-driven structure, which makes controller design easy but limits the joint configuration.

One of the tendon-driven robot hands is the Utah/M.I.T. Dexterous Hand. It has 16 DOF, and each joint is connected to two tendons, each driven by an actuator consisting of a pressure-controlling valve and low-stiction cylinders [13]. Around the same time, the Stanford/JPL hand was developed by Salisbury Robotics. This hand has three fingers and adopts a non-anthropomorphic design: the thumb is positioned opposite the other two fingers. The joints are driven by remotely located motors through steel cables, and an n + 1 tendon arrangement is used, with four tendons controlling the three joints of each finger [15]. With recent advances in technology, many robot hands come close to the human hand. The Shadow Dexterous Hand, built by the Shadow Robot Company, is human-hand sized with 20 actuated DOF [18]. Each finger moves in a human-like manner, but the pneumatic actuation system driving the tendons is very large compared with the hand itself. The ACT (Anatomically Correct Testbed) hand mimics the biomechanical features of the human hand and was developed based on its skeletal structure and ligaments [20]. Its movement closely resembles that of the human hand, but its control is very complicated because of its tendon network. Hirzinger et al. developed the anthropomorphic DLR Hand [10]. The finger design is based on the human hand, and each finger is actuated antagonistically using two motors and two elastic elements.
This VSA (Variable Stiffness Actuation) mechanism provides robustness against uncertain impacts. The other type of actuation mechanism is the direct motor-driven structure. The DLR Hand II is one of the most famous hands based on direct motor drive [5]. It has 13 DOF with four fingers, a 6-DOF force/torque sensor attached to each fingertip, and a differential mechanism realizing the MCP joint with two motors. The Gifu Hand II was built by Gifu University in Japan [14]. One of its special features is the tactile sensor: a distributed tactile sensor with 624 detecting points measures the pressure distribution when grasping an object, and a 6-axis force/torque sensor can also be mounted at each fingertip. The DLR/HIT Hand is the result of joint research between the German Aerospace Center and the Harbin Institute of Technology, based on the experience of the DLR Hand II [12]. It is made up of four fingers and an extra DOF for the palm, with a focus on modularization and miniaturization; it is closer to human-hand size than the DLR Hand II and was designed to reduce cost. The Barrett Hand, a product of Barrett Technology, is one of the best-known grippers [11]. It consists of three fingers and has four DOF in total: each finger has 1 DOF for flexion/extension, and the palm spread motion has 1 DOF. Because of this simple structure, complicated grasping problems can be simplified.

4.1 A Design Goal

For a service robot, the hand needs the ability to handle various objects, as the human hand does. Many tendon-driven robot hands have shapes and movements similar to those of the human hand, but they are at a disadvantage from the viewpoint of controller design. A multi-fingered gripper such as the Barrett Hand is useful when the robot only needs to grasp an object; on the other hand, such a gripper cannot provide dexterous manipulation because of its low DOF. Our design goal is therefore an anthropomorphic robot hand based on a direct motor-driven mechanism.

4.2 Analysis of the Human Hand

To develop an anthropomorphic robot hand, the human hand is a good reference: it can grasp almost anything and can manipulate objects in various tasks. For this reason, many researchers have designed robot hands to mimic the skeletal structure of the human hand. Much research exists on the human hand, including its anatomy [8]. The human hand has five fingers and weighs approximately 400 g. It is made up of bone, muscle, ligament, tendon, fascia, vasculature, and so on, all organized under the skin. Notably, we focus on its skeletal structure, because it determines the kinematic configuration of the hand and affects finger behavior. The bones consist of phalanges, metacarpals, and carpals. The five fingers fall into two categories: the thumb and the other four fingers. The thumb is made up of the DP (distal phalange), PP (proximal phalange), and MP (metacarpal phalange), connected by the IP (interphalangeal), MCP (metacarpophalangeal), and TM (trapeziometacarpal) joints. The other four fingers are slightly different: the MP (medial phalange) is located between the DP and the PP, and the MCP, PIP (proximal interphalangeal), and DIP (distal interphalangeal) joints connect the phalanges. The DIP and PIP joints act like hinges and can be modeled as revolute joints. The MCP joint is a kind of ellipsoid joint and provides various movements, including flexion/extension and adduction/abduction. Every joint has operational limits: flexion ranges from 0° to 90° for the MCP joint, from 0° to 110° for the PIP, and from 0° to 60° or 70° for the DIP. In addition, adjacent segment lengths follow a constant ratio of about 1:1.618 [4]. However, it is very difficult to describe the human hand exactly as a robot kinematic model.
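The 1:1.618 ratio between adjacent segment lengths [4] can be used to apportion phalanx lengths along a finger. The small sketch below assumes the distal phalanx is the shortest segment, which is our reading of the ratio, not something the text states.

```python
PHI = 1.618  # approximate ratio between adjacent phalanx lengths [4]

def phalanx_lengths(total_length, n_segments=3):
    """Split a finger of the given total length into segments whose
    adjacent lengths follow the 1:PHI ratio, ordered distal to proximal
    (distal-shortest is our assumption)."""
    weights = [PHI ** i for i in range(n_segments)]
    unit = total_length / sum(weights)
    return [unit * w for w in weights]
```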
The human hand has numerous degrees of freedom, and the movement of some joints, such as the MCP joint, is complicated. In [6], a simplified human hand model with 24 degrees of freedom is presented, together with the operational ranges of the joints. Although more degrees of freedom describe the behavior of the human hand better, they increase the complexity of the kinematic model; therefore, the design of the robot hand is based on the simplified model. In the simplified model, the STM and CMC joints can be neglected because their operational range is very small. Also, the PIP and DIP joints are strongly coupled [7], so the DIP joint is designed as a passive joint. As a result, the robot hand consists of a thumb with four active joints and three fingers, each with three active joints and one passive joint.

4.3 Design Parameters and Optimization

The kinematic model of the robot hand discussed above is not yet complete: we still need to determine the link lengths and the position/orientation of the thumb. These factors affect the performance of the robot hand, so finding optimal values is very important; our approach is explained in this section. To evaluate the performance of the robot hand, we define three performance indices: the intersection volume, the average manipulability in the intersection, and the opposition angle. The intersection volume is the volume of the intersection between the workspace of the thumb and those of the other fingers. A large intersection volume means that each finger can easily move to another position; since the robot hand sometimes must adjust its grasp to perform a complicated task, the intersection volume is a good performance index. To compute it, the workspace of each finger must be determined: a joint angle q_ij is uniformly sampled within its operational range, and the forward kinematics is computed for every sampled configuration; the set of fingertip positions x_i is then the workspace of the finger. For efficient computation, the workspace is described by an octree — a tree structure in which each node has eight children, used to partition 3D space [16]. The intersection volume is calculated directly from the number of bins occupying the same position. The second index is the average manipulability in the intersection. Yoshikawa presented the manipulability index as a quality measure for manipulators [21], defined as

Manipulability = √(det(J Jᵀ))    (2)

It describes the distance to singular configurations and shows the capacity for change in position and orientation of the end-effector given a joint configuration. The last index is the opposition angle; the most important element of human hand behavior is thumb opposability [17].
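Both indices can be sketched compactly. For brevity the sketch below uses a hash set of voxel keys in place of a true octree (the paper uses an octree [16]); the function names and the voxel size are our own choices.

```python
import itertools
import numpy as np

def finger_workspace(fk, limits, n=8):
    """Uniformly sample joint angles within limits; return fingertip points."""
    grids = [np.linspace(lo, hi, n) for lo, hi in limits]
    return np.array([fk(q) for q in itertools.product(*grids)])

def voxel_keys(points, voxel):
    """Quantize points onto a regular grid; the key set stands in for an octree."""
    return {tuple(np.floor(p / voxel).astype(int)) for p in points}

def intersection_volume(pts_a, pts_b, voxel=0.005):
    """Shared-voxel count times voxel volume approximates the
    intersection volume of two workspaces."""
    return len(voxel_keys(pts_a, voxel) & voxel_keys(pts_b, voxel)) * voxel**3

def manipulability(J):
    """Yoshikawa's measure (Eq. 2): sqrt(det(J J^T))."""
    return np.sqrt(np.linalg.det(J @ J.T))
```

Averaging `manipulability` over the configurations whose fingertips land in the shared voxels gives the second index.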
The human hand obtains its dexterity and versatility from thumb opposability, which is a major difference between humans and animals. The opposition angle is defined as the angle between the fingertip of the thumb and the other fingertips. The optimization process is based on a brute-force search, a common technique that enumerates all possible candidates and checks each one. Here, the length of each link and the orientation of the thumb are the candidate parameters, and we find an optimal configuration by checking the three performance indices. The resulting design is compared with the Gifu hand and the human hand in Tables 3, 4 and 5.

Table 3. A comparison result: intersection volume

             Index    Middle   Ring
SKKU Hand    378680   69563    389890
Gifu Hand    159590   223330   335540
Human Hand   347560   474240   627620

Table 4. A comparison result: average manipulability

             Index    Middle   Ring
SKKU Hand    2.39     1.34     0.33
Gifu Hand    1.51     1.24     0.97
Human Hand   2.74     2.70     2.59

Table 5. A comparison result: opposition angle

             Index    Middle   Ring
SKKU Hand    90.3     95.5     90.7
Gifu Hand    92.0     100.9    108.6
Human Hand   108.4    111.7    103.4

The design of the fingers is shown in Figs. 8a and b. All fingers have four joints, but the index, middle, and ring fingers have one passive joint. The DH parameters of the two types of fingers are given in Tables 6 and 7. For the adduction/abduction motion of the MCP joint, an actuator positioned in the palm is mechanically coupled through a four-bar linkage, as shown in Fig. 9a; because the input and output motions are identical, this relation needs no further analysis. The motion of the DIP joint, however, is more complicated: the DIP and PIP joints are coupled by a four-bar linkage like the MCP joint, but their relation is nonlinear. The position of the DIP joint can be determined by the four-bar linkage analysis as follows. As shown in Fig. 9c, θ4 can be expressed by

θ4 = β + ψ − θ3    (3)


(a) Design: Thumb  (b) Design: Index, Middle, Ring
Fig. 8. A finger design

Table 6. DH parameters of the thumb

i   α_{i−1}   a_{i−1}     d_i   θ_i
1   −π/2      0           0     θ1
2   π/2       0           0     θ2
3   0         65 mm       0     θ3
4   0         53.02 mm    0     θ4

where α = 70.68°, β = 160.82°, l0 = 10.28 mm, l2 = 22.70 mm, lb3 = 33 mm, and l3 = 12.18 mm. The loop-closure equation is

l0 + l2 = lb3 + l3.    (4)

Expanding the above vector equation into components,

l0 cos α + l2 cos φ = lb3 cos θ3 + l3 cos ψ
l0 sin α + l2 sin φ = lb3 sin θ3 + l3 sin ψ    (5)

Table 7. DH parameters of the fingers (index, middle, ring)

i   α_{i−1}   a_{i−1}   d_i   θ_i
1   −π/2      0         0     θ1
2   π/2       0         0     θ2
3   0         65 mm     0     θ3
4   0         33 mm     0     θ4

(a) Linkage mechanism in MCP joint  (b) Linkage mechanism in DIP joint  (c) An analysis of the 4-bar linkage
Fig. 9. 4-bar linkage mechanism

Squaring both sides of each component equation gives

l2² cos²φ = lb3² cos²θ3 + l3² cos²ψ + l0² cos²α + 2 lb3 l3 cos θ3 cos ψ − 2 l3 l0 cos ψ cos α − 2 lb3 l0 cos θ3 cos α
l2² sin²φ = lb3² sin²θ3 + l3² sin²ψ + l0² sin²α + 2 lb3 l3 sin θ3 sin ψ − 2 l3 l0 sin ψ sin α − 2 lb3 l0 sin θ3 sin α.    (6)

Adding the two equations of Eq. 6 and rearranging, we obtain

0 = A cos ψ + B sin ψ + C    (7)

with

A = 2 lb3 l3 cos θ3 − 2 l0 l3 cos α
B = 2 lb3 l3 sin θ3 − 2 l0 l3 sin α
C = lb3² + l3² + l0² − l2² − 2 lb3 l0 cos(θ3 − α).

Using the tangent half-angle substitution t = tan(ψ/2), cos ψ = (1 − t²)/(1 + t²), sin ψ = 2t/(1 + t²), this becomes

(C − A) t² + 2B t + (C + A) = 0
t = (B ± √(A² + B² − C²)) / (A − C)
ψ = 2 arctan((B − √(A² + B² − C²)) / (A − C))    (8)


Finally, θ4 is given by

θ4 = β + 2 arctan((B − √(A² + B² − C²)) / (A − C)) − θ3    (9)
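Equations (3)–(9) can be checked numerically. The sketch below evaluates θ4 over the PIP range with the linkage constants from the text and fits a straight line to the result; the branch choice (the minus sign of Eq. 8) follows the text, while the angle conventions of Fig. 9c are assumed.

```python
import numpy as np

# Linkage constants from the text (lengths in mm)
ALPHA, BETA = np.radians(70.68), np.radians(160.82)
L0, L2, LB3, L3 = 10.28, 22.70, 33.0, 12.18

def dip_angle(theta3):
    """theta4 (DIP) as a function of theta3 (PIP), via Eqs. (7)-(9)."""
    A = 2 * LB3 * L3 * np.cos(theta3) - 2 * L0 * L3 * np.cos(ALPHA)
    B = 2 * LB3 * L3 * np.sin(theta3) - 2 * L0 * L3 * np.sin(ALPHA)
    C = LB3**2 + L3**2 + L0**2 - L2**2 - 2 * LB3 * L0 * np.cos(theta3 - ALPHA)
    psi = 2.0 * np.arctan((B - np.sqrt(A**2 + B**2 - C**2)) / (A - C))
    return BETA + psi - theta3

# Linear fit over the nearly linear PIP range 0..60 deg
theta3 = np.radians(np.linspace(0.0, 60.0, 61))
theta4 = dip_angle(theta3)
slope, intercept = np.polyfit(theta3, theta4, 1)
```

With these constants the DIP angle runs from about 0° at PIP = 0° to roughly 49° at PIP = 60°, and the residual of the linear fit stays within a couple of degrees, consistent with the near-linearity claim that follows.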

The relation between the DIP and PIP joints is not linear; however, it is almost linear within the PIP range from 0° to 60°. We therefore approximate the DIP-PIP relation by a linear equation obtained by line fitting.

4.4 Actuator System

The actuation system consists of a BLDC (brushless DC) motor, a spur gear, and a harmonic drive gear, together with an incremental encoder and a motor driver for control. All BLDC motors are custom designed with 15 W power; a motor can produce up to 16.5 mNm, enough for a 25.7 N fingertip force. The spur gear is directly connected to the BLDC motor and transmits power to the harmonic drive gear; the total gear ratio between the motor and the output is 140:1. A magnet attached to the end of the motor shaft allows the position to be measured with a Hall-effect rotary encoder. The motor driver is also customized for our robot hand and consists of three parts: a communication module, a controller module, and a gate driver module. The communication module provides the EtherCAT slave interface, and the controller module performs the computation for the current control of the motor; these two modules are commercial products from Synapticon.

4.5 Sensors

Each fingertip is equipped with a force/torque sensor (16 mm diameter, 12 mm height) covered with a spherical shell. The sensor provides force and torque measurements over the CAN bus; a signal-processing circuit and an ADC (analog-to-digital converter) are integrated into it. The measurement ranges are 30 N for force and 988 N·mm for torque. In grasping, the contact force is an important quantity and should be controlled precisely for a stable grasp. The contact position can also be calculated from the geometry of the sensor and the force/torque measurement. For a power grasp, however, the fingertip force/torque sensor is not always useful: because it is placed on the fingertip, some contact forces cannot be measured. In this case joint torque sensing helps, so the robot hand is equipped with a strain-gauge-based joint torque sensor at each joint. Finally, a tactile sensor has human-like characteristics similar to human skin, and many tactile sensors provide distributed position and force measurements, which is very useful for dexterous manipulation. We developed such a tactile sensor for the robot hand: electrodes are located on the top and bottom layers, and when pressure is applied to the sensor surface the resistance changes, from which the contact position can be measured.
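The contact-position computation from the force/torque measurement can be sketched with the standard intrinsic-contact-sensing construction: for a point contact without spin torque, the measured torque satisfies τ = p × f, and intersecting the resulting line of possible contact points with the spherical shell gives p. The assumptions that the sensor frame sits at the sphere's center and that the contact force pushes inward (the branch choice) are ours, not stated in the text.

```python
import numpy as np

def contact_point(f, tau, radius):
    """Contact location on a spherical fingertip from a wrench measured
    at the sphere's center, assuming tau = p x f (point contact, no spin)."""
    f, tau = np.asarray(f, float), np.asarray(tau, float)
    f2 = f @ f
    p0 = np.cross(f, tau) / f2        # particular solution, perpendicular to f
    disc = radius**2 - p0 @ p0
    if disc < 0.0:
        raise ValueError("wrench inconsistent with the fingertip radius")
    t = -np.sqrt(disc) / np.sqrt(f2)  # branch: contact force pushes inward
    return p0 + t * f
```

Feeding in a wrench with both normal and tangential (friction) components recovers the same contact point, which is what makes the method usable under real grasping conditions.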

4.6 Evaluation of the Robot Hand

The final version of the robot hand has 13 degrees of freedom with four fingers. The total length of the robot hand is 248 mm and the weight is 1.83 kg. The size is slightly larger than the human hand, but there is a trade-off between power and size because of the actuation system. For evaluation, the robot hand should grasp an object stably, so we implemented a compliance controller. The control input is given by

τ = J^T (Kp (xd − x)) + g(q)    (10)

where J is the Jacobian of the finger, Kp is a diagonal matrix of Cartesian-space stiffness, g(q) is the gravity term, and xd and x are the desired and current positions of the fingertip. This equation is essentially the same as Eq. 1. As shown in Fig. 10, the robot hand grasps an object and the measured contact force is presented.

(a) An experiment with the compliance controller  (b) A contact force measurement
Fig. 10. Compliance control with force feedback

Because the robot hand is intended for housework, it should be able to grasp most objects from real life. Since there are innumerable such objects, we evaluated the robot hand against a grasp taxonomy, which classifies the set of different grasp types [9]. We tried every grasp type, and 18 grasp types could be performed; Fig. 11 shows the results. Dexterous manipulation is another important ability of a robot hand. Inserting a light bulb into a socket is a good example, and the robot hand can perform this task as shown in Fig. 12. The robot hand can also perform a pick-and-place task, as shown in Fig. 13, in a fully automated demonstration: a Kinect sensor provides point-cloud data as well as RGB images, the grasp planner computes an optimal grasp and an approach direction, and the hand/arm moves to the target position and grasps the object. Finally, Fig. 14 shows various tasks: drawing with a pen, spraying a board, and erasing the board. These tasks and objects can easily be seen all around us, and this experiment shows the potential of the robotic hand for housework.

Fig. 11. The grasp taxonomy and experimental results

Fig. 12. A manipulation task: inserting the electric bulb into the socket

Fig. 13. A manipulation task: pick and place

Fig. 14. A manipulation task: drawing with a pen, spraying, erasing

5 Discussions and Conclusions

The competition consisted of two stages: the first was picking objects from a basket, and the second was performing several manipulation tasks. Ultimately, our team could not complete many missions, and there are two main reasons why the robot did not work as planned. First, the robot hand is too big compared with the human hand. In the competition, all the objects are suited to ordinary people but awkward for the robot hand: when a small object such as a spoon lies on the table, the hand cannot grasp it because of its thick fingers and the spherical shape of the fingertips. Second, most parts of our system were made in the lab rather than purchased as commercial products. Using in-house parts has some benefits, but also many disadvantages: the robot sometimes failed to complete a manipulation task due to malfunctioning in-house parts, and we wasted a lot of time debugging the robot. Although our robot did not place in the upper ranks of the competition, we gained useful experience. We have shown that the robot hand and the robot arm work well, and our robot succeeded at several manipulation tasks. Building on this, we expect to improve our robot system.

References

1. Hogan, N.: Impedance control: an approach to manipulation: Part I. Theory, Part II. Implementation, Part III. Application. ASME Trans. Dynamic Syst. Meas. Control 107, 12–24 (1985)
2. Flash, T., Hogan, N.: The coordination of arm movements: an experimentally confirmed mathematical model. J. Neurosci. 5(7), 1688–1703 (1985)
3. Flash, T.: The control of hand equilibrium trajectories in multi-joint arm movements. Biol. Cybern. 57(4/5), 257–274 (1987)
4. Alexander, B., Viktor, K.: Proportions of hand segments. Int. J. Morphol. 28(3), 755–758 (2010)
5. Butterfass, J., Grebenstein, M., Liu, H., Hirzinger, G.: DLR-Hand II: next generation of a dextrous robot hand. In: IEEE International Conference on Robotics and Automation, vol. 1, pp. 109–114 (2001)
6. Cobos, S., Ferre, M., Sánchez-Urán, M.A., Ortego, J., Pena, C.: Efficient human hand kinematics for manipulation tasks. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2246–2251 (2008)
7. Cobos, S., Ferre, M., Sánchez-Urán, M.A., Ortego, J.: Constraints for realistic hand manipulation. In: Proceedings of the Presence, pp. 369–370 (2007)
8. Ezaki, M.: Atlas of hand anatomy and clinical implications. J. Bone Joint Surg. Am. 86(12), 2799–2800 (2004)
9. Feix, T., Romero, J., Schmiedmayer, H.-B., Dollar, A.M., Kragic, D.: The GRASP taxonomy of human grasp types. IEEE Trans. Hum. Mach. Syst. 46(1), 66–77 (2016)
10. Grebenstein, M., Chalon, M., Hirzinger, G., Siegwart, R.: Antagonistically driven finger design for the anthropomorphic DLR Hand Arm System. In: 10th IEEE-RAS International Conference on Humanoid Robots, pp. 609–616 (2010)
11. Hasan, M.R., Vepa, R., Shaheed, H., Huijberts, H.: Modelling and control of the Barrett Hand for grasping. In: 2013 UKSim 15th International Conference on Computer Modelling and Simulation, pp. 230–235 (2013)
12. Liu, H., Meusel, P., Hirzinger, G., Jin, M., Liu, Y., Xie, Z.: The modular multisensory DLR-HIT-Hand: hardware and software architecture. IEEE/ASME Trans. Mechatron. 13(4), 461–469 (2008)
13. Jacobsen, S., Iversen, E., Knutti, D., Johnson, R., Biggers, K.: Design of the Utah/M.I.T. dextrous hand. In: Proceedings of the IEEE International Conference on Robotics and Automation, vol. 3, pp. 1520–1532 (1986)
14. Kawasaki, H., Komatsu, T., Uchiyama, K.: Dexterous anthropomorphic robot hand with distributed tactile sensor: Gifu Hand II. IEEE/ASME Trans. Mechatron. 7(3), 296–303 (2002)
15. Loucks, C., Johnson, V., Boissiere, P., Starr, G., Steele, J.: Modeling and control of the Stanford/JPL hand. In: Proceedings of the IEEE International Conference on Robotics and Automation, vol. 4, pp. 57–578 (1987)
16. Meagher, D.: Geometric modeling using octree encoding. Comput. Graph. Image Process. 19(2), 129–147 (1982)
17. Napier, J.R., Tuttle, R.H.: Hands. Princeton University Press, Princeton (1993)
18. Rothling, F., Haschke, R., Steil, J.J., Ritter, H.: Platform portable anthropomorphic grasping with the Bielefeld 20-DOF Shadow and 9-DOF TUM hand. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2951–2956 (2007)
19. Salisbury, J.: Active stiffness control of a manipulator in Cartesian coordinates. In: 19th IEEE Conference on Decision and Control Including the Symposium on Adaptive Processes, pp. 95–100 (1980)
20. Vande Weghe, M., Rogers, M., Weissert, M., Matsuoka, Y.: The ACT Hand: design of the skeletal structure. In: IEEE International Conference on Robotics and Automation, vol. 4, pp. 3375–3379 (2004)
21. Yoshikawa, T.: Manipulability of robotic mechanisms. Int. J. Robot. Res. 4(2), 3–9 (1985)

A Robotic System for Autonomous Grasping and Manipulation

Mingu Kwon1(B), Dandan Zhou1, Shuo Liu2, and Hao Zhang1

1 Dorabot Inc., Shenzhen, China
{mgkwon,dd,hao}@dorabot.com
2 University of California Merced, Merced, CA, USA
[email protected]

Abstract. A robotic system that consists of only a gripper can be useful for certain applications, such as assisting disabled people. With a robot manipulator added to the system, however, it can accomplish far more, such as automating manufacturing and logistics processes. The autonomous track of the IROS 2016 Robotic Grasping and Manipulation Competition was designed to bring robotic systems into ordinary everyday tasks involving grasping and manipulation. The main objective of this paper is to evaluate our autonomous robotic system by comparing its performance against a manually operated system in terms of intelligence and robustness. We used a UR5, a Dorabot-Hand2 and a RealSense SR300 to build an autonomous system for grasping and manipulation. The system was evaluated on ten manipulation tasks and a pick-and-place task. Overall performance was below that of the manual system; however, for tasks involving repetitive motion, the automated system outperformed the manual one.

Keywords: Manipulation · Pick and Place · Grasp planning

1 Introduction

Robots are capable of achieving precision and speed in simple and repetitive tasks. In many manufacturing and logistics processes, humans have already been replaced by robots, whose performance in these settings is far superior. However, for complicated tasks such as grasping an item from a basket and placing it on a table, humans outperform robots. Although the intelligence of robots has improved significantly due to the success of deep learning, it still lags far behind humans in control, manipulation and vision in terms of computation speed and quality of the outcome. The track consists of two sub-tracks that evaluate the performance of our system on ordinary everyday tasks. In this chapter, we briefly describe our system and its implementation details. We then discuss our view on the autonomous track and how an autonomous system performs against a manual system. Finally, we discuss possible future improvements.

© Springer International Publishing AG, part of Springer Nature 2018. Y. Sun and J. Falco (Eds.): RGMC 2016, CCIS 816, pp. 136–145, 2018. https://doi.org/10.1007/978-3-319-94568-2_8

2 Hardware

Our hardware system, shown in Fig. 1, is based on a UR5 manipulator developed by Universal Robots, a Dorabot-Hand2 developed by Dorabot Inc., and a RealSense SR300 developed by Intel. The UR5 is a 6-DoF manipulator with a maximum payload of 5 kg and a workspace with a radius of 850 mm. The SR300 provides RGB-D data, which is integrated into our custom vision system.
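The quoted UR5 specifications can serve as a first feasibility filter before any planning is attempted. The sketch below is illustrative only (the function name and the base frame at the origin are our own assumptions; a real check would also consider joint limits and self-collisions):

```python
def within_ur5_limits(target, payload_kg, base=(0.0, 0.0, 0.0)):
    """Rough feasibility check against the UR5 specs quoted above:
    850 mm reach and 5 kg maximum payload. `target` and `base` are
    (x, y, z) positions in meters relative to the robot base frame.
    """
    dist = sum((t - b) ** 2 for t, b in zip(target, base)) ** 0.5
    return dist <= 0.85 and payload_kg <= 5.0
```

A pose 0.54 m from the base with a 1 kg load passes this filter, while a pose 0.9 m away or a 6 kg payload does not.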

Fig. 1. Robot system.

Dorabot-Hand2, shown in Fig. 2, is a dexterous tendon-driven gripper. It consists of three modular fingers. Each finger has three degrees of freedom and consists of four phalanges. Each phalanx carries an angle sensor and a tactile sensor. The angle sensors are used to update the robot model in the planning scene in real time, and the tactile sensors provide grasp feedback. Each finger is underactuated and driven by two servo motors, which simplifies control while retaining the advantages of the mechanism. The hand is controlled by two micro-controllers.

3 Software

3.1 Framework

The Robot Operating System (ROS) is a framework with a collection of tools and libraries that allows a robot system to be built quickly [8]. It manages processes (nodes) and the communication between them. We chose ROS for its familiarity, simplicity and scalability.

Fig. 2. Dorabot-Hand2.

3.2 Manipulation

MoveIt! is open-source software that provides capabilities such as kinematics, motion planning and manipulation [4]. MoveIt! supports different plug-ins for inverse kinematics; we used a ROS metapackage called TRAC-IK [1]. It is reliable and fast, especially when joint limits are involved. It runs two solvers concurrently: one implemented with the open-source Orocos Kinematics and Dynamics Library (KDL) using Newton's method for convergence [10], and the other implemented with Sequential Quadratic Programming (SQP) nonlinear optimization using quasi-Newton methods [2]. For motion planning, we used the Open Motion Planning Library (OMPL), which provides various state-of-the-art sampling-based planners [11]. It is readily available with MoveIt! and Flexible Collision Library (FCL) integration [5]. We used the Lower Bound Tree Rapidly-exploring Random Tree (LBT-RRT), an asymptotically near-optimal variant of the Rapidly-exploring Random Tree (RRT) [7,9]. It combines the RRT and Rapidly-exploring Random Graph (RRG) algorithms and guarantees convergence to a solution within a constant factor [6]. MoveIt! is already integrated with OMPL, FCL and TRAC-IK, which are provided as plug-ins. Using the move_group interface within MoveIt! allows easy and quick implementation of a task pipeline. However, instead of using the interface directly, we wrote a separate package that provides similar functionality but allows more control over trajectory planning. For example, we can merge planned trajectories, pre-plan before execution, smooth and re-sample paths before time-parameterization, check the manipulability of a kinematic solution, and change constraints dynamically. Our main focus was to provide options for controlling path quality, velocity, and acceleration in order to achieve the best stability during task execution.
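One of these post-processing options, merging pre-planned segments, can be sketched as follows. The function name and data layout (a path as a list of joint-angle tuples) are illustrative, not the actual package API:

```python
def merge_trajectories(trajs):
    """Concatenate consecutive planned joint-space paths into a single
    path, dropping duplicated junction waypoints, so that several
    pre-planned segments can be executed as one trajectory without
    stopping in between.
    """
    merged = list(trajs[0])
    for t in trajs[1:]:
        if merged and t and merged[-1] == t[0]:
            merged.extend(t[1:])  # skip the repeated junction waypoint
        else:
            merged.extend(t)
    return merged
```

After merging, the combined path would still need to be time-parameterized before being sent to the controller.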

3.3 Vision System

Our system included a deep-learning recognition algorithm using RGB-D data, implemented with a convolutional neural network (CNN). We acquired a large amount of labeled RGB-D training data using the RealSense SR300 camera. For the pick-and-place items that were revealed prior to the competition, we successfully trained the network and were able to perform segmentation. However, since the remaining models were not provided before the competition, we could not use this algorithm. Instead, we designed a quick and simple solution: using Height Accumulated Features (HAF) grasping [3], our vision node determines multiple graspable areas from point clouds. Rather than evaluating each graspable candidate against complicated criteria, we used the pose that was highest in the z-direction among the poses near the center of the bin, because the gripper and the robot wrist are too large to perform manipulation near the bin walls.

3.4 Gripper Control

Each finger of the Dorabot-Hand2 is under-actuated: it has two actuated degrees of freedom, and the remaining motion is dictated by the natural dynamics of its mechanism, which makes control much easier. The control system consists of two Arduino micro-controllers: one drives the six servo motors, and the other acquires tactile information from each phalanx and angular displacement from each joint. For the competition, we did not use these sensors because of the simplicity of the tasks. Instead, we programmed states such as "Grab", "Release" and "Open", which keeps the control system very simple. One disadvantage of this method, however, is that each state must be recalibrated after a period of time due to the deformation of the tendons.
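The state-based control can be sketched as a lookup from state names to servo targets, with an offset table standing in for the periodic recalibration. All numeric values here are hypothetical placeholders, not the hand's real calibration data:

```python
# Hypothetical servo targets (degrees) per named hand state;
# the real values come from calibration of the tendon-driven hand.
STATE_TARGETS = {
    "Open":    [0, 0, 0, 0, 0, 0],
    "Grab":    [120, 120, 120, 120, 120, 120],
    "Release": [60, 60, 60, 60, 60, 60],
}

def servo_commands(state, calibration_offsets=None):
    """Map a named hand state to six servo angles. The offsets
    compensate for tendon deformation over time; in the described
    system they had to be re-measured periodically.
    """
    targets = STATE_TARGETS[state]
    offsets = calibration_offsets or [0] * 6
    return [t + o for t, o in zip(targets, offsets)]
```

For example, a worn tendon on the first finger could be compensated by a per-servo offset passed at run time.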

4 Implementation

4.1 Manipulation

4.1.1 System Overview
As shown in Fig. 3, for each task a scheduler node schedules stored goal poses and gripper actuation. To reach a given goal, a grasp pose is first determined by grasp planning from the object pose. The inverse-kinematics node then converts the goal from Cartesian space to joint space, the motion-planning node plans a collision-free path, and the parameterization node turns the path into a trajectory that is fed to the UR5 controller. The scheduler waits for feedback from the controller and then initiates the next action.

4.1.2 Planning Technique
The scope of the manipulation tasks is the evaluation of the gripper. Gripper performance is evaluated with the following criteria:


Fig. 3. System flowchart for manipulation.

– Stability under the inertial forces created by fast movements with heavy objects
– Robustness in handling and manipulating objects of different shapes and sizes
– Accuracy when performing tasks that require precise manipulation

Trajectories are planned with the objective of maximizing performance in these aspects, and their optimization should exploit the full capability of the gripper. For example, the salt shaker task requires a fast up-down movement that may cause the shaker to escape from the gripper; we therefore designed the downward trajectory to be curved so that the shaker locks within the fingers and cannot escape. The trajectories are planned ahead of execution. Since we know the location of the objects and the start/goal poses of each trajectory, this is possible, and it eliminates planning time between trajectories. For repetitive looping motions, the initially planned trajectories are stored and then fed to the controller until the motion ends.
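The curved downward trajectory can be illustrated by bowing a straight-line interpolation sideways with a sine bump, so that the lateral offset vanishes at both endpoints. The geometry and parameter values below are a simplified stand-in for the actual planned path:

```python
import math

def curved_descent(start, end, bulge=0.05, n=10):
    """Interpolate a curved path between (x, y, z) positions in meters.

    The path bows sideways in x by `bulge` meters at its midpoint, so
    that during a fast downward move a grasped object is pressed
    against the fingers instead of sliding straight out of the grasp.
    """
    pts = []
    for i in range(n + 1):
        t = i / n
        # straight-line interpolation between start and end
        p = [s + t * (e - s) for s, e in zip(start, end)]
        # sine bump: lateral offset, zero at both endpoints
        p[0] += bulge * math.sin(math.pi * t)
        pts.append(tuple(p))
    return pts
```

Because both the start and goal of each motion are known beforehand, such waypoint lists can be generated, planned, and stored before execution, exactly as described above for the repetitive shaking motion.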


4.1.3 Performance
Our system successfully completed seven of the ten tasks. This result shows that the gripper can withstand the inertia of fast motions and is very robust in grasping different types of objects. The details for each task are listed below:

1. Use a spoon to scoop peas. This task was completed smoothly. The difficulty is maintaining the orientation of the spoon in the hand, because the handle of the spoon is roughly cylindrical. In the dry runs the hand occasionally could not lift the spoon along the global Z-axis due to actuation and object-placement errors. However, we completed the task most of the time, and no accidents occurred in the competition.

2. Grasp a towel on the table and hang it on a rack. We achieved nearly 100% success in all the tests we ran, thanks to two factors: a layer of silicone gel on each fingertip and the tendon structure. The silicone gel ensured enough friction to pick up deformable objects such as the towel, while the tendon-driven mechanism guaranteed enough force on the towel while picking it up.

3. Use a spoon to stir water in a cup. We performed well in this task, which is similar to scooping peas in that it only requires the hand to interact with the spoon. With careful operation, no disturbance force acted on the spoon, its orientation in the hand was maintained, and possible collisions were avoided. As long as the hand can grasp the spoon, this task can be completed with ease.

4. Shake out salt from a salt shaker to a defined location. This task was somewhat challenging because a large amount of salt had to be shaken out. The grasping and manipulation components were relatively easy, but shaking out the salt required the arm to accelerate and decelerate at a very high frequency. The salt was caked together in the shaker, so we had to shake for around 100 iterations to get enough salt out.

5. Grasp a plug and insert it into a socket. We completed only part of this task in the competition: we were successful at unplugging but did not complete the insertion. This is due to the under-actuated, tendon-driven design of the joints: when an external force acts on the mechanical system of the gripper, the joint orientation changes.

6. Hammer a nail. We could not complete this task in the competition. The payload of the gripper was not sufficient for such a heavy object, and grasping the hammer by the head, as we did in the manual track, was hard to repeat under computer control. A small error is likely to result in task failure and might even break the hand, for instance if the hand does not close fully and drops the object, or if the arm does not move properly. After considering the risks, we decided to skip this task.


7. Insert a straw into a to-go cup with a lid. This task was challenging in the manual track but simple in the autonomous track. In the manual track, the hand is controlled to open or close all three fingers simultaneously. In the autonomous track, however, we could execute a more complicated grasp plan: close the middle finger first, then use the outside of the middle finger and the inside of a side finger to pinch the straw. This way, we can hold the straw very tightly and insert it into the to-go cup with a gentle push.

8. Put on or remove bolts from nuts with a nut driver. This task is straightforward; the only difficulty was accurately placing the nut driver onto the bolt. Taking advantage of the human-interaction rule, we used a joystick and human feedback to align the nut driver with the bolt. After placing it, we handed control back to the computer and let it control the arm to finish screwing.

9. Fully extend and then fully press the syringe. Fully extending and pressing the syringe requires considerable force. We were able to grasp the syringe by wrapping the fingers around its body, but the success rate was not high. To improve it, we switched to a tighter grasp plan, which applies significantly more force on the syringe and greatly improved our performance in this task.

10. Use scissors to cut a piece of paper in half along a line. Cutting paper requires more than just grasping: it involves complicated in-hand manipulation that our gripper was not designed for. Although we used the same hand to finish this task successfully in the manual track, achieving such accuracy under computer control while interacting with the environment was not possible at that time. In the end, we decided to give up on this task.

Due to software limitations, we gave up on the hammer and scissors tasks.
We were only able to complete part of the plug task, which was within our expectations. In general, we successfully finished all attempted tasks, so we are satisfied with the result. For future implementations, a torque sensor could provide force feedback to correct the motion in tasks that involve physical contact between the grasped item and other objects; this would also eliminate the need for remote control of the robot. Alternatively, we could replace the UR5 with an arm capable of torque control.

4.2 Pick and Place

4.2.1 System Overview
As shown in Fig. 4, the first step of the scheduler is to activate the gripper. At the same time, it looks into the bin and tries to find a candidate object that can be grasped. If such an object is found, the scheduler calculates a motion plan, moves the hand to the grasp location, and performs the grasp. The manipulator then moves to the place zone and drops the object. After completing the whole process, the scheduler loops back and tries to pick a new object.
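The scheduler loop just described can be sketched as follows, with plain callables standing in for the real vision, planning, and controller nodes; only the control flow is meant to match the system, not the actual interfaces:

```python
def pick_and_place_cycle(find_candidate, plan_motion, execute, drop_zone):
    """One iteration of the pick-and-place scheduler.

    `find_candidate` returns a grasp pose or None, `plan_motion` turns
    a target pose into an executable plan, and `execute` sends a plan
    to the arm controller. Returns True if the cycle should repeat.
    """
    grasp_pose = find_candidate()
    if grasp_pose is None:
        return False                    # nothing graspable: stop looping
    execute(plan_motion(grasp_pose))    # reach into the bin and grasp
    execute(plan_motion(drop_zone))     # carry the object to the drop zone
    return True                         # scheduler loops back for a new pick
```

In the real system each of these callables corresponds to a ROS node, and the scheduler keeps calling the cycle until the vision node finds no further candidates or time runs out.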


Fig. 4. System flowchart for Pick and Place.

4.2.2 Planning Technique
The technique is identical to the manipulation technique of Sect. 4.1, with the exception of vision. As mentioned in the previous section, because the object models were not provided for training prior to the competition, we did not use the segmentation algorithm we had prepared; instead, we implemented a quick solution during the competition based on HAF. HAF provides a list of graspable poses with a score for each. For us, the location of a graspable pose relative to the bin was more important than the HAF score, because our gripper and the robot wrist were too large to maneuver freely inside the bin. We therefore selected a pose near the middle of the bin regardless of its score; when several poses were near the center, we used the highest one in the z-direction. Since the vision node lacks segmentation and recognition capabilities, the system cannot drop the special items into their specified drop zones; instead, it placed every object into a common drop zone.
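This pose-selection heuristic can be written down directly: ignore the HAF scores, prefer candidates near the bin center, and take the highest in z. The tuple layout and the radius value below are our own illustrative choices, not parameters from the actual system:

```python
def select_grasp_pose(poses, bin_center, radius=0.08):
    """Pick a grasp pose from HAF-style candidates.

    `poses` is a list of (x, y, z, score) tuples; scores are ignored.
    Candidates within `radius` meters of the bin center (in x-y) are
    preferred, and among those the highest pose in z wins, since the
    gripper and wrist are too large to maneuver near the bin walls.
    Falls back to all candidates if none is near the center.
    """
    def near_center(p):
        return ((p[0] - bin_center[0]) ** 2 +
                (p[1] - bin_center[1]) ** 2) ** 0.5 <= radius

    central = [p for p in poses if near_center(p)] or poses
    return max(central, key=lambda p: p[2])
```

Note how a high-scoring candidate near the bin wall loses to a lower-scoring but centered and taller one, which is exactly the trade-off described above.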


4.2.3 Performance
Our system was able to pick only four objects out of the basket and place them into the drop zone. First, our vision node was not capable of object recognition; instead of grasping based on recognizing the object, we used height features gathered directly from the point cloud to guide the grasping process. Second, our gripper could not maneuver flexibly within the basket due to the size of the UR5 wrist joints, and we failed multiple runs because the arm collided with the basket. Third, the system did not use tactile information, so incorrect grasping forces were applied to objects. Failed lifts that still exerted force on objects also caused most of them to end up lying flat on the bottom of the basket, which further complicated collision checking because, for safety reasons, we intentionally prevented the hand from colliding with the bottom of the basket. There are many improvements to be made for the pick-and-place task in the autonomous track. With additional time, we would have achieved better performance by building a complete vision-based system, i.e., one that segments the scene, recognizes each object, and retrieves grasps from a pre-calculated database.

5 Discussion

5.1 The Meaning of the Autonomous Track

In comparison with the manual track, the autonomous track is less focused on the capability of the hand and more on the intelligence and robustness of the control system. From this perspective, the same hardware should be provided by the organizer or a sponsor instead of each team bringing its own robot. In our opinion, the function of most manipulators is identical: the input is usually an end-effector pose and the output is a motion trajectory moving the end-effector to that pose. We propose changes to our hand design so that the hand can be easily duplicated using 3D printing. Indeed, many difficulties remain in developing a fair and meaningful competition; both the committee and the participating teams need to reflect and work together on improving it.

An alternative goal of the autonomous track is to have robots work in ways similar to humans. Examining the results of the autonomous track together with our existing knowledge of perception, planning and control, we conclude that robots can provide precision and speed in simple and repetitive tasks, whereas for tasks that involve complicated perception, computationally heavy planning, and demanding control, humans still outperform robots. This outstanding gap is what needs to be overcome, and it is what gives the autonomous track its meaning.

5.2 Future Work: Robustness vs. Intelligence

Our system, especially in the manipulation track, uses hard coding instead of sensing. Taking advantage of the accuracy of the UR5, we can repeatedly and successfully perform most of the tasks in a fixed environment. So why do we need intelligence? In fact, a significant number of hard-coded robots have been deployed in industry over many years, and they have achieved great success in replacing human labor and in finishing tasks that are beyond human abilities. However, this is not the goal. Nowadays, with the fast development of artificial intelligence, we are no longer satisfied with position-controlled robots and expensive fixturing. We want robots to think and adapt as humans do; we want them to sense the world and react to it. It is true that a robust system can already be useful, but only in simple settings and simple environments. By adding intelligence, we can break this limit and bring robots closer to everyday life. To achieve this, we need to improve the sensing capabilities of the robot, as mentioned in the previous sections. A torque sensor would improve the robot's physical interaction with its surroundings; tactile sensors and a better visual-perception algorithm would improve grasping performance. These give the robot more room to think, make decisions, and take a step closer to becoming a more intelligent being. In conclusion, intelligence and robustness are not opposed to each other; indeed, a prerequisite for an intelligent system is a robust system.

References

1. Beeson, P., Ames, B.: TRAC-IK: an open-source library for improved solving of generic inverse kinematics. In: IEEE-RAS International Conference on Humanoid Robots, Seoul, Korea, November 2015
2. Boggs, P.T., Tolle, J.W.: Sequential quadratic programming. Acta Numerica 4, 1–51 (1995)
3. Fischinger, D., Weiss, A., Vincze, M.: Learning grasps with topographic features. Int. J. Robot. Res. 34(9), 1167–1194 (2015)
4. Sucan, I.A., Chitta, S.: MoveIt! http://moveit.ros.org/
5. Pan, J., Chitta, S., Manocha, D.: FCL: a general purpose library for collision and proximity queries. In: IEEE International Conference on Robotics and Automation, May 2012
6. Kala, R.: Rapidly exploring random graphs: motion planning of multiple mobile robots. Adv. Robot. 27(14), 1113–1122 (2013)
7. LaValle, S.M., Kuffner, J.J.: Rapidly-exploring random trees: a new tool for path planning. In: Proceedings of the IEEE International Conference on Robotics and Automation, pp. 473–479 (1999)
8. Quigley, M., Conley, K., Gerkey, B.P., Faust, J., Foote, T., Leibs, J., Wheeler, R., Ng, A.Y.: ROS: an open-source robot operating system. In: ICRA Workshop on Open Source Software (2009)
9. Salzman, O., Halperin, D.: Asymptotically near-optimal RRT for fast, high-quality motion planning. CoRR abs/1308.0189 (2013)
10. Smits, R.: KDL: Kinematics and Dynamics Library. http://www.orocos.org/kdl
11. Şucan, I.A., Moll, M., Kavraki, L.E.: The Open Motion Planning Library. IEEE Robot. Autom. Mag. 19(4), 72–82 (2012). http://ompl.kavrakilab.org

Improving Grasp Performance Using In-Hand Proximity and Contact Sensing

Radhen Patel, Rebeca Curtis, Branden Romero, and Nikolaus Correll(B)

University of Colorado Boulder, Boulder, CO 80309, USA
[email protected]

Abstract. We describe the grasping and manipulation strategy that we employed in the autonomous track of the Robotic Grasping and Manipulation Competition at IROS 2016. A salient feature of our architecture is the tight coupling between visual (Asus Xtion) and tactile (Robotic Materials) perception to reduce uncertainty in sensing and actuation. We demonstrate the importance of tactile sensing and reactive control during the final stages of grasping using a Kinova robotic arm. The tools and algorithms for object grasping presented here have been integrated into the open-source Robot Operating System (ROS). We focused exclusively on the manipulation aspect (Track 1) of the competition, as the bin-picking task (Track 2) would require a different perception strategy focused more on object identification.

1 Introduction

Grasping and manipulation tasks are system-level problems that require tight integration of mechanism design, perception, and planning. In a nutshell, a robot has to locate an object, plan and execute a grasp, and finally apply sufficient constraints to the object so that it remains in the robot's hand. If the task goes beyond simple pick-and-place and requires further manipulation of the object, the robot also needs to consider the pose of the object. Choosing a perception system, a suitable end-effector, and a feasible plan is a co-design problem that has been dramatically facilitated by the emergence of standardized platforms such as the PR2 robot and Rethink Robotics' Baxter, and of open-source software such as ROS, OpenCV and MoveIt! [11]. Yet, only very few system-level grasping and manipulation studies exist, notably platforms presented at the Amazon Picking Challenge [13], the autonomous butler Herb [36], the PR2 [6], and other service robots that include manipulation for delivery, assembly or gardening tasks [8,12,24]. These studies are important because the components of a grasping and manipulation system are difficult to benchmark in isolation. Specifically, it is often unclear exactly what assumptions have been made and how changes in these assumptions would affect the reliability and robustness of the system. At the task level, it is difficult to choose tasks that are representative of a wide range of real-world manipulation tasks. For example, it is possible to score well in a pick-and-place competition for boxed items by exclusively focusing on items that can be retrieved using suction, a strategy that will not work at all for the same articles after unboxing.

© Springer International Publishing AG, part of Springer Nature 2018. Y. Sun and J. Falco (Eds.): RGMC 2016, CCIS 816, pp. 146–160, 2018. https://doi.org/10.1007/978-3-319-94568-2_9

The First Grasping and Manipulation Competition at the International Conference on Intelligent Robots and Systems (IROS) challenged the community to solve a wide variety of grasping and manipulation tasks, ranging from simple bin-picking to complex sequences of pick-and-place tasks. The competition rules promote general solutions by only combining scores achieved with the same hand. In this spirit, we have developed a comprehensive autonomous grasping solution around a Kinova Jaco 7-DoF robotic arm, an RGB-D sensor (Asus Xtion), and a three-fingered hand (Kinova) equipped with proximity and tactile sensors (Robotic Materials). The resulting system combines deliberate planning with reactive control using an intricate grasp state machine whose transitions are driven by 3D perception and tactile events.

1.1 Related Work

We provide a brief overview of related work in the subfields that comprise grasping and manipulation. Which hand mechanism design is most suitable for a large variety of tasks remains an open question. At one end of the spectrum are anthropomorphic hands with multiple degrees of freedom [2–4]; at the other end are simple one-degree-of-freedom prehensors [1], underactuated devices [17], and soft robotic hands [16,19], which are made entirely of soft and compliant materials or structures rather than rigid parts. Although intuition suggests that a robotic end-effector's versatility is related to its level of anthropomorphism, existing devices have been unable to accurately recreate the features of the human hand, making simple, easier-to-control designs competitive. Planning for grasping and manipulation tasks has traditionally been studied using two distinct approaches: knowledge-based approaches and analytic approaches. The former is based on empirical studies of human grasping and manipulation [15], while the latter is based on physical models, that is, the interactions between the hand and the grasped object are modeled in terms of motions and forces using the laws of physics [28]. However, each approach has its own disadvantages. As the mechanical and sensorial mechanisms of the human hand are difficult to reproduce, and it is still unclear how sensing and actuation interact, knowledge-based approaches are of limited use [5]. It is also unclear how to generalize human-inspired grasps to novel objects. Although analytic approaches may allow a robot to reason about how to grasp a certain object by itself, the abstractions made to keep the analysis tractable result in models that are often applicable only to simulations or carefully structured laboratory experiments [39]. Due to the limitations of the knowledge-based and analytic approaches, machine learning as a solution to these tasks has been on the rise.
Methods vary from observing how humans grasp an object and reducing the configuration space of the robot to find pregrasp postures [10], learning potential grasp points from 2D images [35], and learning via reinforcement and imitation learning [25], to learning graspable and non-graspable objects via 2D and 3D features [31]. Given the time frame, the known nature of the tasks, and the competition rules, we ignore the problem of grasp generation and hard-code strategies that work well for the competition tasks and for the mechanical and sensorial capabilities of our hand. We believe this approach generalizes well, assuming that such strategies can be associated with an object beforehand and that perception algorithms for semantic labeling exist. Designing a perception system for grasping depends strongly on the end-effector choice, the variety of objects that need to be grasped, and the environment the robot needs to operate in [40]. For example, whether objects will be grasped using suction or require careful alignment with a gripper imposes very different requirements on perception. Similarly, methods that compute grasps based on the perceived geometry of an object might work very well for a large number of objects, but might fail with amorphous objects, for example a net of tennis balls [13]. Finally, whether the objects are placed nicely on a table, cluttered, or occluded will dramatically change the difficulty of the problem. Some approaches assume complete or partial knowledge of the object to synthesize a grasp hypothesis [18,27], while others assume no prior knowledge of the object whatsoever [7]. Uncertainty in pose estimation above a critical threshold results in a failed grasp regardless of the perception approach. Only when execution is robust to uncertainties in sensing and actuation can a grasp succeed with high probability. A number of approaches use contact, tactile, or visual feedback during grasp execution to adapt to unforeseen situations [20,22]; these approaches increase robustness under uncertainty via a feedback mechanism.
Such feedback can be obtained from visual, pressure, force-torque, or proximity sensors [23]. In this work, we build on results from [29,30,32], which use proximity, distance, and dynamic tactile sensing to detect different grasp events and to increase the robustness of the overall process with respect to uncertainty in 3D perception.

2 Task Specification

The autonomous track consisted of two stages: pick-and-place and manipulation. All tasks had to be performed fully autonomously, that is, without human intervention. The pick-and-place stage required contestants to design a system that would pick a set of objects and then place them into designated areas autonomously. The majority of the objects could be placed within their designated area without constraints on their orientation; a few objects, for example the hammer and the scissors shown in Fig. 1, had to be placed in a specific orientation. The set consisted of ten objects chosen from a set of twenty objects [9] that were disclosed before the competition. These objects were randomly placed within a shopping basket (Fig. 1), and the contestants were allotted 30 min to perform the task. Each successful placement was rewarded with five points, for a maximum of fifty points. The manipulation stage consisted of ten tasks (Fig. 2) that varied in difficulty. The ten tasks were selected from a pool of 18 tasks and were divided into four

Improving Grasp Performance Using In-Hand Proximity


Fig. 1. Left: Objects and their predefined locations for the track-2, stage-1 pick-and-place task. Right: Basket containing all the objects.

levels based on difficulty. Completing one of the four level-one tasks was rewarded with ten points, one of the three level-two tasks with twenty points, one of the two level-three tasks with thirty points, and the single level-four task with forty points. As a result, a maximum of 200 points could be achieved.
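The point structure above can be sanity-checked with a short calculation; the task counts and point values are taken from the rules as stated:

```python
# Points awarded per task at each difficulty level, and tasks per level,
# as described in the competition rules above.
level_points = {1: 10, 2: 20, 3: 30, 4: 40}
level_tasks = {1: 4, 2: 3, 3: 2, 4: 1}

# Maximum achievable score if every task is completed.
max_score = sum(level_points[lvl] * level_tasks[lvl] for lvl in level_points)
print(max_score)  # → 200
```

This confirms the stated maximum of 200 points (40 + 60 + 60 + 40).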

3 Technical Approach

We developed a comprehensive autonomous grasping solution around a Kinova Jaco 7-DoF robotic arm, an RGB-D sensor (Asus Xtion), and a Kinova three-fingered hand with proximity, contact, and force sensors (Robotic Materials). The resulting system combines deliberate planning with reactive control using an intricate grasp state machine whose transitions are driven by 3D perception and tactile events. In particular, we developed a general-purpose software pipeline composed of several independent nodes that perform specific tasks such as eye-to-hand calibration, object recognition and tracking, and kinematic control and planning of the arm (Fig. 3), in the form of a Robot Operating System (ROS) package, which is available open-source¹.

3.1 Calibration

To initialize the system, the user first needs to calibrate the RGB-D camera. Our system allows the user to place the camera in a position suitable for their needs rather than rigidly attaching it to a single location. While this allows the system to quickly adapt to a variety of tasks that require different perspectives, mobility adds uncertainty to the model since the sensor's location in space is unknown. In order to find the transformation between the camera and robot frames, we rigidly mounted an augmented reality (AR) tag to the wrist joint of the Jaco arm

¹ https://github.com/correlllab/cu-perception-manipulation-stack.


Fig. 2. Manipulation tasks from the competition. Clockwise, starting top left. Level I: tasks (i) Scooping peas, (ii) Stirring, (iii) Salt shaking. Level II: tasks (iv) Towel picking, (v) Plugging and unplugging USB lights, (vi) Hammering nails, (vii) Straw inserting. Level III: tasks (viii) Nut fastening, (ix) Syringe pumping. Level IV: tasks (x) Paper cutting.

Fig. 3. A flow-chart depicting the various components of our system.


(Fig. 4, left). Once the AR tag is visible to the sensor, the system can estimate the transform between the sensor and the AR tag. Since the position of the wrist joint is known to our model, the system can then estimate the position of the sensor in space using forward kinematics. As a result, we are able to obtain a calibrated scene with an offset error of about 3 cm. The calibrated scene in RViz is shown in Fig. 4, right. This error is larger than what is reported for state-of-the-art camera calibration methods [38], but it allowed us to perform calibration of the intrinsic camera parameters off-line (using the openNI tool²) and to quickly and robustly obtain the transformation from camera to robot frame at the competition site.
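The transform chain behind this procedure can be sketched with homogeneous matrices: the camera pose in the robot base frame follows from the forward-kinematics wrist pose, the known tag-on-wrist mount, and the tag pose estimated by the camera. A minimal numpy illustration (all numeric poses below are hypothetical, not from the actual system):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(a):
    """Rotation about the z-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Known from forward kinematics: wrist pose in the robot base frame.
T_base_wrist = make_T(rot_z(0.5), [0.3, 0.1, 0.4])
# Known by construction: AR tag pose in the wrist frame (rigid mount).
T_wrist_tag = make_T(np.eye(3), [0.0, 0.0, 0.05])
# Estimated by the AR-tag tracker: tag pose in the camera frame.
T_cam_tag = make_T(rot_z(-1.2), [0.1, -0.2, 0.8])

# Camera pose in the base frame: chain the transforms, inverting the camera-to-tag one.
T_base_cam = T_base_wrist @ T_wrist_tag @ np.linalg.inv(T_cam_tag)

# Sanity check: mapping the tag origin through the camera agrees with the kinematic chain.
assert np.allclose(T_base_cam @ T_cam_tag[:, 3], (T_base_wrist @ T_wrist_tag)[:, 3])
```

With `T_base_cam` in hand, any point detected in the camera frame can be expressed in the robot base frame for grasp planning.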

Fig. 4. Left: Position of the camera relative to the robot. Right: Model scene after calibration in RViz.

3.2 Perception

Our perception strategy is built around the Point Cloud Library (PCL) [34], which processes the depth data received from the ASUS Xtion. In each task, the objects lie on a table or flat surface that fills a large portion of the field of view of the depth sensor. We first segment out this tabletop using random sample consensus (RANSAC) [21], a simple non-deterministic outlier detection method. Filtering the tabletop out of the point cloud greatly reduces the number of points and leaves gaps between the remaining objects that assist in segmentation. Using Euclidean distance, neighboring points are clustered together to form separate objects, assuming that they are sufficiently spaced apart. Objects too close together, such as a stack of blocks, are segmented using secondary features such as color. These segmented objects are then matched to known object templates in the database using 3D feature detectors and labeled accordingly (e.g., cup, plate, bowl).

Similar to 2D object recognition, 3D object recognition relies on finding characteristic key points and matching them to a database. Features based on surface normals are reliable since they take similar values when computed for the same surface of an object in different point clouds and at different orientations. The normal of each point is calculated by taking the nearest neighbors within a defined radius to find the tangent plane. The perpendicular vector of that plane

² http://wiki.ros.org/openni_launch/Tutorials/IntrinsicCalibration.


pointing towards the camera is the normal; the vector pointing away from the camera would not be visible to the sensor and can be discarded. An example point cloud of a cup with computed normals and the corresponding feature histogram is shown in Fig. 5. Next, we compare the detected features with our database using the Signature of Histograms of Orientations (SHOT) descriptor [37]. Histograms are computed over the orientations of normals in a sphere or 3D volume and then grouped together using their intersection to form the local descriptor. Like the well-known Scale-Invariant Feature Transform (SIFT) [26] for 2D object recognition, SHOT is robust to occlusion and rotation and can be used to determine orientation. One big advantage of 3D object recognition over 2D is the ability to use the depth data provided by the camera for estimating location. This additional location information is used to calculate grasping orientations and to avoid collisions. Once the camera location is found relative to the robot arm, a simple transformation yields the object's pose relative to the robot for the grasping and manipulation described later. All parameters of our processing pipeline are accessible in a user interface, allowing us to fine-tune them to lighting conditions and changes in camera pose in the competition environment (Fig. 6).
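The tabletop-removal step described above can be illustrated with a plain-numpy RANSAC plane fit. The actual system uses PCL's implementation; the thresholds, iteration count, and synthetic scene below are hypothetical:

```python
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.01, rng=np.random.default_rng(0)):
    """Fit a plane to `points` (N,3) with RANSAC; return a boolean inlier mask."""
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Sample three points and form the candidate plane through them.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # degenerate (collinear) sample, skip it
            continue
        n /= norm
        # Point-to-plane distances; keep the candidate with the most inliers.
        dists = np.abs((points - p0) @ n)
        mask = dists < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# Synthetic scene: a flat tabletop (z = 0) plus a small object cluster above it.
rng = np.random.default_rng(1)
table = np.c_[rng.uniform(-0.5, 0.5, (500, 2)), np.zeros(500)]
obj = rng.normal([0.1, 0.0, 0.08], 0.01, (60, 3))
cloud = np.vstack([table, obj])

inliers = ransac_plane(cloud)
objects = cloud[~inliers]          # tabletop filtered out, object points remain
```

Removing the dominant plane leaves only the object points, which can then be clustered by Euclidean distance as described in the text.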

Fig. 5. 3D object recognition of a cup. Clockwise, starting top left. (i) Point cloud of cup from the YCB Dataset, (ii) Green arrows display normals computed for a select few points in the cloud, (iii) Viewpoint Feature Histograms (VFH) showing the similarity of the model cup with the new cup [33] (Color figure online)
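The matching idea behind descriptors such as VFH and SHOT, comparing histograms of normal orientations, can be shown in miniature. This toy sketch uses a single global reference direction and histogram intersection, a deliberate simplification of what those descriptors actually compute; all data are synthetic:

```python
import numpy as np

def orientation_histogram(normals, ref, bins=16):
    """Histogram of angles between unit surface normals and a reference direction."""
    cosines = np.clip(normals @ ref, -1.0, 1.0)
    angles = np.arccos(cosines)
    hist, _ = np.histogram(angles, bins=bins, range=(0, np.pi))
    return hist / hist.sum()       # normalize so histograms are comparable

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical distributions."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(0)
ref = np.array([0.0, 0.0, 1.0])

# Normals of a "model" object, the same object seen again (small noise),
# and a differently shaped object.
model = rng.normal([0, 0, 1], 0.1, (400, 3))
same = rng.normal([0, 0, 1], 0.1, (400, 3))
other = rng.normal([1, 0, 0], 0.1, (400, 3))
for s in (model, same, other):
    s /= np.linalg.norm(s, axis=1, keepdims=True)

sim_same = histogram_intersection(orientation_histogram(model, ref),
                                  orientation_histogram(same, ref))
sim_other = histogram_intersection(orientation_histogram(model, ref),
                                   orientation_histogram(other, ref))
```

The matching view scores a much higher intersection than the differently shaped object, which is the property the database lookup relies on.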

3.3 Control

The control node of our system controls the arm through two modes: Cartesian control and velocity control. The mode chosen at a particular time step depends on the action being executed. In particular, we split control into two distinct actions, approach and search. The former deals with large-scale movements that put the end-effector in the vicinity of the object of interest, while the latter uses feedback from the finger sensors to place the end-effector at the optimal position for manipulation by searching for salient features of the object. Tasks typically start with a Cartesian motion. First, the arm must approach the appropriate object specified by the task, so once the perception node gives the


Fig. 6. Left: Calibrated view of the experimental setup and the Jaco2 arm as seen in RViz. Right: Custom-made fingers with integrated proximity and tactile sensors on the Jaco2 arm.

pose estimate of an object, the Cartesian controller uses inverse kinematics to plan a trajectory, which is then executed to position the arm appropriately. Note that the position is specified as offsets and rotations from the object centroid, based on manual experimentation. Once executed, the task enters search mode to get into position to grasp the object properly and then closes the hand. If the task requires further large-scale movements, e.g., moving a spoon to a bowl, the Cartesian control mode is activated again.

Limitations in the perception system, due to noise from the RGB-D sensor and miscalibration, lead to uncertainty in the object's pose. Because of this uncertainty, relying exclusively on open-loop position control may lead to collisions or failed execution of the task, for example failing to grasp a spoon because it is not within reach. To deal with this uncertainty, the Cartesian controller positions the arm at a safe offset from the feature of interest and then uses velocity control to search for a task-relevant feature, for example the handle of a spoon. Once the feature is detected, which we discuss in more detail below, the system proceeds with the appropriate action, such as grasping or pushing the object. If the feature is not found during the search, the subtask is restarted.

Sensor Feedback. We use two distinct channels of information from the finger sensors (proximity and contact) within our feedback controller. Passing the non-linear sensor input through a high-pass filter with a 20 Hz cut-off frequency [29] allows us to detect contact, which appears as extrema in the high-pass signal. The resulting signals are roughly equivalent to the SA-I (slow adaptive, small receptive fields) and FA-I (fast adaptive, small receptive fields) signals in the human hand, that is, constant pressure and dynamic tactile events, respectively [29].
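This SA-I/FA-I split can be approximated with a first-order discrete high-pass filter. The sketch below uses the 20 Hz cut-off from the text and a hypothetical 100 Hz sample rate and signal:

```python
import numpy as np

def high_pass(x, fs=100.0, fc=20.0):
    """First-order discrete high-pass filter (RC approximation)."""
    dt = 1.0 / fs
    rc = 1.0 / (2.0 * np.pi * fc)
    alpha = rc / (rc + dt)
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        # Pass through changes in x; let slowly varying components decay.
        y[i] = alpha * (y[i - 1] + x[i] - x[i - 1])
    return y

# Simulated raw proximity signal: slow drift (the SA-I-like component) plus a
# sharp jump at sample 50 when the finger touches the object.
t = np.arange(200)
raw = 0.01 * t + np.where(t >= 50, 5.0, 0.0)

fa = high_pass(raw)                  # FA-I-like channel: transients only
contact_sample = int(np.argmax(np.abs(fa)))
```

The contact event appears as the dominant extremum of the filtered channel at the sample where the jump occurred, matching the event structure shown in Fig. 7.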
After calibrating the sensors by fixing the base value of the non-linear and surface-dependent sensory input moments before executing the grasp, values above and below specific thresholds are considered object and contact


detection events, respectively (Fig. 7). Pseudocode for both event detectors is provided in Algorithms 1 and 2.

Fig. 7. Sensor values (analog readings) versus time for the SA-I (blue) and FA-I (pink) channel equivalents from the first finger on the Jaco arm. The gradual increase in the SA-I channel corresponds to an object detection event. The first peak in the FA-I channel corresponds to the contact event. A drop in the SA-I channel corresponds to the object separation event. The second, downward peak in the FA-I channel is the release event. (Color figure online)

Algorithm 1. Touch detection
1: function detect_touch(current_FAI_finger, y)
2:   touch ← current_fingers_touch
3:   FAI ← [current_FAI_finger1, current_FAI_finger2, current_FAI_finger3]
4:   for finger ← 1 to 3 do
5:     if FAI[finger] < −threshold & current_finger_touch == False then
6:       touch[finger] ← True
7:     if FAI[finger] > threshold & current_finger_touch == True then
8:       touch[finger] ← False

4 Results

In this section we describe how the finger sensors and perception pipeline facilitated grasping and manipulation using object recognition, contact point detection, and pose estimation for the ten competition tasks. Combining 3D perception with proximity information greatly increased the robustness of our manipulation approach by mitigating calibration error and sensor noise.


Algorithm 2. Object detection
1: function detect_object(current_SAI_finger, y)
2:   detected ← current_object_detect
3:   SAI ← [current_SAI_finger1, current_SAI_finger2, current_SAI_finger3]
4:   for finger ← 1 to 3 do
5:     if SAI[finger] < −threshold & current_object_detect == False then
6:       detected[finger] ← True
7:     if SAI[finger] > threshold & current_object_detect == True then
8:       detected[finger] ← False
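Algorithms 1 and 2 share the same hysteresis structure, so both can be expressed as one per-finger thresholding routine. A direct Python translation (the threshold value is hypothetical):

```python
def update_events(values, state, threshold):
    """Per-finger hysteresis thresholding, as in Algorithms 1 and 2.

    `values` holds the filtered sensor readings for the three fingers; `state`
    holds the current True/False event flags, updated in place and returned.
    """
    for finger in range(3):
        if values[finger] < -threshold and not state[finger]:
            state[finger] = True       # contact made / object detected
        elif values[finger] > threshold and state[finger]:
            state[finger] = False      # release / object separation
    return state

touch = [False, False, False]
touch = update_events([-0.8, 0.1, 0.2], touch, threshold=0.5)
assert touch == [True, False, False]   # finger 1 reports contact
touch = update_events([0.9, 0.1, 0.2], touch, threshold=0.5)
assert touch == [False, False, False]  # finger 1 released
```

The two opposite-signed thresholds give the detector hysteresis, so sensor noise near a single threshold cannot cause rapid flickering between the contact and release states.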

Tasks i and ii both required perceiving the thin, narrow spoon handle. Without the finger sensors, failure modes include positioning the end-effector too far away from the spoon or running into the spoon and thereby changing its position. The proximity information from the sensors enabled us to position the end-effector correctly for grasping the spoon, using hard-coded search routines around the estimated position. Once the hand was in position, it was moved to make contact with the spoon. If the robot continued to move after initial contact, the spoon could get displaced, leading to an empty grasp. The contact/release information from the sensors indicated when the fingers made contact with the spoon and terminated the motion of the hand in a timely manner. The spoon was then securely grasped by closing the fingers in a controlled manner. The remaining steps were straightforward to execute via prerecorded motions: task i required simple motions to scoop peas and deposit them, and task ii required stirring the contents of a cup. With a proper orientation of the spoon after grasping, both tasks were easily completed.

Grasping a straw out of a cup (task vi) was similar to grasping the spoon. Using the proximity information from the sensors we could correctly locate the straw in space. The sensors' high sensitivity allowed us to identify the touch event before the grasp started to displace the straw and to successfully pick it up. It was difficult, however, to insert the straw into the plastic cup through the small opening in the lid due to the comparably large error in perception (3–5 cm), and we did not add a step that uses the sensors to properly locate the cup [14].

Unlike the aforementioned tasks, task iii, grasping and shaking a salt dispenser, was trivial in terms of perception and grasping. Dynamic manipulation, on the other hand, proved difficult for the Kinova robot.
Sufficient jerk to release salt from the shaker could not be achieved within the limits of the arm. Using wrist rotation instead of moving the entire arm led to the best results, but still dispensed the salt at a very slow rate, making the task take a long time to complete. For task iv, the Kinova hand was able to create sufficient force closure with the USB light to pull it out of a USB connector in the socket. Plugging the connector back in was difficult due to the lack of stiffness in the hand and in the light


itself, which was made from a flexible material. Solving this task successfully requires grasping the light as close as possible to its stiffest part and then using repeated trial and error or additional optical sensing. Picking up a hammer and punching nails into a foam block (task v) was challenging due to the weight of the hammer and the lack of stiffness in the Kinova hand.

Inserting a screwdriver into a nut (task vii) again emphasized precision. Picking up the screwdriver was relatively simple; however, correctly inserting the driver into the nut was not possible with our setup. Although trial and error based on an initial estimate of the nut's pose is a viable strategy, the nut does not have a large enough area suitable for self-alignment. In addition, the rotation of the screwdriver is crucial to catching the nut in order to apply a rotational force. The limited resolution of our perception pipeline does not provide enough information to align these items properly for manipulation.

Similar to removing the USB night light, charging and emptying a syringe with air (task ix) was a test of the robot hand's ability to apply a strong pinch grasp. Since this task requires two arms, participants with a single robot arm were allowed to have a team member hold the syringe while the robot pulled the syringe handle.

Picking up a towel and hanging it onto a hanger (task iv) was straightforward, as this proved to be a simple pick and place. Here, the challenge was picking up the towel very close to the table surface. Proximity information in the fingers allowed us to stop the arm above the table at a distance that was safe enough for the fingers not to brush against the table and reliable enough to grab the towel.

The final and most difficult task was taking a pair of scissors and cutting a paper along predefined lines (task viii).
The lines on the paper first had to be identified, which we did using a standard line-detection algorithm. Picking up the scissors was facilitated by the handle hanging over the table. The challenging part was orienting the scissors correctly to cut along the lines. The fingers of the Jaco arm could not comply with the shape of the scissors (i.e., bend such that the hand does not lose grip of the scissors while repeatedly opening and closing them), resulting in a loss of contact with the fingers when either opening or closing the scissors. We focused exclusively on the manipulation aspect of the competition, as the bin-picking task would require a different perception strategy focused on object identification.

5 Discussion

A key insight in addressing a wide variety of tasks in a competitive environment was that 3D perception, mechanical compliance, and tactile sensing complement each other, and deficiencies in one can, to some extent, be made up for by the others. Indeed, many teams were able to solve a majority of the tasks without using any perception, relying exclusively on mechanical compliance and hard-coded


positions of objects. Analogously, humans might be able to perform tasks without tactile sensing or while blind-folded, but it is the combination of the two that makes them most efficient. Indeed, better 3D perception and calibration might have allowed us to forgo tactile sensing altogether; likewise, some of the tasks could have been accomplished using exclusively in-hand proximity and contact sensing. It may be this redundancy that lets the community focus mostly on thoroughly exploring single sensing modalities rather than exploring comprehensive solutions that combine 3D perception, tactile sensing, and mechanical compliance.

We also learned valuable lessons on how to specify competition rules in order to push the community toward generalizable outcomes. Bin picking and tabletop manipulation are indeed sufficiently different problems that the system presented here was not able to solve tasks in both categories, albeit mainly due to different requirements in perception. The competition rules permitted the use of tools to lift the objects, allowing one team to use a series of foam blocks with adhesive tape: the robot picked up the tool (foam block) first and then delivered it to the object for adhesion. This is an interesting solution, which uses compliance in a smart way and would lead to acceptable outcomes in some constrained scenarios, but it generalizes poorly to household manipulation tasks.

As in the Amazon Picking Challenge [13], proximity and tactile sensing were underrepresented in the competition. Although we greatly benefited from the availability of contact and touch information, all of the tasks could be solved by relying on accurate pose estimation and compliance. The limitations of this approach are best illustrated by the towel manipulation task. Here, most teams let their robot's hands run into the table in order to make sure they were close enough to the towel.
While this worked for this task, the force exerted by compliant robots might lead to undesired outcomes in other environments, and excessive use of such strategies is unlikely to carry over to real-world applications.

Some of the tasks demonstrated the need for dynamic control strategies. Specifically, position- and velocity-based controllers are not sufficient for tasks like emptying the salt shaker, which requires accurate control of jerk. Similarly, undoing a plug leads to significant jerk, which disturbs the environment. The requirements on dynamic control are therefore two-fold: first, the ability to specify not only position and velocity, but also acceleration profiles; second, high-bandwidth impedance control, usually available only in expensive industrial robot arms, is not a luxury but is critical for safety in operations involving quickly changing loading conditions.

The largest source of error was calibration. This includes the intrinsic camera parameters, but also finding the translation and rotation from the ASUS Xtion to the base of the arm. While there exist more powerful calibration strategies than the one chosen here [38], and we could also permanently mount the camera to the robot's arm frame, we note that different tasks require different camera perspectives. We therefore consider calibration an open problem, and we are interested in exploring solutions that augment object localization and pose


calibration using tactile sensing [14], as well as using the 3D model of the robot itself to add data points to the calibration process.

All of the tasks in this competition could be solved without any motion planning; that is, all motions were executed by simply commanding the robot to a Cartesian pose, assuming that a collision-free trajectory exists. As this cannot be assumed in a real-world application, we plan to integrate the solution presented here with the motion planning framework MoveIt! [11]. This is less straightforward than it sounds, as the discrete planning approach customary in motion planning does not integrate smoothly with continuous feedback control; how to do this properly is a subject of further research.

6 Conclusion

We have presented a comprehensive perception and manipulation pipeline that combines 3D perception with proximity and tactile sensing using exclusively commercially available hardware. All software developed for this project is available open-source (see footnote 1) and continues to be expanded. We have shown that in-hand proximity and tactile sensing can dramatically improve the robustness of a large variety of grasping and manipulation tasks in the face of uncertainty in sensing and actuation, and we argue that these sensing modalities are critical for performing robust manipulation in the real world. Challenges that remain toward this end are: (1) increasing the accuracy of orientation estimation of objects and the efficiency of 3D perception for larger sets of objects, (2) better integration of deliberative and reactive control strategies, and (3) improved mechanism design allowing compliance and stiffness to be controlled, in order to manipulate heavy objects as well as those that require deformation of the hand.

References

1. Baxter Robot Grippers, Rethink Robotics. http://www.rethinkrobotics.com/accessories/
2. Bebionic v2 Brochure, RSL Steeper. http://www.rslsteeper.com/uploads/files/159/bebionic-ukrow-product-brochurersllit294-issue-21.pdf
3. ILIMB User Manual, Touch Bionics. http://www.touchbionics.com
4. Michelangelo prosthetic hand. http://www.ottobockus.com/prosthetics/upperlimb-prosthetics/solution-overview/michelangelo-prosthetic-hand/
5. Balasubramanian, R., Xu, L., Brook, P.D., Smith, J.R., Matsuoka, Y.: Physical human interactive guidance: identifying grasping principles from human-planned grasps. IEEE Trans. Rob. 28(4), 899–910 (2012)
6. Bohren, J., Rusu, R.B., Jones, E.G., Marder-Eppstein, E., Pantofaru, C., Wise, M., Mösenlechner, L., Meeussen, W., Holzer, S.: Towards autonomous robotic butlers: lessons learned with the PR2. In: 2011 IEEE International Conference on Robotics and Automation (ICRA), pp. 5568–5575. IEEE (2011)
7. Bone, G.M., Lambert, A., Edwards, M.: Automated modeling and robotic grasping of unknown three-dimensional objects. In: IEEE International Conference on Robotics and Automation, ICRA 2008, pp. 292–298. IEEE (2008)


8. Breuer, T., Macedo, G.R.G., Hartanto, R., Hochgeschwender, N., Holz, D., Hegger, F., Jin, Z., Müller, C., Paulus, J., Reckhaus, M., et al.: Johnny: an autonomous service robot for domestic environments. J. Intell. Robot. Syst. 66(1–2), 245–272 (2012)
9. Calli, B., Walsman, A., Singh, A., Srinivasa, S., Abbeel, P., Dollar, A.M.: Benchmarking in manipulation research: the YCB object and model set and benchmarking protocols. arXiv preprint arXiv:1502.03143 (2015)
10. Ciocarlie, M.T., Allen, P.K.: Hand posture subspaces for dexterous robotic grasping. Int. J. Robot. Res. 28(7), 851–867 (2009)
11. Coleman, D., Sucan, I., Chitta, S., Correll, N.: Reducing the barrier to entry of complex robotic software: a MoveIt! case study. J. Softw. Eng. Robot. Spec. Issue Best Pract. Robot Softw. Dev. 5(1), 3–16 (2014). http://arxiv.org/abs/1404.3785
12. Correll, N., Arechiga, N., Bolger, A., Bollini, M., Charrow, B., Clayton, A., Dominguez, F., Donahue, K., Dyar, S., Johnson, L., et al.: Indoor robot gardening: design and implementation. Intel. Serv. Robot. 3(4), 219–232 (2010)
13. Correll, N., Bekris, K.E., Berenson, D., Brock, O., Causo, A., Hauser, K., Okada, K., Rodriguez, A., Romano, J.M., Wurman, P.R.: Analysis and observations from the first Amazon Picking Challenge. IEEE Trans. Autom. Sci. Eng. (2016)
14. Cox, R., Correll, N.: Merging local and global 3D perception using contact sensing. In: AAAI Spring Symposium on Interactive Multi-Sensory Object Perception for Embodied Agents, Stanford, CA (2017)
15. Cutkosky, M.R., Howe, R.D.: Human grasp choice and robotic grasp analysis. In: Venkataraman, S.T., Iberall, T. (eds.) Dextrous Robot Hands, pp. 5–31. Springer, New York (1990). https://doi.org/10.1007/978-1-4613-8974-3_1
16. Deimel, R., Brock, O.: A novel type of compliant and underactuated robotic hand for dexterous grasping. Int. J. Robot. Res. (2015). https://doi.org/10.1177/0278364915592961
17. Dollar, A.M., Howe, R.D.: The highly adaptive SDM hand: design and performance evaluation. Int. J. Robot. Res. 29(5), 585–597 (2010)
18. Dune, C., Marchand, E., Collowet, C., Leroux, C.: Active rough shape estimation of unknown objects. In: 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3622–3627. IEEE (2008)
19. Farrow, N., Li, Y., Correll, N.: Morphological and embedded computation in a self-contained soft robotic hand. arXiv preprint arXiv:1605.00354 (2016)
20. Felip, J., Morales, A.: Robust sensor-based grasp primitive for a three-finger robot hand. In: 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1811–1816. IEEE (2009)
21. Fischler, M.A., Bolles, R.C.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24(6), 381–395 (1981)
22. Hsiao, K., Chitta, S., Ciocarlie, M., Jones, E.G.: Contact-reactive grasping of objects with partial shape information. In: 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1228–1235. IEEE (2010)
23. Hsiao, K., Nangeroni, P., Huber, M., Saxena, A., Ng, A.Y.: Reactive grasping using optical proximity sensors. In: IEEE International Conference on Robotics and Automation, ICRA 2009, pp. 2098–2105. IEEE (2009)
24. Knepper, R.A., Srinivasa, S.S., Mason, M.T.: Hierarchical planning architectures for mobile manipulation tasks in indoor environments. In: 2010 IEEE International Conference on Robotics and Automation (ICRA), pp. 1985–1990. IEEE (2010)
25. Kroemer, O., Detry, R., Piater, J., Peters, J.: Combining active learning and reactive control for robot grasping. Robot. Auton. Syst. 58(9), 1105–1116 (2010)


26. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60(2), 91–110 (2004)
27. Marton, Z.C., Pangercic, D., Blodow, N., Kleinehellefort, J., Beetz, M.: General 3D modelling of novel objects from a single view. In: 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3700–3705. IEEE (2010)
28. Miller, A.T., Allen, P.K.: GraspIt!: a versatile simulator for robotic grasping. IEEE Robot. Autom. Mag. 11(4), 110–122 (2004)
29. Patel, R., Canardo Alastuey, J., Correll, N.: Improving grasp performance using in-hand proximity and force sensing. In: International Symposium on Experimental Robotics (ISER), Tokyo, Japan (2016)
30. Patel, R., Correll, N.: Integrated force and distance sensing for robotic manipulation using elastomer-embedded commodity proximity sensors. In: Robotics: Science and Systems, Ann Arbor, MI (2016)
31. Rao, D., Le, Q.V., Phoka, T., Quigley, M., Sudsang, A., Ng, A.Y.: Grasping novel objects with depth segmentation. In: 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2578–2585. IEEE (2010)
32. Romano, J.M., Hsiao, K., Niemeyer, G., Chitta, S., Kuchenbecker, K.J.: Human-inspired robotic grasp control with tactile sensing. IEEE Trans. Rob. 27(6), 1067–1079 (2011)
33. Rusu, R.B., Bradski, G., Thibaux, R., Hsu, J.: Fast 3D recognition and pose using the viewpoint feature histogram. In: 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2155–2162. IEEE (2010)
34. Rusu, R.B., Cousins, S.: 3D is here: Point Cloud Library (PCL). In: 2011 IEEE International Conference on Robotics and Automation (ICRA), pp. 1–4. IEEE (2011)
35. Saxena, A., Driemeyer, J., Ng, A.Y.: Robotic grasping of novel objects using vision. Int. J. Robot. Res. 27(2), 157–173 (2008)
36. Srinivasa, S.S., Ferguson, D., Helfrich, C.J., Berenson, D., Collet, A., Diankov, R., Gallagher, G., Hollinger, G., Kuffner, J., Weghe, M.V.: HERB: a home exploring robotic butler. Auton. Robots 28(1), 5–20 (2010)
37. Tombari, F., Salti, S., Di Stefano, L.: Unique signatures of histograms for local surface description. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010. LNCS, vol. 6313, pp. 356–369. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-15558-1_26
38. Villena-Martínez, V., Fuster-Guilló, A., Azorín-López, J., Saval-Calvo, M., Mora-Pascual, J., Garcia-Rodriguez, J., Garcia-Garcia, A.: A quantitative comparison of calibration methods for RGB-D sensors using different technologies. Sensors 17(2), 243 (2017)
39. Weisz, J., Allen, P.K.: Pose error robust grasping from contact wrench space metrics. In: 2012 IEEE International Conference on Robotics and Automation (ICRA), pp. 557–562. IEEE (2012)
40. Zhang, L., Trinkle, J.C.: The application of particle filtering to grasping acquisition with visual occlusion and tactile sensing. In: 2012 IEEE International Conference on Robotics and Automation (ICRA), pp. 3805–3812. IEEE (2012)

Robotic Grasping and Manipulation Competition @IROS2016: Team Tsinghua

Fuchun Sun(B), Huaping Liu, Bin Fang, Di Guo, Tao Kong, Chao Yang, Yao Huang, Mingxuan Jing, and Junyi Che

Department of Computer Science and Technology, Tsinghua University, Beijing, China
[email protected]

Abstract. This chapter describes the preparation and implementation of our entry in the Robotic Grasping and Manipulation Competition @IROS 2016. Our Tsinghua Team participated in both the hand-in-hand and fully-autonomous tracks. The structure of a newly designed gripper and an algorithm for object detection and grasp pose estimation are described. The competition results demonstrate the effectiveness of the strategies used in the competition.

Keywords: Hand-in-hand · Fully-autonomous · Object detection

1 Introduction

Nowadays, robots are applied in more and more areas, from industrial plants to domestic environments. Grasping and manipulation are among the most fundamental functions of a robot, relying both on the dexterity of the robotic hand and on its perception ability. In the early days, few sensors were integrated into robotic hands, which hindered their perception ability. In addition, the small number of degrees of freedom (DoF) limited robotic hands to simple, predefined tasks [1]. During the past decades, the development of new sensors has allowed more sensors to be integrated into robotic hands, leading to more anthropomorphic designs [2–4]. Generally speaking, the more sensors and DoFs a robotic hand has, the more dexterous and intelligent it is; at the same time, such complex hands are more difficult to control. Therefore, it is a common choice to design a robotic hand for specific tasks. In the Robotic Grasping and Manipulation Competition @IROS2016, our Tsinghua Team participated in the hand-in-hand and fully autonomous tracks using real robotic systems. There are two stages in each track, namely pick-and-place and manipulation. A brief description of each stage is as follows:

– Pick-and-Place (Fig. 1(a)): Ten objects are placed in a shopping basket on a table. The robot picks up the objects from the basket one by one and places each object at the target placing location.

© Springer International Publishing AG, part of Springer Nature 2018
Y. Sun and J. Falco (Eds.): RGMC 2016, CCIS 816, pp. 161–171, 2018. https://doi.org/10.1007/978-3-319-94568-2_10


– Manipulation (Fig. 1(b)): The robot must complete a total of 10 tasks (pick up peas, hammer a nail, etc.) as chosen by the competition organizers. The tasks are separated into four difficulty levels.

Fig. 1. (a) Pick-and-place (b) Manipulation

2 Architecture

2.1 Hand-in-Hand

A mechanical gripper (Figs. 2, 3 and 4) with distance, pressure, and visual sensing as well as blocking-protection feedback has been designed; the blocking protection protects the hand from high currents during operation. The mechanical gripper consists of a shell, a communication-interface conversion circuit, a master chip, a pressure sensor, a distance-sensor circuit, a blocking-prevention protection circuit, non-slip rubber, and a camera. The conversion circuit outputs its signal through a Bluetooth serial-port module. The master chip sits on the main circuit board and is mainly used for generating the servo drive signal, ADC data processing, and data transmission. The pressure sensor, using two receiving pressure sensors in the detection circuit, is integrated on both ends of the gripper and covered with non-slip rubber. The distance sensor has a dedicated processing chip, an infrared emission tube, and an infrared receiving tube; both tubes are attached to the bottom of the gripper. The blocking-prevention protection circuit is integrated on the circuit board of the master chip.

Fig. 2. Structure of the hand gripper

Fig. 3. The real gripper and its components

Fig. 4. Communication block diagram of the hand


The shell of the mechanical gripper, which accommodates and supports the circuits and the sensing parts, can be customized according to the requirements of the scene. While the gripper is in the pre-grasp state, the on-board vision system calculates the type and location of the object, which is then transmitted to the host computer for analysis and processing. The host computer then issues high-level commands to the gripper based on the results. When the gripper is in the closed state, the distance sensor is inactive. When the gripper opens, the infrared emission tube of the sensor module emits infrared light. When an object is within the gripping range, the signal pin of the sensor module goes high and triggers a GPIO port of the master chip. The master chip transmits this information to the host computer for analysis over the Bluetooth serial port. When the host computer issues a close command, the command is transmitted to the master chip through the Bluetooth serial port; the chip processes the command and closes the gripper. The pressure sensor is monitored while the gripper closes. When the pressure value on both sides of the gripper changes and exceeds the set threshold, the master chip stops the closing motion and maintains the current position. If the pressure sensor does not trigger because the shape of the object prevents sensor contact, the blocking feedback takes effect: the master chip sends the information that an object has been grasped to the host computer and measures the voltage across the self-recovering fuse.
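This closing sequence (stop on the pressure threshold, fall back to blocking protection otherwise) can be sketched as a small control loop. This is an illustrative reconstruction only: the function, sensor stubs, and threshold values below are invented for the sketch and do not correspond to the actual firmware on the master chip.

```python
# Illustrative sketch of the gripper-closing logic. The thresholds and the
# sensor callables are hypothetical; the real logic runs on the master chip.

def close_gripper(read_pressure, read_fuse_voltage, step_close,
                  pressure_threshold=0.8, fuse_threshold=1.2, max_level=10):
    """Close step by step; stop when the pressure threshold trips, and use
    the self-recovering fuse voltage to back off the closing level when the
    pressure sensor cannot trigger (blocking feedback)."""
    level = max_level
    while level > 0:
        step_close(level)                      # command one closing step
        if read_pressure() > pressure_threshold:
            return "grasped"                   # contact detected: hold position
        if read_fuse_voltage() > fuse_threshold:
            level -= 1                         # blocking: reduce closing level
        else:
            return "grasped"                   # blocked but current is safe: hold
    return "empty"
```

In the real gripper these decisions are made on the master chip, and only the resulting state is reported to the host computer over the Bluetooth serial port.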
If the fuse voltage is greater than the threshold, the closing level of the gripper is reduced and the fuse voltage is measured again; if it is still greater than the threshold, the closing level continues to be reduced; otherwise it remains unchanged.

2.2 Fully Autonomous

In the fully autonomous track, a Baxter robot is used. All tasks are performed autonomously without any human input.

Hardware Setup: The robot experimental setup is shown in Fig. 5. We only use the left 7-DOF arm of the Baxter Research Robot in the competition to complete the pick-and-place and manipulation tasks. Our robot hand is the stock Baxter parallel-jaw gripper with the stock short fingers and square pads. For visual perception, we use a Microsoft Kinect v2 RGB-D camera to receive point cloud data and color images (Fig. 6). We attach rubber gaskets to the gripper to increase friction when touching the objects. The gripper is restricted to a 3 to 7 cm width in order to grasp more of the items in the pallet or basket. We mount the camera on Baxter's head, which is the easiest and most effective configuration for camera calibration. We use a single computer with a 3.40 GHz Intel Core i7-6800K CPU (6 cores, 12 threads), 32 GB of system memory, and an NVIDIA GeForce GTX 1080 graphics card. Communication between the Baxter robot and the PC is handled by the Robot Operating System (ROS) [5].


Fig. 5. The experimental platform: (a) real-world robot in the pick-up scenario (b) robot in simulation

Fig. 6. View from the camera: (a) RGB color image (b) RGB-D image

Software Setup: As shown in Fig. 7, our robotic system involves object detection, robot planning, and control. In the object detection module, we use the Caffe [6] framework to design our detection network; its output is the set of object candidates in the pallet or basket. Object detection using deep learning techniques is introduced in the next section. Following representative work in robotics research, we generate the best grasp pose of the object [7]. We must transform coordinates from the camera frame to the robot frame [8] so that the robotic gripper can reach the real target expressed in the Baxter robot frame. After computing the target goals, such as the object center position or the socket hole position, we use motion planning to compute the trajectory for Baxter's left arm; MoveIt! [9], an easy-to-use motion planning platform, is used to plan the pick-and-place task. We also check the state of the gripper using the force sensing at the end of the arm. If no object is grasped, the cycle executes again.
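The camera-to-robot frame transformation mentioned above boils down to applying a fixed homogeneous transform to each detected point; in the real system this transform is obtained via tf [8] and the head-camera calibration. A minimal sketch with an assumed (made-up) extrinsic matrix:

```python
# Transform a detected point from the camera frame into the robot base frame.
# T_base_camera is an assumed extrinsic calibration matrix for illustration;
# in the real system it is obtained from tf and the camera calibration.

def transform_point(T, p):
    """Apply a 4x4 homogeneous transform T (nested lists) to a 3-D point p."""
    x, y, z = p
    return tuple(row[0] * x + row[1] * y + row[2] * z + row[3] for row in T[:3])

# Example: camera looking straight down from 1.2 m above the base origin
# (purely illustrative numbers, not Baxter's actual head-camera extrinsics).
T_base_camera = [
    [1.0,  0.0,  0.0, 0.0],
    [0.0, -1.0,  0.0, 0.0],
    [0.0,  0.0, -1.0, 1.2],
    [0.0,  0.0,  0.0, 1.0],
]

object_in_camera = (0.1, 0.05, 0.9)   # detected object center, camera frame
object_in_base = transform_point(T_base_camera, object_in_camera)
```

The resulting base-frame coordinates are what the motion planner receives as the goal for the left arm.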


Fig. 7. The flow chart of our system

3 Object Detection and Grasp Pose Estimation

Here we describe the object discovery and grasp pose estimation algorithm. To grasp an object in a cluttered environment, the object position is calculated, a grasp pose is estimated, the grasp is performed on the object, and the object is placed at the target location.

3.1 Object Detection

Object detection has seen great progress over the past years, primarily due to the development of deep convolutional networks (ConvNets). In this section, we describe the object detector used in the competition. The proposed HyperNet [10] performs 2D object detection and has shown good results on typical public datasets. However, there is a gap between those datasets and the real-world environment, which prevents using the HyperNet model directly. One characteristic of HyperNet is its capability to detect general objects without considering their classes. First, the model is trained on the MS COCO dataset [11], which includes more than 300,000 images and 80 object categories; this lengthy process takes approximately two weeks. Using the HyperNet trained on this large dataset, the detector can identify various types of objects, including all the objects in the competition (Fig. 9).

Fig. 8. The object detection, grasp pose estimation and recognition pipeline.

Fig. 9. The HyperNet model [10]

3.2 Grasp Pose Estimation

There are two separate components of the grasping process, where object detection and localization lead to the determination of grasp points. In other words, the robot needs to decide the grasp points given the object position (output by the object detector described in Sect. 3.1). In practice, the end effector of the robot only needs to decide the grasp angle. Here we introduce a simple yet effective method to decide the grasp angle given the object position. We divide the full angle into six candidates {0°, 30°, 60°, 90°, 120°, 150°} and evaluate a grasp probability for each of them. The grasp probability is defined by the variance of the vertical projection at each angle: the larger the variance, the higher the probability of a successful grasp. The algorithm estimates the probabilities of all six candidates and outputs the best grasp angle. In Python, the procedure can be written as follows (using scipy.ndimage.rotate for the rotation step):

import numpy as np
from scipy.ndimage import rotate

def best_grasp_angle(rgb, depth, angles=(0, 30, 60, 90, 120, 150)):
    best_angle, final_variance = 0, -1.0
    for angle in angles:
        # a) rotate both the RGB and depth images by the candidate angle
        rgb_r = rotate(rgb, angle, reshape=False)
        depth_r = rotate(depth, angle, reshape=False)
        # b) concatenate along the channels, yielding a 4-channel image
        image = np.dstack((rgb_r, depth_r))
        # c) calculate the sum along the vertical direction
        column_sums = image.sum(axis=0)
        # d) keep the angle whose sums have the largest variance
        v = column_sums.var()
        if v > final_variance:
            best_angle, final_variance = angle, v
    return best_angle

3.3 Object Recognition

The HyperNet can locate all of the objects. However, classifying each grasped object is left to the user of HyperNet. Given an image of the object, we first use AlexNet [12] to extract its features and then train a multi-class SVM to classify the object. The object detection, grasp pose estimation, and recognition pipeline is shown in Fig. 8.
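The recognition step (AlexNet features followed by a multi-class SVM) ultimately reduces to scoring a feature vector against one hyperplane per class and taking the argmax. The sketch below shows only that final scoring step, with invented weight vectors standing in for the trained SVM:

```python
# One-vs-rest linear scoring of a feature vector, a stand-in for the trained
# multi-class SVM. The per-class weights below are illustrative, not learned.

def classify(feature, weights):
    """Return the class whose linear score w . f + b is largest."""
    best_cls, best_score = None, float("-inf")
    for cls, (w, b) in weights.items():
        score = sum(wi * fi for wi, fi in zip(w, feature)) + b
        if score > best_score:
            best_cls, best_score = cls, score
    return best_cls

weights = {                     # hypothetical per-class hyperplanes (w, b)
    "banana": ([1.0, -0.5, 0.0], 0.1),
    "hammer": ([-0.2, 1.0, 0.3], -0.4),
    "sponge": ([0.0, 0.2, 1.0], 0.0),
}
label = classify([0.9, 0.1, 0.2], weights)
```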

4 Experiment Results

4.1 Object Detection Results

In the experiment, a basket containing a set of objects is placed in front of the Baxter robot (Fig. 10). The visual information of the basket is captured by the Kinect v2 camera mounted on the head of the robot. With the algorithms described in Sect. 3, we obtain the detection results shown in Fig. 11; the blue rectangles indicate the detected objects with the highest scores.

4.2 Competition Results

Our team placed first in the fully autonomous track and third in the hand-in-hand track. Some successful cases are described below.

Fig. 10. A basket is placed in front of the Baxter robot.


Fig. 11. The detection results (Color figure online)

Fig. 12. Hand-in-hand track: use a spoon to stir water in a cup.

Fig. 13. Fully autonomous track: (a) Grasp a towel on the table and hang it onto a support; (b) Transfer straw into a to-go cup with lid.


Figure 12 demonstrates the task of using a spoon to stir water in a cup in the hand-in-hand track. The left image shows the simulated environment and the instructions for the volunteer who manually operates the robotic hand. In this task, the spoon is picked up from an empty cup and then submerged into a cup of water. Finally, the robotic hand uses the spoon to stir the water for two cycles. Figure 13 shows two of the tasks in the fully autonomous track. In the left image, the Baxter robot picks up the towel on the table and then hangs it on the rack. In the right image, the Baxter robot picks up a straw from a cup and transfers it into a to-go cup covered with a lid.

5 Conclusion

In this chapter, we described the preparation and implementation of our entry in the Robotic Grasping and Manipulation Competition @IROS2016. Our Tsinghua Team participated in both the hand-in-hand and fully autonomous tracks. In the hand-in-hand track, a novel gripper with multiple sensors was designed for the specific tasks of the competition. In the fully autonomous track, the Baxter robot was used to carry out the tasks, with deep learning techniques employed to enhance the perception ability of the robot. Finally, our team received first place in the fully autonomous track and third place in the hand-in-hand track.

References

1. Okada, T.: An artificial finger equipped with adaptability to an object. Bull. Electrotech. Lab. 37(2), 1078–1090 (1974)
2. Loucks, C.S., Johnson, V.J., Boissiere, P.T., Starr, G.P., Steele, J.P.H.: Modeling and control of the Stanford/JPL hand. In: Proceedings of 1987 IEEE International Conference on Robotics and Automation, vol. 4, pp. 573–578. IEEE (1987)
3. Butterfass, J., Grebenstein, M., Liu, H., Hirzinger, G.: DLR-Hand II: next generation of a dextrous robot hand. In: Proceedings of 2001 IEEE International Conference on Robotics and Automation, ICRA, vol. 1, pp. 109–114. IEEE (2001)
4. http://www.shadowrobot.com/
5. http://www.ros.org/
6. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: convolutional architecture for fast feature embedding. In: Proceedings of the 22nd ACM International Conference on Multimedia, pp. 675–678. ACM (2014)
7. Gualtieri, M., ten Pas, A., Saenko, K., Platt, R.: High precision grasp pose detection in dense clutter. In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 598–605. IEEE (2016)
8. Foote, T.: tf: the transform library. In: 2013 IEEE International Conference on Technologies for Practical Robot Applications (TePRA), pp. 1–6. IEEE (2013)
9. http://moveit.ros.org/
10. Kong, T., Yao, A., Chen, Y., Sun, F.: HyperNet: towards accurate region proposal generation and joint object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 845–853 (2016)


11. Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: common objects in context. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8693, pp. 740–755. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10602-1_48
12. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)

Complete Robotic Systems for the IROS Grasping and Manipulation Challenge

Eadom Dessalene and Daniel Lofaro(B)

George Mason University, 4400 University Drive, Fairfax, VA, USA
{edessale,dlofaro}@gmu.edu

Abstract. Advances in perception, motion planning, and grasping algorithms have enabled the move from pick-and-place robots incapable of handling disturbances in the environment to intelligent robots with manipulation algorithms capable of dealing with novel surroundings. While the IROS Grasping and Manipulation Challenge outlined many challenging tasks (some of which surpass current progress in robotic manipulation), assumptions about the competition environment were allowed. Under these assumptions, we present our vision of two full robotic system pipelines behind the autonomous basket-picking and task-completion components of the IROS Grasping and Manipulation Competition.

1 Track 1: Autonomous Basket Picking

In this section we provide a full, closed-loop robotic system solution to the autonomous basket-picking task.

1.1 Problem Statement

Ten objects are randomly placed in a shopping basket. The goal is to pick up the objects and place them one by one at predefined spots on a table. A typical scenario is shown in Fig. 1.

1.2 Our Performance

Our approach to the basket picking was crude. Using record and replay, the robot was shown a reach-and-grasp attempt at the left, right, top, and bottom walls of the basket, as well as a non-prehensile rearrangement of the contents of the bin. The robot executed a reach-and-grasp against each wall of the basket, and when four successive grasp attempts failed, the non-prehensile demonstration was executed. Grasp success was decided based on the joint configuration of the hand after the compliant grasp: if the hand joints were fully closed, the robot decided nothing had been grasped; otherwise, it declared success. This approach failed for two reasons: the lack of adaptive recovery mid-imitation in the face of failure, and faulty robotic hand hardware. It was common for the robotic hand to not respond to open-close commands due to a serial communication error internal to the Dynamixel servos. This would occasionally leave the hand fully extended, and as the robot attempted to reach into the basket, the hand would block the planned trajectory of the robot's end effector. There was no closed-loop feedback based on joint impedance, so the robot would continually attempt to force its hand into the basket until the control loop was terminated by the user.

© Springer International Publishing AG, part of Springer Nature 2018
Y. Sun and J. Falco (Eds.): RGMC 2016, CCIS 816, pp. 172–179, 2018. https://doi.org/10.1007/978-3-319-94568-2_11

Fig. 1. Nine of the 10 items in the basket displayed in simulation. From a left-ward scan going upwards: can of Pringles chips, plastic strawberry, plastic sponge, hammer, plastic banana, scissors, red bowl, plastic lemon. The environment at the competition is significantly more cluttered. (Color figure online)

Fig. 2. In this situation, collision-free grasps cannot be found for the scissors, can of Pringles chips, and the sponge. The banana lies in an extremely constrained position: for these four items, sliding the objects upwards and grasping a protruded edge is very likely more successful than a direct grasp attempt.

1.3 Assumptions

1. The spots on the table are grouped into physical weight classes (heavy, medium, and light-weight). This means that a visual representation of any of the objects is not required.
2. The robot, table, and basket can be positioned at any pre-defined position relative to each other at the start of the trial run.
3. The basket can be rigidly mounted to either the robot or the table.
4. The spots on the table have positions that are known with respect to both the robot and the table.
5. The pictures and 3D meshes of all objects in the competition, taken straight from the YCB dataset, are known beforehand.

1.4 Vision

The traditional static approach to robotic grasping would be to first estimate the 6-D pose of the object via perception, generate a feasible grasp given the full CAD model, and finally calculate a motion-planning trajectory for the full kinematic chain of the arm and hand that satisfies collision constraints. However,


this process involves several complex tasks and fails when any step of the object-detection-based grasping fails. Grasp detection, however, generates grasps independent of object identity, instead outputting viable grasps from a single-view RGB-D image or point cloud based on local features of an unknown object's surface. Therefore, grasp detection on incomplete point cloud data is a significantly faster alternative to the object-detection approach. A Convolutional Neural Network (CNN) is a multilayer learning framework that comprises an initial input layer, several convolutional layers, and an output layer, in an attempt to learn a hierarchy of feature representations. A learned approach using a CNN built on grasping rectangles is possible [1]; alternatively, the incomplete point cloud can be characterized by a surface mesh of the segmented scene and each cluster categorized into a geometric shape primitive [2], with grasps generated based on these shapes.

1.5 Manipulation

When evaluating generated grasps, a grasp metric is needed to select the grasp with the highest probability of success. The point metric [3] calculates the distance from the center of the generated grasp to the center of each ground-truth grasp; if any distance is below some threshold, the grasp is considered a success. The rectangle metric [3] involves determining whether the candidate orientation is within 30° of the ground-truth grasping rectangle, as well as whether the Jaccard index (also known as the Jaccard similarity index) between the generated grasp and the ground-truth grasp is greater than 25%, where the Jaccard index between two rectangles is defined by:

J(R1, R2) = area(R1 ∩ R2) / area(R1 ∪ R2)   (1)

In order to avoid situations where a generated grasp targets the basket instead of the objects inside, we exploit assumption #2: knowing the position of the basket with respect to the robot as well as the dimensions of the basket, we extract only the point cloud of the objects inside the basket. There are situations where either no generated grasps satisfy the rectangle grasp metric, or the arm configurations for each successful grasp collide with either the basket or surrounding clutter. Figure 2 illustrates this situation. Therefore, some level of exploitation of the environment is needed in order to successfully grasp every object. While we do not include this in our system, we note that the slide-to-edge manipulation strategy has been successful in manipulating objects in undesirable poses in a tabletop setting (e.g., a book lying flat on a table) [4]. The same can be applied against the walls of the basket.

1.6 Feedback

Feedback can be utilized to determine which spot to place the grasped object at, the success of an executed grasp, and the presence of stable contact between the object and the wall of the basket when a horizontal force is applied. Exploiting assumption #1, we distinguish between objects based on their weight. Joint torque sensors can be used to calculate the end-effector torque of the arm while the grasped object is held stationary in mid-air. By setting torque thresholds for the three weight classes for an arm in a specific joint configuration, active torque feedback can be used to determine the weight class of the grasped object. We utilized this approach with the Baxter research robot and found it successful. Mistakenly dropping the object while delivering it to its predefined spot is disastrous: because the workspace is limited to the region within the basket, recovering a dropped object would require a separate robotic system dedicated to retrieval. While active slip detection utilizing dynamic tactile sensing seems attractive, there is no direct means of re-grasping a dropped object. Therefore, it is important to determine the quality of a grasp while the hand is still over the basket. The execution of a rigorous shake within the brim of the basket before executing the transport task works sufficiently well in the context of this competition.
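The torque-based weight classification is a simple thresholding step once the end-effector torque has been measured. The thresholds below are invented for illustration; in practice they must be calibrated for the specific arm configuration:

```python
# Classify a grasped object into a weight class from the measured
# end-effector torque. Threshold values are hypothetical (N*m).

LIGHT_MAX = 0.5    # below this: light-weight class
MEDIUM_MAX = 1.5   # below this (and above LIGHT_MAX): medium

def weight_class(torque):
    if torque < LIGHT_MAX:
        return "light"
    if torque < MEDIUM_MAX:
        return "medium"
    return "heavy"
```

The returned class then selects which group of predefined spots the object is delivered to.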

2 Track 2: 9 Daily Tasks

It was noticeable that every team (including Team GMU RoboPatriots) at the IROS Grasping and Manipulation Competition utilized a record-and-replay technique: a human operator held the end effector of the robotic arm and, using a binary open-close command to the robotic hand (in our case, the EZGripper), had the robot perform each of the nine tasks. We discuss the shortcomings of planning algorithms that work in a high-dimensional configuration space and, inspired by the dominant usage of record and replay at the IROS Grasping and Manipulation Challenge, we propose the use of imitation learning.

2.1 Problem Statement

There are 18 total tasks presented beforehand; from this pool, 9 random tasks are chosen two days before the competition. The tasks are diverse and are based on everyday tasks a household robot is expected to perform.

2.2 Our Performance

Our approach to the completion of each of the 9 tasks was similar to the basket picking. Each of the objects meant to stay stationary throughout the tasks was mounted onto a tabletop surface, and each of the non-static objects (spoon, syringe, plug, etc.) was set in a pre-defined spot to be grasped. Demonstrations of the task performance were done through kinesthetic teaching, i.e., having the human move the end effector through the task completion and saving the trajectory in joint space. The tasks were then replayed with these recordings.
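Record and replay of this kind can be sketched as storing timestamped joint-space waypoints during the kinesthetic demonstration and interpolating between them on playback. The code below is a toy illustration (two joints, invented waypoints), not the recording facility we actually used:

```python
# Minimal record-and-replay: a demonstration is a list of (time, joints)
# samples; playback linearly interpolates the joint vector at any time t.

def replay(trajectory, t):
    if t <= trajectory[0][0]:
        return trajectory[0][1]
    if t >= trajectory[-1][0]:
        return trajectory[-1][1]
    for (t0, q0), (t1, q1) in zip(trajectory, trajectory[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return [x0 + a * (x1 - x0) for x0, x1 in zip(q0, q1)]

# A toy two-joint demonstration recorded at 1 Hz.
demo = [(0.0, [0.0, 0.0]), (1.0, [0.5, -0.2]), (2.0, [1.0, 0.0])]
q = replay(demo, 0.5)   # halfway between the first two waypoints
```

Playing the interpolated joint vectors back at the recorded rate reproduces the demonstrated motion, which is exactly why this approach is open-loop: nothing in the replayed trajectory reacts to how the object was actually grasped.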


Because our pipeline was open-loop and did not incorporate in-hand localization, the largest problem was that the pose of the grasped object became difficult to predict, especially for objects small relative to the robotic hand. This created a difficult problem, as the trajectory of the end effector for each task needed to dynamically adapt to how each object was grasped. Take the case of the pea transfer using a spoon: because of the cylindrical handle and the low coefficient of friction of the material, the fingers of the robotic hand would commonly slip, as the contact area between the fingers and the radial shape of the handle was too small and slippery, and the spoon would fall out of the hand as the fingers closed in on each other. In cases where the grasp did succeed, the handle would rotate as the fingers closed in on it. As there was no object localization, the bowl of the spoon would commonly be oriented in a way that opposed the trajectory of the end effector, preventing the funneling of peas into the spoon.

2.3 Assumptions

1. Any of the items may be mounted to either the table or the robot.
2. Anything may be attached to the competition items.
3. Multiple attempts per task are allowed.
4. For the pattern-cutting task (task 9), the robot is allowed to start the task already holding the scissors.

2.4 Limitations with the Spatial Planning Approach

The tasks presented are diverse, and they become very difficult and computationally expensive when framed as a geometric grasp-generation and configuration-space motion-planning problem. For instance, take the task of pouring salt into a specific location. First, a semantic grasp [5], defined by a constraint on the grasp of the salt shaker, must be generated: the grasp cannot occlude the pores of the salt shaker, otherwise the hand blocks the flow of salt. While the generation of semantic grasps can be done offline, each semantic grasp must have an arm configuration with a sufficient manipulability measure with respect to the pose of the end effector at the start of the rigorous shaking, so that dexterous manipulation is not constrained as the arm approaches its joint limits. Testing each grasp for this requirement introduces further computational complexity. Next, a trajectory must be planned such that the orientation of the salt shaker throughout the motion is highly constrained in pitch, so that the salt does not fall out. Imposing this constraint in Cartesian space drastically reduces the usable configuration space of traditional path-planning algorithms such as the Probabilistic Roadmap [6] or the RRT variants [7].
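The pitch constraint can be pictured as a rejection filter on sampled configurations: a constrained sampler keeps only those joint configurations whose end-effector pitch stays within the allowed band, which is why the usable configuration space shrinks so drastically. The forward kinematics and pitch limit below are stand-ins invented for the sketch:

```python
import math
import random

PITCH_LIMIT = math.radians(15)   # hypothetical allowed deviation from upright

def pitch_of(q):
    """Toy 'forward kinematics': end-effector pitch from two joint angles
    (a stand-in for the real arm's FK)."""
    return q[0] + q[1]

def sample_constrained(n, rng=random.Random(0)):   # fixed seed for repeatability
    """Rejection-sample joint configurations whose end-effector pitch stays
    within the limit, as a constrained PRM/RRT sampler would."""
    samples = []
    while len(samples) < n:
        q = [rng.uniform(-math.pi, math.pi) for _ in range(2)]
        if abs(pitch_of(q)) <= PITCH_LIMIT:
            samples.append(q)
    return samples

valid = sample_constrained(5)
```

Most random draws are rejected, which illustrates the cost: the planner spends the bulk of its samples discovering that almost all of the configuration space violates the constraint.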

2.5 Imitation Learning

Imitation learning comprises two steps: first, a set of repeatedly carried-out actions of a human demonstrator is observed by the robot; then, the robot reproduces the actions to imitate the demonstrator. When failures are found, the user provides more demonstrations that handle these failures, or, better yet, the robot learns from these failures [8], instead of having each failure case handled programmatically one by one. With the record-and-replay approach, the more complex the task description becomes, the more difficult it is to maintain large controllers that handle these failures. Noise and unpredicted disturbances in recorded playback make simple playback insufficient for complex tasks. Imitation learning is robust to all of these problems through its ability to learn and generalize. Seeing as the contestants physically performed the tasks through the robot rather than using direct human motion recordings, we propose kinesthetic teaching, as it removes the need for an explicit mapping between human and robot joints. Exploiting assumption #2, we propose the use of object localization markers such as ARTags [9], assuming the robot can identify relevant objects in the scene along with their poses. Imitation learning can be defined as follows [10]: a world consists of states S and actions A, with the mapping of states through actions defined by a probabilistic transition function T(s′ | s, a): S × A × S → [0, 1]. A policy function π: S → A learned from the demonstrations selects actions based on observations of the world state. The actions throughout all ten tasks can be described by the trajectory of the end effector of the arm of the robot, as well as the orientation of the end effector and the position of the fingers with respect to the end effector.
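The policy π: S → A can be illustrated with the simplest possible learner, a nearest-neighbor lookup over demonstrated state-action pairs. This is a toy stand-in for illustration only, not the DMP-based representation proposed for the actual tasks:

```python
# A minimal policy learned from demonstrations: return the action whose
# recorded state is nearest to the query state (toy illustration of pi).

def learn_policy(demonstrations):
    """demonstrations: list of (state, action) pairs, states as tuples."""
    def policy(state):
        def dist(s):
            return sum((a - b) ** 2 for a, b in zip(s, state))
        _, action = min(demonstrations, key=lambda sa: dist(sa[0]))
        return action
    return policy

# Invented toy 2-D states and symbolic actions.
demos = [((0.0, 0.0), "reach"), ((0.5, 0.1), "grasp"), ((0.9, 0.4), "lift")]
pi = learn_policy(demos)
action = pi((0.45, 0.15))   # nearest demonstrated state is (0.5, 0.1)
```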
Dynamic motion primitives [11] can be used to combine the end-effector motion with any further degrees of freedom; the DMP formulation is therefore a good framework for representing the action space throughout these tasks. For tasks that are bimanual, a second trajectory (the opposing arm) is included in the action space. To describe how imitation learning can solve each of the tasks, Fig. 3 lists the features that describe the representation of the state of the world, as well as any constraints the robot learns from the demonstrated trajectory.

2.6 Future Improvement

Household and warehouse robots tasked with grasping objects are unlikely to function 100% autonomously. As they work in human environments, they must not only be safe and intuitive but must also be able to react to dynamic changes in the environment, accept and engage in communication with humans, and coordinate not just with multiple agents but with a human as well. We encourage future IROS Grasping and Manipulation Challenge events to allow some degree of teleoperation. One possible approach would be to restrict the usage of direct haptic end-effector teleoperation, but to allow the human to demonstrate physical cues such as pointing and hand gesturing to complete each task.


Task | Features | Learned Constraints

Transfer of peas from a bowl to a plate with a spoon | 6D pose of spoon | Orientation of spoon throughout transfer of peas; bowl of spoon must not oppose the trajectory of the spoon for successful funneling of peas; end-effector position must align with the handle of the spoon

Towel Transfer | 6D pose of towel | None

Cup Stirring | 6D position of spoon | Radial trajectory of spoon during the stirring motion

Salt Shaking | 6D position of shaker | Grasp cannot occlude the pores of the salt shaker; orientation of shaker throughout transport

USB/AC Plug Removal and Insertion | 6D pose of AC light; 6D pose of USB light | Both grasps cannot occlude the USB-plug and AC-plug ends of the lights

Hammering a nail | 6D pose of hammer | Grasp cannot occlude the neck or claw of the hammer

Straw transfer from cup to cup | None (straw is mounted with respect to a fixed cup) | Constrained to vertical motion when performing insertion of the straw

Screw in a wrench | 6D pose of nut driver | Grasp cannot occlude the front of the nut driver when screwing nuts; end effector is constrained to rotations about its yaw throughout the screwing motion

Extend and compress syringe | 6D pose of syringe | Bimanual grasping task: one grasp occupying only the chamber and the other grasp occupying only the lever of the syringe, with no locking dependencies between the grasps

Cut 4 patterns out of a flat sheet of paper | 6D pose of sheet | Periodic opening and closing of the gripper throughout the cutting; end-effector motion constrained only to the path of the pattern on the paper

Fig. 3. For each task, the state representation comprises the 6D pose of the end effector along with all the listed features. We assume the positions of all static objects are known throughout the demonstration and imitation process, making this a tractable problem for imitation learning. Violating the learned constraints results in either penalization or failure.
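One hypothetical way to encode a row of Fig. 3 in software is a task record that pairs state features with checkable constraint predicates; the type names, the salt-shaker constraint, and the angular limit below are illustrative assumptions, not part of the chapter:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class TaskSpec:
    """State features plus learned constraints for one task row."""
    features: List[str]
    constraints: List[Callable[[Dict], bool]] = field(default_factory=list)

    def violated(self, state: Dict) -> bool:
        # A violated constraint results in penalization or failure.
        return any(not ok(state) for ok in self.constraints)

# Hypothetical check for the salt-shaking row: the shaker must stay
# roughly upright (roll and pitch near zero) during transport.
def shaker_upright(state: Dict) -> bool:
    roll, pitch = state["shaker_rpy"][:2]
    return abs(roll) < 0.3 and abs(pitch) < 0.3  # radians, assumed limit

salt_shaking = TaskSpec(features=["6D pose of salt shaker"],
                        constraints=[shaker_upright])
print(salt_shaking.violated({"shaker_rpy": (0.1, 0.0, 1.2)}))  # False
```

Evaluating such predicates along a DMP rollout is one way an imitation-learning system could detect, during execution, that a learned constraint is about to be broken.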

References

1. Jain, S., Farshchiansadegh, A., Broad, A., Abdollahi, F., Mussa-Ivaldi, F., Argall, B.: Assistive robotic manipulation through shared autonomy and a body-machine interface. In: 2015 IEEE International Conference on Rehabilitation Robotics (ICORR), pp. 526–531. IEEE (2015)
2. Kappler, D., Bohg, J., Schaal, S.: Leveraging big data for grasp planning. In: 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 4304–4311. IEEE (2015)

Robotic Systems for the IROS Grasping and Manipulation Challenge


3. Redmon, J., Angelova, A.: Real-time grasp detection using convolutional neural networks. In: 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 1316–1322. IEEE (2015)
4. Eppner, C., Deimel, R., Álvarez-Ruiz, J., Maertens, M., Brock, O.: Exploitation of environmental constraints in human and robotic grasping. Int. J. Robot. Res. (2015). https://doi.org/10.1177/0278364914559753
5. Dang, H., Allen, P.K.: Semantic grasping: planning robotic grasps functionally suitable for an object manipulation task. In: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1311–1317. IEEE (2012)
6. Kavraki, L.E., Svestka, P., Latombe, J.-C., Overmars, M.H.: Probabilistic roadmaps for path planning in high-dimensional configuration spaces. IEEE Trans. Robot. Autom. 12(4), 566–580 (1996)
7. LaValle, S.M.: Rapidly-exploring random trees: a new tool for path planning (1998)
8. Billard, A., Grollman, D.: Robot learning by demonstration. Scholarpedia 8(12), 3824 (2013)
9. Fiala, M.: ARTag, a fiducial marker system using digital techniques. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), vol. 2, pp. 590–596. IEEE (2005)
10. Argall, B.D., Chernova, S., Veloso, M., Browning, B.: A survey of robot learning from demonstration. Robot. Auton. Syst. 57(5), 469–483 (2009)
11. Ijspeert, A.J., Nakanishi, J., Hoffmann, H., Pastor, P., Schaal, S.: Dynamical movement primitives: learning attractor models for motor behaviors. Neural Comput. 25(2), 328–373 (2013)

Robotic Grasping and Manipulation Competition: Competitor Feedback and Lessons Learned

Joe Falco¹, Yu Sun², and Maximo Roa³

¹ National Institute of Standards and Technology (NIST), Gaithersburg, USA
[email protected]
² University of South Florida, Tampa, USA
³ German Aerospace Center (DLR), Cologne, Germany

Abstract. The First Robot Grasping and Manipulation Competition, held during IROS 2016, allowed researchers focused on the application of robot systems to compare the performance of hand designs as well as autonomous grasping and manipulation solutions across a common set of tasks. The competition comprised three tracks: hand-in-hand grasping, fully autonomous grasping, and simulation. The hand-in-hand and fully autonomous tracks used 18 predefined manipulation tasks and 20 objects. Additionally, a bin picking operation was performed within these two tracks using a shopping basket and a subset of the objects. The simulation track included two parts. The first was a pick and place operation, where a simulated hand extracted as many objects as possible from a cluttered shelf and placed them randomly in a bin. The second was a bin picking operation, where a simulated robotic hand lifted as many balls as possible from a bin and deposited them into a second bin. This paper presents competitor feedback as well as an analysis of lessons learned towards improvements and advancements for the next competition at IROS 2017.

Keywords: Robot · Grasping · Manipulation · Competition · Benchmarks

1 Introduction

The first Robot Grasping and Manipulation Competition, held during the 2016 International Conference on Intelligent Robots and Systems (IROS) in Daejeon, South Korea, was sponsored by the IEEE Robotics and Automation Society (RAS) Technical Committee (TC) on Robotic Hands Grasping and Manipulation (RHGM) [1]. The goal of the competition was to bring together researchers focused on the application of robot systems to benchmark the performance of autonomous grasping and manipulation solutions across a variety of application


spaces, including healthcare, manufacturing, and service robotics. Being the first of a planned series of competitions in the area of grasping and manipulation, this competition was designed to evaluate the performance of robot solutions that include grasp planning, end-effector design, perception, and manipulation control.

2 Competition Overview

The competition comprised three tracks: hand-in-hand grasping, fully autonomous grasping and manipulation, and simulation. The hand-in-hand and fully autonomous tracks used 18 predefined manipulation tasks and 20 objects that were readily obtainable through on-line retailers. Additionally, a bin picking operation was performed for these two tracks using a shopping basket and a subset of the objects. To help teams prepare their systems for the competitions, the rules, along with 10 randomly chosen predefined tasks and supporting objects, were provided one month prior to the event. The complete set of competition tasks (Fig. 1) and supporting objects (Fig. 2) were released to contestants one week before the competition, and the actual IROS competition setup and objects were available for testing two days before the competition. The competition design used many items from the Yale-CMU-Berkeley (YCB) Object and Model Set [2] and the 2015 Amazon Picking Challenge (APC2015) [3] object datasets¹. The YCB dataset was designed for developing benchmarks in robotic grasping and manipulation research, and the APC2015 dataset supports the Amazon Picking Challenge, a competition developed to spur advancement in fundamental technologies for automated picking in unstructured warehouse environments.

Fig. 1. The ten tasks of the hand-in-hand and autonomous tracks used during the IROS 2016 Grasping and Manipulation Competition.

The hand-in-hand track enabled teams to compete based on the mechanical characteristics of their hand designs without the added requirements of an

¹ Certain commercial entities and items are identified in this paper to foster understanding. Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the materials or equipment identified are necessarily the best available for the purpose.


Fig. 2. Task and bin-picking objects used in the IROS 2016 Grasping and Manipulation Competition. (Color figure online)

integrated robot system. It consisted of two stages. The first stage was pick and place where ten objects were removed from a shopping basket and placed within an identified area on a table top. The second stage was manipulation where a set of ten predefined tasks were performed. This track was carried out using the assistance of a human volunteer to support the hand through both stages as shown in Fig. 3. The fully autonomous track required a complete robot system containing hand, arm, and perception components to accomplish the same two stages as shown in Fig. 3 using a locate object, plan, and execute grasp approach to solving the problem. The tasks contained within this track also tested a robot system’s manipulation capabilities. The simulation track, using the Kris’ Locomotion and Manipulation Planning Toolbox (Klamp’t) [4], consisted of two stages (Fig. 4). The first stage was pick and place where the task was to extract as many objects as possible from a cluttered shelf and place them randomly in a bin. The second stage was bin picking where a simulated grasping and manipulation system lifted as many balls as possible from a bin and deposited them into a second bin. Within each track, total scores earned in both stages were accumulated and used to rank performance. The time to complete each stage was also tracked to determine a winner in the case of a tie, where the contestant using the least amount of time had the advantage.

3

Competitor Feedback

Upon completion of the IROS 2016 RGMC competition, several of the organizers sponsored an interactive feedback session with competitors as an opportunity for both the administrators and teams to discuss and to identify opportunities for


Fig. 3. Competitor hand design being used during the hand-in-hand track: pick and place (left) and hammer a nail task (right).

Fig. 4. Simulation task showing the extraction of as many objects as possible from a cluttered shelf and subsequent placement in a bin (left) and pick and placement of as many balls as possible from one bin to another (right).

improvement. This one-hour feedback session was intentionally held following the computation of final scores to ensure that competitors were candid about their competition experience. The hand-in-hand track of the competition was developed to evaluate robotic hand hardware designs without the need for an integrated robotic system, eliminating the need for a robotic arm for part manipulation and a perception solution for part localization. This track relied on the use of volunteers with non-technical backgrounds to operate the robotic hand during the associated tracks, based on instruction that was automatically generated by the team's computer with no input from competitors via audio or teleoperation. Although each team was given a period of time to coach volunteers on the use of their hand at


the competition prior to each track, some contestants were under the impression that this coaching was limited to an automated instructional mechanism, such as a video or software interface, and that interaction with volunteers prior to the competition was not permitted. While this did not seem to affect the competition, some contestants felt that the time spent on the details of the instructional video, details that could have been resolved through coaching, detracted from robot system development time. Others found it difficult to give volunteers the exact procedures for grasping objects using the coaching method.

It was discussed that the use of inexperienced and disparate hand-in-hand operators leads to subjective inconsistencies in results across the systems under test. A suggested solution to operator inconsistency was to allow each team to provide an expert operator of the hand, which assumes that manual operation is optimized and that test results more closely track the performance of the hand hardware. In such a test scenario, a robotic hand designed for use in human-robot interaction applications such as prosthetics may inherently outperform hands designed for autonomous operation when integrated as a robot system. It was noted that, despite the issues associated with the hand-in-hand track of the competition, there was a definite need for the competition to better benchmark capabilities of robotic hands without the need for an autonomous robotic solution.

The fully autonomous track required integrated robotic systems consisting of arm, end effector, and perception components. Discussions indicated that the competition rules regarding autonomy were misleading. The confusion may have stemmed from the organizing committee's choice of the wording "fully autonomous". Fully autonomous within the robotics community most often implies no human intervention in a robotic operation.
In the case of the competition, although the term "fully autonomous" was used, the organizers intended to allow certain degrees of teleoperation and human intervention in order to reduce the difficulty associated with the manipulation component of the competition. It was noted that this first competition was purposely made less challenging through these leniencies in order to assess the readiness of the technology without discouraging participation in future competitions. One contestant indicated that autonomous tasks should be made strictly autonomous, with no teleoperation or human intervention.

Another area of discussion was the use of tools grasped by the robotic hand to acquire an object, outside of any task-specific tools defined by the competition. An example of a task-specific tool defined by the competition involved the use of a predefined nut driver to tighten a bolt into a threaded hole. In one instance, a team made use of foam blocks containing an adhesive surface as a tool to be grasped by their robot. Here the strategy involved grasping the block using a conventional gripper and using adhesion to acquire an object that could not be easily grasped (e.g., a bag of potato chips). Such competitor-defined tools were valid based on competition rules, which only penalized manual reconfiguration of end effectors.

Another discussion regarding the autonomous track pertained to challenges posed by limitations in perception systems. Being color-based, many of the perception systems had difficulty discriminating between objects in the


pick-and-place event because so many of them happened to be yellow or contained a lot of yellow. Examples include the shopping basket, lemon, banana, sponge, candy wrapper, potato chip bag, and scissor handle. These objects and the basket can be found on the right side of Fig. 2.

Both the hand-in-hand and fully autonomous tracks used a shopping basket randomly filled with objects to be grasped. At the start of these tracks, competition administrators randomly filled each basket, and the baskets were delivered to each team location. Competitors identified instances where the random distribution of objects presented a disadvantage compared with the random distribution presented to other teams. Instances were described where objects could not be grasped because they were located too close to the basket walls for the planned grasp pose. Other instances were described where objects prevented access to several underlying objects. It was suggested that the randomness of object placement should be predetermined and fixed across teams, with defined levels of difficulty for each predetermined distribution. It was also suggested that competition tasks could be broken into steps, so that if a particular step were unachievable, the contestant could skip it, forfeiting the associated points, and move on to the next. In summary, the discussion led to the conclusion that fixed data sets would allow more control over increasing levels of difficulty that can be applied equally to all competitors. Additionally, it was noted that the shopping basket used was too small for some end-effector sizes.

With regard to the simulation track, contestants felt that future events should more carefully evaluate available simulation packages to determine which is best suited and most reliable to support the competition tasks.
In addition, it was suggested that in the event of another simulation track, the organizers should attempt to tie the simulation tasks to the real-world competition tasks.

More general discussions indicated that more time was needed to complete the competition tracks. Regarding future competitions, there were suggestions for additional tracks, such as dynamic tracking of objects to be grasped and in-hand manipulation. Other suggestions included that future competitions provide a wider range of tasks to support a broader range of hand designs, that some tasks remain unknown until the competition, and that task instructions be more descriptive. Discussion also indicated that benchmarks are needed for the integrated systems in addition to those for individual hand performance. It was also noted that more logistics funding was needed to support competitor travel as well as shipping costs for competition equipment.
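The suggestion above, that object placement be predetermined and fixed across teams, can be implemented by drawing each bin configuration from a seeded random generator, so every team replays the identical draw; the object names and pose ranges below are placeholders, not competition values:

```python
import random

def bin_configuration(objects, seed):
    """Generate one reproducible bin layout: a fixed seed yields the
    identical object order and (x, y, yaw) poses for every team."""
    rng = random.Random(seed)        # independent, seeded generator
    order = objects[:]
    rng.shuffle(order)
    return [(name,
             rng.uniform(0.0, 0.4),      # x within basket (m, assumed)
             rng.uniform(0.0, 0.3),      # y within basket (m, assumed)
             rng.uniform(-3.14, 3.14))   # yaw (rad)
            for name in order]

objects = ["banana", "lemon", "sponge", "scissors"]  # sample items
layout_team1 = bin_configuration(objects, seed=42)
layout_team2 = bin_configuration(objects, seed=42)
print(layout_team1 == layout_team2)  # identical layout for every team
```

Publishing only the seed per round would let organizers reproduce each "random" distribution exactly while keeping its difficulty identical across teams.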

4 Lessons Learned

The hand-in-hand track proved difficult to coordinate and score, primarily due to the use of volunteers. The volunteers were chosen based on their educational background, which was in non-engineering and non-science fields. It was also apparent that there were differences in eye-hand coordination capabilities between volunteers, and there was confusion regarding the training process for these individuals. This is in contrast with other evaluation approaches, which rely


on users who are already trained. One example of this is the evaluation of urban search and rescue teleoperated robots through the use of professional responders who have taken training on how to operate the robots [5]. It is apparent that this methodology will yield a better evaluation of the robotic system under test, but it would be too time consuming to support at a conference-sponsored competition. Other evaluations of search and rescue robots on a standard task set require the robot developer to supply the best operator of the system to conduct the testing. This method would be better suited for the hand-in-hand track; however, because of its subjective nature, the organizers have concluded that the hand-in-hand track will not be included in subsequent competitions.

The organizers' observations of this competition, together with the discussion with competitors, have resulted in the decision that all tracks will be fully autonomous, with no allowances for teleoperation or manual intervention in manipulator compliance. Objects included in the pick and place track will be carefully selected to ensure a range of object feature variability, including taking into consideration relative object-to-object and object-to-bin color and texture contrast. In addition, methods will be developed to stage object placement in bins to ensure that all participants compete using the same random bin complexity.

With respect to issues concerning the use of tools in addition to the end effector, the organizers feel that competitors should be free to use any tooling (i.e., custom, hand tools, suction cups) with the continued provision that any manual reconfiguration of an end effector across different tasks would result in a new robotic hand and a new score based on the tasks it performs.
The organizers feel that a diverse set of objects and tasks will drive competitors towards end-effector designs that are adaptable to many object types and away from multiple customized designs, which in the long run would increase development complexity, time, and costs.

To improve the instructions for future competitions, the organizers will consider the use of video to help better explain the caveats associated with the competition rule set. In addition, the competition venue will make tools available for those interested in evaluating the performance of their hands without the need for an autonomous robotic system, through the use of the National Institute of Standards and Technology (NIST) robotic hand grasping and manipulation benchmarks for assessing hand characteristics such as grasp strength, slip resistance, grasp cycle time, touch sensitivity, and in-hand manipulation [6,7]. These benchmarks include a set of physical measurements with supporting test methods and instrumented object artifacts that assess the elemental performance of robotic hands through the use of external measurement devices.

To address the stated needs for better benchmarking techniques within the competition, supporting performance measures of fully autonomous robotic systems, the organizers will investigate tasks and measures that promote the use of unbiased evaluation methods to assess how well a robot system performs in a particular application space.


To align the competition with research interests in robotics, new tasks will be selected based on existing manipulation data sets; recent surveys [9,11,12] provide comprehensive overviews of object manipulation data sets. In addition, a recent study [10] has shown that a small number of motions are performed very frequently in daily-living tasks; future manipulation tasks will take this into consideration. Because the objects were given at least a week before the competition, all teams pre-programmed grasp postures and finger allocations in advance. In the future, we plan to withhold a portion of the objects and require teams to generate proper grasps that would not only hold the objects, but also allow their robot to perform physically interactive tasks without dropping the objects [13–17].

5 Conclusion

The first Robot Grasping and Manipulation Competition, held during IROS 2016, allowed researchers focused on the application of robot systems to compare the performance of hand designs as well as autonomous grasping and manipulation solutions across a common set of tasks. At the time of this publication, the second RHGM-sponsored Grasping and Manipulation Competition has been approved for IROS 2017 in Vancouver, Canada; it will build on the successes of, and make adjustments based on, this analysis of the 2016 competition.

In general, it was determined that the hand-in-hand track with manual operation is too subjective for the competition space, and it is omitted in order to allow more development time to prepare for the 2017 tracks, which all require autonomous robot systems. To assess hand designs as stand-alone robotic system components, competitors will be given the opportunity to quantify the basic performance of their hands using a set of grasping and in-hand manipulation benchmarking tools. In addition, the simulation track is also omitted from the 2017 event, pending an analysis of simulation tools to identify those that are best suited and most reliable to support the competition tasks.

The competition will consist of two tracks: (1) Service Tasks and (2) Manufacturing Tasks. The manufacturing track will include the added challenge of pick-and-place, where parts to be assembled will be randomly located on a kit tray. The goal is to pick the objects up and assemble them per a set of instructions. Emphasis will be placed on designing a random placement scheme that can be easily reproduced for each team in order to keep the level of difficulty of the random distribution the same across teams. In addition, the design of the distribution will ensure that objects in close proximity are reasonably contrasted for detection by a perception system.
The manufacturing track will focus on small parts assembly, where tasks will incorporate fastening methods such as threading, snap fits, and gear meshing using standard components such as screws, nuts, washers, gears, and electrical connectors [8]. Components to be assembled will be presented in a structured format to simplify the perception problem. The service track will consist of several daily living tasks (DLTs) similar to the tasks defined in the 2016 competition. The tasks are designed to fit


within four levels of difficulty, with more points awarded at higher levels of difficulty. All tracks will be designed to be fully autonomous, which means that once the timer starts for a given task or set of tasks, there can be no human intervention. Time is recorded for each task or task set in order to decide the winner in the case of point-based ties. There will be no restrictions on automatic end-effector reconfiguration or the use of a tool held by the end effector; however, manual reconfiguration or changing of an end effector across different tasks will be considered as creating a new robotic hand, with a new score based on the tasks it performs following each manual process.

References

1. IEEE RAS TC Robotic Hands Grasping and Manipulation. http://www.rhgm.org. Accessed 30 June 2017
2. Yale-CMU-Berkeley Object Dataset. http://rll.eecs.berkeley.edu/ycb/. Accessed 30 Mar 2017
3. Amazon Picking Challenge. http://rll.berkeley.edu/amazon_picking_challenge/. Accessed 30 June 2017
4. Klamp't (Kris' Locomotion and Manipulation Planning Toolbox). http://motion.pratt.duke.edu/klampt/. Accessed 30 June 2017
5. Messina, E., et al.: Statement of Requirements for Urban Search and Rescue Robot Performance Standards. DHS and NIST Preliminary Report, p. 27 (2005)
6. Falco, J., Van Wyk, K.: Grasping the performance: facilitating replicable performance measures via benchmarking and standardized methodologies. Robot. Autom. Mag. 22(4), 125–136 (2015)
7. Performance Metrics and Benchmarks to Advance the State of Robotic Grasping Web Portal. https://www.nist.gov/programs-projects/performance-metrics-and-benchmarks-advance-state-robotic-grasping. Accessed 30 June 2017
8. Performance Metrics and Benchmarks to Advance the State of Robotic Assembly Web Portal. https://www.nist.gov/programs-projects/performance-metrics-and-benchmarks-advance-state-robotic-assembly. Accessed 30 June 2017
9. Huang, Y., Bianchi, M., Liarokapis, M., Sun, Y.: Recent data sets on object manipulation: a survey. Big Data 4(4), 197–216 (2016)
10. Paulius, D., et al.: Functional object-oriented network for manipulation learning. In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2655–2662 (2016)
11. Bianchi, M., Bohg, J., Sun, Y.: Latest datasets and technologies presented in the workshop on grasping and manipulation datasets. arXiv preprint arXiv:1609.02531 (2016)
12. Huang, Y., Sun, Y.: Datasets on object manipulation and interaction: a survey. arXiv preprint arXiv:1607.00442 (2016)
13. Sun, Y., Lin, Y., Huang, Y.: Robotic grasping for instrument manipulations. In: 2016 13th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), pp. 302–304. IEEE (2016)
14. Lin, Y., Sun, Y.: Task-oriented grasp planning based on disturbance distribution. In: Inaba, M., Corke, P. (eds.) Robotics Research, pp. 577–592. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-28872-7_33


15. Lin, Y., Sun, Y.: Task-based grasp quality measures for grasp synthesis. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 485–490. IEEE (2015)
16. Lin, Y., Sun, Y.: Grasp planning to maximize task coverage. Int. J. Robot. Res. 34(9), 1195–1210 (2015)
17. Lin, Y., Sun, Y.: Robot grasp planning based on demonstrated grasp strategies. Int. J. Robot. Res. 34(1), 26–42 (2015)

Robotic Grasping and Manipulation Competition: Future Tasks to Support the Development of Assembly Robotics

Karl Van Wyk, Joe Falco, and Elena Messina

National Institute of Standards and Technology (NIST), Gaithersburg, USA
[email protected]

Abstract. The Robot Grasping and Manipulation Competition, held during the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in Daejeon, South Korea, was sponsored by the IEEE Robotics and Automation Society (RAS) Technical Committee (TC) on Robotic Hands Grasping and Manipulation (RHGM) [1]. This competition was the first of a planned series of grasping and manipulation-themed events of increasing difficulty that are intended to spur technological developments and advance test methods and benchmarks so that they can be formalized for use by the community. The coupling of standardized performance testing with robot competitions will promote the use of unbiased evaluation methods to assess how well a robot system performs in a particular application space. A strategy is presented for a series of grasping and manipulation competitions that facilitates objective performance benchmarking of robotic assembly solutions. This strategy is based on test methods that can be used for more rigorous assessments and comparison of systems and components outside of the competition regime. While competitions have proven to be useful mechanisms for assessing the relative performance of robotic systems with measures of success, they often lack a methodical measurement science foundation. Consequently, scientifically sound and statistically significant metrics, measurements, and evaluation methods to quantify performance are missing. Using performance measurement methods in a condensed format will accommodate competition time limits while introducing the methods to the community as tools for benchmarking performance in the developmental and deployment phases of a robot system.
The particular evaluation methods presented here are focused on the mechanical assembly process, an application space that is expected to accelerate with the new robot technologies coming to market.

Keywords: Robot · Grasping · Manipulation · Competition · Benchmarks · Performance measures · Manufacturing

1 Introduction

Robot competitions [2–4] are plentiful and provide an excellent opportunity for researchers and developers to benchmark task-based solutions in a vying


environment, where the final score is based on the degree of task completion, followed by a subjective analysis of winners and losers to determine the relative advantages and disadvantages of the competing systems. These competitions provide a common problem space to demonstrate advancement of the state of the art in new software and hardware solutions of an integrated system while promoting the field of robotics for both educational and general audiences. We address concerns that competitions often lack scientifically sound and statistically significant metrics, measurement, and evaluation methods [5,6].

The 2016 competition featured two main categories of challenges, with a mixture of service-oriented (e.g., home assistant) and manufacturing-relevant tasks and objects. A pick-and-place challenge involved removing items from a shopping bin and placing them on target surfaces. A series of manufacturing-oriented manipulation tasks included twisting a bolt into a threaded hole with a nut driver, hammering a nail, and sawing open a cardboard box. Retrospectively, these tasks served as a good starting point but needed maturity in various key ways. For instance, the diversity of the tasks (particularly the manufacturing-related ones) was not sufficient, and scoring was unforgiving if a particular step in the process was unachievable. Various features of the tasks (e.g., fastener sizes) were arbitrarily chosen, and the initialization of tests, although randomized, was not controlled across competitors. Consequently, some teams experienced much more difficult starting scenarios. Further details about the experiences with the inaugural competition can be found in this book's chapter on Competition Feedback and Lessons Learned.

Moving forward, a more rigorous approach that is better aligned with manufacturing tasks is being undertaken. This is intended to help advance robotic grasping and manipulation specifically towards addressing assembly tasks.
Therefore, an approach to competitions that employs a more rigorous, assembly-centric performance evaluation methodology and artifacts is described in the remainder of this chapter. Standardized performance testing is an emerging and necessary tool within the robotics community, providing unbiased evaluation methods that assess how well a system performs a particular ability. These performance evaluations can be used to assess a system's individual components as well as its system-level operation. The National Institute of Standards and Technology (NIST) works to develop technical foundations for performance standards in several key areas of robotics, including emergency response robots, perception, grasping and manipulation, and agility [7–12]. In addition, this NIST work is often introduced at competition venues as a mechanism to disseminate, as well as evaluate, the performance test methods prior to the standardization phase. To help progress the use of robots for assembly operations, we present a strategy for a grasping and manipulation competition track that promotes objective performance benchmarking of robotic assembly solutions, based on test methods that can be used for more rigorous assessments and comparison of systems and components outside of the competition regime. Using these methods in a condensed format will accommodate competition time limits while introducing the methods to the community as tools for benchmarking performance. Competitions have also proven useful in advancing the development of performance benchmarking methods, and we expect that the use of these methods during competitions will help to further develop them for use by the robotics community. NIST anticipates that such research will lead to a principled way of specifying robot system characteristics and will help smaller organizations to determine which robot system components are best suited for their application space [13].

2 Why Robotic Assembly?

The International Federation of Robotics indicated in its 2016 World Robotics Report that by 2019, more than 1.4 million new industrial robots will be installed in factories around the world [14]. They also emphasize that these new robots will not only support traditional large manufacturers, but small and medium-sized enterprises (SMEs) as well. In order for robotic solutions to benefit SME-based manufacturing operations, where it is cost prohibitive to employ robotics experts, the robots must be programmable by line operators and easy to redeploy to support low-volume, high-mix production runs. Analysis of robot implementations in the auto industry estimates that assembly accounts for 50% of all manufacturing costs, yet it only accounts for 7.3% of robot sales [14]. A Price Waterhouse (PwC) survey of 107 respondents, conducted in conjunction with the Manufacturing Institute, found that the most common task amongst US manufacturers was assembly (25%), followed by machining (21%); the least common tasks were warehousing and performing dangerous tasks (both 6.5%). The survey also indicates that assembly was the most common task that manufacturers planned to invest in robotic technology to support (27%) [15]. As early as the 1970s, there were expectations that robots would be able to perform assembly operations to relieve humans of what were thought to be onerous, dangerous, repetitive, and tedious tasks. While this seemed achievable in concept, robot technologies of the time could not cost-effectively support the tight tolerances and component variability associated with the assembly process. Despite many advancements in hardware and control software, the limitations encountered in the early days of attempting robotic assembly operations still persist after many decades.
Due to their highly rigid designs and position-based control, most industrial robots require customized fixtures, tailored to a particular assembly operation and component geometry, in order to perform assembly tasks. These specialized fixtures introduce costs and add time to the setup of every new assembly job. Even more expensive and sophisticated approaches were conceived that compensated for motion errors using force sensing at the end-effector. These methods required 6-axis force-torque sensing at the tool point, low-level force feedback to the robot position or force controller, and highly application-specific algorithmic support for accomplishing assembly operations. Mechanisms and methods to help enable robotic assembly are surveyed in [16]. Recent progress in technologies for robotic arms and end-effectors holds potential to overcome the problems with robotic assembly. For instance, collaborative robots, or Co-Bots, are designed to safely work alongside human workers in both the manufacturing and service sectors [17]. These robots are equipped with force sensing and/or compliance in order to limit contact forces and prevent injury to humans working in their proximity. These capabilities also prove advantageous for facilitating assembly operations. Concurrently, robotic hand technology is emerging as a next-generation end-effector technology with advanced force control and manipulation capabilities. The cutaneous sensors on some existing robotic hands, coupled with the latest advances in artificial intelligence, are approaching and even exceeding the sensing capabilities of the human hand. Moreover, the enhanced reconfigurability of robotic hands promises new ways of tackling small-parts assembly for manufacturing operations.

3 Measuring the Assembly Capabilities of Robot Systems

An assembly consists of a set of operations that join together individual parts or subassemblies. For the purposes of the robotic grasping and manipulation competition, we focus on assemblies that incorporate small-part insertions and fastening methods such as threading, snap fitting, and gear meshing, using standard components including screws, nuts, washers, gears, and electrical connectors. Since robot system designs can vary greatly, a goal in developing standardized performance tests for assembly robotics is to provide a modular set of task-based tests to support a full spectrum of robotic solutions. On one end of the spectrum, a robot system and its components can be designed to suit a specific application task and perform this one task in a structured environment very efficiently. The structure comes in the form of specialized fixtures, part feeders, end-effectors, and tools that provide the necessary compliance to accommodate the assembly tolerances in the presence of robot position errors. On the other end of the spectrum, a robot system can be designed to be flexible and adaptive for handling a variety of parts, variations in similar parts, and multiple assembly process types in an unstructured environment. With respect to the task-based performance tests, a robot system designed to solve a particular task may excel at an individual test module, whereas a flexible system will be proficient across multiple test modules. Manual assembly efficiencies take into account the time associated with individual actions such as grasp, orient, insert, and fasten, as performed by a human with decades of experience and practice using their hands, eyes, and brains. One avenue for methodically designing task-level tests within manufacturing leverages factors identified in Boothroyd-Dewhurst (B-D) design for assembly (DFA) studies [18]. These studies have already identified and tabulated various important factors based on manual human performance in an assembly task.
For instance, size and symmetry of parts, tool usage, fixturing, mechanical resistance, mechanical fastening processes, visual occlusion, and physical obstruction all influence time-based human performance. Designing benchmarking tasks that efficiently sample this design space greatly aids the assessment of a robotic system as a whole and quickly identifies its strengths and weaknesses.


Aside from designing the physical tests, relevant performance metrics must also be carefully considered. For most applications, the two most important metrics are simple ones capturing speed and reliability. Speed is typically measured as the completion time for a particular task or sub-task. Reliability is captured as the probability of successfully completing a task or sub-task. The theoretical upper bound probability for successfully inserting a component (PS) is calculated given a confidence level (CL), the number of successes (m), and the number of independent trials (n). Given the binomial cumulative distribution function,

$$F(m-1;\, n, PS) = \sum_{i=0}^{m-1} \binom{n}{i} PS^{i} (1 - PS)^{n-i} \ge CL, \tag{1}$$

PS is taken as the largest value, to some precision, that still satisfies the above inequality. Both of these metrics are intuitive and relatively inexpensive to measure. Other subsidiary metrics can include the measurement of forces transmitted by the robot during the assembly process, the cost-effectiveness of the robotic solution, and energy efficiency. We focus on speed and reliability metrics for the competition. Another important aspect of performance measurement is providing confidence in the measured results. Consequently, multiple test repetitions of a particular task are required to generate a sufficient amount of data for benchmarking comparisons. Moreover, the use of various statistical tests, including tests for correlation, distribution, variance, and mean, helps identify significant comparative differences in performance data. Conducting these tests can also help reduce the number of false claims that may be issued regarding a robot's level of performance.
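As an illustrative sketch (the function names and the bisection tolerance below are our own, not part of the test method), the bound defined by inequality (1) can be found numerically: since the binomial CDF decreases monotonically in PS, the satisfying set is an interval [0, PS*], and PS* can be located by bisection using only the Python standard library.

```python
import math

def binom_cdf(k, n, p):
    """F(k; n, p): probability of at most k successes in n Bernoulli(p) trials."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def ps_bound(m, n, cl=0.95, tol=1e-6):
    """Largest PS (to precision tol) satisfying F(m-1; n, PS) >= CL, per (1).

    F(m-1; n, p) is monotonically decreasing in p, so the satisfying set is
    [0, PS*]; bisection converges to PS*.
    """
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if binom_cdf(m - 1, n, mid) >= cl:
            lo = mid  # inequality still holds; push the bound upward
        else:
            hi = mid
    return lo

# Example: 10 successes in 10 trials at a 95% confidence level.
print(round(ps_bound(10, 10), 3))  # → 0.741
```

For m = n = 10 the inequality reduces to 1 − PS¹⁰ ≥ CL, with closed-form solution (1 − CL)^(1/10) ≈ 0.741, which the bisection reproduces.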

4 Proposed Assembly Performance Tests

We present the concept of manufacturing task boards, where each task board design has a manufacturing theme such as insertion, threaded fastening, gear meshing, and electrical connectors (to be designed). The task boards are designed to incorporate standard off-the-shelf components of varying sizes that are representative of components typically used in assemblies. The parts can be presented to the robot system for grasping with various degrees of difficulty, ranging from placement in known locations to randomized placement. Note that the use of tools is permitted, although manual changing of end-effector components is not. Although the tests are designed to be accomplished using a single robot arm, any number of arms may be used.

4.1 Insertion Task Board

The insertion task board (Fig. 1) is designed to quantify a robot system's capability in performing “simple” peg-in-hole insertions. Relevant experiment design factors include (1) size of peg, (2) cross-sectional shape, and (3) position of peg. Peg-hole clearances are designed to be standard sliding fits with fixed peg lengths. The pegs are of standard metric sizes and are commercially available in the form of bar stock. Specifically, the edge lengths of the square cross-sectional pegs are 5 mm, 10 mm, 15 mm, and 20 mm; the diameters of the circular pegs are likewise 5 mm, 10 mm, 15 mm, and 20 mm. The plate insertion geometry cutouts, with the necessary sliding-fit tolerances, can be inexpensively manufactured using an online laser-cutting service based on a NIST-supplied design. In the standard test configuration, the plate is fastened to a rigid surface with the gravity vector parallel to the plane of the plate. The test begins with all pegs inserted as shown in Fig. 1. The goal is to remove all pegs from one side of the board and re-insert them from the other side.

Fig. 1. Insertion taskboard.

Completion time (CT) is the time required to grasp, move, and insert an individual peg. From the B-D handling table [18], the insertion task board has a ‘00’ handling code for the grasping and manipulation of the pegs, with an associated time of 1.13 s for humans. Furthermore, the B-D insertion table indicates a ‘00’ insertion code with an associated time of 1.5 s. Therefore, the theoretical completion time for each peg by a human is 2.63 s. With 32 pegs, the total board should be completed by a human within 84.16 s. Note that both CT and PS can be analyzed with data collected across all pegs simultaneously, or compartmentalized by dividing the data into square pegs and circular pegs, pegs of different sizes, or some combination thereof. Compartmentalization of the data can help shed light on the robot system's performance sensitivity with regard to different features of the pegs.
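To illustrate the compartmentalized analysis (the trial log, field layout, and category choices below are hypothetical, not prescribed by the test method), per-category mean CT and empirical success rates can be tallied from logged trials:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-trial log entries: (shape, size_mm, completion_time_s, success)
trials = [
    ("square", 5, 4.1, True), ("square", 20, 2.8, True),
    ("circle", 5, 5.0, False), ("circle", 10, 3.2, True),
]

# Group trials by peg shape; grouping by size or by (shape, size) works the same way.
by_shape = defaultdict(list)
for shape, _size, ct, ok in trials:
    by_shape[shape].append((ct, ok))

for shape, rows in sorted(by_shape.items()):
    cts = [ct for ct, ok in rows if ok]           # CT of successful trials only
    rate = sum(ok for _, ok in rows) / len(rows)  # empirical success rate
    print(f"{shape}: mean CT = {mean(cts):.2f} s, success rate = {rate:.2f}")
```

The same grouping applied to sufficiently many repetitions yields the per-category CT and PS comparisons discussed above.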

4.2 Fastener Task Board

The fastener task board (Fig. 2) is designed to quantify a robot system's capability for fine sensorimotor control. For manufacturing applications, this test seeks to measure a robot's performance at inserting and removing threaded fasteners. Relevant experiment design factors include (1) size of fastener, (2) shape of fastener, and (3) position of fastener.

Fig. 2. Fastener taskboard.

A square aluminum plate is drilled and tapped with a pattern of holes to support four each of M5 × 0.5, M10 × 1.25, M16 × 2.0, and M20 × 4.0 ISO Standard metric bolts, nuts, and washers (Fig. 2). In the standard test configuration, the plate is fastened to a rigid surface with the gravity vector parallel to the plane of the plate. The test begins with all fasteners attached to the plate as shown in Fig. 2. The robot system under test must then autonomously remove the fasteners and refasten them to the other side of the plate. Completion time (CT) is the time required to remove fasteners at a particular location, and re-fasten them from the other side of the board. This process requires (1) unfastening nut, (2) grasping and moving nut off to the side, (3) grasping and moving washer off to the side, (4) unfastening bolt, (5) grasping and moving bolt to other side of plate, (6) fastening bolt to plate, (7) grasping and moving washer, (8) inserting washer, (9) grasping and moving nut, and (10) fastening nut. An underlying assumption for the subsequent calculations is that the time to complete an unfastening and fastening step is approximately the same. An example calculation of CT for a set of M5 fasteners using the above process and the B-D handling codes (Table 1) includes 6 s for step 1, 1.43 s for step 2, 1.69 s for step 3, 6 s for step 4, 1.5 s for step 5, 6 s for step 6, 1.69 s for step 7, 1.5 s for step 8, 1.43 s for step 9, and 6 s for step 10. The total CT for a set of M5 fasteners for a human is then estimated to be 33.24 s. Similar calculations can be made for the other fasteners, and a total CT for the entire board can be estimated. The theoretical upper bound probability for successfully rerouting a set of fasteners can be calculated using the same inequality as listed before. Again, both

Table 1. Handling and insertion codes and times for various fasteners as indicated by B-D tables.

Part(s)                  Handling code   Handling time (s)   Insertion code   Insertion time (s)
M5 nut                   ‘01’            1.43                ‘38’             6
M10, M16, M20 nuts       ‘00’            1.13                ‘38’             6
All bolts                ‘10’            1.5                 ‘38’             6
M5 washer                ‘03’            1.69                ‘00’             1.5
M10, M16, M20 washers    ‘00’            1.13                ‘38’             6

the CT and PS measures can be calculated including all fasteners simultaneously, or compartmentalized by subdividing by size or type of fastener.
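As a worked check of the M5 estimate (the step list below simply transcribes the B-D times quoted above; the variable names are our own), the ten step times sum to the 33.24 s figure:

```python
# B-D time estimates (s) for the ten M5 fastener steps described in the text.
m5_steps = [
    6.00,  # 1. unfasten nut (insertion code '38')
    1.43,  # 2. grasp and move nut aside (handling code '01')
    1.69,  # 3. grasp and move washer aside (handling code '03')
    6.00,  # 4. unfasten bolt ('38')
    1.50,  # 5. grasp and move bolt to other side (handling code '10')
    6.00,  # 6. fasten bolt ('38')
    1.69,  # 7. grasp and move washer ('03')
    1.50,  # 8. insert washer (insertion code '00')
    1.43,  # 9. grasp and move nut ('01')
    6.00,  # 10. fasten nut ('38')
]
print(round(sum(m5_steps), 2))  # → 33.24
```

Summing the analogous step lists for the M10, M16, and M20 fasteners gives the total-board CT estimate.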

4.3 Gear Task Board

The gear task board (Fig. 3) is designed to quantify a robot system’s capability for performing gear meshing. Relevant experiment design factors include (1) gear pitch diameter, (2) gear pitch, and (3) position of gear. The gears are of standard metric sizes, and are commercially available. The design results in four clusters of gears, where each cluster involves gears of the same pitch.

Fig. 3. Gear taskboard.

The test begins with all gears inserted and meshed as shown in Fig. 3. The goal is to remove all gears from one side of the board, and re-insert and re-mesh them from the other side. Note that the use of tools is permitted, although manual changing of end-effector components is not. Moreover, the test is designed to be accomplished using a single robot arm, although any number of arms may be used.


Completion time (CT) is the time required to grasp, move, and insert a gear. From the B-D handling table [18], the grasping and transportation of the gears in the lower left quadrant has a ‘00’ handling code with an associated time of 1.13 s for humans. Furthermore, the B-D insertion table indicates a ‘03’ insertion code with an associated time of 3.5 s. Therefore, the theoretical completion time for each gear by a human is 4.63 s. This gear cluster should then be completed by a human within 18.52 s. Once again, the theoretical upper bound probability for migrating gears can be calculated using the previously listed inequality. CT and PS measures can be calculated across all gears, per gear cluster, or per gear.

4.4 Challenge Task Board

The concept of task boards can be extended to support competitions. Competitors are supplied with a set of task boards, one for each of the mechanical assembly topics mentioned above. These are used to develop and test their robotic applications, and we provide test methods and evaluation techniques with which to measure their progress. In such a scenario, a subset of each manufacturing topic defined on the boards described above is included in a competition board (including electrical connectors), as shown in Fig. 4. Furthermore, the challenge task board is designed to be low cost with readily available components, and NIST will potentially supply them as kits to the competitors at no cost in order to help promote the use of these benchmarking tools. At the competition, the task board presented to the competitors contains a mixture of assembly components using a subset of the same components defined in the practice boards, in a layout previously unknown to the competitors. To accommodate time limitations, teams will perform only one test cycle on the challenge task board. One point will be awarded for every part removed from the task board, and one point for every part re-inserted or re-fastened from the other side of the board. The maximum total score is achieved when all parts are migrated from one side of the board to the other. The rules for completing this task board include (1) no manual end-effector changes, (2) no manual relocation of the robot base after initialization, (3) the board must remain in its upright configuration (but can be relocated by the robot along the working surface), and (4) any number of robotic arms may be used. We believe there are many benefits to this approach, including easy expandability of assembly topics, good initial coverage of particular assembly tasks to gauge competitor capabilities, and benchmarking tools for the assessment of assembly robotics both inside and outside competitions, with feedback from users to help improve them.
In addition, the modularity of the competition task board facilitates the selection and administration of suitable difficulty levels based on the progress of competitors prior to the competition.
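A minimal sketch of the scoring rule (the function and its signature are illustrative, not the official scoring code): one point per part removed, one per part re-installed, with the maximum reached when every part migrates.

```python
def board_score(removed: int, reinstalled: int, total_parts: int) -> int:
    """One point per part removed plus one per part re-inserted/re-fastened.

    The maximum score, 2 * total_parts, is reached when all parts have been
    migrated from one side of the board to the other.
    """
    if not 0 <= reinstalled <= removed <= total_parts:
        raise ValueError("inconsistent part counts")
    return removed + reinstalled

print(board_score(10, 7, 12))   # → 17
print(board_score(12, 12, 12))  # → 24 (all parts migrated)
```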


Fig. 4. Challenge taskboard.

5 Conclusions

The progress of technological advancement and adoption in robotics can be accelerated through rigorous benchmarks and performance evaluations. Competitions have been shown to support the development and dissemination of concepts and draft versions of benchmarks and test methods. To help stimulate a broader understanding of performance requirements for robotic assembly, NIST is participating in a robotic hand grasping and manipulation competition. This chapter described four task boards that present common assembly operations: insertion, fastening, and gear meshing. Given how widespread these operations are, human-based time benchmarks exist and can be used for comparison with robotic solutions. Robotic grasping and manipulation solutions are expected to improve, and these task boards provide a means for quantifying this technological progress. Future competitions will incorporate additional assembly-relevant tasks and may expand the metrics captured beyond time and reliability. Aside from competitions, these metrics and task board-based test methods can be useful for understanding the strengths and weaknesses of different hardware and software solutions, yielding a trustworthy foundation for comparing and selecting robotic systems for assembly operations.

References

1. Robotic Hands Grasping and Manipulation RAS TC. http://www.rhgm.org/. Accessed 30 June 2017
2. Robocup. http://www.robocup.org/. Accessed 30 June 2017
3. DARPA Robotics Challenge. http://archive.darpa.mil/roboticschallenge/. Accessed 30 June 2017
4. Amazon Picking Challenge. http://www.amazonrobotics.com/#/roboticschallenge/. Accessed 25 Aug 2017
5. Amigoni, F., Bonarini, A., Fontana, G., Matteucci, M., Schiaffonati, V.: To what extent are competitions experiments? A critical view. http://rockinrobotchallenge.eu/competitions_experiments.pdf. Accessed 30 June 2017


6. Anderson, M., Jenkins, O.C., Osentoski, S.: Recasting robotics challenges as experiments. IEEE Robot. Autom. Mag. 18(2), 10–11 (2011)
7. Jacoff, A., et al.: Using competitions to advance the development of standard test methods for response robots. Association of Computing Machinery (ACM) (2012). http://ws680.nist.gov/publication/get_pdf.cfm?pub_id=910662. Accessed 30 June 2017
8. Marvel, J., Hong, T., Messina, E.: 2011 solutions in perception challenge performance metrics and results. In: Proceedings of the Workshop on Performance Metrics for Intelligent Systems, pp. 59–63 (2012)
9. Falco, J., Van Wyk, K.: Grasping the performance: facilitating replicable performance measures via benchmarking and standardized methodologies. Robot. Autom. Mag. 22(4), 125–136 (2015)
10. Performance metrics and benchmarks to advance the state of robotic grasping web portal. https://www.nist.gov/programs-projects/performance-metrics-and-benchmarks-advance-state-robotic-grasping. Accessed 30 June 2017
11. Downs, A., Harrison, W., Schlenoff, C.: Test methods for robot agility in manufacturing. Ind. Robot Int. J. 43(5), 563–572 (2016)
12. Agile robotics for industrial automation competition (ARIAC). https://www.nist.gov/el/intelligent-systems-division-73500/agile-robotics-industrial-automation. Accessed 30 June 2017
13. Shneier, S., Messina, E., Schlenoff, C., Proctor, F., Kramer, T., Falco, J.: Measuring and representing the performance of manufacturing assembly robots. NIST Internal Report 8090, November 2015
14. International Federation of Robotics: World Robotics Report (2016). http://www.ifr.org/industrial-robots/statistics/. Accessed 30 June 2017
15. Price Waterhouse: The new hire: how a new generation of robots is transforming manufacturing. https://www.pwc.com/us/en/industrial-products/assets/industrial-robot-trends-in-manufacturing-report.pdf. Accessed 30 June 2017
16. Bostelman, R., Falco, J.: Survey of industrial manipulation technologies for autonomous assembly applications. National Institute of Standards and Technology Report, NISTIR 7844 (2012)
17. Tobe, F.: Why co-bots will be a huge innovation and growth driver for robotics industry. IEEE Spectrum, December 2015. http://spectrum.ieee.org/automaton/robotics/industrial-robots/collaborative-robots-innovation-growth-driver. Accessed 30 June 2017
18. Boothroyd, G., Dewhurst, P., Knight, W.: Product Design for Manufacture and Assembly, 3rd edn. CRC Press, Boca Raton (2011)
