Simulation Results for Manipulation of Unknown Objects in Hand

Qiang Li, Robert Haschke, Helge Ritter and Bram Bolder

Abstract— We propose a simple but efficient control strategy to manipulate objects of unknown shape, weight, and friction properties – prerequisites which are necessary for classical offline grasping and manipulation methods. Instead, the proposed control strategy employs estimated contact point locations, which can be obtained from modern tactile sensors with good spatial resolution. Such a control strategy is implemented in a hierarchical control structure. The feasibility of the strategy is proven in simulation experiments employing a physics engine providing exact contact information. However, to motivate the applicability in real-world scenarios, where only coarse and noisy contact information will be available, we also evaluated the performance of the approach when adding artificial noise.

Index Terms— Object-in-Hand Manipulation

I. INTRODUCTION

We consider the challenging task of dexterously manipulating an object within a multi-fingered robot hand, i.e. moving the object with respect to the hand. Although object manipulation could also be accomplished by arm motion alone (employing a simple two-jaw gripper), this would require a much larger workspace. A lot of work focuses on pick-and-place manipulation in cluttered scenes [10], [5]. The emphasis of this research is to find suitable hand poses and collision-free trajectories to transport an object. However, in contrast to the task considered in this paper, the object is not (deliberately) moved within the hand.

Recent approaches to object-in-hand manipulation avoid the complex analysis of geometric relations and apply state-of-the-art motion planning methods like RRT [17] and PRM [11] to the manipulation problem. Xue et al. [16] tackled the problem of screwing in a light bulb. Exploiting the axial symmetry of the object, they search for contact trajectories within a set of contact points previously obtained from classical grasp planning methods. All these approaches attempt to find feasible motion trajectories in an offline fashion, utilizing a physics simulation to model the outcome of random actions. Again, this requires a considerable amount of prior knowledge about the manipulated object.

Employing fast tactile feedback, Ishihara et al. [4] propose a control law to spin a pen of known shape at an impressive speed. Tahara et al. [15] point out a method to manipulate objects of unknown shape. They use a virtual object frame, determined by the triangular fingertip configuration of a three-fingered hand, to derive a control law to manipulate the object's pose.

Q. Li, R. Haschke and H. Ritter are with the Research Institute for Cognition and Robotics (CoR-Lab), Bielefeld University, Germany {qli,rhaschke,helge}@cor-lab.uni-bielefeld.de. B. Bolder is with the Honda Research Institute Europe (HRI-EU), Offenbach, Germany [email protected]

However, without explicit sensory feedback, their method is limited in accuracy.

The latter two approaches propose a reactive control law for object manipulation, which is, in our opinion, a major prerequisite for robust object manipulation. Open-loop execution of manipulation trajectories obtained in an offline optimization process cannot account for real-life deviations from the planned trajectory: the initial object pose might be estimated incorrectly, fingers might unpredictably slide or roll on the object's surface, or even lose contact altogether. Consequently, we also propose a reactive control strategy based on pose feedback. Currently, we obtain this feedback from a physical simulation, which is used to show the feasibility of the approach. However, the pose feedback can also be estimated from visual features. To confirm the applicability of our method in noisy real-world scenarios, we add artificial noise to the accurate sensor readings obtained from simulation.

Conceptually, the process of object-in-hand manipulation can be divided into two stages: a local manipulation controller and a globally acting regrasp planner. The local controller reactively moves the object by a small amount only. When joint limits or the boundary of the object's configuration space are encountered, a higher-level planning step becomes necessary, also known as finger gait planning. During local manipulation, a state monitor can be used to check the distance to the configuration space limits. Once a limit is reached, regrasp planning is employed to adapt the grasp configuration. Subsequently, local manipulation is continued. This paper focuses only on local manipulation.

In the following sections, we first outline the assumptions underlying our manipulation approach (sec. II), introduce the concept of the strategy (sec. III), and detail both the employed inverse kinematics approach (sec. IV) and the proposed hierarchical control scheme to simultaneously control position and force at the contact points (sec. V). Section VI introduces the simulation toolkit and the simulation experiments and discusses their results. Finally, section VII gives a conclusion.

II. ASSUMPTIONS OF CONTROL STRATEGY

A. Point contacts

The real contact geometry between the fingertip and the object is complex and difficult to model, even if geometric shape information is available. Usually, the contact force is distributed over a larger contact area. In this paper, we assume that there is only one contact point on every finger and that the distributed contact force is concentrated at this point. We do not explicitly model friction properties. However, the physical simulation adopts a Coulomb friction model, approximating circular friction cones by four-sided pyramids. The contact is assumed to be compliant, due to elasticity of either the body or the fingertip. This assumption is important to realize the contact force controller on top of a joint position controller employing a linear spring model.

B. Micro manipulation assumption

In our manipulation strategy we do not explicitly control rolling or sliding. Rather, we assume that all contact locations stay fixed during every control cycle, i.e. the contact frames Ci relative to the object frame O and relative to the fingertip frames Fi do not change. Notably, this implies that the contact points on the object and on the fingertip move with identical velocity. Obviously, this assumption will often be violated in practice, e.g. when slippage occurs. However, employing the feedback from observed contact locations, we can determine changes of the grasp configuration (including sliding and rolling) after each control cycle. Instead of analyzing the noisy motion of contact locations, there exist more accurate methods to detect incipient slippage based on an analysis of the high-frequency vibrations caused by slippage of an object. For example, Schoepfer et al. have employed a neural network to recognize incipient slippage based on an analysis of the Fourier spectrum and exploited this slip detector to dynamically adapt the grasp force to the actual weight of the object [12].

C. Available Sensory Feedback

In order to facilitate the application of our manipulation strategy in real-world scenarios, we follow a two-fold strategy: (i) we replace offline motion planning by an online control strategy generating joint angle control signals based on current sensory feedback and the kinematics model of the multi-fingered hand; (ii) we require as little information about the object as possible. In particular, we assume that the global object shape, mass and mass distribution, as well as local contact properties like surface geometry and friction properties, are not known to the robot. Furthermore, we do not explicitly model sliding or rolling. Rather, we assume that we can estimate the current object pose, contact locations, contact normal vectors, and contact force magnitudes employing vision, joint angle sensors, and modern tactile sensors [13].

D. Quasi-static manipulation

The desired velocity of object manipulation is slow, such that dynamic effects need not be considered. Rather, we generate a quasi-static motion, assuming grasp stability at every instant in time.

III. REACTIVE MANIPULATION STRATEGY

Conventional grasp and manipulation planning methods [2], [6] decouple the planning from the control stage. In those approaches, the planning stage strongly depends on knowledge about the geometries of the object and the fingertips.

Fig. 1: Scenario for local object manipulation in hand. The panels show translation along the y-axis and rotation around the x-axis.

Some work explicitly considers spherical fingertips to facilitate the geometry-based planning process [15]. Furthermore, the friction coefficients of all contacts are required to evaluate grasp stability and to obtain optimally stable grasps. In real-world scenarios, especially when handling unknown objects, this information is not available. Nevertheless, humans can easily manipulate objects without this knowledge. We suppose that the incredible dexterity of human manipulation originates from tight control loops employing tactile sensor feedback. Consequently, we propose to employ tactile feedback to estimate contact positions and forces and introduce a manipulation strategy based on this feedback. If friction properties and joint torques are not available, we cannot deliberately control rolling and slipping, because internal forces cannot be designed. However, as we will show, local object manipulation is possible without explicitly designing all details of the physical hand-object interaction.

Fig. 1 illustrates the considered object manipulation scenario, showing a multi-fingered hand – a model of the Shadow Dexterous Robot Hand – manipulating a cylinder. Notice that the object size, although known to the underlying physics engine, is not available to the control algorithm itself. In order to describe the manipulation algorithm, we define three types of coordinate frames: the reference coordinate frame Or, the object coordinate frame Oo, and the contact coordinate frames Ocf / Oco. The reference coordinate frame Or is an inertial reference frame, which can be defined at any point in space. The motion of both the object and the fingertips is represented w.r.t. this frame. The object coordinate frame Oo is object-local and moves with the object; its origin can be defined as any reference point on the object. With this frame the relative motion of a fingertip w.r.t. the object can be described. The two contact coordinate frames Ocf / Oco are both positioned at the contact point. However, while Ocf is considered to be fixed relative to the corresponding fingertip, Oco is fixed w.r.t. the object frame. Although both coordinate frames have the same origin w.r.t. the reference frame Or, their orientations might differ. For the sake of clear visualization, only the contact coordinate frames on the index finger and the middle finger are shown in Fig. 1.
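To make the frame bookkeeping concrete, the sketch below represents the reference, object, and contact frames as 4x4 homogeneous transforms and expresses a contact point, given in the object frame, in the reference frame. This is only an illustration of the notation used above; the numeric frame values are hypothetical placeholders, not taken from the paper.

import numpy as np

def make_frame(R, p):
    """Build a 4x4 homogeneous transform from a rotation R (3x3) and origin p (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

# Hypothetical example values (not from the paper):
# object frame Oo expressed in the reference frame Or
T_r_o = make_frame(np.eye(3), np.array([0.18, 0.19, 0.21]))    # meters
# contact frame Oco expressed in the object frame (fixed w.r.t. the object)
T_o_co = make_frame(np.eye(3), np.array([0.0, -0.014, 0.02]))

# contact point (origin of Oco) expressed in the reference frame
T_r_co = T_r_o @ T_o_co
p_contact_ref = T_r_co[:3, 3]
print(p_contact_ref)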

Fig. 2: Model of the tactile fingertip of the Shadow Robot Hand.

The robot hand is modeled as a kinematics tree whose root is located in the palm. This tree is composed of five chains corresponding to the five fingers. On every chain the end-effector frame is defined to be the contact frame Ocf. We assume that all kinematics parameters are known and can be modeled off-line, except for the distal phalanges of the fingers. The homogeneous transformation matrices for these segments are estimated online based on a coarse geometry model of the fingertip (Fig. 2) and an estimate of the contact position and contact normal vector. According to the kinematics model provided by [6], the relation between the velocity of the contact point on the object and the contact point on the finger can be expressed as follows:

  $v_{cf_i} + R_{f_i}\,\dot S_{f_i} = v_{co_i} + R_o\,\dot S_{o_i}$   (1)
  $v_{co_i} = v_o + \omega_o \times R_o\,S_{o_i}$,   (2)

where $v_o$ is the linear velocity of the reference point on the object and $\omega_o$ is the angular velocity of the object. $v_{cf_i}$ and $v_{co_i}$ denote the linear velocities of the contact point on the $i$-th fingertip and on the object, respectively; they are represented in the reference frame Or. $R_{f_i}$ denotes the transform matrix of the distal phalange described in the reference coordinate frame, and $S_{f_i}$ describes the surface of the distal phalange of the $i$-th finger. $R_o$ represents the object transform matrix described in the reference frame and, finally, $S_{o_i}$ represents the surface of the object. Due to the micro manipulation assumption, we can calculate the desired linear velocity of the contact point on the fingertip given the desired position and orientation of the object. The desired angular velocity of the object can be easily calculated from the current and targeted rotation matrices [7].

IV. INVERSE HAND KINEMATICS

According to the previous section, the desired velocities of the contact points on the fingertips can be obtained given the desired trajectory of the object – due to the assumption that there is no relative velocity between object and fingertip contact points. This target velocity of the contact points on the fingertips is used to calculate target joint angles for the current control cycle using resolved motion rate control, inverting the forward kinematics equation:

  $\dot x = J(\theta)\,\dot\theta$   (3)

where $\dot x$ denotes the vector of linear velocities of the contact points on all fingertips, $\dot\theta$ is the vector of joint velocities, and $J$ is the Jacobian matrix, depending on the current joint angle configuration. Note that the Jacobian has a block-diagonal structure corresponding to the five finger-related sub-chains of the kinematic tree. Because the actual contact point locations can change in the course of manipulation, the Jacobian is calculated on-line according to the estimated contact positions and contact normal vectors. Equation (3) is inverted using the pseudo-inverse of J, which is computed in a numerically robust way using singular value decomposition (SVD). This yields the minimum-norm solution for the targeted joint velocities.
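The pipeline of Eqs. (1)-(3) can be sketched in a few lines of numpy. The sketch below derives the desired contact-point velocities from a desired object pose change under the micro manipulation assumption and inverts Eq. (3) with an SVD-based damped pseudo-inverse; the damping value and the simple axis-angle extraction are illustrative choices of ours, not prescriptions from the paper.

import numpy as np

def object_twist(p_o, R_o, p_o_des, R_o_des, dt):
    """Approximate the desired object twist (v_o, w_o) from the current and
    targeted object pose over one control cycle of length dt."""
    v_o = (p_o_des - p_o) / dt
    # angular velocity from the relative rotation (axis-angle, valid away from the pi singularity)
    R_err = R_o_des @ R_o.T
    angle = np.arccos(np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0))
    if angle < 1e-9:
        return v_o, np.zeros(3)
    axis = np.array([R_err[2, 1] - R_err[1, 2],
                     R_err[0, 2] - R_err[2, 0],
                     R_err[1, 0] - R_err[0, 1]]) / (2.0 * np.sin(angle))
    return v_o, axis * angle / dt

def contact_velocities(p_contacts, p_o, v_o, w_o):
    """Desired linear velocity of each contact point, assuming contacts stay fixed
    on the object during one control cycle: v_c = v_o + w_o x (p_c - p_o)."""
    return [v_o + np.cross(w_o, p_c - p_o) for p_c in p_contacts]

def damped_pinv_solve(J, xdot, damping=1e-3):
    """Invert xdot = J * qdot with an SVD-based damped pseudo-inverse
    (the damping value is an illustrative choice)."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_inv = s / (s**2 + damping**2)          # damp small singular values
    return Vt.T @ (s_inv * (U.T @ xdot))

Stacking the per-finger contact velocities into the vector $\dot x$ and the block-diagonal finger Jacobians into J then yields the joint velocity command for one control cycle.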

Fig. 3: Hierarchical structure of the manipulation controller.

V. HIERARCHICAL CONTROL STRUCTURE

A hierarchical control structure is used to realize the reactive manipulation strategy (cf. Fig. 3). At the highest level, the desired trajectory Oo(t) of the object is computed according to the task at hand, e.g. employing visual feedback to realize a given motion sequence. At the middle level, a finger motion and contact force solver is implemented to obtain the desired velocities $v_{cf_i}$ of all contact points as well as the desired contact forces $f_i^{tgt}$. We assume that these scalar contact forces act along the contact normal vectors $n_i$. Employing the micro manipulation assumption, we can easily derive new contact positions $O_{cf_i}$ based on the desired motion of the object frame Oo and the associated motion of the contact points $O_{co_i}$ on the object. The targeted contact point velocities $v_{cf_i}$ are then simply given by the distance vector from the old to the new contact positions, divided by the time step of the control cycle. To keep the resultant force applied to the object zero, i.e. $\sum_i f_i^d = 0$, a simple online force planner is used. This planner is suitable for object manipulation scenarios where the fingertips contact opposite faces and the contact normal vectors of different fingers are collinear.

At the lowest level, a composite position/force controller generates the desired contact point velocities, employing a linear spring model to realize the contact force control loop in parallel to the position control loop. With this composite controller it can be guaranteed that the contact points on the fingers track the desired velocity from the finger motion solver and that the fingers contact the object with the desired force along the contact normal direction. The principle of the composite controller is to superimpose the position and force error components. It can be summarized by the following equation:

  $u = k_P^p\,\Delta p_i + k_{\mathrm{stiff}}\bigl(k_P^f\,\Delta f + k_I^f \int \Delta f\,dt\bigr)\,\hat f$,   (4)

where u is the composite force/position control signal used as velocity input for the inverse kinematics module. Notice that the position controller is realized as a P controller parameterized by the gain $k_P^p$, while the force controller is a PI controller parameterized by the gains $k_P^f$ and $k_I^f$. We employ a PI controller in order to guarantee a higher level of priority for the force controller. The control scheme is summarized in Fig. 4.

Fig. 4: Composite position + force control scheme: a P controller acting on the contact position error and a PI controller acting on the contact force error, scaled by a stiffness coefficient, are superimposed.
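As a concrete illustration of the control law (4), the per-contact controller can be sketched as follows. The gain values, the time step, and the plain integrator (without anti-windup) are placeholders of ours and not values reported in the paper.

import numpy as np

class CompositeController:
    """Per-contact composite position/force control (cf. Eq. (4)): a P term on the
    contact position error plus a PI term on the scalar force error, the latter
    acting along the contact normal and scaled by a stiffness coefficient."""

    def __init__(self, kp_pos, kp_force, ki_force, k_stiff, dt):
        self.kp_pos = kp_pos        # position P gain (k_P^p)
        self.kp_force = kp_force    # force P gain (k_P^f)
        self.ki_force = ki_force    # force I gain (k_I^f)
        self.k_stiff = k_stiff      # stiffness coefficient (linear spring model)
        self.dt = dt
        self.int_df = 0.0           # integrated force error

    def step(self, p_des, p_meas, f_des, f_meas, normal):
        dp = p_des - p_meas                     # position error (3-vector)
        df = f_des - f_meas                     # scalar force error along the normal
        self.int_df += df * self.dt
        u_pos = self.kp_pos * dp
        u_force = self.k_stiff * (self.kp_force * df +
                                  self.ki_force * self.int_df) * normal
        return u_pos + u_force                  # velocity command for the IK module

# Illustrative use (all numbers are assumptions):
ctrl = CompositeController(kp_pos=2.0, kp_force=0.5, ki_force=0.1,
                           k_stiff=0.01, dt=0.01)
u = ctrl.step(p_des=np.array([0.18, 0.19, 0.21]),
              p_meas=np.array([0.18, 0.188, 0.21]),
              f_des=1.0, f_meas=0.8,
              normal=np.array([0.0, 1.0, 0.0]))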

VI. SIMULATION EXPERIMENT

The object manipulation algorithm is validated in a physical simulation experiment. We use the Vortex physics engine to obtain real-time contact information (i.e. contact position, normal vector, and contact force magnitude) as well as the object's pose (position and orientation). We employ the Kinematics and Dynamics Library (KDL) [1] to model the kinematic tree and to compute the inverse kinematics of the hand. To this end we modeled the kinematic structure of the robot hand up to the fingertip frames. Finally, within every control cycle, the fixed transformation matrix from the fingertip frame to the contact frame $O_{cf_i}$ is added to the tree based on the estimated contact positions. Weighted damped least squares is used to calculate the joint angle velocities, both to weight the motion contribution of individual joints and to increase numerical stability in the vicinity of singularities.

It is feasible to transfer this algorithm to the real world. By utilizing robot vision, the pose of the manipulated object can be extracted if markers are attached to the object, as shown in [14], [3]. The contact position can be extracted from the tactile fingertips of the Shadow Dexterous Robot Hand. These fingertips provide a sensor array comprising 34 tactels, giving an average spatial resolution of 3 mm. Employing the known shape of the fingertips (shown in Fig. 2), we can compute the contact points and normals w.r.t. the fingertip frame. The transfer of the implemented control strategy from simulation to the real Shadow Robot Hand will be the next step of our work.

Currently three geometric primitives, namely a sphere (radius 2.5 cm), a cylinder (radius 1.4 cm, height 9 cm), and a box (sized 6×4×21 cm³), are evaluated. The objects are of medium size compared to the robot hand, so rolling and slipping between the fingertips and the object will occur during the course of the manipulation. Object shape information is not available to the manipulation strategy.
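The paper does not detail how a single contact point and normal are computed from the 34-tactel array; one common approach, sketched below under that assumption, is to take the pressure-weighted centroid of the responding tactels and average their surface normals from the known fingertip geometry. The threshold value is a placeholder.

import numpy as np

def estimate_contact(tactel_positions, tactel_normals, pressures, threshold=0.05):
    """Estimate one contact point and normal in the fingertip frame from a tactile
    array, assuming the known fingertip shape provides a position and an outward
    normal per tactel. Illustrative method, not the one described in the paper."""
    p = np.asarray(pressures, dtype=float)
    active = p > threshold
    if not np.any(active):
        return None, None                      # no contact detected
    w = p[active] / p[active].sum()            # pressure weights
    contact_point = w @ np.asarray(tactel_positions)[active]
    normal = w @ np.asarray(tactel_normals)[active]
    normal /= np.linalg.norm(normal)           # re-normalize the averaged normal
    return contact_point, normal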


Fig. 5: Object translation along the y-axis.

The controller gains $k_P^p$, $k_P^f$, and $k_I^f$ are manually set to guarantee stability of the object manipulation. Obviously, before the object can be manipulated within the hand, a stable, i.e. force-closure, grasp needs to be established. Because grasping is not the focus of the present paper, we only briefly outline the grasping strategy adopted in our experiments [9]. Starting from a pre-grasp posture, which roughly encloses the object in a cage-like fashion, the fingers are closed towards a final grasp posture until object contact is observed. Both hand postures are chosen, based on the object size, from a set of three prototypes, namely force grasp, precision grasp, and pincer grasp. In our simulation experiments, the object is hovering in front of the hand, such that every contact would push the object away. Hence, the object is frozen in space during the initial grasp phase. Finally, the whole manipulation process comprises three phases:
• Grasping of the object while it is fixed in the world. This is necessary to achieve a successful grasp without kicking the object away.
• Unfreezing the object and stabilizing the grasp employing active force control in order to prepare manipulation.
• Actually manipulating the object, i.e. changing its pose relative to the palm of the hand.

In principle, the object can be manipulated along any trajectory, as long as configuration space limits are not hit. To show the feasibility of our reactive strategy, we consider the exemplary tasks of moving the object along the y-axis and rotating it about the x-axis of the reference coordinate frame. Without loss of generality, the orientation of the object is aligned to the world reference frame, such that the principal axes of the cylinder or the box are parallel to the axes of the world frame. The object is grasped with the thumb opposing three fingers. The contact normals of all contacts are roughly aligned to the y-axis of the world frame. Figs. 5 and 6 show the manipulation results for the cylindrical object.


Fig. 6: Object rotation about the x-axis.


The results for the box manipulation are similar and are skipped due to limited space. In Fig. 5 the motion trajectories for a reciprocating translation movement of about 0.5 cm along the y-axis are shown in the x-y and z-y planes. Fig. 6 shows the evolution of the Euler angles over time when rotating the object about the x-axis. The latter figure also visualizes the whole manipulation process, including the initial grasping. At stage 0, the grasp is realized and the predefined desired grasp force of 1 N is approached. At stage 1, the object becomes unfrozen. In this stage, the desired action of the robot hand is to keep the position and orientation of the object unchanged. Because of unbalanced grasp forces (every finger has the predefined contact force of 1 N), this unfreezing is an instantaneous disturbance to the balance of the object. However, the reactive manipulation strategy keeps the object stably in place by adapting the desired contact force on the thumb as well as the position of the thumb's contact point on the object. At stage 2, the object is rotated, as can be seen from the smooth change of the Euler angle α from 0 to -0.2 rad and back to zero again. The simulation experiment shows that the object can be locally manipulated in the hand and that the micro manipulation approach is feasible.

In order to confirm the applicability of our method in noisy real-world scenarios, we superimposed artificial measurement noise on the values obtained from the physics engine. The standard deviations of the added Gaussian noise are 0.5 cm for the contact positions and 0.3 N for the contact force magnitude (the desired contact force is 1.0 N). Fig. 7 summarizes the manipulation experiments under these noisy conditions. Fig. 7a shows that the contact force magnitude error (third row) oscillates around zero – in accordance with the added sensor noise. However, as shown in Fig. 7b, the translation motion is not strongly affected by the sensor noise, because the measured, i.e. noisy, contact position trajectory on average follows the targeted value. These simulation results show that the object can be successfully manipulated and moved along the y-axis despite noisy feedback.
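A sketch of the noise injection used in this evaluation could look as follows; the standard deviations (0.5 cm for positions, 0.3 N for force magnitudes) are taken from the text above, while the random-generator setup is an implementation detail of ours.

import numpy as np

rng = np.random.default_rng(0)

def add_measurement_noise(contact_pos_cm, force_magnitude_n,
                          sigma_pos_cm=0.5, sigma_force_n=0.3):
    """Superimpose zero-mean Gaussian noise on the exact contact position (cm)
    and contact force magnitude (N) reported by the physics engine."""
    noisy_pos = contact_pos_cm + rng.normal(0.0, sigma_pos_cm, size=3)
    noisy_force = force_magnitude_n + rng.normal(0.0, sigma_force_n)
    return noisy_pos, noisy_force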

Fig. 7: Object translation with noisy sensor feedback. (a) Thumb contact: composite control output (y-component, global frame), position feedback error (y-component, global frame), and force feedback error (z-component, local frame); (b) contact position feedback (nominal vs. noise-corrupted trajectory).

A. Discussion

In practice we will often observe rolling or sliding when the robot manipulates the object. Compared to traditional active control methods, the proposed reactive strategy uses different means to deal with these unpredictable events. In traditional methods, the friction cones (see the green cones in Fig. 1) and the measure of force closure are the key issues under the assumption of a point contact model with friction, i.e. one has to find contact forces $f_c$ such that an arbitrary external disturbance wrench $F_e$ can be resisted by the grasp:

  $G f_c = -F_e$,   (5)
constrained to
  $\sqrt{f_1^2 + f_2^2} \le \mu f_3$.   (6)

Here G denotes the grasp matrix [8]. Actively controlling rolling requires that the desired contact force trajectory be constrained by Eqs. (5), (6). Active sliding control requires decreasing the contact normal force to the critical value and moving the fingertip on the object under the constraint of Eq. (6). Apparently, knowledge of the friction force between the fingertip and the object is needed to realize active sliding control, and complete knowledge of the geometry of the fingertip and the object would be needed to realize active rolling control.
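For reference, checking constraints (5)-(6) for a given set of contact forces might look like the sketch below. It only illustrates what such traditional methods require (our strategy deliberately avoids this); the numeric values are hypothetical, and the grasp matrix G is assumed to be given.

import numpy as np

def friction_cone_ok(f_contact, mu):
    """Check Eq. (6) for a force expressed in the contact frame,
    with f_contact = (f_1, f_2, f_3) and f_3 the normal component."""
    f1, f2, f3 = f_contact
    return np.hypot(f1, f2) <= mu * f3

def wrench_balance_ok(G, f_c, F_e, tol=1e-6):
    """Check Eq. (5): the grasp matrix G maps the stacked contact forces f_c
    onto a wrench cancelling the external disturbance wrench F_e."""
    return np.allclose(G @ f_c, -F_e, atol=tol)

# Illustrative values (not from the paper): a friction coefficient and a
# single contact force lying inside its friction cone.
print(friction_cone_ok(np.array([0.1, 0.2, 1.0]), mu=0.5))   # True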

Our reactive control strategy does not assume such knowledge. Rather, the proposed manipulation strategy actively compensates for occurring rolling and sliding without explicit knowledge of the friction properties. Instead of finding feasible force-closure trajectories satisfying Eqs. (5), (6), we solve the simpler problem of finding scalar contact forces acting along the contact normals such that the resultant force equals zero. We argue that the internal forces, corresponding to the null space of the underlying system of equations, can be found in an on-line fashion employing slip detection algorithms [12]. This also allows manipulating objects of previously unknown weight.

The computed target velocity of all contact points is composed of two components: position and force errors. The different contributions of position and force control are visualized in Fig. 8, again for the translation motion along the y-axis. Stages 1 and 3 represent the motion along the positive and negative direction of the y-axis; stages 2 and 4 represent keeping the object stable. The blue dash-dotted line, the green dashed line, and the red solid line represent the contributions of the force, position, and overall error components, respectively. The data is again described in the reference coordinate frame shown in Fig. 1. While the motion along the surface (spanned by the x-z plane) is primarily composed of position errors, the motion along the normal direction (y-axis) is composed of both error components. Notice that the PI controllers used for force control also act as low-pass filters, smoothing the tracking trajectory.

Fig. 8: Contribution to the velocity of the contact point (thumb).

VII. CONCLUSION

In this paper, we proposed a simple but efficient reactive strategy to locally manipulate objects with a multi-fingered robot hand, assuming as little knowledge about the object as possible. This reactive strategy is based on the micro manipulation assumption, i.e. sliding and rolling are not actively controlled and are assumed to be zero. Simulation experiments have shown the feasibility of the approach. With this method the multi-fingered robot hand can successfully manipulate an object along desired trajectories, even if the geometry of the object is unknown and the interaction between the object and the fingers is unpredictable.

VIII. ACKNOWLEDGMENT

Qiang Li gratefully acknowledges the financial support from the Honda Research Institute Europe for the project “Autonomous Exploration of Manual Interaction Space”.

REFERENCES

[1] KDL wiki | The Orocos Project. http://www.orocos.org/kdl.
[2] A. A. Cole, P. Hsu, and S. S. Sastry. Dynamic regrasping by coordinated control of sliding for multi-fingered hand. In IEEE International Conference on Robotics and Automation, volume 2, pages 781–786, Scottsdale, AZ, USA, May 1989.
[3] Christof Elbrechter. ICL project. http://www.iclcv.org/.
[4] T. Ishihara, A. Namiki, M. Ishikawa, and M. Shimojo. Dynamic pen spinning using a high-speed multifingered hand with high-speed tactile sensor. In 6th IEEE-RAS International Conference on Humanoid Robots, pages 258–263, 2006.
[5] Quoc V. Le, David Kamm, A. F. Kara, and Andrew Y. Ng. Learning to grasp objects with multiple contact points. In IEEE International Conference on Robotics and Automation, pages 5062–5069, 2010.
[6] M. Zribi, J. Chen, and M. S. Mahmoud. Control of multifingered robot hands with rolling and sliding contacts. Journal of Intelligent and Robotic Systems, 24(2), February 1999.
[7] R. Campa and H. Torre. Pose control of robot manipulators using different orientation representations: A comparative review. In American Control Conference, St. Louis, MO, USA, 2009.
[8] R. M. Murray, Z. X. Li, and S. S. Sastry. A Mathematical Introduction to Robotic Manipulation. CRC Press, 1994.
[9] Frank Roethling, R. Haschke, Jochen J. Steil, and Helge J. Ritter. Platform portable anthropomorphic grasping with the Bielefeld 20-DOF Shadow and 9-DOF TUM hand. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2951–2956, San Diego, California, USA, Oct 2007.
[10] Steffen W. Ruehl, Andreas Hermann, Zhixing Xue, Thilo Kerscher, and Ruediger Dillmann. Graspability: A description of work surfaces for planning of robot manipulation sequences. In IEEE International Conference on Robotics and Automation, pages 496–502, 2011.
[11] J. P. Saut, A. Sahbani, S. El-Khoury, and V. Perdereau. Dexterous manipulation planning using probabilistic roadmaps in continuous grasp subspaces. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2907–2912, 2007.
[12] Matthias Schoepfer, Carsten Schuermann, Michael Pardowitz, and Helge J. Ritter. Using a piezo-resistive tactile sensor for detection of incipient slippage. In International Symposium on Robotics, Munich, Germany, 2010.
[13] Carsten Schuermann, R. Haschke, and Helge J. Ritter. Modular high speed tactile sensor system with video interface. In Tactile Sensing in Humanoids – Tactile Sensors and Beyond @ IEEE-RAS International Conference on Humanoid Robots (Humanoids), Paris, France, 2009.
[14] Jan Steffen, Erhan Oztop, and Helge J. Ritter. Structured unsupervised kernel regression for closed-loop motion control. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Taipei, Taiwan, 2010.
[15] K. Tahara, S. Arimoto, and M. Yoshida. Dynamic object manipulation using a virtual frame by a triple soft-fingered robotic hand. In IEEE International Conference on Robotics and Automation, pages 4322–4327, 2010.
[16] Z. Xue, J. M. Zollner, and R. Dillmann. Dexterous manipulation planning of objects with surface of revolution. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2703–2708, 2008.
[17] M. Yashima. Manipulation planning for object re-orientation based on randomized techniques. In IEEE International Conference on Robotics and Automation, volume 2, pages 1245–1251, 2004.
