Terrain edge detection for biped walking robots using

0 downloads 0 Views 15MB Size Report
Jun 3, 2017 - its geometry, a multi-modal sensory strategy with 6 degrees of ...... incline, it is more likely to be found when the vCoP is near the edge of the ... for active sensing motion, which interpolates four vertices of the .... p23mg − k23.
Terrain edge detection for biped walking robots using active sensing with vCoP-position hybrid control Yisoo Leea , Hosang Leea , Soonwook Hwanga , Jaeheung Parka,b,∗ a Department

of Transdisciplinary Studies,Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Republic of Korea b Advanced Institutes of Convergence Technology, Suwon-si, Gyeonggi-do, Republic of Korea

Abstract Perception of terrain is one of the most important technical impediments for biped robots to locomote in human environments. Perception error exists due to imperfect exteroceptive devices causing issues such as limited viewing angles or occlusion of visual sensors. This error can cause unexpected contact between the robot and the environment. The reaction wrenches from unexpected contact negatively affect the control and balance of the biped robot. In this study, active sensing method is proposed to estimate the edges of the contact terrain using robot link geometry information. By generating proper active motion that maintains the contact, the geometric information of the contact link is collected to estimate the edge of the contact terrain. To recognize contact and calculate its geometry, a multi-modal sensory strategy with 6 degrees of freedom (DOF) force/torque sensors at robot ankles and joint encoders is proposed. The concept of virtual Center-of-Pressure (vCoP) is utilized to generate active motion of the foot while maintaining the contact. To prevent loss of balance during active motion, the normal force condition of the vCoP is taken into account and the contact moment of the supporting foot is controlled. The operational space control scheme for a floating-base robot is adopted to apply compliant motion and vCoP-position hybrid control framework. The proposed method is verified by experiments using a 12-DOF biped robot to estimate the edge line of a block that has contact with the robot foot. Keywords: active sensing, Center-of-Pressure (CoP), biped robot, edge line, contact detection, balancing

1. Introduction There have been many researches and developments in humanoid robotics to maximize the versatility of manipulation and locomotion. In particular, ∗ Corresponding

author Email addresses: [email protected] (Yisoo Lee), [email protected] (Hosang Lee), [email protected] (Soonwook Hwang), [email protected] (Jaeheung Park)

Preprint submitted to Journal of LATEX Templates

June 3, 2017

5

10

15

20

25

30

35

40

45

various humanoid robots have been unveiled to the public in past decades, presenting enhanced technologies of humanoid robotics. For example, at the DARPA Robotics Challenge (DRC) [1], the robotics competition held in 2013 and 2015, various types of humanoid robots successfully performed complex tasks of traversing irregular terrains, and it was proved that the humanoid robots can be considered a practical option that can be controlled and managed to tackle hazardous and complex activities. When humanoid robots are deployed in human environments, perception of external environments is one of the most important technical issues for robots to coexist and interact with humans. Before the robot moves to a target area and handles a given task, it should be provided accurate three-dimensional geometric information of the surroundings in which is expected to operate. If inaccurate geometric information is given to the robot, unexpected contact or collisions may occur between the robot and environmental objects including humans, which can have dangerous implications such as the robot falling down and damage itself [2] or injuring humans and damaging external objects in the environment. Therefore, many researches have proposed methods for the perception of obstacles and terrain, and mapping algorithms for footstep planning of the humanoid robots [3–5]. Most of the conventional approaches utilize sensory devices such as cameras and Light Detection and Ranging (LiDAR) equipments for perception. Cameras collect information on the colors and depth of the environment, and LiDARs measure and provide the spatial distances from the LiDARs origin to various objects. Algorithms such as Random Sample Consensus (RANSAC) [6] have been developed for the identification of objects using cameras and LiDARs. During DRC, most of the teams attached one or more cameras and LiDARs to their humanoid robots not only to recognize static environmental objects [7–10] but also to estimate the dynamic state of the robot with reference to the objects and the environment while it is walking [11]. Although the sensory approach for perception is effective, especially for scanning a wide area of the environment to generate a three-dimensional map, there are several limitations impeding the precise perception of the external environment of the humanoid robot. Due to the limited hardware capability of the sensory devices, they may have calibration errors, and are vulnerable to changes in illumination [13, 14]. Most sensory devices are mounted on the head which is at the maximum height of the robot. With such device placement, it is difficult for the robot to sense the terrain near its foot because of the limited viewing angle of the camera and occlusion with the robot body. An example described in Figure 1 shows the limitations of the real robot. In certain postures, the robot’s body may block the sensory device’s view of the environmental objects [9]. Moreover, the motion of the robot including biped walking will cause movement of the sensory devices which may reduce accuracy of perception in real-time and result in odometry errors [3]. Since terrain near the foot is difficult to detect when using the sensory devices, this odometry error can cause critical problems in foot stepping of the robot. Tactile sensing, which utilizes the contact between the robot and environ2

RGB-D Camera

1.4 m Scanning Area 1m

(b)

(a)

Figure 1: (a) Configuration of the biped robot and scanning environment. The RGB-D camera is mounted at the head of the robot and the camera is tilted at a maximum angle to the ground direction. The scanning area from the RGB-D camera is approximately described with blue color and the perceptible distance from the foot based on the configuration is more than 0.95 m. (b) Scanned area of the RGB-D camera based on the configuration in Figure 1 (a). and RGB-D edge detection [12] result of the environment. The detected point cloud of edge is shown using orange-colored dots.

50

55

60

65

70

75

ment, can be a complement for visual sensory perception, since tactile feedback is one of the main sources of robust physical interaction among humans [13]. Although this approach is only possible in case of physical contact, precise detection of a small contact area is possible when the humanoid robot touches an object or lays a foot on the ground. Many humanoid robots are equipped with low density sensors [15–17] on their body and high-density sensors on the end-effectors such as fingertips [18–20]. In previous studies with tactile sensors attached to the feet of a humanoid robot, tactile sensing was mainly used for estimation of foot state [21] or for balance control [22], not for terrain edge detection. In terms of showing good results in a fixed-based robot [23], there is a potential to develop the terrain edge detection methods using a tactile sensor for a humanoid robot. However, to apply tactile sensors to large scale robotic systems, networking and spatial calibration problems must be solved [24]. Several methods have been developed to accurately calibrate and reduce the modeling error [25–27]. Tactile sensing using only proprioceptive sensors such as encoders or joint torque sensors has been developed to reduce dependence on exteroceptive sensors. A physical collision detection method using a residual signal [28] is proposed. It estimates robot joint torques produced by contact through robot joint position measurements. The method cannot identify, however, the exact location and amplitude of the contact force without additional sensors. The residual-based method was extended to humanoid robots and validated by simulation [29]. In [14], contact force and location are estimated through non-linear optimization using measured joint torques generated by external forces. This method assumed the shape of the robot link as a simplified model like a cylinder, and assumed that only one force applies at a single contact point. Multiple studies have been conducted to estimate contact information using

3

80

85

90

95

100

105

110

115

120

an active sensing approach. In general, active sensing indicates an approach where state parameters of a passive sensor are adjusted on the basis of historical sensor data to improve information quality [30–33] such as vision, tactile feedback, or multi-sensor systems [34–36]. With the active sensing approach, parameters of contact models are estimated by generating a pushing force and motion of the end-effector at the soft surface of the object [37]. In [38, 39], the active sensing algorithm moves the end-effector of the manipulator around the contact point and estimates the geometry of the environment using joint angles of the robot, which requires compliant motion control. Since typical position control schemes generate the robot motion with high stiffness, torque control is adopted instead, which enables utilization of low feedback gains for position or velocity errors to implement compliance in case of unexpected contact. In order to use the active sensing approach for robotic systems with a floating base, a balancing algorithm that estimates and controls external forces from the contact is required. A method have been developed to detect a line-shaped foothold using Center-of-Pressure (CoP) and rotational motion information of the foot during walking [40]. Available foothold is detected by active motion of the foot and the estimated foothold is applied to the walking control. It showed the possibility and efficiency of the motion data for line detection for humanoid robots. For the precise perception of external environments for humanoid robots, approaches with both visual sensory and tactile sensing should be integrated. In other words, a global map is constructed with three-dimensional geometric information obtained from cameras and LiDARs, and local information such as contact areas that can be occluded by the feet of humanoid robots can be obtained by tactile sensing. In addition, since the performance of exteroceptive sensors restricts the accuracy of perception, active sensing is an effective approach, which is independent of the hardware specifications of the exteroceptive sensors.By providing a geometry of a small contact area or an object that is not precisely detected, active sensing can improve the accuracy of multimodal perception when it is integrated with existing approaches. In this research, a method for the active sensing is developed to estimate the edge line of an object, which comes in contact with the foot of a humanoid robot. For the estimation of the edge line, joint angles, geometry of the robot foot, and force/torque (F/T) sensors on robot ankles are utilized. The robot links are considered rigid and contact foot is assumed to be a flat square plane. Among various contact types between the foot and the environment, a line contact is chosen for the detection, since the robot foot might commonly come in contact with stairs (Figure 2) or blocks, which the robot foot should step on or avoid during walking. Because the stairs or the cinder blocks are rapidly changing in height before and after the edge, the robot would fall down if it misses a step by inaccurate perception [5]. Therefore, the compliant motion control scheme for floating-base robots is adopted by using operational space control based whole-body control framework [41] to reduce the effects of disturbance from unexpected contact. The F/T sensors on both ankles of the robot are used to recognize unexpected contact during compliant motion control. After con4

Figure 2: Complex shape of stairs in human environment.

125

130

135

140

145

tact recognition, the active sensing algorithm is operated to generate rotational motion around the contact line. During rotational motion of the robot foot, geometric information of the foot plane is collected and the edge line of the terrain is estimated as the intersection line of the foot planes. The rotational motion of the foot can be synthesized by virtual CoP (vCoP) generation. We defined the vCoP as the commanding CoP to be generated on the robot foot sole. If the vCoP is located on the contact surface between the sole and the ground, the robot foot will maintain a stable contact. If the vCoP is positioned on the sole surface that is not in contact with the ground, the robot foot will have a rotational motion. The vCoP and position of the tasks are controlled using a hybrid vCoP-position control formulation in the operational space. Since the biped robot has a limited contact area and the active sensing motion provides uncertain external forces from contact with the robot system, a balancing algorithm is required to ensure that the robot does not lose its contact between the supporting foot and the ground. The balancing algorithm developed using the same control framework [42] is adopted for this research. [40] and our study are similar in both the purpose and the method of creating a foot rotation using the concept of CoP while two approaches have some differences in detail. [40] uses the Quadratic Program (QP)-based control framework and requires a ground normal information to estimate an edge of the foothold. Since [40] is focused on the method of walking on partial foothold, there are relatively few discussion on the edge detection method and the accuracy of the estimation results. The method proposed in this paper uses operational space based control framework and edge estimation is possible even if ground information including ground normal is unknown. Experimental results are analyzed to discuss the accuracy

5

150

155

160

165

170

175

of estimation and the causes of estimation errors. This study is organized as follows. In Section. 2, we explain how a hierarchical controller is constructed using operational space control-based whole-body control framework where the active sensing motion and the balance is controlled. In Section. 3, the vCoP generation method is described which synthesizes rotational motion of the landing foot around the edge line of the contact object detected by the F/T sensor. Over-rotation can be prevented by planning the vCoP within the foot boundary. As a result, an edge line of the object is estimated as the intersection of the foot planes by the least square method. In Section. 4, the proposed method is verified by experiment where an edge line of a block is successfully detected. The summary of this work is presented in Section. 5. 2. Control Framework for Active Sensing Motion Generation This section describes the hierarchical control structure designed using the operational space based whole-body control framework to control active sensing motion and balancing. The vCoP-position hybrid control structure is designed from the control framework to generate active sensing motion. As mentioned in Section. 1., active sensing motion starts when an unexpected contact between the robot’s foot and the terrain is recognized. The unexpected contact may occur when the robot’s foot is controlled to swing and land in a foot stepping motion without an accurate perception of the environment. The same control scheme is applied for not only generating active sensing motion but also controlling motion of the biped robot before contact recognition. Since the biped robot is a floating base system, the reaction force caused by unexpected contact on the landing foot can destabilize the balance if it moves the CoP of the supporting foot out of the stable boundary. Therefore, a balancing controller is designed to change the contact moment of the supporting foot in the null-space of the vertical direction position of the trunk and the position and orientation of the landing foot. 2.1. Whole-body Control Framework

180

Park [41] proposed a contact-consistent control method using a whole-body control framework. We briefly introduce the main concept of the controller in this chapter. 2.1.1. Contact Constrained Dynamics Rigid body motion of the floating base robot with k joints can be described with n = k + 6 degrees of freedom (DOF) since the base body can perform 6DOF motion. The dynamic equations taking into account the contact wrenches for the floating-base robot are as follows, A(q)¨ q + b(q, q) ˙ + g(q) + JcT fc = Γ,

6

(1)

185

where q is the n × 1 vector of joint angles and Γ is the n × 1 vector of joint torque. A(q) is the n × n inertia matrix, b(q, q) ˙ is the n × 1 vector of Coriolis / centrifugal forces and g(q) is the n × 1 gravity vector. Jc is the Jacobian for the contact positions and orientations, which is defined as x˙ c = Jc q˙ and fc is the vector of contact wrenches. The rank should always be checked when constructing Jc , since Jc must be a full row rank. The dynamics of the robot (1) are projected to the contact space and can be shown as Λc x ¨c + µc + pc + fc = J¯cT Γ, (2) where Λc

=

(Jc A−1 JcT )−1

(3)

µc

=

Λc {Jc A−1 b(q, q) ˙ − J˙c q} ˙

(4)

pc ¯ JcT

190

= =

Λc Jc A

−1

g(q)

(5)

Λc Jc A

−1

,

(6)

Λc is the inertia matrix at the contact space, µc is the vector of Coriolis/centrifugal forces at the contact space, pc is the vector of gravity forces at the contact space and J¯c is the dynamically consistent inverse of Jc . If the contacts are stationary and rigid, we can assume that x ¨c = 0 and x˙ c = 0. Then equation (2) becomes fc = J¯cT Γ − µc − pc .

(7)

By substituting the equation (7) with equation (1), and with the Jacobian defined as x˙ = J q, ˙ the contact constrained dynamics equation at the operational space x can be obtained as follows. Λ(q)¨ x + µ(q, q) ˙ + p(q) = F,

(8)

Λ(q) = (JA−1 NcT J T )−1 J¯T = ΛJA−1 NcT µ(q, q) ˙ = J¯T b(q, q) ˙ − ΛJ˙q˙ + ΛJA−1 J T Λc J˙c q˙

(10)

where

c

p(q) = J¯T g(q), NcT = I − JcT J¯cT ,

195

200

(9) (11) (12) (13)

Λ(q) is the inertia matrix at the operational space, µ(q, q) ˙ is the vector of the Coriolis and centrifugal forces at the operational space, p(q) is the vector of the gravity forces at the operational space, and F is the force vector acting on the task at the operational space. Since xc is the position and orientation of the contact link and x is the position and orientation of the operational space that corresponds to the task, the two coordinates should not be the same. For example, in a humanoid robot, xc can be the position and orientation of the supporting foot, and x can be the position and orientation of the trunk. 7

2.1.2. Control Framework To formulate a torque solution composed of only real actuated joint torques Γa , k × n selection matrix S can be used to exclude the unactuated joints. Γ = S T Γa

(14)

If the floating base body motion is described in the first 6 joints, S = [0k×6

Ik×k ].

(15)

Using the selection matrix to exclude the unactuated joints, F = J¯T S T Γa

(16)

Define JeT as the generalized inverse 1 of J¯T S T which minimizes the acceleration energy [43]. (17) JeT = J¯T S T . Then the commanding torque for the actuated joints can be described as Γa = JeT F = JeT {Λf ∗ + µ + p},

(18)

where f ∗ is the vector of the desired acceleration of x. For PD control, f ∗ = kp (xd − x) + kd (x˙ d − x), ˙

(19)

where xd is the vector of the desired position, x˙ d is the vector of the desired velocity, and the kp and kd are the proportional and derivative gain, respectively. The null-space projection matrix is shown as e T = I − JeT JeT , N

(20)

JeT = J¯T S T .

(21)

where e T Γa,0 , where Γa,0 is an arbitrary The null-space control torque is obtained by N control torque. Finally, overall commanding torque can be calculated with e T Γa,0 . Γa = JeT F + N 205

(22)

In this study, JeT F , or the task torque, is used to control the motion of the trunk and the landing foot, and to generate its vCoP, which is explained in e T Γa,0 , which indicates the null-space torque, is used to maintain Section. 2.2. N a balance of the robot by controlling the vCoP of the supporting foot, which is explained in Section. 2.3.

8

Trunk

x

z

Landing Foot

y

object

x Supporting Foot

Figure 3: Frame of each foot. Global frame (Os ) is located at the center of the supporting foot and its z axis is oriented in the opposite direction of the gravitational force. Local frame (Ol ), which describes the force and moment of the landing foot, is located at the center of the landing foot.

210

215

220

2.2. vCoP-Position Hybrid Controller To estimate the edge-line location of contact by active sensing, the landing foot must rotate with respect to the edge line of the object without losing contact with it. Previous works [38, 39] evaluated ythte desired angle trajectory for thee rotation of the end effector for collecting data using active motion. However, if the given angle is larger than the slope of the object, the end-effector may over-rotate and may be lifted due to the excessive moment for the rotation. At the beginning, it is difficult to select an appropriate angle range since the geometric information of the object is not known precisely. To prevent overrotation caused by excessive moment of the end-effector, vCoP constraint is applied in this research. We defined vCoP as the commanding CoP which can become the real CoP if the foot has plane contact with the object. The vCoP is defined by the commanding normal force and moment at the local frame Ol of landing foot (Figure 3). l l

225

pn,x pn,y

=

−l mn,y /l fn,z ,

(23)

=

l

(24)

l

mn,x / fn,z ,

where l pn,x and l pn,y are the x and y direction vCoP coordinates of landing foot at local frame Ol . The superscript l indicates the reference frame Ol and the subscript (n, i) means the target location, i-component of the landing foot. l mn,y , l mn,x , and l fn,z are the commanding moment around the x axis, commanding moment around the y axis and commanding force in the direction of the z-axis at the local frame Ol , respectively. The contact can be maintained by 1J T

e is expressed as (J k )T in the previous research by Park [41].

9

vCoP

vCoP

CoP

(a)

vCoP = CoP

vCoP

CoP

CoP

(c)

(b)

(d)

Figure 4: Rotational motion of the foot generated using the relation between vCoP (yellow point) and CoP (red point). (a), (b) If the vCoP is located at the area where the foot does not have any contact with the environment, the CoP will be located at edge of the environment and the foot will rotate around the edge point maintaining the contact. The direction of rotation is determined by the vCoP location. (c) If the vCoP is inside the contact area between the foot and the environment, the CoP becomes the same as the vCoP and the foot will not have any rotational motion. (d) If the vCoP is located outside of the foot boundary, the foot will rotate around the edge of the foot.

230

235

240

245

250

255

a normal force l fn,z and rotational motion can be generated by moments l mn,y and l mn,x if vCoP is not equivalent to the real CoP. If the vCoP is within the boundary of the foot sole, the foot will not have a rotational motion around the edge line of the foot and this relation prevents over-rotation of the landing foot [44]. If the vCoP is located at the foot area where no contact occurs, the foot will rotate around the edge point on a rotation axis, as shown in Figure 4 (a) and (b). As seen in Figure 4 (c), the landing foot will not have any rotational motion if the vCoP is located inside the contact area between the foot and the environment. Besides, the foot will rotate around the edge of the foot and over-rotation will occur when vCoP is located outside of foot boundary (Figure 4 (d)). Therefore, rotational motion will be generated only when the landing foot is in partial contact with the ground since the vCoP is controlled to ensure that it is located inside the foot boundary. When active sensing motion starts, l mn,y , l mn,x , and l fn,z are controlled instead of orientation around x, y axis, and position z at frame Ol to generate a feasible vCoP which should be inside of the landing foot boundary. Other tasks such as the position and orientation of the trunk at frame Os , x and y direction position of the landing foot at frame Ol and orientation around z axis at frame Ol are controlled to retain the initial value during active sensing motion. To perform the tasks including vCoP, the Jacobian and desired force for the main task are formulated as follows. The Jacobian matrix of the main task J1 is evalutated by s  J J1 = s t , (25) Jn where s Jt and s Jn indicate the 6×n Jacobian matrix for the position and orientation of the trunk at frame Os and the landing foot at frame Os , respectively. The superscript s indicates the reference frame Os , and the subscript means the target location, the trunk for t and the landing foot for l. The same Jacobian J1 is used before and after active sensing to prevent any discontinuity of the Jacobian. 10

The desired force acting on the task is then calculated by s  F F1 = s t , Fn

(26)

where s Ft = [s ft,x s ft,y s ft,z s mt,x s mt,y s mt,z ]T indicate the 6 × 1 vector of forces and moments on the trunk described at frame Os , and s Fn = [s fn,x s fn,y s fn,z s mn,x s mn,y s mn,z ]T denotes the 6 × 1 vector of forces and moments on the landing foot described at frame Os . l fn,z , l mn,x , and l mn,y are decided by vCoP and other desired force components are evaluated by the PD control scheme, which is described in Equation (19) at frame Ol . The variable xd becomes the initial position and the variable x˙ d becomes zero in the Equation (19) because it is controlled to maintain the initial position when the active sensing starts. These values can be expressed in frame Os as   Rl 03×3 l s Fn = Fn , (27) 03×3 Rl where Rl is the 3 × 3 rotation matrix from the local frame to the global frame, and l Fn is the force vector Fn on the local frame. Finally, from Equation (18), the task torque Γtask is obtained by following equation. Γtask = Je1T F1 .

(28)

2.3. Balance Controller

260

265

270

275

During the active sensing procedure, the reaction force from the object may cause as disturbance to the robot. Though the motion and the contact force of the robot are planned by roughly taking its balance into account, it may fall down if there is a large control error and disturbance due to contact. In such circumstances, the CoP of the supporting foot has to be maintained within the foot boundary to keep the balance in the single-support phase. We proposed a balance controller that controls the CoP of a supporting foot by modifying the contact moments acting on it [42]. In the proposed balance controller, the balancing torque is composed only in the null-space of the z direction position of the Center of Mass (CoM) at global frame, and the rest of the main tasks are affected by it. In this study, since the vCoP and the contact position are controlled at the same time by vCoP-position hybrid control of the landing foot, the balance control torque is composed in the null-space of the x, y, and z direction positions and orientation of the landing foot as well as the z direction position of the trunk. Due to the control structure, the control components of the landing foot are not affected by the balancing controller and guarantee contact of the landing foot with the object. The position in the horizontal direction and orientation of the trunk are utilized to control balance. The balance control torque Γbal is evaluated by T eT e1,n Γbal = N J 2 M2 ,

11

(29)

Safety Margin

Supporting Foot

y

x

O

: Estimated CoP : Desired CoP Figure 5: Boundary of supporting foot sole and desired CoP location. The red area represents the safety margin region of the supporting foot. The green area is considered as the supporting foot sole where the CoP has to be located for robot balance. If the estimated CoP is not located in the green area, the balance controller modifies the CoP so that it is located at the nearest boundary of the supporting foot sole.

where

280

285

290

=

T eT I − Je1,n J1,n

(30)

Je2T

=

eT . J¯2T S T N 1,n

(31)

 T s T T J1,n = s Jt,z Jn is the 7×n Jacobian matrix part of J1 , and s Jt,z is the Jah iT T s T Js,β cobian matrix for the z-position of the trunk. J2 = s Js,α is the 2×n Jacobian matrix which describes the orientation of the supporting foot frame T Os around x and y axis, and M2,d = [s ms,x,d s ms,y,d ] is the 2 × 1 vector of the desired moments acting on the supporting foot around x and y axis. The derivation of balancing algorithm can be found in [42]. The moment M2 that compensates the CoP at the supporting foot due to the disturbance from the landing foot can be obtained either by calculating wholebody dynamics or by measuring F/T sensor data. In order to minimize the balance control torque Γbal , the desired CoP of the supporting foot is controlled so that it is located at the nearest boundary of the supporting foot sole from the estimated CoP. The supporting foot sole polygon is set to have a smaller area than the contact area of the actual supporting foot by applying a safety margin as seen in the Figure 5. Therefore, the robot does not fall even when the CoP is at the boundary of the supporting foot sole. Based on the relation between the contact wrench and the CoP, the desired contact moments can be derived as, s s

295

eT N 1,n

ms,x,d ms,y,d

= =

(s ps,y,d −s ps,y,e )s fs,z,e , s

s

s

−( ps,x,d − ps,x,e ) fs,z,e ,

(32) (33)

where s ps,x,d , s ds,y,d are the desired CoP in the x and y direction, s ps,x,e and s ps,y,e are the estimated CoP in the x and y direction and s fs,z,e is estimated 12

300

305

310

315

320

325

330

335

force in the z direction. The superscript s indicates the reference frame Os . The subscript (s, i, d) and (s, i, e) mean the desired and the estimated value of the i-component of the supporting foot, respectively. The contact wrench at the supporting foot can be estimated by Equation (7). Equation (32) and (33) calculate additional contact moments needed to move the estimated CoP to the desired CoP, and the balance controller generates the additional contact moments. If the estimated CoP is in the supporting foot sole, value of the desired CoP is selected as equal to the estimated CoP, and the additional contact moments for balance control s ms,x,d , s ms,y,d are zero. From Equation (28) and (29), the overall commanding torque Γa = Γtask + Γbal is given by T eT e1,n Γa = Je1T F1 + N J 2 M2 . (34) The proposed control structure appears to consist of two priorities, but it is similar to the hierarchical control structure with three priorities because the DOF of the null space is smaller than that of the main task. The first priority task is the control of landing foot and z direction position of the trunk. The second is control of the contact moment at the supporting foot for the purpose of balancing. The third priority is control of the horizontal direction position of the trunk and orientation of the trunk. The first, second, and third priority tasks require 7-DOF, 2-DOF, and 5-DOF, respectively. Since the 12-DOF biped robot is used for the experiments in this research, the robot cannot perform every tasks due to insufficient DOF. In this case, the first and the third priority tasks will be performed stably if the s ms,x,d and s ms,y,d are zero. If the s ms,x,d and s ms,y,d have non-zero values to control the contact moments for balancing, the third priority task will be disrupted to provide task redundancy for the balance controller. Only the third priority task is disrupted, and the first and second priority tasks can always be operated reliably without being affected by the other tasks. In other words, the horizontal positions and orientations of the trunk are transformed to produce the desired contact moments needed for balance control. This structure has two advantages compared to the control structure with the three priorities using two null space projection matrices: low computational cost and torque continuity. If a typical hierarchical control structure is used, a minimum 2-DOF must be assigned for the balance control task even in situations where balance control is not required. Therefore, the controller cannot use the DOF allocated to the balance control for other tasks. To utilize the total DOF when balance control is not needed, the task Jacobian must be changed depending on whether the balance control is used or not. In this process, torque discontinuity problem occurs. To solve the problem of torque discontinuity, studies have been performed as in [45], but further research is needed to utilize it for balance control of biped robots. The lowest priority task will cause an unexpected huge motion that cannot be restored to the initial posture if the desired contact moment to be controlled for balancing is too large. Therefore, it cannot be performed continuously to modify large contact moments. Therefore, proper plan of the tasks can increase efficiency of the balance controller. In this study, a balance state is considered

13

when deciding normal force of the landing foot in Section. 3.2 as a role of the CoP plan of the supporting foot. 340

345

350

355

360

3. Active Sensing Strategy In this work, the use of active sensing focuses on detecting an edge line of the stair or a fixed block-type object. The proposed method needs the assumption that the robot link and the environment are rigid enough to use their geometric information. The contact foot is considered as a flat square plane. Although the entire process can be done without exteroceptive sensors, an F/T sensor on an ankle is used as tactile sensor to recognize contact to improve the accuracy of estimation results. At first, the robot performs a foot landing motion using compliant motion control. The compliant motion is implemented by a control scheme using Equation (18) and (19) with low control gain. Due to the control scheme, the robot can have compliant behavior to the disturbance from unexpected contact at the landing foot. When the unexpected contact occurs, the control error and the F/T sensor data have large value and the controller runs the active sensing algorithm to generate active motion. The active sensing motion is synthesized by generating vCoP at the landing foot to rotate around the edge line of the object maintaining the contact. During the active sensing motion, the plane data of the foot is collected for estimation. The intersection line of the collected foot plane can be estimated by using least square method. The estimated intersection line can be treated as edge line of the box shape object since the foot plane rotates around the edge line. 3.1. Contact Recognition

365

370

375

Before unexpected contact occurs, the compliant motion of the trunk and the landing foot is controlled in the main task to follow preplanned trajectories. If both norms of control errors of the landing foot and the F/T sensor values on the ankle exceed predefined thresholds, unexpected contact at the landing foot can be recognized. After contact recognition, active sensing motion starts to collect the geometry of the contact foot plane. From the moment, the controller is switched into the vCoP-Position hybrid controller shown in Section. 2.2 to generate active sensing motion. The F/T sensor is also utilized during active sensing after contact recognition. When the magnitude of the measured normal force is smaller than the predefined threshold value, the contact between the robot foot and the environment can be considered to be lost. For terrain edge estimation, the foot plane has to maintain contact with the terrain and thus the F/T sensor is used as a tactile sensor to verify the contact state. The geometric information of the foot plane is collected only when the contact is sensed by the F/T sensor.

14

CoP 2

CoP 1

Rotation

Edge Line

CoP Boundary for Stable Contact CoP 4

CoP 3

Landing Foot

Figure 6: The vCoP path of the landing foot for active sensing motion generation. The vCoP path interpolates four vertices of the feasible CoP boundary and rotates repeatedly. The feasible CoP boundary is not the same as the boundary of the landing foot sole, taking into account the safety margins and exists inside the landing foot sole. It crosses the edge line for the landing foot rotation.

380

385

390

395

400

3.2. vCoP Generation for Active Sensing Motion To synthesis the rotational motion of the contact foot using vCoP-position hybrid controller, the relation between CoP and vCoP which is described in Section. 2.2 is used. The vCoP is planned to move forward and backward repeatedly to generate clockwise and counterclockwise rotation around the edge line to get sufficient foot motions for estimation. Since the edge line of the contact object can be located anywhere within the foot plane with an unknown incline, it is more likely to be found when the vCoP is near the edge of the safety margin, considered the foot boundary. Figure 6 shows the vCoP path for active sensing motion, which interpolates four vertices of the safety margin, considered the landing foot boundary. The vCoP is planned to continuously move around the path. Since the joint torques of the robot are discontinuous when the path of vCoP is discontinuous, we designed a continuous vCoP path. The vCoP’s time plan is empirically determined. Therefore, the foot can rotate around both x and y axis at frame Ol because the vCoP passes the edge line within the boundary. To generate the determined vCoP l pn,x and l pn,y , the z direction force l ( fn,z ), the moment around x axis (l mn,x ), and the moment around y axis (l mn,y ) need to be determined as per the relation in Equation (23) and (24). Since both the variables l pn,x and l pn,y are related to l fn,z , the equation can be solved to get the desired moments l mn,x and l mn,y when the desired force l fn,z is determined. The l fn,z is the normal force that is applied on the contact object by the robot. Therefore, l fn,z always receives a reaction force from the object during active sensing motion. If l fn,z is too large, the robot cannot stay in balance for a long time. Since the balancing algorithm described in Section. 15

CoM CoP

CoP

x

z

Landing Foot

y

object

x Supporting Foot

Figure 7: Dynamics model of the biped robot for a static situation. Total reaction wrench (s Ftot ) at frame Os is described by the gravitational force acting on the robot (s Fg ) and reaction wrench (l Fn ) at the landing foot.

405

410

415

420

2.3. requires the motion of redundant tasks, continuous balance control can move the trunk position and orientation away from a stable state. Therefore, the l fn,z has to take into account the balance condition of the robot so that s ms,x,d and s ms,y,d do not have very large values. On the other hand, if l fn,z is too small, vCoP generated by the real robot cannot be accurate. In addition, the resolution of the vCoP becomes higher when the z direction force l fn,z , which is a denominator of the vCoP, becomes larger. When deciding the l fn,z , both conditions mentioned above have to be considered. To keep the balance of the robot, the CoP and the friction conditions of the supporting foot need to be taken into account when deciding the l fn,z . After l fn,z is decided, l mn,x and l mn,y are determined by Equation (23) and (24). The feasible boundary of l fn,z for stable contact is derived from a static model of a biped robot after taking the gravitational force and reaction wrench at two feet into account. The assumption of the static model is acceptable because the biped robot is controlled to retain its initial position and orientation, barring the rotational motion of the landing foot. As seen in Figure 7, the static model describes the total reaction wrench (s Ftot ) at frame Os including the gravitational force acting on the robot (s Fg ) and reaction wrench (l Fn ) at landing foot during active sensing motion. Then, the total reaction wrench at the supporting foot and its effect are defined by the following equations.     I3 03×3 s Rl 03×3 l s Ftot = Fg − s Fn , (35) s pˆcom I3 pˆn Rl Rl where s Ftot = [s ftot,x s ftot,y s ftot,z s mtot,x s mtot,y s mtot,z ]T is the 6 × 1 vector of total applying force on the supporting foot, s Fg = [0 0 − mg 0 0 0]T is the 16

425

430

435

6 × 1 vector of gravitational force on the CoM of the robot, m is the total mass of the robot, g is the magnitude of the gravitational acceleration, and I3 is the 3 × 3 identity matrix. s pcom is the 3 × 1 vector of the relative position of the CoM with respect to the supporting foot, and s pˆcom is the 3 × 3 hat matrix of s pcom . Likewise, s pn is the 3 × 1 vector of the relative position of the landing foot, or the estimated contact point during active sensing, with respect to the supporting foot, and s pˆn is the 3 × 3 skew symmetric matrix of s pn . The x and y direction forces and moment around z axis on the landing foot are used to control the center of the foot to maintain at the initial value. However, the forces and moment are small during the active sensing motion because of the low control gain. Therefore, l fn,x , l fn,y , and l mn,z are assumed to be zero in the static model. From Equation (35), components of the s Ftot are evaluated by ftot,x ftot,y ftot,z

r23 fn,z ,

(37) l

−mg + r33 fn,z ,

=

l

(38) l

l

=

pˆ13 mg − k13 fn,z + r11 mn,x + r12 mn,y ,

(39)

=

pˆ23 mg − k23 l fn,z + r21 l mn,x + r22 l mn,y ,

(40)

l

l

l

−k33 fn,z + r31 mn,x + r32 mn,y .

=

(41)

where the (i, j) component of 3 × 3 matrix K = s pˆn Rl is kij . Likewise, the (i, j) components of Rl and s pˆcom are denoted as rij and pˆij , respectively. The expected CoP of the supporting foot ps,x and ps,y calculated by the Equation (35), (38), (39), and (40) are given by

ps,y

450

=

mtot,y

ps,x

445

(36)

l

mtot,x mtot,z

440

r13 l fn,z ,

=

= −

=

pˆ23 mg − k23 l fn,z + r21 l mn,x + r22 l mn,y , −mg + r33 l fn,z

pˆ13 mg − k13 l fn,z + r11 l mn,x + r12 l mn,y . −mg + r33 l fn,z

(42)

(43)

The l mn,x and l mn,y can be expressed in terms of l pn,x , l pn,y which are known values, and l fn,z can be evaluated using Equation (23) and (24). Then, ps,x and ps,y are given as functions of the l fn,z . The feasible range of l fn,z can be decided by the following inequality conditions for balance. bx,l

≤ ps,x ≤ bx,u ,

(44)

by,l

≤ ps,y ≤ by,u ,

(45)

where bx,l and bx,u are the lower and upper boundary for the CoP in the x direction. Likewise, by,l and by,u are the lower and upper boundary for the CoP in the y direction. The boundary value is determined by taking into account the stability margin due to unexpected contact on the landing foot. When the supporting foot is shaped as a square, the boundary values can be defined as 17

455

bx,u = αs lx /2, bx,l = −αs lx /2, by,u = αs ly /2 and by,l = −αs ly /2. Where αs is the safety coefficient which has value between 0 and 1, lx is the length of the supporting foot and ly is the width of the supporting foot. This condition will significantly reduce the balance problem of the robot. However, the robot can still lose its balance since the static model contains some assumptions. In that case, the balance controller described in Section. 2.3 ensures that the supporting foot does not lose contact with the ground. The normal force at the supporting foot applied by the robot must have a negative value to maintain the contact. ftot,z

460

< 0.

(46)

The constraint in Equation (46) can be described about variable l fn,z using Equation (38). l

fn,z r33


> −µs . ftot,z

(51)

µs >

(52) (53)

Inequality equations about friction conditions can be obtained by substituting Equations (36), (37), (38), and (41) by Equations (51), (52), and (53). µs >

r13 l fn,z mg+r33 l fn,z

> −µs ,

(54)

µs >

r23 l fn,z mg+r33 l fn,z

> −µs ,

(55)

l

µs >

470

l

l

k33 fn,z +r31 mn,x +r32 mn,y mg+r33 l fn,z

> −µs .

(56)

Similar to the inequality equations in (44) and (45), the friction conditions can be described as inequality equations about variable l fn,z . To prevent the selection of a small value compared to the controllable resolution of vCoP, the condition |l fn,z | > min |resolution| has to be also considered. 18

(b)

(a)

k-thpestimatedp EdgepLine point

EdgepLine FootpLine EstimatedpPoints

(xk,pzk)

Zs

Planepof (yp=pyk)

Xs Object

LandingpFoot

Object

Ys

Figure 8: Concept of edge line estimation. (a) For the sake of calculation, the edge line is estimated as the collection of intersection points. (b) The intersection point of the foot lines (xk , zk ) in the y = yk plane is obtained by using the least square method. k ranges from one to an arbitrary positive integer.

475

480

485

Finally, l fn,z can be selected by considering the CoP and the friction conditions of the supporting foot, normal force direction condition and vCoP resolution condition. In this research, l fn,z was assigned a value close to the user defined reference value satisfying the conditions. Finally, determined values l fn,z , l mn,x , and l mn,y are commanded in vCoP-position hybrid controller, as described in Section. 2.2. 3.3. Edge Line Estimation During active sensing motion, the plane information of the landing foot is collected. The contact line can be estimated by the intersection line of the collected planes in a three dimensional space. To estimate the edge line with meaningful data, the measured normal force and the rotation velocity of the landing foot are used. To ensure contact state, data is collected when the measured normal force of the landing foot is larger than the threshold value. To ensure rotating motion and contact of the foot, data is collected when the measured normal force and the rotating velocity of the landing foot calculated using robot kinematics are larger than the threshold values. The line equation is not easily obtained, however, by simply formulating the matrix equation and solving the least square problem since it consists of two simultaneous equations. Thus, the edge line is regarded as the collection of the intersection points of the foot lines, as shown in Figure 8 (a). Using the kinematics of the robot, the landing foot plane equation is given by ax + by + cz + d = 0,

(57)

where a, b, c, and d are the coefficients of the equation. Consider a x − z plane where y = yk . yk has to be a value at which the robot foot exists. The foot line equation on the plane is given by ax + cz = −byk − d, 19

(58)

490

where yk is within the foot boundary in the y direction. After the foot rotates around the edge of the object, the set of the foot lines from Equation (58) is obtained. It can be expressed as the matrix form, which is given by     a1 c1   −b1 yk − d1 x a2 c2  k = −b2 yk − d2  . (59) zk ... ... ... From Equation (59), the least  T    a1 c1 a1 xk  c2  a2 = a2 zk ... ... ...

square solution (xk , zk ) can be obtained as −1  T   c1 a1 c1 −b1 yk − d1  c2  a2 c2  −b2 yk − d2  , (60) ... ... ... ...

which is shown in Figure 8 (b). By choosing a proper range of k, the points (xk ,yk ,zk ) that compose an edge line can be estimated. From the estimated points, the line equation can be obtained. 4. Experiment Results

Figure 9: A view of the 12-DOF DYROS humanoid legged robot. 495

Experiments were conducted on the 12-DOF DYROS RED [46] shown in Figure 9. The robot is torque-controllable using electric motors and harmonic 20

500

drives directly linked to joints. The height and the weight of the robot are 1.425 m and 54.63 kg, respectively. The robot has two ATI DAQ MINI85 F/T sensors on both ankles. The control frequency is 2kHz for the main loop and about 500Hz for robot dynamics calculation.

(0.195, -0.06, 0.15)

zs xs

(0.19, 0.32, 0.15) 0.38 m

Os

ys

0.15 m

(a)

(0.291, -0.01, 0.14)

zs Os

xs ys

(0.108, 0.318, 0.14) 22.6 deg (b)

(0.25, -0.123, 0.06)

zs xs (0.25, 0.3, 0.06)

Os

ys

(c) Figure 10: Experiment environment of three different cases. Edge points of the object are described as blue points. The purple dotted line denotes the edge line to be detected. (a) Case 1: A cinder block is parallel to the y axis of the supporting foot frame. (b) Case 2: A cinder block is rotated at angle of 29.2◦ in the z direction. (c) Case 3: An aluminum profile with a height less than the cinder block is located.

21

Two Contact Lines

Figure 11: Snapshot of the landing foot during experiment Case 3. The landing foot has a two line contact. Since the vCoP is inside the landing foot plane, the foot only rotate around the contact line of terrain.

505

510

515

520

In order to verify the proposed active sensing algorithm, object detection tests are performed using three different cases as shown in Figure 10. The location of an edge line of cinder block and an aluminum profile, which is described as purple dotted line in Figure 10, is detected by proposed method. Purple dotted line in Figure 10 is the edge to be detected. In Case 1, a cinder block is placed parallel to the ys axis. The cinder block, which is tilted at 22.6 degrees to the ground, is rotated 29.2 degrees about the zs axis in Case 2. In Case 3, an aluminum profile was placed approximately parallel to the ys axis. The height of the aluminum profile is 0.06 m and it is lower than cinder block. Unlike other cases, the landing foot may have two line contacts as seen in Figure 11. Right foot is used as a supporting foot and the center of the supporting foot is located at the origin of the global frame Os in Figure 10. Left foot is controlled as a landing foot and steps on the cinder block in the experiments. The length of the foot sole is 0.3 m and width is 0.15 m. Implemented experiments assume the situation when unexpected contact appears between the left foot and unknown object during single step of quasi-static walking. For the experiment, the CoM of robot is controlled to be located above the supporting foot during double support phase. After the CoM arrives at the required position, single support phase begins to control both the trunk and the left foot. The trunk is controlled to maintain its position and the left foot is controlled to generate swinging and landing motion. 4.1. Contact Recognition

525

The trajectory for swinging and landing motion is preplanned for left foot. For the Case 1 and 2, the foot moves 0.23 m upward for a second, 0.2 m forward for next one second, and 0.23 m downward to the ground for next two seconds. Since the height of the aluminum profile is lower than the cinder block, the highest position of the foot trajectory is changed to 0.13 m in Case 3. In the experiments, it was assumed that the position of the object is unknown and thus

22

Figure 12: Snapshot of swinging and landing motion of the foot.

530

535

540

545

550

555

the landing foot hits the object at an arbitrary time and location. The motion of the robot before contact is shown in Figure 12. The zs direction trajectory and norm of its control error of the landing foot in the three cases are described in Figure 13 (a) and (b). Since the compliant control scheme is applied with low feedback gains at operational space, the position of the foot followed its desired trajectory slowly at first, causing relatively large control error around at 1 s. Contact was not detected at that time, however, due to the low magnitude of measured forces at the landing foot as shown in Figure 13 (c). Contact was recognized when the norm of both control errors of landing foot and measured force data exceed threshold at 3.28 s in Case 1, 3.33 s in Case 2, and 3.87 s in Case 3. The threshold value for control error and measured force were 0.03 m and 50 N, respectively. 4.2. Active Sensing Motion After contact is recognized, task is commanded to the landing foot to generate the vCoP to follow desired value shown in Figure 15 (a). Orientation of the landing foot rotates depending on the vCoP location as seen in Figure 15 (b) and (c). The foot rotated around edge line three times repetitively during active sensing for each case. The pitch angle describes rotation angle of the landing foot around ys axis and the roll angle describes rotation angle of the landing foot around xs axis. When the vCoP in x direction is at the positive position, the pitch angle of the foot rotates in the positive direction. Moreover, the foot rotates in the opposite direction when vCoP in x direction is at negative position. Similarly, in case 2, the roll angle also rotates depending on the vCoP position since contact line is not parallel to the supporting foot. In Case 1 and Case 3, it is found that the roll angle is almost unchanged because the object lies horizontally with the foot. Because of the use of vCoP, there was no overrotation as intended and slight changes were caused by the elastic deformation of the foot sole. The commanding normal force at the landing foot is shown in Figure 15 (d). The conditions for normal force in Section. 3.2 are reflected in 23

Foot Z Position (m)

0.25 trajecoty 1 & 2 Case 1 Case 2 trajectory 3 Case 3

0.2 0.15 0.1 0.05 0 0

0.5

1

1.5

2

2.5

3

3.5

2.5

3

3.5

2.5

3

3.5

time (sec)

|Foot Position Control Error| (m)

(a) 0.08 Case 1 Case 2 Case 3

0.06 0.04 0.02 0 0

0.5

1

1.5

2

time (sec)

(b) |Measured Force| (N)

250 Case 1 Case 2 Case 3

200 150 100 50 0 0

0.5

1

1.5

2

time (sec)

(c) Figure 13: Experiment results of contact recognition. (a) Trajectories of the landing foot in the zs direction. (b) Norm of control error of the landing foot. (c) Norm of measurement of force.

560

565

570

the selected commanding value as shown in Figure 15 (d). The reference force -40 N is commanded if it is between the upper and lower boundary, otherwise the closest boundary value is selected. The static friction coefficient µs is 0.5 and safety coefficient αs is 0.4 in these experiments. Since the balance consideration for normal force l fn,z selection is based on some assumptions mentioned in Section. 3.2, the robot can lose its balance during the active motion. In that case, the CoP of the supporting foot is controlled by the balance controller shown in Section. 2.3. Figure 16 shows the CoP of the supporting foot measured by the F/T sensor and desired CoP during active sensing motion. Figure 16 (a) and (b) shows the results of Case 1, Figure 16 (c) and (d) shows the results of Case 2, and Figure 16 (e) and (f) shows the results of Case 3. When the expected CoP calculated in Equation (7) is outside the boundary of the supporting foot sole, the balance controller modifies the

24

Figure 14: Motion of landing foot during active sensing algorithm. After the landing foot steps on an object and contact is recognized, active sensing algorithm is operated to generate repetitive foot rotation around edge line of contact.

575

580

585

CoP to locate at the nearest boundary, which is expressed as desired CoP in Figure 16. If the expected CoP is inside the foot sole, the desired CoP is equal to the expected CoP not to modify it. In these experiments, the safety margin αs is 0.4 such that the CoP boundary is ±0.12 m in xs direction and ±0.03 m in ys direction. Due to the safety margin, the active sensing motion can be successfully generated without losing the contact between the supporting foot and the ground while an error exists between the desired CoP and the measured COP. The CoP of the supporting foot was controlled to be within about 0.02 m without using of the F/T sensor feedback control. In Case 1, the balance controller modified the CoP at 0.0 ∼ 0.07 s, 1.8 ∼ 2.22 s, 4.23 ∼ 4.6 s, and 6.6 ∼ 6.67 s to control ys direction CoP. In Case 2, the ys direction CoP is controlled at 2.48 ∼ 3.1 s. The ys direction CoP is controlled at 0.17 ∼ 1.05 s, 2.45 ∼ 2.77 s, 4.71 ∼ 5.26 s, 5.59 ∼ 5.71 s, and 7.12 ∼ 7.18 s for balancing in Case 3. Since the length of the foot is relatively longer than the width, there were no modification to the xs direction CoP. 4.3. Edge Estimation

590

595

With the acquired kinematic data of the landing foot plane during the active sensing motion, edge points which compose the edge line are estimated by Equation (60). Estimation results of Case 1, Case 2, and Case 3 are shown in Figure 17, 18, and 19, respectively. Red points are the points of edge line, which is estimated by active sensing, black dots are the CoPs measured by F/T sensor and blue lines are the measured edge of the object. When comparing the perpendicular distances between the estimated points and measured line, the average errors in Case 1, Case2, and Case 3 are 0.0154 m, 0.0144 m, and 0.0127 m respectively. Magnitude of the estimation error of active sensing is similar to the results of the RGB-D edge detection method [12]. Therefore, the proposed method will be able to complement the shortcomings of the visual sensory results, which are


Figure 15: Landing foot orientation and vCoP. (a) vCoP trajectory in the local frame of the landing foot. (b) Pitch angle, the rotation angle around the ys axis of the landing foot in the supporting foot frame. (c) Roll angle, the rotation angle around the xs axis of the landing foot in the supporting foot frame. (d) Normal force (l fn,z) at the landing foot in the local frame.


The measured CoP at the landing foot can also be treated as the edge of the terrain. However, the measured CoP does not accurately describe the edges, as shown in Figures 17, 18, and 19.

Figure 16: Measured CoP and desired CoP of the supporting foot during the active sensing motion. The desired value is determined by the balance controller; the measured CoP is drawn in red and the desired CoP in black. (a) CoP in the xs direction for Case 1. (b) CoP in the ys direction for Case 1. (c) CoP in the xs direction for Case 2. (d) CoP in the ys direction for Case 2. (e) CoP in the xs direction for Case 3. (f) CoP in the ys direction for Case 3.


While the results on the fixed-base robot in [39] had small errors, the results in this paper have relatively large errors for several reasons. One of them is that the supporting foot slips on the ground during active sensing due to the disturbance from contact. In addition, the landing foot sole plane is not rigid but elastic, which causes error when estimating the edge line by solving the intersection of the foot planes. Two additional experiments were conducted to determine the effects of supporting-foot slip and foot-sole elasticity. To confirm the effect of slip, we performed the Case 4 experiment, in which the supporting foot was fixed to the ground. The foot configuration is shown in Figure 20. A rigid, flat foot is installed as the supporting foot and fixed to the ground with bolts. In experimental Case 5, the landing foot is also changed to the rigid, flat foot to avoid its deformation during the active sensing motion (Figure 21). The contact environment for these experiments is shown in Figure 22. The obstacle is placed on a thin rubber mat with a thickness of 0.004 m. While the foot condition differs in each case, the experimental procedure in Cases 4 and 5 is the same as in Case 1. The estimation results of the experiment with the fixed supporting foot are shown in Figure 23. Compared to the measured edge line, the estimated contact points have an average error of 0.006 m. Compared with Cases 1, 2, and 3, the estimation error is reduced.
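Equation (60) itself is not reproduced in this section; as a generic sketch of the underlying idea, the sole plane can be recorded via forward kinematics at two rotation angles while contact is maintained, and the edge line recovered as the intersection of the two planes. The names and the tolerance below are illustrative assumptions, not the paper's formulation.

import numpy as np

def edge_line_from_planes(n1, p1, n2, p2, tol=1e-9):
    """Intersect two landing-foot sole planes, each given by a unit
    normal n and a point p on the plane (from forward kinematics at two
    foot rotation angles). Returns (point, unit direction) of the edge
    line, or None if the planes are nearly parallel (rotation too small
    to define an intersection)."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = np.cross(n1, n2)          # line direction, orthogonal to both normals
    if np.linalg.norm(d) < tol:
        return None
    # A point x on the line satisfies n1.x = n1.p1 and n2.x = n2.p2;
    # adding d.x = 0 picks the unique such point closest to the origin.
    A = np.vstack([n1, n2, d])
    b = np.array([n1 @ p1, n2 @ p2, 0.0])
    x0 = np.linalg.solve(A, b)
    return x0, d / np.linalg.norm(d)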

Figure 17: Edge line estimation results of Case 1. Red points are the estimated points that compose the edge line of the plane at the supporting foot frame. Blue line is the measured line. Black dots are the measured CoP of the landing foot. (a) 3-D plot. (b) X-Y plane. (c) Y-Z plane. (d) X-Z plane.


This verifies that the slip has a negative effect on the estimation results. We then conducted the Case 5 experiment to see whether the elasticity of the sole affects the estimation error. For Case 5, the landing foot is changed to the rigid, flat foot and the supporting foot is fixed to the ground as in Case 4. The flat foot is 0.3 m long and 0.15 m wide. The estimated contact points have an average error of 0.0027 m compared to the measured edge line, which is smaller than in Case 4. The results of Cases 4 and 5 demonstrate that the estimation errors are mainly caused by movement of the supporting foot and deformation of the foot sole plane. In addition, uncertainty in the kinematic model of the robot and the difference between the encoder reading and the actual joint angle may contribute to the error. The measured positions of the block and the robot foot may also contain error, since they were measured manually with a ruler. In [40], it is stated that using the measured CoP can help find a precise foothold. However, the measured CoP is less accurate than the results of our method, as seen in Figures 17, 18, 19, 23, and 24. This is because the normal force at the landing foot is small. As shown in Figure 15 (d), the magnitude of the commanded normal force is always less than 40 N. For example, if a moment of 1 Nm occurs when the normal force is -40 N, the measured CoP is 0.025 m.

Figure 18: Edge line estimation results of Case 2. Red points are the estimated points that compose the edge line of the plane at the supporting foot frame. Blue line is the measured line. Black dots are the measured CoP of the landing foot. (a) 3-D plot. (b) X-Y plane. (c) Y-Z plane. (d) X-Z plane.


In addition to the reaction wrench, the F/T sensor also measures the influence of the angular momentum caused by the rotating foot's moment of inertia and by the elasticity of the foot sole. When the normal force magnitude is small, the measured CoP is sensitive to these effects, so it may not lie on the terrain edge. As seen in Figure 24, when the rigid foot is used, the measured CoP is more narrowly distributed than with the non-rigid foot, because the influence of the sole's elasticity disappears. Therefore, if the normal force magnitude increases, the influence of the angular momentum caused by foot rotation and of the elasticity on the measured CoP becomes relatively small, and an accurate measurement is possible, as mentioned in [40]. In such a situation, the CoP measured while the foot is rotating can be regarded as the location of the edge, and filtering it cleanly is one of the problems that needs to be solved. Therefore, the active sensing method, which is less affected by the normal force magnitude, is considered better suited for general use.
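To make this sensitivity concrete, the standard CoP computation from ankle F/T measurements is sketched below; the sign convention assumes the sensor frame lies in the sole plane, which is an illustrative assumption rather than the paper's exact formulation.

def cop_from_ft(m_x, m_y, f_z):
    """CoP of a planar foot from the measured contact moments (m_x, m_y)
    and normal force f_z, with the sensor frame in the sole plane:
    x_cop = -m_y / f_z,  y_cop = m_x / f_z."""
    return -m_y / f_z, m_x / f_z

# With f_z = -40 N, a 1 Nm moment already shifts the CoP by 1/40 = 0.025 m,
# so small inertial and elastic moments visibly displace the measured CoP.
print(cop_from_ft(0.0, 1.0, -40.0))   # -> (0.025, -0.0)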


Figure 19: Edge line estimation results of Case 3. Red points are the estimated points that compose the edge line of the plane at the supporting foot frame. Blue line is the measured line. Black dots are the measured CoP of the landing foot. (a) 3-D plot. (b) X-Y plane. (c) Y-Z plane. (d) X-Z plane.

5. Conclusion


In this research, a terrain edge detection method based on an active sensing approach for biped robots is proposed. The geometric data of the environment are obtained during an active motion that rotates the robot foot around the edge line while maintaining contact. The edge line is estimated by solving the intersection of the contact foot planes. The concept of the vCoP is used to generate the active sensing motion while preventing over-rotation of the contact foot. The vCoP and the positions of the other tasks are controlled simultaneously through the vCoP-position hybrid controller. For the estimation, encoders at each joint and F/T sensors at both ankles are used. The F/T sensor provides tactile sensing of the contact between the robot foot and the terrain, and the joint angles measured by the encoders are used to calculate the geometry of the contact environment. Since a biped or humanoid robot has a floating base, two balancing strategies are applied in the control algorithm. First, the desired normal force on the landing foot is determined considering its effect on the CoP of the supporting foot, which compensates for the disturbance from the terrain as estimated by the static model. Second, the balance controller controls the contact moment at the supporting foot. The contact moment is controlled in the null-space of the main task, which contains the active sensing motion.

Figure 20: Shape of the robot feet in Case 4. The right foot is fixed with bolts.


Figure 21: Shape of the robot feet in Case 5. The right foot is fixed with bolts, and the left foot is also replaced with a flat, rigid foot.

Balance can be maintained through the balancing controller. However, if large contact moments need to be corrected, the balance controller may change the horizontal position and orientation of the trunk, the lowest-priority task, too much for it to be restored.


Figure 22: Experiment environment of Case 4 and Case 5. The edge points of the block, (0.22, -0.01, 0.154) and (0.22, 0.37, 0.154) in the frame {Os; xs, ys, zs}, are described as blue points. The purple dotted line describes the edge line to be detected.

Figure 23: Edge line estimation results of Case 4. Red points are the estimated points that compose the edge line of the plane at the supporting foot frame. Blue line is the measured line. (a) 3-D plot. (b) X-Y plane. (c) Y-Z plane. (d) X-Z plane.


Therefore, the first strategy lowers the possibility of losing balance in advance, so that only a small contact moment needs to be corrected by the balance controller.
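As a rough illustration of this prioritization, a balance torque can be projected through the null-space of the main task as sketched below. This is a generic pseudoinverse-based sketch with illustrative names, not the paper's dynamically consistent whole-body formulation.

import numpy as np

def prioritized_torque(J_main, tau_main, tau_balance):
    """Add the balance-controller torque only through the null-space of
    the main-task Jacobian J_main, so the contact-moment correction
    cannot disturb the active sensing motion."""
    n = J_main.shape[1]
    N = np.eye(n) - np.linalg.pinv(J_main) @ J_main   # null-space projector
    return tau_main + N.T @ tau_balance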


Figure 24: Edge line estimation results of Case 5. Red points are the estimated points that compose the edge line of the plane at the supporting foot frame. Blue line is the measured line. (a) 3-D plot. (b) X-Y plane. (c) Y-Z plane. (d) X-Z plane.


Although the proposed controller successfully estimated the edge lines of terrains such as a cinder block and an aluminum profile in the experiments, it must be developed further for general use. To detect edges of arbitrarily shaped environments, the active sensing algorithm can be modified to change the vCoP trajectory or the position of the landing foot. In addition, the estimation results can be integrated with vision data for scanning large-scale and complex environments. A unified multi-modal sensing algorithm combining active sensing with vision or LiDAR will be developed in further studies. Using the proposed method, the 3D map of the environment near the robot can be updated in real time using the estimated edge location of the terrain; footstep planning on the updated 3D map will then lead to more stable walking control. In the present study, static walking is assumed, which can be performed once a real-time 3D map building algorithm and a footstep planning method are implemented. Future research should also address faster locomotion such as dynamic walking.

Acknowledgements

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. NRF-2015R1A2A1A10055798)


and the Technology Innovation Program (10060081) funded by the Ministry of Trade, Industry and Energy (MI, Korea). The authors would like to thank Soohan Park, from Seoul National University, for his great help with the RGB-D camera experiments.

References

[1] G. Pratt, J. Manzo, The DARPA Robotics Challenge [Competitions], IEEE Robotics & Automation Magazine 20 (2) (2013) 10–12.
[2] C. Atkeson, B. Babu, N. Banerjee, D. Berenson, C. Bove, X. Cui, M. DeDonato, R. Du, S. Feng, P. Franklin, et al., No falls, no resets: Reliable humanoid behavior in the DARPA Robotics Challenge, in: IEEE-RAS 15th International Conference on Humanoid Robots, IEEE, 2015, pp. 623–630.
[3] D. Maier, C. Lutz, M. Bennewitz, Integrated perception, mapping, and footstep planning for humanoid navigation among 3D obstacles, in: IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2013, pp. 2658–2664.
[4] D. Belter, P. Labecki, P. Fankhauser, R. Siegwart, RGB-D terrain perception and dense mapping for legged robots, International Journal of Applied Mathematics and Computer Science 26 (1) (2016) 81–97.
[5] S. Oßwald, A. Görög, A. Hornung, M. Bennewitz, Autonomous climbing of spiral staircases with humanoids, in: IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2011, pp. 4844–4849.
[6] M. A. Fischler, R. C. Bolles, Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography, Communications of the ACM 24 (6) (1981) 381–395.
[7] M. Fallon, S. Kuindersma, S. Karumanchi, M. Antone, T. Schneider, H. Dai, C. P. D'Arpino, R. Deits, M. DiCicco, D. Fourie, et al., An architecture for online affordance-based perception and whole-body planning, Journal of Field Robotics 32 (2) (2015) 229–254.
[8] H. Jeong, J. Oh, M. Kim, K. Joo, I. S. Kweon, J.-H. Oh, Control strategies for a humanoid robot to drive and then egress a utility vehicle for remote approach, in: IEEE-RAS 15th International Conference on Humanoid Robots, IEEE, 2015, pp. 811–816.
[9] H. Wang, Y. F. Zheng, Y. Jun, P. Oh, DRC-Hubo walking on rough terrains, in: IEEE International Conference on Technologies for Practical Robot Applications, IEEE, 2014, pp. 1–6.
[10] N. Banerjee, X. Long, R. Du, F. Polido, S. Feng, C. G. Atkeson, M. Gennert, T. Padir, Human-supervised control of the Atlas humanoid robot for traversing doors, in: IEEE-RAS 15th International Conference on Humanoid Robots, IEEE, 2015, pp. 722–729.
[11] S. Kuindersma, R. Deits, M. Fallon, A. Valenzuela, H. Dai, F. Permenter, T. Koolen, P. Marion, R. Tedrake, Optimization-based locomotion planning, estimation, and control design for the Atlas humanoid robot, Autonomous Robots 40 (3) (2016) 429–455.
[12] C. Choi, A. J. Trevor, H. I. Christensen, RGB-D edge detection and edge-based registration, in: IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2013, pp. 1568–1575.
[13] M. Prats, P. J. Sanz, A. P. Del Pobil, Reliable non-prehensile door opening through the combination of vision, tactile and force feedback, Autonomous Robots 29 (2) (2010) 201–218.
[14] N. Likar, L. Žlajpah, External joint torque-based estimation of contact information, International Journal of Advanced Robotic Systems 11.
[15] Y. Ohmura, Y. Kuniyoshi, A. Nagakubo, Conformable and scalable tactile sensor skin for curved surfaces, in: IEEE International Conference on Robotics and Automation, IEEE, 2006, pp. 1348–1353.
[16] G. Cannata, M. Maggiali, G. Metta, G. Sandini, An embedded artificial skin for humanoid robots, in: IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, IEEE, 2008, pp. 434–438.
[17] T. Mukai, M. Onishi, T. Odashima, S. Hirano, L. Zhiwei, Development of the tactile sensor system of a human-interactive robot RI-MAN, IEEE Transactions on Robotics 24 (2) (2008) 505–512.
[18] P. A. Schmidt, E. Maël, R. P. Würtz, A sensor for dynamic tactile information with applications in human–robot interaction and object exploration, Robotics and Autonomous Systems 54 (12) (2006) 1005–1014.
[19] H. Takao, K. Sawada, M. Ishida, Monolithic silicon smart tactile image sensor with integrated strain sensor array on pneumatically swollen single-diaphragm structure, IEEE Transactions on Electron Devices 53 (5) (2006) 1250–1259.
[20] R. S. Dahiya, G. Metta, M. Valle, Development of fingertip tactile sensing chips for humanoid robots, in: IEEE International Conference on Mechatronics, IEEE, 2009, pp. 1–6.
[21] J. Eljaik, N. Kuppuswamy, F. Nori, Multimodal sensor fusion for foot state estimation in bipedal robots using the extended Kalman filter, in: IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2015, pp. 2698–2704.
[22] K. Suwanratchatamanee, M. Matsumoto, S. Hashimoto, Balance control of humanoid robot in object lifting task with tactile sensing system, in: International Conference on Human System Interactions, IEEE, 2011, pp. 431–436.
[23] K. Suwanratchatamanee, M. Matsumoto, S. Hashimoto, Robotic tactile sensor system and applications, IEEE Transactions on Industrial Electronics 57 (3) (2010) 1074–1087.
[24] A. Del Prete, Control of contact forces using whole-body force and tactile sensors: Theory and implementation on the iCub humanoid robot, Ph.D. thesis, Istituto Italiano di Tecnologia (2013).
[25] P. Mittendorfer, E. Yoshida, G. Cheng, Realizing whole-body tactile interactions with a self-organizing, multi-modal artificial skin on a humanoid robot, Advanced Robotics 29 (1) (2015) 51–67.
[26] R. Tajima, S. Kagami, M. Inaba, H. Inoue, Development of soft and distributed tactile sensors and the application to a humanoid robot, Advanced Robotics 16 (4) (2002) 381–397.
[27] R. S. Dahiya, G. Metta, M. Valle, G. Sandini, Tactile sensing - from humans to humanoids, IEEE Transactions on Robotics 26 (1) (2010) 1–20.
[28] A. De Luca, R. Mattone, Sensorless robot collision detection and hybrid force/motion control, in: IEEE International Conference on Robotics and Automation, IEEE, 2005, pp. 999–1004.
[29] F. Flacco, A. Paolillo, A. Kheddar, Residual-based contacts estimation for humanoid robots, in: IEEE-RAS International Conference on Humanoid Robots, IEEE, 2016, pp. 409–415.
[30] J. Pellenz, Rescue robot sensor design: An active sensing approach, in: International Workshop on Synthetic Simulation and Robotics to Mitigate Earthquake Disaster, 2007, pp. 33–37.
[31] K. M. Lynch, H. Maekawa, K. Tanie, Manipulation and active sensing by pushing using tactile feedback, in: IEEE/RSJ International Conference on Intelligent Robots and Systems, 1992, pp. 416–421.
[32] J. Aloimonos, I. Weiss, A. Bandyopadhyay, Active vision, International Journal of Computer Vision 1 (4) (1988) 333–356.
[33] R. Bajcsy, Active perception, Proceedings of the IEEE 76 (8) (1988) 966–1005.
[34] N. Cao, K. H. Low, J. M. Dolan, Multi-robot informative path planning for active sensing of environmental phenomena: A tale of two algorithms, in: International Conference on Autonomous Agents and Multi-agent Systems, International Foundation for Autonomous Agents and Multiagent Systems, 2013, pp. 7–14.
[35] T. H. Chung, V. Gupta, J. W. Burdick, R. M. Murray, On a decentralized active sensing strategy using mobile sensor platforms in a network, in: IEEE Conference on Decision and Control, Vol. 2, IEEE, 2004, pp. 1914–1919.
[36] T. Mukai, M. Ishikawa, An active sensing method using estimated errors for multisensor fusion systems, IEEE Transactions on Industrial Electronics 43 (3) (1996) 380–386.
[37] M. Azad, V. Ortenzi, H.-C. Lin, E. Rueckert, M. Mistry, Model estimation and control of compliant contact normal force, in: IEEE-RAS International Conference on Humanoid Robots, IEEE, 2016, pp. 442–447.
[38] A. Petrovskaya, J. Park, O. Khatib, Probabilistic estimation of whole body contacts for multi-contact robot control, in: IEEE International Conference on Robotics and Automation, IEEE, 2007, pp. 568–573.
[39] H. Lee, J. Park, An active sensing strategy for contact location without tactile sensors using robot geometry and kinematics, Autonomous Robots 36 (1-2) (2014) 109–121.
[40] G. Wiedebach, S. Bertrand, T. Wu, L. Fiorio, S. McCrory, R. Griffin, F. Nori, J. Pratt, Walking on partial footholds including line contacts with the humanoid robot Atlas, in: IEEE-RAS 16th International Conference on Humanoid Robots, IEEE, 2016, pp. 1312–1319.
[41] J. Park, O. Khatib, Contact consistent control framework for humanoid robots, in: IEEE International Conference on Robotics and Automation, IEEE, 2006, pp. 1963–1969.
[42] Y. Lee, S. Hwang, J. Park, Balancing of humanoid robot using contact force/moment control by task-oriented whole body control framework, Autonomous Robots 40 (3) (2016) 457–472.
[43] H. Bruyninckx, O. Khatib, Gauss' principle and the dynamics of redundant and constrained manipulators, in: IEEE International Conference on Robotics and Automation, Vol. 3, IEEE, 2000, pp. 2563–2568.
[44] M. Vukobratović, B. Borovac, Zero-moment point - thirty five years of its life, International Journal of Humanoid Robotics 1 (1) (2004) 157–173.
[45] H. Han, J. Park, Robot control near singularity and joint limit using a continuous task transition algorithm, International Journal of Advanced Robotic Systems 10 (2013) 1–10.
[46] M. Schwartz, S. Hwang, Y. Lee, J. Won, S. Kim, J. Park, Aesthetic design and development of humanoid legged robot, in: IEEE-RAS International Conference on Humanoid Robots, IEEE, 2014, pp. 13–19.
