Proceedings of the 2004 IEEE International Conference on Robotics & Automation New Orleans, LA • April 2004
Turning Control of a Biped Locomotion Robot using Nonlinear Oscillators
Shinya Aoi, Kazuo Tsuchiya and Katsuyoshi Tsujita
Dept. of Aeronautics and Astronautics, Graduate School of Engineering, Kyoto University
Yoshida-honmachi, Sakyo-ku, Kyoto, 606-8501, Japan
Email: {shinya aoi, tsuchiya, tsujita}@kuaero.kyoto-u.ac.jp
Abstract— In our previous work, we developed a locomotion control system for a biped robot composed of nonlinear oscillators and realized a stable straight walk that is robust against changes of the environment. We then revealed that, in a straight walk, this adaptability appears as a change of the period of the leg motions. In this paper, we realize a turning walk of the biped robot using the developed locomotion control system. First, we analyze the turning behavior of the robot and reveal that, in this case, the turning behavior leads to a change of the duty ratios of the legs. Moreover, we realize a task in which the robot pursues a target moving on the floor along a corner, demonstrating that the robot can turn a corner successfully in the real world.
I. INTRODUCTION

Locomotion control of a biped robot implies rhythmic control of an underactuated mechanical system with many degrees of freedom. Usually, locomotion control systems for biped robots are designed based on inverse kinematics and inverse dynamics [2], [6]. This type of control system is composed of a motion plan system and a motion control system; when the reference motion of the body of the robot is given, the motion plan system encodes the reference trajectories of all the joints as functions of time, and the motion control system controls the motions of all the joints with the reference trajectories as commanded signals. The crucial point in designing the controller is how to ensure the stability of the motion of the posture, which cannot be controlled directly.

In our previous work [5], we developed a locomotion control system for a biped robot in which the motion plan system is composed of a nonlinear oscillator system. Touch sensor signals, which depend on the motion of the posture, are fed back to the nonlinear oscillator system, and the phases of the oscillators are tuned so that the motion of the posture is stabilized. Using this control system, we realized a stable straight walk of the biped robot and demonstrated the advantages of the control system in hardware experiments.

Although studies on the turning walk of humans have been carried out [1], [3], [4], few studies on the turning walk of biped robots have been made from a control engineering viewpoint. In this paper, we realize a turning walk of the biped robot using the control system developed in [5] and demonstrate the advantages of the control system with the
Fig. 1. Schematic model of biped locomotion robot [mm] (touch sensors at the tips of the legs)
experiment in which the robot pursues a target moving on the floor along a corner.

II. EQUATIONS OF MOTION

In this paper, we consider the biped robot shown in Fig. 1. The robot consists of a body, a pair of arms composed of four links each, and a pair of legs composed of six links each. The links are connected to each other through single-degree-of-freedom rotational joints, and a motor is installed at each joint. The left and right legs are numbered Legs 1 and 2, respectively. The joints of each leg are numbered Joints 1, 2, ..., 6 from the side of the body, where Joints 1, 2, and 3 are hip joints, Joint 4 is a knee joint, and Joints 5 and 6 are ankle joints. The arms are numbered in a similar manner. There are four touch sensors at the tip of each leg.

The inertially fixed coordinate system is defined as [a^0] = [a_1^0, a_2^0, a_3^0], and the coordinate system fixed on the body as [a^1] = [a_1^1, a_2^1, a_3^1]; axes 1, 2, and 3 coincide with the roll, pitch, and yaw directions, respectively. The position vector from the origin of [a^0] to the origin of [a^1] is defined as r_0. We define θ_0 as the 3-1-2 Euler angle from [a^0] to [a^1], θ_Aj^(i) as the joint angle of Link j of Arm i, and θ_Lk^(i) as the joint angle of Link k of Leg i.
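The 3-1-2 Euler sequence used for the body attitude θ_0 can be written out explicitly. The NumPy sketch below is our illustration (not code from the paper), taking axes 1, 2, and 3 as roll, pitch, and yaw as stated in the text, so the sequence is yaw, then roll, then pitch.

```python
import numpy as np

def rot1(a):
    """Elementary rotation about axis 1 (roll)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot2(a):
    """Elementary rotation about axis 2 (pitch)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot3(a):
    """Elementary rotation about axis 3 (yaw)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def body_attitude(yaw, roll, pitch):
    """3-1-2 Euler sequence from [a^0] to [a^1]: rotate about axis 3
    (yaw), then axis 1 (roll), then axis 2 (pitch)."""
    return rot3(yaw) @ rot1(roll) @ rot2(pitch)
```

Any such composition yields a proper orthonormal rotation matrix, which is a quick sanity check on the ordering.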
Fig. 3. Motion generator system (Inter Oscillator coupled with Leg 1, Leg 2, Arm 1, Arm 2, and Body Oscillators; outer commands enter the motion generator)
The state variable is defined as follows;

q^T = [ r_0  θ_0  θ_Aj^(i)  θ_Lk^(i) ],   i = 1, 2,  j = 1, ..., 4,  k = 1, ..., 6    (1)

Equations of motion for the state variable q are derived using Lagrangian equations as follows;

M q̈ + H(q, q̇) = G + Σ_{i,j} τ_Aj^(i) + Σ_{i,k} τ_Lk^(i) + Λ    (2)

where M is the generalized mass matrix, H(q, q̇) is the nonlinear term which includes the Coriolis and centrifugal forces, G is the gravity term, τ_Aj^(i) and τ_Lk^(i) are the input torques at Joint j of Arm i and Joint k of Leg i, respectively, and Λ is the reaction force from the ground. The floor is modeled as a spring with a damper.

Fig. 2. Architecture of the control system

III. LOCOMOTION CONTROL [5]

A. Locomotion Control System

The locomotion control system consists of a motion plan system and a motion control system (Fig. 2). The motion plan system is composed of a motion generator and a trajectory generator. The motion generator consists of Inter Oscillator and the Motion Oscillators, namely Leg 1, Leg 2, Arm 1, Arm 2, and Body Oscillators, which generate the commanded trajectories of the corresponding joints. These commanded trajectories are sent to the motor controllers (Fig. 3). The motion plan system receives the feedback signals from the touch sensors at the tips of the legs, and receives the outer commands as feedforward signals. The motion control system is composed of motor controllers which control the motions of the joints according to the desired motions commanded by the trajectory generator.

B. Design of Nominal Trajectories of the Joints

First, the nominal trajectories of all the joints are specified in the coordinate system [a^1] fixed on the body and are encoded in terms of the phases of the oscillators. Here we explain how the nominal trajectories of the legs are expressed in terms of the Leg Oscillators. The state of Leg i Oscillator is expressed as

z_L^(i) = r_L^(i) exp(j φ_L^(i))    (3)

where z_L^(i), r_L^(i), and φ_L^(i) are, respectively, the state, the amplitude, and the phase of Leg i Oscillator. The nominal dynamics of the oscillators are set as

drˆ/dt = 0,   dφˆ/dt = ωˆ    (4)

where ω is the angular velocity of the oscillator and ˆ indicates a nominal value.

Next, we define two nominal positions of the tips of the legs, the anterior extreme position (AEP) and the posterior extreme position (PEP), expressed as ηˆ_AEP^(i) and ηˆ_PEP^(i), respectively, in the body-fixed frame [a^1]. The nominal trajectory for the swinging stage is designed as a closed curve ηˆ_Sw^(i) which passes through the points ηˆ_AEP^(i) and ηˆ_PEP^(i), and the nominal trajectory for the supporting stage is given as a straight line ηˆ_Sp^(i) which also passes through these points (Fig. 4). The nominal trajectory ηˆ_L^(i) of the tip of Leg i is generated by using these two trajectories alternately: a swinging stage changes to a supporting stage at the AEP, and a supporting stage changes to a swinging stage at the PEP. The nominal trajectories ηˆ_Sw^(i) and ηˆ_Sp^(i) are given as functions of the nominal phase φˆ_L^(i) of Leg i Oscillator, ηˆ_Sw^(i) = ηˆ_Sw^(i)(φˆ_L^(i)) and ηˆ_Sp^(i) = ηˆ_Sp^(i)(φˆ_L^(i)). The nominal phases of Leg i Oscillator at the AEP and the PEP are defined as φˆ_AEP^(i) and φˆ_PEP^(i) = 0. Finally, the nominal trajectory ηˆ_L^(i) of the tip of the leg is expressed as follows;

ηˆ_L^(i)(φˆ_L^(i)) = ηˆ_Sw^(i)(φˆ_L^(i)),   0 ≤ φˆ_L^(i) < φˆ_AEP^(i)
                   = ηˆ_Sp^(i)(φˆ_L^(i)),   φˆ_AEP^(i) ≤ φˆ_L^(i) < 2π    (5)

The nominal duty ratio βˆ^(i) of Leg i is the ratio of the nominal time interval for the supporting stage to the period of one step of the nominal locomotion, which is given by

βˆ^(i) = 1 − φˆ_AEP^(i) / 2π    (6)
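Equations (3)-(6) amount to a phase-to-position lookup: the oscillator phase selects either the swing curve or the support line, with the switch point fixed by the duty ratio. The sketch below is our illustration of that lookup; the half-ellipse swing curve and the step-height value are assumptions, since the paper only requires some closed curve through the AEP and PEP.

```python
import numpy as np

def nominal_foot_trajectory(phi, beta=0.7, stride=0.03, height=0.01):
    """Nominal foot-tip position (forward x, vertical z) in the body frame
    as a function of the leg-oscillator phase phi in [0, 2*pi).

    0 .. phi_AEP      : swinging stage (PEP -> AEP), upper case of eq. (5)
    phi_AEP .. 2*pi   : supporting stage (AEP -> PEP), lower case

    beta is the nominal duty ratio, so phi_AEP = 2*pi*(1 - beta) by eq. (6).
    The sinusoidal foot lift is an illustrative choice, not the paper's curve.
    """
    phi = np.mod(phi, 2 * np.pi)
    phi_aep = 2 * np.pi * (1 - beta)
    x_pep, x_aep = -stride / 2, stride / 2
    if phi < phi_aep:                        # swinging stage
        s = phi / phi_aep                    # 0 at the PEP, 1 at the AEP
        x = x_pep + (x_aep - x_pep) * s
        z = height * np.sin(np.pi * s)       # lifts the foot, 0 at both ends
    else:                                    # supporting stage
        s = (phi - phi_aep) / (2 * np.pi - phi_aep)
        x = x_aep + (x_pep - x_aep) * s      # straight line back to the PEP
        z = 0.0
    return x, z
```

Note that the two branches agree at the AEP and the PEP, so the closed swing curve and the support line join continuously, as Fig. 4 depicts.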
Fig. 4. Nominal trajectory of leg: in the swinging stage the tip follows the closed curve ηˆ_Sw^(i) from the PEP ηˆ_PEP^(i) to the AEP ηˆ_AEP^(i); in the supporting stage it follows the straight line ηˆ_Sp^(i) back to the PEP (Joints 3, 4, and 5 shown)
On the other hand, the nominal velocity vˆ of the robot is given by

vˆ = (1 − βˆ^(i)) Sˆ^(i) / (βˆ^(i) Tˆ_Sw)    (7)

where Sˆ^(i) is the nominal stride of Leg i and Tˆ_Sw is the nominal time interval for the swinging stage.
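As a quick numerical check of eq. (7): with the duty ratio and stride quoted in Section V, and an assumed swing interval (the paper does not state Tˆ_Sw, so the value below is illustrative only):

```python
# Nominal velocity from eq. (7): v = (1 - beta) * S / (beta * T_sw).
beta = 0.7    # nominal duty ratio (as in Section V)
S = 0.03      # nominal stride [m] (as in Section V)
T_sw = 0.4    # nominal swing interval [s] -- assumed for illustration
v = (1 - beta) * S / (beta * T_sw)
print(round(v, 5))  # prints 0.03214, i.e. about 3.2 cm/s
```

A larger duty ratio (more time on the ground per step) lowers the velocity for a fixed stride and swing interval, which is the trade-off eq. (7) captures.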
C. Joint Angle Control and Posture Control

The trajectories of all the joints are controlled by local PD feedback controllers using the nominal joint angles as reference commands. On the other hand, the posture of the robot cannot be controlled directly because the tips of the legs have no actuators. To stabilize the posture, the motion plan system is composed of a dynamical system of nonlinear oscillators with two inputs: a mutual interaction term and a feedback term from the touch sensors at the tips of the legs. The former forms a phase relation between the oscillators; the latter tunes the phases according to the touch sensor signals, which depend on the posture, and this, in turn, ensures the stability of the posture. The phase dynamics of each oscillator are designed as follows;

φ̇_I = ω + g_1I
φ̇_B = ω + g_1B
φ̇_A^(i) = ω + g_1A^(i)
φ̇_L^(i) = ω + g_1L^(i) + g_2L^(i)    (8)

where φ_I and φ_B are the phases of Inter and Body Oscillators, respectively, φ_A^(i) is the phase of Arm i Oscillator, the g_1 terms are derived from the interactions with the other oscillators, and the g_2 terms are caused by the feedback signals of the touch sensors at the tips of the legs. The functions g_1 are given as follows; Arm and Body Oscillators are affected unidirectionally by Inter Oscillator, while Leg Oscillators interact bidirectionally with Inter Oscillator,

g_1I = −Σ_i K_L sin(φ_I − φ_L^(i) + (−1)^i π/2)
g_1B = −K_B sin(φ_B − φ_I)
g_1A^(i) = −K_A sin(φ_A^(i) − φ_I + (−1)^i π/2)
g_1L^(i) = −K_L sin(φ_L^(i) − φ_I − (−1)^i π/2)    (9)

where K_L, K_A, and K_B are gain constants.

The functions g_2 are designed in the following way. Suppose that φ_AEP^(i) is the phase of Leg i Oscillator at the instant when Leg i touches the ground, and η_AEP^(i) is the position of the tip of Leg i at that instant. When Leg i touches the ground, the following procedure is undertaken:
1) Set the phase of Leg i Oscillator from φ_AEP^(i) to φˆ_AEP^(i).
2) Switch the nominal trajectory of Leg i from the swinging trajectory ηˆ_Sw^(i) to the supporting trajectory ηˆ_Sp^(i).
3) Replace the parameter ηˆ_AEP^(i) in the nominal trajectory ηˆ_Sp^(i) with η_AEP^(i).
The functions g_2 are then given as follows;

g_2L^(i) = (φˆ_AEP^(i) − φ_AEP^(i)) δ(t − t_hs^(i))    (10)

where t_hs^(i) is the time when Leg i touches the ground and δ(·) denotes Dirac's delta function. As a result, the nominal trajectory ηˆ_L^(i) of the tip of the leg is modified as shown in Fig. 5: the tip of the leg moves along the swinging-stage trajectory, touches the ground at η_AEP^(i), and then follows the modified supporting-stage trajectory.

Fig. 5. Modified trajectory of leg
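The phase dynamics of eqs. (8)-(10) can be sketched as a forward-Euler update of the six phases plus an event-driven phase reset at touchdown; in the sketch below the gain values, step size, and dict-based bookkeeping are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def step_phases(ph, omega, dt, KL=1.0, KA=1.0, KB=1.0):
    """One Euler step of the phase dynamics of eqs. (8)-(9).
    ph maps oscillator names ('I', 'B', 'A1', 'A2', 'L1', 'L2') to phases."""
    g1 = {}
    # Inter Oscillator interacts bidirectionally with the Leg Oscillators.
    g1['I'] = -sum(KL * np.sin(ph['I'] - ph[f'L{i}'] + (-1) ** i * np.pi / 2)
                   for i in (1, 2))
    # Body and Arm Oscillators are driven unidirectionally by Inter Oscillator.
    g1['B'] = -KB * np.sin(ph['B'] - ph['I'])
    for i in (1, 2):
        g1[f'A{i}'] = -KA * np.sin(ph[f'A{i}'] - ph['I'] + (-1) ** i * np.pi / 2)
        g1[f'L{i}'] = -KL * np.sin(ph[f'L{i}'] - ph['I'] - (-1) ** i * np.pi / 2)
    return {k: v + (omega + g1[k]) * dt for k, v in ph.items()}

def touchdown_reset(ph, leg, phi_aep_nominal):
    """Feedback g2 of eq. (10): when Leg i touches the ground, its phase
    jumps from the actual touchdown phase to the nominal AEP phase."""
    ph = dict(ph)
    ph[f'L{leg}'] = phi_aep_nominal
    return ph
```

At the phase relation singled out by eq. (9) (legs in antiphase, offset by ±π/2 from Inter Oscillator) every coupling term vanishes, so all six phases advance at the common rate ω; any deviation, including a touchdown-induced reset, is pulled back toward that relation.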
IV. TURNING CONTROL

In this section, a turning control is designed. The turning walk is initiated by an outer command; the parameters in the functions of the nominal joint angles are then modified according to the outer commands so as to realize the turning motion of the robot. In order to turn, the robot needs to orient its upper body toward the turning direction and rotate the hip joints to change the direction of movement. Moreover, since the centrifugal force acts radially away from the center of turning, the robot also needs to compensate for it. As the turning parameters to be modified according to the outer commands, we choose the turning angle Y^(i) of Leg i per step and the roll bias R (the bias of the roll angles of the hip and ankle joints) of both legs (Fig. 6). The nominal joint angles are then modified using these turning parameters. First, the nominal yaw joint angle of the hip joint,
Fig. 6. Turning parameters: the turning angle Y and the roll bias R (the centrifugal force acts away from the center of turning)

Fig. 7. Contour of the ratio between duty ratios of Legs 1 and 2 (β^(1)/β^(2)) on the turning angle [deg] vs. roll bias [deg] plane: (a) βˆ = 0.7, (b) βˆ = 0.5
Joint 1, of Leg i is modified as follows;

θˆ_L1^(i)(φˆ_L^(i)) = (Y^(i)/2) [1 − cos( φˆ_L^(i) / (φˆ_AEP^(i)/π) )],   0 ≤ φˆ_L^(i) < φˆ_AEP^(i)
                    = (Y^(i)/2) [1 − cos( (2π − φˆ_L^(i)) / (2 − φˆ_AEP^(i)/π) )],   φˆ_AEP^(i) ≤ φˆ_L^(i) < 2π    (11)

Next, in order to compensate for the centrifugal force, the nominal roll joint angles of the hip and ankle joints, Joints 2 and 6, of Leg i are modified as follows;

θˆ_L2^(i)(φˆ_B) = Bˆ cos(φˆ_B + ψˆ) + R − (−1)^i δˆ
θˆ_L6^(i)(φˆ_B) = −Bˆ cos(φˆ_B + ψˆ) + R + (−1)^i δˆ    (12)

where Bˆ is the nominal amplitude, ψˆ is the phase, and δˆ is the bias value of the roll motion.

Fig. 8. Ratio between duty ratios of Legs 1 and 2 (β^(1)/β^(2)), proposed system vs. without touch sensor signal: (a) case of change of the turning angle (R = 3), (b) case of change of the roll bias (Y^(1) = 15)

V. ANALYSIS OF TURNING WALK

In our previous work, it was revealed that when the robot walks straight without the outer commands, the proposed control system ensures robust stability of the posture against changes of the environment by changing the period of locomotion adaptively [5]. Here, we examine the dynamics of the turning walk of the robot.

First, based on numerical simulations, we make the robot turn a corner with various turning parameters Y^(1) and R using the proposed control system. The parameter Y^(2) is set to 0 [deg], so the robot turns only left. The results for βˆ = 0.7 and 0.5 are shown in Fig. 7. These figures show the contours of the ratio between the actual duty ratios of Legs 1 and 2 (β^(1)/β^(2)). The nominal stride Sˆ is set to 0.03 [m]. These figures reveal the following properties: since the turning motion changes the direction of movement, it causes an asymmetry between the motions of the left and right sides of the body, which in turn changes the times at which the legs touch the ground. As a result, the duty ratios
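The yaw profile of eq. (11) raises the hip yaw from 0 to the turning angle Y during the swing and returns it to 0 during support, with the two branches matching at the AEP. A direct transcription (our sketch, with illustrative parameter values):

```python
import numpy as np

def yaw_hip_angle(phi, Y, phi_aep):
    """Nominal yaw angle of the hip joint (Joint 1) of a leg, eq. (11).

    phi     : nominal leg-oscillator phase in [0, 2*pi)
    Y       : turning angle per step
    phi_aep : nominal phase at the AEP (swing/support switch)

    The joint sweeps 0 -> Y over the swinging stage and Y -> 0 over the
    supporting stage; both cosine arguments reach pi at phi = phi_aep,
    so the two branches join continuously at the value Y.
    """
    if phi < phi_aep:
        return 0.5 * Y * (1 - np.cos(phi / (phi_aep / np.pi)))
    return 0.5 * Y * (1 - np.cos((2 * np.pi - phi) / (2 - phi_aep / np.pi)))
```

With φˆ_AEP = 2π(1 − βˆ) from eq. (6), e.g. βˆ = 0.7 gives φˆ_AEP = 0.6π; the profile is 0 at φ = 0, Y at φ = φˆ_AEP, and back to 0 at φ = 2π, so successive steps start from an unrotated hip.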
of Legs 1 and 2 become different. That is, as the centrifugal force grows with the turning angle per step, the robot tilts outward and the outer leg, Leg 1, touches the ground earlier; the swinging stage of Leg 1 therefore becomes shorter and the duty ratio of Leg 1 becomes larger. On the other hand, when the roll bias makes the robot tilt further inward, the inner leg, Leg 2, touches the ground earlier; the swinging stage of Leg 2 becomes shorter and the duty ratio of Leg 2 becomes larger.

Next, numerical simulations without the proposed control system, that is, without the touch sensor signals, were executed and compared with those with the proposed control system. We investigate two cases: R fixed at 3 [deg] with Y^(1) increasing gradually, and Y^(1) fixed at 15 [deg] with R increasing gradually; βˆ is set to 0.7. The results are shown in Fig. 8. These figures show the ratios between the realized duty ratios of Legs 1 and 2 (β^(1)/β^(2)). They reveal that the robot with the proposed control system can turn a corner more adaptively than the robot without it, by changing the duty ratio of each leg individually.

Finally, we investigate the turning characteristics in hardware experiments (Fig. 9). We investigate the case in which R is fixed at 0 [deg] and Y^(1) increases gradually, and the case in which Y^(1) is fixed at 15 [deg] and R increases gradually. The results of the numerical analyses and the hardware experiments for βˆ = 0.7 and 0.5 are shown in Fig. 10. These figures
show the actual duty ratios of Legs 1 and 2. From these figures, it is verified that the hardware experiments give results similar to those of the numerical analyses.

Fig. 9. Hoap-1 (Fujitsu Automation Ltd.)

Fig. 10. Profile of duty ratios of Legs 1 and 2 (numerical analysis and hardware experiment; Leg 1, Leg 2, and nominal shown): (1) βˆ = 0.7, (2) βˆ = 0.5; (a) case of change of the turning angle (R = 0), (b) case of change of the roll bias (Y^(1) = 15)

VI. VISION-BASED TURNING CONTROL

A. Vision-based Turning Control System

We carried out a hardware experiment in which the robot pursues a target moving on the floor. The robot has one CCD camera and obtains information about the target from the camera images; it executes the turning motion according to the motion of the target. From the images taken by the CCD camera, the positions of the target are sampled: the horizontal and vertical positions x(t) and y(t) of the target from the center of vision on the image are calculated and sent to the trajectory generator as the outer commands. The sampling frequency of the visual information is set to 10 [Hz].

Then, the averages x̄_k^(i) and ȳ_k^(i) of the horizontal and vertical positions of the target from the center of vision on the image during the kth step of Leg i are calculated as follows;

x̄_k^(i) = (1/T_k^(i)) ∫ x(t) dt,   ȳ_k^(i) = (1/T_k^(i)) ∫ y(t) dt,   T_k^(i) = t_k^(i) − t_{k−1}^(i)

where the integrals run from t_{k−1}^(i) to t_k^(i) and t_j^(i) is the start time of the jth step of Leg i. Based on x̄_k^(i) and ȳ_k^(i), the turning parameters Y_k^(i) and R_k^(i) and the stride S_k^(i) are modified as follows;

Y_k^(i) = −f(ξ_k^(i)),   f(x) = x,  (−1)^i x ≥ 0
                                0,  (−1)^i x < 0

ξ_k^(i) = −K_A x̄_k^(i) − D_A dx̄_k^(i)/dt
        = −(K_A + D_A/ΔT_k^(i)) x̄_k^(i) + (D_A/ΔT_k^(i)) x̄_l^(j)

ΔT_k^(i) = t_k^(i) − t_l^(j),   R_k^(i) = Rˆ,   S_k^(i) = −K_S ȳ_k^(i)

where K_A, D_A, and K_S are gain constants, Rˆ is the nominal roll bias, and t_l^(j) is the last time at which the robot started to turn a corner (|Y_l^(j)| > 0). S_k^(i) is limited by a saturation at ±Sˆ
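The vision-based law above is a PD feedback on the averaged horizontal offset of the target, with the derivative replaced by a backward difference over the inter-turn interval. The sketch below is our illustration: the K_A and D_A values match those quoted in Fig. 11, while K_S and the stride saturation level are assumed, and the image-coordinate units are left abstract.

```python
import numpy as np

def turning_params(i, x_bar, y_bar, x_bar_prev, dT,
                   KA=20.0, DA=5.0, KS=0.001, S_nom=0.03):
    """Turning parameters for Leg i (i = 1 or 2) from averaged visual data.

    x_bar, y_bar : averaged target offsets over the current step of Leg i
    x_bar_prev   : averaged horizontal offset at the last turn onset
    dT           : time elapsed since the last turn onset
    Returns (Y, S): turning angle of Leg i per step, and the stride.
    """
    # PD feedback on the horizontal offset, derivative approximated by a
    # backward difference: xi = -KA*x - DA*(x - x_prev)/dT.
    xi = -(KA + DA / dT) * x_bar + (DA / dT) * x_bar_prev
    # f(x) passes the command only when its sign matches this leg's turning
    # side ((-1)^i * x >= 0), so each leg turns in one direction only.
    Y = -xi if (-1) ** i * xi >= 0 else 0.0
    # Stride from the vertical offset, saturated at the nominal stride.
    S = float(np.clip(-KS * y_bar, -S_nom, S_nom))
    return Y, S
```

The derivative term D_A damps the approach so the target offset converges to the center of vision without overshoot, which is the difference visible between the D_A = 0 and D_A = 5 curves of Fig. 11.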
Fig. 11. Turn with vision system with fixed target: (a) numerical analysis, (b) hardware experiment; visual information x and y vs. time [s] for gain settings (K_A, D_A) = (20, 0), (20, 5), (14, 0), and (14, 5), with the center of vision shown for reference

Fig. 12. Turn with vision system with moving target: visual information x and y vs. time [s]
and the robot is restricted so that |Y^(1)| > 0 and |Y^(2)| > 0 do not hold simultaneously.

B. Results

First, the target is fixed on the floor at a point 3.0 [m] ahead of the robot along the walking direction and 0.9 [m] to the side, perpendicular to the walking direction. According to the visual information, the robot pursues the target. The results of the numerical analyses and the hardware experiments are shown in Fig. 11; these figures show the visual information from the camera. They reveal that the target on the image converges to the center of vision, that is, the robot can pursue the target.

Next, the target is initially placed at one end-point of a half circle of 1 [m] radius and the robot starts to walk. The target then moves along the half circle to the opposite end-point from 8 [s] to 25 [s], and moves back along the half circle to the starting end-point from 30 [s] to 45 [s]. The results of the hardware experiment are shown in Fig. 12, which again shows the visual information from the camera. These figures reveal that the robot can also pursue a target moving on the floor. From these results, it is verified that voluntary turning motion driven by visual information is realized with the proposed control system.
REFERENCES
[1] T. Imai, S. T. Moore, T. Raphan and B. Cohen, "Interaction of the body, head, and eyes during walking and turning," Experimental Brain Research, 136, pp. 1-18, 2001.
[2] S. Kajita, F. Kanehiro, K. Kaneko, K. Fujiwara, K. Yokoi and H. Hirukawa, "A Realtime Pattern Generator for Biped Walking," Proc. of the IEEE International Conference on Robotics and Automation (ICRA 2002), pp. 31-37, Washington, DC, USA, May 2002.
[3] A. E. Patla, S. D. Prentice, C. Robinson and J. Neufeld, "Visual control of locomotion: strategies for changing direction and for going over obstacles," Journal of Experimental Psychology: Human Perception and Performance, 17, pp. 603-634, 1991.
[4] A. E. Patla, C. Robinson, M. Samways and C. J. Armstrong, "Visual control of step length during overground locomotion: task-specific modulation of the locomotor synergy," Journal of Experimental Psychology: Human Perception and Performance, 15, pp. 603-617, 1989.
[5] K. Tsuchiya, S. Aoi and K. Tsujita, "Locomotion Control of a Biped Locomotion Robot using Nonlinear Oscillators," Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), pp. 1745-1750, Las Vegas, USA, Oct. 2003.
[6] J. Yamaguchi, E. Soga, S. Inoue and A. Takanishi, "Development of a Bipedal Humanoid Robot - Control Method of Whole Body Cooperative Dynamic Biped Walking -," Proc. of the IEEE International Conference on Robotics and Automation (ICRA 1999), pp. 368-374, Detroit, USA, May 1999.
Acknowledgment
This work was supported in part by the Center of Excellence for Research and Education on Complex Functional Mechanical Systems (COE program of the Ministry of Education, Culture, Sports, Science and Technology, Japan), and by grants from the Japan Science and Technology Corporation (JST) under the Core Research for Evolutional Science and Technology (CREST) program.