Journal of Mechanical Science and Technology 29 (12) (2015) 5425~5436 www.springerlink.com/content/1738-494x(Print)/1976-3824(Online)

DOI 10.1007/s12206-015-1144-4

Object tracking and following six-legged robot system using Kinect camera based on Kalman filter and backstepping controller†

Amruta Vinod Gulalkari1, Pandu Sandi Pratama1, Giang Hoang1, Dae Hwan Kim1, Bong Huan Jun2 and Sang Bong Kim1,*

1 Department of Mechanical Design Engineering, Pukyong National University, Busan, 48547, Korea
2 Technology Center for Offshore Plant Industry, Korea Research Institute of Ships and Ocean Engineering, Daejon, Korea

(Manuscript Received December 6, 2014; Revised July 9, 2015; Accepted August 17, 2015)

*Corresponding author. Tel.: +82 516296158, Fax: +82 516211411, E-mail address: [email protected]
†Recommended by Associate Editor Kyoungchul Kong
© KSME & Springer 2015

Abstract

This paper proposes an object tracking and following six-legged robot (6LR) system that uses a Kinect camera based on Kalman filter and backstepping control method. To achieve this task, the following steps are executed. First, the 6LR is developed with several interconnected devices, such as servomotors, a microcontroller, Bluetooth, and so on. The Kinect camera is installed on the 6LR to perform image processing. Second, the kinematic modeling of the 6LR is presented. Third, a blue-colored candidate object is detected by the Kinect camera through a color-based object detection method, and the position coordinate of the detected object inside the RGB image frames is obtained. The real position coordinate of the detected object (in mm) is obtained by using simple trigonometry and Kinect depth data. Fourth, the Kalman filter algorithm is used to estimate the real position coordinate and velocity coordinate of the moving candidate object. Fifth, the backstepping method using a Lyapunov function is adopted to design a controller for the 6LR to perform the object-following task. Finally, the experimental results are presented to verify the effectiveness and performance of the proposed control method. The results show that the 6LR can successfully follow the moving candidate object with the designed controller.

Keywords: Backstepping control; Kinect camera; Kalman filter; Lyapunov function; Six-legged robot

1. Introduction

Object detection and tracking has become one of the most popular topics of interest in the field of robotic perception. Numerous studies have been conducted on object detection and tracking with the use of mono-vision cameras, time-of-flight cameras, or stereo cameras [1-3]. In mobile robotic applications, these cameras are installed on either wheeled or legged mobile robots [4, 5]. In Ref. [6], a wheeled robot was utilized to design a people-following system. However, wheeled robots are hindered by certain limitations because wheels are inefficient on irregular or rough terrains. To overcome these limitations, legged robots were developed. In Refs. [7-9], bipedal, tripedal, and quadruped robots were used, respectively. However, maintaining stability with such configurations is not an easy task. The selection of an appropriate camera is another important issue in object tracking and following systems because depth data is a basic requirement in such systems. In Ref. [10], a stereo vision system was implemented on mobile robots designed for rough terrain. However, image processing using the stereo vision technique is quite an inconvenient process because correspondence matching issues make depth calculation a complicated task. Moving object tracking is another important requirement to consider in such systems. In Ref. [11], although an object-following system was proposed for a legged robot, only a static object was considered, and no tracking method was employed for the moving object. Moreover, in the design of such systems, the controller is the crucial part that helps the system achieve better stability. In Ref. [12], a fuzzy logic-based controller was proposed. However, in the fuzzy controller, the selection of the membership function is not a simple task. The membership function must be selected carefully, which requires tremendous effort. Therefore, a new controller should be developed.

To solve these problems, this paper proposes an object tracking and following six-legged robot (6LR) system that uses a Kinect camera based on Kalman filter and backstepping control method. To achieve this task, the following steps are executed. First, the 6LR is developed with several interconnected devices such as servomotors, a microcontroller, Bluetooth, and so on. The 6LR is more stable than other configurations of legged robots such as bipedal, tripedal, or quadruped robots because the 6LR can use a larger support polygon while walking.


Fig. 1. Configuration of the Kinect camera based 6LR system: (a) microcontroller and power supply; (b) host computer and Bluetooth; (c) Kinect camera.

Fig. 2. One leg configuration of the 6LR.

The Kinect camera is installed on the 6LR to perform image processing. This RGB-D camera can directly provide depth data, thereby reducing the complexity of the image processing algorithm. Second, the kinematic modeling of the 6LR is presented. Third, a candidate blue-colored object is detected by the Kinect camera through a color-based object detection method, and its position coordinate inside the RGB image frames is obtained. The real position coordinate of the detected object (in mm) is obtained by using simple trigonometry and Kinect depth data. Fourth, the Kalman filter algorithm is used to estimate the real position coordinate and velocity coordinate of the moving candidate object. Fifth, the backstepping method using a Lyapunov function is adopted to design a controller for the 6LR to perform the object-following task. Using the backstepping method to design the controller is a simple task because the control law can be directly obtained while the stability of the system is being checked. Finally, the experimental results are presented to verify the effectiveness and performance of the proposed control method. The experimental results reveal that by using the Kinect camera, the 6LR can detect the candidate object well. The candidate object is tracked successfully by the Kalman filter. By using the backstepping control method with the appropriate Lyapunov function, the 6LR can follow the moving candidate object well with the designed controller while maintaining the stability of the 6LR.

2. System description

Fig. 1 shows the configuration of the Kinect camera-based object tracking and following 6LR. This system mainly consists of a six-legged robot, a Kinect camera, a host computer, and a blue-colored rectangular board as the object for detection and tracking.

Fig. 3. Real Kinect camera based 6LR system.

The developed 6LR consists of one body and six legs. Each leg has four rotational joints and four links. Fig. 2 shows one leg configuration of the 6LR, which has four rotational joints, four links, and one end effector that touches the ground. For the leg configuration, 24 servo motors of Dynamixel AX-12 type are used to control the locomotion of the 6LR by operating six legs (i.e., 24 rotational joints). The robot is controlled by the microcontroller and the host computer. The microcontroller and Bluetooth are attached on the inner side of the 6LR. The microcontroller used to control the 6LR is a DSP (digital signal processor) TMS320F28335. Bluetooth is used for wireless communication between the 6LR and the host computer. The Kinect camera used in this paper is the Microsoft Kinect Xbox camera [13]. It is mounted on the top front of the 6LR to capture both RGB and depth images at a speed of 30 frames per second. The depth range of the Kinect Xbox camera is from 800 mm to 4000 mm with a horizontal field of view of 57° and a vertical field of view of 43°. Fig. 3 shows the real Kinect camera-based 6LR system.

In this paper, a tripod gait is chosen for the 6LR. That is, one tripod of legs (the foreleg and hind leg from one side and the middle leg from the other) is in swing phase (three legs: lift up), and the other tripod of legs is in retraction phase (other three legs: contact with the ground) when the robot walks. The walking cycle imitating the tripod gait with period T is shown in Fig. 4. The sequence is repeated every cycle, and the tripods are alternated.

Fig. 5 shows the control structure of the proposed system. The Kinect camera captures RGB and depth images at the rate of 30 Hz. The user gives commands to the 6LR through the GUI (graphical user interface) built on the host computer.


Fig. 4. Walking cycle of tripod gait.

Fig. 6. Configuration of one leg of 6LR.

Fig. 5. Control structure of the proposed system.

The host computer computes both the image processing and control algorithms and sends commands to the microcontroller via Bluetooth through the RS232 interface. The microcontroller directly controls the servomotors and receives feedback data from the servomotors through half-duplex communication.

3. Kinematic modeling of 6LR

Fig. 7. Frame coordinates of the 6LR.

3.1 Kinematic modeling of one leg

Each leg of the 6LR has four joints. A four-joint leg can be considered a four-link revolute-joint manipulator attached to the robot body. In this paper, the Denavit-Hartenberg (D-H) convention is used to derive the kinematic modeling of each leg of the 6LR [14]. Fig. 6 shows the configuration of one leg of the developed six-legged robot with the corresponding joint variables. By using the D-H convention, the transformation matrix from joint 0 to the end effector of leg i can be calculated as follows [14]:

$$ {}^0_4T_i = {}^0_1T_i\,{}^1_2T_i\,{}^2_3T_i\,{}^3_4T_i \,. \quad (1) $$

The end effector position vector of leg i with respect to the joint 0 frame, $\mathbf{p}^0_{ei} = [x^0_{ei}\ \ y^0_{ei}\ \ z^0_{ei}]^T$, can be calculated from the first three elements of the last column of ${}^0_4T_i$, called the forward kinematic equation $k(\boldsymbol{\theta}_i)$, as follows [14]:

$$ k(\boldsymbol{\theta}_i) = \mathbf{p}^0_{ei} = [x^0_{ei}\ \ y^0_{ei}\ \ z^0_{ei}]^T \quad (2) $$

$$ x^0_{ei} = a_1 c_{1i} - a_4 c_{4i}(s_{1i}s_{3i} - c_{1i}c_{2i}c_{3i}) + a_2 c_{1i}c_{2i} + d_3 c_{1i}s_{2i} - a_3 s_{1i}s_{3i} + a_3 c_{1i}c_{2i}c_{3i} - a_4 c_{1i}s_{2i}s_{4i} \quad (3) $$

$$ y^0_{ei} = a_1 s_{1i} + a_4 c_{4i}(c_{1i}s_{3i} - s_{1i}c_{2i}c_{3i}) + a_2 s_{1i}c_{2i} + a_3 c_{1i}s_{3i} - d_3 s_{1i}s_{2i} + a_3 s_{1i}c_{2i}c_{3i} - a_4 s_{1i}s_{2i}s_{4i} \quad (4) $$

$$ z^0_{ei} = d_3 c_{2i} - a_2 s_{2i} - a_3 c_{3i}s_{2i} - a_4 c_{2i}s_{4i} - a_4 s_{2i}c_{3i}c_{4i} \quad (5) $$

where $\boldsymbol{\theta}_i = [\theta_{1i}, \theta_{2i}, \theta_{3i}, \theta_{4i}]^T$ is the angular position vector of each joint and is the solution of Eqs. (2)-(5) using the inverse kinematics, $a_j$ is the distance between $O_{ji}$ and $O'_{ji}$, $d_3$ is the coordinate of $O'_{3i}$ along $z_{2i}$, $c_{ji} = \cos\theta_{ji}$, and $s_{ji} = \sin\theta_{ji}$ for $j = 1, 2, 3, 4$.
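For illustration, a minimal Python sketch of the forward kinematics of Eqs. (2)-(5) is given below. The link lengths a1-a4 and the offset d3 are placeholder values, not the dimensions of the actual robot.

```python
import numpy as np

def leg_forward_kinematics(theta, a=(52.0, 66.0, 80.0, 110.0), d3=0.0):
    """End-effector position of one leg in the joint-0 frame, Eqs. (2)-(5).

    theta : (theta_1i, theta_2i, theta_3i, theta_4i) joint angles in rad
    a     : link lengths a1..a4 (mm) -- placeholder values, not the real robot's
    d3    : offset of O'_3i along z_2i (mm) -- placeholder value
    """
    a1, a2, a3, a4 = a
    c1, c2, c3, c4 = np.cos(theta)
    s1, s2, s3, s4 = np.sin(theta)

    x = (a1*c1 - a4*c4*(s1*s3 - c1*c2*c3) + a2*c1*c2
         + d3*c1*s2 - a3*s1*s3 + a3*c1*c2*c3 - a4*c1*s2*s4)      # Eq. (3)
    y = (a1*s1 + a4*c4*(c1*s3 - s1*c2*c3) + a2*s1*c2
         + a3*c1*s3 - d3*s1*s2 + a3*s1*c2*c3 - a4*s1*s2*s4)      # Eq. (4)
    z = d3*c2 - a2*s2 - a3*c3*s2 - a4*c2*s4 - a4*s2*c3*c4        # Eq. (5)
    return np.array([x, y, z])                                   # p^0_ei, Eq. (2)

# Example: end-effector position for a nominal joint configuration
p0 = leg_forward_kinematics((0.0, np.pi/6, -np.pi/4, np.pi/3))
```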

3.2 Kinematics of the 6LR

The overall movement of the robot platform can be computed as a function of foot movement by using the kinematics of the entire robot. Fig. 7 shows the frame coordinates of the 6LR. $O_b x_b y_b z_b$ is a body coordinate frame located on the center of the body of the 6LR, $O_{ji} x_{ji} y_{ji} z_{ji}$ is a leg coordinate frame located on joint j of leg i, W is the distance between Leg-L2 and Leg-R2, L is the length of the robot, and E is the distance between Leg-L1 and Leg-R1. According to Fig. 7, the homogeneous transformation matrix ${}^b_0T_i$ from the body frame to joint 0 of leg i can be derived as follows [14]:

$$ {}^b_0T_i = \begin{bmatrix} \cos\alpha_i & -\sin\alpha_i & 0 & x^b_{0i} \\ \sin\alpha_i & \cos\alpha_i & 0 & y^b_{0i} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (6) $$

where $\alpha_i$ is the angle to transform from the $O_b x_b y_b z_b$ coordinate frame to the $O_{ji} x_{ji} y_{ji} z_{ji}$ coordinate frame of joint 0 of leg i, and $x^b_{0i}$, $y^b_{0i}$ are the translating distances to transform from the $O_b x_b y_b z_b$ coordinate frame to the $O_{ji} x_{ji} y_{ji} z_{ji}$ coordinate frame of joint 0 of leg i along the $x_b$ and $y_b$ axes, respectively.

The end effector position vector of each leg with respect to the body frame can be derived from the transformation matrix as follows:

$$ \tilde{\mathbf{p}}^b_{ei} = {}^b_0T_i\,\tilde{\mathbf{p}}^0_{ei} \quad (7) $$

where $\tilde{\mathbf{p}}^b_{ei} = \begin{bmatrix}\mathbf{p}^b_{ei}\\ 1\end{bmatrix}$, $\tilde{\mathbf{p}}^0_{ei} = \begin{bmatrix}\mathbf{p}^0_{ei}\\ 1\end{bmatrix}$, and $\mathbf{p}^b_{ei} = [x^b_{ei}\ \ y^b_{ei}\ \ z^b_{ei}]^T$ is the position vector of the end effector with respect to the body frame of leg i.

3.3 Kinematic modeling to define control inputs

Fig. 8 shows the kinematic configuration of the 6LR. In this paper, two control parameters are chosen to control the 6LR for object following based on its kinematic equations. These control inputs are the angular velocity and the translational velocity of the 6LR, which can be expressed as follows:

$$ v = \frac{r}{T} \quad (8) $$

$$ \theta_d = \frac{r}{d}\times\frac{360^\circ}{2\pi} \quad (9) $$

$$ \omega = \frac{\theta_d}{T} \quad (10) $$

where v is the translational velocity of the 6LR along the $y_b$ axis, r is the step distance of each leg, T is the cycle time, $\omega$ is the angular velocity of the 6LR, $\theta_d$ is the desired angle required to achieve the anterior extreme position (AEP) for each leg, and d is the distance between the center of the 6LR and the end effector AEP as shown in Fig. 8.

Fig. 8. Kinematic configuration of the 6LR.

4. Object motion modeling

An object motion model should be created to measure the position and velocity of the moving object for the sampling time $\Delta t$. In this model, the object is assumed to move on a perfectly frictionless, straight path. The object is stationary at its initial position, but it is pushed by random acceleration. The linear state space $\mathbf{x}_k$ at time step k is expressed as follows:

$$ \mathbf{x}_k = [X_R\ \ Y_R\ \ \dot{X}_R\ \ \dot{Y}_R]^T \quad (11) $$

where $\dot{X}_R$ is the object velocity in the X direction, that is, the derivative of the object position $X_R$ with respect to time, and $\dot{Y}_R$ is the object velocity in the Y direction, that is, the derivative of the object position $Y_R$ with respect to time.

Between time step (k - 1) and time step k, the object undergoes a constant acceleration $a_k$ that is normally distributed (Gaussian) with mean 0 and standard deviation $\sigma_a$. According to Newton's laws of motion, the linear state space can be expressed as

$$ \mathbf{x}_k = \mathbf{F}\mathbf{x}_{k-1} + \mathbf{G}a_k \quad (12) $$

where $\mathbf{F} = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$ is the state transition matrix and $\mathbf{G} = \begin{bmatrix} \dfrac{\Delta t^2}{2} & \dfrac{\Delta t^2}{2} & \Delta t & \Delta t \end{bmatrix}^T$.

Eq. (12) can be represented as

$$ \mathbf{x}_k = \mathbf{F}\mathbf{x}_{k-1} + \mathbf{w}_k \quad (13) $$

where $\mathbf{w}_k$ is the Gaussian process noise at time step k that is normally distributed (mean 0 and standard deviation $\sigma_a$) with covariance matrix $\mathbf{Q}$ such that $\mathbf{w}_k \sim N(\mathbf{0}, \mathbf{Q})$.

The state covariance matrix $\mathbf{Q}$ is given as

$$ \mathbf{Q} = \mathbf{G}\mathbf{G}^T\sigma_a^2 = \begin{bmatrix} \dfrac{\Delta t^4}{4} & 0 & 0 & 0 \\ 0 & \dfrac{\Delta t^4}{4} & 0 & 0 \\ 0 & 0 & \Delta t^2 & 0 \\ 0 & 0 & 0 & \Delta t^2 \end{bmatrix}\sigma_a^2 \,. $$

At each time step, a measurement of the true position of the object with the noise obtained from the Kinect camera sensor at time step k is given by

$$ \mathbf{z}_k = \mathbf{H}\mathbf{x}_k + \mathbf{v}_k \quad (14) $$

where $\mathbf{H} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$ is the measurement matrix and $\mathbf{v}_k$ is the Gaussian measurement noise vector at time step k, which is normally distributed with covariance matrix $\mathbf{R}$ such that $\mathbf{v}_k \sim N(\mathbf{0}, \mathbf{R})$. The measurement covariance matrix $\mathbf{R}$ is obtained as follows:

$$ \mathbf{R} = E[\mathbf{v}_k\mathbf{v}_k^T] = \begin{bmatrix} \sigma_p^2 & 0 & 0 & 0 \\ 0 & \sigma_p^2 & 0 & 0 \\ 0 & 0 & \sigma_v^2 & 0 \\ 0 & 0 & 0 & \sigma_v^2 \end{bmatrix} . $$

Fig. 9. Detected candidate object with center coordinate $(X_{RGB}, Y_{RGB})$ within the desired object area in the Kinect RGB image.

5. Image processing algorithm for Kinect camera sensor

In this paper, a color-based object detection algorithm is developed for the Kinect camera sensor to detect the blue-colored candidate object in its RGB image frames. This algorithm is developed by using the AForge.NET C# framework, which provides a useful set of image processing filters and tools designed for developing image processing algorithms [15]. This image processing algorithm is executed as follows:

Step 1: Both RGB and depth information are collected with the help of the Kinect camera. The specified colored object is detected, and its position coordinate (in pixels) inside the RGB image frames is then obtained. Finally, the object is located in the image.

Step 2: Color filtering is performed on the Kinect RGB images. The color filtering process keeps pixels inside the specified RGB color range and fills the rest with black. In this process, only the blue-colored object is kept, and everything else is removed.

Step 3: The position coordinate of the detected object is determined by using the "Blob counter" tool, which counts and extracts standalone objects in images by using the connected component algorithm [15]. The connected component algorithm treats all pixels with values less than or equal to the "background threshold" as background, and pixels with higher values are treated as object pixels. However, the "Blob counter" tool works only with grayscale images; thus, gray scaling is applied to the images before this tool is used.

Step 4: The detected object is located in the RGB image frames. Once the object is located, the center coordinate of the detected object is obtained.

Fig. 9 shows a detected candidate object with center coordinate $(X_{RGB}, Y_{RGB})$ within a desired object area of the Kinect RGB image frame. The next step is converting the pixel position $(X_{RGB}, Y_{RGB})$ of the detected candidate object to the real-world coordinate $(X_R, Y_R)$ in mm. For this purpose, consider an extended view of the current angle from the Kinect camera with the detected candidate object in the RGB image and the real candidate object as shown in Fig. 10. According to the image principle explained in Refs. [6, 16], the current angle between the center of the detected candidate object and the center of the RGB image is given by

$$ \phi_C = \arctan\left(\frac{X_{RGB}\,\gamma}{f}\right) \quad (15) $$

where $\gamma$ is the coefficient of transformation from pixel to length, and f is the focal length of the Kinect camera.

Fig. 10. Extended view of the current angle from the Kinect camera with the detected candidate object in the RGB image and the real candidate object.

By calculating the current angle $\phi_C$ and using simple trigonometry, from Fig. 10, the position of the real candidate object in the X direction can be obtained as

$$ X_R = Y_R \tan\phi_C \quad (16) $$

where $Y_R$ is the position of the real candidate object in the Y direction, which represents the depth in mm and is obtained directly from the Kinect depth data.

In this paper, the reference depth between the 6LR and the candidate object is 1100 mm as shown in Fig. 11. If the moving candidate object is at a distance of more than 1100 mm, the 6LR moves forward. If the reference depth between the 6LR and the candidate object reaches 1100 mm, the 6LR stops.

Fig. 11. Reference depth for the moving candidate object.
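As an illustration of Steps 2-4 and Eqs. (15) and (16), the sketch below implements the same color-filter, blob-centroid, and trigonometry pipeline with NumPy instead of the AForge.NET tools used in the paper. The blue RGB range, the pixel-to-length coefficient gamma, the focal length f, and the minimum blob size are assumed placeholder values, not the calibrated Kinect parameters.

```python
import numpy as np

# Assumed parameters (placeholders, not the calibrated Kinect values)
BLUE_LO, BLUE_HI = np.array([0, 0, 120]), np.array([80, 80, 255])  # RGB range
GAMMA = 0.0028   # pixel-to-length coefficient gamma (mm per pixel), assumed
FOCAL = 2.9      # focal length f of the Kinect camera (mm), assumed

def detect_blue_centroid(rgb):
    """Color filtering + blob centroid (Steps 2-4). rgb: HxWx3 uint8 array.
    Returns (X_RGB, Y_RGB) measured from the image center, or None."""
    mask = np.all((rgb >= BLUE_LO) & (rgb <= BLUE_HI), axis=2)
    if mask.sum() < 50:                 # too few pixels: no candidate object
        return None
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()       # blob center in pixel coordinates
    h, w = mask.shape
    return cx - w / 2.0, cy - h / 2.0   # center-referenced pixel coordinates

def pixel_to_real(x_rgb, depth_mm):
    """Eqs. (15) and (16): pixel offset + Kinect depth -> (X_R, Y_R) in mm."""
    phi_c = np.arctan(x_rgb * GAMMA / FOCAL)   # Eq. (15)
    y_r = depth_mm                             # depth read from the Kinect depth map
    x_r = y_r * np.tan(phi_c)                  # Eq. (16)
    return x_r, y_r

# Example with a synthetic frame: a blue patch right of center at 1500 mm depth
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[200:260, 400:460] = (30, 30, 200)
c = detect_blue_centroid(frame)
if c is not None:
    X_R, Y_R = pixel_to_real(c[0], depth_mm=1500.0)
```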

6. Kalman filter for moving object tracking

To follow the moving object well, its position and velocity must be tracked accurately. However, tracking an object well is not an easy task because of the variations in ambient light and the noise of the Kinect camera. The Kinect camera may detect multiple unwanted objects as a result of different real-time conditions. Therefore, keeping track of the detected candidate object is necessary. To overcome these problems, the Kalman filter algorithm is adopted in this paper. The Kalman filter is a recursive estimator that estimates the current state of the candidate object by using the estimated state from the previous time step and the current measurement [17]. Precisely, the optimal state is found recursively with the smallest possible error variance. The Kalman filter has two phases: prediction and update. The objective of the Kalman filter is to estimate the state vector $\mathbf{x}_k$ from the measurement vector $\mathbf{z}_k$. The mathematical description of the Kalman filter is explained as follows:

Fig. 12. Kinect RGB image showing detected candidate object outside the desired area.

6.1 Prediction phase

The prediction phase uses the state estimate from the previous time step to produce an estimate of the current state. Eqs. (11)-(14) describe the object motion model at time step k. As the state vector $\mathbf{x}_k$ is not measured directly, the information provided by the measurement vector $\mathbf{z}_k$ is used to update the unknown state vector $\mathbf{x}_k$. A predicted (a priori) state estimate vector $\hat{\mathbf{x}}_k^-$ and a predicted (a priori) estimate covariance matrix $\mathbf{P}_k^-$ are obtained for the next time step k as follows:

$$ \hat{\mathbf{x}}_k^- = \mathbf{F}\hat{\mathbf{x}}_{k-1} \quad (17) $$

$$ \mathbf{P}_k^- = \mathbf{F}\mathbf{P}_{k-1}\mathbf{F}^T + \mathbf{Q} \quad (18) $$

where $\hat{\mathbf{x}}_{k-1}$ is the (a posteriori) state estimate vector at the previous time step k - 1 and $\mathbf{P}_{k-1}$ is the error covariance matrix at the previous time step k - 1.

6.2 Update phase

In the update phase, the current a priori prediction is combined with the current measurement information to refine the state estimate. The objective is to estimate an a posteriori state estimate vector $\hat{\mathbf{x}}_k$ that is a linear combination of the a priori estimate vector and the new measurement vector $\mathbf{z}_k$ at time step k. In this paper, $\mathbf{z}_k$ is obtained from the Kinect camera at time step k. The update phase equations at time step k are given as follows:

$$ \mathbf{K}_k = \mathbf{P}_k^-\mathbf{H}^T(\mathbf{H}\mathbf{P}_k^-\mathbf{H}^T + \mathbf{R})^{-1} \quad (19) $$

$$ \hat{\mathbf{x}}_k = \hat{\mathbf{x}}_k^- + \mathbf{K}_k(\mathbf{z}_k - \mathbf{H}\hat{\mathbf{x}}_k^-) \quad (20) $$

$$ \mathbf{P}_k = (\mathbf{I} - \mathbf{K}_k\mathbf{H})\mathbf{P}_k^- \quad (21) $$

where $\mathbf{K}_k$ is the Kalman gain matrix computed by the above measurement update equations. The a posteriori state estimate vector $\hat{\mathbf{x}}_k$ and the a posteriori error covariance matrix $\mathbf{P}_k$ at time step k are computed using the measurement vector $\mathbf{z}_k$ at time step k. The prediction and update equations are calculated recursively with the previous a posteriori estimates to predict the new a priori estimate.
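A minimal sketch of the filter of Eqs. (11)-(21) in Python is given below. The 30 Hz frame period follows the paper, while the noise standard deviations sigma_a, sigma_p, and sigma_v are assumed placeholder values, not the ones tuned for the experiments.

```python
import numpy as np

dt = 1.0 / 30.0                                 # Kinect frame period (30 Hz)
sigma_a, sigma_p, sigma_v = 50.0, 10.0, 20.0    # assumed noise levels (mm/s^2, mm, mm/s)

# State x = [X_R, Y_R, Xdot_R, Ydot_R]^T, Eqs. (11)-(13)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
Q = np.diag([dt**4 / 4, dt**4 / 4, dt**2, dt**2]) * sigma_a**2   # as printed in the paper
H = np.eye(4)                                                    # position and velocity measured
R = np.diag([sigma_p**2, sigma_p**2, sigma_v**2, sigma_v**2])

def kf_predict(x, P):
    """Prediction phase, Eqs. (17) and (18)."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z):
    """Update phase, Eqs. (19)-(21)."""
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain, Eq. (19)
    x = x_pred + K @ (z - H @ x_pred)            # a posteriori state, Eq. (20)
    P = (np.eye(4) - K @ H) @ P_pred             # a posteriori covariance, Eq. (21)
    return x, P

# Example: one predict/update cycle with a fake measurement (mm, mm/s)
x, P = np.zeros(4), np.eye(4) * 1e3
z = np.array([120.0, 1500.0, 0.0, 0.0])
x, P = kf_update(*kf_predict(x, P), z)
```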

7. Error measurement

This section explains the definition and measurement methods for the heading angle error and the distance error as follows:

7.1 Heading angle error

Consider that the candidate object is detected in the right corner of the Kinect RGB image frame outside the desired object area as shown in Fig. 12. The objective of the proposed system is to bring the detected candidate object inside the desired object area. To make the 6LR turn toward the candidate object, its current angular position should be changed to a desired angular position to achieve object following as shown in Fig. 13. Therefore, the heading angle error $e_\phi$ is defined as the difference between the desired angular position $\phi_D$ and the current angular position $\phi_C$ of the candidate object based on the Y direction as follows:

$$ e_\phi = \phi_D - \phi_C \,. \quad (22) $$

However, after the Kalman filter is applied, the current angular position $\phi_C$ is the estimated angular position $\hat\phi$ of the moving candidate object based on the Y direction. Therefore, the estimated heading angle error can be written as

$$ e_\phi = \phi_D - \hat\phi \quad (23) $$

where $\hat\phi$ can be calculated as follows:

$$ \hat\phi = \arctan\left(\frac{\hat X_R}{\hat Y_R}\right) \quad (24) $$

where $\hat X_R$ and $\hat Y_R$ are the estimated positions of the detected candidate object obtained from the Kalman filter.

Fig. 13. Heading angle error definition.

7.2 Distance error

Fig. 14 shows the definition of the distance error for the Kinect camera-based 6LR system. The distance error $e_y$ is defined as the difference between the reference distance $Y_D$ and the current distance $Y_R$ of the object based on the center of the 6LR as follows:

$$ e_y = Y_D - Y_R \,. \quad (25) $$

However, after the Kalman filter is applied, the current distance $Y_R$ of the detected candidate object is the estimated distance $\hat Y_R$ between the candidate moving object and the 6LR. Therefore, the estimated distance error can be written as follows:

$$ e_y = Y_D - \hat Y_R \,. \quad (26) $$

Fig. 14. Distance error definition.
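A small sketch of the error computation of Eqs. (23), (24) and (26) is given below. The reference distance of 1100 mm follows the paper, while the desired angular position phi_D = 0 is an assumption corresponding to the object lying on the robot's forward axis; the state estimate passed in is a placeholder input.

```python
import numpy as np

def tracking_errors(x_hat, phi_D=0.0, Y_D=1100.0):
    """Estimated heading angle error (Eq. (23)) and distance error (Eq. (26)).

    x_hat : Kalman estimate [X_R_hat, Y_R_hat, Xdot_R_hat, Ydot_R_hat] (mm, mm/s)
    phi_D : desired angular position of the object (rad); 0 here assumes the object
            is desired to lie on the robot's forward (Y) axis
    Y_D   : reference distance between the 6LR and the object (1100 mm in the paper)
    """
    X_R_hat, Y_R_hat = float(x_hat[0]), float(x_hat[1])
    phi_hat = np.arctan(X_R_hat / Y_R_hat)   # estimated angular position, Eq. (24)
    e_phi = phi_D - phi_hat                  # estimated heading angle error, Eq. (23)
    e_y = Y_D - Y_R_hat                      # estimated distance error, Eq. (26)
    return e_phi, e_y

# Example: object estimated 200 mm to the right at 1600 mm depth
e_phi, e_y = tracking_errors([200.0, 1600.0, 0.0, 0.0])
```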

8. Controller design for 6LR

This section first explains the end effector position control of one leg. The angular velocity controller and the translational velocity controller are then designed to minimize the errors calculated in the previous section. In this paper, the backstepping method is adopted to design the controllers [18, 19].

8.1 Walking controller design

In legged robots, the end effector positions of the legs are crucial to the stability and movement of the legged robot. The end effector position vector of leg i with respect to the body frame, $\mathbf{p}^b_{ei}$, can be calculated by forward kinematics when the angular position vector of each joint $\boldsymbol\theta_i$ is known. Typically, inverse kinematics is used to obtain $\boldsymbol\theta_i$; however, solving for $\boldsymbol\theta_i$ by inverse kinematics is difficult because of its complexity as shown in Eqs. (2)-(5). Therefore, a differential kinematics algorithm is used to address this problem for the 6LR in this paper.

For one leg of the 6LR, as shown in Fig. 6, the end effector position error vector $\mathbf{e}^b_i$ between the desired and actual end effectors is defined as follows:

$$ \mathbf{e}^b_i = \mathbf{p}^b_{di} - \mathbf{p}^b_{ei} \quad (27) $$

where $\mathbf{p}^b_{di} = [x^b_{di}\ \ y^b_{di}\ \ z^b_{di}]^T$ is the desired end effector position vector with respect to the body frame of leg i. In this section, since the goal is for the end effector to be in a desired position, the Jacobian relating the translational velocity of the end effector and the joint angular velocity is considered. The translational velocity of the end effector can be expressed as the time derivative of the end effector position vector of leg i with respect to the body frame $\mathbf{p}^b_{ei}$ as follows:

$$ \dot{\mathbf{p}}^b_{ei} = \frac{\partial \mathbf{p}^b_{ei}}{\partial \boldsymbol\theta_i}\dot{\boldsymbol\theta}_i = \mathbf{J}^b_{Ai}(\boldsymbol\theta_i)\dot{\boldsymbol\theta}_i \quad (28) $$

where $\mathbf{J}^b_{Ai}(\boldsymbol\theta_i)$ is the analytical Jacobian of $\mathbf{p}^b_{ei}$ with respect to $\boldsymbol\theta_i$ as:

$$ \mathbf{J}^b_{Ai}(\boldsymbol\theta_i) = \frac{\partial \mathbf{p}^b_{ei}}{\partial \boldsymbol\theta_i} = \frac{\partial k(\boldsymbol\theta_i)}{\partial \boldsymbol\theta_i} \,. \quad (29) $$

Then the time derivative of the error vector is

$$ \dot{\mathbf{e}}^b_i = \dot{\mathbf{p}}^b_{di} - \mathbf{J}^b_{Ai}(\boldsymbol\theta_i)\dot{\boldsymbol\theta}_i \,. \quad (30) $$

Using the backstepping method, a Lyapunov function (clf) candidate can be chosen as follows:

$$ V_i(\mathbf{e}^b_i) = \frac{1}{2}(\mathbf{e}^b_i)^T\mathbf{K}\mathbf{e}^b_i \quad (31) $$

where $\mathbf{K}$ is a symmetric positive definite matrix. This function is chosen such that $V_i(\mathbf{e}^b_i) > 0$ for all $\mathbf{e}^b_i \neq \mathbf{0}$ and $V_i(\mathbf{0}) = 0$. Differentiating Eq. (31) with respect to time and substituting Eq. (30) into it, the derivative of the clf is obtained as follows:

$$ \dot V_i(\mathbf{e}^b_i) = (\mathbf{e}^b_i)^T\mathbf{K}\dot{\mathbf{e}}^b_i = (\mathbf{e}^b_i)^T\mathbf{K}\left(\dot{\mathbf{p}}^b_{di} - \mathbf{J}^b_{Ai}(\boldsymbol\theta_i)\dot{\boldsymbol\theta}_i\right) . \quad (32) $$

A control input vector is chosen as the joint angular velocity vector of leg i as follows:

$$ \dot{\boldsymbol\theta}_i = \left(\mathbf{J}^b_{Ai}\right)^{+}(\boldsymbol\theta_i)\,\mathbf{K}\mathbf{e}^b_i + \left(\mathbf{I}_n - \left(\mathbf{J}^b_{Ai}\right)^{+}\mathbf{J}^b_{Ai}\right)\dot{\mathbf{q}}_{0i} \quad (33) $$

where $\left(\mathbf{J}^b_{Ai}\right)^{+}$ is the right pseudo-inverse of $\mathbf{J}^b_{Ai}$ as

$$ \left(\mathbf{J}^b_{Ai}\right)^{+} = \left(\mathbf{J}^b_{Ai}\right)^T\left(\mathbf{J}^b_{Ai}\left(\mathbf{J}^b_{Ai}\right)^T\right)^{-1} . \quad (34) $$

Substituting Eq. (33) into Eq. (32) leads to

$$ \dot V_i(\mathbf{e}^b_i) = -(\mathbf{e}^b_i)^T\mathbf{K}\mathbf{J}^b_{Ai}\left(\mathbf{J}^b_{Ai}\right)^{+}\mathbf{K}\mathbf{e}^b_i < 0 \,. \quad (35) $$

The chosen controller of Eq. (33) makes the end effector reach the desired position. The conditions $\dot V_i < 0$ and $V_i > 0$ imply that the end effector position error $\mathbf{e}^b_i \rightarrow \mathbf{0}$ as time $t \rightarrow \infty$, i.e., the system is asymptotically stable. The joint angles are used to control the servomotors and can be calculated in discrete time by the Euler integration method. With sampling time $\Delta t$, the joint angles at time $t_{k+1} = t_k + \Delta t$ can be computed as follows:

$$ \boldsymbol\theta_i(t_{k+1}) = \boldsymbol\theta_i(t_k) + \dot{\boldsymbol\theta}_i(t_k)\,\Delta t \,. \quad (36) $$

8.2 Angular and translational velocity controller design

Using Eqs. (23) and (26), the error vector $\mathbf{e}$ can be written as follows:

$$ \mathbf{e} = \begin{bmatrix} e_\phi \\ e_y \end{bmatrix} = \begin{bmatrix} \phi_D - \hat\phi \\ Y_D - \hat Y_R \end{bmatrix} . \quad (37) $$

The derivative of the error vector $\dot{\mathbf{e}}$ is obtained as

$$ \dot{\mathbf{e}} = \begin{bmatrix} \dot e_\phi \\ \dot e_y \end{bmatrix} = \begin{bmatrix} \dot\phi_D - \dot{\hat\phi} \\ \dot Y_D - \dot{\hat Y}_R \end{bmatrix} = \begin{bmatrix} \omega_D - \hat\omega \\ v_D - \hat v \end{bmatrix} \quad (38) $$

where $\dot\phi_D = \omega_D$ and $\dot Y_D = v_D$ are the desired angular velocity and the desired translational velocity of the 6LR, respectively, and $\dot{\hat\phi} = \hat\omega$ and $\dot{\hat Y}_R = \hat v$ are the estimated angular velocity and the estimated translational velocity of the 6LR, respectively. Using the backstepping method, a Lyapunov function (clf) candidate is chosen as follows:

$$ V = \frac{1}{2}\mathbf{e}^T\mathbf{e} = \frac{1}{2}e_\phi^2 + \frac{1}{2}e_y^2 \ge 0 \,. \quad (39) $$

The derivative of the clf is obtained as

$$ \dot V = \dot e_\phi e_\phi + \dot e_y e_y = (\omega_D - \hat\omega)e_\phi + (v_D - \hat v)e_y \,. \quad (40) $$

The Lyapunov stability criterion states that the system is stable if $\dot V \le 0$. Therefore, to meet this condition, a control law vector $\mathbf{U}$ is chosen as follows:

$$ \mathbf{U} = \begin{bmatrix} \hat\omega \\ \hat v \end{bmatrix} = \begin{bmatrix} \omega_D + K_1 e_\phi \\ v_D + K_2 e_y \end{bmatrix} \quad (41) $$

where $K_1$ and $K_2$ are positive constants. Substituting Eq. (41) into Eq. (40) yields

$$ \dot V = -K_1 e_\phi^2 - K_2 e_y^2 \le 0 \,. \quad (42) $$

The block diagram of the proposed system is shown in Fig. 15.

Fig. 15. Block diagram of the proposed system.
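As an illustration, a minimal sketch of the two control laws is given below, assuming placeholder values for the gains K1, K2 and for the leg-level gain matrix K, and with the null-space term of Eq. (33) set to zero; the leg Jacobian is assumed to be supplied by the kinematic model of Sec. 3.1.

```python
import numpy as np

K1, K2 = 0.8, 0.005                 # backstepping gains of Eq. (41) -- assumed values
K_leg = np.diag([2.0, 2.0, 2.0])    # symmetric positive definite gain K of Eq. (31) -- assumed

def body_velocity_control(e_phi, e_y, omega_D=0.0, v_D=0.0):
    """Backstepping control law of Eq. (41): desired feedforward plus error feedback."""
    omega_hat = omega_D + K1 * e_phi   # angular velocity command of the 6LR
    v_hat = v_D + K2 * e_y             # translational velocity command of the 6LR
    return omega_hat, v_hat

def leg_joint_step(theta, e_b, J, dt=0.02):
    """One discrete step of the walking controller, Eqs. (33), (34) and (36).

    theta : current joint angles of one leg (4,)
    e_b   : end-effector position error p_d - p_e in the body frame (3,), Eq. (27)
    J     : 3x4 analytical Jacobian of the leg at theta, Eq. (29)
    """
    J_pinv = J.T @ np.linalg.inv(J @ J.T)     # right pseudo-inverse, Eq. (34)
    theta_dot = J_pinv @ (K_leg @ e_b)        # Eq. (33) with the null-space term omitted
    return theta + theta_dot * dt             # Euler integration, Eq. (36)

# Example: commands for a heading error of 0.2 rad and a distance error of 300 mm
omega_cmd, v_cmd = body_velocity_control(0.2, 300.0)
```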


Fig. 16. Images showing the sequence of events that occurred during the rotational following case of object following of the 6LR using the Kinect camera.

Fig. 17. Images showing the sequence of events that occurred during the straight following case of object following of the 6LR using the Kinect camera.

9. Experimental results

Fig. 18. Estimated angular velocity of the 6LR for rotational following case.

The experiment is performed to verify the effectiveness and performance of the proposed system, and the results are shown in Figs. 16 and 17. For the sake of simplicity, two cases of moving-object following are explained separately in this paper, namely, rotational following and straight following.

Fig. 16 presents the images that depict the sequence of events occurring during rotational following of the 6LR using the Kinect camera developed in this paper. In this case, the candidate blue-colored object and the 6LR move horizontally from left to right. Fig. 16(a) shows the starting position of the 6LR in which it is heading toward the detected candidate object. The corresponding RGB image shows that the detected candidate object is inside the desired area, and it is the initial position of the object. Figs. 16(b)-(e) show the movements of the object toward the right-hand side. In these events, the 6LR is changing its heading angle from its initial position more toward the right-hand side to follow the moving object. The corresponding RGB images show that as the object moves more toward the right-hand side, the position of the detected object moves more toward the outside of the desired area. Fig. 16(f) shows the stop position of the object at the extreme right-hand side. When the object stops, the 6LR continues to rotate toward the object until the robot heads exactly toward the object. At this instant, the 6LR stops as shown in Fig. 16(f). The corresponding RGB image shows that the object image is exactly in the center of the desired area. These observations indicate that the 6LR follows the object until the detected object image reaches the center of the desired area.

Fig. 17 presents the images that show the sequence of events occurring during straight following of the 6LR using the Kinect camera developed in this paper. In this case, the movements of the candidate blue-colored object and the 6LR are in the vertical forward direction. The experiment is performed from the starting position of the 6LR to 4000 mm. Fig. 17(a) shows the initial position of the 6LR in which it is heading toward the detected candidate object at a distance of 1400 mm. The corresponding RGB image shows that the detected candidate object is inside the desired area, and that the detected object image is large.

Figs. 17(b)-(e) show the straight forward-moving steps of the 6LR following the forward-moving object. The corresponding RGB images show the positions of the detected moving object inside the desired area. As the 6LR gets closer to the object, the size of the detected object image inside the desired area becomes larger, and vice versa. Fig. 17(f) shows the stop positions of the object and the 6LR. In this event, the object stops at a distance of 2500 mm from its initial position. The 6LR continues to move forward until the distance between the 6LR and the object becomes equal to the reference distance (i.e., 1100 mm). The corresponding RGB image shows that the size of the detected object image inside the desired area is bigger than that of all the previous image sequences.

The experimental results for the proposed system are shown in Figs. 18-23. Figs. 18-20 provide the experimental results for the rotational following case of the 6LR using the Kinect camera: the estimated angular velocity, the reference and estimated angular positions, and the estimated heading angle error of the 6LR. As shown in these graphs, when the 6LR is at its initial position, as shown in Fig. 16(a), the estimated heading angle error is small; thus, the estimated angular velocity is also small. In cases where the object is moving, as shown in Figs. 16(b)-(e), the estimated heading angle error increases. Therefore, the estimated angular velocity increases. The estimated angular velocity of the 6LR increases to 0.14 rad/s until 9 s. At 9 s, the object stops as shown in Fig. 16(f); however, the 6LR continues to follow the object. For this reason, a large estimated heading angle difference exists between the current angular position of the object and the current angular position of the 6LR at 9 s.

The estimated heading angle error reaches 0.256 rad at 9 s, which then gradually decreases and finally converges to a small value of approximately 0.023 rad. Thus, the estimated angular velocity also converges to zero.

Figs. 21-23 show the experimental results for the straight following case of the 6LR using the Kinect camera. Fig. 21 shows that the maximum estimated translational velocity of the 6LR is 25 mm/s, which is constant until 102 s and then converges to zero. Fig. 22 shows the reference, measured, and estimated distance between the moving object and the 6LR in the straight following case. Fig. 23 shows the estimated distance error; its initial value is 400 mm, as shown in Fig. 17(a). With the moving object, as shown in Figs. 17(b)-(e), the estimated distance error increases. At 56 s, the object stops, as shown in Fig. 17(f); however, the 6LR continues to follow the object until it reaches the reference distance (1100 mm). For this reason, a large estimated distance error exists between the current position of the object and the current position of the 6LR at 56 s. The estimated distance error reaches 1150 mm at 56 s, then gradually decreases and finally converges to 0.

Fig. 19. Angular positions of the 6LR for rotational following case.

Fig. 20. Estimated heading angle error of the 6LR for rotational following case.

Fig. 21. Estimated translational velocity of the 6LR for straight following case.

Fig. 22. Distance of the moving object from the 6LR in straight following case.

Fig. 23. Estimated distance error of the 6LR for straight following case.

10. Conclusion

This paper proposed an object tracking and following 6LR system that used a Kinect camera based on the Kalman filter and a backstepping controller. The following steps were executed to achieve object tracking and following. The 6LR was developed. The Kinect camera was installed on the robot platform to perform image processing. The candidate blue-colored object was detected successfully by the Kinect camera through the color-based object detection method. The detected moving object was tracked well by the Kalman filter. Based on the kinematic modeling, the controller for the 6LR based on the backstepping technique using a candidate Lyapunov function was designed. The experimental results were shown to verify the effectiveness of the proposed control method for the vision-based object tracking and following 6LR using a Kinect camera. The results showed that the 6LR followed the candidate object successfully with acceptable errors.


Acknowledgment

This study is a part of the results of the R&D project "Development of multi-legged underwater walking-flying robot" supported by the Korean Government. The authors are indebted to the Ministry of Oceans and Fisheries for their full support of this work.

References

[1] L. Dong and X. Lin, Monocular-vision-based study on moving object detection and tracking, Proc. of the 4th International Conference on New Trends in Information Science and Service Science, Gyeongju, Republic of Korea (2010) 692-695.
[2] J. Stuckler and S. Behnke, Combining depth and color cues for scale- and viewpoint-invariant object segmentation and recognition using random forests, Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan (2010) 4566-4571.
[3] I. H. Kim, D. E. Kim, Y. S. Cha, K. H. Lee and T. Y. Kuc, An embodiment of stereo vision system for mobile robot for real-time measuring distance and object tracking, Proc. of the International Conference on Control, Automation and Systems, Seoul, Republic of Korea (2007) 1029-1033.
[4] H. Lang, Y. Wang and C. W. de Silva, Vision based object identification and tracking for mobile robot visual servo control, Proc. of the 8th International Conference on Control and Automation, Xiamen, China (2010) 92-96.
[5] F. Su and G. Fang, Moving object tracking using an adaptive colour filter, Proc. of the 12th International Conference on Control Automation Robotics and Vision, Guangzhou, China (2012) 1048-1052.
[6] G. Xing, S. Tian, H. Sun, W. Liu and H. Liu, People-following system design for mobile robots using Kinect sensor, Proc. of the 25th Chinese Control and Decision Conference, Guiyang, China (2013) 3190-3194.
[7] H. B. Suay and S. Chernova, Humanoid robot control using depth camera, Proc. of the 6th International Conference on Human-Robot Interaction, Lausanne, Switzerland (2011).
[8] K. Kaede and K. Watanuki, Gait generation and change of direction for the underactuated three-legged robot, Journal of Mechanical Science and Technology, 24 (1) (2010) 55-58.
[9] K. Y. Kim, Ellipse-based leg-trajectory generation for galloping quadruped robot, Journal of Mechanical Science and Technology, 22 (11) (2008) 2099-2106.
[10] A. Chilian and H. Hirschmuller, Stereo camera based navigation of mobile robots on rough terrain, Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO (2009) 4571-4576.
[11] A. V. Gulalkari, G. Hoang, P. S. Pratama, H. K. Kim, S. B. Kim and B. H. Jun, Object following control of six-legged robot using Kinect camera, Proc. of the 3rd International Conference on Advances in Computing, Communications and Informatics, Delhi, India (2014) 758-764.


[12] I. Ullah, F. Ullah and Q. Ullah, Real-time object following fuzzy controller for a mobile robot, Proc. of the 2011 International Conference on Computer Networks and Information Technology, Abbottabad, Pakistan (2011) 241-244.
[13] R. A. El-laithy, J. Huang and M. Yeh, Study on the use of Microsoft Kinect for robotics applications, Proc. of the Position Location and Navigation Symposium, Myrtle Beach, South Carolina (2012) 1280-1288.
[14] B. Siciliano, L. Sciavicco, L. Villani and G. Oriolo, Robotics: Modelling, Planning and Control, 2nd ed., Springer-Verlag, London, U.K. (2009).
[15] K. Kungcharoen, P. Palangsantikul and W. Premchaiswadi, Development of object detection software for a mobile robot using an AForge.NET framework, Proc. of the 9th International Conference on ICT and Knowledge Engineering, Bangkok, Thailand (2012) 201-206.
[16] C. D. Herrera, J. Kannala and J. Heikkila, Accurate and practical calibration of depth and color camera pair, Proc. of the 14th International Conference on Computer Analysis of Images and Patterns, Seville, Spain (2011) 437-445.
[17] H. Choset, K. M. Lynch, S. Hutchinson, G. Kantor, W. Burgard, L. E. Kavraki and S. Thrun, Principles of Robot Motion: Theory, Algorithms and Implementations, MIT Press, Cambridge, MA (2005).
[18] G. Hoang, H. K. Kim and S. B. Kim, Path tracking controller of quadruped robot for obstacle avoidance using potential function method, International Journal of Science and Engineering, 4 (1) (2013) 1-5.
[19] W. Y. Jeong, H. K. Kim, S. B. Kim and B. H. Jun, Path tracking controller design of hexapod robot for omnidirectional gaits, Proc. of the 9th Asian Control Conference, Turkey (2013) 1-6.

Amruta Vinod Gulalkari was born in India in November 1988. She received her B.S. degree from the Dept. of Electronics and Telecommunication, SSGB Amravati University, Amravati, India, in 2010. She then received her M.S. degree in the Interdisciplinary Program of Mechatronics Engineering, Pukyong National University, Busan, Korea, in 2015. Her research interests include legged robots, mobile robot control, and image processing.

Pandu Sandi Pratama was born in Indonesia in November 1986. He received his B.S. degree from the Electrical Engineering Dept. of Diponegoro University, Indonesia, in 2011. He then received his M.S. degree in the Interdisciplinary Program of Mechatronics Engineering Dept., Pukyong National


University, Busan, Korea, in 2013. He received his Ph.D. degree in the Dept. of Mechanical Engineering, Pukyong National University, Busan, Korea, in 2015. His fields of interest are computer science, robotics, and mobile robots.

Hoang Giang was born in Vietnam in April 1984. He received his B.S. degree from the Dept. of Computer Science, Hochiminh City University of Technology, Vietnam, in 2009. He then received his M.S. degree in the Dept. of Mechanical Engineering, Pukyong National University, Busan, Korea, in 2012. He received his Ph.D. degree in the Dept. of Mechanical Engineering, Pukyong National University, Busan, Korea, in 2014. His fields of interest are computer science, robotics, and mobile robots.

Bong-Huan Jun was born in Korea in January 1970. He received his B.S. and M.S. degrees in mechanical engineering from Pukyong National University, Busan, Korea, in 1994 and 1996, respectively, and his Ph.D. degree in the Department of Mechatronics Engineering, Chungnam National University, Daejeon, Korea, in 2006. He joined the Ocean Engineering Department of the Korea Ocean Research and Development Institute (KORDI) in 1996 as a Research Scientist. His research interests include navigation, guidance, and control of underwater vehicles as well as the analysis and motion planning of underwater manipulators. Dr. Jun is a member of the Ocean Engineering Society of IEEE, the Korea Ocean Engineering Society (KSOE), the Institute of Control, Robotics and Systems (ICROS), and the Korea Robotics Society.

Dae Hwan Kim was born in Korea in March 1982. He received his B.S. degree in Electrical Engineering from Chosun University, Kwangju, Korea, in 2008. He then received his M.S. and Ph.D. degrees in Mechanical Engineering from Pukyong National University, Busan, Korea, in 2009 and 2015, respectively. His fields of interest are robust control, combustion engineering control, and mobile robot control.

Sang Bong Kim was born in Korea on August 6, 1955. He received his B.S. and M.S. degrees from the National Fisheries University of Busan, Korea, in 1978 and 1980. He received his Ph.D. degree from the Tokyo Institute of Technology, Japan, in 1988. He is a Professor in the Dept. of Mechanical Engineering, Pukyong National University, Busan, Korea. His research interests include robust control, biomechanical control, and mobile robot control.
