Orientation Feedback for a Spherical Robot Using a Single Camera

Shantanu Thakar1 and Ravi Banavar2

Abstract— In this paper we propose a technique to determine the orientation of a sphere rolling on a flat surface using the data from a single image taken by a camera attached to the ceiling, parallel to the ground. The ball surface has multi-coloured dots arranged at regular angular intervals along its longitudes. As the ball rolls, images are taken in series, and the change in orientation of the sphere between any two images, consecutive or not, can be calculated using the techniques presented in this paper. Two cases are discussed: one in which all the initial points are visible, and a second in which few or all of the points are occluded.

I. INTRODUCTION

For motion planning of spherical robots, sensing the orientation of the robot is crucial. One way of finding the orientation of the robot is to install an IMU on the inner surface of the sphere. The main concern here is the winding of wires, since the power source and the controller are placed on an independently moving platform (here, the yoke). This creates the need for an independent measurement scheme for the orientation. The orientation of the sphere can easily be obtained in real time by placing infrared-reflecting stickers on the sphere surface and using motion-capture systems such as OptiTrack or Vicon. These systems are expensive; hence, we explore a method to obtain the orientation of a sphere moving at slow speeds using multicoloured dots regularly spaced on its surface, as shown in Fig. 1. The relative position of each dot on the surface of the sphere is known. This method requires only one web camera installed on the ceiling to capture images in succession. The methodology for finding the change in orientation between two images that convey information on two coloured dots on the sphere is presented in this paper. Also, by defining an initial orientation, we can find the absolute orientation of the ball from just one image. It is assumed that the coordinates of all the required points are accurately known.

The 6-DOF tracking of a spherical object with green and red dots on its surface has been done in [1] by comparing the coordinates in the projected image with a list of sorted pairs to find a set of candidate matches with the two chosen points. The sphere tracking fails when the projections of two different orientations appear too similar on the image plane. This is overcome by choosing dots of more than two colours. Another method to track the position and the orientation of a sphere is presented in [2].

1 Shantanu Thakar is a Project Associate at the Autonomous Vehicles Laboratory, Aerospace Engineering Department, Indian Institute of Science, Bangalore-560012, India [email protected]
2 Ravi Banavar is with Systems and Control Engineering, Indian Institute of Technology Bombay, Mumbai-400076, Maharashtra, India [email protected]

The movement of the sphere is recorded with two high-speed cameras, and its orientation is tracked based on the identification of possible orientation 'candidates' at each time step, with the dynamics obtained from maximisation of a likelihood function. In [3], a simple technique to extract the 3-D locations of a pair of spheres is presented. This is done by analysing the centers and radii of their projected circles on the image plane. Using the intrinsic parameters of the sensor, this allows 5 DOFs to be extracted when the spheres are connected in a dipole device. This information is then used to develop an algorithm for real-time tracking of a sphere dipole. In [4], learning algorithms for determining the orientation of an object from images are presented, using a representation that allows the orientation of symmetric or asymmetric objects to be learned as a function of a single image. Sensing the position of a minimum of two points on the sphere, in this case two coloured dots, is essential; in connection with this, much literature is found in the aerospace community on the problem of orientation computation of a spacecraft (or rigid body) using two inertial measurements [5]–[12].

This paper is organised as follows. Section II describes the problem statement and the parameters which are given. Section III describes the analysis for computation of the rotation matrices for the two cases: when all the initial points are visible after the rotation, and when few or all initial points are occluded after the rotation. The technique to measure the coordinates of the points is described in Section IV. Section V presents the simulation results for a ball rolling on a circular path, with measurement errors in the coordinates of the points taken into consideration. The conclusion and future work are presented in Section VI.

Fig. 1. Actual image from the webcam of a ball with a few colored dots.




II. PROBLEM STATEMENT

The objective is to determine the absolute orientation of the sphere at any instant during its rolling motion. This can be done by defining an initial orientation (knowing the initial coordinates of all coloured points) and having the projection of the top hemisphere on a plane, i.e., an image of the sphere, at that instant. Similarly, the change in orientation between any two instants can be determined from an image of the sphere at each of the two instants. Two points on the surface of the sphere which are visible in both images are used to determine the absolute orientation or the change in orientation. The coordinates of these two points, 1 and 2, from both images are shown together in Fig. 2. The subscripts A and B correspond to the first and second image respectively. The case where points 1 and/or 2 are occluded at any stage is addressed in Section III-B.

Fig. 2. The top view schematic.

Fig. 3. The top view schematic for finding Rx and Ry.

• 1A: the position of point 1 in the first image (before rotation)
• 2A: the position of point 2 in the first image (before rotation)
• 1B: the position of point 1 in the second image (after rotation)
• 2B: the position of point 2 in the second image (after rotation)

Problem statement: Ti and Tf are the sphere orientation matrices corresponding to the first and the second image respectively. Assuming

Tf = R · Ti,    (1)

our objective is to determine the rotation matrix R. The matrix R is described in terms of three successive rotational transformations as

R = R′ · Ry · Rx    (2)

where Rx is a rotational transformation about the inertial X axis that takes point 1A to P, as shown in Fig. 2, and Ry is the second rotational transformation of the sphere, about the inertial Y axis, that takes point P to 1B. 1E and 2E are the intermediate positions of points 1 and 2 after these two rotational transforms respectively; 1E and 1B are coincident. The third rotational transformation R′ is about the axis 1BC, with an angle γ, such that point 2E moves to point 2B. The arc from 1E to 2E lies along a great circle, as seen in Fig. 4. This arc has the same length as the arc from 1B to 2B along the great circle, i.e., they subtend the same angle φ at the center. Hence, a rotation about the axis 1BC such that point 2E moves to 2B is possible. After applying these three transformations successively, the sphere in the first image is rotationally transformed into the sphere in the second image, i.e., the points 1A and 2A coincide with points 1B and 2B respectively.
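To make the composition in (1)–(2) concrete, the two elementary transformations can be written as standard rotation matrices about the inertial X and Y axes. The following Python/NumPy fragment is a minimal sketch of ours, not code from the paper; the angles alpha and beta are assumed to have been computed as described in Section III, and the sign convention must be matched to the geometry of Figs. 2 and 3.

```python
import numpy as np

def rot_x(angle):
    """Elementary rotation about the inertial X axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def rot_y(angle):
    """Elementary rotation about the inertial Y axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

# alpha, beta: angles of Rx and Ry found in Section III (placeholder values here)
alpha, beta = 0.3, 0.2
Rx, Ry = rot_x(alpha), rot_y(beta)

R_prime = np.eye(3)      # rotation about the axis C-1B by gamma, derived in Section III-A
R = R_prime @ Ry @ Rx    # equation (2)
T_i = np.eye(3)          # orientation at the first image (assumed known)
T_f = R @ T_i            # equation (1)
```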

III. ANALYSIS

The analysis leading to the overall orientation matrix by computing Rx, Ry and R′ is presented in this section.

A. 1A, 2A, 1B and 2B are on the upper hemisphere

In this section, the orientation is determined when all the required points are on the upper hemisphere in each of the two images or measurements.

Determining the rotations Rx and Ry: The rotation angle about the X axis for the calculation of Rx can be found as per Figs. 3 and 5. The dotted circle in Fig. 5 is the cross-section obtained by applying Cut 1, shown in Fig. 3; it has radius r1 and center C1. The total angle of rotation about the X axis is α, such that

α = α1 + α2    (3)

α1 and α2 are shown in Fig. 5, where

α1 = sin⁻¹(d1 / r1)    (4)

α2 = sin⁻¹(d2 / r1)    (5)

α = sin⁻¹(d1 / r1) + sin⁻¹(d2 / r1)    (6)

α can also be found another way:

cos α = (C1 1A · C1 P) / (|C1 1A| |C1 P|)    (7)

Similarly, the rotation angle β about the Y axis, for the calculation of Ry, is obtained from Cut 2:

β1 = sin⁻¹(d3 / r2)    (8)

β2 = sin⁻¹(d4 / r2)    (9)

β = sin⁻¹(d3 / r2) + sin⁻¹(d4 / r2)    (10)

β can also be found another way:

cos β = (C2 P · C2 1B) / (|C2 P| |C2 1B|)    (11)

where C2 is the center of the cross-sectional circle after Cut 2 and C1 1A denotes the vector in the direction from C1 to 1A.

Fig. 4. Schematic for finding R′.

Fig. 5. Cross-section after Cut 1.

Determining the rotation R′: The angle γ is the angle of rotation of the sphere about the axis C1B such that the point 2E moves to 2B. As can be seen from Fig. 6, γ is the angle between the vectors Q2B and Q2E, where Q is the point at which the axis C1B is perpendicular to both Q2E and Q2B. n̂ is a unit vector along the direction of the vector C1B:

n̂ = C1B / |C1B|    (12)

CQ = |C2B| cos φ · n̂    (13)

Q2E = C2E − CQ    (14)

Q2B = C2B − CQ    (15)

cos γ = (Q2E · Q2B) / (|Q2E| |Q2B|)    (16)

The final rotation R′ is a rotation about the axis n̂ by the angle γ:

R′ = e^(n̂γ)    (17)

where

e^(n̂γ) = I + N sin γ + N² (1 − cos γ)    (18)

and

N = [ 0, −n̂(3), n̂(2) ;  n̂(3), 0, −n̂(1) ;  −n̂(2), n̂(1), 0 ]    (19)

Hence, the rotation is

R = R′ · Ry · Rx    (20)

and the final orientation matrix can be obtained as

Tf = R · Ti    (21)
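The axis–angle construction in (12)–(19) maps directly to the Rodrigues formula. The sketch below is our own illustration, not the authors' code; it assumes the center C and the points 1B, 2E and 2B are available as 3-vectors in the inertial frame, and it resolves the sign of γ so that the rotation actually carries 2E onto 2B.

```python
import numpy as np

def skew(n):
    """Skew-symmetric matrix N of a unit vector n, as in (19)."""
    return np.array([[ 0.0, -n[2],  n[1]],
                     [ n[2],  0.0, -n[0]],
                     [-n[1],  n[0],  0.0]])

def rotation_R_prime(C, p1B, p2E, p2B):
    """Rotation about the axis C-1B that takes 2E to 2B, following (12)-(18)."""
    n_hat = (p1B - C) / np.linalg.norm(p1B - C)                 # (12)
    Q = C + np.dot(p2B - C, n_hat) * n_hat                      # foot of the perpendicular, cf. (13)
    q2E, q2B = p2E - Q, p2B - Q                                 # (14), (15)
    cos_gamma = np.dot(q2E, q2B) / (np.linalg.norm(q2E) * np.linalg.norm(q2B))  # (16)
    gamma = np.arccos(np.clip(cos_gamma, -1.0, 1.0))
    if np.dot(np.cross(q2E, q2B), n_hat) < 0:                   # pick the direction taking 2E to 2B
        gamma = -gamma
    N = skew(n_hat)
    return np.eye(3) + N * np.sin(gamma) + N @ N * (1.0 - np.cos(gamma))  # (17), (18)
```

The returned matrix R′ is then composed with Ry and Rx as in (20) and applied to Ti as in (21).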

Fig. 6. The angle γ for R′.

Fig. 7. Points 1B, 2B and P2 are in the lower hemisphere (top view).

B. 1A, 2A on the upper and 1B, 2B on the lower hemisphere

In this section, the orientation is determined when the points in the first measurement (1A and 2A) move to the lower hemisphere (1B and 2B) in the second measurement. Since the points are arranged regularly, at least two different points (say 3B and 4B) move to the upper hemisphere in the second measurement. The relative position of the points is known; hence, 1B and 2B can be found based on the measurement of 3B and 4B.

Determining the rotations Rx, Ry and R′: Fig. 7 shows the schematic for the situation in which the initial points 1A and 2A move to the lower hemisphere as 1B and 2B. Using the measurements of the points 3B and 4B, the coordinates of 1B and 2B are found using the relative positioning of all the points. The calculation of Ry can be done as in the previous section, as seen in Fig. 7. Rx is determined using two rotations: Rx1, which moves 1A to P1, and Rx2, which moves P1 to P2, where Rx = Rx1 · Rx2. Rx1 is determined as in Section III-A. Rx2 is determined by a rotation about the X axis by an angle ε, as shown in Fig. 8; ε is calculated using the coordinates of the points P and 2B. R′ is determined as shown in Section III-A.
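The recovery of the occluded points relies only on the fact that all dots are rigidly attached to the sphere. One possible realisation, sketched below under our own assumptions (this is not necessarily the authors' exact procedure), builds an orthonormal frame from the two visible dots in both the sphere-fixed frame and the measured frame, in the spirit of the TRIAD construction, and maps the sphere-fixed coordinates of the occluded dots through the resulting rotation.

```python
import numpy as np

def frame_from_two_vectors(v1, v2):
    """Right-handed orthonormal frame built from two non-parallel vectors."""
    e1 = v1 / np.linalg.norm(v1)
    e3 = np.cross(v1, v2)
    e3 /= np.linalg.norm(e3)
    e2 = np.cross(e3, e1)
    return np.column_stack((e1, e2, e3))

def recover_occluded(body_pts, meas_3B, meas_4B, center):
    """Estimate 1B and 2B from the measured 3B and 4B.
    body_pts: dict of dot coordinates in the sphere-fixed frame (known by construction)."""
    A_body = frame_from_two_vectors(body_pts['3'], body_pts['4'])
    A_meas = frame_from_two_vectors(meas_3B - center, meas_4B - center)
    R_bw = A_meas @ A_body.T      # rotation taking sphere-fixed directions to measured ones
    p1B = center + R_bw @ body_pts['1']
    p2B = center + R_bw @ body_pts['2']
    return p1B, p2B
```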

Fig. 8. The view along the X axis after Cut 3.

IV. MEASUREMENTS FOR THE CO-ORDINATES

This paper mainly addresses the procedure to determine the orientation of the sphere, given that the coordinates of the required points are known. The coordinates of the center of the sphere and of the colored points on the sphere surface are determined using basic properties of optics. Consider the scene of image capture by the camera shown in Fig. 9 [1]. The focal length of the camera is f and the principal point is (px, py). For the perspective projection of the point C = (X, Y, Z) in space to be at pixel (u, v) on the image plane,

(u − px) / f = X / Z    (22)

which gives

X = Z (u − px) / f    (23)

and similarly,

Y = Z (v − py) / f    (24)

The Z coordinate of the center of the sphere is always equal to H, which is known. The X and Y coordinates of any point on the sphere can be found using the above method.


Fig. 9. Perspective projection of any point, here the center of the sphere [1].

Fig. 10. The sphere of radius 5 with the pattern of coloured dots.

Knowing that the surface is a sphere of radius r, the Z coordinate of that point can be determined using the equation of the sphere, where (Xc, Yc, Zc) are the coordinates of the center of the sphere:

(X − Xc)² + (Y − Yc)² + (Z − Zc)² = r²    (25)
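Read concretely, (22)–(24) say that the pixel (u, v) fixes a ray through the camera center; for the center of the sphere the depth is the known value Z = H, while for a dot on the surface the depth follows from intersecting the same ray with the sphere (25) and keeping the solution nearer the camera, i.e., on the upper hemisphere. The following is a minimal sketch of ours under these assumptions, with hypothetical parameter names:

```python
import numpy as np

def backproject_center(u, v, f, px, py, H):
    """Center of the sphere: its depth is the known distance Z = H, eqs. (23)-(24)."""
    return np.array([H * (u - px) / f, H * (v - py) / f, H])

def backproject_surface_point(u, v, f, px, py, center, r):
    """Dot on the sphere surface: intersect the pixel ray with the sphere, eq. (25)."""
    a, b = (u - px) / f, (v - py) / f
    Xc, Yc, Zc = center
    # substitute X = a*Z, Y = b*Z into (25): a quadratic in Z
    A = a * a + b * b + 1.0
    B = -2.0 * (a * Xc + b * Yc + Zc)
    C = Xc * Xc + Yc * Yc + Zc * Zc - r * r
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        raise ValueError("the pixel ray does not intersect the sphere")
    Z = (-B - np.sqrt(disc)) / (2.0 * A)   # smaller root: the point nearer the camera
    return np.array([a * Z, b * Z, Z])
```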

V. SIMULATION RESULTS

The method described in Section IV is implemented in simulation in this section. On the robot surface, fourteen coloured points are marked as shown in Fig. 10. The points are placed at intervals of π/4 radians along two perpendicular circles on the spherical robot (sphere) of radius R = 5 units. The spherical robot rolls on a circular path of radius ρ = 30 units, as shown in Fig. 11. The camera is assumed to be positioned at (−30, 0, 40), as shown in Fig. 11. A video of the complete circular motion can be found at the following link: http://tinyurl.com/rolling-robot. A random error of maximum magnitude 0.1 units is introduced in the measurement of the two points closest to the center of the circle when viewed from the camera. This results in an error in the calculated orientation matrix. The plot in Fig. 12 shows the error in the coordinates of the red point when obtained using the calculated orientation matrix. The maximum error in each case is less than 0.15 units.
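The rolling motion used in the simulation can be reproduced, for instance, by integrating the no-slip rolling constraint along the circular path. The fragment below is a rough sketch of ours (the authors' simulation setup may differ), assuming a world frame with the z axis pointing up, no spin about the vertical, and the path and sphere radii quoted above.

```python
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def rodrigues(axis_angle):
    """Rotation matrix for a rotation vector (axis * angle)."""
    theta = np.linalg.norm(axis_angle)
    if theta < 1e-12:
        return np.eye(3)
    N = skew(axis_angle / theta)
    return np.eye(3) + N * np.sin(theta) + N @ N * (1.0 - np.cos(theta))

r_sphere, rho = 5.0, 30.0           # sphere radius and circular-path radius
dt, steps = 0.05, 200
z_hat = np.array([0.0, 0.0, 1.0])
T = np.eye(3)                        # orientation of the sphere, T_i = I at the start
for k in range(steps):
    t = k * dt
    v = rho * np.array([-np.sin(t), np.cos(t), 0.0])   # center velocity along the circle
    omega = np.cross(z_hat, v) / r_sphere              # rolling without slipping
    T = rodrigues(omega * dt) @ T                      # incremental world-frame rotation
```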

Fig. 11. The spherical robot rolling on the circle (x − 30)² + y² = 30².

VI. CONCLUSION AND FUTURE WORK

In this paper we have presented a method to determine the orientation of a sphere rolling on a flat surface. Two cases are considered: first, when the initial points in the two images, between which the orientation change is to be calculated, are on the upper hemisphere, and second, when the initial points are occluded. The methods described are simple, geometric, and easy to compute. Future work involves the experimental validation of this work and real-time orientation feedback for a rolling spherical robot.

REFERENCES
[1] D. Bradley and G. Roth, "Natural interaction with virtual objects using vision-based six DOF sphere tracking," Advances in Computer Entertainment Technology, 2005.
[2] R. Zimmermann, Y. Gasteuil, M. Bourgoin, R. Volk, A. Pumir, J.-F. Pinton, et al., "Tracking the dynamics of translation and absolute orientation of a sphere in a turbulent flow," Review of Scientific Instruments, vol. 82, no. 3, p. 033906, 2011.
[3] M. Greenspan and I. Fraser, "Tracking a sphere dipole," 16th International Conference on Vision Interface, 2003.
[4] A. Saxena, J. Driemeyer, and A. Y. Ng, "Learning 3-D object orientation from images," in Robotics and Automation, 2009. ICRA '09. IEEE International Conference on, pp. 794–800, IEEE, 2009.


Fig. 12. Error in the X, Y and Z coordinates of the red point, determined using the calculated orientation matrix, plotted against time.

[5] M. D. Shuster and S. Oh, "Three-axis attitude determination from vector observations," Journal of Guidance, Control, and Dynamics, vol. 4, no. 1, pp. 70–77, 1981.
[6] F. L. Markley, "Attitude determination using vector observations and the singular value decomposition," The Journal of the Astronautical Sciences, vol. 36, no. 3, pp. 245–258, 1988.
[7] M. D. Shuster, "Maximum likelihood estimation of spacecraft attitude," Journal of the Astronautical Sciences, vol. 37, pp. 79–88, 1989.
[8] F. L. Markley, "Attitude determination using vector observations: a fast optimal matrix algorithm," Journal of the Astronautical Sciences, vol. 41, no. 2, pp. 261–280, 1993.
[9] F. L. Markley, "Attitude determination using two vector measurements," in NASA Conference Publication, pp. 39–52, NASA, 1999.
[10] F. L. Markley and D. Mortari, "Quaternion attitude estimation using vector observations," Journal of the Astronautical Sciences, vol. 48, no. 2, pp. 359–380, 2000.
[11] G. M. Lerner, "Three-axis attitude determination," Spacecraft Attitude Determination and Control, vol. 73, pp. 420–428, 1978.
[12] M. D. Shuster, "The generalized Wahba problem," The Journal of the Astronautical Sciences, vol. 54, no. 2, pp. 245–259, 2006.
