Global Vision Based Optimization of Control Functions and Kalman Filtering for Robotic Applications

Gourab Sen Gupta (1,3), Chris Messom (2) and Serge Demidenko (4)

1 IIST, Massey University, Palmerston North, New Zealand
2 IIMS, Massey University, Auckland, New Zealand
3 SEEE, Singapore Polytechnic, Singapore
4 School of Engineering & Science, Monash University, Kuala Lumpur, Malaysia
{g.sengupta, c.h.messom}@massey.ac.nz, [email protected], [email protected]

Abstract

This paper deals with a global vision based robotic control system. The two basic control functions that need to be implemented and tuned well are the function to orient a robot towards a target and the function to position it at a point in the area of operation. Fast response necessitates high gain, but at the risk of overshoots and unstable operation. Experimental data has been logged using the global vision system, and from the response characteristics the control parameters have been adapted for optimal performance. A PD control for the robot orientation function is discussed. The vision processing is done in real time, within the 16.67 ms sample time of an interlaced NTSC video image. Since odd and even fields are processed separately, there is inherent quantization noise in the system, which can make the system unstable when the gain is high. Kalman filtering has been introduced, with good results, to reduce noise and improve predictive control. The performance of the system with different filter parameters has been evaluated.
1. Introduction
In autonomous robotic applications, the robots must possess two basic capabilities – to turn towards a target and to move to a certain location. These basic functions may be combined to implement other functionalities and behaviour like intercepting a moving object, shooting a ball into the opponent’s goal in a robot soccer game and synchronized movements in a collaborative environment to accomplish a common task. In most applications, the accuracy of movement, whether angular or linear, is of critical importance. The vision system is an integral component of a modern autonomous mobile robot [1, 2]. Local vision systems of mobile robots have significant limitations primarily due to space constraints. Use of dedicated hardware makes them expensive and inflexible, though processing may be fast. In comparison, global vision systems, based on mass-produced off-the-shelf tools, are inexpensive and
flexible. With modern, fast computing machines, processing speed is no longer a significant constraint, and improvements in image processing algorithms have brought real-time vision processing into the realm of commodity hardware.

Figure 1 Software Hierarchy of a generic Multi-Agent Collaborative System. [Block diagram: Vision Software, Task Allocation, Control Layer, Communication Software, wireless comms link, Robotic Agent.]

Figure 1 shows the hierarchy of the software subsystems in a typical robotic application with global vision sensing and wireless control of agents. The global vision system (GVS) processes the image to identify the objects of interest, namely their position and orientation. The data is then passed to the Task Allocation subsystem, which decides the individual role assignment for each agent. The control layer implements the functions to orient and position the robots; these compute the velocity commands before they are transmitted to the robots. This paper discusses the implementation of a PD control for the robot orientation function. Global vision is used to collect and analyze data to tune the control parameters
for fast response and a near-critically damped system performance. Kalman filtering [3, 4] is a computationally efficient algorithm, which generates an optimal estimate from a sequence of noisy observations. It has proved very useful in autonomous and assisted navigation and guidance systems, radar tracking of moving objects, etc. The Kalman filter (KF) is a set of mathematical equations that provides an efficient computational (recursive) solution of the least-squares method. The filter is very powerful in several aspects: it supports estimates of past, present, and even future states, and it can do so even when the precise nature of the modelled system is unknown. In this study the inherent quantization noise in the vision system adds to the inaccuracies of object position detection. An effective implementation of KF, to reduce the consequences of noise, has been evaluated [5].
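The recursive predict-and-update structure described above can be illustrated with a minimal scalar example. This is a sketch under assumed values: the random-walk state model, noise variances and readings below are illustrative, not the paper's implementation (the fixed-gain form actually used appears in Section 3).

```python
# A minimal scalar Kalman filter sketch (illustrative model and values):
# estimating a constant quantity from noisy readings.
# x is the state estimate, p its variance.

def kalman_update(x, p, z, r, q=1e-4):
    """One predict-update cycle for a random-walk state model.

    x, p : prior estimate and its variance
    z    : new noisy measurement
    r    : measurement noise variance
    q    : process noise variance (model uncertainty)
    """
    p = p + q                  # predict: variance grows by process noise
    k = p / (p + r)            # Kalman gain: trust in the new measurement
    x = x + k * (z - x)        # update estimate towards the measurement
    p = (1 - k) * p            # updated variance shrinks
    return x, p

# Usage: noisy readings of a true value of 5.0
readings = [5.3, 4.8, 5.1, 4.9, 5.2, 5.0]
x, p = readings[0], 1.0
for z in readings[1:]:
    x, p = kalman_update(x, p, z, r=0.09)
```

The estimate converges towards the true value while its variance shrinks, which is the behaviour the filter exploits to smooth noisy vision data.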
2. Classical PD Control for Orientation
The global-vision based system forms a closed-loop control system and the data generated by the vision subsystem is ideally suited for implementation of a classic proportional-derivative (PD) control system [6, 7]. Feedback mechanisms in closed-loop systems enhance the ability to correctly orient (or position) the robot. In this technique, the output of the system is measured and compared to the desired input and corrective action is taken, based on the error e, to achieve the desired result. The control effort is calculated using equation 1.
v(t) = Kp·e(t) + Kd·de(t)/dt    (1)

where Kp is the proportional gain and Kd the derivative gain.
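As a sketch, equation (1) can be computed discretely once per video field, approximating the derivative by a frame-to-frame difference divided by the sample time. The gain values here are illustrative assumptions, not the paper's tuned values.

```python
# A discrete sketch of the PD control law of equation (1):
# de/dt is approximated by the per-frame error difference divided by
# the 16.67 ms sample time. Gains kp and kd are illustrative only.

T = 1 / 60.0  # sample time of one interlaced NTSC field, ~16.67 ms

def pd_control(error, prev_error, kp=0.4, kd=0.05):
    """Control effort v(t) = Kp*e(t) + Kd*de(t)/dt."""
    derivative = (error - prev_error) / T
    return kp * error + kd * derivative

# Usage: error shrinking from 90 to 85 degrees over one frame;
# the derivative term opposes the motion and damps the response
v = pd_control(85.0, 90.0)
```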
2.1. Robot Orientation Control

Figure 2 Robot's current and final direction. [Diagram showing the robot's current direction θr, the final (desired) direction α and the angle error θe.]

In Figure 2, the angles shown are:
θr – RobotAngle
θe – AngleError (negative in the case under consideration)
α – DesiredAngle

θe = α - θr    (2)
The robot needs to turn from its present orientation to the final direction, and the velocities of the left and right wheels, Vl and Vr, are calculated using equation (3):

Vl = -Kap·θe + Kad·Δθr
Vr = -Vl    (3)

Kap and Kad are the proportional and derivative gains, respectively. In each frame the robot's present orientation is measured, and the per-frame angular change Δθr is computed using equation (4):

Δθr = θr(t) - θr(t-1)    (4)
As the robot turns towards the desired direction, the angle error decreases and hence the velocity reduces. When the robot is finally in line with the desired angle, the angle error is zero, bringing the proportional control term, Kap·θe, to zero. The derivative control has the effect of damping the velocity. In the absence of an accurate dynamic model of the system, choosing the right values for Kap and Kad is a process of trial and error, often using the human eye to judge the response.
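Equations (3) and (4) can be sketched as follows. The angle-wrapping helper and the gain values are illustrative assumptions; opposite wheel velocities spin a differentially driven robot in place.

```python
# A sketch of the orientation control of equations (3) and (4).
# Gains and the angle-wrapping helper are illustrative assumptions.

def angle_error(desired, robot):
    """theta_e = alpha - theta_r, wrapped to (-180, 180] degrees."""
    return (desired - robot + 180.0) % 360.0 - 180.0

def wheel_velocities(desired, theta_now, theta_prev, kap=0.4, kad=0.3):
    e = angle_error(desired, theta_now)
    d_theta = theta_now - theta_prev      # per-frame angular change, eq (4)
    vl = -kap * e + kad * d_theta         # eq (3)
    vr = -vl                              # opposite wheels turn the robot
    return vl, vr

# Usage: robot at 10 degrees (was at 5 last frame), target 90 degrees
vl, vr = wheel_velocities(desired=90.0, theta_now=10.0, theta_prev=5.0)
```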
2.2. Gain Scheduled Proportional Control

In the case under scrutiny, the maximum angle by which the robot needs to turn is 90°, since the robot is symmetric, with no differentiation between its front and rear ends, and can move in both directions. Hence the maximum angle error is ±90°. It is proposed that, instead of a constant proportional gain, the gain be scheduled based on the absolute value of the angle error. This enables finer control over the response of the system, and, depending on the application, it may be tuned for optimal performance in a desired range of operation. The multi-valued Kap are shown in figure 3. A representative calculation of Kap and wheel velocities, for an example angle error, is shown in equation (5).
Figure 3 Multi-tiered proportional gains, Kap. [Plot of Kap vs. AngleError (degrees), with gain tiers Ka0, Ka10, Ka30, Ka50 and Ka70 stepping at 0°, 10°, 30°, 50°, 70° and 90°; AngleErrorX marks an example error between 30° and 50°.]

If the angle error is between 30° and 50°, say AngleErrorX (see figure 3), the wheel velocities are calculated as follows:

BaseAngle = 30
Offset = (30 - 10)·Ka10 + 10·Ka0
Kap = Ka30
Vl = -Kap·(AngleErrorX - BaseAngle) - Offset + Kad·Δθr
Vr = -Vl    (5)

This gain-scheduled Kap, together with the offset calculation, results in a smooth change of Vl and Vr without sudden jumps at the boundaries of the steps.

Apart from the gain-scheduled Kap proposed above, we introduced a scaling factor which affects all the Kap values uniformly, as shown in equation (6). For example, with scaleFactor = 1.25:

Ka70 = 0.2857·scaleFactor
Ka50 = 0.3429·scaleFactor
Ka30 = 0.40·scaleFactor
Ka10 = 0.457·scaleFactor
Ka5 = 0.2857·scaleFactor
Ka2 = 0.15·scaleFactor
Ka0 = 0.10·scaleFactor    (6)

The effect of different scaling factors is seen in figure 4. The robot, oriented towards 0°, is given a stimulus to turn to 90°. The best response is achieved when the scaling factor is 0.85; the system is close to being critically damped. For a scaling factor of 0.8, the robot does not reach the steady-state value of 90°; the system is over-damped. For scaling factors greater than 0.85, the system exhibits substantial overshoots; it is under-damped. Observing the system response with the human eye did not reveal these stark differences, as the angular motion of the robot is too fast and steady-state conditions are reached within 25 frames, i.e. about 0.5 sec.

Figure 4 Effect of increasing proportional gain, Kap. [System response, angle (degrees) vs. frame count (x 16.67 ms), for scaleFactor = 1.25, 1.0, 0.85 and 0.8, showing under-damped, near-critically damped and over-damped behaviour.]

2.3 Fine tuning the proportional gain, Kap

It is possible to further improve the system performance by tuning the shape of the gain-scheduled values of Kap. Changing the values, even by a small amount, can yield significant performance improvements, as shown in figure 5.

Figure 5 Fine-tuning proportional gain, Ka. Ka70 has been increased from 0.2857 to 0.35 and Ka5 has been increased from 0.2857 to 0.3. [System response, angle (degrees) vs. frame count (x 16.67 ms), for the original and fine-tuned gains.]

2.4 Faster system response

By changing the proportional gain Ka70 from 0.2857 to 0.35, the robot responds significantly faster. This is shown in figure 6. At the 10th frame count (166.7 ms), the robot turns an extra 5.72 degrees, an improvement of 6.35%, given that the robot's final angle is 90° starting from 0°. Even a slight additional change in Ka5, from 0.2857 to 0.3, results in an overshoot.

Figure 6 Faster system response. [Zoomed system response, angle (degrees) vs. frame counts 6-16, for Ka70 = 0.2857 vs. Ka70 = 0.35 with Ka5 = 0.3, showing the extra 5.72° at frame 10.]
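The gain scheduling with boundary offsets can be sketched as below. This follows the band edges of figure 3 and the offset construction of equation (5); the finer Ka2/Ka5 tiers of equation (6) are omitted for brevity, and scaleFactor = 0.85 is the near-critically damped setting reported for figure 4. The helper name and structure are illustrative, not the paper's code.

```python
# A sketch of the gain-scheduled proportional control of equation (5):
# each band of angle error uses its own gain Ka, and an offset summed
# over the lower bands keeps Vl continuous at the band boundaries.

scaleFactor = 0.85  # near-critically damped setting from figure 4

# (lower band edge in degrees, unscaled gain), ordered from 0 upward
BANDS = [(0, 0.10), (10, 0.457), (30, 0.40), (50, 0.3429), (70, 0.2857)]

def scheduled_vl(angle_error, d_theta=0.0, kad=0.0):
    """Left wheel velocity as in equation (5); Vr is simply -Vl."""
    e = abs(angle_error)
    offset, base, kap = 0.0, 0, BANDS[0][1] * scaleFactor
    for i, (edge, gain) in enumerate(BANDS):
        if e < edge:
            break
        base, kap = edge, gain * scaleFactor
        if i > 0:  # add the full contribution of the band just below
            prev_edge, prev_gain = BANDS[i - 1]
            offset += (edge - prev_edge) * prev_gain * scaleFactor
    vl = -kap * (e - base) - offset + kad * d_theta
    return vl if angle_error >= 0 else -vl

# An error of 40 degrees lands in the 30-50 degree band (gain Ka30),
# with Offset = (30-10)*Ka10 + 10*Ka0, exactly as in equation (5)
vl = scheduled_vl(40.0)
```

Because the offset accumulates the full contribution of every lower band, Vl is continuous as the error crosses a band edge, which is what removes the sudden velocity jumps at the step boundaries.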
2.5 The effect of Derivative gain, Kad

Just as the proportional gain has been gain scheduled, it is proposed that the derivative gain, Kad, be gain scheduled too. Observations indicate that the resolution of the Kad gain schedule need not be as fine as that for Kap. Figure 7 shows the effect of Kad on the system response.

Figure 7 Gain Scheduled Derivative gains, Kad. [System response, angle (degrees) vs. frame count (x 16.67 ms), for three cases: Kad = 0 throughout; Kad = 0.1 for 10°–90°, 0.05 for 5°–10°, 0 for 0°–5°; and Kad = 0.2 for 50°–90°, 0.1 for 10°–50°, 0.05 for 5°–10°, 0 for 0°–5°.]

3. Kalman Filtering

The inherent noise in the vision sensor system, as well as the quantization noise, can substantially degrade the system performance. For high gains it may even make the system unstable. The use of filtering to smooth the noise was investigated. The ball in a robot soccer game was used as a test object.

3.1 Filter description

The filter can be described by the following equations:

\dot{x}_n = \hat{\dot{x}}_n + (h/T)(y_n - \hat{x}_n)    (7)

x_n = \hat{x}_n + g(y_n - \hat{x}_n)    (8)

where
x_n : filtered (smoothed) estimate of the position/angle at time n
\dot{x}_n : filtered (smoothed) estimate of the linear/angular velocity at time n
\hat{x}_n : predicted estimate of the position/angle at time n, based on the sensor values at time n-1 and all preceding times
\hat{\dot{x}}_n : predicted estimate of the linear/angular velocity at time n, based on the sensor values at time n-1 and all preceding times
T : sample time (scan-to-scan period) for the system
y_n : sensor reading at time n
g and h : filter parameters

These equations provide an updated estimate of the present object linear/angular velocity and position/angle based on the present measurement of the object position/angle, y_n, as well as on prior measurements. The predicted linear/angular velocity and position/angle at time n are calculated using a constant-velocity model:

\hat{\dot{x}}_n = \dot{x}_{n-1}    (9)

\hat{x}_n = x_{n-1} + T\dot{x}_{n-1} = x_{n-1} + T\hat{\dot{x}}_n    (10)

Equations (9) and (10) allow the transition from the velocity and position/angle at time n-1 to the linear/angular velocity and position/angle at time n. These transition equations, together with (7) and (8), allow an object to be tracked, and are combined to give just two tracking-filter equations:

\hat{\dot{x}}_n = \hat{\dot{x}}_{n-1} + (h/T)(y_{n-1} - \hat{x}_{n-1})    (11)

\hat{x}_n = \hat{x}_{n-1} + T\hat{\dot{x}}_n + g(y_{n-1} - \hat{x}_{n-1})    (12)

Equations (11) and (12) are used to obtain running estimates of the target linear/angular velocity and position/angle. The constant-velocity model is suitable for the turning robot as it does not accelerate or decelerate significantly.
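The tracking recursion of equations (11) and (12) is a fixed-gain (g-h) filter and can be sketched directly. The code below assumes the 16.67 ms field period and uses the g = 0.7, h = 0.3 pair evaluated later in the paper, applied to synthetic constant-velocity data rather than real vision measurements.

```python
# A sketch of the tracking-filter recursion of equations (11) and (12):
# fixed-gain prediction of position and velocity from noisy readings.
# T is the 16.67 ms field period; data and seeding are illustrative.

T = 1 / 60.0

def gh_track(readings, g=0.7, h=0.3):
    """Return the predicted position after each reading, per eqs (11)-(12)."""
    x_hat = readings[0]   # predicted position, seeded from the first reading
    v_hat = 0.0           # predicted velocity
    predictions = []
    for y in readings:
        residual = y - x_hat
        v_hat = v_hat + (h / T) * residual        # eq (11)
        x_hat = x_hat + T * v_hat + g * residual  # eq (12)
        predictions.append(x_hat)
    return predictions

# Usage: a target moving at a constant 3 cm per frame with small
# alternating measurement noise, as in the constant-velocity model
truth = [3.0 * n for n in range(20)]
noisy = [p + (0.1 if n % 2 else -0.1) for n, p in enumerate(truth)]
preds = gh_track(noisy)
```

After a short transient the predictions lock onto the constant-velocity motion, with only a small residual oscillation from the measurement noise, which is the predictive behaviour exploited for control.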
3.2 Evaluating Filter Performance

Figure 8 Plot of ball position (dx). [Per-frame ball displacement dx, unfiltered vs. filtered with g=0.6, h=0.4.]

Figure 9 Plot of ball position (dy). [Per-frame ball displacement dy, unfiltered vs. filtered with g=0.6, h=0.4.]

Figures 8 and 9 plot the ball position for unfiltered data and for data filtered with parameters g=0.6 and h=0.4. The segment of data shown is the region in which the maximum unfiltered dx and dy values occur. The maximum unfiltered dx was 0.25 cm, which was reduced to 0.08 cm after filtering, while the maximum unfiltered dy was 0.23 cm, which was reduced to 0.12 cm. The higher h value dampens the response of the system, giving better performance in the case of a stationary object.

3.3 Combining Filter and PD Control

The effect of combining the g-h filter with the PD control was studied. Figure 10 shows the effect of filtering on the system response with derivative control. The parameters used to filter the robot's angle were g=0.8 and h=0.2. Without filtering, the system is slightly over-damped. The oscillations in the angle are due to the controller's response to vision sensor noise.

Figure 10 Filtering with derivative control. [System response, angle (degrees) vs. frame count (x 16.67 ms), with Kalman filtering vs. no filtering.]

Figure 11 shows the system response for several different filter parameters, but without derivative gain. The best system performance is achieved for g=0.7, h=0.3. It can be seen that the Kalman-filtered vision data without derivative control provides the best system response. The Kalman filter, when used together with a derivative controller, can be improved by increasing the proportional gain for small angle errors (see figure 10).

Figure 11 Filtering without derivative control. [System response, angle (degrees) vs. frame count (x 16.67 ms), for no filtering and for g=0.7/h=0.2, g=0.6/h=0.4, g=0.8/h=0.2 and g=0.7/h=0.3.]

4. Conclusions

The paper discussed the issues related to tuning the gain parameters of a PD controller for accurately turning a robot towards a desired direction. The effect of different proportional and derivative gains was analyzed using data captured from a global vision system. Kalman filtering was employed to minimize the effects of vision sensor noise, and its combination with a PD controller was evaluated. It has been demonstrated that the system performance can be greatly improved using the filtered vision data.

Acknowledgements

The collaborative work was supported by a grant from ASIA 2000 HEEP funds and carried out jointly at Massey University and the Advanced Robotic and Intelligent Control Centre, Singapore Polytechnic.

References

[1] B. Horn, Robot Vision (MIT Electrical Engineering and Computer Science), MIT Press, 1986.
[2] I. J. Cox and G. T. Wilfong, Autonomous Robot Vehicles, Springer-Verlag, 1990.
[3] B. Cipra, "Engineers Look to Kalman Filtering for Guidance", SIAM News, Vol. 26, No. 5, August 1993.
[4] R. E. Kalman, "A New Approach to Linear Filtering and Prediction Problems", Trans. ASME J. Basic Eng., Vol. 82D, pp. 35-45, 1960.
[5] C. H. Messom, G. Sen Gupta, S. Demidenko and Lim Yuen Siong, "Improving Predictive Control of a Mobile Robot: Application of Image Processing and Kalman Filtering", Proceedings of the Instrumentation and Measurement Technology Conference, Vail, USA, pp. 1492-1496, 2003.
[6] K. Ogata, Modern Control Engineering, 4th Edition, Prentice Hall, 2003.
[7] L. Huang, W. Yu and S. K. Jhajharia, "Speed Control of Differentially Driven Wheeled Mobile Robots – Tracking and Synchronization", Proceedings of the Instrumentation and Measurement Technology Conference, Vail, USA, pp. 1407-1412, 2003.