Optical sensor system using computer vision for the level measurement in oil tankers

C. Gaber, K. Chetehouna, H. Laurent
Institut PRISME, ENSI de Bourges - Université d'Orléans, 88 boulevard Lahitolle, 18020 Bourges Cedex, France
Email: [email protected]

C. Rosenberger
Laboratoire GREYC, ENSICAEN - Université de Caen - CNRS, 6 boulevard du Maréchal Juin, 14050 Caen Cedex, France

S. Baron
Enraf Marine Systems SAS, Bât 59, rue Isaac Newton, ZA Port Sec Nord - Esprit 1, 18000 Bourges, France
Abstract— This paper proposes an optical sensor for the measurement of level and volume in oil tanker tanks. It is based on computer vision and uses the coordinates of three points located at the top of the liquid to measure its height. Even though the final sensor will be composed of only one camera, a calibration step involving two cameras is necessary in order to determine the coordinates of each point by triangulation. In this paper, we illustrate the principle of the proposed sensor and give some experimental measurement results.
I. INTRODUCTION

Many sensors dedicated to the measurement of liquid level are used in various industrial fields. Currently used fluid measurement methods are often based on completely or partially immersed sensors. They rely either on capacitance alone [1], on a passive inductor-capacitor circuit [2], or on optical fiber technology [3]. For an application such as monitoring the liquid height in tanker tanks, several requirements must be met: no moving parts, no contact with the product, a precision of about one millimeter, low cost, and conformity with the ATEX regulation.

Non-contact sensors for fluid-level measurement are mainly based on ultrasonics. They work by emitting short ultrasonic pulses from a transducer towards the liquid surface and measuring their reflections. These methods have the major drawback of being very sensitive to temperature fluctuations, which directly influence the speed of sound and consequently the estimation of the distance to the surface. Another significant factor in the performance of an ultrasound sensor is the environment: for some applications, the presence of specific gases can severely attenuate ultrasound and reduce its efficiency [4].

The sensor presented in this paper is an alternative solution based on computer vision, applicable whatever the environmental conditions. This approach is new and addresses an important need in the field of level measurement in tanker tanks, as it also gives access to information about the pitching and rolling of the boat. The aim of this paper is to demonstrate the feasibility of this technique. We first present the system and its working principles. We then give some preliminary experimental results and their analysis. Finally, we conclude this study and give prospects for improvement.
Fig. 1. Optical sensor working diagram
A. Presentation

In order to measure the level in a tanker tank, we have decided to project three laser beams onto the liquid surface. The aim is to deduce the equation of the surface plane in the camera reference frame. Fig. 1 presents the optical sensor working diagram. This technique will also enable us to gather information about the pitching and rolling of the boat. The theory related to the pinhole camera model [5]-[7] enables us to deduce the coordinates of a point in the image plane if we know its coordinates in the global reference frame (O, X_G, Y_G, Z_G). This model leads to the following relation:
\begin{pmatrix} su \\ sv \\ s \end{pmatrix} = M \begin{pmatrix} X_G \\ Y_G \\ Z_G \\ 1 \end{pmatrix}    (1)

with:

M = \begin{pmatrix} F_x & \gamma & u_0 \\ 0 & F_y & v_0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} r_1 & r_2 & r_3 & t_x \\ r_4 & r_5 & r_6 & t_y \\ r_7 & r_8 & r_9 & t_z \end{pmatrix}
Fig. 2. Image plane position in the global reference frame
where

\begin{pmatrix} F_x & \gamma & u_0 \\ 0 & F_y & v_0 \\ 0 & 0 & 1 \end{pmatrix} and \begin{pmatrix} r_1 & r_2 & r_3 & t_x \\ r_4 & r_5 & r_6 & t_y \\ r_7 & r_8 & r_9 & t_z \end{pmatrix}

are respectively the intrinsic and extrinsic parameters of the camera, and

\begin{pmatrix} su \\ sv \\ s \end{pmatrix} and \begin{pmatrix} X_G \\ Y_G \\ Z_G \\ 1 \end{pmatrix}

are respectively the laser point coordinates in the image plane and in the global reference frame. Fig. 2 sums up the considered situation. However, the laser point coordinates are only known in the image plane, and the matrix M gathering the camera parameters cannot be inverted. We therefore cannot directly apply this model, and the use of a second camera is an alternative solution.
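To make the projection model concrete, the following minimal NumPy sketch implements equation (1); the intrinsic and extrinsic values used here are placeholders for illustration, not calibration results from the paper.

import numpy as np

def project(point_G, K, Rt):
    # Project a 3D point from the global frame into the image plane (equation (1)).
    # K  : 3x3 intrinsic matrix [[Fx, gamma, u0], [0, Fy, v0], [0, 0, 1]]
    # Rt : 3x4 extrinsic matrix [R | t]
    M = K @ Rt                               # M gathers intrinsic and extrinsic parameters
    su, sv, s = M @ np.append(point_G, 1.0)  # homogeneous global coordinates
    return su / s, sv / s                    # pixel coordinates (u, v)

# Placeholder calibration values:
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
Rt = np.hstack([np.eye(3), np.array([[0.0], [0.0], [500.0]])])
print(project((100.0, 50.0, 400.0), K, Rt))  # -> (u, v) in pixels

Going the other way, from (u, v) back to (X_G, Y_G, Z_G), is exactly what the single-camera model cannot do, hence the second camera introduced below.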
B. Implementation of the triangulation method

The principle is based on stereovision [8]. A second camera (camera 2) allows us to determine the coordinates of a point in the main camera (camera 1) reference frame by triangulation [9], [10]. Fig. 3 illustrates the considered situation. Knowing the laser point coordinates (x_1, y_1) in the main camera image plane, the laser point coordinates (x_2, y_2) in the second camera image plane, the intrinsic parameters of each camera, and the extrinsic parameters of the second camera with respect to the main one, we can determine the laser point coordinates (X, Y, Z) in the main camera reference frame. As the final sensor should be made of only one camera, a calibration is necessary before any use of this sensor in order to determine the equations of the laser beams. The coordinates of the three laser points are then given by the intersection of the laser beams and the various optical lines (which go through each image point, each laser point and the camera optical center).
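For readers who want to reproduce the triangulation step, stereovision libraries provide it directly. The sketch below uses OpenCV's cv2.triangulatePoints; every numerical value in it (intrinsics, relative pose, pixel coordinates) is hypothetical, standing in for the calibration results.

import numpy as np
import cv2

# Placeholder intrinsics and relative pose (camera 2 with respect to camera 1):
K1 = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
K2 = K1.copy()
R = np.eye(3)                          # hypothetical rotation of camera 2
t = np.array([[120.0], [0.0], [0.0]])  # hypothetical baseline (mm)

P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])  # main camera projection matrix
P2 = K2 @ np.hstack([R, t])                         # second camera projection matrix

pts1 = np.array([[320.0], [250.0]])  # laser spot (x1, y1) in image 1 (hypothetical)
pts2 = np.array([[300.0], [250.0]])  # laser spot (x2, y2) in image 2 (hypothetical)

Xh = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous 4x1 result
X, Y, Z = (Xh[:3] / Xh[3]).ravel()              # (X, Y, Z) in the camera 1 frame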
Fig. 3. Triangulation principle
Firstly, the coordinates of many points of a laser beam are obtained in the main camera reference frame by triangulation as described earlier. We then compute the least squares line with a genetic optimization algorithm [11], [12]. This line is determined by the intersection of two planes (\Pi_1) and (\Pi_2) whose equations are:

(\Pi_1): Z = p_1 X + p_2 Y + p_3    (2)

(\Pi_2): Z = p_4 X + p_5 Y + p_6    (3)
The function to be minimized can be written as:

Obj_1(p) = \sum_{i=1}^{N} \left[ (Z_i - p_1 X_i - p_2 Y_i - p_3)^2 + (Z_i - p_4 X_i - p_5 Y_i - p_6)^2 \right]    (4)

where N is the number of chosen laser points and p = (p_1, p_2, p_3, p_4, p_5, p_6) is the parameter vector to be determined. From equations (2) and (3), we can write the parametric equation of the laser beam as follows:

\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = \lambda \begin{pmatrix} 1 \\ a_1 \\ a_2 \end{pmatrix} + \begin{pmatrix} 0 \\ b_1 \\ b_2 \end{pmatrix}    (5)

with a_1 = \frac{p_1 - p_4}{p_5 - p_2}, a_2 = p_1 + p_2 \frac{p_1 - p_4}{p_5 - p_2}, b_1 = \frac{p_3 - p_6}{p_5 - p_2} and b_2 = p_3 + p_2 \frac{p_3 - p_6}{p_5 - p_2}.
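As a quick consistency check, converting the plane parameters of equations (2)-(3) into the parametric form (5) takes only a few lines; with the calibration values later reported in Table II, this reproduces the published (a_1, a_2, b_1, b_2) up to the rounding of the printed parameters.

def line_from_planes(p1, p2, p3, p4, p5, p6):
    # Parametric form (5) of the intersection line of planes (2) and (3):
    # (X, Y, Z) = lambda * (1, a1, a2) + (0, b1, b2)
    denom = p5 - p2          # assumes the two fitted planes are not parallel
    a1 = (p1 - p4) / denom
    b1 = (p3 - p6) / denom
    a2 = p1 + p2 * a1
    b2 = p3 + p2 * b1
    return a1, a2, b1, b2

# Calibration values of Table II:
print(line_from_planes(-16.5, 70.6, -998.2, -4.9, 12.3, 400.5))
# -> approximately (0.199, -2.45, 23.99, 695.6), i.e. Table II up to rounding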
Secondly, the coordinates of the three laser points in the main camera reference frame can be determined by the intersection of the line characterized by (5) and the different optical lines given by:

\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = \tilde{\lambda} \begin{pmatrix} A \\ B \\ 1 \end{pmatrix}    (6)

where the parameters A and B can be obtained from the pixel coordinates of the image point.
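The paper does not spell out how A and B follow from the pixel coordinates. Under the pinhole model (1), with the main camera taken as the reference frame (identity extrinsics), one plausible derivation from su = F_x X + \gamma Y + u_0 Z, sv = F_y Y + v_0 Z and s = Z gives the sketch below; treat it as an assumption, not the authors' formula.

def optical_line_params(u, v, Fx, Fy, gamma, u0, v0):
    # Direction parameters (A, B) of equation (6) for the optical line
    # through pixel (u, v), with A = X/Z and B = Y/Z in the camera frame.
    B = (v - v0) / Fy
    A = (u - u0 - gamma * B) / Fx
    return A, B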
Fig. 4. Experimental system

Fig. 5. Acquisition of the N experimental points
Every laser point is determined by a couple of parameters (\lambda, \tilde{\lambda}) obtained by equating equations (5) and (6). In practice, we have considered the intersection point as the middle of the shortest segment linking the laser line and the optical lines. Thus, a new function has been taken into account:
Obj_2(\lambda, \tilde{\lambda}) = (\lambda - \tilde{\lambda} A)^2 + (a_1 \lambda + b_1 - \tilde{\lambda} B)^2 + (a_2 \lambda + b_2 - \tilde{\lambda})^2    (7)

The minimization algorithm is based on the simplex method [13], [14]. The laser point coordinates are then given by the following relation:

X = \frac{1}{2} (\lambda + \tilde{\lambda} A), \quad Y = \frac{1}{2} (a_1 \lambda + b_1 + \tilde{\lambda} B), \quad Z = \frac{1}{2} (a_2 \lambda + b_2 + \tilde{\lambda})    (8)
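A minimal sketch of this step, using SciPy's Nelder-Mead simplex implementation (in the spirit of [13]; the authors' exact implementation is not given): for each optical line (A, B), it minimizes Obj_2 of equation (7) and returns the midpoint of equation (8).

import numpy as np
from scipy.optimize import minimize

def laser_point(a1, a2, b1, b2, A, B):
    # Midpoint of the shortest segment between the laser line (5)
    # and the optical line (6), via equations (7) and (8).
    def obj2(params):
        lam, lam_t = params
        return ((lam - lam_t * A) ** 2
                + (a1 * lam + b1 - lam_t * B) ** 2
                + (a2 * lam + b2 - lam_t) ** 2)

    res = minimize(obj2, x0=[1.0, 1.0], method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-12})
    lam, lam_t = res.x
    X = 0.5 * (lam + lam_t * A)
    Y = 0.5 * (a1 * lam + b1 + lam_t * B)
    Z = 0.5 * (a2 * lam + b2 + lam_t)
    return X, Y, Z

# With the laser line of Table II and the optical line of point 1 in Table III:
print(laser_point(0.2, -2.5, 24.0, 698.3, 0.37, 0.14))
# -> close to the published (134.0, 50.4, 358.2); the residual gap comes
#    from the rounding of the printed parameters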
C. Experiments and results

The main steps of the experimental protocol are the following. We have fixed the system composed of two cameras and one laser; Fig. 4 presents the experimental system. Afterwards, we have calibrated both cameras to determine their intrinsic and extrinsic parameters. We have then taken one photo per camera for each chosen laser point (N = 20), as shown in Fig. 5. Once the images are acquired, the coordinates of these points are calculated by triangulation in the main camera (camera 1) reference frame. The laser line equation is determined as described in the previous paragraph and is given by equation (5). Tables I and II present the results obtained for some points during the calibration step. The various points of the laser line can simulate various liquid levels, so a validation of the technique is possible. To this end, we have determined all the optical lines given by equation (6) and all the coordinates given by equations (7) and (8). The obtained results are shown in Table III and Fig. 6.

TABLE I
COORDINATES IN THE IMAGE PLANES AND IN THE CAMERA 1 REFERENCE FRAME OBTAINED BY TRIANGULATION - CALIBRATION STEP

Point   Image plane camera 1    Image plane camera 2    Camera 1 frame by triangulation
number  (x1, y1) (in pixels)    (x2, y2) (in pixels)    (Xtri, Ytri, Ztri) (in mm)
1       (326.93, 599.67)        (309.92, 274.78)        (134.0, 50.5, 358.0)
5       (266.36, 386.14)        (263.09, 439.06)        (35.1, 30.9, 609.0)
10      (242.80, 302.90)        (245.91, 500.91)        (-56.0, 13.2, 840.8)
15      (231.00, 263.50)        (237.75, 529.62)        (-129.0, -1.6, 1026.3)
20      (219.80, 223.60)        (229.58, 558.08)        (-245.1, -24.5, 1320.5)
TABLE II
PARAMETERS OF THE PLANES AND OF THE PARAMETRIC EQUATION OF A LASER LINE - CALIBRATION STEP

Parameters of planes (\Pi_1) and (\Pi_2):
p_1 = -16.5   p_2 = 70.6   p_3 = -998.2   p_4 = -4.9   p_5 = 12.3   p_6 = 400.5

Parameters of the parametric equation of the laser line:
a_1 = 0.2   a_2 = -2.5   b_1 = 24.0   b_2 = 698.3
TABLE III
COORDINATES IN THE CAMERA 1 REFERENCE FRAME OBTAINED BY INTERSECTION OF THE LASER LINE AND THE OPTICAL LINES - VALIDATION FOR THE MEASUREMENT STEP

Point   Parameters (A, B)    Laser point coordinates by intersection   Relative errors |(Xtri-Xopt)/Xtri|,
number  of the optical       of the laser and optical lines            |(Ytri-Yopt)/Ytri|, |(Ztri-Zopt)/Ztri|
        lines                (Xopt, Yopt, Zopt) (in mm)                (in %)
1       (0.37, 0.14)         (134.0, 50.4, 358.2)                      (0.02, 0.14, 0.04)
5       (0.06, 0.05)         (35.1, 31.0, 609.2)                       (0.04, 0.14, 0.05)
10      (-0.07, 0.02)        (-55.9, 13.1, 840.2)                      (0.14, 0.52, 0.07)
15      (-0.13, -0.00)       (-128.9, -1.7, 1025.7)                    (0.04, 3.50, 0.06)
20      (-0.19, -0.02)       (-245.1, -24.5, 1320.4)                   (0.01, 0.08, 0.01)
Fig. 6. Difference between the coordinates determined by triangulation and those found by intersection of the laser line and optical lines
The analysis of the obtained results enables us to ascertain that the gaps between the level values obtained during the calibration step and those found by the intersection of the laser and optical lines are below 1 mm. This result shows the relevance of the proposed method regarding the imposed constraints. The level measurement (or height H) of the liquid in the tank can be deduced from the equation of the plane formed by the three laser points in the main camera reference frame (X, Y, Z). Let (X_1, Y_1, Z_1), (X_2, Y_2, Z_2) and (X_3, Y_3, Z_3) respectively be the coordinates of laser points 1, 2 and 3. The equation of the plane associated with these points is given by the relation:

\alpha X + \beta Y + \gamma Z + \delta = 0    (9)

where:

\alpha = (Z_3 - Z_1)(Y_2 - Y_1) - (Y_3 - Y_1)(Z_2 - Z_1)
\beta = (X_3 - X_1)(Z_2 - Z_1) - (X_2 - X_1)(Z_3 - Z_1)
\gamma = (X_2 - X_1)(Y_3 - Y_1) - (X_3 - X_1)(Y_2 - Y_1)
\delta = -\alpha X_1 - \beta Y_1 - \gamma Z_1    (10)

The objective is to measure the height H of the liquid at the center of the tank. The coordinates (X_G^c, Y_G^c) of the tank center in the horizontal plane are known. The height H therefore corresponds to the Z_G coordinate and can be determined from the pinhole camera model:

\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} = \begin{pmatrix} r_1 & r_2 & r_3 & t_x \\ r_4 & r_5 & r_6 & t_y \\ r_7 & r_8 & r_9 & t_z \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} X_G^c \\ Y_G^c \\ H \\ 1 \end{pmatrix}

or equivalently:

X = r_1 X_G^c + r_2 Y_G^c + r_3 H + t_x
Y = r_4 X_G^c + r_5 Y_G^c + r_6 H + t_y
Z = r_7 X_G^c + r_8 Y_G^c + r_9 H + t_z    (11)
Combining (11) and (9) leads to the following expression for the liquid height:

H = - \frac{\delta + (\alpha r_1 + \beta r_4 + \gamma r_7) X_G^c + (\alpha r_2 + \beta r_5 + \gamma r_8) Y_G^c + \alpha t_x + \beta t_y + \gamma t_z}{\alpha r_3 + \beta r_6 + \gamma r_9}    (12)

The (X_G^c, Y_G^c) coordinates of the tank center in the horizontal plane are measured when the tank is being filled. The application of (12) then gives the liquid level in the tank. From this equation, the relative error of the liquid level is given by:

\frac{\Delta H}{H} (\%) = 100 \sum_{i=1}^{3} \left( \left| \frac{\partial H}{\partial X_i} \right| \Delta X_i + \left| \frac{\partial H}{\partial Y_i} \right| \Delta Y_i + \left| \frac{\partial H}{\partial Z_i} \right| \Delta Z_i \right)    (13)

where we consider that the intrinsic and extrinsic parameters of the camera and the coordinates (X_G^c, Y_G^c) are perfectly known.
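The full height computation of equations (9)-(12) fits in a short function. The sketch below is a direct transcription of those equations; the variable names and the 3x4 packing of the extrinsic parameters are ours, not the paper's.

import numpy as np

def liquid_height(P1, P2, P3, Rt, XGc, YGc):
    # Height H of the liquid at the tank center, equations (9)-(12).
    # P1, P2, P3 : the three laser points in the main camera frame (equation (8))
    # Rt         : 3x4 NumPy extrinsic matrix [r1..r9 | tx, ty, tz] of the main camera
    # XGc, YGc   : tank center coordinates in the horizontal plane
    P1, P2, P3 = map(np.asarray, (P1, P2, P3))
    n = np.cross(P2 - P1, P3 - P1)   # normal (alpha, beta, gamma), equation (10)
    delta = -n @ P1                  # equation (10)
    R, t = Rt[:, :3], Rt[:, 3]
    num = delta + (n @ R[:, 0]) * XGc + (n @ R[:, 1]) * YGc + n @ t
    den = n @ R[:, 2]                # alpha*r3 + beta*r6 + gamma*r9
    return -num / den                # equation (12)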
We are now testing, at a laboratory scale, the sensor described in this paper for the measurement of liquid levels (water, oil, etc.). We have therefore designed an experimental set-up based on a 200 L vessel, a camera and three laser pointers. This set-up is presented in Fig. 7.
Fig. 7. Experimental set-up
II. CONCLUSION

We have proposed in this paper a new sensor based on computer vision to measure the liquid level in tankers. After presenting the principle of the developed sensor, some preliminary experimental results and their analysis have been given. We have shown that this sensor is simple to install and performs well. The theoretical calculation of the liquid height and of its relative error has been presented. A feasibility study of this technique, at a laboratory scale, is now in progress.

ACKNOWLEDGMENT

This research has been done with the support of the project "Enraf Marine Systems SAS - ENSI de Bourges 2006-2007". The authors thank Emmanuel MENNESSON for his help in designing the experimental set-up.
REFERENCES

[1] F. Reverter, X. Li and G. C. M. Meijer, Liquid-level measurement system based on a remote grounded capacitive sensor, Sensors and Actuators A, vol. 138, pp. 1-8, 2007.
[2] S. E. Woodard and B. D. Taylor, A wireless fluid-level measurement technique, Sensors and Actuators A, vol. 137, pp. 268-278, 2007.
[3] M. Lomer, J. Arrue, C. Jauregui, P. Aiestaran, J. Zubia and J. M. López-Higuera, Lateral polishing of bends in plastic optical fibres applied to a multipoint liquid-level measurement sensor, Sensors and Actuators A, vol. 137, pp. 68-73, 2007.
[4] M. Jeffries, E. Lai and J. B. Hull, Fuzzy flow estimation for ultrasound-based liquid level measurement, Engineering Applications of Artificial Intelligence, vol. 15, pp. 31-40, 2002.
[5] P. Brand, Reconstruction tridimensionnelle à partir d'une caméra en mouvement : de l'influence de la précision, Thèse de doctorat, Université Claude Bernard Lyon I, 1995.
[6] M. Bénallal, Système de calibration de caméra. Localisation de forme polyédrique par vision monoculaire, Thèse de doctorat, École des Mines de Paris, 2002.
[7] B. Telle, Méthode ensembliste pour une reconstruction tridimensionnelle garantie par stéréovision, Thèse de doctorat, Université de Montpellier II, 2003.
[8] G. Bougeniere, P. Moulon, C. Rosenberger and W. Smari, On the determination of 3D trajectory of moving targets by stereovision, IEEE International Conference on Computers, Communications and Signal Processing (CCSP), 2005.
[9] S. D. Cochran and G. Medioni, 3-D surface description from binocular stereo, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, pp. 981-994, 1992.
[10] R. Malgouyres, Algorithmes pour la Synthèse d'Images et l'Animation 3D, 2ème édition, Paris, Dunod, 2005.
[11] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, 1st edition, Boston, Addison-Wesley, 1989.
[12] L. Tang, L. Tian and B. L. Steward, Color image segmentation with genetic algorithm for in-field weed sensing, Transactions of the American Society of Agricultural Engineers, vol. 43, no. 4, pp. 1019-1027, 2000.
[13] J. C. Lagarias, J. A. Reeds, M. H. Wright and P. E. Wright, Convergence properties of the Nelder-Mead simplex method in low dimensions, SIAM Journal on Optimization, vol. 9, no. 1, pp. 112-147, 1998.
[14] P. Pandey and A. P. Punnen, A simplex algorithm for piecewise-linear fractional programming problems, European Journal of Operational Research, vol. 178, no. 2, pp. 343-358, 2007.