Fatigue Driving Detecting Method Based on Time-space Features in Real Driving Conditions

Zuojin Li, Jun Peng and Liukui Chen*

Sreenivas Sremath Tirumala

College of Electrical and Information Engineering, Chongqing University of Science and Technology, Chongqing, China
*[email protected]

School of Computing and Mathematical Sciences, AUT University, Auckland, New Zealand
[email protected]

Abstract—Fatigue driving is a major cause of traffic accidents. One of the most effective ways to detect driver fatigue is machine vision based on eye fatigue features, but such detection is highly sensitive to the lighting encountered in real driving conditions. This paper first corrects the color balance of the video images to extract lighting-adaptive skin color features. The eye area is then clipped out with SURF features in order to extract the time-space features of fatigued drivers. This makes fatigue detection robust under complex real driving conditions. Experiments show that the method adapts well to changing lighting conditions and therefore generalizes better in engineering practice.

Keywords—fatigue driving; lighting adaptability; time-space features; SURF features

I. INTRODUCTION

According to the WHO's report, the total number of road traffic deaths worldwide stands as high as 1.24 million per year [1]. Driver fatigue is one of the major causes of automobile crashes [2], and about 35%-45% of all traffic accidents are attributed to fatigue driving [3]. Generally, fatigue can be defined as the transition between the awake state and the sleep state, in which one's abilities to observe and analyze are strongly reduced [4]. Everyone feels fatigued at times, and the consequences, longer reaction times and reduced vitality, greatly impair driving ability. The research community is making great efforts to increase traffic safety, one of which is automated real-time detection of fatigue.

Fatigue is one of the biggest killers of professional drivers, but it is quite hard to detect. In recent decades, driver fatigue has been monitored in two kinds of ways. The first is "driver-touched" [5], [6], [7], for example EEG, EMG, and EOG signals. The second is "driver-untouched", such as methods based on facial features [8], [9], [10]. Because video information is easy to acquire and detection based on eye features imposes no interference on the driving process, it has become the focus of industrial research. The method proposed by Heitmann et al. [11] is based on pupil diameter obtained by gaze tracking. Boverie et al. [12] monitor driver fatigue by detecting eyelid movement. Wang et al. [13] use wavelets to extract the eyes after face recognition and then detect fatigue with a neural network classifier. Mita et al. [14] and Ito et al. [15] detect fatigue by

tracking blink frequency in motion images. Cheng et al. [16], [17], [18] judge the driver's fatigue and attention from blink frequency and driving duration. The best-recognized real-time detection method for driver fatigue to date is PERCLOS (Percent Eye Closure), the percentage of eye closure over a certain period of time. This method can achieve high accuracy by combining features such as average eye opening degree and longest eye closure into the fatigue index [19]. However, in real driving environments the lighting conditions are complex, so it becomes extremely difficult to clip the driver's eye area and extract eye features, which decreases the accuracy of fatigue detection. This paper corrects the color balance under different lighting conditions to improve the robustness of skin color detection and the stability of the invariant eye features, which leads to a clear improvement in driver fatigue detection.

The remainder of this paper is organized as follows: Section II details the proposed method; Section III presents the experiments and results; Section IV gives the conclusion and future work.

II. FATIGUE DETECTION METHOD

The proposed method works as follows: first, correct the color of every frame of the video; second, locate the face and eye areas; last, detect fatigue based on the time-space features of the eye area. The whole process is shown in Fig. 1.


Fig. 1 Fatigue detection method

A. Lighting adaptive transformation

To adapt the skin color detection model to changes in the color of the light source, light compensation and color balance are required for the images. Generally, color balance is adjusted according to the illumination intensity: the intensities of the red, green and blue components are adjusted accordingly so as to recover the original color of the image. But for most video captured in natural environments, the lighting conditions cannot be given precisely. This paper therefore proposes a method for balancing the color proportions that adapts to the changing environment. The color transformation is expressed in Equation (1):

\[
\begin{cases}
R = R \times \dfrac{\mathrm{sum}(G)}{\mathrm{sum}(R)} \\[4pt]
G = G \times \dfrac{\mathrm{sum}(G)}{\mathrm{sum}(G)} \\[4pt]
B = B \times \dfrac{\mathrm{sum}(G)}{\mathrm{sum}(B)}
\end{cases}
\tag{1}
\]

where sum() is a sum function; for example, sum(R) is the sum of the red components in the current image frame. When a renewed R, G or B value exceeds 255, it is clipped to 255 automatically. The purpose of renewing the R, G, B components of the original image is to adjust the color balance and restore the original color features of the scene, so that the skin color detection model adapts to changes in light color, improving the accuracy of skin color detection.
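For illustration, here is a minimal sketch of this green-anchored balance applied to a NumPy RGB frame; the function name, the NumPy dependency and the guard against empty channels are our assumptions rather than part of the original method.

```python
import numpy as np

def balance_color(frame_rgb: np.ndarray) -> np.ndarray:
    """Green-anchored color balance per Equation (1).

    Each channel is scaled by sum(G)/sum(channel); values exceeding
    255 after scaling are clipped to 255, as described above.
    """
    img = frame_rgb.astype(np.float64)
    sums = img.reshape(-1, 3).sum(axis=0)       # [sum(R), sum(G), sum(B)]
    scale = sums[1] / np.maximum(sums, 1e-9)    # [sum(G)/sum(R), 1, sum(G)/sum(B)]
    balanced = img * scale                      # broadcast over the 3 channels
    return np.clip(balanced, 0, 255).astype(np.uint8)
```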

B. Face Area Clipping

To clip the human face area, the RGB color space of the video image is first transformed to the YCgCr color space. The skin color model is then used to detect the face area, and a region of fixed size 160*120, centered on the center of the face, is clipped. The transformation from RGB to YCgCr follows Equation (2) [20]:

\[
\begin{bmatrix} Y \\ C_g \\ C_r \end{bmatrix}
=
\begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix}
+
\begin{bmatrix}
0.2568 & 0.5041 & 0.0979 \\
-0.3180 & 0.4392 & -0.1212 \\
0.4392 & -0.3677 & -0.0714
\end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix}
\tag{2}
\]

Skin color detection in the YCgCr color space is realized by image binarization. The binarization of the Y, Cg and Cr components is expressed in Equation (3):

\[
\begin{bmatrix} Y \\ C_g \\ C_r \end{bmatrix}
=
\begin{cases}
1, & \text{if } Y_a \le Y \le Y_b \\
1, & \text{if } C_{ga} \le C_g \le C_{gb} \\
1, & \text{if } C_{ra} \le C_r \le C_{rb}
\end{cases}
\tag{3}
\]

where Ya = 100*(D/255), Yb = 200*(D/255), Cga = 120*(D/255), Cgb = 190*(D/255), Cra = 160*(D/255), Crb = 240*(D/255), and D is the average gray value of the first 20 frames of the video, i.e. the self-adaptive lighting transformation of the model parameters. Moreover, the brightness component is raised in the skin color model, which aids skin color detection in shadowed areas.
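A hedged sketch of Equations (2)-(3) follows, assuming an RGB frame and taking the average gray value D of the first 20 frames as a parameter; we read the skin mask as the conjunction of the three component tests, which is one plausible interpretation of Equation (3), and all names are ours.

```python
import numpy as np

# Offset vector and matrix of Equation (2), RGB -> YCgCr.
OFFSET = np.array([16.0, 128.0, 128.0])
M = np.array([[ 0.2568,  0.5041,  0.0979],
              [-0.3180,  0.4392, -0.1212],
              [ 0.4392, -0.3677, -0.0714]])

def skin_mask(frame_rgb: np.ndarray, d: float) -> np.ndarray:
    """Binary skin mask per Equation (3).

    `d` is the average gray value of the first 20 frames; the six
    thresholds scale with d/255 as given in the text above.
    """
    rgb = frame_rgb.reshape(-1, 3).astype(np.float64)
    ycgcr = OFFSET + rgb @ M.T
    y, cg, cr = ycgcr[:, 0], ycgcr[:, 1], ycgcr[:, 2]
    k = d / 255.0
    mask = ((100 * k <= y)  & (y  <= 200 * k) &
            (120 * k <= cg) & (cg <= 190 * k) &
            (160 * k <= cr) & (cr <= 240 * k))
    return mask.reshape(frame_rgb.shape[:2]).astype(np.uint8)
```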

To acquire the size of each skin color area, the result of Equation (3) is processed with connected-region analysis: the pixel values of each connected region (an area in which every pixel has at least one non-zero neighbor) are summed, which gives the size of that region. If the size of the biggest block exceeds 3,000, a rectangle of 160*120 centered on the center of that block is clipped as the human face area.

C. Eye area clipping

For the clipped 160*120 rectangle, SURF (Speeded Up Robust Features) invariant feature extraction is conducted as follows [21]:

Step 1: construct the Gaussian smoothing filters shown in Figs. 2 and 3. Smoothing in direction x is obtained by rotating the direction-y template by 90°, and the filter renews each pixel value according to the weight coefficients of the template. For a pixel at position (x, y), filtering with the template in Fig. 2 produces the result:

\[
I(x, y) = I(x, y) + I(x+1, y+1) - I(x+1, y) - I(x, y+1)
\]

Fig. 2 Direction xy mixed partial derivative template

Fig. 3 Direction y two-order derivative template

Step 2: evaluate Equation (4) at each pixel. If det(H) is less than zero, the pixel is an extreme point; otherwise it is marked as empty.

\[
\det(H) = \frac{\partial^2 f}{\partial x^2}\,\frac{\partial^2 f}{\partial y^2} - \left(0.9\,\frac{\partial^2 f}{\partial x\,\partial y}\right)^2
\tag{4}
\]

where $\partial^2 f/\partial x^2$ and $\partial^2 f/\partial y^2$ are the second-order derivatives in directions x and y respectively, and $\partial^2 f/\partial x \partial y$ is the mixed partial derivative in directions x and y at the pixel.
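A sketch of Step 2 under the simplifying assumption that the template responses are replaced by plain finite differences; SURF proper computes these with box filters over integral images, so treat this as illustrative only. The sign test follows the text above.

```python
import numpy as np

def hessian_response(img: np.ndarray) -> np.ndarray:
    """Weighted Hessian determinant of Equation (4).

    Second derivatives are approximated by finite differences via
    np.gradient instead of the box-filter templates of Figs. 2-3.
    """
    f = img.astype(np.float64)
    fy, fx = np.gradient(f)                 # first derivatives (rows=y, cols=x)
    fxx = np.gradient(fx, axis=1)           # d2f/dx2
    fyy = np.gradient(fy, axis=0)           # d2f/dy2
    fxy = np.gradient(fx, axis=0)           # mixed partial d2f/(dx dy)
    return fxx * fyy - (0.9 * fxy) ** 2

def extreme_points(img: np.ndarray) -> np.ndarray:
    """Per Step 2 above, pixels with det(H) < 0 are kept as extreme points."""
    return hessian_response(img) < 0
```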

Step 3: mark the extreme points from Step 2 as SURF feature points. For any point among the concentrated invariant features, calculate the Euclidean distance between it and the other feature points:

\[
L = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}
\tag{5}
\]

where $(x_i, y_i)$ are the coordinates of the feature points. Choose an adjacent area with fewer than 30 feature points as an effective block. Taking the biggest and the second biggest blocks as centers, clip rectangles of 35*100 as the human eye areas.
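The grouping rule in Step 3 is only loosely specified; below is one plausible reading, a greedy Euclidean clustering based on Equation (5) followed by 35*100 crops around the two largest clusters. The 30-pixel radius and all names are assumptions, not the paper's exact procedure.

```python
import numpy as np

def cluster_points(points: np.ndarray, radius: float = 30.0) -> list:
    """Greedily group feature points: a point joins a cluster if it lies
    within `radius` of the cluster's running centroid (Equation (5))."""
    clusters = []                                  # items: [centroid, members]
    for p in points:
        for c in clusters:
            if np.linalg.norm(p - c[0]) < radius:
                c[1].append(p)
                c[0] = np.mean(c[1], axis=0)       # update centroid
                break
        else:
            clusters.append([p.astype(np.float64), [p]])
    return clusters

def clip_eye_areas(img: np.ndarray, points: np.ndarray) -> list:
    """Crop 35x100 windows around the two largest clusters, taken here
    as the two eye regions; crops shrink at the image border."""
    best = sorted(cluster_points(points), key=lambda c: len(c[1]),
                  reverse=True)[:2]
    crops = []
    for centroid, _members in best:
        cy, cx = int(centroid[0]), int(centroid[1])
        y0, x0 = max(cy - 17, 0), max(cx - 50, 0)
        crops.append(img[y0:y0 + 35, x0:x0 + 100])
    return crops
```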


D. Time-space feature extraction and fatigue state detection

Circular feature detection is conducted on each 35*100 eye image M. If circular features are detected in M, the count n is incremented. The time-space feature of the fatigue state is expressed in Equation (6):

\[
P = \frac{n}{m}
\tag{6}
\]

where P is computed every second, m is the number of 35*100 images M in the most recent 5 seconds, and n is the number of frames with valid circles in the most recent 5 seconds. If P ≤ 0.30, the driver is judged to be in the fatigue state.

The circular feature detection method is as follows:

Step 1: binarize the image M with Equation (3). For every non-zero pixel, all circles through that point constitute the circle space of the point. For instance, for a given point $(x_1, y_1)$, the circles are expressed as:

\[
(x_1 - a_1)^2 + (y_1 - b_1)^2 = r_1^2
\tag{7}
\]

The space of all circles through the point $(x_1, y_1)$ is $C_1 \in \{(a_1, b_1), r_1\}$, where $r_1 = 1, 2, 3, \ldots, 30$.

Step 2: determine the con-cyclic points. If the element values of the circle spaces of the given points are equal, i.e. Equation (8) holds, then these points lie on one circle $\{(a, b), r\}$, and j is the number of con-cyclic points:

\[
\begin{cases}
a = a_1 = a_2 = \cdots = a_j \\
b = b_1 = b_2 = \cdots = b_j \\
r = r_1 = r_2 = \cdots = r_j
\end{cases}
\tag{8}
\]

Step 3: determine the valid circles. If j > 20 in Step 2, the circle is valid.

Step 4: determine whether the video frame has circular features. For a given image frame M, if there exists a valid circle, the frame is counted as a circular feature frame and n = n + 1; otherwise n is unchanged.
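Steps 1-4 amount to a circle Hough transform over radii 1 to 30. Below is a hedged sketch of the 5-second decision of Equation (6) that substitutes OpenCV's cv2.HoughCircles for the hand-rolled accumulator; param2 is our loose analogue of the j > 20 vote threshold, and the window length assumes the 10 frames-per-second rate reported in Section III.

```python
from collections import deque
import cv2
import numpy as np

WINDOW = 50          # 5 s at 10 frames per second
P_THRESHOLD = 0.30   # fatigue if P <= 0.30 (Equation (6))

history = deque(maxlen=WINDOW)   # 1 if a frame contained a valid circle

def has_valid_circle(eye_gray: np.ndarray) -> bool:
    """Stand-in for Steps 1-4: detect circles of radius 1..30 in an
    8-bit single-channel eye image; param2 mimics the vote threshold."""
    circles = cv2.HoughCircles(eye_gray, cv2.HOUGH_GRADIENT,
                               dp=1, minDist=10, param1=100, param2=20,
                               minRadius=1, maxRadius=30)
    return circles is not None

def update_and_classify(eye_gray: np.ndarray) -> bool:
    """Push one frame into the window; return True if judged fatigued."""
    history.append(1 if has_valid_circle(eye_gray) else 0)
    m = len(history)              # frames observed in the recent window
    p = sum(history) / m          # P = n / m
    return p <= P_THRESHOLD
```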

III. EXPERIMENT AND RESULTS

A. Experiment data collecting platform

The data acquisition platform for driver fatigue detection is shown in Fig. 4. The video camera is installed in the center of the dashboard in the driver's cabin. The CCD camera specifications are as follows: 8 mm lens, 2 million pixels, resolution 1280*720, automatic exposure, automatic color balance, automatic gain, explosion-proof rating IP65. The sampling frequency is 10 frames per second.

Fig. 4 CCD camera installation position

B. Verification of light adaptability

In real driving situations, lighting conditions change dramatically. When detecting the skin color of the face directly from the video images, the gray value is quite low in the shadowed area, which leads to failure in face detection.

As shown in Fig. 5, the detected face area is then small and the face is not recognized. After the color balance adjustment, in which the highest gray value of the face is taken as the benchmark for balancing the whole picture, the shadow on the face disappears and the face is successfully detected.

Fig. 5 Human face clipping contrast

C. Circular feature detection experiment for human eyes

As shown in Fig. 6, many invariant features exist in the eye areas, which makes the eye area clipping more accurate. The upper eyelid edge and the eyeball have evident circular features; for example, the red point in Fig. 6 is a point on a circle. Evidently, when the eyes are open, circular features can be detected accurately, which helps recognize the driver's fatigue state.

Fig. 6 Circular feature detection

IV. CONCLUSION AND PROSPECTS

This paper presents a human eye positioning and clipping method that is adaptive to lighting conditions, and proposes a driver fatigue detection method based on the time-space features of the human face. Experiments demonstrate that the method adapts to lighting conditions and is effective in detecting driver fatigue. Existing research shows that drivers' facial expressions are also important for detecting their fatigue state, and real-time detection of driver fatigue that combines eye features with facial expression information remains important future work.

ACKNOWLEDGMENT

This work is supported in part by the Scientific and Technological Research Program of Chongqing (Grant No. KJ131423, KJ131422 and KJ132206), the Doctor Special Project of Chongqing University of Science and Technology (Grant No. CK2011B05 and CK2011B09) and the Natural Science Foundation of Chongqing (Grant No. cstcjjA40041 and cstc2014jcyjA40006).

REFERENCES

[1] World Health Organization, Global Status Report on Road Safety: Time for Action. Geneva: WHO, 2013. http://www.who.int/violence_injury_prevention/road_safety_status/2013/en/.
[2] A. Williamson, D. A. Lombardi, S. Folkard, J. Stutts, T. K. Courtney, and J. L. Connor, "The link between fatigue and safety," Accid. Anal. Prev., vol. 43, no. 2, pp. 498-515, Mar. 2011.
[3] R. N. Khushaba, S. Kodagoda, S. Lal, et al., "Driver drowsiness classification using fuzzy wavelet-packet-based feature-extraction algorithm," IEEE Transactions on Biomedical Engineering, vol. 58, no. 1, pp. 121-131, 2011.
[4] A. Picot, S. Charbonnier, and A. Caplier, "On-line detection of drowsiness using brain and visual information," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 42, no. 3, pp. 764-775, May 2012.
[5] C. Zhang, H. Wang, and R. Fu, "Automated detection of driver fatigue based on entropy and complexity measures," IEEE Trans. Intell. Transp. Syst., vol. 15, no. 1, pp. 168-176, Feb. 2014.
[6] A. G. Correa, L. Orosco, and E. Laciar, "Automatic detection of drowsiness in EEG records based on multimodal analysis," Medical Engineering & Physics, vol. 36, no. 2, pp. 244-249, Feb. 2014.
[7] S. Jung, H. Shin, and W. Chung, "Driver fatigue and drowsiness monitoring system with embedded electrocardiogram sensor on steering wheel," IET Intell. Transp. Syst., vol. 8, no. 1, pp. 43-50, 2014.
[8] S. D. Lin, J. J. Lin, C. H. Chen, and C. Y. Chuang, "Sleepy eye's recognition for drowsiness detection," Journal of Internet Technology, vol. 14, no. 7, pp. 1159-1165, Dec. 2013.
[9] M. J. Wu, P. A. Mu, and C. Y. Zhang, "Driver fatigue detection algorithm based on the states of eyes and mouth," Computer Application and Software, vol. 30, no. 3, pp. 24-26, Mar. 2013.
[10] T. Ma and B. Cheng, "Detection of driver's drowsiness using facial expression features," J. Automotive Safety and Energy, vol. 1, no. 3, pp. 200-204, 2010.
[11] A. Heitmann and R. Guttkuhn, "Technologies for the monitoring and prevention of driver fatigue," in Proc. 1st International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, 2001, pp. 81-86.
[12] S. Boverie, J. M. Leqellec, and A. Hirl, "Intelligent systems for video monitoring of vehicle cockpit," in Proc. International Congress and Exposition ITS: Advanced Controls and Vehicle Navigation Systems, 1998, pp. 1-5.
[13] R. Wang, K. Guo, S. Shi, et al., "A monitoring method of driver fatigue behavior based on machine vision," in Proc. IEEE Intelligent Vehicles Symposium, 2003, pp. 110-113.
[14] T. Mita, S. Kozuka, K. Nakano, et al., "Driver blink measurement by the motion picture processing and its application to drowsiness detection," in Proc. 5th International Conference on Transport Systems, 2002, pp. 168-173.
[15] T. Ito, S. Mita, K. Kozuka, et al., "Driver blink measurement by the motion picture processing and its application to drowsiness detection," in Proc. International Conference on Intelligent Transportation Systems, 2003, pp. 168-173.
[16] B. Cheng, G. Zhang, R. Feng, et al., "Real-time monitoring of driver fatigue based on eye state recognition," Automotive Engineering, vol. 30, no. 11, pp. 1001-1005, 2008 (in Chinese).
[17] W. Zhang, B. Cheng, and B. Zhang, "Research on eye location algorithm robust to driver's pose and illumination," Acta Phys. Sin., vol. 61, no. 6, pp. 1-9, 2012.
[18] W. Zhang, B. Cheng, and B. Zhang, "A research on the algorithm for driver's eye positioning and tracking," Automotive Engineering, vol. 34, no. 10, pp. 889-893, 2012.
[19] Z. Mao, X. Chu, X. Yan, et al., "Advances of fatigue detecting technology for drivers," China Safety Science Journal, vol. 15, no. 3, pp. 108-112, 2005 (in Chinese).
[20] S. L. Phung, A. Bouzerdoum, and D. Chai, "A novel skin color model in YCbCr color space and its application to human face detection," in Proc. International Conference on Image Processing, 2002, pp. 289-292.
[21] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded up robust features," in Computer Vision - ECCV 2006, Springer, 2006, pp. 404-417.
