International Conference on Control, Automation and Systems 2007 Oct. 17-20, 2007 in COEX, Seoul, Korea

A Platform Surveillance Monitoring System using Image Processing for Passenger Safety in Railway Station

Sehchan Oh, Sunghyuk Park and Changmu Lee
Advance EMU Research Team, Korea Railroad Research Institute, Uiwang-City, Gyeonggi-Do, Korea
(Tel: +82-31-460-5745; E-mail: {soh, shpark, cmlee}@krri.re.kr)

Abstract: In this paper, we propose a platform surveillance monitoring system that uses image processing technology for passenger safety in railway stations. The proposed system monitors almost the entire length of the track line in the platform with multiple cameras and determines in real time whether a person or a dangerous obstacle is inside a preset monitoring area. Experimental results verify the system performance under real conditions: train states and objects are detected robustly by the proposed image processing algorithm. Moreover, so that accidents can be dealt with immediately, the system provides the local station, the central control room and the train with video information and an alarm message.

Keywords: Surveillance Monitoring, Image Processing, Railway Station, Passenger Safety

1. INTRODUCTION

Passenger safety is a primary concern of any railway system, yet dozens of people are killed every year when they fall from platforms onto the track. Recent advances in IT have made it possible to apply vision sensors, such as CCTV and various camera sensors, to railway environments. To prevent and monitor safety accidents on railway platforms, CCTV (Closed Circuit TV) is widely used. Currently, CCTV is installed at busy areas so that passenger situations can be monitored and controlled from the CCR (Central Control Room) or the local station. However, CCTV is a passive system that provides only a limited capability for maintaining safety on the boarding platform, so it is very difficult to recognize and manage an emergency situation immediately when it occurs. Therefore, a new kind of platform monitoring system is needed, one that automatically perceives factors endangering passengers on the platform and reports the emergency situation.

The objective of this paper is to propose a platform surveillance monitoring system that uses image processing technology for passenger safety in railway stations. The proposed system, as shown in Fig. 1, monitors almost the entire length of the track line in the platform using multiple cameras. Each camera covers a 20 m length of track area and detects in real time whether a person or a dangerous obstacle is inside its preset monitoring area. When an emergency situation is detected, the system immediately provides the train driver, the CCR and the station employees with both video information and an alarm message.

The rest of this paper is organized as follows. We present the system configuration in Chapter 2. In Chapter 3, the detection process for the train and objects is described. In Chapter 4, we evaluate the detection performance of the proposed system with experimental results obtained under real subway platform conditions. Finally, some concluding remarks and possible extensions of the proposed system are given in Chapter 5.

[Fig. 1 illustrates the concept: video cameras that each monitor a preset 20 m area, image processing for fallen passenger detection, and an information multicasting service that delivers video information and an alarm message to the Central Control Room, station employees and the train driver.]

Fig. 1. Platform Accident and Concept of Platform Surveillance Monitoring System

2. SYSTEM CONFIGURATION

Fig. 2 shows the configuration of the vision-based railway platform monitoring system. The proposed system can be divided into an information acquisition unit, a fusion unit and an information multicasting unit. The information acquisition unit detects and perceives dangerous factors, such as a fallen passenger or a fire, in the monitoring area. Its detection processor conducts a series of processes, i.e. train detection, object detection, object recognition and object tracking. The fusion unit combines the monitored results from every camera sensor into more intelligent and meaningful information for situation analysis. According to the results of the situation analysis, it generates different alarm messages for the local station employees, the CCR employees and the train driver.


The information multicasting service unit provides the different clients, such as the local station employees, the CCR employees and the train driver, with a corresponding alarm message, including an SOP (standard operating procedure), together with video information about the accident situation, so that emergencies can be dealt with promptly.
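As a concrete illustration only, the sketch below shows one way such client-specific alarm messages could be represented and generated; the message fields, the SOP contents and the names (AlarmMessage, ClientType, build_messages) are assumptions of this sketch, not the format used by the authors.

```python
# Hypothetical sketch of the information multicasting service unit.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Dict, List


class ClientType(Enum):
    STATION_EMPLOYEE = auto()
    CCR_EMPLOYEE = auto()
    TRAIN_DRIVER = auto()


@dataclass
class AlarmMessage:
    camera_id: int            # camera that detected the dangerous factor
    description: str          # e.g. "fallen passenger in dangerous area"
    sop_steps: List[str]      # standard operating procedure for this client
    video_url: str            # reference to the video of the accident scene


def build_messages(camera_id: int, video_url: str) -> Dict[ClientType, AlarmMessage]:
    """Generate a client-specific alarm message for each client type (illustrative SOPs)."""
    sop = {
        ClientType.STATION_EMPLOYEE: ["Go to the platform", "Assist the passenger"],
        ClientType.CCR_EMPLOYEE: ["Confirm the alarm on video", "Notify the train driver"],
        ClientType.TRAIN_DRIVER: ["Reduce speed", "Stop before the occupied monitoring area"],
    }
    return {
        client: AlarmMessage(camera_id, "fallen passenger detected", sop[client], video_url)
        for client in ClientType
    }


if __name__ == "__main__":
    for client, msg in build_messages(camera_id=3, video_url="rtsp://platform-cam-3").items():
        print(client.name, "->", msg.description, msg.sop_steps)
```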

Each camera can be in a different state. If a station has N monitoring cameras and a train is approaching the i-th camera, as shown in Fig. 4, then cameras 1 to (i-1) are in the IN state, the i-th camera is in the ON state, and cameras (i+1) to N are in the OFF state.
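A minimal sketch of this assignment rule is given below, assuming cameras are numbered 1 to N along the platform and the train is currently at camera i; the enum and function names are illustrative, and the state meanings follow Table 1 in Chapter 3.

```python
from enum import Enum


class TrainState(Enum):
    OFF = "no train in the monitoring area"
    IN = "train is approaching"
    ON = "train is stopped or occupies the whole area"
    OUT = "train is pulling out of the monitoring area"


def camera_states(n_cameras: int, i: int) -> dict:
    """Assign a state to every camera when the train is at camera i, following the
    rule stated above (cameras 1..i-1: IN, camera i: ON, cameras i+1..N: OFF)."""
    states = {}
    for cam in range(1, n_cameras + 1):
        if cam < i:
            states[cam] = TrainState.IN
        elif cam == i:
            states[cam] = TrainState.ON
        else:
            states[cam] = TrainState.OFF
    return states


states = camera_states(n_cameras=5, i=3)
print({cam: s.name for cam, s in states.items()})
# {1: 'IN', 2: 'IN', 3: 'ON', 4: 'OFF', 5: 'OFF'}
```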

[Fig. 2 shows the three units: the information acquisition unit (Cameras 1 to N, each with an A/D converter and a detection processor performing train detection, object detection, object recognition and object tracking), the fusion unit (data fusion and situation analysis), and the information multicasting service unit (video and alarm message).]

Fig. 2. System Configuration

[Fig. 4 shows a train moving through the monitoring areas of cameras 1 to N, with the cameras in the ON, IN, OUT and OFF states according to the train position and movement direction.]

Fig. 4. Train states for each camera.

3. DETECTION PROCESS

The detection process is divided mainly into two steps, i.e. train detection and object/human detection and recognition. The train detection step determines the train state so that a train is not mistaken for a fallen passenger. The entire detection process for each camera sensor is described in Fig. 3.

[Fig. 3 shows the per-camera flowchart: each frame is first checked against the train state (OFF mode or not); when the camera is in the OFF mode, object detection, object recognition and motion analysis are performed, and a confirmed emergency leads to stopping the train and dealing with the situation.]

Fig. 3. Flowchart of Detection Process

Train Detection

The train state for each camera sensor can be defined as shown in Table 1. The object/human detection process is performed only in the OFF state, i.e. when no train exists in the monitoring area.

TABLE I. TRAIN STATES FOR EACH CAMERA SENSOR

Train State    Description
OFF            There is no train in the monitoring area
IN             Train is approaching
ON             Train is stopped or occupies the whole area
OUT            Train is pulling out of the monitoring area

To decide whether a fallen object in the monitoring area is a dangerous factor, it is important to determine the accurate train state in the area for every single camera. The proposed system uses the camera vision sensor to find the train state in the current monitoring area, combining it with the detection results of laser sensors. The four train states obtained with the camera sensor are decided as shown in Fig. 5.

[Fig. 5 shows the state diagram over the OFF, IN, ON and OUT states, with transitions driven by whether motion is present in the monitoring area and whether the moving region is large or small.]

Fig. 5. State diagram for distinguishing each state of train

The proposed system has four different transitions. The transition from one state to another is defined as described in Table 2. The system uses the motion of the train in each monitoring area to determine the state change: the transitions OFF to IN and ON to OUT are made when more than five consecutive motion frames occur, and to suppress noise more than five consecutive frames are always analyzed before a transition is accepted.

TABLE II. TRANSITION OF TRAIN STATES

Transition    Description
OFF-IN        More than five consecutive motion frames occur
IN-ON         More than five consecutive no-motion frames occur
ON-OUT        More than five consecutive motion frames occur
OUT-OFF       More than five consecutive no-motion frames occur
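Putting Table 1 and Table 2 together, the per-camera train state logic can be sketched as a small state machine driven by a per-frame motion flag; the class below is a simplified illustration under that assumption, not the authors' implementation.

```python
class TrainStateMachine:
    """Per-camera train state machine: OFF -> IN -> ON -> OUT -> OFF.
    A transition fires only after five consecutive frames agree (motion for
    OFF->IN and ON->OUT, no motion for IN->ON and OUT->OFF), which filters noise."""

    REQUIRED = 5  # consecutive frames required, as in Table 2

    def __init__(self):
        self.state = "OFF"
        self.count = 0  # consecutive frames supporting the pending transition

    def update(self, motion: bool) -> str:
        # Which observation would trigger a transition from the current state?
        trigger = {"OFF": True, "IN": False, "ON": True, "OUT": False}[self.state]
        if motion == trigger:
            self.count += 1
        else:
            self.count = 0
        if self.count >= self.REQUIRED:
            self.state = {"OFF": "IN", "IN": "ON", "ON": "OUT", "OUT": "OFF"}[self.state]
            self.count = 0
        return self.state


# Example: six consecutive motion frames bring the camera from OFF to IN.
sm = TrainStateMachine()
for _ in range(6):
    state = sm.update(motion=True)
print(state)  # IN
```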


Object Detection/Tracking

The detection results for dangerous factors in the platform monitoring area fall mainly into two situations, i.e. a fallen object in the area and a sudden global change of the lighting conditions. To determine a fallen object, the proposed system considers only movements in the monitoring area while the camera is in the OFF state. Moreover, it detects a fallen object coming from outside the dangerous area by using a backtracking method that tracks its movements in previous frames: the system saves object movement information from previous frames and backtracks the object when it moves in the OUT condition.
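A simplified sketch of this backtracking check is given below, assuming the dangerous area is available as a polygon in image coordinates and that the tracker stores one centroid per frame for the object; the data structures, the history length and the use of OpenCV's pointPolygonTest are assumptions of this sketch.

```python
from collections import deque

import cv2
import numpy as np

# Dangerous (track-side) area as a polygon in image coordinates; values are illustrative.
DANGEROUS_AREA = np.array([[100, 300], [540, 300], [540, 470], [100, 470]], dtype=np.int32)

# Bounded history of the object's centroid in previous frames, most recent last.
trajectory = deque(maxlen=50)


def in_dangerous_area(point) -> bool:
    """True if the point lies inside or on the boundary of the dangerous-area polygon."""
    return cv2.pointPolygonTest(DANGEROUS_AREA, (float(point[0]), float(point[1])), False) >= 0


def fell_from_outside() -> bool:
    """Backtrack the saved trajectory: treat the object as having fallen into the
    dangerous area only if it is inside the area now but was outside it earlier."""
    if not trajectory or not in_dangerous_area(trajectory[-1]):
        return False
    return any(not in_dangerous_area(p) for p in list(trajectory)[:-1])


# Per frame: trajectory.append(centroid) for the tracked object, then call fell_from_outside().
```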

4. EXPERIMENTAL RESULTS


To verify the performance of the proposed system, we acquired test sequences at the aboveground Sungnae station on Seoul Metro line 2 and the underground Heyhwa station on Seoul Metro line 4 in Korea. A frame of the test video sequence for each station is presented in Fig. 6.

Fig. 6. Test video sequences; (a) a video sequence of Heyhwa station, (b) a video sequence of Sungnae station

A. Train Detection

For accurate detection, the train area and the dangerous area have to be clearly defined for every camera. The preset train area and dangerous area are shown in Fig. 7: the blue box indicates the train area and the red box indicates the dangerous area.

Fig. 7. Train area and dangerous area

Train movement detection is defined as a series of processes: frame difference, thresholding, labeling and merging. Fig. 8 shows the experimental result of each step.

Fig. 8. Experimental result of train movement detection; (a) result of frame difference, (b) result of thresholding, (c) result of labeling, (d) result of merging
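A minimal OpenCV sketch of the frame difference, thresholding, labeling and merging chain is shown below; the threshold value, the minimum component area and the large-area fraction used to declare train movement are illustrative choices, not the authors' parameters.

```python
import cv2
import numpy as np


def detect_train_motion(prev_gray: np.ndarray, curr_gray: np.ndarray,
                        diff_thresh: int = 25, min_area: int = 50,
                        large_fraction: float = 0.4):
    """Frame difference -> thresholding -> labeling -> merging.
    Returns (is_train_motion, merged_bounding_box)."""
    # 1. Frame difference between consecutive grayscale frames.
    diff = cv2.absdiff(prev_gray, curr_gray)
    # 2. Thresholding the difference into a binary change mask.
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # 3. Labeling connected components of the changed pixels.
    num, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    boxes = [stats[i, :4] for i in range(1, num) if stats[i, cv2.CC_STAT_AREA] > min_area]
    if not boxes:
        return False, None
    # 4. Merging all component boxes into one bounding box.
    x0 = min(b[0] for b in boxes)
    y0 = min(b[1] for b in boxes)
    x1 = max(b[0] + b[2] for b in boxes)
    y1 = max(b[1] + b[3] for b in boxes)
    merged = (x0, y0, x1 - x0, y1 - y0)
    # A change covering a large fraction of the scene is treated as train movement.
    frame_area = float(mask.shape[0] * mask.shape[1])
    return (merged[2] * merged[3]) / frame_area > large_fraction, merged
```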


The experimental results of train detection for the Heyhwa and Sungnae test sequences are presented in Fig. 9. In the figure, the blue rectangle marks the train area and the red region marks the dangerous area. A train occupies a relatively large portion of the picture, so the system regards a change of a relatively large portion of the scene as train movement and ignores sudden changes of the lighting condition. To minimize noise effects, train movement is only accepted when more than five consecutive frames have changed. According to the experimental results, the system detects all of the train states with the proposed image processing.

Fig. 9. Experimental results of train state transition for the Heyhwa and Sungnae test sequences; (a) and (b) OFF state, (c) and (d) IN state, (e) and (f) ON state, (g) and (h) OUT state

B. Object Detection

When an object movement is detected in the monitoring area, the system determines whether the object has fallen from the platform by using the backtracking technique, as presented in Fig. 10. In the figure, the blue rectangle is the train area, the red polygon represents the dangerous area, and the green rectangle shows the object bound. When an object is in the dangerous area, the system checks whether its movements came from outside the dangerous area. Moreover, the system checks whether the object bound is completely included in the dangerous area; if it is, the system regards the object as a fallen object and changes the color of the object bound from green to red.

Fig. 10. Experimental results of object detection; (a), (b) a video sequence of Sungnae station
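The containment test described above, which checks that the object bound lies completely inside the dangerous area, might look like the following sketch; the polygon coordinates and box values are illustrative, and a convex dangerous area is assumed so that testing the four corners is sufficient.

```python
import cv2
import numpy as np


def bound_inside_dangerous_area(bound, dangerous_polygon) -> bool:
    """True if the object bounding box (x, y, w, h) lies completely inside the
    dangerous-area polygon: every corner must be inside or on the boundary."""
    x, y, w, h = bound
    corners = [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
    return all(
        cv2.pointPolygonTest(dangerous_polygon, (float(cx), float(cy)), False) >= 0
        for cx, cy in corners
    )


# Illustrative dangerous area and object bound.
dangerous = np.array([[100, 300], [540, 300], [540, 470], [100, 470]], dtype=np.int32)
print(bound_inside_dangerous_area((200, 350, 60, 40), dangerous))  # True -> mark the bound in red
```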

When the fallen object in the dangerous area no longer produces motion information, the system traces the object by using the frame difference between the current image and a background image. Fig. 11 shows an experimental result of tracing a fallen object in the dangerous area.

Fig. 11. Experimental results of object trace
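A minimal sketch of this fallback is given below, assuming OpenCV 4 and a stored grayscale background image of the empty monitoring area; the threshold and morphological filtering are illustrative choices.

```python
import cv2


def trace_static_object(background_gray, current_gray, thresh: int = 30):
    """Locate a motionless object by differencing the current frame against a
    stored background image of the empty monitoring area."""
    diff = cv2.absdiff(background_gray, current_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Remove small speckles before extracting the object region.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Return the bounding box of the largest remaining region as the object trace.
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)
```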

5. CONCLUSION

In this paper, we propose a platform surveillance monitoring system that uses image processing technology for passenger safety in railway stations. The proposed system monitors almost the entire length of the track line in the platform with multiple cameras and determines in real time whether a person or a dangerous obstacle is inside a preset monitoring area. We verify the system performance with experimental results obtained under real conditions: train states and objects are detected robustly by the proposed image processing algorithm. Currently, we are pursuing an effective information transmission system for dealing immediately with safety accidents.





