Development of an Automatic Traffic Conflict Detection System Based on Image Tracking Technology

Jutaek Oh, Joonyoung Min, Myungseob Kim, and Hanseon Cho

Increasing reliance on surveillance has emphasized the need for better vehicle detection, such as with wide-area detectors. Traffic information from vehicle trajectories can be especially useful because it measures spatial information rather than single-point information. Additional information from vehicle trajectories could lead to improved incident detection, both by identifying stopped vehicles within the camera's field of view and by tracking detailed vehicle movement trajectories. In this research, a vehicle image processing system was developed by using a vehicle tracking algorithm, and a traffic conflict technology was applied to the tracking system. To overcome the limitations of existing traffic conflict technology, this study developed a traffic conflict technology that considers the severity of different types of conflict. To apply this method, video images were collected from intersections at Jungja and Naejung in Sungnam City, South Korea. The image processing approach adopted in this research was based on the use of a single camera installed at the corner of a street to detect vehicles approaching an intersection from all directions; these movements were analyzed with the traffic information extracted from the image tracking system. To verify the tracking system, three categories were tested: traffic volume and speed accuracy, vehicle trajectory tracking, and traffic conflict.

The quest for better traffic information, and the consequent additional reliance on traffic surveillance, has increased the need for better vehicle detection such as wide-area detectors. Meanwhile, the high costs and safety risks associated with lane closures have directed the search toward noninvasive detectors mounted beyond the edge of the pavement. One promising approach is a video image processing system (VIPS), which can go beyond traditional traffic parameters. VIPSs are generally divided into two categories: the tripwire system, which obtains spot information at a single point, and the tracking system (1). Spatial traffic information from tracked vehicles can be more useful than tripwire information at a single point, because it can measure true density instead of simply recording the detector occupancy. In fact, by averaging trajectories over space and time, traditional traffic parameters become more stable than the corresponding measurements from point detectors, which can be averaged only over time (2). Additional information from vehicle trajectories can also bring about improved accident or incident detection, both by detecting stopped vehicles within the camera's field of view and by identifying acceleration or deceleration patterns that are indicative of incidents beyond the camera's field of view. Although the latest tripwire VIPSs have proved effective at monitoring traffic flows, they cannot easily generate effective safety data, because their detection algorithms have been developed to generate spot-point traffic information. Even the present tracking VIPSs generate the same traffic parameters as the tripwire systems, because they focus on providing more detailed traffic parameters for traffic flow management than the tripwire systems do. In other words, even though tracking systems could provide comprehensive traffic safety measures, the latest tracking systems provide only limited safety information, such as simple incident detection. To formulate comprehensive traffic safety measures, traffic safety should be objectively evaluated. In the daily work of improving the traffic environment, it is important to identify which situations are hazardous and what makes them hazardous. It is also important to assess whether a modification is beneficial. It is difficult to evaluate the effects of traffic measures in terms of the change in the number of traffic accidents, however, because traffic accidents are unpredictable and rare. Therefore, a VIPS was developed that satisfies both traffic flow and safety management. This paper presents an image processing algorithm that monitors individual vehicle trajectories on the basis of traffic conflict evaluation techniques, which could become the leading-edge technology in image processing for traffic safety. First, individual vehicles were tracked. Second, detailed traffic information for individual vehicles, such as volume and speed, was offered. Third, an automatic detection line calculation algorithm was developed to detect detailed vehicle movements. Finally, traffic conflict evaluation techniques were applied to the tracking VIPS. The image processing approach adopted in this research was based on the use of a single camera installed at the corner of a street to detect vehicles approaching an intersection from all directions.

J. Oh, M. Kim, and H. Cho, Korea Transport Institute, 2311 Daehwa-dong, Ilsanseo-gu, Goyang-si, Gyeonggi-do 411-701, South Korea. J. Y. Min, Sangji Youngseo College, 660 Woosan-dong, Wonju-si, Gangwon-do 200-713, South Korea. Corresponding author: J. Oh, [email protected]. Transportation Research Record: Journal of the Transportation Research Board, No. 2129, Transportation Research Board of the National Academies, Washington, D.C., 2009, pp. 45–54. DOI: 10.3141/2129-06

PRIOR RESEARCH AND STATE OF THE PRACTICE

Vehicle detection technologies can be classified into background subtraction, temporal differencing, and optical flow. Background subtraction takes the difference between the current image and a reference background image pixel by pixel. This approach is sensitive to background changes with the time of day, the weather, and the seasons. To address this problem, effective background maintenance algorithms have been proposed for background prediction and weight learning, such as
wallflower (3), Gaussian mixture learning (4), and the Kalman filter technique (2). In temporal differencing, because moving objects change their intensity faster than static objects do, consecutive frames are differenced to identify changes and adapt to the dynamic scene. Optical flow identifies the characteristics of the flow vectors of moving objects over time and can detect independently moving objects even in the presence of camera motion. It also requires special hardware, because finding the translation vectors for all pixels with optical flow is computationally demanding. Previous image processing and object tracking techniques were applied mostly to traffic video analysis to address queue detection, vehicle classification, and volume counting (5). The computer vision literature classifies video tracking approaches into model-based tracking, region-based tracking, active contour-based tracking, and feature-based tracking (2).

Model-based tracking is highly accurate for a small number of vehicles (6). Its most serious weakness is its reliance on detailed geometric object models; it is unrealistic to expect detailed models for all vehicles on the roadway.

In region-based tracking, the process is typically initialized with the background subtraction technique. In this approach, the VIPS identifies a connected region in the image, a blob associated with each vehicle, and then tracks it over time by using a cross-correlation measure. The Kalman-filter-based adaptive background model allows the background estimate to evolve as weather and time of day affect lighting conditions. Foreground objects (vehicles) are detected by subtracting the incoming image from the current background estimate, looking for pixels where this difference image is above some threshold, and finding connected components (2). This approach works fairly well in free-flowing traffic; under congested conditions, however, vehicles partially occlude one another instead of being spatially isolated, which makes it difficult to segment individual vehicles. Such vehicles are grouped together as one large blob in the foreground image.

Complementary to the region-based approach, active contour-based tracking is based on active contour models. The idea behind it is to maintain a representation of the bounding contour of the object and to keep updating it dynamically. The advantage of a contour-based representation over a region-based one is reduced computational complexity. However, the active contour-based approach still cannot segment vehicles that are partially occluded; if a separate contour could be initialized for each vehicle, then each one could be tracked even in the presence of partial occlusion (2, 7).

An alternative approach abandons the idea of tracking objects as a whole and instead groups subfeatures and tracks their trajectories (8). The advantage of feature-based tracking is that, even in the presence of partial occlusion, some features of the moving object remain visible. Furthermore, the same algorithm can be used to track in daylight, twilight, or nighttime conditions.

METHODOLOGY FOR AUTOMATIC CONFLICT DETECTION

This section explains the basic idea behind the tracking algorithm developed in this research. Vehicle tracking was based on a region-based approach, which most commercial VIP systems use; with it, not only are individual vehicles tracked, but safety information such as incident and conflict detection can also be derived from the tracks.


The procedure used for this paper consists of seven steps, as shown in Figure 1. Steps 1–4 perform general tracking, in which morphological processes such as background subtraction, moving vehicle segmentation and noise removal, and individual identification (ID) labeling are carried out. Conflict detection is treated in Steps 5 through 7, which characterize this study. In Step 5, a method was proposed for detecting vehicle movement toward each direction in the crossroad by setting detection lines that section every area of the crossroad at a specific distance. Conflicts were detected with the conflict algorithm in Step 6, and a system for printing out the conflict results was developed with the actual experiment images.

Step 1. Acquisition of Images

For moving object extraction, vehicles passing through the detection area are found with a video background subtraction algorithm: the background template f(x, y, t0) in the detection area is saved beforehand [where f(x, y, t0) is the gray value (0–255) of the point (x, y) at time t0]; the current frames f(x, y, ti) are taken [where f(x, y, ti) is the gray value (0–255) of the point (x, y) at time ti]; and the difference between the two images is calculated pixel by pixel. A sample difference image between two images taken at times t0 and ti is presented in Figure 2.

Step 2. Deciding Threshold for Binarization

In an ideal case, the histogram of the color or gray distribution has a deep and sharp valley between two peaks that represent the objects and the background. In most real images, however, it is difficult to detect the valley bottom precisely (9). Therefore, the optimal threshold can be obtained by running the experiments several times. A popular theoretical way to find the optimal threshold is the Otsu algorithm (9).
This approach is based on the idea that the optimal threshold is the point that maximizes the between-class variance (σB²) and minimizes the within-class variance (σW²) of the pixel distribution:

λ = σB² / σW²    (1)

where λ is an optimal threshold criterion that maximizes the between-class variance and minimizes the within-class variance. The optimal threshold k*, the discriminant criterion that maximizes λ, can be found as

σB²(k*) = max σB²(k),    1 ≤ k ≤ L

where the pixels of the given picture are represented in L gray levels [1, 2, . . . , L]. The threshold in this research was chosen heuristically on the basis of the experimental position, because the threshold exhibited a wide variance depending on position and time. Therefore, the optimal threshold was obtained by running experiments several times; in this experiment, the threshold was set at the gray level of 27. The binarization is

d0,i(x, y) = 1 if f(x, y, t0) − f(x, y, ti) > θ, and 0 otherwise    (2)

where d0,i is the binary value (0 or 1) of the point (x, y) at the ith frame and θ is the threshold for the gray level.
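Steps 1 and 2 can be sketched as a minimal NumPy version of the pixel-by-pixel subtraction and binarization. The fixed threshold of 27 follows the text; the use of an absolute difference (which catches objects both brighter and darker than the background) and the toy images are illustrative assumptions.

```python
import numpy as np

def binarize_difference(background, frame, theta=27):
    """Pixel-by-pixel difference between the background template
    f(x, y, t0) and the current frame f(x, y, ti), binarized with
    threshold theta as in Equation 2. Inputs are 8-bit gray images."""
    diff = frame.astype(np.int16) - background.astype(np.int16)
    # d_{0,i}(x, y) = 1 where the difference exceeds theta, else 0
    return (np.abs(diff) > theta).astype(np.uint8)

# Toy example: a bright "vehicle" appears against a dark background.
background = np.zeros((4, 4), dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200          # moving object pixels
mask = binarize_difference(background, frame)
```

The resulting mask marks only the four pixels where the vehicle appeared.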


FIGURE 1 Data processing procedure of conflict decision program: ROI = region of interest. (The flowchart covers Step 1, image acquisition with ROI masking; Step 2, binarization threshold of 27 to 255 chosen heuristically by position; Step 3, morphology with a 3 × 3 mask to close, fill small holes, and remove small particles as noise; Step 4, particle analysis, bounding rectangles, and vehicle ID generation with a reference table; Step 5, automatic detection line calculation from a global coordinate axis originating at the CCTV camera, rotated lines W1 and W2, and division into N segments; Step 6, conflict detection from signal violation, vehicle proximity within the detection zone, and overlapping expected stopping distances, yielding Levels 1 to 3 or Safe; and Step 7, display and storage of the conflict results by conflict type and moving direction.)


FIGURE 2 Background subtraction for extracting individual vehicles from roads: (a) load template image, (b) acquire image, and (c) subtraction images to a template (b – a).

Step 3. Morphology

Mathematical morphology is a tool for extracting image components, such as boundaries, skeletons, and convex hulls, that are useful in the representation and description of a region's shape (10). The morphology process in this research consists of three operations: closing, using dilation with a 3 × 3 structuring element followed by erosion of the result; filling in small holes within vehicle blobs; and removing very small objects from the frame, which are considered noise.
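The three morphological operations can be sketched with SciPy's ndimage module. The 3 × 3 structuring element follows the text; the minimum particle size of 20 pixels and the toy mask are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def clean_mask(mask, min_size=20):
    """Morphological cleanup as in Step 3: closing with a 3x3
    structuring element (dilation followed by erosion), hole filling,
    and removal of very small particles treated as noise."""
    se = np.ones((3, 3), dtype=bool)
    closed = ndimage.binary_closing(mask.astype(bool), structure=se)
    filled = ndimage.binary_fill_holes(closed)
    # Drop connected components smaller than min_size pixels.
    labels, n = ndimage.label(filled)
    sizes = ndimage.sum(filled, labels, range(1, n + 1))
    return np.isin(labels, 1 + np.flatnonzero(sizes >= min_size))

# Toy mask: a blob with a one-pixel hole plus an isolated noise pixel.
mask = np.zeros((12, 12), dtype=bool)
mask[2:9, 2:9] = True
mask[5, 5] = False      # hole inside the blob
mask[10, 10] = True     # single-pixel noise
cleaned = clean_mask(mask)
```

After cleanup, the hole inside the blob is filled and the isolated pixel is gone.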

Step 4. Generating Vehicle IDs

The particles (vehicles) in frame It were each enclosed in a rectangle, and each vehicle was given a new ID, which was stored together with its (x, y) and (top, bottom) coordinates in the reference table (RT). In the next frame, It+1, the particles within the detection zone were counted, and the rectangle whose coordinates were closest to those of a rectangle from the prior frame It in the stored RT was given the same vehicle ID as that rectangle. Otherwise, a new vehicle ID was generated, either for a new vehicle entering the detection zone or to separate two vehicles that were bound in one rectangle in the prior frame.
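The ID propagation of Step 4 amounts to a nearest-neighbor match against the reference table. The sketch below is a greedy version under assumed simplifications: centroids stand in for the stored rectangles, and the `max_dist` gate for declaring a new vehicle is illustrative, not from the paper.

```python
import math

def assign_ids(prev, current, next_id, max_dist=30.0):
    """Greedy ID propagation: each particle in frame t+1 inherits the
    ID of the closest unmatched entry from frame t's reference table;
    otherwise it gets a new ID.  `prev` maps vehicle ID -> (x, y)
    centroid; `current` is a list of (x, y) centroids."""
    assigned, used = {}, set()
    for cx, cy in current:
        best_id, best_d = None, max_dist
        for vid, (px, py) in prev.items():
            d = math.hypot(cx - px, cy - py)
            if vid not in used and d < best_d:
                best_id, best_d = vid, d
        if best_id is None:          # new vehicle entering the zone
            best_id, next_id = next_id, next_id + 1
        used.add(best_id)
        assigned[best_id] = (cx, cy)
    return assigned, next_id

prev = {1: (10.0, 10.0), 2: (50.0, 50.0)}
current = [(12.0, 11.0), (49.0, 52.0), (100.0, 100.0)]
assigned, next_id = assign_ids(prev, current, 3)
```

Here the first two particles inherit IDs 1 and 2, and the distant third particle receives the new ID 3.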

Step 5. Automatic Detection Line Calculation

Because the images from the crossroad closed-circuit televisions (CCTVs) are obliquely angled, it has been difficult to acquire information other than traffic volume, such as speed, occupancy rate, and conflicts, because the distance within the crossroad cannot be calculated accurately. In general, image detectors set up a two-dimensional (rectangular) detection area for each lane and, with this area used as a background image, detect vehicles through the difference between succeeding images. With this method, one or two references are set for the vehicle movement direction, and research up to now has been carried out in such an environment. Because the directions of vehicle movement vary at intersections (moving straight, turning left or right, making a U-turn), however, the existing two-dimensional detection areas cannot be applied to the detection of conflict information, and no research on image detectors has addressed this issue.

According to various factors, such as the CCTV camera's angle and height and the crossroad's width and length, images at different angles are entered from each crossroad. In this study, the conversion of three-dimensional images into two-dimensional data requires algorithm-based processing. To achieve this conversion, as shown in Figure 3, approximate values were calculated by dividing the images into constant angles centered on the vertices (p1, q1) and (p2, q2) to calculate the distance between the detection lines [where (p1, q1) is the crossing point between the lines (x1, y1)–(x2, y2) and (x3, y3)–(x4, y4)]. This algorithm can be summarized in the following steps:

1. Set up a global coordinate axis (XG, YG) originating from the CCTV camera and a coordinate axis (XM, YM) based on the image.
2. Calculate the points (p1, q1) and (p2, q2) where the lines that pass through the crossroad's polygonal detection area (x1, y1), (x2, y2), (x3, y3), and (x4, y4) cross.
3. Derive line W1, formed by rotating the line (p2, q2) → (x3, y3) around the point (x3, y3) by angle −θ.
4. Divide the area where the line (p1, q1) → W1 meets into N segments, and then calculate the equations of the lines between each point of the N segments (n1, n2, . . . , nN) and (p1, q1).
5. Calculate the points that intersect with the lines that pass through (p2, q2) → (x3, y3) and (p2, q2) → (x4, y4).
6. In the same way as in Step 3, derive line W2, formed by rotating the line (p1, q1) → (x3, y3) around the point (x3, y3) by angle −θ (Equation 3).
7. Divide the area where the line (p2, q2) → W2 meets into M segments, and then calculate the equations of the lines between each point of the M segments (n1, n2, . . . , nM) and (p2, q2).
8. Calculate the points that intersect with the lines that pass through (p1, q1) → (x1, y1), (p1, q1) → (x3, y3) and (p2, q2) → (x3, y3), (p2, q2) → (x4, y4).

Figure 4 shows the images before and after application of the automatic detection line calculation algorithm at the Naejung intersection.
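The geometric primitives used throughout these steps are the intersection of two lines and the rotation of a line endpoint about a fixed point. A minimal sketch of both, with function names and the sample points chosen for illustration only:

```python
import math

def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1-p2 with the line through
    p3-p4, as needed to find the crossing points (p1, q1) and
    (p2, q2) of the polygonal detection area.  Returns None for
    parallel lines."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if den == 0:
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def rotate_about(point, center, theta):
    """Rotate `point` around `center` by angle -theta (radians), as in
    the derivation of lines W1 and W2."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    c, s = math.cos(-theta), math.sin(-theta)
    return (center[0] + c * dx - s * dy, center[1] + s * dx + c * dy)
```

For example, the diagonals of a unit-scaled square intersect at their common midpoint, and rotating a point on the x-axis about the origin sweeps it toward the negative y-axis.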

Step 6. Traffic Conflict Decision Algorithm

A traffic conflict occurs when a driver brakes or swerves to avoid a collision with another vehicle (11). Expanding this concept, Hyden classified the conflicts that occur while driving into four levels: undisturbed passages, potential conflicts, slight conflicts, and serious conflicts, and explained that the peak of the serious conflicts that occur in passages leads to accidents, at whose apex are fatal and near-fatal accidents (12). Glauz et al. analyzed the interrelationship between accidents and conflicts and found that particular types of traffic accidents can be predicted more precisely by using traffic conflict evaluation technology than by using data on past accidents, because the available data on past traffic accidents are not sufficient to enable accident prediction (13).

FIGURE 3 Applying the algorithm developed in this study to an intersection.

FIGURE 4 Applying the automatic detection line calculation algorithm: (a) before and (b) after.

Most past studies of traffic conflicts used observers to decide on conflicts. The observer personally verified whether the driver of the postpassing vehicle exhibited an avoidance behavior when the driver of the prepassing vehicle breached the signal, and the verified instances were counted. This method, however, relies on human judgment and has a major drawback: it is likely to reflect the subjectivity of the observer or analyst and to consider the seriousness of accidents and conflicts insufficiently. In addition, past conflict decision methods merely counted all the vehicles whose drivers violated a traffic signal and did not consider other factors, such as the distance between vehicles and avoidance responses. Therefore, to establish more accurate conflict decision criteria, this study considered four levels of conflict (LOCs): Level 1, signal violations; Level 2, slight conflicts; Level 3, dangerous conflicts; and Level 4, serious conflicts (accidents).

LOC1. Signal Violations

With the same method as was used in the past to resolve conflicts at intersections, the instances in which the driver of the prepassing vehicle unreasonably entered the intersection at a yellow signal were counted as signal violations. These instances were counted by using intersection image tracking and by linking the positional data of each vehicle with the time and signal indication at the intersection. The times of entry and exit of each vehicle at the intersection were compared with the signal timing, and the instances of signal violations were counted.


LOC2. Slight Conflicts

When the driver of a preceding vehicle violated the signal and the driver of a following vehicle then entered the intersection, a conflict decision can be made by estimating the distance between the stopping points of the two vehicles. This distance was estimated through the X, Y coordinates obtained by image tracking. The stopping points of the two vehicles were then marked on the coordinates and updated in real time so that they could be identified. If the projected paths to the stopping points of the two vehicles do not meet or cross, the conflict is decided to be slight rather than serious.

LOC3. Dangerous Conflicts

LOC2 is a signal violation condition in which the drivers of two vehicles pass within the stopping distance; LOC3 is a condition in which the path tracks of the two vehicles cross when the distance between their stopping points is estimated and plotted on the coordinates. The procedure for developing the conflict decision criteria at this level follows that of the slight conflict (LOC2) condition: a dangerous conflict occurs when the stopping paths of the two vehicles meet or cross each other.

LOC4. Serious Conflicts (Accidents)

With the coordinates of the two vehicles obtained through image tracking, when the regions that represent the extent of each vehicle meet or overlap, the conflict is decided to be an accident. Specifically, the minimum distance D between the two vehicles satisfies D ≤ 0, as shown in Figure 5.
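The four-level decision described above can be summarized as a small classifier. The boolean inputs are illustrative stand-ins for the image-tracking measurements; the function name and the "Safe" return value of 0 are assumptions, not from the paper.

```python
def classify_conflict(signal_violation, nearby, stop_paths_cross,
                      bodies_overlap):
    """Map an observed situation to one of the four levels of conflict
    (LOCs): a detected signal violation, another vehicle nearby within
    the detection zone, crossing of the projected stopping paths, and
    overlap of the vehicle outlines (minimum distance D <= 0)."""
    if not signal_violation:
        return 0                      # safe: no conflict recorded
    if bodies_overlap:
        return 4                      # LOC4: serious conflict (accident)
    if stop_paths_cross:
        return 3                      # LOC3: dangerous conflict
    if nearby:
        return 2                      # LOC2: slight conflict
    return 1                          # LOC1: signal violation only
```

The ordering mirrors the flowchart of Figure 1: each successive test is reached only after the previous, less severe condition is met.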

FIGURE 5 LOC4: serious conflict situations in (a) opposing left-turn conflict type and (b) cross-traffic conflict type.

IMPLEMENTATION OF AUTOMATIC TRAFFIC CONFLICT DECISION PROGRAM

This program consists of the following functional modules: an image input module that acquires images from the CCTV camera; a save-to-buffer module that stores the entered images by classifying them into background images, current images, difference images, and segmentation images; and a conflict detection module that displays the processed results.

The program was developed with LabVIEW 8.5 (a graphical language) and the Vision module library. In this research, 30-frame-per-second images of 640 × 480 resolution acquired from crossroad CCTVs were converted into digital images in the frame-grabber board for image processing. The algorithm ran on a hardware system with a quad-core 2.4-GHz central processing unit and 2 GB of random-access memory; the authors' system measures vehicle trajectories and detailed traffic information and detects conflicts in real time at 12 to 16 frames per second.

The road image within the detection area was stored as the background image. To increase the processing speed, the images within the detection area were converted into 256-level gray format. This background image was defined as the t-frame image and was stored in the memory buffer (LL_grabAcq). To extract (segment) moving vehicles, the (t + i) frame image was stored in the memory buffer (template). The brightness difference between LL_grabAcq and template was calculated through 1:1 matching of the addresses to produce the difference image, which was then stored in the buffer (ImgSubtract). Because the difference image could not accurately extract the region of a moving vehicle, the region had to be dilated with a morphology algorithm; in this program, morphologic dilation was done with 3 × 3 masks (structuring elements).

After calculation of the actual distance within the detection area and of the actual moving distance of each vehicle in the previous stage, the speed was calculated so that the braking distance of each vehicle could be obtained. The speed was calculated at 0.1-s intervals along the actual moving distance, followed by the braking distance, which used the calculated speed, the friction coefficient, and the vehicle's length. In addition, the moving direction was calculated by extending the slope between the previous frame (t − 1) and the current frame (t), and the expected braking distance at the next frame (t + 1) was calculated. To detect traffic signal violations, the color values of the four traffic lights at the crossroad were processed without conversion into 256-level gray format.

Conflict detection processing was performed according to the four conflict levels. At LOC1, the signal violation status was detected. Processing moved to LOC2 if, after a vehicle's traffic


signal violation was detected, the calculated braking distance of that vehicle was found not to intersect with the braking distance of a vehicle moving in the opposite direction. At LOC3, a situation was detected in which a collision was expected because the braking distances of the main vehicle and the object vehicle intersected. Finally, LOC4 was detected if an accident occurred. To display all the results of the image input, image processing, and conflict decision, the screen was divided into five areas: the current image output area, the moving vehicle partition area, the braking distance area, the moving direction calculation area, and the conflict severity result text area. The conflict identification program implemented as discussed here is shown in Figure 6.
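The speed and braking-distance computations described above can be sketched as follows. The 0.1-s sampling interval follows the text; the standard stopping-distance formula d = v²/(2gf), the friction coefficient of 0.7, and the omission of the vehicle-length term are assumptions for illustration.

```python
def speed_kmh(p_prev, p_curr, dt=0.1):
    """Speed in km/h from two positions (in metres) sampled dt seconds
    apart, matching the 0.1-s interval used by the program."""
    dx, dy = p_curr[0] - p_prev[0], p_curr[1] - p_prev[1]
    return (dx * dx + dy * dy) ** 0.5 / dt * 3.6

def braking_distance_m(v_kmh, friction=0.7, g=9.8):
    """Stopping distance d = v^2 / (2 g f) for speed v in km/h.
    The friction coefficient 0.7 is an assumed dry-pavement value;
    the paper also adds the vehicle's length, omitted here."""
    v = v_kmh / 3.6
    return v * v / (2.0 * g * friction)
```

For instance, a vehicle that covers 1 m in 0.1 s travels at 36 km/h and, under these assumptions, needs roughly 7.3 m to stop.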

FIGURE 6 Traffic conflict detection program.

DATA COLLECTION AND FIELD TEST

To verify the applicability of the system developed in this research, two highly accident-prone intersections were selected for the collection of vehicle movement images. The images were collected over 15 min on April 24, 2008. The cameras at the two sites were installed at different heights (6 m at one site and 15 m at the other) to test the system under various environments. The tracking system was tested by using three criteria: volume and speed accuracy, vehicle trajectory tracking, and traffic conflicts.

The recorded images were compared with those of the commercial VIPS Autoscope because the test sites were not equipped with other detectors, such as loop detectors. Because Autoscope cannot measure turning movements or track vehicle trajectories, the comparison was conducted with only through traffic and average vehicle speeds. Table 1 shows the through-traffic volume and speed results: the total traffic volume at the Naejung intersection was 74 from the tracking system and 76 from Autoscope, whereas the total at the Jungja intersection was 102 from the tracking system and 104 from Autoscope. The ground-truth counts of the recorded images were 76 at the Naejung intersection and 103 at the Jungja intersection, which shows that both systems generate reliable volume counts. As for speed consistency, the systems showed minor differences (within 1 to 3 km/h).

With regard to vehicle trajectory tracking, as mentioned previously (Step 5), the camera images were two-dimensional images converted from three-dimensional ones, and they were skewed according to the camera's height and angle, which means they were not rectangular. Thus, to verify the vehicle trajectory tracking algorithm, the real distances between the images in the field were measured, and the real-distance data were applied to the X, Y coordinates of the tracking system. Figure 7 shows an image of the Naejung intersection and the vehicle movement trajectories at the intersection. As shown in Figure 7, the tracking system ably traced the vehicle trajectories, including the turning movements, by drawing the detailed X, Y coordinate records.

To apply the traffic conflict decision criteria, the same vehicle images recorded at the Jungja and Naejung intersections were used. Table 2 shows some of the data generated by the tracking system for traffic conflict analysis. As shown in Table 2, when vehicles experience traffic conflicts, the system determines their LOC on the basis of the conflict decision criteria. The system identified 33 vehicles under LOC1 in the recorded images. Of the 33 vehicles, 26 advanced to LOC2, and three were recorded under LOC3. None of the vehicles reached LOC4, which means there was no accident. Figure 8 shows some traffic conflict cases that the tracking system detected. Figure 8a shows an LOC1 situation in which a vehicle with ID 44288 violated a signal but there was no other vehicle. Figure 8b shows an LOC2 situation between the vehicles with IDs 70178 and 64335; in this case, the vehicle with ID 70178 violated a signal. Figure 8c shows


TABLE 1 Traffic Volume and Speed Results (for each intersection, rows list vehicle ID, average tracking speed, and Autoscope speed in km/h). Naejung intersection: total count 74 by tracking versus 76 by Autoscope, with average speeds of 37.9 and 37.4 km/h. Jungja intersection: total count 102 by tracking versus 104 by Autoscope, with average speeds of 46.7 and 46.5 km/h.

FIGURE 7 Vehicle movement trajectories: (a) image of the Naejung intersection and (b) tracked vehicle trajectories.


TABLE 2

53

Data from Traffic Conflict Detection Program

Date

Time

Time Frame

Vehicle ID

Front (X coordinate)

Front (Y coordinate)

Center (X coordinate)

Center (Y coordinate)

Rear (X coordinate)

Rear (Y coordinate)

LOC

Speed (km/h)

2008-05-22 2008-05-22 2008-05-22 2008-05-22 2008-05-22 2008-05-22 2008-05-22 2008-05-22 2008-05-22 2008-05-22 2008-05-22 2008-05-22 2008-05-22 2008-05-22

5:17:12 5:17:12 5:17:12 5:17:12 5:17:12 5:17:12 5:17:12 5:17:12 5:17:12 5:17:13 5:17:13 5:17:13 5:17:13 5:17:13

7,956.3 7,956.3 7,956.3 7,956.4 7,956.4 7,956.4 7,956.5 7,956.5 7,956.5 7,956.6 7,956.6 7,956.6 7,956.7 7,956.7

72,871 40,065 2,150 72,871 40,065 2,150 72,871 40,065 2,150 72,871 40,065 2,150 72,871 40,065

148.2 117.5 189.6 145.7 105.9 182.8 142.8 93.9 173.5 138.7 90.2 163.0 136.8 75.3

307.7 57.6 88.3 317.2 62.8 94.6 322.2 60.3 88.0 328.1 62.4 95.1 334.3 62.4

167.0 155.8 227.2 163.3 145.6 220.2 159.6 134.5 211.3 155.3 122.8 201.3 152.2 110.7

282.4 59.7 97.0 288.4 60.4 96.6 294.6 60.4 95.0 301.4 60.9 95.0 307.0 61.3

185.8 194.1 264.8 180.9 185.4 257.6 176.3 175.1 249.1 171.9 155.5 239.6 167.6 146.0

257.0 61.9 105.6 259.6 57.9 98.6 267.0 60.4 101.9 274.8 59.4 94.9 279.7 60.2

2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0 2.0

38.3 43.5 32.5 38.3 43.4 32.5 38.4 43.4 32.6 38.5 43.5 32.5 38.3 43.4

2008-05-22

5:17:13

7,956.7

2,150

153.0

90.1

190.9

93.9

228.8

97.8

2.0

32.5
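Because the time frames in Table 2 advance in 0.1-s steps, the reported speeds can in principle be recovered from successive center coordinates once those coordinates have been mapped to ground distances. A minimal sketch, assuming positions already converted to metres (the raw Table 2 values are still in image units, and the positions below are illustrative only):

```python
import math

FRAME_DT = 0.1  # s; Table 2 time frames advance in 0.1-s steps

def frame_speed(p0, p1, dt=FRAME_DT):
    """Speed in km/h from two successive centre positions (metres)."""
    dist_m = math.dist(p0, p1)          # straight-line ground distance
    return dist_m / dt * 3.6            # m/s -> km/h

# Illustrative only: two successive centre points for a hypothetical vehicle.
print(round(frame_speed((10.00, 5.00), (10.80, 5.60)), 1))  # 36.0
```

Smoothing over several frames (rather than a single 0.1-s step) would reduce the jitter visible in per-frame position estimates.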

FIGURE 8  LOCs when traffic conflicts occur: (a) vehicle 44288; (b) vehicles 70178 and 64335; (c) vehicles 97147 and 75721. [Each panel plots LOC (0 to 4) against time (sec).]


an LOC3 situation between the vehicles with IDs 97147 and 75721, in which, at time 7,964.4, the two vehicles moved from LOC2 into LOC3. No LOC4 (serious conflict or accident) situation was detected in the recorded images (see Figure 8).
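The LOC transitions described above can be sketched as a classification step applied to each tracked vehicle involved in a signal violation. The numeric thresholds below are placeholders, since the paper's conflict decision criteria are not reproduced in this excerpt; only the LOC semantics (LOC1 = violation with no interacting vehicle, LOC4 = serious conflict or accident) follow the text:

```python
def classify_loc(signal_violation, ttc_s=None):
    """Assign a level of conflict (LOC) for one observation.

    signal_violation -- whether a vehicle ran the signal in this frame
    ttc_s            -- projected time to collision with another vehicle
                        (seconds), or None if no collision course exists.
    The TTC thresholds are placeholders, not the authors' criteria.
    """
    if not signal_violation:
        return 0
    if ttc_s is None:
        return 1   # LOC1: violation, but no vehicle on a collision course
    if ttc_s <= 0.0:
        return 4   # LOC4: contact -- serious conflict or accident
    if ttc_s < 1.0:
        return 3   # LOC3: serious conflict
    return 2       # LOC2: conflict with another vehicle

# A violating vehicle whose projected TTC shrinks moves LOC2 -> LOC3,
# mirroring the transition at time 7,964.4 described in the text.
print(classify_loc(True), classify_loc(True, 1.5), classify_loc(True, 0.8))  # 1 2 3
```

Re-evaluating this classification every frame yields exactly the LOC-versus-time traces shown in Figure 8.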


CONCLUSIONS

Traffic information derived from vehicle trajectories can be especially useful because it captures spatial information rather than single-point information. Additional information from vehicle trajectories could improve incident detection, both by detecting stopped vehicles within the camera's field of view and by tracking detailed vehicle movement trajectories.

In this research, a VIPS with a tracking algorithm was developed, and traffic conflict evaluation technology was applied to the tracking system. This traffic conflict evaluation technique is expected to play a leading role in the future, both practically and theoretically, because it is important to identify which situations are hazardous and because it is difficult to evaluate the effects of traffic measures merely in terms of changes in the number of traffic accidents. Development of traffic conflict evaluation techniques should therefore include real-time automatic image processing for detecting safety-related events, so that traffic safety can be evaluated objectively. In particular, detailed microscopic data on vehicle behavior at intersections can be evaluated on the basis of traffic conflict evaluation technology.

Because previous traffic conflict evaluation methods relied on manually counting signal violations, this study differentiated the types of traffic conflicts that are likely to occur at the time of a signal violation and developed a traffic conflict evaluation technology that considers the severity of each conflict type. To apply this method, traffic images were collected at the Jungja and Naejung intersections in Sungnam City, South Korea, and analyzed with the traffic information extracted by the image tracking system. To verify the tracking system, the measured traffic information, such as volume and speed, was compared with that of Autoscope; both systems performed well on the recorded images.

Thirty-three vehicles experienced LOC1, and 26 experienced LOC2. Only three vehicles experienced LOC3 (serious conflicts), and no LOC4 (dangerous conflicts or accidents) situations, as defined in this study, were observed. These results imply that although few vehicles experience dangerous LOCs at intersections, intersections remain dangerous, and more effort is needed to alert drivers to safety risks. In future work, the tracking system developed in this study should be tested under various environmental conditions, and its applications should extend to automatic accident detection, which still requires considerable effort in the image processing field.

ACKNOWLEDGMENT

This research was supported by a grant from the Transportation System Innovation Program of the Ministry of Land, Transportation, and Maritime Affairs of South Korea.

The Intelligent Transportation Systems Committee sponsored publication of this paper.
