Developing Operating Mode Distribution Inputs for MOVES with a Computer Vision–Based Vehicle Data Collector

Zhuo Yao, Heng Wei, Zhixia Li, Tao Ma, Hao Liu, and Y. Jeffrey Yang

Acquisition of reliable vehicle activity inputs to the U.S. Environmental Protection Agency's MOVES (Motor Vehicle Emission Simulator) model is necessary for maximizing modeling capacity and helping federal and state officials improve the quality of transportation management. For this purpose, rapid and low-cost collection of the operating mode distribution and other traffic activity data for the MOVES model is necessary. In this study, a computer vision–based software tool, Rapid Traffic Emission and Energy Consumption Analysis (REMCAN), is developed to enable rapid operating mode distribution profiling for the MOVES model. The video-based system provides traffic activity inputs, including vehicle speeds and acceleration and deceleration rates covering the entire vehicle fleet; these may be difficult to extract from traffic data collected by traditional methods. The REMCAN system architecture and vehicle parameter extraction methods are presented. The speed measurement, which is the most critical factor for operating mode profiling, is calibrated with a coefficient that converts screen space to real-world space. Three case studies with different traffic operation scenarios are tested to demonstrate the capability of the REMCAN system. The integration of REMCAN traffic activity data collection and MOVES operating mode distribution generation provides timely, low-cost, and accurate environmental impact assessment compared with traditional data sources for emission estimation analysis.

Z. Yao, T. Ma, and H. Liu, College of Engineering and Applied Science, University of Cincinnati, 735 ERC, 2901 Woodside Drive, Cincinnati, OH 45221-0071. H. Wei, Advanced Research in Transportation Engineering Systems Laboratory, College of Engineering and Applied Science, University of Cincinnati, 792 Rhodes Hall, 2851 Woodside Drive, Cincinnati, OH 45221-0071. Z. Li, Traffic Operations and Safety Laboratory, Department of Civil and Environmental Engineering, University of Wisconsin–Madison, 1249A Engineering Hall, 1415 Engineering Drive, Madison, WI 53706. Y. J. Yang, Office of Research and Development, National Risk Management Research Laboratory, U.S. Environmental Protection Agency, 26 West Martin Luther King, Jr., Drive, Cincinnati, OH 45268. Corresponding author: Z. Yao, [email protected].

Transportation Research Record: Journal of the Transportation Research Board, No. 2340, Transportation Research Board of the National Academies, Washington, D.C., 2013, pp. 49–58. DOI: 10.3141/2340-06

Localized traffic activity inputs to emission models are crucial in maximizing their capability to reflect accurately the energy consumption and greenhouse gas emissions associated with transportation programs and projects. Such inputs have critical influences on the emission assessment associated with local or regional projects that may improve travel times, alleviate congestion, and reduce stop-and-go traffic (1, 2). Significant improvement in the accuracy of air quality modeling assessments at the local scale is expected when MOVES (Motor Vehicle Emission Simulator) 2010 is applied for project-level conformity analysis (3–5). The U.S. Environmental Protection Agency (EPA) recommends obtaining the data either from other locations with similar geometric and traffic characteristics or from the outputs of microsimulation models. However, acquiring accurate fleet composition and relevant traffic operating data locally for MOVES remains a challenge in practice.

It has been well recognized that EPA-regulated emissions are associated with traffic operating conditions (4, 6–10). Previous research has shown that on-road traffic-related emissions vary with traffic operating conditions (e.g., speed, acceleration, and deceleration) and fleet composition (i.e., the percentage of each vehicle type in the traffic stream) (10–14). Operating mode distribution is a critical input of the project-level MOVES model and represents the key attribute of link traffic activity. In practice, the air quality models (i.e., emission and dispersion) perform default calculations and produce a summary of pollutant emission factors or concentrations at each link or receptor location by using predetermined default values. However, the default values may not represent real-world conditions. Thus, there is a gap between local traffic activity inputs and the emission and dispersion models. There is a critical need for easy-to-use traffic data sets reflecting real-time vehicle operating characteristics as inputs for MOVES emission and energy consumption analysis. Such a tool should enable the rapid assessment of emissions on the basis of data collected from local roadway links.

In EPA's latest vehicle emission model, MOVES2010, traffic activity inputs for project-level analyses include traffic volume, vehicle composition, and operating mode distribution. Operating mode distribution represents the percentages of various vehicle emission-producing activities on a specific road segment; operating modes are categorized by vehicle speed, vehicle acceleration, and vehicle-specific power (VSP) across the entire vehicle fleet. The data sources for VSP distribution are usually second-by-second vehicle operating data, which provide instantaneous speed, acceleration and deceleration, and roadway grade. Other traffic data sources, such as dual-loop detectors, radar speed recorders, and Global Positioning System (GPS) and Bluetooth-enabled devices, can also be used to generate VSP distributions (15, 16). However, the conventional method of collecting data for operating mode distribution is expensive and time-consuming. Another drawback of traditional traffic data sources is that most of them are model based; systematic errors may be inherited from the modeling algorithm and result in inaccurate VSP. Hence, the availability of easy and reliable operating mode distribution, traffic volume, and vehicle composition data for MOVES analysis based on ground truth data is deficient. Alternatives that can provide operating mode distribution for MOVES inputs to maximize modeling capacity must be found.


Summary of Previous Studies


In the past two decades, a variety of computer vision–based vehicle detection and data extraction techniques have been developed. Several commercially developed video analytics tools for traffic data acquisition, such as Autoscope and Traficon, have been widely used in practice (17, 18). They were designed to collect vehicle counts, speeds, classifications, density, flow, and so forth. Many of these systems require specially manufactured hardware and ancillary equipment such as cameras and detector boards. In addition, these commercial products and applications are not specifically designed for vehicle emission analysis.

A video capture–based approach has been used to extract vehicle trajectory data and to model lane choice and lane change behaviors (19–23). A computer-based tool, the Vehicle Video-Capture Data Collector (VEVID), was developed to facilitate the extraction of trajectory data from video. The application was originally developed for modeling lane choice and lane change, and it was then upgraded and redeveloped for other purposes, such as modeling the dynamic nature of the dilemma zone (24–27). Studies have shown that using VEVID to extract vehicle trajectory data is a powerful way to reveal the dynamic features of the dilemma zone.

Wu et al. (28) introduced a commonly applied vehicle detection and tracking method that uses a roadside camera. The procedure consists of (a) video preprocessing, (b) foreground segmentation, (c) shadow removal, (d) vehicle tracking, and (e) extraction of traffic parameters. Malinovskiy et al. applied recent advances in computer vision and developed a more robust video detection system that is insensitive to the impacts of shadows, sun glare, rapidly changing lighting, and sight-disturbing conditions such as heavy rain; their methodology uses a spatiotemporal map to extract vehicle operation data and minimizes the environmental and occlusion impacts on video-based vehicle detection accuracy (29). Zhang et al. (30) developed a real-time traffic data collection system that uses uncalibrated video cameras. The algorithm applied virtual loop detectors to mimic the situation in which vehicles pass loop detectors and trigger signals, and the results are relatively reliable and accurate. Although the system attempts to use ground truth data, the algorithm still inherits the modeling error embedded in the system.

Little research has been done on the generation of traffic activity data inputs for MOVES through the use of recent advances in computer vision techniques. Botha et al. (31) used video data to measure vehicle operating modes for the prediction of emissions. In their study, video of an instrumented vehicle was taken from a camera mounted on an airborne helicopter; the helicopter followed the instrumented vehicle and provided second-by-second travel records. The study derived the modal activity distribution of multiple roadway links, but no emission analysis was made. Scora et al. (32) developed a computer vision–based monitoring system incorporating energy and emission profiles from the Comprehensive Modal Emissions Model (CMEM) and EPA's MOVES emission factor database. It provides a bridge from real-time traffic data to instantaneous emission estimation. That study borrowed the MOVES model database, but the actual emission model used is CMEM.
This paper extends previous work on generating traffic activity inputs for MOVES in three ways: (a) by using a computer vision–based algorithm to extract second-by-second vehicle operating data, (b) by providing more accurate and reliable vehicle speeds and acceleration and deceleration rates through calibration against GPS probe vehicle data, and (c) by building the capability to automatically extract traffic activity data for MOVES from large video data sets.

Methodology

The goal of this research is to develop a tool that enables rapid estimation of vehicle emissions and energy consumption with the MOVES model from ground truth videos. To accomplish this goal, two objectives must be fulfilled: (a) development of a prototype computer vision–based tool, the Rapid Traffic Emission and Energy Consumption Analysis (REMCAN) system, for project-level MOVES running exhaust traffic data inputs and (b) testing of the REMCAN system and assessment of traffic emissions for three video site cases. In this project, the case study focuses on selected criteria pollutants as specified in the National Ambient Air Quality Standards (33).

An automated computer vision–based system (REMCAN) for ground truth vehicle detection and tracking, based on lessons and experience learned from VEVID, was developed to satisfy the need for vehicle emission analysis. REMCAN outputs, including the extracted vehicle trajectories, are then converted to inputs of the MOVES emission model. The methodology is illustrated in Figure 1. The REMCAN system is implemented in C++ with the OpenCV library. It has four modules: video acquisition, calibration, vehicle parameter extraction, and MOVES input generation and conversion. The video acquisition module enhances, splits, and resizes raw video data into a common standard size and length that the program can read. The calibration module first collects ground truth vehicle activity data from GPS probe vehicles and then uses VEVID to extract additional data for vehicle speed calibration and validation. The vehicle parameter extraction module is the core of the system. It extracts vehicle parameters by first initializing the video by frames and converting them into binary images. Second, the background is segmented by averaging out continuous video frames. Third, the foreground, which contains the vehicle blob, is filtered out. The vehicle type, currently limited to light-duty and heavy-duty vehicles, is then identified on the basis of a preset length threshold. Afterwards, the geometric centroid of the vehicle blob is tracked from one frame of the video to the next. Finally, the vehicle activity data are extracted from sequential frames of the video. The MOVES input generation and conversion module produces the operating mode distribution, link source types, and link volume, which are ready for the MOVES model run. In combination with MOVES global inputs for project-level emission analysis, such as age distribution, meteorological data, and fuel supply and formulation, the MOVES model input database is complete.

Vehicle Parameter Extraction Module

The vehicle parameter extraction module is the core of the REMCAN system; Figure 2 illustrates the module. In the video initialization phase, the video is corrected on the basis of the camera intrinsic and extrinsic parameters. The intrinsic parameters, such as focal length, principal point, and tangential and radial distortion coefficients, are used to recover geometrically undistorted images from the original images. Afterwards, the camera extrinsic parameters (i.e., angles and height) with respect to world coordinates are prepared for the image warping process, which maps objects in three-dimensional space to a two-dimensional plane on the basis of the theory of perspective transformation (i.e., homography). The transformation is given by (34)

m̃_n = Q_n Q_o^−1 m̃_o     (1)

where m̃_n = the new position in the plane, m̃_o = the old position in the plane, and Q_n and Q_o = 3 × 4 matrices that encode both camera position and orientation.
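For illustration, the following is a minimal C++/OpenCV sketch of the undistortion and image warping steps described above; it is not the REMCAN implementation. The intrinsic matrix, distortion coefficients, file names, and point correspondences are placeholder assumptions, and the homography H computed from four road-plane correspondences plays the role of Q_n Q_o^−1 in Equation 1.

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Placeholder intrinsic matrix and distortion coefficients; in REMCAN these come
    // from the camera calibration step described in the text.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 900, 0, 640, 0, 900, 360, 0, 0, 1);
    cv::Mat dist = (cv::Mat_<double>(1, 5) << -0.30, 0.10, 0, 0, 0);

    cv::Mat frame = cv::imread("frame.png");     // one video frame (placeholder path)
    if (frame.empty()) return 1;

    cv::Mat undistorted;
    cv::undistort(frame, undistorted, K, dist);  // remove radial and tangential distortion

    // Four image points on the road plane and their bird's-eye-view targets
    // (placeholder coordinates chosen for illustration only).
    std::vector<cv::Point2f> imagePts = {{420, 300}, {860, 300}, {1200, 700}, {80, 700}};
    std::vector<cv::Point2f> planePts = {{0, 0}, {360, 0}, {360, 720}, {0, 720}};
    cv::Mat H = cv::getPerspectiveTransform(imagePts, planePts);

    cv::Mat warped;
    cv::warpPerspective(undistorted, warped, H, cv::Size(360, 720));
    cv::imwrite("warped.png", warped);
    return 0;
}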

FIGURE 1   Flowchart for computer vision–based REMCAN system (video acquisition, calibration, vehicle parameter extraction, and MOVES input generation/conversion modules feeding MOVES emission and energy consumption estimation).

The background image is segmented by computing the temporal median of 30 continuous video frames. This procedure runs continuously and provides updated background images at a fixed time interval to eliminate the negative impacts brought about by changes in lighting and environment. To detect the lane positions and limit detection to the predetermined lanes, a lane-finding algorithm is used. The algorithm uses 3 min of video data to accumulate all the frames into one grayscale image; thus, most nonmotion parts of the image frames are preserved. The Hough transform (35) is then performed to detect all the lanes. The tracking algorithm is implemented for each lane.
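The background segmentation and lane-finding steps can be sketched as follows in C++/OpenCV. This is a sketch only: the temporal-median routine assumes 8-bit grayscale frames of equal size, and the Canny and Hough thresholds are illustrative assumptions rather than REMCAN's calibrated values.

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Temporal-median background from a set of grayscale frames (e.g., 30 frames).
cv::Mat medianBackground(const std::vector<cv::Mat>& frames) {
    CV_Assert(!frames.empty() && frames[0].type() == CV_8UC1);
    cv::Mat bg(frames[0].size(), CV_8UC1);
    std::vector<uchar> pixel(frames.size());
    for (int r = 0; r < bg.rows; ++r) {
        for (int c = 0; c < bg.cols; ++c) {
            for (size_t i = 0; i < frames.size(); ++i) pixel[i] = frames[i].at<uchar>(r, c);
            std::nth_element(pixel.begin(), pixel.begin() + pixel.size() / 2, pixel.end());
            bg.at<uchar>(r, c) = pixel[pixel.size() / 2];   // per-pixel temporal median
        }
    }
    return bg;
}

// Lane finding on an accumulated grayscale image via edge detection and the Hough transform.
std::vector<cv::Vec4i> findLaneLines(const cv::Mat& accumulated) {
    cv::Mat edges;
    cv::Canny(accumulated, edges, 50, 150);            // placeholder edge thresholds
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80,  // accumulator threshold
                    100 /* min line length */, 10 /* max gap */);
    return lines;
}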

FIGURE 2   Data flowchart of vehicle parameter extraction module (camera calibration and undistortion, image warping, background segmentation by temporal and spatial median, foreground segmentation, lane finding, vehicle detection and tracking, and extraction of speed, acceleration or deceleration rate, and vehicle type).


Since the background has been determined, the next step is to compare each image frame with the background and extract the features that are not part of the background. The extracted foreground is then converted into a binary (black-and-white) image. An autothreshold method is used to determine the threshold value that best separates black from white. To filter out small objects and distinguish between cars and trucks, the program uses a region filter to segment the black-and-white blobs into cars and trucks. For the vehicle tracking procedure, the mean shift method is used because it is a robust method for finding local extrema in the density distribution of a data set. Two successive speed values are computed in the detection area, each of which is the average speed within 1 s (30 frames). Acceleration is then computed from the two speed values. To classify the type of vehicle, the height and width of the rectangle representing the vehicle boundary are measured during the mean shift search. Vehicles are roughly classified into one of two types: light duty or heavy duty. However, the method could be improved if more features, such as color, edge, texture, and three-dimensional geometry, were added as detectors.
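The following C++/OpenCV fragment sketches the foreground extraction, automatic thresholding, region filtering, and length-based classification described above (the mean shift tracking step is omitted). The blob-area and length thresholds are placeholder assumptions, not REMCAN's calibrated values.

#include <opencv2/opencv.hpp>
#include <vector>

struct DetectedVehicle {
    cv::Rect box;     // bounding rectangle of the vehicle blob
    bool heavyDuty;   // true if the blob exceeds the length threshold
};

std::vector<DetectedVehicle> detectVehicles(const cv::Mat& grayFrame,
                                            const cv::Mat& background,
                                            double minBlobArea = 400.0,
                                            int heavyDutyLengthPx = 120) {
    cv::Mat diff, binary;
    cv::absdiff(grayFrame, background, diff);                      // background subtraction
    cv::threshold(diff, binary, 0, 255,
                  cv::THRESH_BINARY | cv::THRESH_OTSU);            // automatic B/W threshold
    cv::morphologyEx(binary, binary, cv::MORPH_OPEN,
                     cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5)));

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<DetectedVehicle> vehicles;
    for (const auto& contour : contours) {
        if (cv::contourArea(contour) < minBlobArea) continue;      // region filter: drop small blobs
        cv::Rect box = cv::boundingRect(contour);
        vehicles.push_back({box, box.width >= heavyDutyLengthPx}); // length threshold -> vehicle type
    }
    return vehicles;
}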

TABLE 1   Operating Mode Bins for MOVES Running Emissions

Vehicle-Specific Power (kW/ton)    Instantaneous Speed (mph)
                                   1 ≤ Speed < 25    25 ≤ Speed < 50    Speed ≥ 50
VSP < 0                            Bin 11            Bin 21             Bin 33
0 ≤ VSP < 3                        Bin 12            Bin 22             Bin 33
3 ≤ VSP < 6                        Bin 13            Bin 23             Bin 33
6 ≤ VSP < 9                        Bin 14            Bin 24             Bin 35
9 ≤ VSP < 12                       Bin 15            Bin 25             Bin 35
12 ≤ VSP < 18                      Bin 16            Bin 27             Bin 37
18 ≤ VSP < 24                      Bin 16            Bin 28             Bin 38
24 ≤ VSP < 30                      Bin 16            Bin 29             Bin 39
VSP ≥ 30                           Bin 16            Bin 30             Bin 40

Note: Braking (Bin 0) and idle (Bin 1) are additional running exhaust operating modes that are not defined by the speed–VSP grid.
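As an illustration of how a second-by-second (speed, VSP) observation is assigned to a running-exhaust bin, the following C++ sketch follows Table 1 as reconstructed here; braking and idle are not handled, and the exact cutpoints should be confirmed against the MOVES documentation.

// Map an instantaneous speed (mph) and VSP (kW/ton) to a MOVES running-exhaust
// operating mode bin per Table 1. Braking (Bin 0) and idle (Bin 1) are not handled.
int operatingModeBin(double speedMph, double vspKwPerTon) {
    if (speedMph >= 50.0) {
        if (vspKwPerTon < 6.0)  return 33;
        if (vspKwPerTon < 12.0) return 35;
        if (vspKwPerTon < 18.0) return 37;
        if (vspKwPerTon < 24.0) return 38;
        if (vspKwPerTon < 30.0) return 39;
        return 40;
    }
    if (speedMph >= 25.0) {
        if (vspKwPerTon < 0.0)  return 21;
        if (vspKwPerTon < 3.0)  return 22;
        if (vspKwPerTon < 6.0)  return 23;
        if (vspKwPerTon < 9.0)  return 24;
        if (vspKwPerTon < 12.0) return 25;
        if (vspKwPerTon < 18.0) return 27;
        if (vspKwPerTon < 24.0) return 28;
        if (vspKwPerTon < 30.0) return 29;
        return 30;
    }
    // 1 <= speed < 25 mph
    if (vspKwPerTon < 0.0)  return 11;
    if (vspKwPerTon < 3.0)  return 12;
    if (vspKwPerTon < 6.0)  return 13;
    if (vspKwPerTon < 9.0)  return 14;
    if (vspKwPerTon < 12.0) return 15;
    return 16;
}

The operating mode distribution is then the fraction of tracked vehicle-seconds that falls into each bin.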

Video Data

To fulfill the objectives, three freeway segments on I-71 within the Cincinnati and Columbus, Ohio, urban areas were selected. Site 1 is located at Exit 6 (Smith Road) of I-71 in Cincinnati; this site was selected because of potential concern about health issues of local residents. Site 2 is located at Exit 19 (Fields Ertel Road); the Advanced Regional Traffic Interactive Management and Information System (ARTIMIS), the regional intelligent transportation systems center, maintains a traffic camera there and provided a video sample. This site was also selected to test the capability of REMCAN with video data from existing traffic monitoring sources. Site 3 is located at Exit 100A (South High Street); the site was selected because previous research projects had collected ground truth video and GPS data there for calibration, and the vehicle trajectory data set was extended by using VEVID for calibration.

VSP and Generation of Operating Mode Distribution

VSP is traditionally defined to represent the instantaneous vehicle engine power per unit mass. It has been widely used to reveal the impact of vehicle operating conditions on emission and energy consumption estimates, which depend on second-by-second speed, roadway grade, and acceleration or deceleration. The VSP values (36) are computed with Equations 2 and 3 for light- and heavy-duty vehicles, respectively:

VSP = v × [1.1a + 9.81 × grade (%) + 0.132] + 0.000302 × v³     (2)

VSP = v × [a + 9.81 × grade (%) + 0.09199] + 0.000169 × v³     (3)

where v = vehicle speed (m/s), a = vehicle acceleration or deceleration rate (m/s²), and grade = vehicle vertical rise divided by horizontal run (%).

MOVES adopted the 23 operating mode bins according to combinations of speed and VSP that represent real-world operating modes, plus additional operating modes for starts and evaporative emissions. Table 1 is a summary of the VSP bins for the MOVES model. The VSP values are binned according to Table 1, and the operating mode distribution is generated.

Calibration of REMCAN Speed Measurement

Accuracy of traffic parameters is critical for the MOVES emission analysis. Since VSP and operating mode distribution are determined on the basis of vehicle speed, the system must be calibrated to ensure accurate speed measurement. Many factors may influence the accuracy of the measured speed; errors could arise from processes such as camera calibration and image warping. Among all possible error sources for speed, the most critical factor is the conversion of screen space to real-world space. This section discusses the process for calibration of speed measurements from the REMCAN system.

The speed of a vehicle is calculated from the distance (number of pixels on screen) traveled during the time interval between two frames. To do this, a parameter summarizing all factors that might affect the measurement of speed is introduced. The speed measurement is given by

speed = β × (number of pixels traveled) / (number of frames elapsed × video FPS)     (4)

where β is the conversion factor between screen space and real-world space and FPS is the number of frames per second of the video file (usually 30). Thus, the calibration process is converted to finding an optimal β that approximates the measured speed to the GPS speed. The GPS speed measurement is considered to be the ground truth. An iterative process is performed, and the optimal value of β is determined for each of the video sites. The objective is to limit the speed measurement error to less than 5%. Table 2 is a summary of the calibrated β-value with its percentage errors for the I-71 Exit 6 (Smith Road) site. The same calibration process is used for all three sites.
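To make Equations 2 through 4 concrete, the following C++ sketch implements them literally. β, FPS, and the grade term are inputs taken from the calibration and site data; whether grade enters as a percentage or a fraction should be confirmed against the VSP reference (36).

// Equation 4: speed from pixel displacement. beta is the empirically calibrated
// screen-to-real-world conversion factor and absorbs all unit conversions.
double speedFromPixels(double pixelsTraveled, double framesElapsed,
                       double beta, double fps = 30.0) {
    return beta * pixelsTraveled / (framesElapsed * fps);
}

// Acceleration or deceleration from two successive 1-s average speeds (m/s).
double accelerationRate(double speed1, double speed2, double dtSeconds = 1.0) {
    return (speed2 - speed1) / dtSeconds;
}

// Equations 2 and 3: VSP (kW/ton) for light- and heavy-duty vehicles.
// v is speed (m/s), a is acceleration (m/s^2), grade as defined in the text above.
double vspLightDuty(double v, double a, double grade) {
    return v * (1.1 * a + 9.81 * grade + 0.132) + 0.000302 * v * v * v;
}
double vspHeavyDuty(double v, double a, double grade) {
    return v * (a + 9.81 * grade + 0.09199) + 0.000169 * v * v * v;
}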


TABLE 2   Example of REMCAN Speed Measurement Calibration Using GPS Data at I-71 Exit 6 Site

GPS Speed (T)  REMCAN Speed (T)  Percentage Error  GPS Speed (T + 1)  REMCAN Speed (T + 1)  Percentage Error  β-Value
66.0           65.5              −0.75             65.9               65.4                  −0.81             1.344
58.0           58.0              0.05              59.6               59.5                  −0.16             1.344
66.5           66.2              −0.45             66.0               68.9                  4.46              1.344
67.2           67.7              0.74              66.8               63.2                  −5.42             1.344
60.4           62.4              3.33              59.3               60.6                  2.19              1.344

Note: Speeds are in miles per hour. T is the time (s) at which the vehicle passes the detection zone.

Figure 3 shows a sample GPS speed profile used to calibrate the speed measurement of the REMCAN system (the circle indicates where the GPS-instrumented vehicle passes the video zone). The GPS data loggers used were Qstarz BT-Q1000EX units. They were set to collect date, time, latitude, longitude, altitude, speed, horizontal dilution of precision (HDOP), number of satellites used (NSAT), and so forth. According to the manufacturer (37), the device error is less than 3 m for positioning and 0.1 m/s for measuring velocity. A data filter was applied to remove records with HDOP greater than 4 or NSAT less than 4 (38), which indicate invalid data caused by blockage of satellite signals.

Video processing in the REMCAN system is an easy task. However, calibration of the speed measurements may be time-consuming, depending on the sample size requirement. The goal of this calibration was to limit the speed errors to a 5% range, which is a common industry standard for video-based traffic feature extraction. VEVID is also used in calibrating and validating the speeds extracted from the REMCAN system to resolve sample size issues. Figure 4 shows the graphical user interface with the reference line setup. The reference line system is used to convert screen space to real-world space (19–27). The instantaneous speed where the GPS probe vehicle passed the video zone was recorded and compared with the REMCAN measured speed. Through trial-and-error tests, the β-value is derived. Sample data for calibration of the REMCAN speed measurement at I-71 Exit 6 are given in Table 2. Time T is when the probe vehicle passes the detection zone. To enable the calculation of the acceleration or deceleration rate, two speeds from consecutive seconds are used, and the difference in GPS speeds over the 2-s period is identified as the acceleration or deceleration of the probe vehicle. The measurement of vehicle speed is calibrated and validated to be within an acceptable error range, which is 5% in this study; therefore, the acceleration or deceleration rate calculated from the two consecutive speed measurements is also assumed to be acceptable. Table 2 shows only five GPS data items for the calibration, but the calibration of the conversion factor β may be extended with a hybrid method using both GPS data and VEVID if a larger sample size is required. This is illustrated in the calibration module of Figure 1.
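A minimal C++ sketch of the HDOP/NSAT validity filter described above is shown below; the record structure is a hypothetical subset of the logged Qstarz fields, not the actual log format.

#include <vector>

// Hypothetical subset of the logged GPS fields needed for the validity filter.
struct GpsRecord {
    double speedMph;   // logged speed
    double hdop;       // horizontal dilution of precision
    int    nsat;       // number of satellites used
};

// Keep only records with HDOP <= 4 and at least 4 satellites, per the filter above.
std::vector<GpsRecord> filterValidRecords(const std::vector<GpsRecord>& raw) {
    std::vector<GpsRecord> kept;
    for (const auto& rec : raw) {
        if (rec.hdop <= 4.0 && rec.nsat >= 4) kept.push_back(rec);
    }
    return kept;
}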

Results from Case Studies

The case study used 1 h of video from the three sites described above. The results from the REMCAN system are summarized as follows.

REMCAN Video Processing

Figure 5 shows the original video with the warped screen in the REMCAN system. In all three case studies, only one direction of the freeway is measured and counted. The vehicle parameters are displayed at the lower right corner of the screen. Accumulated vehicle counts, speed, and vehicle type are recorded once a vehicle passes a detection zone bounded by the registration lines. The program can process large video data sets; it plays the video and extracts the vehicle parameters simultaneously.

FIGURE 3   GPS speed profile for speed measurement calibration (speed, mph, and altitude, ft, plotted against distance, mi; the circle marks the video zone location).


FIGURE 4   VEVID graphical user interface with reference line setup.

Operating Mode Distribution

The operating mode distributions analyzed by REMCAN are summarized in Figure 6. The distribution patterns differ among the three sites. For the Exit 6 and Exit 19 sites, traffic flow is relatively free, without any congestion; the distribution is concentrated in Bins 33 and 40 and other higher-numbered bins, with almost no activity in the lower bins. At Exit 100, where congestion occurred, the video tells a different story: the operating mode distribution is spread relatively evenly among the bins.

MOVES Emission Rates

For project-level emission analysis, it is essential to model the running emission rates and prepare them for further dispersion analysis. However, the REMCAN system provides only the operating mode distribution for running exhaust; off-network operating mode distributions must be obtained from alternatives such as simulation models. The operating mode distribution obtained from the REMCAN system is then used as input for MOVES emission modeling. Aside from the traffic activity inputs, inputs to the MOVES model were obtained from the official MOVES training data set from the EPA website. For ease of running the model and testing REMCAN, the same data are used across all three video sites. The data set includes vehicle age distribution, fuel supply and formulation, and meteorological data. The MOVES Runspec settings are all the same except for the traffic activity data (i.e., link, link source, and operating mode distribution). The latest version of MOVES, MOVES2010b, is used for the running emission rate modeling.

Figure 7 and Table 3 summarize the emission rates for selected pollutants. For comparability, emission rates of carbon monoxide (CO), oxides of nitrogen (NOx), nitrogen dioxide (NO2), and particulate matter less than 2.5 µm and less than 10 µm in diameter (PM2.5 and PM10, respectively) are in grams per mile. Emission rates for atmospheric carbon dioxide (CO2) and CO2 equivalent are in kilograms per mile, and the unit for total energy consumption is 10^6 kJ/mi. Emission rates from the MOVES model calculation correspond to the real-world situation. As described in the previous section, the I-71 Exit 100 site experienced congestion; therefore, its emission rates for CO and NOx and its energy consumption are the highest among the three video sites. That is to say, these two pollutants and energy consumption are sensitive to traffic operational conditions. However, emission rates for the other pollutants, such as PM2.5 and PM10, are less sensitive to vehicle operating conditions in this case study.

FIGURE 5   Original video screen (left) and vehicle tracking after warping (right): (a) I-71 Exit 6 (Smith Road), (b) I-71 Exit 19 (Fields Ertel Road), and (c) I-71 Exit 100 (South High Street).


FIGURE 6   Operating mode distributions for three case study sites (fraction by operating mode ID for I-71 Exit 6, Exit 19, and Exit 100).

FIGURE 7   Comparison of emission rates for three case study sites: (a) CO (grams per mile); (b) CO2 equivalent (kilograms per mile); (c) NOx (grams per mile); (d) primary exhaust PM10, total (grams per mile); (e) NO2 (grams per mile); and (f) primary exhaust PM2.5, total (grams per mile).


TABLE 3   MOVES Results for Running Emissions from Three Case Study Sites

                                      I-71 Exit 6            I-71 Exit 19           I-71 Exit 100
Pollutant                             Car        Truck       Car        Truck       Car         Truck
CO                                    1,119.55   1,234.39    1,056.69   1,035.54    38,644.90   8,900.56
NOx                                   684.15     10,637.90   626.20     9,736.83    2,431.22    37,803.20
NO2                                   71.18      722.12      65.15      660.95      252.96      2,566.14
Energy consumption                    0.0039     0.0154      0.0036     0.0141      13.9122     54.6182
CO2 equivalent                        281.355    1,127.120   257.522    1,031.640   999.827     4,005.330
Primary exhaust PM10—total            27.91      329.30      25.55      301.40      99.19       1,170.20
Primary PM10—organic carbon           22.89      50.47       20.95      46.20       81.34       179.35
Primary PM10—elemental carbon         4.99       276.67      4.57       253.23      17.73       983.18
Primary PM10—sulfate particulate      0.03       2.16        0.03       1.97        0.12        7.66
Primary exhaust PM2.5—total           25.70      319.43      23.52      292.37      91.34       1,135.12
Primary PM2.5—organic carbon          21.08      48.96       19.29      44.81       74.90       173.98
Primary PM2.5—elemental carbon        4.60       268.38      4.21       245.64      16.33       953.71
Primary PM2.5—sulfate particulate     0.03       2.09        0.03       1.92        0.11        7.44

Note: Emission factors are in g/mi, CO2 equivalent in kg/mi, and energy consumption in 10^6 kJ/mi.

Conclusion

Acquisition of reliable vehicle activity inputs to the MOVES model is necessary for maximizing modeling capacity and helping federal and state officials improve the quality of transportation management. Application of the REMCAN system made the generation of traffic input data easier. It has advantages over current methods, such as derivation of the driving cycle from GPS data collected from instrumented vehicles, because it measures operating conditions of the entire vehicle fleet from video at a relatively low cost. The system can automatically calculate the required parameters for MOVES project-level emission and energy consumption analysis with a minimal amount of effort and time. The tool can be applied to many traffic operating scenarios to profile the operating mode distribution from readily available video sources. A roadway crossover could be an ideal video source location; another excellent choice is a video van deployed at any necessary location. A regional intelligent transportation system center with video monitoring, such as ARTIMIS in the Cincinnati region, is also a potential source of traffic videos. Once the operating mode distributions of all monitored traffic scenarios are profiled, the corresponding emission and energy consumption rates can be rapidly calculated from a predesigned lookup table. This enables rapid yet reliable emission and energy consumption estimation with easy integration into the existing video monitoring infrastructure.

Although this study focused on acquisition of a reliable operating mode distribution, the vehicle classification functionality needs further improvement. Improvements will allow the generation of complete and accurate MOVES source type data. In addition, the method is likely to be more applicable to point-based hot-spot analysis than to analysis of an entire roadway segment. To overcome this disadvantage, an integrated method that enables extraction of vehicle activity parameters over a longer distance would be more robust. Further improvements to the REMCAN system to address low camera angle conditions should be considered in the future. In addition, advanced algorithms should be developed to mitigate vehicle longitudinal occlusions by using techniques such as the addition of color, edge, texture, and three-dimensional geometry detectors.

Acknowledgments

The authors appreciate the support of the Ohio Department of Transportation and NEXTRANS.

References

1. Transportation Conformity Guidance for Quantitative Hot-Spot Analyses in PM2.5 and PM10 Nonattainment and Maintenance Areas. EPA420-B-10-040. U.S. Environmental Protection Agency, Dec. 2010.
2. ICF International. Potential Changes in Emissions due to Improvements in Travel Efficiency. U.S. Environmental Protection Agency, 2011.
3. Oge, M. T. Official Release of the MOVES2010 Motor Vehicle Emissions Model for Emissions Inventories in SIPs and Transportation Conformity. FRL-9121-1. U.S. Environmental Protection Agency, 2010.
4. McNally, M. G., R. Jayakrishnan, L. Chu, and N. S. Kalandiyur. Estimation of Vehicular Emissions by Capturing Traffic Variations. Presented at 85th Annual Meeting of the Transportation Research Board, Washington, D.C., 2006.
5. Hatzopoulou, M., B. F. L. Santos, and E. J. Miller. Developing Regional 24-Hour Profiles for Link-Based, Speed-Dependent Vehicle Emissions and Zone-Based Soaks. Presented at 87th Annual Meeting of the Transportation Research Board, Washington, D.C., 2008.

6. California Air Resources Board. Air Pollution Sources, Health Effects, and Controls. http://www.arb.ca.gov. Accessed Jan. 8, 2012.
7. NCHRP Report 388: A Guidebook for Forecasting Freight Transportation Demand. TRB, National Research Council, Washington, D.C., 1997.
8. Wu, P. Modeling Transportation-Related Emissions Using GIS. Presented at 86th Annual Meeting of the Transportation Research Board, Washington, D.C., 2007.
9. Ryan, P., G. LeMasters, L. Levin, J. Burkle, P. Biswas, S. Hu, S. A. Grinshpun, and T. Reponen. A Land-Use Regression Model for Estimating Microenvironmental Diesel Exposure Given Multiple Addresses from Birth Through Childhood. Science of the Total Environment, Vol. 404, 2008, pp. 139–147.
10. Li, C., Q. Nguyen, H. Spitz, M. Lobaugh, S. Glover, P. Ryan, G. LeMasters, and S. A. Grinshpun. School Bus Pollution and Changes in Air Quality at Schools: A Case Study. Journal of Environmental Monitoring, Vol. 11, No. 5, 2009, pp. 1037–1042.
11. Frey, H. C., N. M. Rouphail, and H. Zhai. Link-Based Emission Factors for Heavy-Duty Diesel Trucks Based on Real-World Data. In Transportation Research Record: Journal of the Transportation Research Board, No. 2058, Transportation Research Board of the National Academies, Washington, D.C., 2008, pp. 23–32.
12. Song, G., L. Yu, and Z. Wang. A Practical Modeling Approach for Evaluation of Fuel Efficiency for Road Traffic. Presented at 87th Annual Meeting of the Transportation Research Board, Washington, D.C., 2008.
13. Fulper, C., C. Hart, J. Warila, J. Koupal, S. Kishan, M. Sabisch, and T. DeFries. Development of Real-World Data for MOVES Development: The Houston Drayage Activity Characterization Study. Presented at 90th Annual Meeting of the Transportation Research Board, Washington, D.C., 2011.
14. Chamberlin, R., B. Swanson, E. Talbot, and J. Dumont. Utilizing MOVES' Link Drive Schedule for Estimating Project-Level Emissions. Presented at TRB Workshop on Integrating MOVES with Transportation Microsimulation Models, Feb. 2011.
15. Bar-Gera, H. Evaluation of a Cellular Phone–Based System for Measurements of Traffic Speeds and Travel Times: A Case Study from Israel. Transportation Research Part C, Vol. 15, No. 6, 2007, pp. 380–391.
16. Wei, H. Integrating Traffic Operation with Emission Impact Using Dual-Loop Data. Ohio Transportation Consortium Research Project Report. University of Cincinnati, Ohio, 2012.
17. Image Sensing Systems, Inc. Autoscope. www.autoscope.com/. Accessed Dec. 18, 2011.
18. Traficon. www.traficon.com/. Accessed Dec. 18, 2011.
19. Wei, H. Observed Lane-Choice and Lane-Changing Behaviors on an Urban Street Network Using Video-Capture-Based Approach and Suggested Structures of Their Models. PhD dissertation. University of Kansas, Lawrence, 1999.


20. Wei, H., E. Meyer, C. E. Feng, and J. Lee. Characterizing Lane-Choice Behavior to Build Rules as Part of Lane-Based Traffic Microsimulation Hierarchy. Presented at 80th Annual Meeting of the Transportation Research Board, Washington, D.C., 2002.
21. Wei, H., E. Meyer, J. J. Lee, and C. Feng. Characterizing and Modeling Observed Lane-Changing Behavior: Lane-Vehicle-Based Microscopic Simulation on Urban Street Network. In Transportation Research Record: Journal of the Transportation Research Board, No. 1710, TRB, National Research Council, Washington, D.C., 2000, pp. 104–113.
22. Wei, H., E. Meyer, J. Lee, and C. Feng. Video-Capture-Based Approach to Extract Multiple Vehicular Trajectory Data for Traffic Modeling. ASCE Journal of Transportation Engineering, Vol. 131, No. 7, 2005, pp. 496–505.
23. Wei, H., C. Feng, E. Meyer, and J. Lee. Closure to "Video-Capture-Based Approach to Extract Multiple Vehicular Trajectory Data for Traffic Modeling." ASCE Journal of Transportation Engineering, Vol. 135, No. 3, 2009, pp. 151–152.
24. Wei, H., Z. Li, and Q. Ai. Observation-Based Study of Intersection Dilemma Zone Natures. Journal of Transportation Safety and Security, Vol. 4, No. 1, 2009, pp. 282–295.
25. Wei, H., Z. Li, P. Yi, and K. R. Duemmel. Quantifying Dynamic Factors Contributing to Dilemma Zone at High-Speed Signalized Intersections. In Transportation Research Record: Journal of the Transportation Research Board, No. 2259, Transportation Research Board of the National Academies, Washington, D.C., 2011, pp. 202–212.
26. Li, Z. Modeling Dynamic Dilemma Zones Using Observed Yellow-Onset Trajectories. ITE Journal, Vol. 79, No. 11, 2009, pp. 24–35.
27. Li, Z., H. Wei, Q. Ai, and Z. Yao. Empirical Analysis of Drivers' Yellow Stopping Behaviors Associated with Dilemma Zones. Presented at 89th Annual Meeting of the Transportation Research Board, Washington, D.C., 2010.
28. Wu, Y.-J., F.-L. Lian, and T.-H. Chang. Traffic Monitoring and Vehicle Tracking Using Roadside Cameras. Systems, Man and Cybernetics, Vol. 6, 2006, pp. 4631–4636.
29. Malinovskiy, Y., Y.-J. Wu, and Y. Wang. Video-Based Vehicle Detection and Tracking Using Spatiotemporal Maps. In Transportation Research Record: Journal of the Transportation Research Board, No. 2121, Transportation Research Board of the National Academies, Washington, D.C., 2009, pp. 81–89.
30. Zhang, G., R. P. Avery, and Y. Wang. Video-Based Vehicle Detection and Classification System for Real-Time Traffic Data Collection Using Uncalibrated Video Cameras. In Transportation Research Record: Journal of the Transportation Research Board, No. 1993, Transportation Research Board of the National Academies, Washington, D.C., 2007, pp. 138–147.
31. Botha, J. L., J. A. Elia, S. P. Washington, and T. M. Young. Using Video Data to Measure Vehicle Operating Modes for Prediction of Emissions. In Transportation Research Record: Journal of the Transportation Research Board, No. 1664, TRB, National Research Council, Washington, D.C., 1999, pp. 21–30.
32. Scora, G., B. Morris, C. Tran, M. Barth, and M. Trivedi. Real-Time Roadway Emissions Estimation Using Visual Traffic Measurements. Forum on Integrated and Sustainable Transportation Systems, IEEE, 2011, pp. 40–47.
33. National Ambient Air Quality Standards. U.S. Environmental Protection Agency. http://www.epa.gov/air/criteria.html. Accessed June 18, 2012.
34. Fusiello, A., E. Trucco, and A. Verri. A Compact Algorithm for Rectification of Stereo Pairs. Machine Vision and Applications, Vol. 12, No. 1, 2000, pp. 16–22.
35. Gonzales, R. C., and R. E. Woods. Digital Image Processing, 3rd ed. Prentice Hall, N.J., 2007.
36. Zhao, Q., L. Yu, and G. Song. Characteristics of VSP Distribution of Light-Duty and Heavy-Duty Vehicles on Freeway: A Case Study. Presented at 91st Annual Meeting of the Transportation Research Board, Washington, D.C., 2012.
37. Qstarz BT-Q1000EX Specifications. Qstarz International Co., Ltd. http://www.qstarz.com/Products/GPS%20Products/BT-Q1000EXQR-S.htm. Accessed July 2, 2012.
38. Gong, H., C. Chen, E. Bialostozky, and C. Lawson. A GPS/GIS Method for Travel Mode Detection in New York City. Computers, Environment and Urban Systems, Vol. 36, No. 2, 2012, pp. 131–139.

Any opinions expressed in this paper are those of the authors and do not necessarily reflect the views of the Ohio Department of Transportation or NEXTRANS.

The Transportation and Air Quality Committee peer-reviewed this paper.
