This paper is a postprint of a paper submitted to and accepted for publication in IET Intelligent Transport Systems and is subject to Institution of Engineering and Technology Copyright. The copy of record is available at IET Digital Library http://dx.doi.org/10.1049/iet-its.2013.0167
On creating vision based ADAS

Marcos Nieto1*, Oihana Otaegui1, Gorka Vélez1, Juan Diego Ortega1, and Andoni Cortés1
1. Vicomtech-IK4, Paseo Mikeletegi 57, San Sebastian, Spain
[email protected]

Abstract: In this paper we analyse the exponential growth of Advanced Driver Assistance Systems (ADAS) based on video processing over the last decade. Specifically, we focus on how research and innovative ideas can finally reach the market as cost-effective solutions. We explore well-known computer vision methods for services like lane departure warning and collision avoidance, and point out potential future trends according to a review of the state of the art. Throughout the paper our own contributions are described as examples of such systems designed for real-time operation, pursuing a trade-off between the accuracy and reliability of the algorithms and the restrictive computational, economic and design requisites of embedded platforms.

Keywords: ADAS, COMPUTER VISION, REAL-TIME, EMBEDDED SYSTEMS.

Introduction
The last decade has seen an exponential growth in the presence of computer vision systems in many sectors, due to improvements in consumer electronics, the consolidation of the information society, the reduced cost of hardware production, and the significant contributions of the scientific community in the development of algorithms and methods [1]. Remarkably, in the field of Intelligent Transport Systems and intelligent vehicles, this technology has favoured the appearance of a number of systems and applications that improve the comfort, safety and efficiency of transport. Examples of these systems are GPS navigation, automatic cruise control, vehicle identification, communication and eCall, and more recently advanced driver assistance systems (ADAS) like lane departure warning and driver drowsiness detection, or semi-automated driving functions like platooning, parking assistance, etc. [2]. The abovementioned ADAS provide a wide range of services for the driver, making use of sensing equipment, obtaining data from the environment surrounding the vehicle, processing it, understanding the situation and taking actions (e.g. sending information to the driver, actuating on
the vehicle, launching communication channels, etc.). The penetration of such systems into the market is strongly conditioned on reaching cost-effective solutions, which is not an easy task, since sensing systems are typically based on expensive technologies or are difficult to miniaturise [3]. In this sector, computer vision is a passive sensing technology, in contrast to active sensors like LIDAR or radar, which suffer a number of drawbacks: potential interference between units, higher cost, lower resolution, limited descriptive output and, in general, poor suitability for embedded systems and cost-effective platforms. Computer vision systems have broken into this market for several reasons: (i) the richness of the information contained in images; (ii) advances in electronics and optics that allow reducing the size and cost of cameras; (iii) the increased computational capacity of embedded systems; and (iv) the outstanding contributions of the scientific community over the last decades in image processing, projective geometry, machine learning and other related disciplines. In this article we present a thorough discussion of relevant topics that must be considered for a successful deployment of vision-based ADAS, including a proposal of a SW/HW co-design methodology, a market analysis and future trends. Example use cases are provided that illustrate different ADAS according to their current maturity in the market.

From research to market
During the last three decades there have been numerous research programs aiming to advance the introduction of intelligent systems into vehicles: the DARPA Grand and Urban Challenges, EUREKA Prometheus, or the Google driverless car, to mention just a few. These programs exemplify the effort of the scientific community, which has since proposed a vast number of algorithms, methods and technologies for intelligent vehicles. The target of these programs was to learn, test and evolve the technology in an exercise of exploring the possibilities that it can offer to drivers in particular and the transport sector in general. For the tests, intelligent prototype vehicles have been used, massively equipped with sensors like radar, laser, GPS, antennas, multiple cameras, CPUs, batteries, etc. These cars demonstrate only the technical feasibility of deploying intelligent vehicles. There is a very steep path from these prototypes to market solutions, considering also legal aspects, continuous operation, and the space, cost and power consumption constraints of the HW. Euro NCAP, formed by seven European Governments as well as motoring and consumer organisations in every European country in the automotive market, has since 1997 independently tested
and publicly reported the safety levels of vehicles marketed in Europe. The safety level of the vehicles is analysed in four separate categories: Adult Occupant protection, Child Occupant protection, Pedestrian protection and Safety Assist. The results are published on their website (www.euroncap.com) as a five-star rating system. Each safety level (one to five stars) requires a minimum number of points in each of the four categories and, in addition, a minimum number of total points. In the near future, by 2017, Euro NCAP will require ADAS to be present in the vehicle to obtain the desired five-star rating. Therefore, manufacturers will be forced to include these sensing systems as standard vehicle equipment by that time. Current priorities can be perceived by studying the evolution of the NCAP rating scheme over the last years. Between 2011 and 2012, the pedestrian protection requirements needed to achieve the minimum points increased by 50%. This fact reveals the importance of computer vision based ADAS due to their ability not only to detect and classify objects as pedestrians, but also to do the same with partially occluded pedestrians, as required by the rating schemes.

Computer vision based ADAS are still in a growing stage with low market penetration, mainly focused on high-end vehicles. Nevertheless, their expected impact on Euro NCAP requirements makes the development of computer vision based ADAS a key factor for reaching wider markets. In any case, a revision of the technology is required to address the commercial aspects of the new NCAP safety requirements, focusing on price, space and mass-market limitations. Nowadays, mid-range vehicles equipped with ADAS are becoming closer to a reality, thanks to advances in more efficient computer vision methods, easier-to-program and lower-cost embedded HW, and the growing interest of large manufacturers competing for the market. However, there is still a risk in the effort of bringing together the diverse challenges and objectives of the stakeholders (see Figure 1): market drivers, key technological challenges, automotive trends, and customers' preferences and benefits are not always fully aligned. Significant effort must be made in the research and development of innovative sensors to satisfy the requirements and challenges of the stakeholders involved.
Figure 1: Development of ADAS and associated considerations.
Methodology
The automotive market puts strict requirements on computer vision systems to be integrated into vehicles. On the one hand, the algorithms require considerable computing power to fulfil the target functionalities and to work reliably in real time under a wide range of lighting conditions. On the other hand, the cost of the designed system must be kept low, the package size must be small and the power consumption must be very low, which means that the solution needs to be embedded in order to meet the requirements of the market. The design and development of computer vision systems for the automotive sector is therefore challenging, not only in terms of the design of robust software (SW) algorithms but also their implementation on embedded platforms. Thus a systematic methodology is required to align the design, development and prototyping cycle with the requirements of the automotive industry, paying special attention to reducing the total development cost. The complexity of the solutions and the required short time to market make traditional design methodologies no longer adequate. In traditional design methodologies for embedded systems, hardware decisions are taken first. After an initial specification, the hardware (HW) architecture is designed based mainly on the experience of the HW design team. This methodology has some important drawbacks. First, it can delay SW teams because in most cases the SW development and testing cannot start until the
HW design is available. Furthermore, the SW design is constrained by the HW, which limits the flexibility of the algorithm design. Finally, there is a considerable risk of over- or under-designing the system due to the lack of an initial evaluation of the SW computational requisites. To overcome these drawbacks and speed up the product design chain we have adopted a HW/SW co-design methodology [4][5]. Before selecting the final HW platform, a functional SW prototype is designed and validated using a PC and then migrated to a flexible HW/SW platform to tune the HW/SW design and to select the best-fitting HW design. Specifically, the methodology consists of the following steps:
1. Determine the specifications and requirements of the sensor to be developed. This input is usually given by the customers (vehicle manufacturers or automotive electronic product providers).
2. Develop a first functional software prototype on a PC. To accelerate this step it is necessary to exploit SW libraries that provide routines for computer vision tasks such as image processing, probability estimation, machine learning, object detection, camera calibration, etc. (see the "Implementation details" section). The SW is programmed in C++ for efficiency, code organization, multiplatform capabilities and to support the validation stage.
3. Define the embedded system architecture. We use reconfigurable hybrid System on Chip (SoC) architectures to have enough flexibility to tackle a variety of applications which require different types of algorithms. This type of architecture consists of programmable logic (e.g. an FPGA) and a processing system (e.g. ARM or Intel). This stage includes a breakdown analysis of the algorithms to decide which parts require a specific HW implementation and which parts can run on the microprocessor. As our SW prototype is written in C++ it is easy to port it to the microprocessor and to measure the computational time of its different parts, typically applying code optimization from C++ to C and exploiting the available single instruction multiple data (SIMD) architectures (e.g. NEON for ARM or SSE for x86; see the sketch after this list). Bottlenecks, like pixel-level operations, are then ported to the FPGA, which accelerates the execution of parallelizable operations (programmed in VHDL). The decisions taken in this step are crucial; nevertheless, as we use reconfigurable HW, design errors are easy to solve when detected by simply looping back and restarting the process.
4. Validate the embedded device. The SW prototype previously developed is used as a golden reference model against which the results of the application under test are compared. The SW reference is evaluated using datasets of the target scenarios (e.g. road sequences, pictures of faces, traffic signs, etc.) with ground truth annotations for the objective comparison of results. Special care must be taken when creating or selecting datasets, since this step plays a critical role in assessing the expected performance of the system in real situations [6]. The major challenge of dataset selection is to take into account the visual variability of real scenarios. Typically, datasets are recorded by a sensorized vehicle driving under real conditions in which critical situations are forced to happen. However, forcing some critical situations in the real world is troublesome (e.g. accidents, emergency braking, etc.) and hence complex simulators have also been created as affordable tools to emulate reality (www.vires.com, www.tassinternational.com/prescan).
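To illustrate the kind of pixel-level bottleneck referred to in step 3, the following minimal sketch (illustrative only, not code from the deployed system) contrasts a scalar implementation of the per-pixel absolute difference between two 8-bit grayscale frames with an SSE2 version that processes 16 pixels per instruction; the equivalent NEON intrinsics would be used on ARM, and the same operation is a natural candidate for the FPGA fabric.

// Illustrative sketch: scalar vs. SSE2 (x86 SIMD) version of a pixel-level
// operation (per-pixel absolute difference of two 8-bit grayscale frames).
// On ARM the analogous NEON intrinsics (vld1q_u8, vabdq_u8, vst1q_u8) apply.
#include <emmintrin.h>   // SSE2 intrinsics
#include <cstdint>
#include <cstddef>

// Scalar reference implementation.
void absdiff_scalar(const uint8_t* a, const uint8_t* b, uint8_t* out, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        out[i] = (a[i] > b[i]) ? uint8_t(a[i] - b[i]) : uint8_t(b[i] - a[i]);
}

// SSE2 implementation: 16 pixels per iteration using saturating subtraction,
// since |a - b| = (a -sat b) | (b -sat a) for unsigned bytes.
void absdiff_sse2(const uint8_t* a, const uint8_t* b, uint8_t* out, size_t n)
{
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m128i va = _mm_loadu_si128(reinterpret_cast<const __m128i*>(a + i));
        __m128i vb = _mm_loadu_si128(reinterpret_cast<const __m128i*>(b + i));
        __m128i d  = _mm_or_si128(_mm_subs_epu8(va, vb), _mm_subs_epu8(vb, va));
        _mm_storeu_si128(reinterpret_cast<__m128i*>(out + i), d);
    }
    for (; i < n; ++i)   // scalar tail for the remaining pixels
        out[i] = (a[i] > b[i]) ? uint8_t(a[i] - b[i]) : uint8_t(b[i] - a[i]);
}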
Figure 2: Block diagram of HW/SW co-design ADAS development methodology.
Use cases
The following subsections describe specific ADAS which exemplify three levels of technology penetration in the sector, from mature integrated technologies already in the market to pure research ideas. They are illustrated with our own advances, achieved thanks to the cooperation with Datik-Grupo Irizar (www.datik.com), which has installed and commercialized some of these systems and contributes videos to feed the testing datasets.
Lane Departure Warning (Mature, already in the market)
The real-time detection of lanes in the road makes it possible to warn the driver when the vehicle is leaving the lane involuntarily, or to automatically correct the steering of the vehicle to keep it in the lane. Video processing systems allow detecting lane markings and thus determining the position and trajectory of the vehicle inside its lane. It is critical to design an approach that avoids as many pixel-level operations as possible. In that sense we recommend filtering the image only once per frame to detect the pixels that likely belong to the lane markings, and then fitting a lane model that is coherent with the perspective of the scene [7]. An example detection of such lane markings is shown in Figure 3. The known perspective of the image allows creating a perspective histogram which accumulates the detected lane-marking pixels according to their relative orientation with respect to the vanishing point, which simplifies further computations. Such detections can be fed into a linear tracking stage like the Kalman filter, which adds the required temporal coherence to the detections, smoothing the final result.
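As an illustration of the "filter once per frame" step, the following sketch shows a commonly used row-wise bright-bar filter for lane-marking pixels (an assumption made here for illustration purposes, not necessarily the exact filter of the deployed system): a pixel responds when it is clearly brighter than its left and right neighbours at a distance tau, which approximates the expected marking width at that image row.

// Illustrative sketch of a row-wise lane-marking filter (dark-bright-dark
// pattern of width ~tau). Parameter values are assumptions for clarity.
#include <opencv2/core/core.hpp>
#include <cstdlib>

// gray: single-channel 8-bit image; returns a binary mask of candidate pixels.
cv::Mat detectMarkingPixels(const cv::Mat& gray, int tau, int threshold)
{
    cv::Mat mask = cv::Mat::zeros(gray.size(), CV_8UC1);
    for (int y = 0; y < gray.rows; ++y) {
        const uchar* row = gray.ptr<uchar>(y);
        uchar* out = mask.ptr<uchar>(y);
        for (int x = tau; x < gray.cols - tau; ++x) {
            // Centre brighter than both lateral neighbours, and the two
            // neighbours of similar intensity (road surface on both sides).
            int response = 2 * row[x] - (row[x - tau] + row[x + tau])
                         - std::abs(row[x - tau] - row[x + tau]);
            if (response > threshold) out[x] = 255;
        }
    }
    return mask;
}
// In a complete system tau would grow with the row index according to the
// perspective of the scene, and the resulting pixels would feed the perspective
// histogram, the lane-model fitting and the Kalman tracking described above.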
Figure 3: Detection of lane markings and the lane model fitted to the image.
For the evaluation of the system we have analyzed a set of sequences with a total length of 336 minutes on roads and highways, with varying illumination conditions (direct sunlight, cloudy days, rain, night, etc.) and two types of vehicles (car and bus). One way to objectively determine the correct functioning of the system is to check the correct detection of lane changes. The detection of such events shows that the system is correctly detecting the position of the lane markings. Lane changes are challenging situations that usually involve fast motion and often a partial absence of measurements (typically, only one lane marking is observed during such a manoeuvre). Therefore, we hypothesise that if lane changes are detected correctly, the system is able to determine when the vehicle is leaving its own lane. We also determine the availability of the system as the result of a self-assessment module which estimates the reliability of the detections. Below a certain
reliability value, the system switches off, and the availability is measured as the percentage of time in which the system is switched on.

Sequence group           | L (min)  | Av. (%) | TP  | FP | FN | R    | P
TOTAL                    | 336'41'' | 90.55   | 610 | 63 | 33 | 0.95 | 0.91
Position: Bus            | 197'14'' | 87.48   | 355 | 44 | 22 | 0.94 | 0.89
Position: Car            | 139'26'' | 94.88   | 255 | 19 | 11 | 0.96 | 0.93
Light: Sunny             | 155'39'' | 91.72   | 288 | 28 | 11 | 0.96 | 0.91
Light: Cloudy            | 133'12'' | 87.26   | 259 | 23 | 18 | 0.94 | 0.92
Light: Dusk/Dawn         | 46'22''  | 95.79   | 56  | 12 | 4  | 0.93 | 0.82
Light: Night             | 1'26''   | 99.00   | 7   | 0  | 0  | 1.00 | 1.00
Weather: Dry             | 236'00'' | 91.68   | 401 | 44 | 21 | 0.95 | 0.90
Weather: Rain            | 100'40'' | 87.89   | 209 | 19 | 12 | 0.95 | 0.92
Roadway type: Highway    | 220'30'' | 93.22   | 430 | 42 | 19 | 0.96 | 0.91
Roadway type: Periurban  | 115'26'' | 85.51   | 179 | 20 | 14 | 0.93 | 0.90
Table 1: Detection of lane changes organised according to position, light, weather and roadway type. L: length of sequence; Av. (%): availability; TP: true positives; FP: false positives; FN: false negatives; R: recall = TP / (TP + FN); P: precision = TP / (TP + FP).
Table 1 shows the lane change detection statistics for different groups of sequences. Recall and precision values are computed to show the performance of the proposed method. The recall, which is related to the number of correctly detected events, reaches values of around 95%, while the precision, related to the quality of the detections, is around 91%.

Vehicle detection for pre-crash detection (Mature, partially in the market)
Vehicle detection is of great importance in ADAS as it is at the core of applications like pre-crash detection, safe-distance keeping and the determination of the surrounding traffic. Nowadays, most traffic accidents with fatalities and injuries are related to vehicle crashes [8]. As a consequence, a great effort has been devoted by research centres and industry to creating mechanisms which can detect other vehicles in time. Currently, the state of the art includes methods that are robust and efficient and that can detect the presence of other vehicles with high levels of accuracy. Such methods have boosted the appearance of a number of autonomous vehicles that can handle driving situations with obstacles.
However, most of them have only been demonstrated in urban/periurban areas, where speed rarely goes beyond 50 km/h, and otherwise lack full automation capabilities (they require good weather conditions, costly sensing equipment and significant calibration and setup effort). The road to cost-effective, market-ready, fully autonomous driving requires the ability to determine the obstacles around the vehicle. Stereo vision seems to be the technology that best covers such requirements (2D-3D triangulation capabilities compared to single-camera vision; significantly cheaper than active sensors), as an evolution of the combination of laser scanners and single cameras. The challenge for the industry today is to migrate stereo into cars, considering the non-trivial tasks of designing algorithms for the autocalibration of the cameras [9] and porting heavy, powerful dense stereo algorithms to embedded HW.

Both single- and stereo-camera approaches require a means to make the detection robust and efficient. Detection and tracking of vehicles/pedestrians through video analysis can be accomplished in a two-stage fashion: hypothesis generation and hypothesis verification [10]. The first usually implies a quick search, so that the image regions likely containing vehicles are broadly identified. Typical methods include basic information processing like edge analysis, colour, shadows, symmetry and depth. Model-based and appearance-based techniques can then be used to verify the existence of vehicles/pedestrians in the selected regions (hypothesis verification). We have used a supervised machine learning process, which involves a training stage in which visual features are extracted from a set of positive and negative samples to design a classifier. Neural networks and support vector machines (SVM) have been extensively used for classification [11], while strong efforts are still being made in the field of feature extraction, where the most widely accepted methods are based on histograms of oriented gradients (HOG), principal component analysis (PCA), Gabor filters, and Haar-like features.
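As an illustration of the verification stage, the following sketch classifies a single candidate region with HOG features and a pre-trained SVM using the OpenCV 2.4 C++ API; the window size, HOG parameters, model file name and label convention are assumptions made for illustration, not the parameters of the deployed system.

// Illustrative sketch of hypothesis verification with HOG + SVM (OpenCV 2.4 API).
#include <opencv2/objdetect/objdetect.hpp>   // cv::HOGDescriptor
#include <opencv2/imgproc/imgproc.hpp>       // cv::resize
#include <opencv2/ml/ml.hpp>                 // CvSVM
#include <vector>

// Returns true if the candidate region is classified as a vehicle.
// Assumes the SVM was trained offline on 64x64 patches with labels {+1, -1}.
bool isVehicle(const cv::Mat& grayFrame, const cv::Rect& hypothesis, const CvSVM& svm)
{
    cv::Mat patch;
    cv::resize(grayFrame(hypothesis), patch, cv::Size(64, 64));   // normalise size

    // HOG over the whole patch: 16x16 blocks, 8x8 stride, 8x8 cells, 9 bins.
    cv::HOGDescriptor hog(cv::Size(64, 64), cv::Size(16, 16),
                          cv::Size(8, 8), cv::Size(8, 8), 9);
    std::vector<float> descriptor;
    hog.compute(patch, descriptor);

    cv::Mat sample = cv::Mat(descriptor).reshape(1, 1);   // 1 x N row vector
    return svm.predict(sample) > 0.0f;
}

// Usage (illustrative):
//   CvSVM svm; svm.load("vehicle_hog_svm.xml");   // hypothetical model file
//   bool hit = isVehicle(frame, candidate, svm);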
[Figure 4 block diagram: training process (image database -> feature extraction -> classifier training -> statistical models) and detection process (input frame -> multiscale window scanning -> feature extraction -> classification -> vehicle / no vehicle). Explicit features: edges, symmetry, shadow; implicit features: HOG; machine learning: Adaboost, SVM.]
Figure 4: Block diagram of the proposed approach based on appearance features like HOG, edges, symmetry, and SVM classification.
In our work we have adopted two main approaches: in one of them we exploit the power of SVM and HOG, and in the other we compute features like shadows, symmetry and edges with an Adaboost classifier, which is a fast alternative to HOG-SVM (Figure 4 shows an illustrative block diagram of the approach) [12]. HOG-SVM provides better detection results, but the computation of HOG is much heavier and hard to migrate to ARM/FPGA architectures. We exploit prior knowledge of the perspective of the scene to know the expected size of vehicles in the image at different distances. This prior information avoids the typical multiscale detection and concentrates the computational effort on far fewer hypotheses. Besides, we have tuned our algorithm so that it provides nearly 0% false negatives at the cost of some false positives, which can be filtered afterwards with a tracking approach. We have implemented a Rao-Blackwellized Data Association Particle Filter (RBDAPF) [13] for tracking. This type of filter has been proven to provide good multiple-object tracking results even in the presence of "sparse" detections such as the ones we have in these sequences, and can also be tuned to handle occlusions. The Rao-Blackwellization can be understood as the process of splitting the problem into a linear/Gaussian part and a non-linear/non-Gaussian part. The linear part can be solved with Kalman filters, while the non-linear one must be solved with approximation methods like particle filters. In our case, the linear part is the position and size of a bounding box that models the objects. The non-linear part refers to the data association, which is the process of generating a matrix that links detections (the HOG ones, for instance) with objects or clutter. The association process can
be strongly non-linear, so that sampling approaches like ancestral sampling need to be used. Besides, the entry and exit of objects is handled thanks to the data association filter, which classifies detections according to the existing objects, removes objects that have had no associated detections for too long, and creates new objects when detections not associated with previous objects appear repeatedly. Preliminary results have shown that this approach is able to detect and track up to 4 or 5 objects simultaneously, which is a reasonable number for this type of scenario.

Driver Status Monitoring (Not yet widely commercialized)
A driver may lose attention to the road because of voluntary actions (using the mobile phone or GPS, talking to passengers, etc.), or involuntary causes derived from his/her physiological status (e.g. drowsiness or fatigue). Computer vision can be used to determine the attention, drowsiness and fatigue level by analysing biometrics like eyelid closure and blinking speed and frequency [14], without the need for intrusive sensing like steering-grip sensors or ECG monitoring devices (e.g. Ford's Heart Rate Monitoring Seat) [15]. Analogously to the other ADAS, these approaches are affected by the illumination conditions, and also by the great variability in the appearance of faces (including variable elements like glasses, skin colour, facial hair, etc.). For that reason we have devised a flexible solution to cope with all the variability of the scenario, focusing on a user-based detection and tracking of the eyes of the driver using a paired-eyes model. The eye blinking is measured and analysed to generate alarms when the system determines the driver is getting drowsy. The solution we have implemented is based on the combination of several algorithms. First, the face of the driver is detected in the image as an initialization stage, using a cascade of weak classifiers (Adaboost) fed with illumination-invariant features like LBP. After this first detection stage, eye tracking and fitting are carried out in a similar way to vehicle/pedestrian detection: image normalization, HOG description and SVM classification. This information is then interpreted to obtain the blinking frequency and duration (see Figure 5). Additionally, this module has been tested independently on Apple platforms like the iPad 2-3 and iPhone 4S-5 in order to check its behaviour on ARM-based processors. The "Implementation details" section below depicts the details of its performance.
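A minimal sketch of two pieces of this pipeline is given below, assuming the OpenCV 2.4 API: face localisation with a boosted cascade over LBP features (the cascade file named in the usage comment is the one shipped with OpenCV, used here only as an example) and a simple blink frequency/duration measure computed from per-frame eye-state flags, which are assumed to come from the HOG+SVM eye classifier described above.

// Illustrative sketch: LBP-cascade face detection and blink statistics from
// per-frame "eyes closed" flags (OpenCV 2.4 API; parameters are assumptions).
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Face localisation with an Adaboost cascade of LBP features.
std::vector<cv::Rect> detectFaces(const cv::Mat& gray, cv::CascadeClassifier& cascade)
{
    cv::Mat eq;
    cv::equalizeHist(gray, eq);   // simple illumination normalisation
    std::vector<cv::Rect> faces;
    cascade.detectMultiScale(eq, faces, 1.1, 3, 0, cv::Size(80, 80));
    return faces;
}

// Blink statistics over a window of frames: eyeClosed[i] is the per-frame
// classifier output for frame i; fps is the capture rate.
struct BlinkStats { int blinks; double meanDurationSec; };

BlinkStats analyseBlinks(const std::vector<bool>& eyeClosed, double fps)
{
    BlinkStats s = {0, 0.0};
    int run = 0, closedFrames = 0;
    for (size_t i = 0; i < eyeClosed.size(); ++i) {
        if (eyeClosed[i]) { ++run; }
        else if (run > 0) { ++s.blinks; closedFrames += run; run = 0; }   // blink ended
    }
    if (s.blinks > 0)
        s.meanDurationSec = (closedFrames / double(s.blinks)) / fps;
    return s;
}

// Usage (illustrative): cv::CascadeClassifier c("lbpcascade_frontalface.xml");
// Abnormally long closures or a high closure ratio over a sliding window would
// raise the drowsiness alarm in a complete system.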
Figure 5: Blinking speed and rate analysis based on open/close eye detection.
In our opinion, the implementation of this type of system is still a few steps behind the abovementioned services for two reasons: (i) the appearance and behaviour of human beings are much more variable than the appearance and dynamics of lanes or vehicles, so methods aiming to detect, measure and interpret images of human beings are more likely to fail when thoroughly tested on large datasets; (ii) the psychological nature of human beings might prevent drivers from buying systems that directly observe them while driving with the aim of assessing the adequacy of their driving capabilities at all times. The solution to the first issue relies on creating larger and richer databases of drivers while driving, which can typically only be done with the cooperation of a vehicle manufacturer and the availability of test vehicles and drivers. The second issue is harder to address, since the ultimate adoption of the technology is a decision of the user.

Open Research Lines
Although the first wide range of ADAS and services is already partially in the market, there is still huge growth potential derived from the intelligent exploitation of computer vision and its combination with artificial intelligence, computer graphics, smart-screen consumer electronics or inter-vehicle communications. The help ADAS provide focuses on specific driving situations, such as maintaining a safe distance, checking blind spots, lane departure warning, etc. In the coming years we will see enhancements to these services, moving from partially to fully automatic processes, such as parking assistance [16]. However, there are still many more complex driving situations that current ADAS do not
encompass (like particularly complex overtaking manoeuvres with several lanes and vehicles, recognition of the brand or other information of surrounding vehicles, etc.). Such an outcome is only possible through the installation of the sensors needed to obtain 360º environmental information around the vehicle [17], along with a predictive decision system trained to interpret all the information. Some steps towards vehicle recognition have been taken, especially focused on determining the brand of the vehicle (typically in static surveillance applications like parking entrances [18]) or its license plate [19]. Combined and mounted on board a vehicle, these could launch a new generation of identification applications.

Entertainment has historically been neglected in the automotive sector, since the main goal of ADAS is to enhance drivers' safety [20]. However, as technology advances and driving becomes safer, the driver and vehicle occupants may be interested in infotainment services. GPS localisation and real-time traffic information are well-known basic sources of information which can be enhanced using road detection and augmented reality (e.g. displaying virtual cars on screen for driving school lessons, information about routes previously recorded by the driver or friends, virtual opponents in racing environments, etc.), or by establishing smart and secure communications between drivers. The emergence of Google and its Street View service has caused a radical change in the way people use digital maps. Their maintenance is private and relies on the well-known, heavily equipped Google cars. However, in the not so distant future a large proportion of vehicles will carry visual sensing equipment and may act as scouts to feed cloud services, like global 3D reconstruction with updated information in near real time, replacing the current photographic digital maps with live videographic footage. Analogously, other information like traffic signs can be detected and updated by such a population of scout vehicles, so that intelligent systems can interpret the traffic regulations at each place based on the spatiotemporal combination of the detected signs [21].

Implementation details
The methods described in this paper have been tested and deployed for real-time operation in a real ADAS framework following the co-design methodology described throughout the paper. The SW developments have been made multi-platform, and built and tested on different platforms and OS to compare their performance on general-purpose CPUs and embedded systems (see Table 2).
Type          | Processor            | RAM           | Clock       | OS                       | Language
Desktop PC    | Intel Core 2 Q8300   | 4 GB          | 2.5 GHz     | Windows 7 / Ubuntu 12.04 | C++, TBB
Industrial PC | Intel Atom N270      | 1 GB          | 2 GHz       | Ubuntu 11.10             | C++
Embedded HW 1 | Zynq (ARM Cortex-A9) | 512 MB        | 800 MHz     | Specific Linux           | C++, VHDL
Embedded HW 2 | Apple A4 to A6X      | 512 MB - 1 GB | 1 - 1.4 GHz | iOS                      | C++, NEON
Table 2: HW platforms used to test the described modules.
The PC has been used for design and debugging purposes, while the others have actually been used as final HW platforms. The industrial PC has been used for large vehicles, which have no hard space constraints and can also afford the price of a non-ARM processor. Both embedded HW platforms use the ARM architecture, although each one is very specific. The first is based on Xilinx's Zynq chip, which contains an ARM Cortex-A9 and an FPGA, running a customised Linux. The other belongs to the Apple AX family of ARM processors for the iPad and iPhone, running iOS. Table 3 summarizes the processing times associated with the different modules described in this paper running on the mentioned platforms.

Method                      | PC     | Industrial PC | Embedded HW 1 | Embedded HW 2
Lane departure warning      | 0.5 ms | 12 ms         | 18 ms         | -
Vehicle detection           | 4 ms   | 40 - 50 ms    | 20 - 40 ms    | -
Pedestrian detection        | 16 ms  | 85 ms         | -             | -
Driver drowsiness detection | 7 ms   | 37 ms         | -             | 30 - 70 ms
Table 3: Processing time (per frame) for different platforms and applications.
As shown, some methods have already been tested on ARM-based architectures, reaching real-time performance (below 40 ms per frame for 25 fps input video) in most cases. Pedestrian detection has not yet reached that maturity level, but it is in our roadmap to reach the same level of optimization as the other modules. Regarding SW, we have exploited the capabilities offered by the BSD-licensed OpenCV 2.4.6 libraries [22], our Vision and Image Utility Library (Viulib(R), version Oct. 2013 [23]), and Qt and OpenGL for visualization where required.

Conclusions and future work
The use of computer vision in vehicles is becoming a reality in the market of intelligent vehicles thanks to the optimization of algorithms, the increased power of low-consumption embedded HW, and the reduced cost of cameras and optics. Many different systems can be integrated in
intelligent vehicles using computer vision, with ADAS being of particular relevance. The final goal is to make intelligent vehicles fully automatic and achieve accident-free traffic scenarios. Since this is not yet feasible, the next steps in the field will likely be small steps towards automation, such as the introduction of semi-automated systems (e.g. those that take control of the vehicle only for short periods of time), but also the consolidation of SW and HW advances, so that more complex computer vision solutions become available at lower cost. In that sense we are working on optimizing our algorithms, parallelizing pixel-level operations by combining ARM architectures with FPGA and VHDL-based implementations. This will dramatically decrease the computational cost of the algorithms and make them available for a wider range of inexpensive devices, and thus for a wider market.

Acknowledgements
The work described in this paper has been partially supported by the programme ETORGAI 2011-2013 of the Basque Government under project IEB11. This work has been possible thanks to the cooperation with Datik - Irizar Group and their support in the installation, integration and testing stages of the project.

References
[1] Aggarwal, T.: 'Embedded vision system (EVS)', IEEE/ASME Proc. Int. Conf. on Mechatronic and Embedded Systems and Applications, 2008, pp 618-621.
[2] Schneiderman, R.: 'Car makers see opportunities in infotainment, driver-assistance systems', IEEE Signal Processing Magazine, 2013, 30, (1), pp 11-15.
[3] Chakraborty, S., Lukasiewycz, M., Buckl, C., et al.: 'Embedded systems and software challenges in electric vehicles', Proc. Conf. on Design, Automation and Test in Europe, 2012, pp 424-429.
[4] Anders, J., Mefenza, M., Bobda, C., Yonga, F., Aklah, Z., Gunn, K.: 'A hardware/software prototyping system for driving assistance investigations', Journal of Real-Time Image Processing, 2013, pp 1-11.
[5] Teich, J.: 'Hardware/software codesign: The past, the present, and predicting the future', Proceedings of the IEEE, 2012, 100 (Special Centennial Issue), pp 1411-1430.
[6] Ponce, J., Berg, T. L., Everingham, M., et al.: 'Dataset Issues in Object Recognition', Toward Category-Level Object Recognition, LNCS 4170, 2006, pp 29-48.
[7] Nieto, M., Cortés, A., Otaegui, O., Arróspide, J., Salgado, L.: 'Real-time lane tracking using Rao-Blackwellized particle filter', Journal of Real-Time Image Processing, 2012, pp 1-13.
[8] Sun, Z., Bebis, G., Miller, R.: 'Monocular precrash vehicle detection: features and classifiers', IEEE Transactions on Image Processing, 2006, 15, (7), pp 2019-2034.
[9] Wang, Q., Zhang, Q., Rovira-Más, F.: 'Auto-Calibration Method to Determine Camera Pose for Stereovision-Based Off-Road Vehicle Navigation', Environment Control in Biology, 2010, 48, (2), pp 59-72.
[10] Arróspide, J., Salgado, L.: 'Region-dependent vehicle classification using PCA features', IEEE Proc. Int. Conf. Image Processing, 2012, pp 453-456.
[11] Sun, Z., Bebis, G., Miller, R.: 'On-Road Vehicle Detection Using Gabor Filters and Support Vector Machines', Proc. IEEE International Conference on Digital Signal Processing, 2002, pp 1019-1022.
[12] Ortega, J. D., Nieto, M., Cortés, A., Flórez, J.: 'Perspective Multiscale Detection of Vehicles for Real-time Forward Collision Avoidance Systems', Advanced Concepts for Intelligent Vision Systems, Lecture Notes in Computer Science, 2013, 8192, pp 645-656.
[13] Del Blanco, C. R., Jaureguizar, F., García, N.: 'Visual tracking of multiple interacting objects through Rao-Blackwellized Data Association Particle Filtering', IEEE Proc. Int. Conf. on Image Processing, 2010, pp 821-825.
[14] Forsman, P. M., Vila, B. J., Short, R. A., Mott, C. G., Van Dongen, H. P. A.: 'Efficient driver drowsiness detection at moderate levels of drowsiness', Accident Analysis & Prevention, 2013, 50, pp 341-350.
[15] Hu, S., Zheng, G., Peters, B.: 'Driver fatigue detection from electroencephalogram spectrum after electrooculogram artefact removal', IET Intelligent Transport Systems, 2013, 7, (1), pp 105-113.
[16] Shaout, A., Colella, D., Awad, S.: 'Driver Assistance Systems - Past, present and future', Proc. Seventh International Computer Engineering Conference (ICENCO), 2011, pp 72-82.
[17] Valeo Vision: '360 Bird's eye view', http://valeovision.com/innovation/
[18] Badura, S., Stanislav, F.: 'Advanced scale-space, invariant, low detailed feature recognition from images - car brand recognition', Proc. International Multiconference on Computer Science and Information Technology, 2010, pp 19-23.
[19] Shan, D., Ibrahim, M., Shehata, M., Badawy, W.: 'Automatic License Plate Recognition (ALPR): A State-of-the-Art Review', IEEE Transactions on Circuits and Systems for Video Technology, 2013, 2, (2), pp 311-325.
[20] Young, K., Regan, M.: 'Driver distraction: A review of the literature', in Faulks, I. J., et al. (Eds.): Distracted driving, Sydney, NSW: Australasian College of Road Safety, 2007, pp 379-405.
[21] Stallkamp, J., Schlipsing, M., Salmen, J., Igel, C.: 'Man vs. Computer: Benchmarking machine learning algorithms for traffic sign recognition', Neural Networks, 2011, 32, pp 323-332.
[22] 'OpenCV 2.4.6', http://www.opencv.org
[23] 'Viulib 13.10', http://www.vicomtech.es/viulib