TECHNOLOGY LICENSING OPPORTUNITY
TARGET, a Versatile Algorithm for Vision-based Detection of Object Approaches and Collision Threats

Executive summary

A researcher from the University of Barcelona has developed the computer algorithm TARGET for predicting collision threats. TARGET processes image frames from video sequences or video cameras with no constraints on frame rate or resolution. The output is a simple signal which increases before a collision would occur. TARGET suppresses background movement (such as occurs during car driving) without sacrificing sensitivity for detecting collisions. Furthermore, large-field motion patterns due to camera movement or self-motion are also suppressed. Both suppression mechanisms reduce the number of false alerts and make the algorithm versatile enough to be used in many different applications, for example in cars or airplanes. The algorithm is based on local computations, so a corresponding hardware implementation should be feasible. The University is looking for a license agreement, but other collaborations may be considered (co-development, financial resources, etc.).
Introduction

The reliable detection of approaching objects based on visual information has attracted great attention from researchers and industry. The main difficulty in developing such an algorithm is distinguishing approaching objects from the rest of the image (the background), because background motion is often unpredictable and thus causes interference. Such interference can lead to detecting collisions where there are actually none (false alerts), or can even make the algorithm "blind" to object approaches.
Description

TARGET is an algorithm for detecting object approaches. As a novelty, the detection is rendered largely independent of background motion, and there are no constraints on the environment in which the algorithm is used. Note that other collision-detection algorithms are usually tailored to specific environments, such as indoor scenes or car driving. TARGET detects collisions with arbitrary objects (pedestrians, cars, footballs, airplanes, etc.), so it can be used flexibly for many applications. This is a further important difference from competitor algorithms, which often have to define the objects of interest beforehand (e.g. pedestrian, bicyclist, car, etc.). Recognizing these objects of interest adds further computational load and introduces new sources of error (e.g. due to misclassification). The computational load of TARGET can be tuned continuously from low (low reliability for detecting collisions) to high (excellent reliability). In this way, TARGET can process input from a video camera in real time even when executed on a standard laptop computer. Naturally, the preferable choice is a corresponding hardware implementation (FPGA, GPU).
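The kind of output TARGET produces — a simple signal that rises as an object approaches — can be illustrated with a generic looming-detector sketch. This is purely illustrative and is not the proprietary TARGET algorithm: the function names and the synthetic input are assumptions made for the example. The underlying idea is that an approaching object expands on the image plane, so a signal derived from its growth rate increases before collision.

```python
import numpy as np

def looming_signal(frames):
    """Generic looming-detector sketch (NOT the TARGET algorithm):
    the fraction of 'active' pixels grows as an object approaches,
    so its frame-to-frame increase rises before collision."""
    areas = [float((f > 0.5).mean()) for f in frames]
    return [max(0.0, areas[i] - areas[i - 1]) for i in range(1, len(areas))]

def make_frame(size, half):
    """Synthetic input: a bright square of half-width `half`, centered
    on a dark background, mimicking an approaching object."""
    f = np.zeros((size, size))
    c = size // 2
    f[c - half:c + half, c - half:c + half] = 1.0
    return f

# Square expands frame by frame, as an approaching object would.
frames = [make_frame(64, h) for h in range(2, 20, 2)]
signal = looming_signal(frames)
# The signal increases monotonically as the square expands.
```

A real detector would of course work on camera frames rather than binary masks, and — as the Description stresses — TARGET additionally suppresses background and self-motion, which this toy sketch does not attempt.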
Advantages
• Suppression of background movement, camera jitter and self-motion
• Collision detection of any kind of object (any size, form, class, velocity)
• Collision detection in arbitrary environments (simple to complex)
• Flexible tuning of computational load
• No constraints on image frames (any resolution, any frame rate)
• Suitable for hardware implementation
Current stage of development

The algorithm is fully defined and well tested. It is currently implemented in Matlab and C++, and can process video sequences of any frame rate or (pixel) resolution.
Goal

The University is looking for a license agreement, but other collaborations may be considered.
Reference: AVCRI-208
Contact: José Conde
Email: [email protected]
Tel: +34 934 020 128
Appendix: Illustration

Output of TARGET for three video sequences:

Three video sequences (with the frame number indicated in green) from the Star Wars movie served as input to TARGET. In the first video, the background and the lateral spaceships were manually erased. In the second video, only the background was erased. The third video is the original sequence; it is especially challenging due to low contrast, high noise levels, and background motion away from the observer. The corresponding output of TARGET is shown to the right (see legend: the 1st, 2nd and 3rd videos correspond to the gray, green and red curves, respectively), demonstrating background suppression.