Advanced integrated enhanced vision systems

J. Richard Kerr*a, Chiu Hung Lukb, Dan Hammerstromb, Misha Pavelb

a Max-Viz, Inc., 16165 SW 72, Portland, OR, USA 97224
b OGI School of Science and Engineering, Oregon Health Sciences University, 20000 NW Walker Road, Beaverton, OR, USA 97006

*[email protected]; phone 1 503 968-3036, X103; fax 1 503 968-7615; max-viz.com

ABSTRACT

In anticipation of its ultimate role in transport, business and rotary wing aircraft, we clarify the role of Enhanced Vision Systems (EVS): how the output data will be utilized, appropriate architecture for total avionics integration, pilot and control interfaces, and operational utilization. Ground-map (database) correlation is critical, and we suggest that “synthetic vision” is simply a subset of the monitor/guidance interface issue. The core of integrated EVS is its sensor processor. In order to approximate optimal, Bayesian multi-sensor fusion and ground correlation functionality in real time, we are developing a neural net approach utilizing human visual pathway and self-organizing, associative-engine processing. In addition to EVS/SVS imagery, outputs will include sensor-based navigation and attitude signals as well as hazard detection. A system architecture is described, encompassing an all-weather sensor suite; advanced processing technology; inertial, GPS and other avionics inputs; and pilot and machine interfaces. Issues of total-system accuracy and integrity are addressed, as well as flight operational aspects relating to both civil certification and military applications in IMC.

Keywords: enhanced vision, sensor fusion, ground-map correlation

1. ENHANCED VISION SYSTEMS: EVOLUTION OF CAPABILITIES

1.1 Initial rationale for Enhanced Vision

The basic rationale for Enhanced Vision Systems (EVS) on transport and rotary wing aircraft is increased safety in the form of “enhanced situation awareness” derived from infrared (IR) imagery. This applies at night and/or in obscurants such as haze, smog, and many fog scenarios. The significance of improved vision when flying at night is substantial and should not be underestimated. In addition to weather-limited visibility, haze over the national airshed has become a frequent, continent-spanning issue.

This utilization of EVS addresses such critical areas as runway incursions; CFIT avoidance; general safety enhancements during approach, landing, and takeoff; and ground operations. As a means of generally improved visual awareness of terrain, traffic, structures, and obstacles, it does not seek “extra credit” regulatory certification. It is, however, a potentially significant autonomous asset for use at Cat I and non-precision fields as well as for RNAV operations. Safety statistics are increasingly dominated by human (rather than equipment) failure, and it is highly probable that a number of CFIT and incursion-related accidents in recent years could have been avoided with the availability of basic EVS.

Traditionally, the industry has looked for a direct economic payback on investment in such a capability. However, with the very attractive cost/performance and reliability attributes of the newest EVS technology, operators are recognizing the advantages of “autonomous safety enhancement” in its own right.

1.2 Progression to “advanced EVS”

The next step will be to extend enhanced situation awareness to virtually “all-weather” effectiveness. Extra-credit certification is still not a goal, and the key to acceptance will be the availability of a low-cost sensor suite and its associated processor. One element that this capability adds to the aircraft is a direct, visual ground-proximity indicator, which is arguably more effective than GPWS with aural cockpit warnings.

A further step will be the extension to extra credit in IMC. Although IR-based EVS has experienced a difficult regulatory acceptance in this arena, the seamless fusion of all-weather imagery should lead to significant advances, provided that issues such as real-world obscuration, pilot workload, and overall system integrity are properly addressed.

Finally, the ultimate goal or “Holy Grail” of EVS is the achievement of autonomous Cat III operations, including at non-precision airfields. Broadly stated, this implies the efficient and rapid accomplishment of flight missions (including dispatch) in IMC, with improved throughput and completion rates. The central theme of this paper is that commercial and military (dual-use), autonomous Cat III operations can be achieved through the proper integration of EVS with GPS Landing Systems/Flight Management Systems (GLS/FMS). Other relevant avionics include inertial sensors, EGPWS, ADS-B, and TCAS. We will refer here to such “integrated EVS” systems as I-EVS.

It is generally recognized that, while highly accurate, augmented DGPS has insufficient integrity in terms of availability, jammability, and various artifacts1. Therefore, the addition of ground-correlated, all-weather EVS as an integrity monitor and separate-thread navigation/attitude source provides the required assurance; this includes real-time detection of hazards arising from obstacles, incursions, and digital-map inaccuracies. We note that, although the terminology has evolved, this general approach of “separate-thread”, sensor-based integrity assurance has been pursued for more than a decade, including in the context of the Boeing “Enhanced Situational Awareness (ESAS)” System2.

A challenge facing the I-EVS concept is that RNAV/RNP approvals are evolving towards Cat I minima (and potentially lower for certain military transport missions), while the requirement for still lower decision heights occurs in generally less than one percent of operations. Therefore, any added capability must be highly cost-effective, including the expense of added avionics as well as actual integration.

2. EVS SENSOR EVOLUTION

2.1 Baseline EVS sensors

Our search for baseline IR imagers optimally tailored to EVS has led directly to the new generation of non-cryogenically-cooled, microbolometer focal plane arrays3,4. The basic reason is that EVS is a wide-field-of-view application, which implies short-focal-length optics; therefore, the low F-numbers required to achieve high performance with “uncooled” imagers can be achieved using small and inexpensive lenses. A typical aperture diameter is about 1.5 inches. The absence of a cryocooler also contributes greatly to reliability, as well as to compact, lightweight, and economical imaging units. With such “fast” optics, sensitivities can essentially be as good as for cryocooled detectors, and ongoing, defense-based development has the specific goal of approaching theoretical (thermal-fluctuation-limited) performance5. Uncooled sensors are also virtually “instant-on”.

A further advantage is that these imagers operate in the long-wave infrared (LWIR) spectrum, typically 8-14 microns, whereas cryocooled sensors utilized for EVS operate in the mid-wave infrared (MWIR, 3-5 microns). Infrared often provides a significant fog-penetrating capability, and because of the higher wavelength/droplet-size ratio, LWIR generally has superior performance in such scenarios4. Furthermore, in the cold ambient conditions that are most challenging for infrared EVS, the background scene energy is shifted toward LWIR, and uncooled sensitivity can actually be superior to that of cryocooled MWIR4. In fact, the only advantages of MWIR over LWIR arise in such non-EVS applications as:

1) surveillance/reconnaissance requiring long-focal-length telescopes
2) very-long propagation paths with high gaseous water content (humid, maritime atmospheres), which is absorptive to LWIR
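As a first-order justification for the emphasis above on fast optics, the noise-equivalent temperature difference (NETD) of a thermal imager scales roughly with the square of the F-number. This is the standard radiometric scaling offered here as an illustrative aside, not a design value from the authors:

\mathrm{NETD} \;\propto\; 4F_{\#}^{2} + 1 \;\approx\; 4F_{\#}^{2}

so moving from F/2 to F/1 optics improves sensitivity by roughly (4·4+1)/(4·1+1) ≈ 3.4×, and with the short focal lengths of a wide-field EVS imager, even F/1 apertures remain physically small and inexpensive.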

The above (LWIR or MWIR) alternatives are utilized to image the thermal background scene, including en-route terrain, runway boundaries, airport features, structures, incursions/obstacles, and traffic. In addition, it is highly desirable to enhance the acquisition of runway/approach lighting. Cryocooled MWIR units are typically extended down to short-wave IR (SWIR) wavelengths to accomplish this. However, the dynamic-range problem inherent in the simultaneous handling of high-flux lights and low-flux thermal backgrounds tends to compromise both functions. With uncooled LWIR, we have found it preferable to add a second, uncooled SWIR imager that allows us to process the two signals separately. Optical and electronic filtering permits the extraction of the lights of interest (including strobes) while rejecting much of the clutter of extraneous lighting in the scene; we then overlay these lights onto the general (thermal) scene. In the latest system implementation, the LWIR and SWIR units utilize a common aperture. The extraction and fusion operations for this patented, dual-uncooled-sensor approach are accomplished in an FPGA-based processor. The operation of the “Model EVS-2000” dual sensor is illustrated in Fig. 1, showing a night-time approach to Boeing Field near Seattle, WA. Note the B737 on the far end of the runway and the C172 on the taxiway.

2.2 All-weather sensor suite

Notwithstanding the high sensitivities that are now available, LWIR is no panacea for fog, and the natural choice to complement the baseline EVS sensors is imaging millimeter wave (mmw). The mmw penetrates fog quite well but with limited resolution; an image-fusion system can be used to seamlessly “flesh out” the composite image as the IR emerges during a landing approach. As discussed later, however, direct display may not represent the optimum utilization of these assets in an IMC system.

Imaging mmw continues to progress in performance, physical size, and cost. The wavelength band (propagation window) of choice for EVS is 94 GHz, although 140 GHz shows increasing promise for better size/angular resolution while offering satisfactory atmospheric transmission. A major remaining barrier to the use of 140 GHz is its cost. Basic diffraction (antenna) physics limits the “true” angular resolution of a 94 GHz system to 1.1 degrees per 10 cm of antenna cross-section, based on a half-Rayleigh criterion. In order to actually realize this, sufficient over-sampling is required. In addition, depending upon the robustness of the signal-to-noise ratio, a degree of super-resolution may also be achieved.

The most common configuration for active mmw “imagery” is to use mechanical or electronic scanning in azimuth, along with range resolution obtained by processing the FMCW return. The resultant PPI or “B-scope” presentation is then converted to a pseudo-perspective (C-scope) display. However, such substitution of range resolution for elevation resolution results in artifacts that have proven objectionable to many prospective users. Nevertheless, it has been shown that this type of sensor can very effectively derive ground-correlated navigation and hazard detection6.

Covert operators have generally preferred passive systems, or at least active versions with constrained emissions. Passive units tend to be true (azimuth/elevation-resolving) “cameras”, which primarily sense differing mmw scene reflections against the cold sky background.
Sensitivity vs. update rate and physical size vs. resolution have traditionally been issues with passive mmw cameras. Significant progress has been made with sparse, aperture-plane arrays7, and also with focal-plane arrays that utilize either energy-detecting or heterodyning cells. With the addition of an illuminator, the focal-plane units can become selectable, dual-mode (active/passive) sensors.

Recently, the demands of military users for advanced, autonomous landing as well as terrain following/terrain avoidance (TFTA) capabilities have suggested that true, 3-dimensional active mmw imagers are required. This introduces a further parameter (range) into the hazard-recognition function. The achievement of simultaneous resolution in azimuth, elevation, and range is challenging for both FMCW and pulsed systems. Current investigations encompass antenna/installation requirements and overall system tradeoffs; it appears that this challenge can be met, albeit at a higher cost level than is envisioned for a general-purpose, commercial EVS unit.
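As a check on the angular-resolution figure quoted above, the half-Rayleigh criterion for an antenna of diameter D at wavelength λ gives (a worked example with the numbers already stated in the text):

\theta \;\approx\; 0.61\,\frac{\lambda}{D},\qquad
\lambda_{94\,\mathrm{GHz}} = \frac{c}{f} \approx 3.2\ \mathrm{mm},\qquad
\theta \;\approx\; 0.61 \times \frac{3.2\ \mathrm{mm}}{100\ \mathrm{mm}} \approx 0.0195\ \mathrm{rad} \approx 1.1^{\circ}

which reproduces the 1.1 degrees per 10 cm of aperture cited for 94 GHz; a 140 GHz system of the same size would improve this in proportion to wavelength, to roughly 0.7 degrees.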

We are planning flight tests on new-generation active and passive mmw units in the near future. Ultimately, we may distinguish between two solutions with differing priorities:

• lowest cost: an affordable sensor suite for “all-weather situation awareness”
• highest performance and integrity, for I-EVS operation under Cat III conditions

2.3 EVS processing and integration

Standard image-processing functions for EVS include non-uniformity correction, auto-gain and level (preferably on a local-area basis), and various enhancements. This is followed by fusion of the signals from a multi-imager sensor suite. Advanced functions include feature extraction and object recognition for runway and hazard detection, and the extension to ground-map correlation in order to generate sensor-based navigation and hazard-alert signals6,8. This opens up a range of powerful options in I-EVS, including the pilot and machine interfaces discussed below.

The above functions may be achieved using hardware ranging from PC-video FPGA and processor boards to bulky, specialized platforms. The challenge is to implement the most powerful algorithms on cost-effective, compact, “productized” hardware, with software and firmware design rules that are compliant with stringent certification requirements for IMC operations. We point out two recent and notable achievements:

• a complete and rigorous description of multiresolution, Bayesian (maximum a posteriori or maximum likelihood) fusion and correlation9; at this point, the required computing power is a barrier to practical implementation
• robust fusion and correlation (nav and hazard) processing, with consistent flight-test demonstration even using an azimuth-range mmw sensor6; this capability remains to be implemented as productized software and hardware
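To make the fusion step concrete, the sketch below blends two co-registered frames with per-pixel weights derived from local contrast. This is an illustrative heuristic only, with hypothetical function names; it is not the multiresolution Bayesian method of Ref. 9 nor the processing used in the fielded system.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(img, win=9):
    """Local standard deviation as a crude per-pixel salience measure."""
    img = np.asarray(img, dtype=float)
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def fuse(lwir, mmw, eps=1e-6):
    """Contrast-weighted average of two co-registered, [0,1]-normalized frames:
    each pixel is drawn preferentially from the sensor showing more local detail."""
    w1, w2 = local_contrast(lwir), local_contrast(mmw)
    return (w1 * lwir + w2 * mmw) / (w1 + w2 + eps)
```

Broadly speaking, a Bayesian formulation replaces such ad hoc contrast weights with weights derived from modeled sensor statistics, which is where the computational burden noted above arises.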

3. ASSOCIATION ENGINE APPROACH

3.1 Introduction

We are pursuing an alternative processing concept with the goal of achieving approximate Bayesian fusion, recognition, and ground correlation with greatly reduced hardware requirements. This is a neural-net-inspired, self-organizing, associative-memory or “Association Engine” (AE) approach that can be implemented on FPGA-based boards of moderate cost. When applied to imaging, it achieves functionalities that have not formerly been practical.

Associative (Palm) memory10-12 is an operation that is pervasive in a variety of neural circuitry, though in much more complex forms. Simply put, association involves storing mappings of specific input representations to specific output representations; it can then perform recall from a highly obscured, noisy, or incomplete input. In comparison to conventional memory, data are stored in overlapping, distributed representations. Sparse, distributed data representation leads to generalization and fault tolerance, and the information content per bit can be higher than for traditional memory. The approach has novel and important aspects:

1) The AE constitutes a very efficient implementation of “Best Match” (BM) association; it achieves this at high (real-time video) rates, thereby introducing BM as an important new association paradigm (an alternative to Exact Match10-12)
2) The BM association can be shown to represent a Bayesian Maximum Likelihood (ML) operation13
3) It is highly robust in the face of noisy and obscured image inputs
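The Best Match principle can be sketched in a few lines using the clipped outer-product (Willshaw/Palm) storage rule in an auto-associative form. This is an illustration of the memory principle only, with hypothetical names and a simplified recall step; it is not the authors' AE implementation.

```python
import numpy as np

def train(reference_vectors):
    """Clipped outer-product storage: W[i, j] = 1 if any stored 0/1 pattern
    has both bit i and bit j set (binary Willshaw/Palm weight matrix)."""
    n = reference_vectors.shape[1]
    W = np.zeros((n, n), dtype=bool)
    for v in reference_vectors:
        b = v.astype(bool)
        W |= np.outer(b, b)
    return W

def best_match(W, probe, reference_vectors):
    """Recall: sum the weight-matrix rows selected by the active probe bits,
    then return the index of the stored pattern with the largest overlap
    with that response -- Best Match rather than Exact Match."""
    response = W[probe.astype(bool)].sum(axis=0)
    scores = reference_vectors.astype(int) @ response
    return int(np.argmax(scores))
```

Even with many probe bits missing or flipped, the response remains dominated by the correct stored pattern, which is the robustness property exploited for obscured sensor imagery.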

3.2 Image representation and AE operation

The means of image representation emulates the human visual pathway14, as depicted in Fig. 2. This is accomplished in a preprocessor that performs feature extraction (edges as well as potentially higher levels of abstraction) in order to generate a large, sparse, random binary vector for each image frame. The feature-images are created using the Laplacian-of-Gaussian method, which finds edges by looking for zero crossings after filtering with a Laplacian-of-Gaussian filter; each edge image is then thresholded by taking the K strongest features, setting those to 1 and all others to zero. If the image has MxN pixels, the corresponding vector has MxN nodes, of which only a fixed K (typically tens of nodes) are 1's. For multiple imagers, the feature vectors are simply concatenated to create a composite vector. These operations are performed over a range of multiresolution hyperpixels9, including adaptive temporal filtering or “3-d (x, y, t) multiresolution”.

Assume for the moment that a database imagery set is available for the aircraft route and destination area of interest. This may be obtained, for example, from dedicated flight data, or from National Imagery and Mapping Agency (NIMA) and Digital Elevation Model (DEM) data, with appropriate transformations both for basic cockpit perspective and for the physics of the individual sensors15. (Such transformation may also involve non-perspective imagery, as in the case of an azimuth-range mmw sensor.) Each reference image is indexed by its navigational position and associated with the basic visual image from which it is derived. Salient-feature extraction is performed on the multi-imager reference imagery, thereby generating “reference vectors”. The outer product of a complete (regional) set of such vectors generates a binary weight matrix that constitutes the AE memory for that set. This constitutes “training” of the AE, which in this case can be accomplished by compilation from a database library of binary feature data. (This discussion assumes “naïve priors”, i.e., equal probabilities over the reference vectors; in the more general case of an a priori probability distribution, the weight matrix is not binary.)

In operation, the inner product of the weight matrix with an arbitrary (real-time, degraded) input feature vector yields an output vector. Using a suitable metric, the AE determines the best match, in feature space, between this output vector and the reference vectors. The final output is the chosen reference vector, which is indexed with respect to its associated visual reference image as well as its navigational position. A postprocessor then recalls the visual reference image, which is the ML ground-correlated scene. The basic process is illustrated in Fig. 3.
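As a concrete, simplified illustration of the preprocessing step, the sketch below reduces a frame to a K-hot binary feature vector. It keeps the K largest Laplacian-of-Gaussian magnitudes rather than explicitly locating zero crossings, a simplification for clarity; names and parameters are hypothetical, not the flight implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def sparse_feature_vector(image, k=50, sigma=2.0):
    """Edge-emphasizing Laplacian-of-Gaussian response, reduced to a K-hot
    binary vector of length M*N (only the K strongest responses are set)."""
    response = np.abs(gaussian_laplace(image.astype(float), sigma))
    flat = response.ravel()
    vec = np.zeros(flat.size, dtype=np.uint8)
    vec[np.argpartition(flat, -k)[-k:]] = 1
    return vec

# For multiple imagers, the per-sensor vectors are simply concatenated:
# composite = np.concatenate([sparse_feature_vector(f) for f in (lwir_frame, swir_frame)])
```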
3.3 Other functions and complete AE system

In practice, it is also necessary to correct for misregistration of the real-time imagery with respect to the reference database. Although this can be achieved through conventional processing (warping), it is also an ideal application for another, ancillary AE. In this case, the engine may be trained on generic runway images as a function of perspective; the BM output will be indexed with respect to aircraft attitude and offset. This approach promises to be robust in the presence of translation, rotation, scaling, and distortion, and it will reduce the capacity required in the principal AE.

Very sensitive hazard detection is accomplished by subtracting the real-time input feature-image from the BM returned by the AE, and highlighting or annunciating the differences. In a further embellishment, such a difference (hazard) vector may itself be processed by another ancillary AE, trained as a hazard-recognition device through storage of a wide variety of obstacles, scales, and aspect angles.

A simple example of AE operation is shown in Fig. 4. A very noisy runway scene is sensed by two imagers. The thermal background image is feature-extracted, showing as barely discernible random white dots in the upper feature-image; similarly, the lights feature-image is shown just below it. The combined feature-image is registered and applied to the Image Association Memory, which outputs the appropriate BM reference image. Finally, the registration operation is inverted to present the correct, real-time perspective image on a pilot's display. It is interesting to note that, although the lights features dominate in this scenario, the system obtained the correct best match with only the thermal imager operating, illustrating the robustness of the approach.

A more complete AE-based processor architecture is shown in Fig. 5. Through indexing of the BM multisensor reference vector, along with inversion of the registration operation and highlighting of the differential (input minus BM) hazard vector, the post-processor outputs the correct visual image along with navigation, attitude, and hazard signals.
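On the binary feature vectors, the hazard-differencing step described above reduces to a single logical operation; the following is an illustrative sketch with hypothetical names, not the flight code.

```python
import numpy as np

def hazard_features(input_vec, best_match_vec):
    """Feature bits present in the live scene but absent from the best-match
    reference; these candidate obstacle/incursion locations are highlighted
    or passed on to a hazard-recognition AE."""
    return (input_vec.astype(bool) & ~best_match_vec.astype(bool)).astype(np.uint8)
```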

3.4 Database issues

The en-route terrain and destination-region database required for the above operations should be available to most users. It must have sufficient breadth and detail to support RNP/RNAV as well as non-standard landing approaches. The appropriate detail is flight-phase dependent, which is key to limiting the required on-board memory capacities to levels readily achieved with today's technology (including PC-based). The positional resolution of the reference imagery will become much greater during landing approach, with the greatest detail occurring near threshold; for commercial use, high-resolution inserts of airport environs may be appropriate15. Terrain and obstacle data requirements are treated in RTCA/DO-276.

It is also of interest to explore the operation of the AE principle in I-EVS for scenarios in which less data are available. Further investigations are ongoing regarding the use of simple digital-map (“digital Jeppesen”) data in the AE; it may be noted that the conventional EVS-navigation processor of Ref. 6 requires only the runway dimensions, presence of lights, and heading to within +/- one degree. The present effort is being extended to the use of generic terrain and landing-field AE references, in order to establish the efficacy of the association approach in scenarios with rudimentary or even zero available data for the route and destination. This also relates to the raw-data “fill-in” concept described in the next section.

3.5 Fusion and correlation integrity issues

Related to the issue of possibly incomplete and/or incorrect database imagery is that of fusion and correlation integrity. In the advanced applications of I-EVS, extremely high probabilities must be associated with the following:

• “what is out there” will in fact be displayed and otherwise input to the I-EVS
• what is displayed and input is in fact “out there”

In other words, the system must not overlook significant data, and must not generate artifacts. One key to this is the correlated preprocessing of sequential frames (“adaptive fusion”9), which in fact does not introduce image latency. In the AE context, this operation corresponds to “3-dimensional feature extraction”, where the third dimension is time. Through frame-to-frame comparison, it inherently rejects temporal noise artifacts while enhancing subtle scene features. Ongoing investigations are dealing with spatial artifacts, utilizing both longer-term and inter-sensor frame comparisons. In addition, computation of BM confidence will be implemented.

The other key to both database and processing integrity is the retention of some degree of “raw” (unprocessed sensor) data. In the AE context, this may again be likened to the operation of the human visual system: the retina is a low-bandwidth interface, but we experience a richness of vision. In effect, the association mechanism provides much of the content, but the raw-data sensor input serves to “fill in” the field and alert us to changes. A similar, local-image-area “fill-in” operation to hybridize associative and real data is under investigation.

3.6 Implementation of the AE

To date, the AE work has been performed on a dedicated simulation facility. Real-time implementation will require significant computation, memory, and bandwidth. Execution of these algorithms requires the ability to fetch long arrays directly from memory; this is problematical for state-of-the-art processors, which, in spite of very high clock rates, are constrained by memory bandwidth. In addition to large bandwidth requirements, these programs need significant parallelism; however, they tolerate low precision. Based upon relevant experience, Field Programmable Gate Arrays (FPGAs) constitute an appropriate solution.

A prototype board is being fabricated that utilizes four mid-range FPGAs with embedded SDRAM controllers and logic-gate equivalents of about 150K each. Each FPGA is connected to a state-of-the-art, 512 MB SDRAM via an 80 ns, 64-bit pathway. The external interface is a PCI bus. These boards are expected to be on-line during the spring of 2003, at a cost of a few hundred dollars each. Later, if desired, further ASIC customization may be employed. A complete real-time system, with all functions shown in Fig. 5, will be utilized in multisensor flight tests and for development of the integrated-EVS concepts described below.

4. INTEGRATED ENHANCED VISION SYSTEMS

Although the augmented DGPS utilized in GPS Landing Systems is capable of high accuracy, the requisite integrity for Cat II and Cat III operations is problematical. With the added integration of an autonomous and completely separate-thread, real-time, sensor-based navigation and hazard-detection system, the required (10⁻⁹) integrities can in principle be attained. Here we briefly discuss both pilot and machine interfaces for an I-EVS such as that shown in Fig. 6.

4.1 Pilot interfaces

Based upon the best-match AE output (or on conventional processing), the visual image may be presented either head-down or on a conformal, stroke-raster HUD. There is considerable ongoing work in the human-factors area regarding the best implementation of this interface16. Alternatives include photo-realistic imagery and sparse (e.g., wire-frame) or symbolic imagery. In essence, such an AE/correlation-driven display constitutes “sensor-verified synthetic vision”. The goal is to permit the pilot to readily interpret the image data, symbology, and (in the HUD case) real-world cues without interference and undue clutter. Also, attention must readily be drawn to critical data elements, such as hazard alerts. A possible added tool is the use of color, noting that, traditionally, the color red is reserved for hazard indications.

The image data may be utilized in either of two ways:

• as an integrity monitor for autopilot operations
• with guidance symbology (e.g., a predictive “highway in the sky”)
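One way to see how a separate sensor-based thread helps reach the 10⁻⁹ figure cited at the start of this section: if the GLS thread and the EVS/ground-correlation thread have independent undetected-failure probabilities per approach, the combined probability is roughly their product. The numbers below are illustrative assumptions, not certified allocations:

p_{\mathrm{total}} \;\approx\; p_{\mathrm{GLS}} \cdot p_{\mathrm{EVS}}, \qquad \text{e.g.}\quad 10^{-5} \times 10^{-4} \;=\; 10^{-9}

with the caveat that common-mode effects (for example, database errors feeding both threads) must be excluded by design for the independence assumption to hold.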

4.2 Machine interface

The EVS generates separate-thread navigation, attitude, and hazard signals. In the FMS, the nav and attitude signals can be compared with GLS as well as with inertial and other avionics inputs. This is a generalization of a “terrain-match navigator”1, and suggests that, in the complete integration of EVS, the “highest and best use” of the imagery and its associated data may not be in the form of pilot displays but rather through the machine interface. When the AE processor is used, relatively displeasing imagery such as that from a mmw sensor (particularly if not vertically resolved) is utilized only to help generate the correct, clean visual display from the database. In fact, with this emphasis on machine use of the data, the pilot may also be buffered from such interpretive workloads even if conventional EVS-navigation processing is used6.
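A minimal sketch of the kind of cross-check this machine interface implies: compare the EVS-derived position against the GLS solution and flag excessive disagreement. The statistic, threshold, and interface below are illustrative assumptions, not the authors' FMS logic.

```python
import numpy as np

def nav_consistency_check(evs_pos, gls_pos, evs_cov, gls_cov, threshold=11.34):
    """Flag an integrity fault when two independent position estimates disagree
    by more than their combined uncertainty supports.
    threshold ~ chi-square (3 dof, 99th percentile); tune per integrity budget."""
    d = np.asarray(evs_pos) - np.asarray(gls_pos)
    S = np.asarray(evs_cov) + np.asarray(gls_cov)   # covariances add for independent threads
    m2 = float(d @ np.linalg.solve(S, d))           # squared Mahalanobis distance
    return m2 > threshold
```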

ACKNOWLEDGEMENTS

This work was sponsored in part by AFRL/SNHC, Hanscom AFB, MA, under Contract No. F19628-02-C-0080.

REFERENCES

1. M. Kayton and W. R. Fried, Avionics Navigation Systems, Ch. 5, John Wiley & Sons, New York, 1997.
2. S. Harrah, W. Jones, C. Erickson, and J. White, “The NASA approach to realize a sensor enhanced synthetic vision system,” Proc. IEEE 21st Digital Avionics Systems Conference, CH37325, 2002.
3. C. Tiana, J. Kerr, and S. Harrah, “Multispectral uncooled infrared enhanced-vision system for flight test,” Proc. SPIE: Enhanced and Synthetic Vision 2001, Vol. 4363, pp. 231-236, 2001.
4. J. Kerr and S. Way, “New infrared and systems technology for enhanced vision systems,” NATO/RTA/SET Workshop on Enhanced and Synthetic Vision Systems, Ottawa, Ontario, 2002.
5. D. Murphy et al., “High-sensitivity 25-micron microbolometer FPAs,” Proc. SPIE: Infrared Detectors and Focal Plane Arrays VII, Vol. 4721, pp. 99-110, 2002.
6. B. Korn, H. Doehler, and P. Hecker, “Navigation integrity monitoring and obstacle detection for enhanced vision systems,” Proc. SPIE: Enhanced and Synthetic Vision 2001, Vol. 4363, pp. 51-57, 2001.
7. S. Clark, C. Martin, J. Lovberg, A. Olson, and J. Gailliano, Jr., “Real-time wide field of view passive millimeter-wave imaging,” Proc. SPIE: Infrared and Passive Millimeter-Wave Imaging Systems, Vol. 4719, pp. 341-349, 2002.
8. Y. Le Guilloux and J. Fondeur, “Using imaging sensors for navigation and guidance of aerial vehicles,” Proc. SPIE: Sensing, Imaging and Vision for Control and Guidance of Aerospace Vehicles, Vol. 2220, pp. 157-168, 1994.

9. R. Sharma, T. Leen, and M. Pavel, “Bayesian sensor image fusion using local linear generative models,” Opt. Eng. 40, pp. 1364-1376, 2001.
10. G. Palm, “On associative memory,” Biological Cybernetics 36, pp. 19-31, 1980.
11. G. Palm, F. Schwenker, F. Sommer, and A. Strey, “Neural associative memories,” Associative Processing and Processors, C. Weems, Ed., pp. 307-326, IEEE Computer Society, 1997.
12. S. Zhu and D. Hammerstrom, “Simulation of associative neural networks,” Proceedings of the 9th International Conference on Neural Information Processing, Vol. 4, 02EX575, IEEE, pp. 1639-1643, 2002.
13. G. Zeng and D. Hammerstrom, “Distributed associative neural network model approximates Bayesian inference,” OGI School of Science and Engineering of Oregon Health Sciences University, 2003, to be published.
14. E. Rolls, Computational Neuroscience of Vision, Oxford University Press, Oxford, 2001.
15. Richard Flitton, Evans and Sutherland Computer Corporation, Salt Lake City, UT, private communication.
16. NATO/RTA/SET Workshop on Enhanced and Synthetic Vision Systems, Ottawa, Ontario, 2002.