AIAA 2016-1435
AIAA Modeling and Simulation Technologies Conference
AIAA SciTech Forum, 4-8 January 2016, San Diego, California, USA


Non-intrusive Flight Test Instrumentation using Video Recognition: Reducing the Cost and Time to Market for Certifying Flight Simulation Devices

Joseph Riccardi1, Cyrus Minwalla2
Flight Research Laboratory, National Research Council Canada, Ottawa, ON, Canada

The National Research Council of Canada has conducted feasibility studies into the development of non-intrusive flight test instrumentation methods with the goal of reducing the cost and time-to-market for certified aerospace products. Video recognition for the collection of flight test time-history data was one such non-intrusive method. Compared to traditional instrumentation alternatives, the use of machine vision techniques for flight data collection can reduce the instrumentation, airworthiness, and installation effort required, and is particularly advantageous when access to the aircraft is limited. This paper details the development of flight test video recognition software, the necessary calibration algorithms and hardware, and the accuracy of the data collected by video, benchmarked against full flight simulator data requirements. This work has shown that video recognition is a convenient means of collecting cockpit flight test data for model development and certification of full flight simulator devices.

Nomenclature

ADI = Attitude Director Indicator
APD = Aeronautical Product Development
API = Application Programming Interface
CAE = Canadian Aviation Electronics
CNRC = Conseil National de Recherches Canada
CPU = Central Processing Unit
DRDC = Defence Research and Development Canada
EFIS = Electronic Flight Instrument System
FPS = Frames per Second
FRL = Flight Research Laboratory
MVP = Model Validation Platform
NRC = National Research Council
NRMSD = Normalized Root Mean Square Deviation
OGD = Other Government Departments
RMS = Root Mean Square
RPM = Revolutions Per Minute
RMSD = Root Mean Square Deviation
SMD = Simulator Model Development
TAS = True Air Speed
TRL = Technology Readiness Level
VR = Video Recognition

1 Corresponding Author: Research Officer, National Research Council Canada, 1200 Montreal Rd, Ottawa, ON K1A 0R6, AIAA Member.
2 Research Officer, National Research Council Canada, 1200 Montreal Rd, Ottawa, ON K1A 0R6, AIAA Member.

Copyright © 2016 by the National Research Council of Canada (Crown copyright). Published by the American Institute of Aeronautics and Astronautics, Inc.


I. Introduction

Flight test data collection in support of simulator model development (SMD) requires the measurement of a variety of parameters in the cockpit, such as engine gauge quantities and control positions. Such data has traditionally been measured by installing string potentiometers on the controls and/or making electrical connections to aircraft gauges or associated sensors deep inside the mechanical systems of the aircraft. In the case of glass cockpits, data is readily available on digital data buses, albeit in a proprietary format. Interfacing with such a system requires aircraft-specific information to decode the digital data, which is proprietary to the avionics manufacturer. The use of machine vision for flight test data collection is an alternative technique which can significantly reduce the instrumentation and airworthiness effort, shorten the overall flight test schedule, and mitigate the risks involved when the availability of the data dictionary is uncertain.

This work presents the development of video recognition techniques at the National Research Council of Canada to extract high-fidelity time-history data, using the criteria for Level D full flight simulator development and validation as the benchmark1. The phases of development were as follows: preliminary development using existing cockpit video; development of video recognition techniques, extending to soft real-time processing on a ground-based simulation platform; followed by flight testing of the technique on the NRC Bell 206 helicopter. The development of video recognition for flight test data collection was subject to the following criteria.

A. Flight Training Device and Full Flight Simulator Requirements

A review of the FAA Part 60 simulator certification requirements1 provided the data accuracy benchmarks for the development of cockpit flight test data measurement via video recognition. Across the criteria for flight training devices (FTD) Levels 5, 6, and 7 and full flight simulators (FFS) Levels A, B, C, and D, the most stringent requirements for engine torque and cyclic stick position, which were selected as representative cockpit measurements, were 3% and 0.1 inches respectively. These reflect the error bounds within which the simulated aircraft must reproduce the aircraft truth data. The same techniques may be used to measure other control deflections and/or gauge quantities.

B. Time Alignment

Time-history data accuracy is critical both in timing precision and in the magnitude of the measured quantity. The data extracted by video recognition must be accurately time-aligned with the entire data package, which may include other parameters such as inertial data and control surface positions. This is particularly important when time-domain modeling methods are used to identify the aircraft mathematical models from collected flight test data2, where even a small time misalignment between control position data and inertial data can have a significant impact on the derived model parameters.

C. In-Flight Data Monitoring

The third practical requirement in the development of non-intrusive instrumentation techniques was the ability to monitor data accuracy in flight. Given the cost and schedule constraints of flight test data collection, this capability is highly desirable. Therefore, the methods that were developed were also tested in a soft real-time environment to provide a live readout of the aircraft instruments. In-flight data checking, whether by an on-board flight test engineer or through data checking software, does not require a high sample rate, though higher sample rates enable more advanced data checking methods.

II. Development with Existing Cockpit Video

A. Experimental Setup

Initial development was performed using existing cockpit video from previous flight test programs. The flight test video analyzed in this section was collected using a Canon Vixia HF10 camcorder with a resolution of 1920x1080 pixels. The camera was mounted on the cockpit ceiling, behind the pilot's seat, recording a general cockpit view typical of simulation and model development video recordings. The installation and camera selection were not made with video recognition in mind and, as such, were less than ideal. The region of interest, depicted in Figure 1, measured approximately 600 x 330 pixels. This work provided a baseline for the quality of data that can be expected from the general cockpit flight test video typically recorded on flight test programs without prior consideration of video recognition.


B. Methodology

1. Region-of-Interest Selection

Localizing the gauges in a busy cockpit is a challenging image processing task and, for the present work, is accomplished manually. The cameras of interest are positioned to minimize off-axis parallax and maximize the field-of-view coverage. Small variations in camera position and viewing angle are accommodated by utilizing template matching to localize the overall layout of the gauges. From a single frame captured while the target was immobile (i.e. the helicopter was stationary on the ground), a template image of the relevant gauges was selected (Figure 2). Features that change from one frame to the next, such as needle positions, were removed from the template image. The template was used to automatically compensate for any changes in the relative position and orientation of the camera to the instrument panel and served as an effective means of vibration compensation. Pattern matching was used to locate the template, and all of the gauges were located relative to the template position identified in each video frame.

Figure 2. Region of interest selection; the blue dot represents the centre of the matched template used as a reference point to identify gauge position for needle-finding.

2. Image Pre-Processing

2a. Method 1 – Absolute Difference of Pixel Values

The template was also used to remove unchanging features in the region of interest. Once the template was accurately overlaid, the absolute difference of each pixel value between the template and the current image frame was taken, with the intention of removing gradations and other markings that might interfere with extraction of the gauge needle. Under ideal circumstances, the difference would reveal only the moving needle position. In practice, the combined result of variable lighting conditions, the use of a camera with a rolling shutter, and a camera mount that vibrated in flight introduced inter-frame blur and distortion effects.

2b. Method 2 – Binary Thresholding

Localized gauges were binarized in each video frame, where the binary threshold was chosen experimentally. The net result was a white needle on a black background. Using the template to locate the gauge in the region of interest, the gauge-read algorithm was implemented to find the needle.

3. Gauge Data Extraction

The template frames were prepared via manual identification prior to commencing the search. Each template was unique to the instrument dial or gauge being extracted, with the methodology for typical gauges outlined as follows:

3a. Single Needle Dial

In order to distinguish gauge needles from other markings, the brightness and contrast values were manually adjusted. Then, a binary threshold was applied to eliminate darker pixels that were likely not part of the bright needle. The change of intensity across a user-specified arc was extracted to reveal the needle position. The pixel positions identified (left and right edges of the needle) were used to compute the angular position of the needle with respect to the centre of the gauge. The measurement was written to a data file indexed by the corresponding video frame count. The calibration process involved formulating the gauge quantity in engineering units as a function of the angular position of the needle, where the perspective projection due to the camera angle required a non-linear mapping to account for the ellipsoidal shape of the imaged gauge.
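To make the single-needle extraction concrete, the following is a minimal sketch of the template-match-then-arc-sample approach described above, written in Python with OpenCV/NumPy rather than the LabVIEW environment used in this work; the template file name, gauge centre, arc radius, and threshold are illustrative assumptions, not values from the flight software.

```python
# Sketch of single-needle extraction: locate the gauge cluster via template
# matching, binarize, then sample intensity along an arc to find the needle.
# GAUGE_CENTRE, RADIUS and THRESH are assumed calibration values.
import cv2
import numpy as np

TEMPLATE = cv2.imread("gauge_template.png", cv2.IMREAD_GRAYSCALE)  # needle-free template
GAUGE_CENTRE = (78, 81)   # gauge centre relative to the template origin (assumed)
RADIUS = 55               # arc radius in pixels that crosses the needle (assumed)
THRESH = 180              # experimentally chosen binary threshold (assumed)

def needle_angle(frame_gray):
    """Return the needle angle in degrees, or None if no needle is found."""
    # 1. Pattern matching: locate the template in the current frame to compensate
    #    for small camera motion and vibration.
    scores = cv2.matchTemplate(frame_gray, TEMPLATE, cv2.TM_CCOEFF_NORMED)
    _, _, _, top_left = cv2.minMaxLoc(scores)
    cx = top_left[0] + GAUGE_CENTRE[0]
    cy = top_left[1] + GAUGE_CENTRE[1]

    # 2. Binary threshold so that (ideally) only the bright needle survives.
    _, binary = cv2.threshold(frame_gray, THRESH, 255, cv2.THRESH_BINARY)

    # 3. Sample intensity along a user-specified arc and find the bright run,
    #    i.e. the left and right edges of the needle.
    angles = np.deg2rad(np.arange(0.0, 360.0, 0.5))
    xs = np.clip((cx + RADIUS * np.cos(angles)).astype(int), 0, binary.shape[1] - 1)
    ys = np.clip((cy - RADIUS * np.sin(angles)).astype(int), 0, binary.shape[0] - 1)
    hits = np.flatnonzero(binary[ys, xs] > 0)
    if hits.size == 0:
        return None  # needle obscured or detection failed

    # Midpoint of the bright run gives the needle direction. A full pipeline
    # would also handle a needle straddling the 0-degree sample and map the
    # angle to engineering units with a non-linear calibration that accounts
    # for the ellipsoidal perspective projection of the gauge.
    return float(np.degrees((angles[hits[0]] + angles[hits[-1]]) / 2.0))
```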
3b. Multi-Needle Dial

Figure 3 depicts a gauge where green lines have been superimposed on the needle locations identified by the recognition software. This scenario was similar to the single-needle case. The user was required to define multiple arcs at different radii spanning the lengths of the two needles. Each needle was detected independently, with the process applied recursively, starting with the needle of smallest radius. Masking was applied after each extraction to prevent confusion between multiple needles during detection. Though this technique was developed on a gauge with needles of different lengths, that condition is not required; the process of finding a needle and then masking it before searching for other needles still applies. Overlapping needles proved to be a limiting case. Additional post-processing steps, guided by the knowledge that two needles were present, were successful in reconstructing the data during the time of overlap.

Figure 3. Locating multiple needles on a multi-needle gauge.


C. Preliminary Discussion and Results

The data extracted from video showed a clear trend which agreed reasonably with the traditionally measured quantities. A plot of engine torque data during an engine shutdown is included in Figure 4.

Figure 4. VR from existing cockpit video demonstrated scatter of 5 degrees out of 360 degrees.

The video and extracted data highlighted the need for proper camera hardware selection and installation. Rolling-shutter wobble resulted from aircraft vibration, due to the sensor technology utilized in the hand-held camera, and warped the shape of the gauges between frames. This problem was mitigated in subsequent flight tests by utilizing sensors with global shutters and/or stiffening the camera mounts. Camera resolution and positioning were observed to play a significant role in the reliability and accuracy of the data. A typical gauge in a global view of the cockpit was only about 20 pixels in diameter, an insufficient number of pixels for data to be extracted with the accuracy required for full flight simulator validation. The combined result of these challenges was significant scatter in the extracted data, but there was also sufficient promise to continue development.

III. Simulation Platform Video Recognition Development

The results of the preceding feasibility study were encouraging, and development of the technique continued on a desktop version of the NRC Model Validation Platform (MVP). The MVP is a PC-based simulation research environment which provided a platform to investigate the implementation of flight test data collection using video recognition, with a camera and eye point selected specifically for that purpose, and to develop soft real-time processing in a controlled laboratory setting to address the in-flight data checking requirement.

A. Experimental Setup

An instrument panel was simulated on a Philips Brilliance 225B monitor at maximum brightness and 60 Hz. A Panasonic HDC-TM700 HD video camera was used to capture the simulated instrument panel during a maneuver sequence for one experiment, while the simulator output was logged as truth data to be compared with the video recognition (VR) results. This camera captured a resolution of 1920 x 1080 pixels at a frame rate of 60 FPS. The video was replayed on the same screen alongside a real-time version of the video recognition software, and the same video segment of simulated flight was used repeatedly for real-time and post-processing development and comparison. The hardware used for video recognition included a Prosilica GC2450 machine-vision camera sub-sampled to 800x600 resolution. The camera was fixed on a tripod in a stable configuration and fed live to a laptop computer (Dell Precision M4300) using LabVIEW Video Acquisition software and processed using LabVIEW 8.6.

A data acquisition script was developed in LabVIEW 8.6 to interface with the machine-vision camera; video data was polled frame-by-frame as images and passed on to the processing pipeline. A software timer was activated upon the start of this script and the elapsed time was recorded with every VR data point, so that every frame was tagged with a time value for time alignment. The following techniques were developed in addition to the methods outlined earlier.

B. Methodology

1. Attitude Indicator

Attitude information was extracted using a reference coordinate system. Brightness and contrast adjustments were made to the image as the first step, and edge detection of the artificial horizon was used to indicate the pitch attitude. The vertical distance from a preset centre of the attitude indicator to the midpoint of the detected edge was measured; this distance is linearly proportional to pitch angle and can be calibrated to give pitch information. The roll angle, as indicated by the horizon, was computed by using edge detection to locate five points along the artificial horizon and computing the slope. Using the position and orientation of the gauge panel template, the relative angle of the artificial horizon was computed so that any skew between camera and object was corrected. Figure 5 illustrates an intermediate result of the technique for pitch and roll estimation.

2. Slip Ball

Though slip ball measurement is not a quantitative full flight simulator requirement, the use of video recognition opened up the possibility of measuring additional quantities in the cockpit view. Measurement of the slip ball was performed by dilating the image, then applying edge detection to extract the ball contour. The contour was subsequently thinned to a single pixel width by applying erosion. Dilation ensured that the contour would be fully formed, and that small gaps due to local lighting inconsistencies and camera noise were eliminated, before extraction via edge detection. The objective of this processing prior to edge detection was to eliminate the possibility of locating edges that were a video artifact or a function of environmental lighting.

3. Post-Processing

A calibration and correction script was used to eliminate obvious outliers in the data. Three types of filters were applied: magnitude, difference, and frequency. In magnitude filtering, boundary data and data near boundaries were discarded. These outliers occurred when data recognition failed completely, such as when the pitch was at extremum values, resulting in failure of the boundary detection algorithm. In difference filtering, the magnitude of change of the current data point from the last valid point was checked, since large magnitude changes were typically indicative of a temporary misread of the gauge. Typical outliers were caused by local illumination changes, noise effects, and spurious obscurations. However, long-term obscurations, due to a pilot's helmet or arm covering the gauges, could not be rectified in this manner and resulted in a loss of data. One potential improvement is the detection of such obscurations, resulting in a warning generated to the flight test engineer. Finally, a low-pass filter was utilized to eliminate high-frequency noise that resulted from small fluctuations in the template position. Temporary losses in the data stream were recovered by fitting the measurement time-sequence to a cubic spline function, under the assumption that the signal varies smoothly.

Figure 5. Pitch and roll estimation from the artificial horizon.
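As an illustration of the attitude-indicator processing described in Section III.B.1, the following is a minimal sketch assuming OpenCV/NumPy; the region of interest, indicator centre, and pitch scale factor are hypothetical calibration values, and the flight software itself was implemented in LabVIEW.

```python
# Sketch of pitch/roll estimation from the artificial horizon. ADI_ROI, ADI_CENTRE
# and PITCH_DEG_PER_PIXEL are assumed calibration values, not taken from the paper.
import cv2
import numpy as np

ADI_ROI = (200, 150, 160, 160)   # x, y, width, height of the attitude indicator (assumed)
ADI_CENTRE = (80, 80)            # indicator centre within the ROI (assumed)
PITCH_DEG_PER_PIXEL = 0.35       # linear pitch calibration (assumed)

def pitch_roll(frame_gray, panel_skew_deg=0.0):
    """Estimate pitch and roll (degrees) from the artificial horizon edge."""
    x, y, w, h = ADI_ROI
    roi = frame_gray[y:y + h, x:x + w]

    # Brightness/contrast adjustment followed by edge detection of the horizon line.
    roi = cv2.convertScaleAbs(roi, alpha=1.5, beta=20)
    edges = cv2.Canny(roi, 50, 150)

    # Sample the horizon at five columns and take the uppermost edge in each.
    cols = np.linspace(w * 0.2, w * 0.8, 5).astype(int)
    pts = []
    for c in cols:
        rows = np.flatnonzero(edges[:, c])
        if rows.size:
            pts.append((c, rows[0]))
    if len(pts) < 2:
        return None  # horizon not found (e.g. extreme pitch attitude)

    xs, ys = np.array(pts, dtype=float).T
    slope, intercept = np.polyfit(xs, ys, 1)

    # Roll follows from the slope of the horizon, corrected for any skew between
    # the camera and the panel as measured from the template orientation.
    roll = np.degrees(np.arctan(slope)) - panel_skew_deg

    # Pitch is proportional to the vertical offset of the horizon midpoint from
    # the indicator centre.
    mid_y = slope * ADI_CENTRE[0] + intercept
    pitch = (mid_y - ADI_CENTRE[1]) * PITCH_DEG_PER_PIXEL
    return pitch, roll
```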
C. Preliminary Results

Pitch and roll attitude data were both extracted and exhibited the correct trend; however, challenges in the calibration process were observed. Figure 6 depicts a plot of the pitch attitude for subsequent discussion. Note that the pitch attitude was correlated to the pixel distance from the attitude indicator centre. Extraction of pitch from the simulated artificial horizon proved more challenging to detect and calibrate, as it relied on edge detection of the attitude indicator, which did not remain at a fixed angle and scaled non-linearly. The accuracy achieved by video recognition for pitch attitude was not suitable for full flight simulator data accuracy at large deflections, with errors on the order of degrees.


Further effort is needed to improve the accuracy of the methodology for simulator modeling flight data collection. On an EFIS ADI where pitch attitude is indicated by a linear tape, the processing would be more straightforward and would subsequently achieve greater accuracy. Slip ball detection was an example of a more linear indicator measurement and proved to be highly accurate (Figure 7). The projection between slip-ball data and displayed ball position was linear, and distinctive edges existed which could be tracked to within single-pixel accuracy. Partial obscuration of the ball at one extremum was observed, but was corrected readily in post-processing. The algorithms were successfully implemented in soft real-time, operating on the live video stream at an average frame rate of 4 FPS, where the frame rate was a function of the computational complexity of the processing pipeline. Though this frame rate is not sufficient for full flight simulator data requirements, it is acceptable for in-flight data checking, and it scales with advances in computing technology.
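The post-processing described in Section III.B.3 can be sketched as follows, assuming NumPy/SciPy; the boundary margin, step threshold, and use of SciPy's CubicSpline are illustrative choices rather than the original LabVIEW implementation.

```python
# Sketch of the outlier rejection and gap filling used in post-processing:
# a magnitude filter, a difference filter, and a cubic-spline reconstruction.
# (The low-pass filter that removed residual template jitter is not shown.)
import numpy as np
from scipy.interpolate import CubicSpline

def clean_series(t, y, y_min, y_max, max_step, margin=0.02):
    """Filter a VR measurement time history and rebuild gaps with a cubic spline."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)

    # Magnitude filter: discard readings at or near the gauge boundaries, which
    # typically indicate that recognition failed (e.g. pitch at an extremum).
    band = margin * (y_max - y_min)
    valid = (y > y_min + band) & (y < y_max - band)

    # Difference filter: discard points that jump too far from the last valid
    # sample, which usually indicates a momentary misread or obscuration.
    last = None
    for i in range(y.size):
        if not valid[i]:
            continue
        if last is not None and abs(y[i] - y[last]) > max_step:
            valid[i] = False
        else:
            last = i

    # Rebuild short gaps by fitting a cubic spline through the surviving points,
    # under the assumption that the underlying signal varies smoothly.
    spline = CubicSpline(t[valid], y[valid])
    return spline(t)
```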

Figure 6. Post-processed pitch angle measurement. Ground-truth (blue) values are compared to the real-time (red) and the post-processed (green) outputs.

Figure 7. Post-processed slip ball measurement. Ground-truth (blue) values are compared to the real-time (red) and the post-processed (green) outputs.



IV. Flight Test Validation

Flight test validation is crucial to the successful development and technology readiness level (TRL) escalation of any new flight test data collection technology, and video recognition is no exception. A Bell 206 JetRanger research platform, fully instrumented for simulation and model development data collection projects, was used to validate the accuracy of data extracted by video recognition. Video recognition was applied to an aircraft torque gauge and to the lateral and longitudinal cyclic stick position during in-flight maneuvers, and the results were compared to traditionally instrumented data. A description of the supporting Bell 206 data collected is followed by an explanation of the video recognition and calibration techniques used. The same techniques can be extended to all cockpit gauges and control positions.

A. Experimental Setup

A Logitech C600 webcam (2 MP) was mounted in the cockpit of the Bell 206. This webcam was selected for its size and weight, which eased installation. It was installed with a view of the instrument panel and the pilot cyclic, with careful attention to minimizing vibration during flight (Figure 8). Preliminary testing of video recognition demonstrated that the frame rate varied with lighting conditions, so a means of accurately positioning each video frame in time was needed. To solve this problem, the video frames were polled in Linux (Ubuntu 11.04) with a software package that recorded the CPU time-stamp of each incoming frame to a local text file. The software package was implemented in C and executed on a Dell Latitude D820 laptop computer. Simulations indicated a synchronization error on the order of milliseconds. The time-series log generated by the software package was aligned with the GPS time-stamp during post-processing to compute fixed-time-step data time histories for comparison with the traditionally measured quantities.

Figure 8. Instrument panel of the 206 cockpit as imaged from the VR camera.

B. Torque Gauge Video Recognition Results

Flight testing was conducted with the intent to measure the engine torque of the Bell 206. Preliminary comparison demonstrated that the VR data had good time alignment, although the time step between video frames was occasionally observed to exceed 100 ms. This was likely due to the software timer implementation, which was called immediately preceding the video frame acquisition. Additional delays were also possible, introduced by other threads of activity on the laptop, as the operating system was not real-time and could experience arbitrary task-switching delays. Accumulation of error due to time offset was mitigated by fusing the VR results with the recorded time vector and interpolating to a fixed sample rate. This setup, using a webcam and laptop, was inexpensive and fit for many flight test instrumentation purposes, and the data was considered suitable for full flight simulator test data collection. The observed latency posed a concern for control position measurement, where a high-frequency modeling step input occurring during a 100 ms gap would result in sub-optimal model parameters. A potential short-term solution is to introduce another camera into the cockpit and interleave the redundant video streams. The plots in Figure 9 show that the occasional lengthened time step occurred at a fairly regular interval. Investigations into the cause of the latency (software or hardware) in the time-stamp software package and potential solutions for its removal are relegated to future work. Gaps can be observed in the VR data in Figure 9, and are due to the pilot obscuring the target gauge from view. Spurious obscurations are inevitable due to the limited mobility and positioning room in the aircraft cockpit. Partial mitigation can be achieved through camera placement and pilot awareness of the camera's position and coverage. After post-processing and scaling to engineering units, the VR torque data was observed to agree with the traditionally instrumented quantity to within 0.9% RMSD (Figure 10).
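A minimal sketch of the time-alignment and fixed-rate resampling step is given below, assuming NumPy; the GPS offset handling and the 32 Hz example rate are illustrative, not the project's actual values.

```python
# Sketch of the time-alignment step: fuse per-frame CPU time-stamps with a GPS
# reference and resample the VR channel to a fixed time step. The flight software
# logged the time-stamps in C; the alignment was performed in post-processing.
import numpy as np

def resample_to_fixed_rate(frame_times, vr_values, gps_offset, rate_hz=32.0):
    """Shift frame times onto the GPS time base and interpolate to a fixed rate.

    frame_times : per-frame CPU time-stamps (s) from the logging package
    vr_values   : VR measurement for each frame (e.g. torque, %)
    gps_offset  : constant offset (s) between the CPU clock and GPS time
    """
    t = np.asarray(frame_times, dtype=float) + gps_offset
    y = np.asarray(vr_values, dtype=float)

    # Fixed-step time base spanning the recording; the sample rate used for
    # simulator data packages is project-specific, 32 Hz is only an example.
    t_fixed = np.arange(t[0], t[-1], 1.0 / rate_hz)

    # Linear interpolation places each VR sample correctly in time even when the
    # raw frame interval occasionally stretches past 100 ms.
    return t_fixed, np.interp(t_fixed, t, y)
```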



Figure 9. Time-step intervals between individual video frames. The higher peaks in the video time log are symptomatic of the variable latency inherent in an event-driven multi-tasking operating system.

Figure 10. Engine torque data comparison, with ground-truth (blue) values overlaid onto the torque values measured by the VR pipeline.

C. Cyclic Position Video Recognition

Although previous work with video recognition focused on cockpit gauge measurement, flight control position extraction was also investigated using similar techniques. The cyclic was targeted as the test inceptor, and a pattern matching technique was implemented to track a salient feature on the cyclic grip. Note that a fiduciary marker (sticker) can readily be applied if the particular inceptor is devoid of recognizable features. Decomposing the measurement into longitudinal and lateral cyclic positions proved to be challenging. The initial camera positioning produced ambiguity, as the position of the cyclic grip could not be uniquely decomposed into longitudinal and lateral cyclic positions. Careful repositioning of the camera viewpoint was required, constrained by the availability of suitable camera mounting locations. The resultant time history plots are presented in Figure 11.
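A minimal sketch of the per-frame marker tracking is shown below, assuming OpenCV/NumPy and a manually selected image patch of the cyclic grip as the template; the match-score threshold is an illustrative value, and the resulting pixel trajectory is what the scale-factor calibration described later operates on.

```python
# Sketch of tracking a salient feature on the cyclic grip with the same
# template-matching approach used for gauge localization. Marker template and
# video source are assumptions; output is a (frame index, u, v) pixel trajectory.
import cv2
import numpy as np

def track_cyclic(video_path, marker_path):
    """Return (frame index, u, v) pixel positions of the cyclic-grip marker."""
    marker = cv2.imread(marker_path, cv2.IMREAD_GRAYSCALE)
    cap = cv2.VideoCapture(video_path)
    track = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        scores = cv2.matchTemplate(gray, marker, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(scores)
        if score > 0.6:  # reject frames where the grip is obscured (threshold assumed)
            u = loc[0] + marker.shape[1] / 2.0
            v = loc[1] + marker.shape[0] / 2.0
            track.append((idx, u, v))
        idx += 1
    cap.release()
    return np.array(track)
```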


Figure 11. Comparison of longitudinal and lateral cyclic data before calibration. The strong bias in the longitudinal axis is due to parallax from the off-axis camera angle, which can be removed by calibration.

1. Decoupling Longitudinal and Lateral Cyclic Measurement

As mentioned earlier, decoupling the lateral and longitudinal cyclic position to reduce ambiguity required the selection of an appropriate viewpoint. Although a bird's-eye view is ideal for cyclic position tracking, this location is not always practical or feasible for camera mounting. In the present case, the most appropriate position was established by a series of ground tests in the hangar, where the Logitech C600 webcam was relocated to a viewpoint from which the longitudinal and lateral cyclic positions could be uniquely determined (Figure 12). Since data was acquired on the Bell 206 helicopter in the hangar, in-flight effects such as vibration and changing light gradients (shade and glare) were minimal, allowing for specific examination of geometric and viewpoint effects on using VR for objects moving in depth, such as the cyclic control stick. A comparison between VR-measured and traditionally instrumented data was performed to observe the impact of the improvement. After adjusting for the coordinate transformation, time offset, and calibration, the results in Figure 13 were obtained.

Figure 12. Image of cyclic eye point selection from a sequence collected during ground tests in the hangar.

2. Camera Scale-factor Determination

The final stage was calibration, eliciting the camera scale-factor which maps the camera pixel coordinates to the lateral and longitudinal cyclic position in engineering units. Different methods were examined, of which the two-step polynomial regression proved to be most effective, as it solved the coupling problem that arose from the geometry of the perspective. Cyclic data was collected, including large-magnitude high-frequency inputs as well as small-magnitude low-frequency inputs. The inputs were designed to explore the effects of camera view angle and the geometry of cyclic displacements on VR-extracted position data.


A second set of data consisted of large round cyclic inputs, which included high-amplitude inputs in both the lateral and longitudinal directions, as well as elliptical motions to examine the coupling between the longitudinal and lateral cyclic positions. This data was used mainly for the verification process, where the calibration methodology from the first dataset was applied and tested. Quantitatively, the main criterion used to analyze the calibration methods was the root mean squared deviation (RMSD).

While the normalized RMSD (NRMSD) gave a stronger indication of the effectiveness of each method, the RMSD value is a truer physical representation of the error magnitude. The correlation coefficient was evaluated in multi-stage calibration methods to examine the residual correlation/coupling in the control measurement error. The nonlinearity of the geometry was captured by fitting to a high-order polynomial, and accurate results were obtained for the scale-factor calibration from the fitting process. In addition, a high-order polynomial was utilized to eliminate the coupling between the longitudinal and lateral cyclic position measurements. Analysis of the residual error, the difference between the traditionally measured and VR quantities, showed that the longitudinal and lateral cyclic data were successfully decoupled. Validation of the technique was performed by conducting large-amplitude inputs in the individual longitudinal and lateral cyclic axes, followed by coupled motions. In all cases, the RMS deviation was observed to be approximately 0.08 inches for longitudinal and 0.04 inches for lateral cyclic inputs, within the Level D criterion of 0.1 inches for full flight simulator development. Figure 13 demonstrates one of the control sweeps, this one focused on small lateral cyclic deflections, as part of the process to validate that the longitudinal and lateral cyclic positions were successfully decoupled.

Figure 13. Longitudinal and lateral cyclic deflections investigating coupling in cyclic position data.
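The scale-factor calibration and the RMSD/NRMSD figures of merit can be sketched as follows, assuming NumPy; the bivariate polynomial order and the single-stage least-squares fit are illustrative simplifications of the two-step regression used in this work.

```python
# Illustrative polynomial scale-factor calibration from marker pixel coordinates
# (u, v) to cyclic position in inches, with cross terms to absorb the
# longitudinal/lateral coupling, plus RMSD and NRMSD figures of merit.
import numpy as np

def design_matrix(u, v, order=3):
    """Bivariate polynomial terms u^i * v^j with i + j <= order."""
    cols = [np.ones_like(u)]
    for i in range(order + 1):
        for j in range(order + 1 - i):
            if i + j > 0:
                cols.append((u ** i) * (v ** j))
    return np.column_stack(cols)

def fit_calibration(u, v, truth, order=3):
    """Least-squares fit mapping pixel coordinates to one cyclic axis (inches)."""
    A = design_matrix(np.asarray(u, float), np.asarray(v, float), order)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(truth, float), rcond=None)
    return coeffs

def apply_calibration(coeffs, u, v, order=3):
    return design_matrix(np.asarray(u, float), np.asarray(v, float), order) @ coeffs

def rmsd(estimate, truth):
    err = np.asarray(estimate, float) - np.asarray(truth, float)
    return float(np.sqrt(np.mean(err ** 2)))

def nrmsd(estimate, truth):
    truth = np.asarray(truth, float)
    return rmsd(estimate, truth) / float(truth.max() - truth.min())

# Example use: fit each axis on the calibration sweep, then verify against the
# second (round/elliptical input) dataset.
# lon_coeffs = fit_calibration(u_cal, v_cal, lon_truth_cal)
# lon_vr = apply_calibration(lon_coeffs, u_ver, v_ver)
# print(rmsd(lon_vr, lon_truth_ver), nrmsd(lon_vr, lon_truth_ver))
```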

V. Analysis and Discussion

Preliminary analysis of existing cockpit video highlighted the need for a dedicated camera installation in order to extract video recognition time-history data from cockpit video and meet full flight simulator data requirements. Camera selection and placement played a significant role in the reliability and accuracy of the data. Further complications arose from the inevitable vibration of the helicopter, which highlighted the importance of camera mounting. In addition, partial obscuration of the camera view, arising from the confluence of mounting constraints, video coverage, and pilot awareness and behaviour, can be mitigated with a system of several carefully positioned cameras. The video recognition software was successfully implemented in soft real-time, operating at an average frame rate of 4 FPS. The achieved frame rate met the requirement for in-flight data validation and review. The frame rate was directly influenced by the complexity of the processing pipeline operating on the video stream and will improve with advances in computing power and continued algorithm development.


Use of a single camera was observed to introduce parallax, particularly for gauges at the extremes of the field of view. Polynomial-based scale-factor calibration provided the desired accuracy by mitigating the ellipsoidal projection of the gauge, although multiple cameras would be better suited to the task. It was observed that mounting, coverage, vibration, and accuracy constraints place great importance on the resolution, field of view, and mechanical installation of the camera hardware. Installing a camera in a cockpit for the purposes of video recognition requires informed hardware selection and mounting design.

VI. Conclusion and Future Work

This paper details the development of flight test video recognition software, calibration algorithms, and hardware, and the accuracy of data collected by video means using full flight simulator data requirements as a benchmark. Methods were developed to extract high-fidelity time-history data for aircraft cockpit gauges and flight control positions, with RMS deviations of approximately 1% for torque measurement and less than 0.1 inches for control position measurement. Flight test validation was conducted on the NRC Bell 206 platform, demonstrating that video recognition can be used to extract gauge and control position data at the accuracy required for full flight simulator model development and validation. Video recognition was shown to be a convenient means of collecting cockpit flight test data for model development and certification of full flight simulator devices. Future work will include the development of a time-aligned set of machine-vision cameras spanning the cockpit instruments and controls, to further improve gauge resolution, provide reliable and accurate data for the entire set of cockpit gauges, and allow camera positioning for optimal data accuracy in support of simulator model development. The next phase of development will also include a modular software toolkit to allow quick switching between recorded-video and real-time processing.

VII. Acknowledgments

The authors thank the pilots Rob Erdos and Stephan Carignan (1964-2014) for their expert flying, Joao Araujo and Shahrukh Alavi for their instrumentation and flight test expertise, and students Samuel Zhao, Guong Ho and Eric Jiang for their efforts.

References

1 Federal Aviation Administration, "14 CFR Part 60 Final Rule, NSP Consolidated Version," Federal Register, Vol. 71, No. 208, pp. 63426-63432, URL: http://www.faa.gov/about/initiatives/nsp/media/consolidated_version.pdf [cited 11 December 2015].
2 Hui, K., Auriti, L., and Ricciardi, J., "Mathematical Model Development for a Cessna Citation CJ1 Level-C," LTR-FRL-2007-0011, January 2008.
3 Ricciardi, J., and Wei Yu, G., "Real Time Cockpit Video Recognition," LTR-FRL-2013-0008, March 2013.
