A Read-Out Electronic System for Imaging FTS
Tom Neubert, Heinz Rongen, Karl Ziemons
Zentralinstitut für Elektronik, Forschungszentrum Jülich, 52425 Jülich, Germany
[email protected]
Felix Friedl-Vallon, Thomas Gulde, Guido Maucher, Anne Kleinert
Institut für Meteorologie und Klimaforschung, Forschungszentrum Karlsruhe, 76021 Karlsruhe, Germany
Abstract: A high-performance read-out system for imaging FTS instruments, based on an equal-time-sampling design with data post-processing, is presented. It achieves a data storage throughput of up to 160 MByte/s. ©2009 Optical Society of America
OCIS codes: (120.6200) Spectrometers and spectroscopic instrumentation; (300.6300) Spectroscopy, Fourier transforms
1. Introduction
For imaging FTS instruments the electronics must handle and process raw data rates of well over 100 MByte/s if large focal plane arrays (FPAs, e.g. 256x256) are used in combination with high frame rates of several kHz. This requires a high-performance storage system and fast, efficient data post-processing. For equal-time-sampling designs with data post-processing (Brault algorithm [1], FFT, radiometric calibration, coaddition) these requirements can easily become a major challenge. Existing instruments [2,3] manage these amounts of data in real time, but at the cost of either continuous data acquisition [4] or data throughput [5]. This paper presents a data-acquisition and interferometer electronic system, designed for the airborne Global limb Radiance Imager of the Atmosphere (GLORIA) instrument [6], which works in combination with a standard workstation PC to store the data and perform the post-processing for equal-time-sampling designs. Thanks to its distributed hardware architecture, the read-out electronics can be adapted directly to the detector and interferometer, which allows the design of a compact imaging FTS.

2. Description of architecture
The read-out electronic system (see Figure 1) is based on a distributed hardware architecture and consists of two parts: the data-acquisition and interferometer electronics (IDAC) and a high-performance storage and processing system called CHEFFE.
Figure 1: Overview of the read-out electronic system
Figure 2: IDAC board for imaging FTS in the laboratory
The IDAC represents the upper-level interface to the detector and the interferometer. The acquired detector data is marked with a unique timestamp, converted into data cubes and transferred to the CHEFFE via a standardized interface. The detector system can be configured (e.g. region of interest, frame rate) and triggered via an external sync signal. In addition, an integrated servo controller is implemented to control the interferometer drive. With two analog inputs, a quadrature laser signal can be digitized for laser fringe timing estimation and optical distance measurement. The CHEFFE stores all acquired data to the storage system, controls the instrument operation and performs the post-processing.
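To make the framing of the detector stream concrete, the following C sketch shows one possible layout of a per-frame header carrying the FPGA timestamp and detector configuration. The field names, widths and the assumption of 16-bit pixels are illustrative only and do not describe the actual IDAC data format.

```c
#include <stdint.h>

/* Illustrative sketch only: a possible per-frame header for the detector
 * data stream, assuming 16-bit pixels and the 80 MHz FPGA counter as the
 * absolute time base (the field layout is hypothetical). */
typedef struct {
    uint64_t timestamp_ticks;  /* FPGA counter value at frame read-out (80 MHz ticks) */
    uint32_t frame_index;      /* running frame number within the current scan */
    uint16_t roi_width;        /* configured region of interest, e.g. 128 */
    uint16_t roi_height;       /* e.g. 128 */
    uint16_t frame_rate_hz;    /* configured detector frame rate, e.g. 2665 */
    uint16_t flags;            /* e.g. external-sync marker, saturation bits */
} frame_header_t;

/* One timestamped frame; a data cube is a sequence of such frames handed
 * to the CHEFFE as a unit over the Camera Link or Ethernet interface. */
typedef struct {
    frame_header_t header;
    uint16_t       pixels[128 * 128]; /* ROI pixel data, 16-bit samples assumed */
} frame_t;
```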
Figure 2 shows the current IDAC board, which is used in a first laboratory setup for the spectral characterisation of the detector system and for algorithm development. A Xilinx Virtex-II XC2V2000 FPGA in combination with an embedded controller based on an x86 processor forms the core of the architecture. The FPGA runs at 80 MHz and defines the absolute time base for the read-out system. Figure 3 illustrates the IDAC block diagram in more detail. Because of the high data bandwidth generated by the detector, the data is transferred to the processing FPGA using two INOVA ING165B serial data transceivers, each capable of transferring 1.3 GBit/s.
Figure 3: Block diagram of the IDAC
The interferometer electronic unit (IFME-Unit) consists of an ELMO WHI2.5/60 digital servo drive for DC brush and brushless motors, linear motors and voice coils, which controls the interferometer drive and maintains a constant scan speed. Signals from an external incremental encoder, Hall sensors or a reference laser can be used as feedback inputs for position and velocity control. For this, a two-channel analog input digitizes the quadrature laser signal with an 80 MSPS analog-to-digital converter. Due to this oversampling approach, high-precision timing information can be derived from the zero crossings of the laser signals; linear interpolation and digital filtering make this technique robust against signal noise and mechanical vibrations (a minimal sketch is given below). This timing information is used for the Brault interpolation of the scientific data. In addition, small piezoelectric and stepper drives are implemented to adjust mirrors to correct for shear and to adapt the focus of the IR lens to different ambient temperatures (temperature range -70 … +30 °C).

A remote control unit with an embedded processor is implemented and directly connected to the FPGA via a data bus. An embedded operating system (Embedded Linux) runs on the x86 processor and provides a standardized GBit Ethernet interface to receive telecommands and to transfer data (e.g. housekeeping, laser fringe timing, service updates). For the high-speed detector data, a Camera Link interface is implemented which transmits the data either over copper or over fibre optics. With these standardized interfaces (Ethernet, Camera Link), the components of the read-out electronic system can be placed at different locations.

The CHEFFE is based on a dual-processor workstation PC. With additional I/O hardware, such as a National Instruments NI PCIe-1429 frame grabber card, the high-speed data arriving via the Camera Link interface can be received and stored directly to a SATA disk array using an ARECA ARC-1220 RAID controller. Storage capacities of several TByte for SATA disks are state of the art. Figure 4 illustrates the block diagram of the CHEFFE architecture. For the scientific data processing tasks, e.g. Brault algorithm, FFT, radiometric calibration and coaddition of pixels or interferograms, the dual-processor system is used as post-processing hardware. To increase the processing performance, multicore architectures (e.g. CELL [7], GPUs [8]) are added as accelerator boards for parallel processing. Imaging FTS raw data cubes are well suited for parallel processing, since thousands of interferograms have to be processed. In the GLORIA instrument a Cell Accelerator Board (CAB) will be used for onboard and post-flight processing. The implementation of the parallel algorithms is under development (see the sketch after Figure 4).
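As a concrete illustration of the zero-crossing timing mentioned above, the following minimal C sketch refines each detected crossing by linear interpolation between the two samples that bracket the sign change. It assumes the 80 MSPS laser samples have already been digitally filtered; the function signature and units are illustrative assumptions, not the actual FPGA implementation.

```c
#include <stddef.h>
#include <stdint.h>

/* Detect rising-edge zero crossings of the (filtered) laser fringe signal
 * and refine each crossing time by linear interpolation between the two
 * bracketing samples. Times are returned in units of sample periods
 * (1/80 MHz); the caller converts them to absolute FPGA ticks.
 * Illustrative sketch only, not the flight firmware. */
size_t find_zero_crossings(const int16_t *samples, size_t n,
                           double *crossing_times, size_t max_crossings)
{
    size_t found = 0;
    for (size_t i = 1; i < n && found < max_crossings; ++i) {
        int16_t a = samples[i - 1];
        int16_t b = samples[i];
        if (a < 0 && b >= 0) {
            /* sub-sample crossing position by linear interpolation */
            double frac = (double)(-a) / (double)(b - a);
            crossing_times[found++] = (double)(i - 1) + frac;
        }
    }
    return found;
}
```

Applied to both channels of the quadrature laser signal, the same scheme also yields the scan direction, and the resulting crossing times feed the Brault interpolation of the scientific data.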
Figure 4: Block diagram of the CHEFFE, illustrating the offline (green) and online (yellow) processing paths with a Cell Accelerator Board (CAB)
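To illustrate why imaging FTS raw data cubes lend themselves to parallel processing, the sketch below resamples every pixel's interferogram independently onto an equal-OPD grid derived from the laser fringe timing. The linear interpolation is a simplified stand-in for the Brault resampling [1], the OpenMP pragma stands in for the CAB/GPU implementation that is still under development, and the array layout and names are assumptions made for this example.

```c
#include <stddef.h>

/* Per-pixel step that makes imaging FTS data cubes easy to parallelize:
 * each pixel's interferogram, acquired on an equal-time grid, is resampled
 * onto an equal-OPD grid. opd_time is assumed strictly increasing and
 * opd_grid is assumed to lie inside its range. Illustrative sketch only. */
void resample_cube(const double *cube,     /* [n_pix * n_time] equal-time samples */
                   const double *opd_time, /* [n_time] OPD at each time sample    */
                   const double *opd_grid, /* [n_opd] target equal-OPD positions  */
                   double *out,            /* [n_pix * n_opd] resampled output    */
                   size_t n_pix, size_t n_time, size_t n_opd)
{
    #pragma omp parallel for            /* pixels are mutually independent */
    for (long p = 0; p < (long)n_pix; ++p) {
        const double *ifg = &cube[(size_t)p * n_time];
        double *res = &out[(size_t)p * n_opd];
        size_t k = 0;                   /* bracket index on the time axis  */
        for (size_t j = 0; j < n_opd; ++j) {
            double x = opd_grid[j];
            while (k + 2 < n_time && opd_time[k + 1] < x)
                ++k;
            double t = (x - opd_time[k]) / (opd_time[k + 1] - opd_time[k]);
            res[j] = ifg[k] + t * (ifg[k + 1] - ifg[k]);
        }
    }
}
```

Because each pixel is processed independently, the same outer loop maps naturally onto the SPEs of the Cell processor or onto GPU thread blocks; the subsequent per-pixel FFT, radiometric calibration and coaddition parallelize in the same way.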
3. Performance
The spectral data acquisition of the read-out electronics is adapted to a 256x256 FPA. During operation of GLORIA an optimized subset of 128x128 pixels will be used at a frame rate of 2665 Hz, resulting in a total data throughput of 88 MByte/s (a rough sizing cross-check is sketched after Section 4). Measurements have shown that the read-out electronics achieves data throughputs of up to 160 MByte/s and frame rates of up to 10 kHz. The interferometer electronics can operate with analog laser signals of up to 25 kHz to obtain fringe timing information; the bottleneck is the integrated position estimation. Using an external encoder as position reference, the laser frequency can be increased to 100 kHz and more. The implemented position estimation based on the laser signal is robust against velocity variations of up to +/-30 %. The total storage throughput depends on the RAID level, which determines the redundancy of the data storage, and on the number of hard disks in the disk array. With a RAID level 5 configuration and 8 hard disks, a continuous data storage rate of up to 240 MByte/s could be achieved. Effective values for the processing performance have not been measured yet, since the implementation of the algorithms is still under development.

4. Next steps
The read-out electronic system has to be upgraded to operate under harsh environmental conditions (e.g. a wide humidity, temperature and air pressure range) to enable operation on the new German research aircraft HALO and in other scientific applications such as balloon missions. Furthermore, the adaptation and development of the required scientific data processing on the Cell Accelerator Board will be continued.
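As referenced in Section 3, a back-of-the-envelope cross-check of the quoted data rates follows; it assumes 16-bit detector pixels and neglects per-frame header overhead, so the numbers are approximations made for this sketch rather than instrument specifications.

```c
#include <stdio.h>

/* Rough sizing check of the figures quoted in Section 3, assuming 16-bit
 * pixels and no per-frame header overhead (assumptions for this sketch). */
int main(void)
{
    const double pixels          = 128.0 * 128.0; /* GLORIA region of interest */
    const double frame_rate_hz   = 2665.0;        /* detector frame rate       */
    const double bytes_per_pixel = 2.0;           /* 16-bit samples assumed    */

    /* raw detector data rate in MByte/s (decimal megabytes) */
    double rate_mb = pixels * frame_rate_hz * bytes_per_pixel / 1e6;

    printf("raw detector rate:              %6.1f MByte/s\n", rate_mb);          /* ~87.3 */
    printf("margin to 160 MByte/s read-out: %6.1f MByte/s\n", 160.0 - rate_mb);
    printf("margin to 240 MByte/s RAID-5:   %6.1f MByte/s\n", 240.0 - rate_mb);
    return 0;
}
```

The resulting ~87 MByte/s is consistent with the 88 MByte/s quoted above once a small amount of framing overhead is added, and it leaves a comfortable margin to both the 160 MByte/s read-out limit and the 240 MByte/s RAID-5 storage rate.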
5. References
[1] J. W. Brault, "New approach to high-precision Fourier transform spectrometer design", Applied Optics, Vol. 35, Issue 16, pp. 2891-2896, June 1996.
[2] G. Toon, J.-F. Blavier, M. McAuley, and A. Kiely, "Advanced On-Board Science Data Processing System for a Mars-orbiting FTIR Spectrometer", R&TD Task 01STCR R.05.023.048, NASA Jet Propulsion Laboratory, Pasadena, CA, 2005.
[3] V. Farley et al., "Development and testing of a hyper-spectral imaging instrument for field spectroscopy", SPIE 5546-02, USA, Aug. 2004.
[4] P. J. Pingree, J.-F. L. Blavier, G. C. Toon, and D. L. Bekker, "An FPGA/SoC Approach to On-Board Data Processing - Enabling New Mars Science with Smart Payloads", in IEEE Aerospace Conference, March 2007.
[5] P. Dubois, M. Chamberland, J. Genest, and S. Roy, "FPGA SoC Architecture for Imaging FTS Real-Time Data Processing", in Fourier Transform Spectroscopy / Hyperspectral Imaging and Sounding of the Environment, OSA Technical Digest Series (CD) (Optical Society of America, 2007), paper JWA9.
[6] F. Friedl-Vallon, M. Riese, G. Maucher, A. Lengel, F. Hase, P. Preusse, R. Spang, "Instrument concept and preliminary performance analysis of GLORIA", Advances in Space Research, Vol. 37, Issue 12, pp. 2287-2291, 2006.
[7] S. Williams, J. Shalf, L. Oliker, S. Kamil, P. Husbands, and K. Yelick, "The potential of the Cell processor for scientific computing", Proceedings of the 3rd Conference on Computing Frontiers, Ischia, Italy, May 3-5, 2006.
[8] M. Rumpf and R. Strzodka, "Graphics Processor Units: New Prospects for Parallel Computing", in A. M. Bruaset and A. Tveito (eds.), Numerical Solution of Partial Differential Equations on Parallel Computers, Vol. 51 of Lecture Notes in Computational Science and Engineering, Springer-Verlag, 2005.