Update on Data Processing with the GBT

Astronomical Data Analysis Software and Systems XVI, ASP Conference Series, Vol. 376, 2007. R. A. Shaw, F. Hill, and D. J. Bell, eds.

Robert W. Garwood, James A. Braatz
National Radio Astronomy Observatory, Charlottesville, VA, USA

Ronald J. Maddalena, Paul Marganian, Anthony H. Minter
National Radio Astronomy Observatory, Green Bank, WV, USA

Abstract. The Green Bank Telescope (GBT) is a general-purpose instrument that operates at frequencies from 300 MHz to 50 GHz and addresses a range of astronomical problems, from mapping the Moon to studying the earliest structures in the universe. Data processing and analysis for the GBT is complex and relies on a combination of strategies, generally tied to the data recording device (back-end) being used. Pulsar and radar observations are reduced and analyzed using software developed by individual research groups. All other GBT data analysis is supported by in-house efforts. We outline the data processing paths and software applications used for each type of observation, including continuum imaging, spectral line on-the-fly mapping, and point source spectroscopy. The primary package for calibration and reduction of spectral line data is GBTIDL. We describe flagging capabilities recently added to this package. GBTIDL is extensible and is used as a base by observers using specialized methods such as polarization measurements or pulsed spectral line experiments. The package also lends itself to the incorporation of alternative calibration algorithms and techniques.

1. Introduction

The Robert C. Byrd Green Bank Telescope (GBT; http://www.gb.nrao.edu/GBT/GBT.shtml) is a 100-m, fully steerable, single-dish radio telescope. The GBT features an unblocked aperture and an active surface. The current receivers cover frequencies from 300 MHz to 50 GHz. The available back-ends (data recording devices) at the GBT support continuum, spectral line, pulsar, and VLBA/VLBI observations. Some user-provided back-ends are also used at the GBT.

2. Continuum Data Processing

The Digital Continuum Receiver (DCR) is the GBT's general-purpose continuum back-end. It is used both for utility observations that characterize the properties of the telescope and the atmosphere (focus, pointing, opacity) and for extended-source mapping (imaging).



Utility observations are reduced in near-real-time using GFM (GBT FITS Monitor). GFM is integrated into ASTRID (Astronomer's Integrated Desktop; O'Neil et al. 2006) along with the other tools that observers use to prepare and submit scheduling blocks and to monitor observations. The results of pointing and focus observations can be fed back to the telescope control system to adjust those properties of the telescope. DCR mapping data can be calibrated and imaged in AIPS++.

3. Spectral Line Data Processing

GBTIDL (http://gbtidl.sourceforge.net) is the recommended package for processing single-pointing (non-imaging) spectral line data from the GBT (Marganian et al. 2006). A data flagging and blanking capability has recently been added to GBTIDL. The built-in flagging tools can be used directly to mark data inappropriate for averaging, or they can be incorporated into user-developed higher-level tools that flag data based on statistical tests. There are two supported routes for making images from GBT spectral line mapping data. The data can be pre-processed using IDL and exported to classic AIPS. Alternatively, the dish tool, part of AIPS++ (http://aips2.nrao.edu), can be used to calibrate and image GBT spectral line data; AIPS++ tools can also be used to combine GBT data with synthesis data. Note that dish development is frozen along with AIPS++ development.

4. GBTIDL

GBTIDL is written entirely in the Interactive Data Language (IDL), a commercial product of ITT Visual Information Solutions. It can be used as a calculator that operates on spectra. Calibration procedures are provided for data taken in the GBT standard observing modes, and users can easily explore alternative calibrations. Data blanking and flagging, described in the next section, have recently been added. GBTIDL includes an interactive plotter that can produce high-quality PostScript files. User-contributed code is encouraged and is distributed with each release. At the telescope, GBTIDL provides immediate access to the most recent data coming off the telescope. The GBTIDL architecture isolates data I/O so that additional data formats can be easily supported.
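The point about isolated data I/O can be illustrated with a short sketch. This is illustrative Python only (GBTIDL itself is written in IDL), and every name in it is our own invention, not GBTIDL's actual interfaces:

```python
from abc import ABC, abstractmethod

# Hypothetical sketch: SpectrumReader, SdfitsReader, and load_spectra
# are invented names, not GBTIDL's real architecture.

class SpectrumReader(ABC):
    """Interface that each supported on-disk format implements."""
    @abstractmethod
    def read(self, path):
        """Return a list of spectra (header fields plus channel data)."""

class SdfitsReader(SpectrumReader):
    """Placeholder reader; a real one would parse the on-disk format."""
    def read(self, path):
        return [{"source": "example", "data": [0.0, 0.0, 0.0]}]

def load_spectra(path, reader):
    # Analysis code depends only on the reader interface, never on a
    # particular file format, so supporting a new format requires only
    # a new reader class.
    return reader.read(path)

spectra = load_spectra("session.fits", SdfitsReader())
print(len(spectra))  # 1
```

Under this kind of design, nothing downstream of the reader changes when a new data format is added.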

4.1. Flagging and Blanking in GBTIDL

Blanking is the process of replacing spectral intensities for a given set of channels with a special value recognized by the data analysis system (GBTIDL). Flagging is the process of establishing rules so that when data on disk are read into GBTIDL, the data described by those rules are blanked. In general, blanking cannot easily be undone, because the blanked values replace the original values and are stored as data. The only way to undo a blanked value is to return

http://gbtidl.sourceforge.net

3

http://aips2.nrao.edu


to the original source of the data and reload the (possibly uncalibrated) spectra. Flagging rules, on the other hand, can be undone, selectively applied, or ignored. Since the data on disk are not changed by these rules, one simply re-reads the data from disk after changing the flagging rules.

Flagging rules can be specified by individual channel, channel range, individual spectrum, or by selection criteria that identify the group of spectra to which a rule applies. Rules are also identified by a user-supplied name, so that rules sharing the same name can be selectively applied, ignored, or removed.

Flagging is typically an iterative process. Calibrate the raw data and examine the results to determine whether any flagging is required to improve the calibration. If necessary, flag the offending data and return to the first step. Once the initial flagging is done, write the calibrated data to a new data file. Again, examine the calibrated data and use the flagging procedures to mark residual bad data to exclude from the average. Average the data. Examine the average and, if necessary, return to either of the flagging steps and modify the flagging commands. Proceed with analysis of the averaged spectrum.

As of GBTIDL version 2.0, the following flagging and blanking commands are available:

replace: Replace the data in all channels or a range of channels with another value, which may be the special "blanked" value.

flag: Create a flag rule based on selection criteria (scan, integration, polarization, etc.). A string (idstring) can be used to identify the flag rule or groups of rules in other commands.

flagrec: Create a flag rule for a specific row (spectrum) or set of rows. This is similar to flag, but useful when data selection is not necessary to identify the row(s) to be flagged.

listflags: List some or all of the flag rules.

listids: List all of the unique idstrings in the flag rules.

unflag: Remove a flag rule or set of rules by rule number or idstring.

get: Get some data and apply all or a limited set of flag rules (by exclusion or inclusion using the idstring values). Data that match the applied flag rules will be blanked. All similar data retrieval routines have the same syntax for applying and ignoring flag rules.

It is straightforward to write an IDL script that automatically generates flagging rules for an entire data set based on any criteria (e.g., statistics). We refer to this as "auto-flagging" to distinguish it from command-line flagging, where the user types a specific flag command by hand. The plots in Figure 1 show one application of a user-generated auto-flagging procedure on GBT data with significant interference. In this procedure, data are flagged when the intensity differs from the mean by more than some threshold. Extra channels can be flagged on either side of a flagged signal, and the band edges can be excluded from the mean and from any flag rules. The resulting set of flag rules all share the same idstring (reason). The left side of Figure 1 shows the calibrated and averaged data before application of this auto-flagging procedure. The right side shows the same data after the auto-flagging procedure has been used to generate a set of flag rules. The data were then re-averaged.

Figure 1. Calibrated and averaged GBT data before (left) and after (right) auto-flagging.
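The distinction drawn in this section, blanked values stored as data versus flagging rules applied when data are read, can be sketched in a few lines. This is an illustrative Python model of the idea, not GBTIDL code (GBTIDL is written in IDL, and its actual rule storage differs):

```python
import math

# Illustrative sketch only: flag rules live separately from the data;
# blanking happens when data are read, so rules can be removed or
# ignored without touching what is on disk.

BLANK = float("nan")  # stands in for the special "blanked" value

class FlagRules:
    def __init__(self):
        self.rules = []  # each rule: (idstring, scan, channel range)

    def flag(self, scan, chan_range, idstring="unspecified"):
        self.rules.append((idstring, scan, chan_range))

    def unflag(self, idstring):
        self.rules = [r for r in self.rules if r[0] != idstring]

    def get(self, scan, data, skip=()):
        # Apply all rules (except those whose idstring is in `skip`)
        # to a fresh copy of the on-disk data.
        out = list(data)
        for idstring, rscan, (lo, hi) in self.rules:
            if rscan == scan and idstring not in skip:
                for c in range(lo, hi + 1):
                    out[c] = BLANK
        return out

disk = [1.0, 2.0, 50.0, 2.0]   # raw data on disk, never modified
rules = FlagRules()
rules.flag(scan=7, chan_range=(2, 2), idstring="RFI")
spec = rules.get(7, disk)
print(math.isnan(spec[2]))                 # True: channel blanked on read
print(rules.get(7, disk, skip=("RFI",)))   # rule ignored: data intact
rules.unflag("RFI")
print(rules.get(7, disk))                  # rule removed: data intact
```

The key property is the last two lines: because the disk copy is untouched, ignoring or removing a rule and re-reading recovers the original spectrum, which is exactly what blanking alone cannot do.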
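The auto-flagging procedure described above (flag channels deviating from the mean by more than a threshold, pad neighboring channels, exclude the band edges) might be sketched as follows. The function name and parameters are ours, not GBTIDL's:

```python
# Hedged sketch of the auto-flagging idea, with invented names:
# flag any channel whose intensity differs from the mean by more than
# `threshold`, grow each flagged region by `pad` channels, and keep
# `edge` channels at each end out of both the mean and the rules.

def autoflag(spectrum, threshold, pad=0, edge=0):
    n = len(spectrum)
    core = range(edge, n - edge)
    mean = sum(spectrum[c] for c in core) / len(core)
    flagged = set()
    for c in core:
        if abs(spectrum[c] - mean) > threshold:
            # flag this channel plus `pad` neighbors on each side
            for d in range(c - pad, c + pad + 1):
                if edge <= d < n - edge:
                    flagged.add(d)
    # collapse to (start, stop) channel ranges, the form a flag rule
    # would record
    ranges, run = [], []
    for c in sorted(flagged):
        if run and c == run[-1] + 1:
            run.append(c)
        else:
            if run:
                ranges.append((run[0], run[-1]))
            run = [c]
    if run:
        ranges.append((run[0], run[-1]))
    return ranges

spec = [0.1, 0.0, 0.2, 9.0, 0.1, 0.0, 0.1, 0.2]
print(autoflag(spec, threshold=3.0, pad=1))  # [(2, 4)]
```

A driver script would turn each returned range into a flag rule sharing one idstring, so the whole set can later be ignored or removed together.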

5. Future Data Processing Plans

Over the next year, we plan to work on improving wide-bandwidth calibration. More auto-flagging tools will be added to GBTIDL, and a visual, interactive flagging tool is being designed. We also plan to improve the exchange of data (import and export) between GBTIDL and other imaging and analysis tools, including CASA (the ALMA data analysis package and successor to AIPS++), classic AIPS, and CLASS.

References

Marganian, P., Garwood, R. W., Braatz, J. A., Radziwill, N. M., & Maddalena, R. J. 2006, in ASP Conf. Ser. 351, ADASS XV, ed. C. Gabriel, C. Arviset, D. Ponz, & E. Solano (San Francisco: ASP), 512

O'Neil, K., Shelton, A. L., Radziwill, N. M., & Prestage, R. M. 2006, in ASP Conf. Ser. 351, ADASS XV, ed. C. Gabriel, C. Arviset, D. Ponz, & E. Solano (San Francisco: ASP), 719