Development of a readout link board for the demonstrator of the ATLAS Tile calorimeter upgrade
Published by IOP Publishing for Sissa Medialab

Received: November 16, 2012
Revised: January 21, 2013
Accepted: February 12, 2013
Published: March 27, 2013

Topical Workshop on Electronics for Particle Physics 2012,
17–21 September 2012, Oxford, U.K.
S. Muschter,a,1,2 K. Anderson,b,2 C. Bohm,a,2 D. Eriksson,a,2 M. Oreglia,b,2 and F. Tangb,2

a Fysikum, Stockholm University, Stockholm, Sweden
b University of Chicago, Chicago, U.S.A.

E-mail: [email protected]

Abstract: A hybrid readout system is being developed for installation in one module of the ATLAS scintillating Tile Calorimeter (TileCal) during the long LHC shutdown in 2013/2014. The hybrid combines a fully functional demonstrator of the fully digital system planned for installation in 2022 with circuitry that maintains compatibility with the existing system. This is a report on the second generation prototype of the link and controller board connecting the on- and off-detector electronics. The main logic component on this board is a XILINX Kintex-7 FPGA connected to a 12 × 5 Gbps SNAP12 optical transmitter and a 4 × 10 Gbps QSFP+ connector for off-detector communication. One of the two will be chosen for the final design.

Keywords: Front-end electronics for detector readout; Optical detector readout concepts
1 Corresponding author.
2 On behalf of the ATLAS Tile Calorimeter System.
© CERN 2013 for the benefit of the ATLAS collaboration, published under the terms of the Creative Commons Attribution 3.0 licence by IOP Publishing Ltd and Sissa Medialab srl. Any further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation and DOI.

2013 JINST 8 C03025
doi:10.1088/1748-0221/8/03/C03025
Contents

1 Introduction
2 The ATLAS Tile Calorimeter
   2.1 The current readout electronics
   2.2 The upgraded readout electronics
3 First prototype DaughterBoard
4 The second generation DaughterBoard
   4.1 Hardware changes
   4.2 Functionality changes
5 DaughterBoard status
6 Conclusion
1 Introduction
The Large Hadron Collider (LHC) upgrade program aims at a 5–10-fold luminosity increase over the design value following the long shutdown ending in 2022 (Phase 2). A number of upgrades to the ATLAS experiment [1] are planned up to this point, both to replace aging electronics as they reach end-of-life, and to cope with the much higher backgrounds expected. One upgrade planned for 2022 is a completely redesigned front-end digitization and readout system for the hadronic scintillating-tile calorimeter (TileCal) [2]. The current system [3] stores digitized samples in on-detector digital pipeline memories and selects only those samples that are involved in triggered events for further processing in off-detector modules (Read Out Drivers, RODs). These are located in the neighboring electronics cavern. The new readout will use multi-gigabit optical links to transmit all digitized samples off the detector, allowing the pipeline memory to be placed outside the detector, near the RODs. Among other benefits, this new architecture can provide a fully digital trigger, help ease trigger latency constraints, and provide the Level-1 trigger with higher-resolution hadronic layer data. To evaluate this new architecture, a hybrid on-detector electronic demonstrator system will be installed in a limited part of the detector during the first long shutdown in 2013/2014 (Phase 0). The demonstrator will be a fully functional prototype for the future on-detector electronics, while maintaining compatibility with the present data acquisition system. The new system merges the four board types in the current electronics into a more compact three-board solution: "front-end" boards for signal conditioning, a "MainBoard" for data acquisition and a link "DaughterBoard" for off-detector communication and control. The development of the demonstrator has been an iterative process. An early, rough model of the readout hardware was assembled [4]
using mainly off-the-shelf components to allow firmware development [5] to be carried out in parallel with the hardware development. The model has been gradually refined by replacing components with more realistic prototypes. After tests of the first MainBoard and DaughterBoard prototypes, a second DaughterBoard version was produced that incorporates a modern FPGA, a redesigned connector to the data acquisition board, and increased high speed communication capability.
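To see why multi-gigabit links are required once every sample is shipped off-detector, a back-of-envelope estimate of the raw data rate per drawer is instructive. Only the 40 MHz sampling rate and the 48-PMT drawer capacity come from the paper; the sample width and gain count below are illustrative assumptions:

```python
# Rough estimate of the raw payload one drawer must transmit off-detector
# when all samples are sent (no on-detector event selection).
SAMPLING_RATE_HZ = 40e6   # one sample per LHC bunch crossing (from the paper)
PMTS_PER_DRAWER = 48      # maximum channels per drawer (from the paper)
BITS_PER_SAMPLE = 12      # assumed ADC resolution (not stated here)
GAINS_PER_PMT = 2         # assumed dual-gain readout (not stated here)

raw_bps = SAMPLING_RATE_HZ * PMTS_PER_DRAWER * BITS_PER_SAMPLE * GAINS_PER_PMT
print(f"{raw_bps / 1e9:.2f} Gbps")   # ~46 Gbps before protocol overhead
```

Even before encoding overhead this sits in the tens of Gbps per drawer, which is the regime served by the DaughterBoard's 12 × 5 Gbps SNAP12 or 4 × 10 Gbps QSFP+ uplink options.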
2 The ATLAS Tile Calorimeter
The ATLAS hadronic Tile Calorimeter (TileCal) is essential for measuring the energy and direction of jets and hadrons produced in LHC collisions. The calorimeter itself is composed of steel plates alternating with plastic scintillator tiles. Optical wavelength-shifting plastic fibers from the scintillating tiles are grouped into roughly 5000 calorimeter cells that are read out by 10000 photomultiplier tubes and associated readout electronics. In the current system, the photomultiplier signals are digitized and pipelined on-detector, and triggered events are read out over 1 Gbit fiber links. A reduced set of "tower" sums is also sent as analog pulses to the Level-1 trigger. The TileCal cells are grouped into four barrel partitions, two extended barrels at the outside and two central barrels in the middle. Each barrel is divided into 64 slices. The readout electronics are situated at the outside of every slice in so-called drawers. Overall there are 512 drawers within TileCal, and one drawer can digitize the signals from up to 48 photomultiplier tubes (PMTs).

2.1 The current readout electronics
The current TileCal readout electronics comprise four board types. The front-end 3-in-1 board is a single-card solution for amplification and shaping of the PMT signals, as well as for various calibration functions. The 3-in-1 boards are configured and controlled by the 3-in-1 motherboards. The TileCal Digitizer boards digitize the PMT signals at 40 MHz and store the data in digital pipelines. Reduced-granularity tower sums are distributed to the L1 trigger as analog pulses. Upon a Level-1 trigger accept, data are read out and sent off-detector through the Interface board, which also handles configuration and timing distribution. A rough overview of the current electronics is shown in figure 1.
Figure 1. Schematic overview of the front-end electronics situated within a drawer in TileCal.
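The pipeline-plus-trigger readout described above can be sketched as a ring buffer: samples are written at the bunch-crossing rate, and a Level-1 accept reads back a window of samples at a fixed latency. The buffer depth, latency and window size below are illustrative, not TileCal's actual parameters:

```python
from collections import deque

class PipelineMemory:
    """Toy model of an on-detector digital pipeline: a fixed-depth
    ring buffer that returns a window of samples on a trigger accept."""

    def __init__(self, depth=128):
        self.buf = deque(maxlen=depth)   # oldest samples fall off the end

    def clock_in(self, sample):
        self.buf.append(sample)          # one sample per 25 ns bunch crossing

    def l1_accept(self, latency, window=7):
        """Return `window` samples ending `latency` crossings in the past."""
        samples = list(self.buf)
        end = len(samples) - latency
        return samples[end - window:end]

pipe = PipelineMemory()
for bx in range(200):                    # simulate 200 bunch crossings
    pipe.clock_in(bx)
print(pipe.l1_accept(latency=10))        # samples 183..189
```

In the upgraded architecture this same buffer simply moves off-detector: the links carry every sample, and the selection happens near the RODs instead.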
2.2 The upgraded readout electronics
For the TileCal upgrade, the front-end electronics will be completely redesigned and repartitioned. One board (the DaughterBoard) will be used for communication and control, while a second (the MainBoard) will digitally sample the PMT pulses. A redesigned version of the 3-in-1 board will be used for signal preamplification and shaping [6]. PMT pulses will be shaped in the 3-in-1 board, and the signals will be digitized on the MainBoard and transferred to the DaughterBoard. From there the full digital data will be sent off-detector over high-speed optical links.
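The repartitioned chain can be sketched as three stages in sequence. The gain, ADC step size and frame format below are invented purely for illustration and do not reflect the actual boards:

```python
# Toy sketch of the upgraded readout chain: 3-in-1 shaping, MainBoard
# digitization, DaughterBoard serialization. All numbers are illustrative.
def shape(pulse):
    """3-in-1 board: amplify and shape the PMT pulse (assumed gain of 2)."""
    return [2.0 * x for x in pulse]

def digitize(shaped, lsb=0.5):
    """MainBoard: ADC conversion with an assumed 0.5-unit LSB."""
    return [int(x / lsb) for x in shaped]

def serialize(samples):
    """DaughterBoard: pack samples as 16-bit words for the optical link."""
    return b"".join(s.to_bytes(2, "big") for s in samples)

frame = serialize(digitize(shape([0.1, 0.8, 0.4])))
print(frame.hex())
```

The point of the repartitioning is that each stage lives on its own board, so the analog front end, the digitization and the link electronics can be revised independently.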
3 First prototype DaughterBoard
The first prototype DaughterBoard was conceived as a platform for testing and verification of design features and components for the final demonstrator version. The exact specifications are still evolving, making it difficult to finalize the DaughterBoard design at this time. Tests of the first and second generation prototypes allow the specifications to be evaluated and thus simplify the decision process for completely specifying the final version. The dimensions of the first prototype board were 103 mm × 201 mm, values partly determined by the space available within the drawer. To provide redundancy the board was divided down the center line into two identical parts running in parallel. Two XC6VLX130T Virtex-6 FPGAs [7] were placed on the board, as well as two redundant sets of power supplies. This allows readout to continue uninterrupted if one side fails, for example due to a Single Event Upset (SEU) in the configuration memory of an FPGA. The FPGAs are interconnected in a multi-point LVDS topology, allowing readout of one side with both FPGAs simultaneously. The first prototype was additionally equipped with two SFP+ modules, one AFBR-775BEPZ twelve-channel laser array transmitter and a 400-pin SEAF-40-06.5-10-A connector following the ANSI/VITA 57.1 FPGA Mezzanine Card (FMC) standard [8]. The latter is capable of transferring data from four 3-in-1 boards.

In a radiation environment, special care must be taken in the choice of components to ensure error-free operation. The components for the first prototype were mainly commercial and will successively be replaced as radiation-tolerant alternatives become available. To prepare for possible scenarios arising from component changes, different options were considered during design. One such example concerns the clocking scheme. In the final version of the demonstrator the main clock should be derived from a GBTx chip [9], developed at CERN. Because the electrical characteristics of the GBTx were not available at the time of the prototype layout, a workaround without external clock circuitry was introduced. This concept had the benefit of reducing the overall power consumption and the number of external components, and thus reduced potential sources of failure. Additionally, this scheme would have increased redundancy because it allowed the GBTx to be bypassed in case of failure. A potential drawback was that this scheme was not recommended for driving gigabit transceivers. The concept was partly tested [10] on a ML605 evaluation platform from XILINX [11] and shown to work. In these tests an external clock was used to start up the data reception and an internally synthesized clock was used for data transmission. The next step to be investigated with the first prototype DaughterBoard was to rely solely on internal clocking.

Unfortunately, test results showed that the decoupling of the power distribution network was not sufficient to derive a clock clean enough to start up the data reception of the first prototype and thus establish stable communication between the DaughterBoard and a ML605 using the SFP+ modules at 4.8 Gbps. Further investigations showed that the quality of the supply voltage was directly coupled to the jitter of the global clock nets. Accordingly, the jitter can be reduced by adding filtering to the power planes, which in turn would also reduce the noise of signals sampled by the connected ADCs. For further tests the prototype DaughterBoard was connected to the first generation MainBoard [4], developed at Stockholm. Using this setup (figure 2), SPI programming and data readout at up to 480 Mbps between MainBoard and DaughterBoard were successfully verified. In summary, the main purpose of this board was to evaluate the possibilities for developing a data acquisition board with a minimal number of commercial components by transferring as much functionality as possible to the FPGA. Further information about the first prototype DaughterBoard can be found in [4].

Figure 2. First test setup used for firmware development, consisting of the first generation DaughterBoard (1), the first generation MainBoard (2), and two 3-in-1 boards used to shape the pulses coming from the photomultipliers.

4 The second generation DaughterBoard

4.1 Hardware changes

The second generation DaughterBoard was designed to resolve the issues discovered in the previous version, and to incorporate new functionality and interfaces. There were two major changes, the first being a migration from the Virtex-6 FPGA to the XC7K325T Kintex-7 [12] and the second being a major redesign of the power distribution network. The Kintex-7 FPGA was chosen to reduce overall power consumption and increase the speed of data transmission up to 10 Gbps. Power consumption was reduced by up to 30 percent as a result.
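The coupling between supply noise and clock jitter described above is attacked with additional low-pass filtering on the power planes. As an illustration of the mechanism, the cutoff of a simple LC filter stage can be computed; the component values here are assumed for illustration, since the paper does not give the actual filter design:

```python
import math

# Cutoff frequency of an LC low-pass filter on a supply rail:
# f_c = 1 / (2 * pi * sqrt(L * C)). Component values are assumed.
L = 1e-6      # 1 uH series inductor/ferrite (assumed)
C = 100e-6    # 100 uF bulk decoupling capacitance (assumed)

f_c = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"filter cutoff: {f_c / 1e3:.1f} kHz")   # ~15.9 kHz
```

A cutoff of a few tens of kHz sits well below the switching frequency of typical DC-DC regulators, so switching ripple on the rail, and hence the periodic jitter it induces on clock nets, is strongly attenuated.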
It was decided to use only 10 V from the low voltage power supplies. The power distribution network was adapted accordingly, allowing the use of rad-hard DC-DC regulators developed at CERN [13], at first only for the conversion from 10 V down to 3.3 V. If these perform satisfactorily, they will be used for other voltages as well. A further change to the power distribution network was that the high-speed data links have a separate power supply, independent from the FPGAs. Analog filters were implemented on every output of the switching DC-DC regulators. A QSFP+ module replaced the two previously used SFP+ modules, providing the capability for 10 Gbps data transmission as well as receiving commands and timing information from the off-detector area. Within the QSFP+ module two transmission lines were connected to one FPGA and two to the other to maintain redundancy. The QSFP+ module itself could be programmed from both FPGAs using a bus interconnection. A similar bus interconnection was used between the AFBR-775BEPZ transmitter and the two FPGAs. In a later generation, one of these high-speed optical module options will be removed after a selection has been made based on performance. A dedicated clock circuit (CDCE62005 from Texas Instruments) was added, which can be programmed using a standard SPI interface. This circuit allows different clocking schemes and frequencies for driving the GTX to be evaluated. This chip is not considered for the final version of the DaughterBoard that will be used within the demonstrator project, and will be replaced by a GBTx chip [9].

4.2 Functionality changes
Figure 3. Schematic overview of the DaughterBoard user IO connections used for communication with the underlying MainBoard (400-pin connector) or the off-detector area (QSFP+ and SNAP12 modules).

Figure 4. Photograph of the second generation DaughterBoard mounted with one Kintex-7 FPGA.

The second generation DaughterBoard also added more control and monitoring functionality. The pin layout of the 400-pin connector was redesigned, leaving more space for user-defined functions. The formerly used ANSI/VITA 57.1 FMC pin layout was replaced by a custom pin layout, which allows reading out twelve 3-in-1 front-end boards, as required by the demonstrator. Contrary to the ANSI/VITA FMC definition, almost all pins were connected to general purpose IO pins on the FPGA. An exception was made in the innermost rows, where mainly pins used for monitoring and control were placed. The pins internally connected to an ADC input or an I2C interface can be used as normal IO as well. All of the remaining pins can be used with LVDS or a 1.8 V single-ended communication standard. This restriction arises because the Kintex-7 provides two types of IO banks, one for high range (HR) and one for high performance (HP). The latter was chosen because it supports transmission speeds up to 710 Mbps using single data rate and 1400 Mbps using double data rate communication. In addition, the user has the possibility to implement digitally controlled impedance, giving better signal integrity. The last change was that the JTAG chain connected to the FPGA was routed through the MainBoard via the 400-pin connector. This way all the devices located on one side of the front-end electronics and connected to the JTAG chain can be programmed independently. This feature can be bypassed if necessary.
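The two HP-bank figures quoted above are consistent with double data rate signaling simply using both clock edges; the interface clock frequencies below are inferred from the quoted rates, not stated in the paper:

```python
# Per-pin data rate = interface clock frequency * transfers per clock cycle.
def rate_mbps(clock_mhz, transfers_per_cycle):
    return clock_mhz * transfers_per_cycle

sdr = rate_mbps(710, 1)   # single data rate: one transfer per cycle
ddr = rate_mbps(700, 2)   # double data rate: both clock edges used
print(sdr, ddr)           # -> 710 1400
```

Doubling the per-pin rate with DDR halves the number of connector pins needed for a given MainBoard-to-DaughterBoard bandwidth, which is what frees pins for the monitoring and control functions described above.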
5 DaughterBoard status
The partly mounted version of the DaughterBoard, with only one FPGA, is shown in figure 4. Since the board did not show any serious issues, one more board will be mounted, equipped with two FPGAs, to fully validate the GTX transmission lines. In this case the first board shown in figure 4 will be used for radiation tests.

To evaluate changes in performance compared to the first generation, the power distribution network was tested and found to perform better than before. The switching noise of the DC-DC regulator was sufficiently attenuated. As a result the clock quality improved significantly. Figure 5 shows the difference between the first and the second generation when both clock signals were synthesized using a 100 MHz oscillator connected to the FPGA. One can see in the track that combines the Random and Bounded Uncorrelated Jitter (RjBUj track) in figure 5(a) that the signal quality was degraded by a periodic distortion caused by the noise of the DC-DC regulator. This distortion was completely eliminated by the increased analog filtering. The shapes of the clock signals differ because of the different output standards used. In the final application the GTX-PLL will be driven directly from a GBTx chip, which should deliver a high performance clock. If the GBTx chip is not available, the clock for the GTX-PLL will be synthesized internally. Since the GTX-PLLs of the Virtex-6 and Kintex-7 have similar constraints, comparing the clock performance gives a good indication of whether the planned clocking scheme will work. Furthermore, a high quality clock is also necessary for minimizing the noise of the MainBoard ADCs.

Figure 5. Comparison between a 250 MHz synthesized clock on the first prototype (a) and the second generation (b) DaughterBoard, showing the shape of the signal in the upper half and the corresponding RjBUj track of the signal in the lower half of the picture.

After the power distribution network and clock quality were sufficiently tested, GTX tests with transmission rates of 5 Gbps were performed using the SMA connectors wired to one GTX. This test was performed using a standard 100 MHz clock input derived from a ML507 [14] evaluation platform from XILINX and connected directly to the GTX unit. The eye diagram, shown in figure 6, displays a clearly open eye. Unfortunately it was not possible to test 10 Gbps transmission rates using the same clock input because the Kintex-7 engineering samples did not support the clock routing necessary to perform this test. To overcome this issue the CDCE62005 will be used to perform the test, and the next board will be mounted with Kintex-7 production versions, which are now available.
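The pseudo-random bit stream driving such eye measurements is typically a PRBS pattern produced by a linear-feedback shift register; PRBS-7 is a common choice for transceiver tests, although the paper does not state which pattern was used. A minimal software generator:

```python
def prbs7(seed=0x7F, n=300):
    """Generate n bits of a PRBS-7 sequence (polynomial x^7 + x^6 + 1),
    the pattern commonly used for serial-link testing."""
    state = seed & 0x7F
    bits = []
    for _ in range(n):
        bits.append(state & 1)                    # emit one bit per cycle
        feedback = ((state >> 6) ^ state) & 1     # taps: stages 7 and 1
        state = ((state << 1) | feedback) & 0x7F  # shift, insert feedback
    return bits

bits = prbs7()
# A maximal-length 7-bit LFSR repeats after 2**7 - 1 = 127 bits.
period_ok = all(bits[i] == bits[i + 127] for i in range(len(bits) - 127))
print(period_ok)   # -> True
```

In hardware the same pattern is produced and checked by the transceiver's built-in pattern generator/checker, which is also the basis for the bit error tests planned as the next step.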
6 Conclusion
The second generation DaughterBoard demonstrates significant improvements compared to the first generation and is therefore an important step towards a fully functioning demonstrator to be installed during the long shutdown in 2013/2014. However, there are still important tests to perform. It was shown that the power distribution network and clock quality were significantly improved, both important requirements for achieving gigabit transmission. The GTX was shown to work at transmission rates of up to 5 Gbps. As a next step, bit error tests must be performed, as well as tests of every GTX transmission line. When the functionality of the board with respect to data reception and gigabit transmission is completely validated, radiation tests must be performed to ensure that all components are sufficiently radiation tolerant. Finally, the results gained from these tests will be incorporated into a third generation prototype, which should be available for testing in the first half of 2013.

Figure 6. Eye diagram of a 5 Gbps signal and the corresponding RjBUj track, generated using a 100 MHz reference clock signal and a pseudo-random bit stream.
References

[1] ATLAS collaboration, The ATLAS Experiment at the CERN Large Hadron Collider, 2008 JINST 3 S08003.

[2] ATLAS collaboration, ATLAS Tile Calorimeter Technical Design Report, CERN-LHCC-96-042 (1996).

[3] K. Anderson et al., Front-end electronics for the ATLAS Tile calorimeter, in Proceedings of the Fourth Workshop on Electronics for LHC Experiments, September 1998, Rome, Italy.

[4] D. Eriksson et al., A prototype for the upgraded readout electronics of TileCal, 2012 JINST 7 C02006.

[5] S. Muschter et al., An early slice prototype for the upgraded readout electronics of TileCal, in IEEE Nucl. Sci. Symp. Med. Imag. Conf. (2011), Valencia, Spain.

[6] F. Tang et al., Design of the front-end readout electronics for ATLAS tile calorimeter at the sLHC, in IEEE-NPSS Real Time Conference (2010).

[7] XILINX, Virtex-6 Family Overview, DS150, January 2012, http://www.xilinx.com/support/documentation/data_sheets/ds150.pdf.

[8] American National Standards Institute, American National Standard for FPGA Mezzanine Card (FMC) Standard, ANSI/VITA 57.1-2008 (2008).

[9] P. Moreira, GBTx specifications, https://espace.cern.ch/GBT-Project/GBTX/Specifications/gbtxSpecsV1.2.pdf.

[10] S. Muschter, A Full Slice Test Version of a Tentative Upgraded Readout System for TileCal, ACES Common ATLAS CMS Electronics Workshop for SLHC, March 2011, CERN, Switzerland.

[11] XILINX, Virtex-6 FPGA ML605 Evaluation Kit, http://www.xilinx.com/products/boards-and-kits/EK-V6-ML605-G.htm.

[12] XILINX, 7 Series FPGAs Overview, DS180, November 2012, http://www.xilinx.com/support/documentation/data_sheets/ds180_7Series_Overview.pdf.

[13] B. Allongue et al., Low noise DC to DC converters for the sLHC experiments, 2010 JINST 5 C11011.

[14] XILINX, Virtex-5 FPGA ML507 Evaluation Platform, http://www.xilinx.com/products/devkits/HW-V5-ML507-UNI-G.htm.