
New Control System Aspects for Physical Experiments

Wolfgang Eppler, Armen Beglarian, Suren Chilingarian, Simon Kelly, Volker Hartmann, Hartmut Gemmeke

Abstract—New control system aspects are introduced for the design of slow control systems for physical experiments. Mainly, they are based on a comprehensive usage of XML technologies. A second paradigm for future control systems is the consistent use of the Model-View-Controller (MVC) pattern. A main aspect of all system components (hardware and software) is the consistent use of standards for interfaces, protocols and architecture. In particular, the software is based on the unifying XML specifications XML Schema, XPath, XQuery, XLink and OPC XML. A first application of these technologies is the KATRIN Slow Control System (KSC) of a neutrino experiment at the Forschungszentrum Karlsruhe. A main characteristic of KSC is its homogeneous structure, with its code being spread over several sub-systems. Implications of this distributed method are system stability, independence of sub-systems, and fast and convenient maintenance.

I. INTRODUCTION

The systems treated here comprise big physical experiments with scientific users located all over the world. Experiments of our research center like AUGER (detection of masses of ionizing radiation and cosmic rays, which constantly strike the earth), ITER (demonstration of the scientific and technological feasibility of fusion energy for peaceful purposes) or KATRIN (determination of the mass of the electron neutrino) have collaborators from about 20 nations. It is extremely important for them to have fast and uncomplicated access to all scientific data produced by the experiment. Special software installations on their client computers are very tedious because of the deployment and distribution of changing program versions and the use of different platforms. Thin clients whose only requirement is an internet browser like Netscape or Internet Explorer are the solution to this problem. On the other hand, an efficient data transfer between front-end devices with their measurement signals and a database is most important for such systems. For the first time, new web technologies make it possible to use the same concepts for both system parts.

Manuscript received June 6, 2003; revised December 15, 2003. W. Eppler, A. Beglarian, V. Hartmann and H. Gemmeke are with the Forschungszentrum Karlsruhe, POB 3640, 76021 Karlsruhe, Germany (telephone: ++49 7247 825537, e-mail: [eppler,beglarm,hartmann]@ipe.fzk.de). Suren Chilingarian is with the Yerevan Physics Institute, Armenia (e-mail: [email protected]). Simon Kelly is with the University of the West of England (e-mail: [email protected]).

Extremely useful for this integration are web services, which combine two basic technologies: XML, with its ability to move structured data across the web and its separation of content and presentation, and SOAP, the Simple Object Access Protocol, which uses XML messages to invoke remote methods through HTTP's POST and GET methods while being much more robust and flexible. Further evidence justifying this development is grid computing: with its enormous demands on computing performance, storage capacity and data transfer rates, it is based completely on XML technology through its new OGSI specification [1].

In the development of control systems, similar procedures and algorithms occur frequently. Data acquisition systems, for example, consist of event builders, data concentrators, archiving methods and monitor functions. It is therefore natural to reuse algorithms that have been created once and to use existing procedures repeatedly in future developments. For systems like this, the following properties are desirable, and partly mandatory:
- platform and development system independence,
- code and design reuse,
- modularity,
- extensibility,
- a hierarchy of globally defined, abstract data types.

Only few distributed environments are homogeneous and have computers with just one type of operating system. At the present time, operating system independence is a fact, but this independence does not extend to the libraries and other tools provided by the development system, which works against the idea of modularity. Code and design reuse decreases the cost of a new development, provides incremental quality improvements (as software flaws are repaired in long-lived components), and establishes design best practices that everyone in the organization understands. Breaking a design into modules that interact through well-defined high-level interfaces allows developers to work independently, enhances maintainability and testability, and provides opportunities for using purchased components and outsourcing some development. Application functionality must be able to keep up with organizational growth and technological change, and therefore must be extensible. Application-specific data structures organized as a hierarchy of data types can be derived from the XML Schema specification. This globally defined structure is very important for future extensions and compatibility with other modules.
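To make the last point concrete, the following is a minimal sketch of how such a hierarchy of globally defined, abstract data types could be expressed in XML Schema. All type, element and attribute names here are hypothetical illustrations, not the actual KATRIN definitions:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            targetNamespace="http://example.org/sc-types"
            xmlns:sc="http://example.org/sc-types">

  <!-- Abstract base type: every measured value carries a
       timestamp and an optional quality flag -->
  <xsd:complexType name="MeasurementType" abstract="true">
    <xsd:attribute name="timestamp" type="xsd:dateTime" use="required"/>
    <xsd:attribute name="quality" type="xsd:string" use="optional"/>
  </xsd:complexType>

  <!-- Derived concrete type: a temperature reading in kelvin -->
  <xsd:complexType name="TemperatureType">
    <xsd:complexContent>
      <xsd:extension base="sc:MeasurementType">
        <xsd:attribute name="value" type="xsd:double" use="required"/>
        <xsd:attribute name="unit" type="xsd:string" fixed="K"/>
      </xsd:extension>
    </xsd:complexContent>
  </xsd:complexType>

</xsd:schema>
```

Because derived types extend a common abstract base, every module that understands the base type can at least timestamp-sort and quality-filter data it has never seen before.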

All these high-level goals are taken into account when using the Model-View-Controller (MVC) design pattern [2].

The paper is organized as follows: In the next section a first application of this new way of control system design is introduced. The example is well suited to show that the new concepts are more than just an interesting design study; the paper as a whole shows that the time has come to use the proposed concepts in practice. In section III some substantial architectural aspects are considered that constitute the components of modern slow control systems, among them the Model-View-Controller paradigm, the XML language and the OPC standard for process data transfers. In sections IV and V two interfaces are investigated in more detail: the first one is the connection between front-end and database, the second one between database and web server. Other important aspects of slow control systems, such as the control task itself or alarm, historical data and user handling [3][4], are not in the focus of this paper.

II. FIRST APPLICATION EXAMPLE: KATRIN

The KATRIN (KArlsruhe TRItium Neutrino) experiment is designed to measure the mass of the electron neutrino directly to a precision of 0.23 eV. It is a tritium beta-decay experiment scaling up the size of previous experiments by an order of magnitude, with a much more intense tritium source [5], [6]. The tritium source will consist of a 10 m long tube. The gaseous tritium is inserted in the middle and pumped out at both ends. Electrons from beta decay and remaining tritium molecules enter the transport section, which has the purpose of eliminating the tritium while guiding the electrons to the spectrometers. This is achieved by a combination of mechanical and cryogenic pumps and super-conducting magnets. The main spectrometer defines the analysing potential with a precision of a few ppm. Magnetic fields are provided by super-conducting magnets on both sides of the spectrometers. Electrons are counted in an integral fashion above the retarding potential. The electrostatic spectrometer has an overall length of about 20 m. The pre-spectrometer is a smaller version of the main one. A static electric field provides a retarding potential to remove all electrons of low energies. This is necessary to minimize background in the main spectrometer due to trapped electrons. Leaving the main spectrometer, the electrons are guided to a detector. The present concept of the detector is based on a large array of silicon-based semiconducting detectors like diodes or drift detectors.

For this experiment a new-generation control system is developed. It is based on the following concepts:
- Acquisition and control algorithms homogeneously spread over subsystems
- Consistent use of the model-view-controller architecture
- Use of new web technologies with world-wide access to experimental data

- Use of high-level interface descriptions and data protocols.

Distributed access to the system should also be possible from cell phones and PDAs. The newest Internet technologies like web services, XML, XQuery and other high-level standards are used to provide platform independence, modularity and reusable components. A new data type specification based on XML Schema was defined especially for slow control and data acquisition systems. The main hardware components for the data collection are FieldPoint devices from National Instruments running LabVIEW Real-Time.
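Since web services are the named access mechanism, a minimal sketch of a SOAP 1.1 request, as it could be sent via HTTP POST, is given here. The operation ReadValue, its namespace URI and the item name are hypothetical illustrations, not part of the KATRIN system or of any OPC specification:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <!-- Hypothetical remote method: read one process value -->
    <ReadValue xmlns="http://example.org/slowcontrol">
      <ItemName>Spectrometer/Vacuum/Pressure1</ItemName>
    </ReadValue>
  </soap:Body>
</soap:Envelope>
```

A thin client needs nothing beyond a browser or an HTTP library to issue such a call, which is exactly what makes the approach attractive for collaborators on different platforms.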

III. ARCHITECTURAL ASPECTS FOR KSC

The KATRIN Slow Control (KSC) architecture is characterized by its consistent use of standard protocols and standard interfaces on a high level. Especially for physical experiments this objective is most important, as some of them have typical life cycles of 10 to 20 years. There will be continuously changing requirements during this time period: components will have to be changed and will be replaced by others specified for different environments. A rapid development of the web and of hardware and software tools is to be expected; in particular, platforms will change together with their operating systems. In such a changing world nothing seems stable, but well-selected interfaces and protocols have been shown to exhibit the longest life cycles. Examples are the Ethernet specification and the TCP/IP and MIME protocols.

Figure 1 shows that KSC mainly relies on XML (eXtensible Markup Language [7]) technologies. The tendency of several other internet-based data transfer standards like OGSA (Open Grid Services Architecture) is to merge with XML and to integrate this basic data exchange language. In the last few years, XML has become a preferred format for encoding and moving data in an open, system-independent way. This mainly arises from its separation of data content and data presentation, which makes it an excellent basis for an MVC (model-view-controller) architecture.

The MVC paradigm suggests a basic architecture that separates data access functions from the presentation and control logic that uses these functions. The separation of model and view allows multiple views to use the same data model. Consequently, application components are more easily implemented, tested, and maintained, since all access to the model goes via these components. To support a new type of client, one simply writes a view and a controller and wires them into the existing data model. The MVC architecture is a way of breaking an application into three parts: the model, the view, and the controller. The model represents the structure of the data in the application, as well as application-specific operations on that data. The view (there may be several) presents data in a specific form to a user in the context of some application function.

Fig. 1. Architectural overview of the KATRIN slow control system. At the bottom, several subsystems (CRYO, ... Detector) can be seen that are controlled by a distributed system of National Instruments FieldPoint stations. These stations are connected to the supervisory control and to the database (DB) by an XML OPC protocol (HDR is an extension of the standard protocol; see later in the text). On the query side of the database, distributed web clients have access to the DB via a web server using XML mechanisms for data queries.

A controller translates user actions (mouse motions, keystrokes, spoken words, etc.) and user input into application function calls on the model, and selects the appropriate view based on user preferences and model state. In other words, a model corresponds to the application state, a view corresponds to the application presentation, and a controller corresponds to the application behaviour. In particular XSL, the eXtensible Stylesheet Language, provides the view of MVC and enforces the decoupling of presentation and data. Struts, and in the case of KSC stxx (Struts with XSL), are powerful frameworks for the MVC controller. The XQuery language, with its close relation to XPath, is still a working draft but will change that status to a candidate recommendation in February 2004. If a data model uses XML as its data format, XQuery is the natural access method for an MVC model.
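As a small illustration of this division of labour, a stylesheet like the following could serve as one view of an XML data model, rendering temperature readings as an HTML table without touching the model itself. The element and attribute names are invented for this sketch and are not taken from KSC:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal view: renders <Temperature value="..." timestamp="..."/>
     elements inside a <Readings> document as an HTML table -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/Readings">
    <html>
      <body>
        <table border="1">
          <tr><th>Time</th><th>T [K]</th></tr>
          <xsl:for-each select="Temperature">
            <tr>
              <td><xsl:value-of select="@timestamp"/></td>
              <td><xsl:value-of select="@value"/></td>
            </tr>
          </xsl:for-each>
        </table>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

A second view, for example one producing WML for the cell phones and PDAs mentioned above, would be a second stylesheet over the very same data model, with no change to model or controller code.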

XML was primarily designed for, and at the present time is mostly used for, representing documents and describing meta-data. It will be shown that under certain conditions it may also be used for big data sets. In this regard the connection between XML and OPC (OLE for Process Control [8]), the industry standard for data exchange in process control, is of special interest. As a big step for process control, the OPC Foundation (http://opcfoundation.org) created a new XML-based OPC standard to augment its existing, widely successful standard based on COM (component object model) / DCOM (distributed COM). Programmable logic controllers (PLC), distributed control systems (DCS), human-machine interfaces (HMI), and other factory-floor software vendors use the OPC standards to move real-time data between field devices, control systems and other applications in a standard way, promoting multi-vendor compatibility and interoperability.

OPC XML solutions provide great benefits over the COM/DCOM standards. The most important ones are:
(1) Simple integration with Internet applications. Existing OPC applications work fine in the typical LAN (local area network) environment. However, as DCOM uses dynamically allocated TCP/IP ports that are typically not allowed through corporate firewalls, supporting Internet clients becomes a nearly impossible task.
(2) Use in non-Microsoft environments, even in heterogeneous networks, because of the pure text-based data representation.
(3) Better connectivity to enterprise applications. Some enterprise applications need real-time plant-floor data passed to them by an OPC server, but most of these highly integrated applications do not implement the COM interfaces necessary to talk to OPC servers.

OPC XML has one big disadvantage: XML uses a textual data representation, which causes much more network traffic when transferring data. Even BASE64- or UU-encoded byte arrays are approximately one and a half times larger than the natural binary format, and for more complex data this ratio can rise to ten times and more. Additionally, more CPU resources are required for the transformation between the natural data representation and XML [9]. It is therefore impractical to use the current OPC XML DA (OPC XML Data Access [10]) specification for devices generating huge amounts of real-time data. A further issue is the present unavailability of XML versions of the OPC History [3] and OPC Alarm and Events [4] specifications that are typically required in scientific experiments.
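For reference, the lower bound of this overhead follows directly from the encoding definition: BASE64 maps every 3 input bytes to 4 output characters, giving an expansion of 4/3 - 1, i.e. about 33%, before any XML markup, whitespace or line breaks are added; with those included, the factor of roughly 1.5 quoted above is quickly reached.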

Nevertheless it is very desirable to have one universal concept serving both low and high data rates. To this end, some small extensions are added to the original OPC XML DA specification, at the same time preserving full compatibility with legacy OPC XML DA clients. The following sections discuss these extensions to the OPC XML DA specification, called OPC XML HDR (High Data Rate) [11], which include an efficient encoding of binary data and the selection of fast multi-platform XML libraries to be used in the server design.

IV. CONNECTION BETWEEN FRONT-END AND DATABASE

A. Problems with OPC XML DA

When dealing with high data rates using OPC XML servers, the following problems arise:
- XML is a text-based format, so all binary data must be encoded with some text encoding to be transferred in XML messages. All these encodings (BASE64, UUENCODE, etc.) enlarge the original data size by at least 35%.
- The OPC XML DA specification requires XML messages to be very descriptive about the data being transferred. For example, instead of the bare value "5" a full record describing the item is sent, which increases the bandwidth requirement by a factor of about ten or even more.
- The Subscribe/SubscriptionPolledRefresh data subscription mechanism used in OPC DA is inefficient in utilizing the full network bandwidth.
- OPC XML DA is designed to use the Simple Object Access Protocol (SOAP) as transport protocol over HTTP/HTTPS, which is connection-oriented and cannot be used for multicast communication; taking advantage of multicasting for serving several clients, however, would reduce the bandwidth enormously.
- Access to archived data (like the "Historical Servers" in standard non-XML OPC) is not provided by OPC XML.

All these problems are considered in the proposed new extension to the OPC XML specification. The most important point of this extension is its two-way compatibility with the present standard.
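To illustrate the second point, a single reading that is natively one integer appears in an OPC XML DA response roughly as follows. This is a simplified sketch of the message shape defined by the OPC XML DA schema [10]; the item name is hypothetical:

```xml
<!-- One integer value "5", wrapped in descriptive XML
     (simplified sketch, not the full normative message) -->
<ItemValue ItemName="Spectrometer/Vacuum/Pressure1"
           Timestamp="2003-06-06T12:00:00Z">
  <Value xsi:type="xsd:int"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema">5</Value>
  <Quality QualityField="good"/>
</ItemValue>
```

Compared with the few bytes a binary protocol would need, several hundred bytes of markup are transferred, which is where the factor of ten and more comes from.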

B. OPC XML HDR extensions

The fundamental approach to solving the bandwidth problem is to use a binary data representation that is integrated into XML. There are two possible solutions satisfying this condition: a SOAP message with attachment [12], or the multipart/related MIME type of an HTTP message [13] using XLink (the XML Linking Language [14]). As SOAP attachments are still in the state of a W3C working draft, are unsupported by major XML libraries, and do not provide any way of multicasting messages, the second approach is used.

Because HTTP/HTTPS is a connection-oriented protocol, it cannot be used directly for multicasting. The only way to implement this is to separate the data connection from the control connection, as it is done in the FTP protocol. The addressing of the multicast group can be done in an XLink reference, as shown in the example (Figure 2). Furthermore, user-derived data types can be declared by means of an XSD Schema [15]. The new schema can then be stored on the server under the '/RecordTypes' path and may be accessed by clients as standard OPC XML DA data, using read requests. In OPC XML DA data messages, only the name of this data type and the number of records have to be sent.

C. Security aspects of OPC XML HDR

The described solutions to the bandwidth problems raise new problems: security problems. In standard cases HTTPS can be used to protect the data, but for multicast data connections HTTPS is unavailable and some other mechanism must be used. The proposal is to use an authentication server, which uses SSL private/public keys for authorization and generates symmetric session keys. It is also proposed to use the internal XML security approach for the control connections instead of the HTTPS protocol, as described in the XML Encryption [16] and XML Signature [17] specifications. This provides the following advantages over the HTTPS approach:
- better compatibility with third-party internet software,
- independence of proxy servers that lack HTTPS support,
- independence of simple and fast web servers that lack HTTPS support,
- more control over client authentication, since access control is shifted from the web server to the OPC server,
- enhanced performance, since only parts of an XML document need to be secured.
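The example referenced in section B (Figure 2) is not reproduced here; the following is a hedged sketch of how such an XLink reference separating the data connection from the control connection could look. The element and attribute names, the URI scheme and the multicast address are chosen purely for illustration:

```xml
<!-- Control-connection message pointing to an out-of-band
     multicast data stream (illustrative sketch only) -->
<DataBlock xmlns:xlink="http://www.w3.org/1999/xlink"
           xlink:type="simple"
           xlink:href="udp-multicast://239.255.42.1:5500/stream42"
           RecordType="/RecordTypes/DetectorEvent"
           RecordCount="100000"/>
```

In such a scheme, an HDR-capable client would open the multicast socket and interpret the binary records according to the record type previously read from the '/RecordTypes' path, while the bulky payload never passes through the XML channel at all.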

D. Compatibility with OPC XML DA

To provide two-way compatibility with legacy OPC XML DA servers and clients, the following approach is proposed:
- Server view: If a client uses an XML request with OPC XML HDR extensions, the server considers the client to support the HDR extensions; otherwise the client is treated as a legacy OPC XML DA client.
- Client view: The client sends a standard OPC XML DA GetStatus request to the server. If the server has an entry "XML_HDR_VERSION_1_0" in its list of supported interfaces, the server is treated as HDR-capable; otherwise it is treated as a legacy OPC XML DA server.

E. Binary Data Representation

As described above, the only fast possibility to transmit large amounts of data is to use a binary rather than a textual data representation. But different platforms, and even different compilers on the same platform, use different binary data representations: different byte orders for multi-byte data, different floating point formats, different string formats, and even different alignments of data. Thus some universal standard should be chosen to transport data between different platforms. Of the currently available binary encoding standards, the following ones were investigated and compared:
- XDR (eXternal Data Representation [18], [19]), standardized as RFC 1014 and widely used in SUN RPC servers and in the NFS file system;
- CDR (Common Data Representation), the data representation used by the CORBA Inter-ORB Protocol [20].
