Development of Virtual Data Acquisition Systems based on Multimedia Internetworking¹

Giancarlo Fortino
Libero Nigro

Laboratorio di Ingegneria del Software
Dipartimento di Elettronica Informatica e Sistemistica
Università della Calabria, I-87036 Rende (CS), Italy
Email: {g.fortino, l.nigro}@unical.it
Abstract. This paper proposes a novel approach centered on multimedia internetworking for the development of distributed virtual instruments. Multimedia internetworking refers to network infrastructures, protocols, models, applications and techniques being currently deployed over the Internet to support multimedia applications, e.g., videoconferencing, video-on-demand, shared workspaces. It is applied to broaden the concept of virtual instrument and enable new measurement patterns leveraging efficiency and interactivity. A Distributed Virtual Instrument (DVI) is a virtual instrument split into possibly multiple and independent parts, sender and receiver, which are linked by real-time continuous media and control streams. Senders and receivers are built by using open, composable and modular components based on a time sensitive actor framework and glued by multimedia middleware. A prototype is described to demonstrate the potential and the benefits of the proposed approach.
Keywords. Multimedia Internetworking, Distributed Virtual Instruments, IP-multicast, RTP, RSVP, NI-DAQ, Actors.
¹ A preliminary version of this paper was presented at the IEEE Instrumentation and Measurement Technology Conference (IMTC/99), Venice, May 1999, Vol. 3, pp. 1863-1867.
1. Introduction

The development and deployment of the IP multicast infrastructure (the MBone) [3] as an overlay network on the Internet has enabled an efficient and scalable mechanism for multipoint data delivery. New real-time protocols (RTP/RTCP – Real-time Transport Protocol/RTP Control Protocol) [13] have been introduced and standardized, along with resource reservation protocols (RSVP) [2], to provide guaranteed quality of service (QoS). A wide range of distributed multimedia applications manipulating audio, video, text, and graphics in real time are currently available. They mainly rely on software-based processing (e.g., software codecs, transcoding, and image compression) and network data streaming. In addition, advances in processors, plug-in DAQ boards and standard buses have fueled the introduction of raw sample streams in desktop computers [1]. Thus, new potential sources of multimedia information, such as instrumentation, radio frequency and electrical signals, can be captured and processed. This paper proposes an approach, mDVI – multimedia-based Distributed Virtual Instruments, which represents an original contribution toward exploiting multimedia internetworking and processing facilities to broaden the concept of virtual instrument and enable new measurement patterns leveraging efficiency and interactivity. Virtual Instruments (VIs) are specialized SW modules that can be easily interfaced with the system under test through DAQ boards, GPIB standards, VXI buses, etc. Distributed Virtual Instruments (DVIs) are virtual instruments split into possibly multiple and independent parts, sender and receiver, which are linked by real-time continuous media and control streams. The sender, interfaced with the system under test, continuously captures signals, optionally applies compression, creates a media stream and sends it on the network.
Media streams consist of time-stamped packets containing the sampled data in the payload field according to the RTP protocol. On the other hand, the receiver, connected to the network, can acquire, process, and display
media streams. The payload has to be well-defined, advertised, and possibly standardized so as to enable heterogeneous receivers to use it. The sender and the receiver are thus coupled as in a multimedia Internet-based conference, whose basic communication abstraction is the RTP session, i.e., a collection of transmission sources, receivers, and intermediate agents that participate in the conference. All the session participants share a common multicast channel, over which session information is transmitted. Senders transmit to a multicast group address. Receivers interested in a particular transmission "tune in" by subscribing to the multicast group in question. This loosely coupled, light-weight, real-time multimedia communication model is known as the Light-Weight Sessions (LWS) architecture [3]. This way, a single source of data can be efficiently shared among several users, saving network resources. Receivers can have different views of the same source. Different views not only mean different data displaying on a GUI but also different computation and analysis of the received data. In order to have QoS guarantees, RSVP can be used. Bandwidth reservation is essential to guarantee a minimal level of uninterrupted service throughout the session. Configuration and control of the source can be accomplished in two ways currently deployed over the MBone: by a local user through a stand-alone application or by a remote one through an RTSP-like protocol or a WEB-based interface [4]. The time-critical part of the mDVI sender and receiver rests on a time-sensitive Actor-based framework in Java [10] which is adequate for building distributed measurement software [6] and multimedia applications [4]. A virtual one-channel oscilloscope equipped with a spectrum analyzer has been developed on a network of PCs. Signals are captured by using a National Instruments LAB-PC-1200 board plugged into a Pentium running Windows95.
The hardware-independent NI-DAQ driver software [11] is used.
Measurement scenarios ranging from real-time distributed computation and monitoring of data sources to shared control of instruments for teleteaching purposes have been envisaged. The remainder of this paper is organized as follows. Section 2 summarizes IP-based multimedia internetworking. Section 3 discusses the proposed architecture for DVIs and the mDVI approach. Section 4 describes an experimental prototype. Section 5 relates mDVI to existing work. Final remarks and perspectives on future work conclude the paper.
2. Internetworking Multimedia

The original Internet service model supported only point-to-point (unicast) communication. Unicast communication is well suited to applications in which exactly two parties exchange information, e.g., client/server applications such as the WWW. However, for applications requiring multi-point communication, also known as multicast, unicast messaging has relevant performance drawbacks. A host that wishes to send a message to n hosts has to establish n connections (if TCP is used) and duplicate each packet n times, which wastes an enormous amount of bandwidth. Moreover, since the number of copies injected into the network by a host group of size n is O(n²), the use of unicast for multi-point communication scales quite poorly with the number of participants. To overcome these drawbacks, the multicast communication model (IP-multicast) was proposed and developed. With it, an application sends data to multiple receivers by transmitting the data once. In turn, the network infrastructure disseminates the multicast data efficiently, i.e., with a minimum number of copies, to each member of the group, thereby obviating the source's need to generate multiple copies of the data. The basic service model of IP multicast introduces a level of indirection in the addressing, namely the multicast group. In contrast to unicast communication, where hosts address their data to a specified single host or set of hosts, members of a multicast group address their data to a group address (class D
of IP addresses). Members join and leave a designated multicast group and address their data to that group. Today, the Internet multicast service is deployed as an overlay network, called the MBone, within the Internet, containing a collection of multicast-capable regions (LANs and multicast-routed networks) that are connected through unicast routes, or "tunnels". The proliferation of multimedia applications over the MBone increased the need for a standard protocol for real-time multimedia transport, both to establish a standard for interoperability and to provide the higher-level information, such as timing and naming, required by this new class of applications. This led to the development of the Real-Time Transport Protocol (RTP) [13]. RTP is an application-level protocol that provides end-to-end delivery services for data with real-time characteristics. While it is primarily designed to satisfy the needs of multi-party multimedia conferences, it is not limited to that application: interactive distributed simulation and, notably, control and measurement applications find RTP applicable. The basic communication abstraction is the RTP session, which is based on the LWS architecture. In the IP protocol stack, RTP lies above UDP (User Datagram Protocol); it also runs over AAL5/ATM (ATM Adaptation Layer type 5/Asynchronous Transfer Mode). It actually consists of two protocols: RTP for real-time transmission of data packets and RTCP for QoS monitoring and for conveying participants' identities in a session. The RTP data packet is composed of a header followed by payload data. The main fields in an RTP header (Figure 1) are:

• Timestamp: reflects the sampling instant of the first octet in the data packet. It is media specific and is used to provide receiver-based synchronization.
• Sequence number: incremented by one for each data packet sent. It can be used to detect lost, duplicated and out-of-order packets.
• Payload type: identifies the format of the data payload, e.g., H.261 for video streams, PCM for audio streams, NI-DAQ for measurement streams (see § 4).
• Marker (M): signals significant events for the payload, e.g., the end of a frame for video, the beginning of a talkspurt for audio, the end of a measurement data block, etc.
• Synchronization source: provides a mechanism for identifying media sources independently of the underlying transport or network layers.

Other fields in the RTP header specify the protocol version (V), extra padding (P) in the payload, an extension to the header (X), the number of contributing sources (CC) and the identifiers of the contributing sources (CSRC) of the RTP packet. It is worth noting that RTP itself does not provide any mechanism to ensure timely delivery or other QoS guarantees, but relies on lower-layer services to do so. Thus, in order to obtain real-time responsiveness and no packet losses in the error-prone Internet environment, RSVP (Resource reSerVation Protocol) [2] can be used to set up both multicast and unicast reserved sessions. RSVP works over IP and assumes the existence of certain mechanisms, such as packet scheduling and classification, in routers. Bandwidth reservation is essential to guarantee a minimal level of uninterrupted service throughout the session. When using RSVP as a QoS signaling protocol, the participating end systems establish a closed control loop: the senders inform the network and the receivers about their traffic characteristics, and the receivers, based on the announced traffic profiles, send reservation requests back towards the sender, triggering the actual reservation.
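As an illustration of the header fields listed above, the following Java sketch parses the fixed 12-byte RTP header from a received packet. The field offsets follow the standard RTP layout; the class is illustrative and is not part of the prototype described in this paper.

```java
import java.nio.ByteBuffer;

// Minimal sketch of fixed RTP header parsing (standard 12-byte layout).
public class RtpHeader {
    public final int version;        // V: 2 bits, protocol version
    public final boolean marker;     // M: significant event in the payload
    public final int payloadType;    // PT: e.g., a value assigned to NI-DAQ data
    public final int sequenceNumber; // detects lost, duplicated, reordered packets
    public final long timestamp;     // sampling instant of the first octet
    public final long ssrc;          // synchronization source identifier

    public RtpHeader(byte[] packet) {
        ByteBuffer b = ByteBuffer.wrap(packet);
        int b0 = b.get() & 0xFF;
        version = (b0 >> 6) & 0x03;
        int b1 = b.get() & 0xFF;
        marker = (b1 & 0x80) != 0;
        payloadType = b1 & 0x7F;
        sequenceNumber = b.getShort() & 0xFFFF;   // 16-bit unsigned
        timestamp = b.getInt() & 0xFFFFFFFFL;     // 32-bit unsigned
        ssrc = b.getInt() & 0xFFFFFFFFL;
    }
}
```

A receiver would apply such parsing to each packet before handing the payload to the unpacketization stage.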
3. A Multimedia Architecture for Distributed Virtual Instruments

A distributed virtual instrument (Figure 2) is a virtual instrument split into two possibly independent parts, sender and receiver. They are glued by the multimedia internetworking middleware. The sender can be interfaced with the system under test by DAQ boards, GPIB standards, VXI buses, etc. It is primarily devoted to creating an RTP-based continuous media stream. The basic operations performed by the sender are:
• Setting. Initializes the acquisition process with the user-defined parameters.
• Capturing. The sender acquires signals coming from the system under test. The flexibility of this operation is heavily affected by the particular acquisition system (HW and SW drivers) adopted. Double buffering is required for continuous capturing.
• Encoding. An optional operation in which the sampled data stored in the buffers are coded according to loss-less methods preserving the original information. For example, if the acquired data are real values (e.g., 64-bit), they can be transformed, with some acquisition system-dependent formula, into shorter values (e.g., 16-bit).
• Packetization. The data are packetized according to the RTP protocol. The maximum transfer unit is 1024 bytes: 12 bytes are reserved for the RTP header and 1012 bytes are available for the payload data. The payload data is divided into a payload-dependent sub-header part, which contains the information needed to correctly interpret the data, and the data part itself.
• Transmission. After opening an RTP session, each RTP packet is sent onto the network. The RTP session can be either multicast or unicast. In the former case, the multicast group should be known "a priori", previously advertised by a rendezvous mechanism, or requested on demand.

The receiver, connected to the network, joins the RTP session in order to acquire, process, and render a media stream. Its basic functions are as follows:

• Receiving. After tuning in to the chosen RTP session, the receiver begins to receive the RTP packets.
• Unpacketization. According to the payload type, the data are extracted from the packet and stored in temporary buffers.
• Decoding. The data are (possibly) decoded by using the information in the payload sub-header.
• Processing. It is application-dependent, i.e., each virtual instrument handles the data differently. For example, FFT, filtering, fitting algorithms, etc., can be applied. It is worth highlighting that the timing information allowing re-synchronization of the data is contained in the timestamp field of the RTP header. Thus, real-time processing is possible.
• Rendering. The data are displayed according to an application-dependent format on GUI windows. The visualization process can be real-time.

The real acquisition process on the DVI sender is moved, via the RTP session, to the DVI receiver, where it turns into a virtual acquisition process. The Receiving, Unpacketization and Decoding components of the DVI receiver (see Figure 2) form a Virtual-DAQ (V-DAQ). The signal evolves through the DVI architecture in a lifecycle in which it assumes intermediate forms according to the following transformations: analog to digitized by the DAQ board; digitized to streamed by the DVI sender; streamed back to digitized by the V-DAQ; digitized to different domain types (e.g., transforming the signal from the time to the frequency domain) by the processing part of the DVI receiver. Indeed, a DVI can comprise multiple sender and receiver parts. For example, a two-channel virtual oscilloscope is assembled from two source senders (one per channel) and two sink receivers. The two generated and temporally coupled media streams must be synchronized at both the sender and receiver sites. Sender and receiver can be loosely or tightly coupled. Loose coupling implies that sender and receiver are independent, i.e., no direct interaction on the acquisition process settings exists. The sender sets the parameters specified by a local user and starts the RTP measurement session on a multicast group. On the other hand, the receiver can join the multicast group and acquire the measurement streams. In this case, the local user advertises the session parameters, i.e., the acquisition process settings and the group (multicast address and port), by a rendezvous mechanism. Over the MBone the advertisement is delivered to a distributed session directory (SDR) shared on a known multicast group.
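The Encoding and Packetization arithmetic above can be sketched in Java as follows. The 16-byte sub-header size and the full-scale encoding formula are assumptions for illustration; the text fixes only the 1024-byte maximum transfer unit and the 12-byte RTP header.

```java
// Sketch of the sender-side encoding and packetization arithmetic.
public class SenderPacketizer {
    static final int MTU = 1024;       // maximum transfer unit (bytes)
    static final int RTP_HEADER = 12;  // fixed RTP header
    static final int SUB_HEADER = 16;  // payload-dependent sub-header (assumed size)
    static final int DATA_BYTES = MTU - RTP_HEADER - SUB_HEADER; // room for samples

    // Loss-less 16-bit encoding: the acquired real value is mapped to a
    // short relative to the board's full-scale voltage (assumed formula).
    public static short encode(double volts, double fullScale) {
        return (short) Math.round(volts / fullScale * Short.MAX_VALUE);
    }

    // Inverse mapping applied by the receiver's Decoding step.
    public static double decode(short code, double fullScale) {
        return code * fullScale / Short.MAX_VALUE;
    }

    // Number of RTP packets needed to carry n 16-bit samples.
    public static int packetsFor(int nSamples) {
        int bytes = nSamples * 2;
        return (bytes + DATA_BYTES - 1) / DATA_BYTES; // ceiling division
    }
}
```

With these assumed sizes, each packet carries 996 bytes of sample data, i.e., 498 16-bit samples.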
A remote user can listen to the session directory, select a particular session, and run the receiver. Other mechanisms can be used such as Session Initiation Protocol (SIP) messages [3], WWW-based SDR, etc.
Tight coupling means that sender and receiver are bound by an interaction and control link. In this case, a server permanently runs at the source site waiting for connections. The remote client connects to the server to request, set up and start the acquisition process. After the server accepts the request and the control connection between the two partners is created, the client issues the set-up request message, which contains all the setting parameters. When the server receives the message, it instantiates the DVI sender, initializing it with the parameters, and replies to the client. The client then runs the DVI receiver and sends the start message to the server, which launches the RTP measurement session. The control connection can be based on RTSP (Real Time Streaming Protocol) [14], a text-based protocol similar to HTTP (Hyper Text Transfer Protocol) that, unlike HTTP, maintains state. It provides standard methods (e.g., set-up, play, teardown) and customizable ones (e.g., set_parameter, get_parameter, etc.). The sender and receiver components are built by using a time-sensitive actor model [10] which favors time predictability. The model is suited to fulfil the requirements of multimedia systems [4, 5].
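A minimal sketch of the RTSP-like control messages exchanged in the tightly coupled case is given below. The URL and parameter names are hypothetical; only the method names (set-up, play, set_parameter) come from the text above.

```java
// Sketch of RTSP-like request construction for the control connection.
public class ControlMessage {
    // Builds a text-based request line plus headers, RTSP/HTTP style.
    public static String request(String method, String url, int cseq, String body) {
        StringBuilder sb = new StringBuilder();
        sb.append(method).append(' ').append(url).append(" RTSP/1.0\r\n");
        sb.append("CSeq: ").append(cseq).append("\r\n");
        if (body != null && !body.isEmpty()) {
            sb.append("Content-Length: ").append(body.length()).append("\r\n\r\n");
            sb.append(body);
        } else {
            sb.append("\r\n");
        }
        return sb.toString();
    }

    // SETUP carries the acquisition parameters as the message body.
    public static String setup(String url, int cseq, String params) {
        return request("SETUP", url, cseq, params);
    }

    // PLAY starts the RTP measurement session at the server.
    public static String play(String url, int cseq) {
        return request("PLAY", url, cseq, null);
    }
}
```

The client would write such messages on the TCP control connection; the server replies, instantiates the DVI sender, and launches the session.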
3.1 Actor-based multimedia systems

A variant of the Actor model is adopted that centers on light-weight actors and a modular handling of synchronization and timing constraints [10]. Actors are finite state machines. The arrival of an event (i.e., a message) causes a state transition and the execution of an atomic action. When the action terminates, the actor is ready to process the next message, and so forth. Actors do not have internal threads for message processing: at most one action can be in progress in an actor at a given time. Actors can be grouped into clusters (i.e., subsystems). A subsystem is allocated to a distinct physical processor. It is regulated by a control machine (Figure 3) that hosts a time notion and is responsible for message buffering (scheduling) and dispatching. The control machine can be customized through programming. For instance, in [10] a specialization of the control machine for hard real-time systems is proposed, where scheduling is based on messages time-stamped with a time validity window [tmin, tmax] expressing the interval of admissible delivery times. Message selection and dispatching follow an Earliest Deadline First strategy. Within a subsystem, actor concurrency depends on message processing interleaving. True parallelism is possible among actors belonging to distinct subsystems. A distinguishing feature of the actor model is the modular handling of timing constraints. Application actors are developed according to functional issues only; they are not aware of when they are activated by a message. Timing requirements are the responsibility of RT-synchronizers [9, 12], i.e., special actors which capture "just sent messages" (including messages received from the network) and apply to them timing constraints affecting scheduling. Control machines of a distributed system can be interconnected by a network and a real-time protocol so as to fulfill system-wide timing constraints. Implementations of the actor model have been achieved in C++ and Java. This work considers Java.
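The control machine's EDF scheduling over time validity windows can be sketched as follows. Dropping an expired message is a design choice of this sketch, not of the cited framework; an actual control machine could instead raise a timing violation.

```java
import java.util.PriorityQueue;

// Sketch of EDF message scheduling over time validity windows [tMin, tMax].
public class ControlMachineSketch {
    public static class Msg implements Comparable<Msg> {
        public final String content;
        public final long tMin, tMax; // admissible delivery interval
        public Msg(String content, long tMin, long tMax) {
            this.content = content; this.tMin = tMin; this.tMax = tMax;
        }
        // Earliest Deadline First: order by the upper bound tMax.
        public int compareTo(Msg other) { return Long.compare(tMax, other.tMax); }
    }

    private final PriorityQueue<Msg> queue = new PriorityQueue<>();

    public void schedule(Msg m) { queue.add(m); }

    // Returns the next message deliverable at time 'now', or null.
    public Msg dispatch(long now) {
        while (!queue.isEmpty()) {
            Msg m = queue.peek();
            if (m.tMax < now) { queue.poll(); continue; } // window expired: dropped
            if (m.tMin > now) return null;                // not yet deliverable
            return queue.poll();
        }
        return null;
    }
}
```

An RT-synchronizer would attach the [tMin, tMax] window to a just-sent message; the control machine then delivers messages in deadline order.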
3.2 Modeling multimedia-based data acquisition systems

Actors can naturally be used to structure a multimedia-based distributed data acquisition system. Two kinds of subsystems, specialized to handle the requirements existing at the server (transmitter) and client(s) (receiver(s)) sides of the application [6], are introduced. The transmitter side is typically devoted to obtaining the multimedia data (measurement samples), e.g., from a data acquisition device or from stored files, and to sending it through a network binding to the client(s) for the final presentation. Specific timing and synchronization constraints exist and should be managed at the server and client sides respectively to ensure quality-of-service parameters. To this purpose both server and client subsystems are equipped with a multimedia control machine with suitable QoSsynchronizers (Figure 4).
A QoSsynchronizer is an RT-synchronizer [12] which captures and verifies QoS timing constraints. As an example, the QoSsynchronizer in a client subsystem can perform fine-grain inter-media synchronization (e.g., synchronization between two channels of an oscilloscope). Bindings, i.e., logical communication channels, connect transmitter and receiver subsystems. Bindings can be point-to-point (i.e., unicast) or point-to-multipoint (i.e., multicast). A binding is created by a bind operation originated from media-actors called Binders. A binder governs the ongoing flow of data (e.g., continuous media or control messages) sent into the binding. It hides the particular transmission mechanisms (e.g., network and transport protocols). It can also monitor the binding QoS so as to provide information such as throughput, jitter, latency and packet loss statistics. A Streamer is a periodic actor that accesses digital media information through passive media objects (MediaFile, MediaDevice, MediaNetSource) and sends it to Binders or Presenters. Presenters are media-actors specialized to render media objects. Figure 4 portrays a multimedia-based data acquisition system concerned with two synchronized measurement streams over IP-multicast. The Transmitter and Receiver(s) are connected by two data streaming bindings. They carry the data of the multimedia session according to the RTP/RTCP protocol [13]. When the data streaming binding is multicast, receiver subsystems can arbitrarily join the on-going multimedia session requested by its initiator. The transmitter subsystem is responsible for the capturing process and for the enforcement (by using the RTP header information) of timing constraints upon the media streams to fulfil the requirements of the multimedia presentation. On the remote site, the receiver subsystem(s) control and render the requested multimedia session. At the transmitter site, an actor pair, Streamer and Binder, is instantiated for each media stream. A Streamer captures the media (measurement data) from the devices, encodes it, and sends it, as messages, to the Binders. At the receiver site, a mirrored situation exists: a Binder polls the bindings and delivers the read messages to the Presenters for rendering purposes. Rate synchronizers are introduced for timing the acquisition, transmission and reception operations of the media actors.
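The Streamer/Binder pair at the transmitter site can be sketched as below. The interfaces are illustrative simplifications of the media passive objects and media-actors described above, not the framework's actual API.

```java
// Sketch of the transmitter-side media-actor pipeline: a Streamer reads a
// block of samples from a media source and forwards it, as a message, to a
// Binder, which hides the transmission mechanism (e.g., RTP over multicast).
public class MediaPipeline {
    public interface MediaSource { short[] readBlock(int n); } // e.g., MediaDevice
    public interface Binder { void send(short[] block); }      // hides the binding

    public static class Streamer {
        private final MediaSource source;
        private final Binder binder;
        public Streamer(MediaSource source, Binder binder) {
            this.source = source; this.binder = binder;
        }
        // One periodic activation of the Streamer actor: capture, then forward.
        public void tick(int blockSize) {
            binder.send(source.readBlock(blockSize));
        }
    }
}
```

A rate synchronizer would drive `tick` at the acquisition rate; at the receiver site the mirrored Binder/Presenter pair consumes the blocks.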
4. A prototype implementation

A virtual one-channel oscilloscope equipped with a simple spectrum analyzer has been developed using Java 1.2. The testbed consists of a set of PCs networked by a 10 Mbps Ethernet LAN. Signals, coming from both devices under test and polynomial waveform synthesizers, are acquired by using a National Instruments Lab-PC-1200 board plugged into a Pentium-133MHz running Windows95. It is a completely SW-configurable multi-functional I/O ISA board. The board is a narrow-band device that has 12-bit A/D resolution, a maximum sampling rate of 100 kS/s, and 8 analog input channels. The HW-independent NI-DAQ driver software [11] is exploited. It provides functions for DAQ I/O, buffer and data management (e.g., double-buffered acquisition). Since the NI-DAQ drivers isolate the application from the NI-DAQ HW products, a single payload type can be defined for NI-DAQ. It should be general enough to describe all the features of the existing NI-DAQ HW products so as to enable an independent DVI receiver to correctly interpret the payload data. A payload type was defined whose sub-header main fields are: DAQ product ID, operation settings (channel, gain, etc.), voltage ranges, etc. The prototyped virtual device operates according to the loosely coupled paradigm. The DVI sender can generate media streams (data) at a rate of up to 1.6 Mb/s. This limit is due to the Lab-PC-1200 board. The DVI sender controls the DAQ board through an implemented Java class which interfaces to native methods developed in C++. It has to set the following parameters that characterize the measurement stream:
• Address (multicast or unicast) and Port on which the session is going to be sent.
• TTL. The scope of the session, e.g., local (1), regional (16), global (127).
• Number of points. The points to be acquired.
• Rate. The acquisition rate.
• Duration. The duration of the session in ms.
• Channel. The DAQ board channel.
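A possible layout of the NI-DAQ payload sub-header, with the fields named in the text (product ID, operation settings, voltage ranges) plus the acquisition rate, is sketched below. The field widths and byte order are assumptions for illustration; the paper does not specify the wire format.

```java
import java.nio.ByteBuffer;

// Sketch of the NI-DAQ payload sub-header; 32-byte layout is assumed.
public class NiDaqSubHeader {
    public int productId;      // DAQ product ID (e.g., identifying the Lab-PC-1200)
    public int channel;        // acquisition channel
    public int gain;           // input gain setting
    public int rate;           // acquisition rate (samples/s)
    public double vMin, vMax;  // voltage range, needed to decode 16-bit samples

    public byte[] toBytes() {
        ByteBuffer b = ByteBuffer.allocate(32);
        b.putInt(productId).putInt(channel).putInt(gain).putInt(rate);
        b.putDouble(vMin).putDouble(vMax);
        return b.array();
    }

    public static NiDaqSubHeader fromBytes(byte[] data) {
        ByteBuffer b = ByteBuffer.wrap(data);
        NiDaqSubHeader h = new NiDaqSubHeader();
        h.productId = b.getInt(); h.channel = b.getInt();
        h.gain = b.getInt();      h.rate = b.getInt();
        h.vMin = b.getDouble();   h.vMax = b.getDouble();
        return h;
    }
}
```

The sender would prepend such a sub-header to each payload; an independent DVI receiver parses it to interpret the sample data correctly.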
The virtual oscilloscope is split into two parts: the DVI Receiver and the GUI. The DVI Receiver joins the address/port set by the sender and, by using the information contained in the RTP header, reconstructs the spatial and temporal characteristics of the stream needed for processing and rendering the stream itself. Figure 5 portrays the GUI of the virtual oscilloscope. It mirrors the HP 54504A front panel and has five functional areas: System Control, Entry, Setup, Menus and Function Selection. Keys in the System Control section cause an action when selected. The R/S key toggles the acquisition status of the VOscilloscope: if the VOscilloscope is running (i.e., continuously acquiring), pressing R/S stops it, and vice versa. The last acquired data is buffered and displayed. The S key allows performing a single acquisition. The Entry panel consists of a multi-function numeric keypad and a knob. The keypad is for direct numeric input. The Setup section controls the automatic setup (e.g., autoscale, save and recall setups, etc.). Menus access any of nine separate subsystems: timebase (TMB), channel (CHA), waveform math and save (MATH, SAVE), etc. The MATH menu allows applying selected functions to the acquired signal, e.g., an implemented FFT algorithm. Each key in the Function Selection section corresponds to a function shown in the displayed menu. In Figure 6 the VOscilloscope runs the FFT on a received square waveform signal.
5. Related Work

The following summarizes some commercial and/or research solutions concerning the development of distributed virtual instruments. Pros and cons are considered with respect to real-time responsiveness, interaction patterns and user benefits. The first solution refers to the National Instruments SW products, which are the most widely used in both the academic and industrial worlds. The second one describes research at MIT that is very close to the approach proposed in this paper.
5.1 Remote Device Access (RDA)

NI-DAQ for Windows version 6.x includes a feature called Remote Device Access (RDA) [7]. RDA makes it possible to acquire data over a network using a NI-DAQ board installed in a remote computer. RDA uses TCP/IP as the underlying network protocol, so it can be exploited both on LANs and on WANs based on TCP/IP. RDA is implemented at the NI-DAQ driver level and extends the NI-DAQ model across the network. A virtual instrument application can be built using LabVIEW, LabWindows, or Virtual Bench. The virtual instrument runs on a client (the RDA Client) whereas the data acquisition devices are located on a server (the RDA Server). A single RDA Client can access remote devices in several RDA Servers as well as local ones. A remote device in an RDA Server can be used by several RDA Clients. There are some considerations to take into account in order to avoid conflicts between RDA Clients. For example, if a client A configures Analog Input Channel 3 in group 0, and a client B configures Analog Input Channel 5 on the same remote device also using group 0, the readings taken by client A will be from channel 5 and not channel 3. RDA relies on Microsoft's implementation of RPC (Remote Procedure Call), which is based on OSF/DCE. The distribution aspects of an application are completely transparent to the user. In fact, when a client, for example from a LabVIEW application, accesses a certain device and NI-DAQ sees that this device is a remote one, the call is packaged up and sent via RPC to the relevant remote computer. There, the RDA server application receives the call, unpackages it and passes it to NI-DAQ as a normal local call, which is finally executed. TCP is not suited for real-time transmission of data [3]. It scales poorly since it is unicast-centered. In addition, it is not possible to build real-time synchronous WANs [8]. RDA, being based on both TCP and RPC, suffers from the above drawbacks. The interaction patterns are limited to a client/server paradigm. The user can transparently connect to several remote devices but has to be very careful to avoid conflicts with other users of the same devices. Moreover, the real-time behavior of the received data, in the usual case where the client has spawned a process on the server which uses double buffering for data acquisition, can be critically affected by GUI operations (e.g., sizing or moving a window) and also by unpredictable network latencies. Due to the direct and synchronous connection existing between the RDA client and server, the loss of real-time behavior can imply a buffer overwrite in the server. The mDVI approach described in this paper does not suffer from these drawbacks, because the receiver/sender connection is asynchronous and because of the explicit re-synchronization task adopted at the receiver site. Although the sender process depends on double buffering during data acquisition, that process is local and is not disturbed by client interactions.
5.2 Virtual Sample Processing (SpectrumWare)

The Virtual Sample Processing (VSP) approach [1] is software-centered: many media handling functions are transferred from hardware to application-level software. The SpectrumWare testbed is used for experimenting with new media types such as ultrasound, infrared and RF signals. The backbone of the prototyping environment is the VuNet, an ATM desk area network. Data are captured by using network-based appliances with transducers for RF signals. These signals are processed on hosts (DEC workstations) distributed around the network. The programming environment is based on the VuSystem, which has already demonstrated the feasibility of a software-based approach to conventional media, including audio, video, text and graphics. The VuSystem supports the flow of temporally sensitive information, i.e., timestamped payloads, through input and output ports. It has been extended to embed a variety of new media types. In particular, payload types have been added to handle sampled and complex (e.g., Fourier transforms of sampled data) valued data streams. The mDVI project described in this paper owes much to VSP. However, the design of mDVI was mainly driven by the requirement of deploying virtual devices over the Internet, i.e., according to standard network infrastructures, multimedia protocols and the Java programming language. Novel in mDVI is the adoption of an actor model for the modular structuring of an application and its operating software. The approach purposely avoids dependencies on hidden policies of an operating system.
6. Conclusions This paper proposes an original approach to the development of Distributed Virtual Instruments (DVIs) over the Internet. The approach exploits the new multimedia internetworking solutions and embeds them in the distributed measurement research area. Mechanisms are described in the paper which permit efficient and scalable access to remote sensing and control devices. Current commercial solutions for DVIs (the Internet Developers Toolkit for NI LabVIEW and LabWindows/CVI, DataSocket, RDA, HP-VEE 5.0) are based on TCP, which is not suitable for real-time operation, and/or on unicast IP, which scales very poorly. Moreover, only HTTP, FTP, email and proprietary protocols (e.g., DSTP) are adopted, without taking into account the new emerging standards addressed by the Internet research community. The paper provides a virtual oscilloscope example to demonstrate the flexibility and the potential of the research project. The approach opens up new measurement scenarios such as: (a) sharing of the same measurement streams by clustered hosts for parallel processing; (b) DVIs for teleteaching purposes, where the teacher efficiently shows to n students how an instrument works; (c) cooperative instrument control, which enables a workgroup to operate in real time on a shared resource. On-going directions of work include:
• Development of a wide range of virtual devices using different and more powerful DAQ hardware.
• Implementation of a “Measurement-on-demand” server which makes controllable RTP measurement sessions available, both live and archived. Archived sessions are RTP measurement sessions previously stored on the server file system [4].
• Running the implemented DVIs on clusters of multicast-routed LANs with RSVP support.
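The scalability behind scenarios (a)-(c) comes from IP-multicast group membership: any host that joins the session group receives the measurement stream at no extra cost to the sender. The sketch below illustrates this receiver-side mechanism; the group address, port number and all class/method names are hypothetical and not taken from the mDVI implementation.

```java
// Sketch of how any receiver host could join a multicast RTP measurement
// session: delivery cost does not grow with the number of receivers.
// Group address, port and names are illustrative (not from mDVI).
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

class MeasurementReceiver {
    static final String GROUP = "224.2.0.1"; // hypothetical session address
    static final int RTP_PORT = 5004;        // customary even port for RTP

    // IPv4 multicast (class D) addresses range from 224.0.0.0 to 239.255.255.255.
    static boolean isMulticastAddress(String dotted) {
        int firstOctet = Integer.parseInt(dotted.split("\\.")[0]);
        return firstOctet >= 224 && firstOctet <= 239;
    }

    // Join the group and hand each received RTP packet to the next stage.
    static void listen() throws IOException {
        InetAddress group = InetAddress.getByName(GROUP);
        try (MulticastSocket sock = new MulticastSocket(RTP_PORT)) {
            sock.joinGroup(group); // from now on the stream is delivered here too
            byte[] buf = new byte[2048];
            while (true) {
                DatagramPacket p = new DatagramPacket(buf, buf.length);
                sock.receive(p);   // one packet of measurement samples
                // pass p.getData() to the unpacketization/decoding stages...
            }
        }
    }
}
```

Note that leaving or joining a group requires no interaction with the sender, which is what enables the teleteaching and cooperative scenarios above.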
Acknowledgments Work partially funded by the Ministero dell’Università e della Ricerca Scientifica e Tecnologica (MURST) in the framework of the project “MOSAICO”.
References
[1] V. G. Bose, A. G. Chiu, and D. L. Tennenhouse, “Virtual Sample Processing: Extending the Reach of Multimedia,” Multimedia Tools and Applications, Vol. 5, No. 3, 1997.
[2] R. Braden, L. Zhang, S. Berson, S. Herzog, and S. Jamin, “Resource ReSerVation Protocol (RSVP) -- Version 1 Functional Specification,” RFC 2205, IETF, Sept. 1997.
[3] J. Crowcroft, M. Handley, and I. Wakeman, Internetworking Multimedia, UCL Press, 1999.
[4] G. Fortino and L. Nigro, “An interactive and cooperative videorecording on-demand system over MBone,” in Proc. of SCS EuroMedia’99, pp. 120-124.
[5] G. Fortino and L. Nigro, “Modeling, analysis and implementation of actor-based multimedia systems,” in Proc. of Parallel and Distributed Processing Techniques and Applications (PDPTA’99), Las Vegas, June 1999, pp. 489-495.
[6] G. Fortino, D. Grimaldi, and L. Nigro, “Multicast control of mobile measurement systems,” IEEE Trans. on Instr. and Meas., Vol. 48, October 1998, pp. 1149-1154.
[7] T. Hayles, “Developing Networked Data Acquisition Systems with NI-DAQ,” NI Application Note 116, National Instruments, 1998.
[8] K. G. Shin and P. Ramanathan, “Real-time computing: a new discipline of computer science and engineering,” Proceedings of the IEEE, Vol. 82, No. 1, January 1994.
[9] B. Nielsen, S. Ren, and G. Agha, “Specification of real-time interaction constraints,” in Proc. of First Int. Symposium on Object-Oriented Real-Time Computing, IEEE Computer Society, 1998.
[10] L. Nigro and F. Pupo, “A modular approach to real-time programming using actors and Java,” Control Engineering Practice, Vol. 6, pp. 1485-1491, 1998.
[11] NI-DAQ User Manual for PC Compatibles, Version 6.x, National Instruments, April 1998.
[12] S. Ren, G. Agha, and M. Saito, “A modular approach for programming distributed real-time systems,” J. of Parallel and Distributed Computing, Special Issue on Object-Oriented Real-Time Systems, 1996.
[13] H. Schulzrinne, S. Casner, R. Frederick, and V. Jacobson, “RTP: A Transport Protocol for Real-Time Applications,” RFC 1889, IETF, Jan. 1996.
[14] H. Schulzrinne, A. Rao, and R. Lanphier, “Real Time Streaming Protocol (RTSP),” RFC 2326, IETF, Apr. 1998.
Vitae
Giancarlo Fortino was born in 1971 in Italy. He graduated in Computer Science Engineering in 1995 at the University of Calabria. He is currently a PhD student in Computer Science at the Department of Electronics, Informatics and Systems Science of the University of Calabria. His main interests concern the modeling, design and implementation of distributed applications, including multimedia synchronization and cooperation mechanisms (e.g., video conferencing and on-demand applications) and measurement architectures on top of the Internet.
Libero Nigro was born in Italy in 1953. He graduated in Electrical Engineering in 1978 at the University of Calabria. He is currently an Associate Professor of Computer Science at the Department of Electronics, Informatics and Systems Science of the University of Calabria. His research interests include object-oriented software engineering, distributed real-time systems, parallel simulation, and measurement and multimedia systems. Prof. Nigro is a member of ACM and IEEE.
Figure legends
Figure 1: RTP Header.
Figure 2: DVI Architecture.
Figure 3: Time-sensitive actor architecture.
Figure 4: Architecture of a single multimedia session composed of two synchronized measurement streams.
Figure 5: The virtual oscilloscope.
Figure 6: The display shows the spectrum of a square waveform signal continuously acquired.
bit  0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |V=2|P|X|  CC   |M| Payload Type|        Sequence Number        |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                           Timestamp                           |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |           Synchronization Source (SSRC) identifier            |
    +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
    |            Contributing Source (CSRC) identifiers             |
    |                             ....                              |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 1: RTP Header
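As a concrete illustration of the header layout of Figure 1, the fixed 12-byte part can be decoded with a few shift-and-mask operations. The following Java sketch follows the RFC 1889 field layout; it is illustrative only and not part of the mDVI implementation.

```java
import java.nio.ByteBuffer;

// Illustrative decoder for the 12-byte fixed part of the RTP header of
// Figure 1 (field layout per RFC 1889). Not part of the mDVI implementation.
class RtpHeader {
    final int version;          // V: always 2
    final boolean padding;      // P
    final boolean extension;    // X
    final int csrcCount;        // CC: number of CSRC identifiers that follow
    final boolean marker;       // M: profile-defined, e.g., frame boundary
    final int payloadType;      // PT: identifies the media encoding
    final int sequenceNumber;   // detects packet loss and reordering
    final long timestamp;       // sampling instant of the first payload octet
    final long ssrc;            // identifies the stream source

    RtpHeader(byte[] packet) {
        ByteBuffer buf = ByteBuffer.wrap(packet); // network byte order (big-endian)
        int b0 = buf.get() & 0xFF;
        version = b0 >>> 6;
        padding = (b0 & 0x20) != 0;
        extension = (b0 & 0x10) != 0;
        csrcCount = b0 & 0x0F;
        int b1 = buf.get() & 0xFF;
        marker = (b1 & 0x80) != 0;
        payloadType = b1 & 0x7F;
        sequenceNumber = buf.getShort() & 0xFFFF;
        timestamp = buf.getInt() & 0xFFFFFFFFL;
        ssrc = buf.getInt() & 0xFFFFFFFFL;
    }
}
```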
[Figure 2 diagram: on the sender side, the Acquisition Process interfaces the system under test through DAQ drivers and performs Capturing, Encoding, Packetization and Transmission of the signal; the resulting RTP measurement streams #i travel over the IP-multicast network to the DVI Receiver, whose Virtual Acquisition Process (V-DAQ) performs Receiving, UnPacketization, Decoding, Processing, Rendering and RT Display.]
Figure 2: DVI Architecture.
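The sender side of Figure 2 is a chain of stages (Capturing, Encoding, Packetization, Transmission). A minimal Java sketch of such a composable pipeline might look as follows; the Stage interface and all class names are hypothetical, whereas the actual mDVI stages are actors glued by the multimedia middleware.

```java
// Minimal sketch of the sender-side chain of Figure 2 as a composable
// pipeline. Interface and class names are invented for illustration.
interface Stage {
    byte[] process(byte[] data);
}

// Encoding stage: compression of raw samples would go here (identity for brevity).
class Encoding implements Stage {
    public byte[] process(byte[] samples) { return samples; }
}

// Packetization stage: prepend a minimal 12-byte RTP-like header.
class Packetization implements Stage {
    public byte[] process(byte[] payload) {
        byte[] pkt = new byte[12 + payload.length];
        pkt[0] = (byte) 0x80; // V=2, no padding, no extension, no CSRC
        System.arraycopy(payload, 0, pkt, 12, payload.length);
        return pkt;
    }
}

class SenderPipeline {
    private final Stage[] stages;
    SenderPipeline(Stage... stages) { this.stages = stages; }

    // Push one captured buffer of DAQ samples through every stage in order.
    byte[] push(byte[] captured) {
        byte[] d = captured;
        for (Stage s : stages) d = s.process(d);
        return d; // ready for transmission on the multicast network
    }
}
```

Composing the instrument from independent stages is what allows sender and receiver to be split and redeployed freely across hosts.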
[Figure 3 diagram: a subsystem couples a Control Machine (Message Queues, Clock, Scheduler with schedule/select/dispatch operations, Controller and RTsynchronizers) with the Application Actors; local messages are dispatched to the inner actors, while network messages flow to and from other subsystems, e.g., RTP measurement streams #i, through the Communication Network.]
Figure 3: Time-sensitive actor architecture.
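The control machine of Figure 3 can be pictured as a scheduler that selects timestamped messages and dispatches them to their target actors once the clock reaches their time. The sketch below is a drastic simplification with invented names; the actual framework [10][12] is considerably richer (e.g., RTsynchronizers further constraining dispatch).

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Simplified sketch of the Figure 3 control machine: select the pending
// message with the smallest timestamp and dispatch it when it is due.
// All names are illustrative, not the framework's actual API.
interface Actor {
    void handle(Message m);
}

class Message {
    final long time;       // earliest dispatch time
    final Actor target;
    final String content;
    Message(long time, Actor target, String content) {
        this.time = time; this.target = target; this.content = content;
    }
}

class ControlMachine {
    private final PriorityQueue<Message> queue =
            new PriorityQueue<>(Comparator.comparingLong((Message m) -> m.time));
    private long clock = 0; // logical time source

    void schedule(Message m) { queue.add(m); }
    void advanceClock(long t) { clock = t; }

    // One select/dispatch step: deliver the earliest message if it is due.
    void step() {
        Message m = queue.peek();
        if (m != null && m.time <= clock) {
            queue.poll();
            m.target.handle(m);
        }
    }
}
```

Making scheduling explicit in the control machine is what frees the approach from the hidden policies of an operating system.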
[Figure 4 diagram: the Sender and each Receiver #i host a Multimedia Control Machine with RateSynchs and a QoSsynchronizer; on the sender side, Ch1Streamer and Ch2Streamer (ACQUISITION) feed Channel1Binder and Channel2Binder, which are connected through RTP/RTCP to the corresponding binders on the receiver side, driving Ch1Presenter and Ch2Presenter (VISUALIZATION).]
Figure 4: Architecture of a single multimedia session composed of two synchronized measurement streams.
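Keeping the two measurement streams of Figure 4 aligned amounts, on the receiver side, to presenting only samples for which both channels hold data at the same RTP timestamp. The sketch below is a hypothetical simplification of this aspect of the QoSsynchronizer role; the real component also deals with rates and quality of service.

```java
import java.util.TreeMap;

// Hypothetical sketch of inter-stream synchronization: a timestamp becomes
// presentable only when both channels have buffered data for it.
class StreamSynchronizer {
    private final TreeMap<Long, byte[]> ch1 = new TreeMap<>(); // ts -> samples
    private final TreeMap<Long, byte[]> ch2 = new TreeMap<>();

    void onCh1(long ts, byte[] data) { ch1.put(ts, data); }
    void onCh2(long ts, byte[] data) { ch2.put(ts, data); }

    // Earliest timestamp buffered on both channels, or -1 if none yet.
    long nextPresentable() {
        for (long ts : ch1.keySet())
            if (ch2.containsKey(ts)) return ts;
        return -1;
    }
}
```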
Figure 5: The virtual oscilloscope.
Figure 6: The display shows the spectrum of a square waveform signal continuously acquired.