Resource Management Policies for Real-time Java Remote Invocations

Resource Management Policies for Real-time Java Remote Invocations (DRAFT)
P. Basanta-Val and M. García-Valls
Departamento de Ingeniería Telemática, Universidad Carlos III de Madrid
{pbasanta, mvalls}@it.uc3m.es



Abstract—A way to deal with the increasing cost of next-generation real-time applications is to extend middleware and high-level general-purpose programming languages, e.g. Java, with real-time support that reduces development, deployment, and maintenance costs. On the particular path towards a distributed real-time Java technology, important steps have already been taken for centralized systems, producing real-time Java virtual machines. However, the integration with traditional remote-invocation communication paradigms is still far from an operative solution that may be used to develop final products. In this context, the paper studies how The Real-Time Specification for Java (RTSJ), the leading effort in real-time Java, may be integrated with Java's Remote Method Invocation (RMI) in order to support real-time remote invocations. The article details a specific approach to producing a predictable mechanism for the remote invocation (the core communication mechanism of RMI) by controlling the policies used during the remote invocation. Results obtained with a software prototype help understand how the key entities defined to control the performance of the remote invocation influence the end-to-end response time of a distributed real-time Java application.

Index Terms—Real-time Java, Real-time Middleware, RT-RMI, RTSJ, DRTSJ

I. INTRODUCTION

Analyzing the evolution of real-time technology (looking at [1] to study its past evolution and at [2][3][4] to identify its future), we appreciate how real-time systems seem to follow a path parallel to the one previously covered by general-purpose applications. Whereas in the very early stages of its existence, influenced perhaps by the high cost of hardware, the search for optimal and efficient algorithms monopolized most research efforts, nowadays other issues more related to portability, adaptability, and maintainability are gaining momentum. Indeed, whereas primitive real-time systems were embedded and isolated pieces of code, the current generation of real-time systems is more open, and COTS (commercial-off-the-shelf) real-time operating systems and middleware are commonly used [4][5]. In this continuous reinvention, some eyes are fixed on the use of high-abstraction programming languages, such as Java [6], as a new way to reduce development costs, not only in general-purpose applications but also in real-time systems [6][7]. The use of Java attracts attention to specific topics such as byte-code efficiency, real-time portability, predictable code downloading, and especially automatic memory management. The financial benefits may be quite interesting too: some reports on general-purpose productivity [8, 9] promise improvements ranging from 20% to 140% when developers choose Java instead of C++ or Ada.

In the specific area of real-time Java, extraordinary efforts have been carried out for centralized systems with the definition of The Real-time Specification for Java (RTSJ) [10]. This specification is backed by a number of commercial implementations ready for developing applications: IBM [11], Oracle [12], and Aicas [13]. However, one of the main efforts in the area of distributed systems, The Distributed Real-Time Specification for Java (DRTSJ) [14], based on the RTSJ and Java's Remote Method Invocation (RMI) [15], is still far from producing commercial implementations [16][17]. In essence, the DRTSJ sets the focus of its activity on the creation of a framework for distributable real-time threads ([18][19][20]): entities that traverse from one node to another maintaining their trans-node real-time characterization. In the DRTSJ, the idea of producing a real-time remote invocation is used as an underlying transport mechanism; the main role is played by the distributable thread, which moves its locus of execution independently of the underlying infrastructure.

In this article, the central entity is the remote invocation of RMI, which is used in some RT-RMI approaches to communicate real-time Java enabled nodes. The article proposes a model for real-time remote invocations that takes into account the resources involved in the end-to-end path. The approach starts from a typical remote invocation and extends its basic behaviour with new entities that offer real-time performance. From this initial model, an extension to the current RMI and RTSJ specifications is defined to accommodate the architectural solution, showing how to use resources for a remote invocation.

This research was partially supported by the European Commission (ARTIST2 NoE, IST-2004-004527; iLAND, ARTEMIS-JU Call 1) and The Ministry of Industry, Tourism and Commerce (Gateway for DDS and Web services, TSI-020501-2008-159).
The model proposed can be used by the DRTSJ to define a predictable low-level communication model for remote invocations on which it could run distributed real-time threads. The model may also be integrated into current end-to-end scheduling algorithms (like [21], [22], and [23]) by specializing their computational models, a goal that is out of the scope of this paper. Other RT-CORBA models (like those led by RTZen [24, 25]) may enhance their current architectures by including specific elements defined in the model proposed (like memory-area pools).

The rest of this article is organized as follows. Section II is the state of the art and reviews the main approaches that combine the RTSJ and RMI to produce different RT-RMI technologies. Section III is a background section that introduces the architecture used for developing the model proposed. Section IV develops the model and the API based on RTSJ and RMI. The model is evaluated in Section V and Section VI, which detail the results of a set of empirical tests designed to measure the performance of the model on Java's RMI. Section V compares real-time remote invocations against traditional RMI invocations, while Section VI uses real application requirements taken from two application benchmarks. Finally, the paper draws conclusions and outlines our on-going work in Section VII.

II. BACKGROUND ON REAL-TIME JAVA TECHNOLOGIES

Due to historical reasons [17], real-time Java technology has been considered from two different angles: centralized and distributed. On the one hand, the centralized efforts for real-time Java have focused their attention on improving the predictability of single real-time virtual machines. On the other hand, the most important efforts towards a distributed real-time Java technology define mechanisms that offer end-to-end real-time performance in remote communications, in most cases taking centralized technologies as departure points.

A. Centralized real-time Java

Different approaches to real-time Java may be grouped into three categories: (i) based on modifications to Java APIs; (ii) based on integrating the virtual machine into the real-time operating system; and (iii) based on specific hardware. Among them, the most relevant for the real-time remote invocation are those based on extending current Java APIs, since the main approaches to RT-RMI follow this path. This choice constrains our analysis to the following technologies: RTSJ [26], RTCORE [27], PERC [28], RTJThreads [29], and CJThreads [30].

RTSJ (The Real-Time Specification for Java) defines an alternative based on modifying the virtual machine. In this specification, the virtual machine supports the development of both general-purpose and real-time applications. Nowadays, this specification is the most relevant in the area and several commercial products support it.



RTCORE (Real-Time Core Extensions) defines an alternative based on a new execution environment: the core. This execution environment offers a new class hierarchy to support the development of real-time applications which communicate with non real-time applications using queues. Some of the principles proposed by RTCORE have been recovered in the context of high-integrity systems based on RTSJ ([31]).



PERC (Portable Executive for Reliable Control) defines two packages: one for real-time applications and another for embedded systems. Today, this product belongs to ATEGO.



RTJThreads (Real-time Java Threads) is a rather simple solution; it consists only of three classes. Its scheduling model is based on priorities and real-time synchronization protocols.



Finally, CJThreads (Communicating Threads) is based on the CSP formalism defined by Hoare.



Table I identifies the type of coverage given by each approach to key drawbacks of Java for the development of real-time and embedded systems. The table highlights the following limitations: (1) lack of precise scheduling policies; (2) lack of real-time synchronization algorithms; (3) priority inversion introduced by the garbage collector; (4) the behaviour of the dynamic class loader; (5) the lack of specific support for processing external events; and (6) access to low-level hardware. These limitations are described in the NIST requirements document for real-time Java [7].

TABLE I
DIFFERENT API-BASED APPROACHES FOR CENTRALIZED REAL-TIME JAVA

Feature               RTSJ  RTCORE  PERC  RTJ  CTJ
Scheduling             A      A      A     A    A
Synchronization        A      A      A     A    --
Garbage collection     A      A      A     A    --
Class loading          A      A      A     --   --
Class initialization   A      A      A     --   --
Events                 A      A      A     --   --
Hardware access        A      A      --    --   A

Legend: A (addressed); and -- (not addressed)

B. Distributed real-time Java

Whereas in centralized real-time Java there is a mature idea of the main problems to face, the distributed area lacks a similar consensus [16]. There are no fully developed specifications that help the programmer during the design of distributed real-time applications, nor implementations that may be used in the development of commercial applications. There are mainly two promising initiatives, which combine two specifications for real-time Java (RTSJ and RTCORE) with two distribution technologies: RT-CORBA and RMI.

The RTZen project [32] is one of the main approaches following the RT-CORBA path. It uses RTSJ to improve the predictability of RT-CORBA, using lessons learnt from the RT-CORBA architecture. The model proposed in this article uses resources that are defined in RT-CORBA, like thread and connection pools. However, it also adds others, like the memory pool, that are not defined in the RT-CORBA specification. Memory-area pools may be used in RT-CORBA to remove the garbage collector from the end-to-end remote invocation (following a model similar to the NhRo paradigm [33]) adapted to the RT-CORBA model. The way applications define their real-time parameterisation is also different: whereas RTZen follows the RT-CORBA architecture (i.e. with ORBs and POAs), the model proposed here is closer to the RTSJ architecture than RTZen is.

TABLE II
APPROACHES TO DISTRIBUTED REAL-TIME JAVA BASED ON RMI

                                 DRTSJ  RT-RMI-York  RT-RMI-UPM  RT-RMI-TA&M  DREQUIEMI (our proposal)
Processor management               A         A           A            A                A
Memory management                  I         A           A            --               A
Connection management              --        A           A            --               A
Distributed garbage collection     I         I           --           --               I
Distributed class-loader           I         --          --           --               I
Distributed events                 I         A           --           --               A

Legend: I (identified), A (addressed problem), -- (not identified).

The rest of this state of the art focuses on RMI-based initiatives: DRTSJ [34], RT-RMI-York [35], RT-RMI-UPM [36], and RT-RMI-TA&M [37], which are closer to the model proposed by DREQUIEMI than RTZen is. As stated in the introduction, the Distributed Real-Time Specification for Java (DRTSJ) is the most important effort focused on the definition of an RT-RMI specification. To accomplish its goal, DRTSJ defined three integration levels: level-0 (RMI-RTSJ), where there are no end-to-end guarantees; level-1 (RT-RMI-RTSJ), where there is an end-to-end guarantee in remote invocations; and eventually level-2 (DRtThreads-RTSJ), in which there are special improved facilities such as distributable threads and distributed events. Recently, the DRTSJ has concentrated its efforts on the following issues: (1) the definition of distributable threads, (2) the integration of a generic scheduling framework for distributable threads, and (3) its corresponding integrity framework.

Based on this work, the real-time systems group of the University of York proposed an integration framework for RMI and the RTSJ (RT-RMI-York). The framework consists of two parts: one in which the authors define API extensions for RT-RMI, and another in which the implications of the extensions proposed are analyzed in more detail. The framework also identifies five potential improvements to the basic infrastructure of RMI: (1) real-time extensions for the naming service; (2) predictable class downloading; (3) integration of real-time distributed garbage collectors; (4) predictable serialization; and (5) a distributed real-time event model.

In the HIJA (High Integrity for Java Applications) European project (RT-RMI-UPM), the real-time systems group of the Universidad Politécnica de Madrid defined real-time profiles for RT-RMI. Three profiles were identified: one for hard real-time systems, another for soft real-time systems, and a third for flexible real-time systems. Each profile takes into account four main aspects in its definition: the computational model, extensions required to current RMI interfaces, the concurrency model, and the relationship with automatic memory management algorithms.

Another related work analyzed in this state of the art is the approach defined by the real-time systems group of Texas A&M University (RT-RMI-TA&M). They have addressed RT-RMI from the point of view of mobile environments based on components. The RT-RMI support defined is server-centric and does not require extensions to the communication protocol or to the API of RMI.

Our approach is based on DREQUIEMI [33, 38-41][42][43-45], a middleware developed at the University Carlos III of Madrid which is based on the RTSJ and RMI and whose goal is to define an architecture for distributed real-time Java that relies on well-known practices. It is part of the RT-RMI family, as are all the approaches previously described, and works on models driven by implementation prototypes. Its closest related work is RT-RMI-York (both approaches work on the integration of RMI and the RTSJ). However, the ways they interpret rules are different: RT-RMI-York departs from the RTSJ and applies it to RMI, whereas DREQUIEMI describes the solution first on the architecture and then applies it to the programming language, supported via prototype evaluation.

Table II compares the main approaches for distributed real-time Java. The table explores distributed real-time Java in three dimensions: (i) resource-management support, understood as the ability to offer fine control on the management of memory, processor, and network connections during a remote invocation; (ii) specific issues related to RMI, such as dynamic class loading and support for a real-time distributed garbage collector; and (iii) advanced features, such as new classes to process external events.
The results summarized in Table II show that all these approaches agree on the need for additional control on CPU consumption at the server side. Most approaches suggest the inclusion of different mechanisms at the server: some initiatives define solutions based on extending the interfaces (like RT-RMI-York), while others (like RT-RMI-UPM) are more focused on establishing requirements. The need for additional control over the memory-management algorithms, to remove the garbage collector from the remote invocation, was identified by the DRTSJ. This issue has been only partially addressed by RT-RMI-UPM (suggesting the use of a real-time garbage collector or a region-based memory solution at the server, but giving scarce information about the impact of these techniques on programming paradigms and middleware), and by RT-RMI-York, which proposed some algorithms to address the problem. DREQUIEMI addressed this issue by proposing patterns ([46, 47][33]) that remove the garbage collector from the remote invocation; these require changes in both the core of the middleware and the programming models.

Another important output of the analysis is that some specific limitations of RMI are still awaiting specific solutions. For instance, neither RT-RMI-UPM nor RT-RMI-York details the impact caused by removing the distributed garbage collector or the distributed class downloader from RMI. Although DREQUIEMI analyzes some of these limitations in detail and proposed new schemas for the distributed garbage collector, it is still far from a satisfactory solution.

Constraining the analysis to the real-time remote invocation, RT-RMI-UPM, RT-RMI-York, and RT-RMI-TA&M are the approaches that deal with the problem in more detail. RT-RMI-York and RT-RMI-UPM addressed the problem from the point of view of APIs, extending existing class hierarchies with real-time tags and describing their nature. RT-RMI-TA&M developed its extensions from the point of view of real-time scheduling theory, applying specific scheduling algorithms that require fewer changes in the APIs than RT-RMI-UPM and RT-RMI-York. By contrast, DREQUIEMI begins with a generic remote invocation and the internal architecture of RMI, paying less attention to the final interfaces that stem from the model.

C. Resource Management in RT-RMI Remote Invocations

Regarding predictability in remote invocations [16], most approaches (DRTSJ, RT-RMI-York, RT-RMI-UPM, and RT-RMI-TA&M) proposed techniques that combine processor, memory, and network policies expressed as rules or constraints. All are silent on the impact of each management policy on the end-to-end response time and on the changes required in the middleware to support it. This trade-off has not been studied from a real-time RMI perspective, analyzing costs and impact on the remote invocation.

- DRTSJ proposes the use of a predictable RT-RMI to deploy distributable threads. However, it does not provide an infrastructure to run distributable threads. The real-time remote invocation model proposed in this article may provide this support, enhancing the performance of distributable threads.

- RT-RMI-York improved the quality of remote invocations for an RMI and RTSJ framework. Its model for remote invocations is based on TCP/IP networks and improves the server side of the remote invocation. However, the model does not cover the possibility of providing direct access to the three underlying resources (i.e., connections, memory, and CPU). The work defined in RT-RMI-York may be extended with the three-resource model proposed in this article, using this support as a basic template for invocations.

- RT-RMI-UPM provided different types of RT-RMI remote invocations for different profiles. Each profile has different network and resource constraints. The authors provide a different solution for each profile, which may benefit from a more general model that describes the interaction with resources at the client and at the server.

- RT-RMI-TA&M has support for a component model for Java components. Its model is similar to the model proposed by RT-RMI-UPM and does not describe a configurable interaction with the underlying resources of the remote invocation.

- DREQUIEMI. In DREQUIEMI, remote invocations interact with three resources (memory, connections, and threading at the server) at different stages of the remote invocation. Each type of resource is managed by a specific pool that may be configured according to the application requirements. Previous results on DREQUIEMI offer a connection between resources and remote invocations. However, most results were produced in the area of end-to-end memory management with the NhRo paradigm [33][46] as a means to remove garbage at the server side. Previous work on DREQUIEMI did not address the performance of priorities in the remote invocation or the impact of using different kinds of policies in network management. DREQUIEMI was also silent on a generic model for the remote invocation that defines an interaction with these resources. Section IV provides an execution model that takes into account the three main resources of a remote invocation. This view is complemented in Section V and Section VI with performance results.

III. DREQUIEMI: A FRAMEWORK FOR DISTRIBUTED REAL-TIME JAVA

The model for the real-time remote invocation defined in the next section was designed in the context of DREQUIEMI. DREQUIEMI ([40, 45][38]) is based on a layered architecture (see Figure 1), similar to [48] and adapted to the internals of RMI and the RTSJ. DREQUIEMI defines five layers: resources, infrastructure, distribution, common services, and application.


Resources layer: This layer sets the foundations for the model, setting the list of primary resources that are managed by the architecture. The model takes from real-time Java two main resources (memory and processor) and from RMI the concept of the network as the element that enables communication between two Java nodes.

Infrastructure layer: These three resources are typically accessed through interfaces via a real-time operating system (RTOS) or an RT-JVM, shaping an infrastructure layer. In DREQUIEMI the access to the infrastructure is given through a set of primitives which are used by programmers to access low-level resources. To be compliant with this model, the current RTSJ requires the inclusion of classes for the network (i.e., TCP/IP and UDP stacks). These classes are already available in standard Java and can be included in a new RTSJ profile for networked applications.

Distribution layer: On top of this layer the programmer may use a set of common structures useful to build distributed applications. There is one structure corresponding to each key resource, namely: ConnectionPool, MemoryAreaPool, and ThreadPool. Each entity represents a complete family of resources and may be specialized. For instance, the middleware may define different connection pools for different types of networks (e.g. TCP/IP [49], UDP, and CAN).

Common services layer: Currently DREQUIEMI describes four services: a stub/skeleton service, a DGC (Distributed Garbage Collection) service, a naming service, and a synch/event service. The first, which is analyzed in more detail in the next section, allows carrying out a remote invocation offering real-time performance. The second service, DGC, eliminates unreferenced remote objects in a predictable way, that is, introducing a limited interference on each Java node. The third service, naming, offers a local white-page service for distributed applications. Lastly, the synch/event service offers message-driven communication. So far the synch/event service has been mapped to a time-triggered approach (see a full description of the integration in [50]).

Application layer: Lastly, the most specific parts of the application are found at the outer layer of the architecture, in charge of defining the modules of an application. This layer is out of the scope of this paper, which deals with real-time remote invocations.

As Figure 1 also shows, this architecture adds two horizontal management layers. The first is in charge of managing the local resources: memory, CPU, and connections. The second includes specific policies for the new entities defined in the common-services layer and uses the first to access low-level services. In addition to these two mechanisms, some common services define their own configuration policies; this is the case of the stub/skeleton service. Lastly, Figure 1 (bottom) introduces the main classes involved in a real-time remote invocation. They play an important role during the remote invocation, managing resources, and they are described in detail in the following sections. This article deals with the interaction between the remote invocation service and the underlying resources that support remote invocations. In previous work (e.g. [45][44]), the authors proposed models for the remote invocation but did not address, from a general perspective, the impact of using different policies on the cost of a remote invocation.
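The acquire/release discipline shared by the three pool structures of the distribution layer can be illustrated with a minimal, self-contained sketch. The class and method names below are invented for exposition; this is not the DREQUIEMI API:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Illustrative sketch of the acquire/release discipline shared by the
// ConnectionPool, MemoryAreaPool, and ThreadPool abstractions.
// All names here are hypothetical, not the DREQUIEMI API.
public class BoundedPool<T> {
    private final Deque<T> free = new ArrayDeque<>();
    private final int capacity;
    private final Supplier<T> factory;
    private int allocated = 0;

    public BoundedPool(int capacity, Supplier<T> factory) {
        this.capacity = capacity;
        this.factory = factory;
    }

    // A "static" pool would pre-allocate all resources up front;
    // this "dynamic" variant grows lazily up to a fixed bound.
    public synchronized T acquire() {
        if (!free.isEmpty()) return free.pop();
        if (allocated < capacity) {
            allocated++;
            return factory.get();
        }
        // A real-time policy could instead block with a bounded wait.
        throw new IllegalStateException("pool exhausted");
    }

    public synchronized void release(T resource) {
        free.push(resource);
    }
}
```

Under this reading, the Static* pool classes of Figure 1 correspond to bounding and pre-allocation (predictable worst case), while the Dynamic* variants trade predictability for flexibility by allocating on demand.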


Fig. 1. The architecture of DREQUIEMI (top) and the API related to the real-time remote invocation (bottom): RealtimeUnicastRemoteObject and RealtimeUnicastRemoteStub with the RTSJ parameter classes (SchedulingParameters, ReleaseParameters, MemoryParameters, ProcessingGroupParameters), the DefaultManager, and the pool hierarchies (MemoryAreaPool: HeapMemoryAreaPool, LTMemoryAreaPool; ThreadPool: DynamicThreadPool, StaticThreadPool; ConnectionPool: DynamicConnectionPool, StaticConnectionPool)

IV. REAL-TIME REMOTE INVOCATIONS

The core of an RT-RMI technology is supported by real-time remote invocations. This section explains the changes introduced by DREQUIEMI in the original model of RMI (plain RMI) in order to offer real-time remote invocations.

A. Remote invocations in RMI


Figure 2 shows the most relevant steps of a remote invocation. In essence, a remote invocation [41] consists of seven steps which are executed sequentially (1→2→3→4→5→6→7). In the first step, the middleware creates the parameters for the remote invocation. Next, these parameters are transferred in the second step to the server using a connection. Typically, there is a third step in which the remote thread processes incoming data to select the remote object which is invoked locally. This invocation takes place in the fourth step, in which the handler thread invokes the logic of the application. The results of the invocation are transferred back to the client in the fifth step. Once back at the client, the invocation continues in the sixth step, analyzing the results of the remote invocation, which are passed to the stub. Eventually, after the seventh step, the client thread continues its execution.

Fig. 2. Remote invocation in RMI (previously described in [41])
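The seven sequential steps can be sketched as a loopback simulation in plain Java. The method names and the in-memory "network" are illustrative stand-ins for exposition, not RMI internals:

```java
import java.io.*;

// Illustrative sketch of the seven sequential steps of a plain RMI
// remote invocation; all names are invented for exposition.
public class PlainInvocationSketch {

    public static Object invoke(String method, Object[] args) throws Exception {
        byte[] request = marshal(args);                   // (1) stub builds parameters
        byte[] onWire = send(request);                    // (2) transfer over a connection
        // --- conceptually at the server ---
        Object[] in = (Object[]) unmarshal(onWire);       // (3) select the remote object
        Object result = dispatch(method, in);             // (4) handler up-calls the logic
        byte[] reply = marshal(new Object[] { result });  // (5) transfer the result back
        // --- back at the client ---
        Object[] out = (Object[]) unmarshal(send(reply)); // (6) stub analyzes the reply
        return out[0];                                    // (7) client thread resumes
    }

    // Stand-in for the remote object's application logic.
    static Object dispatch(String m, Object[] a) {
        return m.equals("add") ? (Integer) a[0] + (Integer) a[1] : null;
    }

    static byte[] marshal(Object[] a) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(a);
        }
        return bos.toByteArray();
    }

    static Object unmarshal(byte[] b) throws Exception {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(b))) {
            return ois.readObject();
        }
    }

    // Loopback stand-in for the network transfer.
    static byte[] send(byte[] b) { return b; }
}
```

The sketch makes explicit where DREQUIEMI's extensions attach: the connection pool around step 2, and the thread and memory pools around steps 3 and 4.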

B. Real-time remote invocations

From the point of view of DREQUIEMI, adding real-time capabilities to a remote invocation means having extra control over the resources involved in the process of invoking the remote object. To control the remote invocation, it introduces three new structures (Figure 3):

- A connection pool at the client
- A memory pool at the server
- A thread pool at the server

It also defines three types of policies to manage these pools:

- Global policies at the core of the middleware
- Local policies configured at the client
- Local policies configured at the server

These policies are stored in the internal tables of the middleware, at both the server (the exported-objects table) and the client (info), according to their nature.

Fig. 3. Real-time remote invocation in DREQUIEMI and its association to the three pools

Each pool controls one important resource which may have a relevant impact on the end-to-end response time. The connection pool decides on the allocation and assignment of connections to stubs in the remote invocation, including the set-up and other quality-of-service facilities. The memory pool decides about the memory used for allocating the parameters of the remote invocation and for serializing the server response; in essence, it decides on the use of heap or no-heap memory. Finally, the thread pool decides how the thread used for performing the remote invocation is allocated and assigned to each remote invocation.

In DREQUIEMI the invocation of RMI is extended at ten points: [2.1; 2.2] in 2, [6.1; 6.2] in 6, [3.1; 3.2; 3.3] in 3, and [5.1; 5.2; 5.3] in 5 (see Figure 3). In 2.1, before sending the parameters of the remote invocation, the invoking thread gets a connection from the connection pool to transfer the data to the server. In addition to the parameters of the remote invocation, in step 2.2 the middleware transfers non-functional information related to the server using real-time headers (see Section IV-D for more details). These real-time headers are processed at the server in step 3.2 and help decide the priority used in the server to invoke the remote object. At the server, before up-calling the remote object, the middleware uses the thread pool and the memory pool to get the concurrent entity and the type of memory used for invoking the local method. In 3.1 the middleware gets the thread which is going to carry out the local invocation. Next (in 3.3), it decides about the memory that is going to be used (i.e., heap or no-heap). The priority of the thread is decided in 3.2. The remote invocation continues up-calling the remote object using these resources. At the server, the remote invocation has two priorities: one for processing incoming data and another for up-calling the remote object (see Figure 3). Both priorities may be defined by the application using DREQUIEMI APIs. The model may work with the policy model of RT-CORBA [51] (i.e. with or without lanes) to avoid changing the priority of the invocation thread.

The response of the remote invocation is also extended. It contains a new header which transports (in 5.1) the non-functional information of the invocation in addition to the response of the remote invocation. The thread and memory taken in 3.1 and 3.3 are released after the invocation at the server in two new steps: 5.2 and 5.3. Once the response of the remote invocation has been transferred, the execution continues at the client. The extra work carried out at the client includes the processing of real-time headers in 6.1 and the release of the connection (in 6.2) after extracting all the data transferred from the server.
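The server-side steps (3.1-3.3 and 5.2-5.3) can be approximated with standard Java concurrency primitives. The sketch below is an analogue under stated assumptions: a fixed-size executor stands in for the thread pool, a counting semaphore for the memory pool, and plain Java thread priorities (1-10) for real-time priorities. It is not the DREQUIEMI implementation, and all names are hypothetical:

```java
import java.util.concurrent.*;

// Minimal analogue of steps 3.1-3.3 and 5.2-5.3: take a handler thread
// from a pool, set the up-call priority from the received real-time
// header, reserve a memory budget, invoke, then release both resources.
public class ServerDispatchSketch {
    private final ExecutorService threadPool = Executors.newFixedThreadPool(4); // 3.1
    private final Semaphore memoryPool = new Semaphore(2);                      // 3.3

    public <T> T upcall(int requestPriority, Callable<T> remoteMethod) throws Exception {
        Future<T> f = threadPool.submit(() -> {
            // 3.2: priority taken from the real-time header of the request
            Thread.currentThread().setPriority(requestPriority);
            memoryPool.acquire();              // 3.3: reserve a memory budget
            try {
                return remoteMethod.call();    // 4: up-call the application logic
            } finally {
                memoryPool.release();          // 5.2: release the memory budget
            }                                  // 5.3: handler thread returns to the pool
        });
        return f.get();                        // result marshalled back (5)
    }

    public void shutdown() {
        threadPool.shutdown();
    }
}
```

In DREQUIEMI the two server-side priorities (incoming data vs. up-call) would come from the exported-objects table and the received header rather than from a single argument as here.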

C. Extending the API of RMI

The previous section defined the impact on the architecture but was silent about API issues. This section covers this issue and explains how the API of RMI was extended to accommodate special policies for a remote invocation. The interfaces are described in order, starting with the most essential and then moving the focus to others that are less relevant. The most relevant change happens at the remote object: the UnicastRemoteObject template of RMI is extended by DREQUIEMI with the definition of RealtimeUnicastRemoteObject (Figure 4). The main difference between this class and its parent is the definition of real-time parameters and of the policies for managing the pools. Maintaining the original model of RMI, the programmer may decide the port (port) where the remote object accepts connections. DREQUIEMI contributes the possibility of selecting the memory pool (map) and the thread pool (thp) used in the server-side part of the remote invocation.


package es.uc3m.it.drequiem.rtrmi.server;

public class RealtimeUnicastRemoteObject
        extends java.rmi.server.UnicastRemoteObject {

    public RealtimeUnicastRemoteObject(
        int port,                       // RMI
        SchedulingParameters sp,        // RTSJ
        ReleaseParameters rp,           // RTSJ
        MemoryParameters mp,            // RTSJ
        MemoryAreaPool map,             // DREQUIEMI
        ThreadPool thp,                 // DREQUIEMI
        ProcessingGroupParameters pgp)  // RTSJ

    public SchedulingParameters getSchedulingParameters();
    public void setSchedulingParameters(SchedulingParameters sp);
    . . .
}

Fig. 4. RealtimeUnicastRemoteObject details

The RTSJ contributes the parameterization of the remote object, using parameters taken from the asynchronous event handler (AsyncEventHandler). Both remote objects and event handlers use and define scheduling (sp), release (rp), memory (mp), and processing-group (pgp) parameters. Although the computational models of a remote object and an event handler are rather different (one runs sequentially whereas the other runs in parallel), their behavior is very similar: both execute triggered by external entities and without a regular period. So, in DREQUIEMI they share the same real-time parameterization.

At the client (Figure 5), the most important parameter is the connection pool (cpool), in charge of offering transport for the messages of a communication. Using static setters, the programmer can assign a connection pool to a stub. The application may decide about the kind of remote invocation it wants to carry out using an async flag when the remote object is allocated (see [41]). Another group of parameters (sp, rp, and mp) can override the parameters of the remote invocation, while pgp bounds the number of remote invocations a client generates.


package es.uc3m.it.drequiem.rtrmi.server;

import javax.realtime.*;
import es.uc3m.it.drequiem.rtrmi.*;
import java.rmi.server.RemoteStub;

public class RealtimeUnicastRemoteStub {

    public static void setParameters(
            RemoteStub stub,
            boolean async,                   // DREQUIEMI
            SchedulingParameters sp,         // RTSJ
            ReleaseParameters rp,            // RTSJ
            MemoryParameters mp,             // RTSJ
            ProcessingGroupParameters pgp,   // RTSJ
            ConnectionPool cpool);           // DREQUIEMI
    . . .
    public static SchedulingParameters getSchedulingParameters(RemoteStub stub);
    public static void setSchedulingParameters(RemoteStub stub, SchedulingParameters sp);
    . . .
    public static ConnectionPool getConnectionPool(RemoteStub stub);
    public static void setConnectionPool(RemoteStub stub, ConnectionPool cpool);
}

Fig. 5. RealtimeUnicastRemoteStub details

package es.uc3m.it.drequiem.rtrmi;

import javax.realtime.*;

public class DefaultManager extends DistributedManager {

    public DefaultManager(
            SchedulingParameters sp,         // RTSJ
            ReleaseParameters rp,            // RTSJ
            MemoryParameters mp,             // RTSJ
            ProcessingGroupParameters pgp,   // RTSJ
            ThreadPool globaltp,             // DREQUIEMI
            MemoryAreaPool globalmp,         // DREQUIEMI
            ConnectionPool globalcp);        // DREQUIEMI
    . . .
}

Fig. 6. DefaultManager details

The last piece of software used in the configuration of the middleware is placed at the second level of management. It defines a global set of resources and default policies for the core of the middleware (Figure 6). Some policies are specific to DREQUIEMI (globaltp, globalmp, and globalcp) and others belong to the underlying RTSJ-compliant virtual machine (sp, rp, mp, and pgp). By default, the resources used during the remote invocation are taken from the global thread pool, memory pool, and connection pool. From the point of view of the real-time virtual machine they are characterized, as in the previous case, using the RTSJ's event handler characterization.


D. Extensions to JRMP

The application-level communication protocol of RMI, Java's Remote Method Protocol (JRMP) [15], requires modifications to accommodate the proposed model because of the non-functional information that has to be transferred from one node to another. The main drawback of JRMP for real-time applications is that it does not transfer real-time information during a remote invocation, i.e., within its Ping, Call, and DgcAck messages. In DREQUIEMI (Figure 7), this problem was addressed by adding two new headers: RtCall and RtCall2. The first targets the default manager and transfers the priority of the client (priority) and its relationship with the garbage collector (noheap). In addition, the non-functional information comprises a special identifier (mid), the asynchronous flag (async), and the object identifier (oid) of the invoked remote object. The object identifier is followed by the kind of message transferred: Call, Ping, or DgcAck.

Message:  RtCall
          RtCall2
          Call
          Ping
          DgcAck

RtCall:   0x55 priority noheap mid async oid Call
          0x55 priority noheap mid async oid Ping
          0x55 priority noheap mid async oid DgcAck
RtCall2:  0x56 sched release proc mid async oid Call
          0x56 sched release proc mid async oid Ping
          0x56 sched release proc mid async oid DgcAck

Fig. 7. RT-JRMP: Real-time Calls

The second header defined is RtCall2. It is quite similar to RtCall; the main difference is that the information transferred by RtCall2 is more complete than that of RtCall. During the negotiation of connections in JRMP, the middleware may exchange information between two nodes using an extended set of messages (Figure 8). In essence, the extended protocol adds two real-time protocol headers: RTProtocol, which is quite simple, and a more complex one, RTProtocol2. Likewise, there are two new real-time acknowledgement messages (ProtocolRTAck and ProtocolRTAck2) that transfer additional information back to the client in the response of a remote invocation. These new messages define the real-time parameters of the remote node (i.e., the priority of the handler thread awaiting new requests).

Out: . . .
Protocol:     SingleOpProtocol
              StreamProtocol
              MultiplexProtocol
              RTProtocol
              RTProtocol2
MultiplexProtocol: 0x4d
RTProtocol:   0x54 priority noheap SingleOpProtocol
              0x54 priority noheap StreamProtocol
              0x54 priority noheap MultiplexProtocol
RTProtocol2:  0x57 sched release proc SingleOpProtocol
              0x57 sched release proc StreamProtocol
              0x57 sched release proc MultiplexProtocol

In: . . .
              ProtocolAck
              ProtocolRTAck
              ProtocolRTAck2
ProtocolRTAck:  0x55 priority noheap
ProtocolRTAck2: 0x58 sched release proc

Fig. 8. RT-JRMP: Real-time Protocol and Real-time ACK

V. PERFORMANCE IN RT-RMI AND TRADITIONAL RMI

Currently, there are no publicly available RT-RMI implementations that may be used for evaluating the performance of real-time RMI; only a partial implementation, RTZen [32], was available for testing an RTSJ-CORBA tandem. This situation recommends directing the performance evaluation towards a comparison against plain RMI. In this comparison the goal is to establish the costs and benefits that each type of pool may introduce in the total cost of the remote invocation. Therefore, the goal of this section is to compare the performance of an RT-RMI infrastructure against the traditional architecture of RMI. Figure 9 shows the different types of experiments that were deployed:
1. A set of experiments that requires several clients (interferers in the figure) which run at a lower priority and try to access a shared remote object. The goal of this experiment is twofold: first, to compare the performance of RMI against that of RT-RMI; second, to measure the quality of the implementation by comparing real against theoretical results. These experiments were not previously evaluated in DREQUIEMI and may be useful to other RT-RMI approaches that use similar communication structures (i.e., RMI-UPM, RT-RMI-York, RT-RMI and DRTSJ).
2. Another that evaluates the impact of a thread pool and a connection pool on the total cost of the remote invocation. This type of experiment was not previously evaluated in DREQUIEMI and may be useful to RMI-UPM, RT-RMI-York, RT-RMI and DRTSJ.
3. A third that evaluates the impact of a memory pool, in charge of controlling the use of memory, on the total cost of the remote invocation. This set of results extends previous results with no-heap remote objects (see [33], [47] and [46]) and may be useful for other RT-RMIs (especially for RT-RMI-UPM).
In general, all nodes are very similar: DREQUIEMI (which may act as an ordinary RMI node or as an RT-RMI node) and an RTSJ virtual machine (the implementation of TimeSys [52]) on the RTAI real-time operating system [53], on top of 769 MHz machines. To offer acceptable real-time performance, the set-up uses a 100 Mbps Switched Ethernet to connect isolated PCs. On this infrastructure, the experiment establishes two check points: one at the server (the time elapsed executing there) and another at the client (the end-to-end response time).


Fig. 9. Empirical set-up: infrastructure, configuration parameters, and measurement points

In the empirical platform the following parameters can be configured: (1) the time consumed executing the remote method; (2) an extra delay before each transmission; (3) connection allocation (reusing connections or not); (4) thread allocation (fresh threads or reused); (5) the kind of memory used at the server (regions or the heap); and (6) the priority of each client, server, and interferer.

A. Impact of priorities on end-to-end response times

1) No Interference

The analysis of the impact on the cost of the remote invocation starts by considering a scenario where there is only one client connected to a server trying to access the remote object. The only tasks running are the services of the operating system (e.g., network daemons) and the virtual machine. This scenario is useful to analyze the behavior of a remote invocation. To this end, the server carries out a CPU-intensive activity (simulated with loops that consume CPU). The goal of the experiment is to measure, from a simple client, the cost of the remote invocation in the following three cases:
- With plain RMI. This refers to the cost of the remote invocation without the RT-RMI headers and with plain Java threads.
- With the RT-RMI support offered by DREQUIEMI (real-time threads). In this case there is a mechanism to reduce priority inversion at the server.
- In an idealized (RT-)RMI. The ideal case refers to the case where there is no interference from the underlying operating system activities; it is measured as the best-case execution observed.

Notice that, since there is no full control of the software stack (e.g., operating system actions, TCP/IP stack), response times cannot be calculated exactly. Instead, the authors use a statistical model (based on a Gaussian analysis previously used by the authors in [54]). The idea is that the benchmark produces, for each value, a confidence interval (C.I.) and a maximum probability of error (p_error). The results for this simple case are shown in Figure 10. The figure shows that RT-RMI offers the worst performance because the implementation has to send/receive extra data and real-time threads run slower than plain Java threads (they have to check RTSJ-specific rules). Notice that even plain RMI has a certain overhead when compared against the ideal response time of a Java RMI that does not suffer penalties from the underlying infrastructure.
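The statistical treatment can be sketched as follows. This is a generic Gaussian confidence-interval computation, not the authors' exact procedure from [54]; the z-value is an assumption chosen by the caller according to the target error probability.

```java
// Generic Gaussian confidence-interval sketch: mean +/- z * (stddev / sqrt(n)).
// The z-value is an assumption for illustration; the paper's exact statistical
// procedure is described in [54].
public class ConfidenceInterval {
    // Returns {lower, upper} bounds of the interval around the sample mean.
    public static double[] of(double[] samples, double z) {
        int n = samples.length;
        double mean = 0;
        for (double s : samples) mean += s;
        mean /= n;
        double var = 0;
        for (double s : samples) var += (s - mean) * (s - mean);
        var /= (n - 1);                       // unbiased sample variance
        double half = z * Math.sqrt(var / n); // half-width of the interval
        return new double[] { mean - half, mean + half };
    }
}
```

Applied to the benchmark samples, the half-width plays the role of the ±6 µs figure reported with the plots.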



Fig. 10. Average and worst-case cost of the remote invocation for a case without any other task trying to access the server (769 MHz machines, 100 Mbps Switched Ethernet network). Confidence interval (C.I.): ±6 µs with an error probability bounded by p_error ≤ 10^-9.

This overhead has been quantified in Figure 11. The results show that, on average, the implementation used for RMI produced an almost flat overhead over the ideal RMI; compared against RMI, the performance of RT-RMI tends to the cost of the ideal case. The results also cover the worst-case response time of the application. For worst-case response times, RT-RMI performs better than RMI because it is able to eliminate some sources of indeterminism from the underlying implementation. This characteristic makes RT-RMI better than RMI for applications with worst-case response-time requirements.


Fig. 11. Overhead introduced by RT-RMI over RMI and over ideal RMI for the highest-priority end-to-end task (769 MHz machines, 100 Mbps Switched Ethernet network). Confidence interval: ±6 µs; error probability bounded by 10^-9.

To evaluate how good a certain implementation is, the quality factor of the remote invocation is defined as the ideal end-to-end response time (RT_ideal) divided by the difference between the measured time (RT_measured) and the ideal time: Quality = RT_ideal / (RT_measured - RT_ideal). The higher this parameter, the better the underlying infrastructure. For instance, a quality of 1 means that the cost of the remote invocation from a client is double the ideal time; a quality of 10 means that the extra overhead is 1/10 of the ideal time; and a quality of 0.1 means that the invocation takes roughly ten times the ideal cost of an invocation.
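The quality factor can be computed directly from the two measurements; the sketch below reproduces the definition with illustrative values that match the examples in the text.

```java
// Quality factor of a remote invocation: RT_ideal / (RT_measured - RT_ideal).
// Higher is better; a quality of 1 means the measured cost doubles the ideal one.
public class QualityFactor {
    public static double quality(double rtIdeal, double rtMeasured) {
        return rtIdeal / (rtMeasured - rtIdeal);
    }
}
```

For example, an ideal time of 100 µs and a measured time of 200 µs yields a quality of 1, while 110 µs measured yields a quality of 10.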


For the simple case of a real-time Java remote invocation on the system, on average (see Figure 12), the quality of RMI and RT-RMI is high (above 10). These results decrease when the worst case is considered instead; even so, the quality is still high (4.4).


Fig. 12. Quality of the average and worst-case response times of the RMI and RT-RMI implementations for the highest-priority end-to-end task (769 MHz machines, 100 Mbps Switched Ethernet network). Confidence interval: ±0.2; error probability bounded by 10^-9.

2) Low Interference (30%)

This basic framework was extended with a task that tries to consume as much CPU as possible. This task, launched from another remote client, consumes 30% of the available time of the server. Under these conditions, the experiments described in the previous section were re-evaluated, measuring the interference introduced in the highest-priority task. Figure 13 shows a summary of the average and worst-case results. For all experiments, the response time of the RT-RMI implementation is between the ideal case and the response time of plain RMI remote invocations.


Fig. 13. Average and worst-case response time of the remote invocation (769 MHz machines, 100 Mbps Switched Ethernet network) for the highest-priority end-to-end task. Confidence interval (C.I.): ±6 µs with an error probability bounded by p_error ≤ 10^-9.

The overhead results (Figure 14) show that the real-time remote invocation adds a flat cost to RT-RMI: 112 µs for the average response time and 602 µs for the worst-case response time. Likewise, the measured quality of the RT-RMI implementation decreases to 2.8 units in the worst case (see Figure 15).



Fig. 14. Overhead introduced by RT-RMI over RMI and over ideal RMI for the highest-priority end-to-end task (769 MHz machines, 100 Mbps Switched Ethernet network). Confidence interval: ±6 µs; error probability bounded by 10^-9.


Fig. 15. Quality of the average and worst-case response times of the RMI and RT-RMI implementations for the highest-priority end-to-end task (769 MHz machines, 100 Mbps Switched Ethernet network). Confidence interval: ±0.2; error probability bounded by 10^-9.

3) High Interference (100%)

The last set of experiments considers a saturated scenario, i.e., a scenario with clients that want to access a shared remote object. The shared remote object is accessed from six interfering clients in parallel (this number of clients is enough to block the server, producing saturation in the lowest-priority tasks). The results show that for this scenario RT-RMI performs closer to the ideal case (see Figure 16). The overhead with respect to the ideal case (see Figure 17) is higher than in the previous case. Likewise, for RT-RMI in a worst-case scenario, the results are worse than in the previous experiment. Lastly, the quality decreases to a minimum of 0.9 units for the worst-case response time (see Figure 18).


Fig. 16. Average and worst-case response time of the remote invocation (769 MHz machines, 100 Mbps Switched Ethernet network) for the highest-priority end-to-end task. Confidence interval (C.I.): ±6 µs with an error probability bounded by p_error ≤ 10^-9.



Fig. 17. Overhead introduced by RT-RMI over RMI and over ideal RMI for the highest-priority end-to-end transaction (769 MHz machines, 100 Mbps Switched Ethernet network). Confidence interval: ±6 µs; error probability bounded by p_error ≤ 10^-9.


Fig. 18. Quality of the average and worst-case response times of the RMI and RT-RMI implementations for the highest-priority end-to-end transaction (769 MHz machines, 100 Mbps Switched Ethernet network). Confidence interval: ±0.2; error probability bounded by p_error ≤ 10^-9.

B. Combined use of connection and thread pools

Another two sources of indeterminism are the management of connections and threads. In the standard implementation of RMI, the middleware creates threads (at the server) and allocates connections (at the client) dynamically at runtime using pooling strategies (by default, a connection remains in an unused state for no more than fifteen seconds). From the point of view of a real-time Java middleware, not controlling these allocations may be harmful (see Figure 19): the normal cost of a void (Cserver = 0 units) remote invocation doubles due to the dynamic allocation of threads and connections. Fortunately, this cost becomes less important as the cost of the remote invocation increases (Cserver > 0 units). For instance, in the experiment carried out, the RMI/RT-RMI ratio decreases to 1.2 when the execution time required by the remote object increases from 0 to 4200 units.
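The effect of preallocation can be illustrated with standard Java facilities. The following sketch contrasts the per-request thread creation of plain RMI with reusing threads from a fixed, preallocated pool; it is an analogy built on java.util.concurrent, not DREQUIEMI's ThreadPool API.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Analogy for the thread-pool policy using java.util.concurrent: a fixed,
// preallocated pool serves each "remote invocation" so that no thread is
// created at request time. This is NOT DREQUIEMI's es.uc3m.it.drequiem API.
public class PooledDispatch {
    private final ExecutorService pool;

    public PooledDispatch(int preallocatedThreads) {
        // Threads are reused across invocations instead of created per request.
        this.pool = Executors.newFixedThreadPool(preallocatedThreads);
    }

    // Runs a server-side handler on a pooled thread and waits for its result.
    public int dispatchAndWait(Callable<Integer> handler) {
        try {
            return pool.submit(handler).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```

In a real-time setting, the bound on the pool size is what makes the cost of dispatching an invocation analyzable, since no allocation happens on the critical path.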



Fig. 19. RMI vs. RT-RMI using DREQUIEMI: impact of connection and thread pools on the cost of the remote invocation. Confidence interval: ±0.2; error probability bounded by p_error ≤ 10^-9.

C. Impact of automatic memory management policies on the cost of the remote invocation

A less fortunate source of interference comes from memory management. The priority inversion caused by the garbage collector in the total cost of the remote invocation may be quite high, even when there is only one remote object active in the server. In the experiments carried out (Figure 20), this impact may triple the normal cost of the remote invocation in very simple cases: a minimal RMI hosting one remote object with void methods.


Fig. 20. RMI vs. RT-RMI: impact of automatic memory management on the end-to-end response time. Confidence interval: ±0.2; error probability bounded by p_error ≤ 10^-9.

Unfortunately, this interference increases as the amount of occupied (alive) memory grows. In the results described in Figure 20, the interference increases by an order of magnitude even when the alive memory is still moderate (3158 Kbytes).


To solve this problem, real-time Java implementers choose between real-time garbage collectors [55] and regions [56]. Real-time garbage collectors defer the cost of collecting the heap, causing an overhead that is typically around 20% of the total CPU time. Region-based memory management offers better performance (a penalty of about 5% in our experiments) but requires special programming paradigms such as the no-heap remote object paradigm ([33], [46], [47]). On the positive side, the ratio decreases as the work carried out at the server increases.
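The trade-off can be made concrete with a simple back-of-the-envelope model: if memory management steals a fraction o of the CPU, a computation that takes t in isolation stretches to roughly t/(1-o). The sketch below is illustrative only; the 20% and 5% figures are the ones quoted above, and the linear-inflation model is an assumption, not a result from the paper.

```java
// Back-of-the-envelope model of the memory-management trade-off: a computation
// taking rtIsolationUs microseconds in isolation stretches to rtIsolationUs/(1-o)
// when memory management consumes a fraction o of the CPU. The ~20% (real-time
// GC) and ~5% (regions) overheads are the figures quoted in the text; the
// linear model itself is an illustrative assumption.
public class MemoryOverhead {
    public static double inflatedResponseTime(double rtIsolationUs, double overheadFraction) {
        return rtIsolationUs / (1.0 - overheadFraction);
    }
}
```

Under this model, an 800 µs invocation grows to about 1000 µs with a 20% collector overhead but only to about 842 µs with a 5% region penalty.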

VI. REAL-TIME REMOTE INVOCATIONS ON APPLICATIONS

This section considers two different frameworks: Boeing's Bold Stroke architecture ([48]) and a distributed version of the collision detector (CDx) framework ([57]) called dCDx. Both frameworks define operational frequencies (~20-40 Hz and ~100-250 Hz, respectively) which determine the maximum number of remote communications that can be carried out in each release. Likewise, the applications suffer substantial interference, which may be quantified, from the underlying infrastructure.

A. Distribution possibilities

Figure 21 shows the results corresponding to the first part of the evaluation. The figure uses dCD to refer to the distributed collision detector and BS for Boeing's Bold Stroke architecture.


Fig. 21. DREQUIEMI's distribution possibilities (ideal, hard, and soft real-time) in Boeing's Bold Stroke architecture and the distributed CD, for small, medium, and large remote invocations (769 MHz machines, 100 Mbps Switched Ethernet). The benchmark also includes reference results for the IDEAL RT-RMI implementation. (C.I.: ±0.2 and p_error < 10^-12.)

The evaluation considered three different sizes of remote invocation (small, medium, and large) in three different scenarios (ideal, hard, and soft real-time). Small invocations refer to those whose response time in isolation is 1.8 ms; medium invocations take 2.4 ms; and large invocations have an end-to-end response time of 3.8 ms. In an ideal situation, real-time remote invocations are fully preemptive and the real-time garbage collector has a null preemption time. However, when they run on DREQUIEMI the response time increases, because of the priority inversion suffered from the low-level infrastructure and because the real-time garbage collector requires some time to leave non-preemptable code sections. The experiment considers two cases for this low-level priority inversion: the first, labeled hard, considers the worst-case priority inversion due to low-level real-time remote invocations, and the second, labeled soft, the average priority inversion. In general, the following inequalities hold: InvIDEAL ≥ InvSOFT ≥ InvHARD (i.e., the number of remote invocations carried out in the ideal case is at least that of the soft real-time case, which in turn is at least that of the hard real-time configuration). This relationship has been evaluated empirically and it holds. Not all configurations can use remote invocations to communicate with other machines using DREQUIEMI: dCD cannot use DREQUIEMI running on two 769 MHz PCs with a 100 Mbps connection under hard real-time configurations at 250 Hz with medium and large remote invocations, because at this frequency the cost of carrying out a remote invocation exceeds the deadline of the application. The result also applies in the soft environment with large remote invocations. Nevertheless, the Bold Stroke scenario is less demanding, allowing the use of four invocations at its highest frequency (40 Hz) in hard real-time configurations.
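The upper bound on invocations per cycle follows directly from the period and the per-invocation cost. The sketch below computes this isolation-only bound (floor of period over cost) using the invocation sizes quoted above; it ignores the blocking and garbage-collection interference that the hard and soft scenarios add, so it corresponds to the ideal budget rather than the measured one.

```java
// Isolation-only budget: the maximum number of remote invocations that fit in
// one release is floor(period / costPerInvocation). Frequencies follow the two
// frameworks (20-40 Hz for Bold Stroke, 100-250 Hz for dCD); invocation costs
// are the isolation times quoted in the text (1.8, 2.4, 3.8 ms). Interference
// from the infrastructure, which the hard/soft scenarios model, is ignored here.
public class InvocationBudget {
    public static int maxInvocationsPerCycle(double frequencyHz, double invocationCostMs) {
        double periodMs = 1000.0 / frequencyHz;
        return (int) Math.floor(periodMs / invocationCostMs);
    }
}
```

For example, dCD at 250 Hz has a 4 ms period, so even in isolation a single 3.8 ms invocation barely fits, which is consistent with the deadline misses observed under hard configurations.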

B. Blocking from the underlying configuration and resource management

1) Blocking from non-preemptive sections on remote invocations

The previous experiment showed the existence of priority inversion due to the non-preemptable parts of the remote invocation. The results of the experiment shown in Figure 22 refer to the interference introduced by other, lower-priority remote invocations on a higher-priority one. To measure the interference, the figure normalizes it by the period of the tasks (in our particular case, the inverse of the frequency), leaving it in the [0, 1] range. As a result, an interference of 0 means no interference, while 1 means that the interference may last the whole period. Taking into account the use of a single remote communication during the remote invocation, the results show that the maximum interference (40% of the available time) happens for the highest-frequency tasks (250 Hz). This interference reduces to 20% with Bold Stroke requirements. Another important result is that the priority inversion is proportional to the number of remote invocations carried out (i.e., 1, 2, 4, 8, or 16): each remote invocation accumulates a certain amount of interference that reduces the effective execution time at the client.


Fig. 22. Interference introduced (under saturation) by the lack of a perfectly preemptive remote invocation, using Bold Stroke and Collision Detector requirements. A value close to 0 means no interference, while a value close to 1 means full interference. Ideally, RT-RMI should have 0 interference, meaning that a high-priority task does not suffer any interference. (C.I.: ±1% and p_error
