An Improved IDL Compiler for Optimizing CORBA Applications

Bechir Zalila, Jérôme Hugues, Laurent Pautet

GET-Télécom Paris – LTCI-UMR 5141 CNRS
46, rue Barrault, F-75634 Paris CEDEX 13, France

[email protected]

ABSTRACT

Building CORBA distributed applications for embedded and real-time systems brings a number of requirements to satisfy (small footprint, determinism, etc.). A large part of the code of a distributed application is generated automatically from its IDL (Interface Definition Language) specification using an IDL compiler. The IDL compiler therefore has to be flexible in order to generate optimized code and to easily support new optimizations. In this paper, we present an IDL compiler architecture that is more amenable to generating optimized code. We then describe some optimizations we implemented in the code generated by IAC (IDL Ada Compiler), the new IDL compiler, and in PolyORB, our middleware, to make distributed applications suitable for embedded real-time systems.

Categories and Subject Descriptors

C.2.4 [Distributed Systems]: Distributed applications, Client/server; D.3.4 [Processors]: Code generation, Compilers, Parsing

General Terms

Algorithms, Design, Languages, Performance

Keywords

CORBA, OMG, IDL, Ada, Compilation, Real-Time, Embedded, PolyORB

1. INTRODUCTION

Distributed applications occupy an increasingly prominent place in software development. The complexity of such applications no longer allows building them "from scratch", without conformance to any distribution standard [10].


Middleware was introduced to allow high-level development of distributed applications. First, several aspects of the application (communication subsystem, file system access, etc.) are abstracted by the middleware to ensure interoperability between nodes developed on heterogeneous architectures. Second, a large (and complex) part of the distributed application code is automatically generated from a simple description of the application. The abstraction layer and the automatic code generation make the development of distributed applications, and their optimization, easier.

CORBA [7] is a popular distribution standard used to build distributed applications. It allows the development of distributed applications whose nodes are written in different programming languages and run on different architectures. The interface between these nodes is described by means of the Interface Definition Language (OMG IDL). Most CORBA implementations use an IDL translator to automatically generate a large part of the application code. The generated code maps the described interface onto the target programming language and performs the calls to the middleware communication routines.

The mapping specifications from IDL to a particular programming language focus only on the names and types of entities and leave the implementers free to choose the right way to implement the functional aspects. Similarly, the CORBA specifications primarily describe entity names and types (to ensure interoperability). The implementation is usually left to the middleware designers, which leaves room for implementing optimizations in both the middleware and the generated code.

In this paper, we describe the optimizations and enhancements that we performed on an IDL compiler and a CORBA middleware. These optimizations essentially consist of the enforcement of determinism and the reduction of memory footprint. They make the optimized distributed applications suitable for real-time and embedded systems.

In section 2, we give a brief overview of the CORBA standard and of PolyORB, our CORBA middleware. In section 3, we explain why the architecture of the old IDL compiler is not sufficient to generate optimized code, and we present a new IDL compiler architecture. In section 4, we describe some optimizations implemented in the new IDL compiler and give experimental results for these optimizations. Finally, we conclude and present future work.

2. CORBA and PolyORB

In this section, we give a brief description of the CORBA specifications. Then we introduce PolyORB, our middleware that implements these specifications in addition to several other distribution standards.

2.1 CORBA

When developing a distributed application, the communication between the different nodes takes a very important place in the development process. The CORBA standard [7] was created by the OMG in 1991 to make this communication easier to implement and to let developers focus on specifying the application's global architecture (number of nodes, how these nodes interact with each other) and on the functional parts of the application (algorithms). The distribution paradigm used in CORBA is the distributed object paradigm. CORBA also allows the interoperability of software components written in different programming languages and running on different machine architectures.

As stated above, a large part of the distributed application code is generated automatically. The input language for the code generator is the Interface Definition Language (OMG IDL). The IDL translator applies the rules defined in the CORBA mapping specifications [4] to generate the code in the output programming language. The OMG publishes mapping specifications for many target programming languages (C++, C, Java, Ada 95, etc.).

An IDL specification is a contract between the distributed application's nodes (namely, the client and the server). IDL allows the declaration of types and operations. It also allows hierarchical name spaces by defining modules. Distributed objects correspond to IDL interfaces. An interface describes the external appearance of a distributed object and defines the set of methods a client can invoke on this object.
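As an illustration of the mapping, the sketch below shows the kind of client-side stub specification that the Ada 95 mapping prescribes for a one-operation interface. It is deliberately simplified (helper and implementation units are omitted) and is not the exact output of IDLAC or IAC.

```ada
--  Simplified sketch of the Ada 95 client-side mapping for an IDL
--  interface such as:  interface Echo { string echoString (in string s); };
--  The exact names and package layout of the generated code are
--  compiler-specific; this only illustrates the principle.
with CORBA;
with CORBA.Object;

package Echo is

   --  The object reference designating the (possibly remote) servant
   type Ref is new CORBA.Object.Ref with null record;

   --  Each IDL operation becomes a primitive subprogram of the reference
   --  type; its body (the stub) marshalls the arguments, sends the request
   --  through the ORB and unmarshalls the reply.
   function echoString
     (Self : Ref;
      S    : CORBA.String) return CORBA.String;

end Echo;
```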


Figure 1: Simplified architecture of CORBA

A second part of the distributed application code is the middleware runtime. To make the development of the distributed application easier and to ensure interoperability between the application's nodes, the core distribution functions (communication part, distributed exception handling, etc.) are implemented in a core library called the Object Request Broker (ORB). Each node of the distributed application has its own ORB instance, and the actual communication is performed between ORBs, as described in the bottom part of Figure 1. In the following, we describe a client/server scenario:

The client side: On the client side, the code automatically generated by the IDL translator is called the stub. This code essentially contains the definition of a local object, called a reference, and, for each method of the remote object, a primitive method having the same name and the same signature, defined on the reference type. When the client wants to invoke an operation on the remote object, it simply calls the corresponding stub method on the reference associated with the remote object (we explain in the server-side description how this reference is created). The stub contains all the routines necessary to transform the client method call into a network message: it fills the communication buffer with the operation parameters (data marshalling). The parameters are marshalled according to a common way of representing data, the Common Data Representation (CDR). Then, the stub sends the message to the client ORB, and finally it gets the response message and extracts the result of the operation call (data unmarshalling). For the client, everything happens as if it had called a local operation.

The server side: On the server side, the automatically generated code is called the skeleton. The skeleton contains all the routines necessary to handle a request coming from the client: unmarshalling the operation parameters from the communication buffer, invoking the corresponding implementation object (the servant) and finally marshalling the result and sending it back to the client. The server side of the distributed application contains an important component, the Portable Object Adapter (POA). The POA can be described as the CORBA component that provides the object reference used by the client, and that activates the implementation object corresponding to a particular reference. At server startup, the servant (the remote object) has to register itself with the POA. A consequence of this registration is the creation of an object reference and of a binding between this reference and the implementation object. When the server ORB receives a message from the client ORB, it uses the POA to determine which servant corresponds to the reference extracted from the message, based on the previously created binding. It then hands the request to the corresponding skeleton for processing.

There are several implementations of the CORBA specifications in many target languages. However, there are very few Ada implementations of CORBA. In the following section, we describe PolyORB, our Ada-based CORBA middleware.

2.2 PolyORB

PolyORB is a configurable middleware: it allows the development of distributed applications according to several profiles (use of a single thread, use of a pool of threads, etc.). In addition, PolyORB is a generic middleware: it supports several distribution standards: CORBA, which uses the distributed object paradigm; DSA (the Distributed Systems Annex of Ada 95), which uses the remote procedure call paradigm; and MOMA (Message Oriented Middleware for Ada), which uses the message passing paradigm. Finally, PolyORB allows interoperability between nodes written using different distribution standards: it is a schizophrenic middleware. PolyORB has been developed in Ada [5]. The PolyORB architecture separates the application side (distribution standard) from the protocol side (network communication).



This leads to the definition of two kinds of personalities: the application personalities (CORBA, DSA, etc.) and the protocol personalities (GIOP, SOAP). The interactions between different personalities are performed through a Neutral Core Middleware (NCM). Figure 2 describes some of these interactions: in the application personalities, the type of the data is specified by the corresponding distribution standard (CORBA, DSA, etc.); in the protocol personalities, the way data are sent over the network, the Common Data Representation (CDR), is given by the protocol specification. To allow an application personality to interact with a protocol personality, we use the Any type, a self-described data container, as the NCM's unique internal data representation. The use of the Any type to represent data in the NCM is thus the key to interoperability between different distribution standards. More details on the PolyORB architecture can be found in [9].

Figure 2: Some personality interactions in PolyORB

The CORBA personality of PolyORB implements version 3.0.3 of the CORBA specifications [7]. The communication system of this personality supports the Dynamic Invocation Interface (DII) and the Dynamic Skeleton Interface (DSI), which are very flexible ways to invoke and handle requests. The object adapter implemented in the CORBA personality is the Portable Object Adapter (POA). In addition to the core specifications, the CORBA personality of PolyORB implements several CORBA COS (Common Object Services): COS Naming, COS Event, COS Notification, COS Time and the Interface Repository. In order to develop distributed applications for real-time systems, the RT-CORBA (Real-Time CORBA) specifications are implemented in PolyORB. In addition, FT-CORBA (Fault-Tolerant CORBA) is implemented to allow the development of highly critical distributed applications. As stated by the CORBA specifications, the communication between a distributed application's nodes must be ensured by the General Inter-ORB Protocol (GIOP). We implemented a GIOP personality in PolyORB and provided several instances of GIOP: IIOP for communication over TCP/IP, MIOP for group communication using IP multicast, and DIOP for UDP/IP communication.

3. A NEW COMPILER ARCHITECTURE

In this section, we list the drawbacks of the architecture of PolyORB's current IDL compiler, IDLAC (IDL to Ada Compiler), which limit its maintainability and evolution. Then, we describe the new compiler we implemented, IAC, and list its advantages.

3.1 Drawbacks of the current architecture

A modern compiler [1] consists of two major parts: a frontend that parses the source file and produces an internal representation called the Abstract Syntax Tree (AST), and a backend that generates the desired code from this AST. This architecture is very simple and widely used. It allows the programmer to add support for a new parsed language (add a new frontend) without modifying the backends, and to generate code in a new language (add a new backend) without modifying the frontends. In a classical IDL-to-Ada compiler, the frontend parses IDL source files and the backend generates the Ada 95 code according to the Ada mapping specifications [6]. Figure 3 gives an overview of such a compiler: (1) in the frontend, a lexer transforms the IDL source into a sequence of tokens, then a parser creates the AST from this sequence and performs the semantic analysis at the same time; (2) in the backend, an expander transforms complex constructions into simpler ones (replacing IDL attributes with pairs of get/set operations, copying operations from parent interfaces in case of multiple inheritance, etc.). The expanded AST is then traversed by several code generators to create the Ada 95 packages required by the mapping specifications.

Figure 3: Architecture of IDLAC

This intuitive architecture has been adopted in many compilers (such as GCC) and has given good results, which makes it the obvious choice when one wants to design a compiler. It was adopted in IDLAC, the current IDL compiler of PolyORB. However, this architecture has two major drawbacks. The first drawback is due to the difference between the Ada and IDL languages: the IDL grammar is different from the Ada grammar. This makes generating Ada 95 code directly from the IDL tree complex. It also implies that the code is generated in the same order in which IDL entities are encountered, which is not always the desired behaviour. The second major drawback is due to the fact that we do not generate a single Ada file from an IDL specification (as done by classical compilers, source file → object file). For a single IDL specification, at least six Ada files are generated: the stub, the skeleton and the helpers (spec and body for each package). In this case, the code generation could be performed in two ways: (1) generate all the files progressively at the same time, an approach that makes the compiler code complex and, consequently, hard to maintain; or (2) generate the files one by one, which is the way IDLAC works. The latter makes the modification or the optimization of the generated code a very tedious task, since one must replicate each optimization in every code generator and propagate it through all layers. For example, if we want to modify the layout of the generated code, we must change all the code generators of IDLAC.

In addition to these two major drawbacks, the frontend of the current IDL compiler does not separate the syntactic verification from the semantic analysis. When IDLAC was created, ten years ago, there was no need for more than one backend (the Ada backend). It is now hard to add a new backend to IDLAC without deeply modifying its architecture. All these drawbacks in the current compiler architecture made maintaining it, adding new functionality to it and optimizing its generated code hard. Several attempts have been made to design a flexible IDL compiler (e.g. [3]). The resulting architecture was very close to the architecture described above, and the major drawbacks remain. In the next section, we introduce a new IDL-to-Ada translator with a different architecture.

3.2 Advantages of the new architecture

Two main objectives have to be achieved with the new compiler architecture: (1) the IDL compiler must be flexible and easily maintainable, and (2) the implementation of optimizations using this architecture must be much simpler than with the current one. The new architecture must take into account the differences between Ada 95 and IDL and the fact that a single IDL specification leads to the generation of many Ada 95 files. To resolve these issues, we introduce a new phase in the compiler backend: the tree conversion phase, which transforms the IDL tree into an Ada tree before code generation.
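To fix the idea of this tree conversion, the sketch below shows, with purely hypothetical node types that do not reflect the actual IAC data structures, how an IDL operation node might be turned into an Ada subprogram-specification node that a later, separate pass prints.

```ada
--  Purely illustrative sketch of the tree-conversion idea: IDL AST nodes
--  are mapped to Ada syntactic-tree nodes, and a single code generator
--  prints those nodes in a later pass.  The node types are hypothetical
--  and do not reflect the actual IAC tree definitions.
with Ada.Text_IO;

procedure Tree_Conversion_Sketch is

   --  Drastically simplified node kinds
   type IDL_Operation_Node is record
      Name : String (1 .. 10);
      --  parameters, result type, raises clause ... omitted
   end record;

   type Ada_Subprogram_Node is record
      Name : String (1 .. 10);
      --  formal part, return profile ... omitted
   end record;

   --  One tree converter per generated Ada unit builds such nodes, in any
   --  convenient order; nodes can still be overridden or removed before
   --  the single code generator finally prints them.
   function To_Subprogram_Node
     (Op : IDL_Operation_Node) return Ada_Subprogram_Node is
   begin
      return (Name => Op.Name);
   end To_Subprogram_Node;

   Op   : constant IDL_Operation_Node  := (Name => "echoString");
   Spec : constant Ada_Subprogram_Node := To_Subprogram_Node (Op);

begin
   --  Stand-in for the single, centralized code generator
   Ada.Text_IO.Put_Line ("function " & Spec.Name & " ... ;");
end Tree_Conversion_Sketch;
```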

Figure 4: Architecture of IAC

Figure 4 shows the new architecture of the IDL compiler; it differs from the classic architecture in two principal aspects. (1) In the frontend, the only checks done in the parser are syntactic (conformance to the IDL grammar). At the end of the parsing, the produced AST is an incomplete tree (most of the semantic aspects, such as scoped name resolution, are not present). The analysis phase completes this tree (resolution of the scoped names, etc.) and checks the semantic rules (specific rules for local interfaces, constant value ranges, etc.). The separation of the analyzer and the parser makes the compiler code clearer and more amenable to optimization. It also eases the upgrade of the compiler to a new version of the standard specifications. (2) In the backend, a new phase is added before the Ada 95 code generation: the tree conversion. The IDL AST is converted into several Ada subtrees by different tree converters (TC 1..n), one for each Ada file to be generated. Then, the code generation is performed from these subtrees by one single code generator. The advantage of the tree conversion is that we gain more flexibility when building the Ada tree: we can add nodes to the latter tree in an order that differs from the order of their transformation into source code, and we can even override or remove nodes from the tree. Having a single Ada code generator also makes the compiler easier to maintain: the layout of the Ada code (indentation, addition of semicolons after statements, comments) is centralized in one code generation module.

With this new compiler architecture, it became possible for us to optimize the generated code. We implemented this architecture and created a new IDL compiler: IAC (IDL Ada Compiler). IAC has been designed to accept several backends and already has, in addition to the Ada code generation backend, an IDL pretty printer and a backend that gives information about the IDL types used in a specification. The description of the IDL and Ada trees is given in a pseudo-IDL form, and all the routines to manipulate the trees are generated automatically from these descriptions. This makes the compiler easily extensible. Our first objective, a flexible and easily maintainable compiler, is achieved. In the next section, we present the optimizations we implemented in IAC, which achieve our second objective.

4. OPTIMIZING THE COMPILER

In this section, we present some optimizations of the generated code and give experimental results obtained with our new compiler architecture.

4.1 Determinism of execution time

The purpose of this optimization is to keep the execution time of a request as constant as possible.

4.1.1 Problem

When a servant receives a request and tries to handle it, it must find the corresponding operation to be executed. The easiest way to do this is to compare the request name with the names of all the operations in the interface (linear comparison). This approach makes the request execution time depend on the index of the operation in the interface. If an interface contains a large number of operations, the WCET (Worst Case Execution Time) of the request is far greater than the average execution time.

4.1.2 Approach

To resolve this problem, we use minimal perfect hash functions. A minimal perfect hash function is a hash function with the following two properties: (1) the size of the hash key set is equal to the size of the word set (minimal), and (2) the hash keys of all words are different, which means that there are no collisions (perfect). The consequence of the perfect property is that the fetching time of a word is constant. We implemented an algorithm that generates minimal perfect hash functions [2]. This algorithm has been integrated into the GNAT Ada compiler in the GNAT.Perfect_Hash_Generators package. The authors of [2] propose two versions of the algorithm: one that optimizes the CPU time and one that optimizes the memory space. Both versions have been implemented.
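To make the approach concrete, the sketch below shows how such a hash function could be produced for the operation names of an interface using GNAT.Perfect_Hash_Generators. The operation names, seed and generated package name are illustrative, and the exact parameter profile of the package may differ between GNAT versions; this is an outline of the idea, not the IAC implementation.

```ada
--  Outline of producing a minimal perfect hash function for the operation
--  names of an interface with GNAT.Perfect_Hash_Generators.  The names,
--  seed and generated package name are illustrative; the exact parameter
--  profile may differ between GNAT versions.
with GNAT.Perfect_Hash_Generators; use GNAT.Perfect_Hash_Generators;

procedure Generate_Echo_Hash is
begin
   --  Choose the optimization goal: CPU_Time or Memory_Space, matching
   --  the two variants of the algorithm discussed above.
   Initialize (Seed => 4321, Optim => CPU_Time);

   --  Register every operation name of the interface (echoLong00 .. 99).
   for I in 0 .. 99 loop
      declare
         Img : constant String := Integer'Image (I + 100);
      begin
         Insert ("echoLong" & Img (Img'Last - 1 .. Img'Last));
      end;
   end loop;

   --  Compute the function and emit an Ada package providing the
   --  collision-free, constant-time lookup used by the skeleton.
   Compute;
   Produce (Pkg_Name => "Echo_Hash");
   Finalize;
end Generate_Echo_Hash;
```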

4.1.3 Analysis of the results

The results of this optimization are given in Figure 5. The example application is a very simple interface that contains 100 operations echoLongXX (XX = 00 .. 99). All the operations do the same work: returning the long value given as parameter. The measured time is the time spent by the CORBA skeleton to find the right operation from the request name. To measure this time as precisely as possible, we execute each operation 100 times (so the total number of invocations is 10000) and we display the average time for each operation. The test was compiled using GNAT Pro 5.04a and executed on a Pentium 4 2.4 GHz machine running GNU/Linux 2.6.8.

The graph in Figure 5 shows the average operation fetching and request handling time for each operation. The increasing curve represents times measured without any optimization (linear comparison). The two other plots represent the two implementations of the minimal perfect hash function algorithm: the first (dotted curve) uses the version of the algorithm that optimizes CPU time and the second (solid curve) uses the version that minimizes memory size. We can see that, when using the linear comparison, which is the approach implemented by IDLAC, the invocation time for the 100th operation is by far larger than the invocation time for the first operation (+50% of the fetching time). However, when using the minimal perfect hash function algorithm (in either of its two versions) implemented in IAC, the invocation time is constant and independent of the operation index. The size penalty introduced by the hash functions is negligible compared to the executable size (Table 1). Yet, we can see that they are beneficial even if the interface contains only one operation. We also note that the version of the algorithm that optimizes CPU time does not give significantly better results than the version that optimizes memory size. The number of elements in the hash table is not high enough to show a significant gain.

Figure 5: Use of the minimal perfect hash functions in request handling (implementation fetching time in seconds vs. operation index; curves: No_Opt, CPU_Opt, Mem_Opt)

Linear Search: 3249.73 KB    CPU Time: 3248.79 KB    Memory Size: 3248.43 KB

Table 1: Server sizes for each optimization

4.1.4 Conclusion

With the minimal perfect hash function optimization, we ensure that the fetching time of a request is deterministic and independent of the index of the operation in an IDL interface. This allows for WCET analysis in real-time distributed applications.

4.2 Static handling of requests

4.2.1 Problem

As said in 2.2, the interoperability between the different distribution standards in PolyORB is ensured in large part by the use of the internal data representation (the Any type). For the CORBA personality in particular, this implies the dynamic handling of requests. Dynamic request handling has been defined by the OMG (chapters 7 and 8 of [7]) in order to allow type specification at runtime. The left part of Figure 6 gives more details of the dynamic request handling. The operation parameters are converted from the CORBA type to the Any type by the application layer of the middleware. Then the protocol layer converts them to the Common Data Representation (CDR), which is a standardised way to fill the communication buffer. The inverse is done when an application node gets the data from the communication buffer. We can see that handling a request dynamically implies performing 8 conversions from or to the CORBA::Any type. This compromises the speed of execution, especially when the types dealt with are complex (arrays, structures, etc.).
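The fragment below illustrates, for a single long parameter, the kind of Any conversion that the dynamic path performs on each side of the call. It is a minimal sketch assuming the standard CORBA Ada mapping (CORBA.Any, CORBA.To_Any, CORBA.From_Any); it is not PolyORB's internal code.

```ada
--  Minimal sketch of the Any conversions performed by dynamic request
--  handling, assuming the standard CORBA Ada mapping.  Real requests
--  repeat such conversions for every parameter, on both the client and
--  the server side.
with CORBA;

procedure Any_Round_Trip is
   Parameter : constant CORBA.Long := 42;
   Container : CORBA.Any;
   Recovered : CORBA.Long;
begin
   --  Application layer: wrap the typed value in a self-described Any
   Container := CORBA.To_Any (Parameter);

   --  Protocol layer: the Any content is then marshalled to CDR (not
   --  shown), and the inverse conversions happen on the receiving side.
   Recovered := CORBA.From_Any (Container);

   pragma Assert (Recovered = Parameter);
end Any_Round_Trip;
```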

4.2.2 Approach

The objective of this optimization is to handle requests statically. Instead of converting parameters to the Any type, putting them in a list, then converting them back to their original type and marshalling them into the communication buffer, we generate for each operation a marshaller procedure that writes the parameters directly into the communication buffer. The same is done on the server side with an unmarshaller procedure. The right part of Figure 6 shows the data cycle of a request using static request handling. The operation parameters are not converted to any other type: they are simply put in a record which is used by the protocol layer to fill the communication buffer; the inverse is done when we want to get the parameters from the communication buffer.

Using this approach, we lose the interoperability between different distribution standards (CORBA, DSA, etc.): the dynamic request handling in PolyORB and the use of the Any type are necessary for this interoperability [8]. We also become dependent on the GIOP protocol personality. This is a minor limitation because the optimization is applied only to CORBA applications and only when the user requests it (at runtime or at compile time). The default behaviour of PolyORB remains dynamic request handling.
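As an illustration, the following sketch shows the shape of the per-operation argument record and marshalling subprograms that such a compiler could generate. All names, including the buffer abstraction, are hypothetical and do not correspond to the actual PolyORB or IAC API.

```ada
--  Illustrative sketch of the constructs generated for static request
--  handling.  Echo_String_Args, the Marshall/Unmarshall subprograms and
--  Buffer_Type are hypothetical names, not the PolyORB or IAC API.
with CORBA;

package Echo_Static_Sketch is

   --  Opaque stand-in for the communication buffer handled by the
   --  protocol (GIOP) personality.
   type Buffer_Type is limited private;

   --  One record per operation: it plays the role of the Any list of the
   --  dynamic path, but keeps every argument in its original CORBA type.
   type Echo_String_Args is record
      S       : CORBA.String;   --  IN parameter
      Returns : CORBA.String;   --  operation result
   end record;

   --  Generated marshaller: writes the arguments directly in CDR into
   --  the communication buffer, without any Any conversion.
   procedure Marshall_Echo_String
     (Buffer : in out Buffer_Type;
      Args   : Echo_String_Args);

   --  Generated unmarshaller: rebuilds the arguments on the other side.
   procedure Unmarshall_Echo_String
     (Buffer : in out Buffer_Type;
      Args   : out Echo_String_Args);

private
   type Buffer_Type is limited null record;  --  placeholder representation
end Echo_Static_Sketch;
```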


Figure 6: Different ways of request handling

The implementation of static request handling required additional constructs in the middleware API. The major modification performed on the middleware code was the addition of a new data structure that is extended for each operation: it contains all the arguments of the operation, its return value if it exists, and accesses to the marshaller and unmarshaller subprograms, which are specific to the operation and generated automatically by the IDL compiler. This record plays the role of the Any list of the dynamic request handling.

4.2.3 Analysis of the results

The new backend architecture (with the tree converters) greatly simplified the implementation of static request handling in the generated code, especially for complex types (structures, arrays). Figure 7 shows experimental results obtained with IAC. The test was built and compiled under the same conditions as in 4.1.3. It compares the use of dynamic and static request handling. For each type, we measured the duration of 10000 calls of a simple echo function that returns its given argument. We note that for complex data types such as sequences (arrays), the gain in performance is important; this is due to the fact that for such types, the conversion to the Any type is also applied to their contents (as stated by the CORBA specifications). For a sequence of 1000 longs, static request handling is 5 times faster than dynamic handling.

Figure 7: Experimental results using static and dynamic request handling (10000 invocations)

4.2.4 Conclusion

With static request handling, we gained in request execution time and in its average deviation. This increases the performance of our middleware. This optimization can be used only when all the distributed application's nodes use the CORBA standard.

The communication buffers are allocated step by step while filling in the request parameters. We are currently enhancing this optimization in order to compute the communication buffer size and allocate it statically when sending the request. In addition to the static allocation of the buffer, we replaced the routines that align data in the buffer with Ada record representation clauses (§13.5.1 of [5]). Figure 8 shows experimental results using this optimization. The solid curve shows the marshalling time of a long sequence using the static request handling described in 4.2; the length of the sequence varies from 512 bytes to 16 kilobytes. The dotted curve gives the marshalling times using static buffer size allocation and Ada record representation clauses to align data in the communication buffer. The gain in marshalling time using this optimization is very important (almost 30 times faster).

Figure 8: Use of static buffers and Ada representation clauses
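To illustrate the kind of representation clause involved, the sketch below pins every component of a small header record to a fixed position, so that its in-memory image matches the intended wire layout and can be copied to the buffer without per-field alignment code. The record itself is invented for the example and is not a GIOP-defined structure.

```ada
--  Sketch of a record representation clause (see 13.5.1 of the Ada
--  Reference Manual) used to control the exact layout of marshalled data.
--  The type and its layout are illustrative only.
with Interfaces;

package CDR_Layout_Example is

   type Request_Header is record
      Message_Type : Interfaces.Unsigned_8;
      Flags        : Interfaces.Unsigned_8;
      Payload_Size : Interfaces.Unsigned_32;
   end record;

   --  The representation clause fixes every component's position, so the
   --  record image can be written to the communication buffer directly,
   --  without per-field alignment routines.
   for Request_Header use record
      Message_Type at 0 range 0 .. 7;
      Flags        at 1 range 0 .. 7;
      Payload_Size at 4 range 0 .. 31;
   end record;
   for Request_Header'Size use 64;

end CDR_Layout_Example;
```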

4.3 Global results

In this section, we present the performance of our middleware after the implementation of all the optimizations described in 4.1 and 4.2. The experimental results are shown in Figure 9. The test example is the same as in 4.1.3. The dotted curve gives the operation fetching and execution times before we implemented any optimization. The increasing solid curve gives the results for OmniORB, a very fast C++ CORBA ORB. The second solid curve (the horizontal one) shows the results of our middleware after the implementation of all the optimizations. We can see that the gain in performance in the optimized version varies from 50% (for the first operation) to 66% (for the 100th operation). The obtained times are very close to the OmniORB ones, and become better when an IDL interface contains more than 48 operations. The very high performance of OmniORB is due to the fact that it is heavily optimized for CORBA, which limits its configurability.

Figure 9: Performance comparison between PolyORB and OmniORB (implementation fetching time in seconds vs. operation index; curves: OmniORB default settings, PolyORB without optimizations, PolyORB with all optimizations)

5. CONCLUSIONS AND FUTURE WORK

In the last 15 years, distributed applications have become very important in several domains. The addition of real-time and embedded requirements to distribution makes the development of such applications complex. Middleware has been introduced to simplify the construction of distributed applications and to ensure interoperability between heterogeneous application nodes. CORBA is a standard that aims at easing the development of distributed applications. However, the CORBA specifications and the CORBA mapping specifications describe essentially the interfaces between nodes and leave the middleware designer free to choose the appropriate implementation behind these interfaces.

In this paper, we described a new IDL compiler and some optimizations we implemented in this compiler. These optimizations could not have been implemented without the introduction of an improved compiler architecture. We tested these implementations using IAC, our new IDL compiler. We also described some optimizations in the CORBA middleware implementation that make distributed applications efficient and deterministic. These optimizations are implemented in PolyORB [9], our free schizophrenic middleware (http://polyorb.objectweb.org).

The focus of our future work will be to complete the implementation of the CORBA specifications. First, we will enhance the use of asynchronism in PolyORB by implementing AMI (Asynchronous Method Invocation, chapter 22 of [7]). Second, we will implement objects passed by value, which is the second mode of object passing in CORBA (besides objects passed by reference). An additional interesting feature that could be implemented in IAC is type restriction: a set of operating modes under which IAC refuses to generate code for an IDL specification containing specific data types. For example, an embedded real-time distributed application should have a small footprint and has to be deterministic; data types such as unbounded strings and sequences should then not be used. Finally, it would be interesting to evolve the current IDL-to-Ada mapping in order to generate Ada 2005 code and benefit from its new features, especially in the real-time domain.

6. REFERENCES

[1] A. Aho, R. Sethi, and J. Ullman. Compilers: Principles, Techniques, and Tools. Addison-Wesley, 1986.
[2] Z. J. Czech, G. Havas, and B. S. Majewski. An Optimal Algorithm for Generating Minimal Perfect Hash Functions. Information Processing Letters, 43(5):257–264, 1992.
[3] E. Eide, K. Frei, B. Ford, J. Lepreau, and G. Lindstrom. Flick: A Flexible, Optimizing IDL Compiler. In Proceedings of the ACM SIGPLAN'97 Conference on Programming Language Design and Implementation (PLDI), Las Vegas, NV, June 1997.
[4] OMG. Ada Language Mapping Specification, Version 1.2, 2001.
[5] Intermetrics. Annotated Ada 95 Reference Manual. Technical report, 1995.
[6] OMG. Ada Language Mapping Specification, v1.2. OMG, Oct. 2001. OMG Technical Document formal/2001-10-42.
[7] OMG. Common Object Request Broker Architecture: Core Specification, Version 3.0.3. OMG, Mar. 2004. OMG Technical Document formal/04-03-12.
[8] T. Quinot, F. Kordon, and L. Pautet. From functional to architectural analysis of a middleware supporting interoperability across heterogeneous distribution models. In Proceedings of the 3rd International Symposium on Distributed Objects and Applications (DOA'01). IEEE Computer Society Press, Sept. 2001.
[9] T. Vergnaud, J. Hugues, L. Pautet, and F. Kordon. PolyORB: a schizophrenic middleware to build versatile reliable distributed applications. LNCS 3063:106–119, June 2004.
[10] S. Vinoski. Middleware "Dark Matter". IEEE Internet Computing, 6(5):92–95, 2002.
