Designing for Schedulability: Integrating Schedulability Analysis with Object-Oriented Design

Manas Saksena, Department of Computer Science, University of Pittsburgh, Pittsburgh, PA 15260, USA ([email protected])
Panagiota Karvelas, Department of Computer Science, Concordia University, Montreal, QC H3G 1M8, Canada ([email protected])
Abstract

There is a growing interest in using the object paradigm for developing real-time software. We believe that an approach that integrates the advancements in both object modeling and design methods, and real-time scheduling theory is the key to successful use of object technology for real-time software. Surprisingly, many past approaches to integrating the two either restrict the object models or do not allow sophisticated schedulability analysis techniques. In this paper we show how schedulability analysis can be integrated with object-oriented design. More specifically, we show how fixed priority scheduling theory can be applied to designs developed using UML-RT, a specialization of UML for real-time software. We show how a design model built with active objects, and asynchronous and synchronous message passing (as is the case in UML-RT) can be implemented such that the implementation can be analyzed for schedulability. We then develop the response time analysis for such implementations, with which a designer can quickly evaluate the impact of various implementation decisions on schedulability. In conjunction with automatic code-generation, we believe that this will greatly streamline the design and development of real-time software. Based on our interactions with some of the leading commercial vendors, we expect that the results of this work will be integrated with commercial tools in the near future.
1. Introduction

Object-oriented modeling and design has recently gained popularity in real-time circles, and there are a number of efforts to re-orient object-oriented technologies towards the modeling and design of real-time systems. One major reason for this is the growing complexity of real-time software. One of the key benefits of the object paradigm is that it allows designers to break up a complex software system into manageable pieces. Moreover, the object paradigm has evolved to augment the traditional programming language (e.g., C++) notions of classes and
objects with higher level modeling concepts allowing designs of complex system structures and behaviors. Such modeling concepts include (1) hierarchical specification of the software architecture of a system using objects, (2) behavioral modeling of objects using extended finite state machines, and (3) use cases and scenarios to model end-to-end system behaviors. These modeling concepts use visual notations that greatly increase the understanding of the system structure and behavior. The recently standardized Unified Modeling Language (UML) [3, 13] is an amalgamation of many of these concepts and notations, and is quickly becoming very popular with a number of UML based modeling and development tools offered by various commercial vendors. Several of these commercial tool vendors offer modeling and design tools targeted for real-time systems. Model based software design is an effective tool to counteract the increasing complexity of real-time software development, especially when used in conjunction with a design tool that provides a graphical integrated development environment and automates many of the tedious chores. When the models are executable, they can also be simulated to identify design flaws. Many commercial tools provide such capabilities. A much greater benefit from this approach comes if the design models can be automatically translated into an implementation for a desired target platform. With such automatic code-generation, the benefits of modeling extend through the product's life-cycle. Automatic code-generation is especially critical with an iterative and incremental style of software development. In the absence of code-generation, developers will often bypass the models and directly modify the code when pressed for time. Thus, models get out of sync with the code, and become less relevant.
When code generation is supported, models retain their usefulness, and the design model becomes the implementation [2, 17]. One of the first efforts to build design tools based on object-oriented modeling with automatic code-generation came from ObjecTime, Ltd. through their design tool ObjecTime Developer, which is based on the ROOM modeling language [17]. Both ROOM and ObjecTime have roots in the telecommunications community, and have evolved to address the needs of telecommunication systems. As these systems are primarily "soft real-time," little attention has been paid to addressing the timeliness requirements either during the modeling (e.g., there is no provision to specify timing constraints), or during the automatic code-generation (e.g., the code generator is optimized for average performance, but does not necessarily assure predictability in the worst-case). With the introduction of UML, ObjecTime has cooperated with Rational Software to develop UML-RT, which uses UML's in-built extensibility mechanisms to integrate ROOM concepts within UML. UML-RT and the code generation technology of ObjecTime Developer have been integrated into Rational Rose, in the new product Rational Rose Real-Time. A number of other commercial tool vendors have also built design tools using UML (or some variant of UML) and supporting partial or total code generation. Examples of these tools include Rhapsody from iLogix, ObjectGeode from Verilog, and Real-Time Studio from Artisan Software. However, despite the claim of being "real-time" software development tools, none of these tools (including Rational Rose RT) provides much support for a designer to reason about timeliness properties. Worse still, it is not even possible to directly use the schedulability
analysis tools in the market (e.g., TimeWiz from TimeSys, and RapidRMA from TriPacific Software) to perform schedulability analysis for designs developed using these tools. One impediment to this is that the underlying computational model of implementations generated by tools like Rational Rose RT is based on an event-triggered architecture, where threads are implemented as event-handlers. In such an implementation scheme, each thread handles multiple events, and is part of multiple end-to-end computations and timing constraints. This is quite unlike the tasking models assumed in real-time scheduling research, and assumed in schedulability analysis tools, where a thread usually handles a single event and is part of only one end-to-end computation and timing constraint. In this paper, we show how to integrate schedulability analysis techniques with object-oriented design models. The results in this paper are built on our prior experience with ROOM and ObjecTime Developer [14, 16], where we had shown the feasibility of conducting schedulability analysis for designs developed using such tools. The main contribution of this paper is the formal treatment of schedulability analysis for object-oriented designs and their automated implementations. Rather than introduce our own design method, we use UML-RT as the modeling language for design, and Rational/ObjecTime's code generation strategy for implementation. Within the scope of these, we make the necessary adjustments to enable schedulability analysis. There have been many attempts to make use of object technology for real-time software. Several of these attempts have come from industry (e.g., [17, 1, 5]), while others have come from academia [6, 4, 8, 9]. While some of these approaches have been geared towards soft real-time (e.g., [17]), most of them claim support for hard real-time.
These claims are mostly based on the assumption that real-time scheduling theory can be used to perform schedulability analysis. Unfortunately, traditional real-time scheduling theory results (e.g., [12, 11, 7, 19]) can be directly applied only when the object models are restricted to look like the tasking models employed in real-time scheduling theory, as has been done in [4, 8]. In other cases, either the claims are unsupported (e.g., [5]) or based on less sophisticated analysis (e.g., [1, 6]). To the best of our knowledge, ours is the first attempt to apply real-time scheduling theory to object model designs by making use of the state of the art in both fields. The rest of the paper is organized as follows. In Section 2, we describe the key concepts of object-oriented design models as well as strategies for automated implementations. Then, in Section 3, we formulate the schedulability analysis problem, and develop the necessary notations to explain our results. We develop the response time calculations for single-threaded implementations in Section 4, and for multi-threaded implementations in Section 5. Finally, we present some concluding remarks.
2. Object-Oriented Design: Modeling and Implementation

In object-oriented design models, an application is cast as a network of concurrent collaborating objects that communicate with each other using messages. In this section, we briefly overview the essential aspects of object-oriented design models and their automated implementations using UML-RT [18]. UML-RT is a specialization of UML, and is targeted for real-time systems. It is based on the ROOM modeling language [17], and uses UML's in-built extensibility mechanism of stereotypes to capture ROOM modeling language concepts in UML. UML-RT is supported by the Rational Rose Real-Time toolset, a leading design tool for object-oriented embedded real-time software development. We will make use of the ubiquitous cruise-control system to illustrate the modeling concepts and their use in design.

[Figure 1. Object Structure Diagram: the cruise-control system as capsules (brake, accel, lever, cruiseControl, throttle, speedometer) interconnected through ports (brakePort, accelPort, leverPort, throttlePort, speedometerPort).]
2.1. Structure Modeling The basic architectural entity in UML-RT is an active object; these active objects are called capsules. UML-RT extends the active objects of UML by making them fully encapsulated. Thus, capsules interact with each other only through sending and receiving messages via interface objects called ports that are specialized attributes of capsules. A capsule may have an internal structure that can be specified using an object diagram or collaboration diagram [3, 18]. The nodes of the internal structure are also capsules, which may have an internal structure of their own and so on. This hierarchical decomposition allows the modeling of complex system structures. Figure 1 shows an example of a system structure for a cruise-control system, consisting of several active objects, and interconnections between objects through ports.
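The capsule/port structure just described can be illustrated with a minimal sketch. All names and types below are ours, not the UML-RT runtime API; the point is only that capsules are fully encapsulated and interact solely through wired ports.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Illustrative sketch (not the UML-RT runtime API): a capsule exposes
// ports; wiring a port to a destination capsule lets messages flow
// without the capsules referencing each other directly.
struct Message { std::string signal; };

struct Port {
    std::function<void(const Message&)> peer; // set when the port is wired
    void send(const Message& m) const { if (peer) peer(m); }
};

struct Capsule {
    std::string name;
    std::vector<Message> inbox;               // messages delivered to us
    void receive(const Message& m) { inbox.push_back(m); }
};

// Wire port p so that messages sent on it arrive at capsule dest.
inline void connect(Port& p, Capsule& dest) {
    p.peer = [&dest](const Message& m) { dest.receive(m); };
}
```

For example, wiring a throttlePort of a cruiseControl capsule to a throttle capsule mirrors one of the connections in Figure 1.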
2.2. Behavior Modeling

The behavior of a capsule is represented by an extended finite state machine using state diagrams. A capsule remains dormant until an event arrives, i.e., when a message is received by the capsule¹. Incoming messages trigger transitions associated with the capsule's finite state machine. Actions may be associated with transitions as well as entry and exit points of a state. The sending of messages to other capsules is initiated by an action. Finite state machines can be hierarchically specified, such that a state can be decomposed into a finite state machine. Figure 2(a) gives an example of a finite state machine for a cruise-control system. Note the decomposition of the Automatic Control state into the Cruising and Resuming states (we have not shown the decomposition of the Manual Control state, for clarity).

¹We use events and messages as synonymous terms in this paper.

[Figure 2. Finite State Machine Behavior of an Active Object: (a) a finite state machine with states Manual Control and Automatic Control, the latter decomposed into Cruising and Resuming; transitions are triggered by events such as cruise, cruise off, resume, accel pressed, accel released, brake pressed, and timeout. (b) the behavioral life cycle of an active object as a flow chart: Initialize Object, Wait for Request, Handle Request (handling depends on the specific request type and object state; actions may call port.send(...) or port.invoke(...)), Terminate Object.]
In many real-time systems, activities are triggered periodically. Rather than have a separate notation for such activities, we allow them to be modeled within a finite state machine of an object through the use of "periodic timed events." These periodic timed events are generated by setting periodic timers from the application. These timed events also trigger transitions in the finite state machine and may have actions executing as a result of the transition. In the cruise control example, the feedback control loops for automatic control can be implemented using such periodic timers, as shown in the Cruising and Resuming states. Conceptually, each active object has its own thread of control. The finite state machine behavioral model imposes that only one transition at a time can be executed by a capsule. Thus, a run-to-completion paradigm applies to state transitions. Figure 2(b) depicts the behavioral life-cycle of an active object using a flow-chart of its conceptual thread of control. An active object behaves as a message handler, processing incoming requests (sent as messages). The actions associated with processing a message may include sending messages to other capsules. There are three such actions: (1) a send action asynchronously sends a message (signal event) to another capsule, (2) a call or invoke action synchronously sends a message (call event) to another capsule and blocks waiting for a reply, and (3) a return or reply action sends a message back to the caller, thereby unblocking it.
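The run-to-completion processing of events can be sketched as a tiny message handler. The capsule, its states, and the event names below are illustrative (loosely based on the cruise-control example), not generated UML-RT code: each call to handle consumes exactly one event and fires at most one transition, which runs to completion before the next event is handled.

```cpp
#include <cassert>
#include <string>

// Sketch: a capsule's behavior as an extended finite state machine with
// run-to-completion transitions. Events that have no transition in the
// current state are simply discarded.
class CruiseCapsule {
public:
    enum State { Manual, Cruising };
    State state = Manual;
    int throttleCommands = 0; // counts actions executed on timed transitions

    // One run-to-completion step: consume a single event, fire its transition
    // and all of the transition's actions before returning.
    void handle(const std::string& event) {
        if (state == Manual && event == "cruise") {
            state = Cruising;          // transition to Automatic Control
        } else if (state == Cruising && event == "timeout") {
            ++throttleCommands;        // action: e.g. send SetThrottle
        } else if (state == Cruising && event == "cruiseOff") {
            state = Manual;
        }
    }
};
```

The periodic "timeout" event stands in for a periodic timed event driving the feedback control loop.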
2.3. Detail Level Modeling UML-RT also supports the notion of “data classes” that can be used to model and implement detailed functionality. Furthermore, a programming language (e.g., C++) can be used to implement detail level functionality. Such code can be associated with entry and exit actions, and actions on transitions. The code can also be used for data classes, and in this way legacy code can be used. Typically, these data objects are fully encapsulated within an active object to avoid concurrency conflicts. However, sometimes it may be necessary to share these data objects – in such cases it is the application’s responsibility to ensure that the consistency of the data objects is maintained through appropriate synchronization mechanisms.
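When a data object must be shared across active objects, the text above leaves synchronization to the application. A minimal sketch of that responsibility, using a hypothetical class (the name and members are ours, not part of UML-RT), might look like this:

```cpp
#include <cassert>
#include <mutex>

// Sketch (hypothetical class): a data object shared between active
// objects, guarded by the application itself with a mutex, since the
// design model does not enforce any synchronization on shared data.
class SharedSpeed {
public:
    void set(double v) {
        std::lock_guard<std::mutex> lk(m_);
        kmh_ = v;
    }
    double get() {
        std::lock_guard<std::mutex> lk(m_);
        return kmh_;
    }
private:
    std::mutex m_;
    double kmh_ = 0.0;
};
```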
2.4. Automatic Code Generation

A key benefit of using UML-RT is that the modeling concepts are precise enough to support automatic code generation. Code generation from UML-RT models is supported through the use of a real-time execution framework. A real-time execution framework provides services needed for implementing the application design models. These services may include: (1) primitives for sending messages between objects, (2) primitives for setting timers, both one shot and periodic; (3) primitives for creating and destroying objects, etc. The real-time execution framework can be built as a library or middleware on top of a real-time operating system. There are potentially many ways of implementing the design models. The approach followed in Rational Rose Real-Time consists of one or more threads, where each thread acts as an event-handler, retrieving messages from an event queue and processing them one by one. Each active object is mapped to one of these threads. When messages are generated, they are placed in the event queue of the destination object's thread. A thread simply executes an event handling loop, processing one event at a time. Figure 3 shows the behavior of a thread as an event handler. The simple event-loop structure of a thread is consistent with the run-to-completion semantics of the active objects. In addition to application threads, additional threads may be used to implement the framework services. For example, a dedicated thread may be used to provide timing services. Such a timer thread would manage timers, and would deliver timeout messages to objects when their timers expire. Similarly, one or more threads may be used to wait for external events and deliver them as messages to the application objects.
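The event-handling loop just described can be sketched as follows. The types and function names are ours, not the Rational Rose Real-Time framework API; the sketch only shows the structure of a thread draining a priority queue of events, one run-to-completion step at a time.

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <vector>

// Sketch: a thread as an event handler. Each pending event carries a
// priority and a deliver callback that invokes the destination object's
// behavior (its finite state machine).
struct Event {
    int priority;                       // higher value = more urgent
    std::function<void()> deliver;      // calls destination object's FSM
};

struct ByPriority {
    bool operator()(const Event& a, const Event& b) const {
        return a.priority < b.priority; // max-heap: highest priority on top
    }
};

using EventQueue =
    std::priority_queue<Event, std::vector<Event>, ByPriority>;

// Drain the queue, processing each event to completion before the next.
inline void runEventLoop(EventQueue& q) {
    while (!q.empty()) {
        Event e = q.top();
        q.pop();
        e.deliver();                    // run-to-completion processing
    }
}
```

A real framework thread would block waiting for new messages instead of terminating when the queue is empty; that detail is omitted here.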
2.5. Scheduling and Priorities Just as there are potentially many ways to implement the models, there are potentially many ways to do the scheduling. In the implementation scheme described above, there are two levels of scheduling: (1) Within the context of a single thread, the enqueued events are scheduled in a non-preemptive manner. The simplest scheduler is a fixed event-priority scheduler. This is the scheme employed in Rational Rose RT. (2) Across the entire system, the threads are scheduled by the underlying real-time operating system. We will assume that the scheduling is done using preemptive (thread) priorities.
[Figure 3. Thread Behavior as an Event Handler: a flow chart with steps Initialize, Wait for Message, Deliver Message, and Terminate. Messages are retrieved from a priority queue; delivering a message calls the destination object's behavior method, e.g., obj = msg->dest; obj->fsm(msg).]
It should be clear by now that a designer must make a number of decisions that affect the scheduling and response times, and hence the schedulability of the system. These decisions include assignment of event and thread priorities, and assignment of active objects to threads. We note that in a well designed system, these decisions should not affect the functional correctness of the system. Therefore, these decisions should be guided solely by their impact on schedulability (and possibly other non-functional requirements).
3. Schedulability Analysis: Problem Description

While finite state machine behavior models of objects are useful for code-generation, they are not very conducive to reasoning about end-to-end behaviors, or scenarios. UML-RT uses sequence diagrams to model end-to-end system behaviors, or scenarios. However, sequence diagrams are weak in expressing a detailed specification of end-to-end behaviors, which is necessary for schedulability analysis. To express our ideas, we extend the sequence diagram notation to capture detailed end-to-end behaviors. We use the term transaction to refer to the entire causal set of actions executed as a result of an external event, i.e., an event coming from an external source. These external events also include timed events such as those generated from periodic timers. Since ultimately all processing is initiated by some external event, these transactions can capture all the computations in the design model. However, it is often useful to restrict attention to a few time-critical transactions. Also, for a single transaction, it is often useful to restrict attention to a particular behavioral path (e.g., under normal operating mode) rather than all possible behavioral paths. In the extended sequence diagram, we capture the details of the processing associated with an event. In the rest of the paper, we will use the term action to refer to the entire run-to-completion processing for an event. Each action is, in general, a composite action, composed from primitive sub-actions. These primitive sub-actions include the send, call, and return actions described above, which generate internal events through sending messages
[Figure 4. Extended Sequence Diagram Representation of a Transaction: the Feedback Control transaction across the Cruise Control, Speedometer, and Throttle objects. A timeout event triggers the cruise control, which synchronously calls GetSpeed on the speedometer, computes the control law, and asynchronously sends SetThrottle to the throttle; actions and sub-actions are annotated with their computation times.]
to other objects. For the purpose of this paper, we will restrict our attention to a single sequence of sub-actions, although we note that conditional behavior (as may happen with guard conditions associated with transitions) can easily be incorporated. We will also assume that if an action is triggered by a synchronous message (generated from a call action), then that action must have a single reply action, and that this action must be the last sub-action. Figure 4 depicts the transaction Feedback Control for a cruise-control system. The transaction is driven by a timeout message. The diagram is color-coded to simplify understanding of the transaction. As can be seen, the cruise control active object obtains the speed from the speedometer object using a synchronous call action. It then does the control law calculations and generates a throttle output which is sent asynchronously to the throttle object. The throttle object then sends a command to the actuator. While we do not show it in the figure, the sequence diagram for a transaction can easily be extended to include sub-actions associated with code executed by the real-time execution framework. Currently, these sequence diagrams for transactions must be manually extracted from the design models, although we believe that this process can be automated. One hurdle is that actions are specified in a detail-level language (C++), making it difficult to extract the necessary information. Note also that there are many "prespecified" actions that are automatically generated with the code. These actions also include calls to the real-time execution framework. An automatic generation process can easily include these actions as well. The sequence diagrams are also useful to capture timing constraints [13, 3]. For the purposes of this paper, we are concerned about (1) arrival rates of external events, and (2) end-to-end deadlines.
The end-to-end deadlines can be specified on any action in a transaction; the deadlines are end-to-end in the sense that they are relative to the arrival of the transaction (external event).
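The information a transaction's extended sequence diagram carries for analysis can be captured in a small data model. The field names below are ours (a sketch, not the paper's tool format): the triggering external event's rate, the causal chain of actions along the chosen behavioral path, and an end-to-end deadline measured from the transaction's arrival.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of a transaction for analysis purposes (illustrative field names).
struct SubAction {
    double wcet;                 // worst-case computation time
    std::string sends;           // internal event generated, "" if none
};

struct Action {
    std::string name;
    int priority;                // inherited from the triggering event
    std::vector<SubAction> subActions;
};

struct Transaction {
    double period;               // min inter-arrival of the external event
    std::vector<Action> actions; // in causal order along the chosen path
    double endToEndDeadline;     // relative to the external event's arrival
};

// The computation time of an action is the sum over its sub-actions.
inline double actionCost(const Action& a) {
    double c = 0.0;
    for (const auto& s : a.subActions) c += s.wcet;
    return c;
}
```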
3.1. Notation

Let $E = \{e_1, e_2, \ldots, e_n\}$ be the set of all event streams in the system, where $e_1, \ldots, e_{n_x}$ denote external event streams, and the remaining ones internal. Each external event stream $e_T$ corresponds to a transaction $T$. Associated with each event $e_i$ is an action $a_i$, which represents the entire "run-to-completion" processing associated with the event. An action is decomposed into a sequence of sub-actions $a_i = \langle a_{i,1}, a_{i,2}, \ldots, a_{i,n_i} \rangle$, where each $a_{i,j}$ denotes a primitive action.

Event and Action Properties. Each external event stream $e_i$ is characterized by a function $\eta_i(t)$ that gives the maximum number of event arrivals in any interval $[x, x+t)$, where the interval is closed at the left, and open at the right. Also, we use the notation $\bar{\eta}_i(t)$ to indicate the maximum number of event arrivals from event source $e_i$ in any right-closed interval $(x, x+t]$. For example, an event stream with a minimum inter-arrival time of $T$ has $\eta_i(t) = \lceil t/T \rceil$ and $\bar{\eta}_i(t) = \lfloor t/T \rfloor + 1$. In contrast to external events, the rates of internal events are dependent on the execution, and are not specified.

Each action is characterized as either asynchronously triggered or synchronously triggered, depending on whether the triggering event is asynchronous or synchronous. All external events are assumed to be asynchronous. Each action $a_i$ executes within the context of an active object (capsule) $O(a_i)$. Each action $a_i$ is also characterized by a priority $p(a_i)$, which is the same as the priority of its triggering event $e_i$. Each sub-action $a_{i,j}$ of $a_i$ is characterized by a computation time $C(a_{i,j})$ (abbreviated as $C_{i,j}$). The computation time of an action is simply the sum of its component sub-actions, i.e., $C(a_i) = C_i = \sum_j C_{i,j}$. Also, the
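The arrival functions for an event stream with minimum inter-arrival time T can be coded directly. This is a sketch with our own function names, implementing the ceiling and floor expressions given above for $\eta$ and $\bar{\eta}$.

```cpp
#include <cassert>
#include <cmath>

// Sketch: arrival functions for a periodic or sporadic event stream with
// minimum inter-arrival time T.
// eta(t, T): max arrivals in a left-closed, right-open interval [x, x+t).
inline long eta(double t, double T) {
    return static_cast<long>(std::ceil(t / T));
}

// etaBar(t, T): max arrivals in a right-closed interval (x, x+t].
inline long etaBar(double t, double T) {
    return static_cast<long>(std::floor(t / T)) + 1;
}
```

For a stream with T = 10, an interval of length 25 placed to start exactly at an arrival contains three arrivals (at offsets 0, 10, 20), matching eta(25, 10) = 3.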
computation time of any sequential sub-group of sub-actions $a_{i,j}$ to $a_{i,k}$ (where $j \le k$) is $C_{i,j..k} = \sum_{l=j}^{k} C_{i,l}$.

Each event and action is part of a transaction. For the rest of this paper, we will use a superscript to denote transactions. For example, $a_i^T$ represents an action and $e_i^T$ an event, both of which belong to transaction $T$. Adding the superscript for external events is unnecessary, since there is exactly one external event associated with each transaction, i.e., external event $e_T$ belongs to transaction $T$. In this case, the superscript will be omitted.

Communication Relationships. We have two types of communication relationships between actions, asynchronous and synchronous. An asynchronous relationship $a_i \leadsto a_j$ indicates that action $a_i$ generates an asynchronous signal event $e_j$ (using a send sub-action) that triggers the execution of action $a_j$. Likewise, a synchronous relationship $a_i \Rightarrow a_j$ indicates that action $a_i$ generates a synchronous call event $e_j$ (using a call sub-action) that triggers the execution of action $a_j$.

We also use a "causes" relationship, and use the symbol $\rightarrow$ for that purpose. The relationship captures the causal relationship between actions. Both asynchronous and synchronous relationships are also causes relationships, i.e., $(a_i \leadsto a_j) \Longrightarrow (a_i \rightarrow a_j)$, and $(a_i \Rightarrow a_j) \Longrightarrow (a_i \rightarrow a_j)$. Moreover, the causes relationship is transitive, thus $((a_i \rightarrow a_j) \wedge (a_j \rightarrow a_k)) \Longrightarrow (a_i \rightarrow a_k)$. When $a_i \rightarrow a_j$, we say that $a_j$ is a successor of $a_i$, since $a_i$ must execute (at least partially) for $a_j$ to be triggered.
Synchronous Set. For the purpose of analysis, we also need to define the term "synchronous set of $a_i$". The synchronous set of $a_i$ is a set of actions that is built starting from action $a_i$ and adding all actions that are called synchronously from it. The process is repeated recursively until no more actions can be added to the list. Let $S(a_i)$ denote the synchronous set of $a_i$, and let $C(S(a_i))$ denote the cumulative execution time of all the actions in this synchronous set. We also call $a_i$ the "root" action of this synchronous set.

[Table 1. Example System for Response-Time Analysis: three periodic transactions, with periods 60, 300, and 1000. For each transaction, the table lists its events with their types (external, signal, or call), the priority and active object of each triggered action, the sequence of sub-actions, their computation times, and the internal events generated by each sub-action.]
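The synchronous-set construction just defined is a transitive closure over the synchronous (call) relationships, which can be sketched directly. The table and type names below are ours, for illustration only.

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <vector>

// Sketch: per-action information needed to build synchronous sets.
struct ActionInfo {
    double cost;                      // C(a): total sub-action time
    std::vector<std::string> calls;   // actions invoked synchronously
};

using ActionTable = std::map<std::string, ActionInfo>;

// S(a): start from the root and transitively add synchronously called
// actions until no more can be added.
inline std::set<std::string> synchronousSet(const ActionTable& tbl,
                                            const std::string& root) {
    std::set<std::string> s;
    std::vector<std::string> work{root};
    while (!work.empty()) {
        std::string a = work.back();
        work.pop_back();
        if (!s.insert(a).second) continue;   // already visited
        for (const auto& callee : tbl.at(a).calls) work.push_back(callee);
    }
    return s;
}

// C(S(a)): cumulative execution time of the synchronous set.
inline double synchronousSetCost(const ActionTable& tbl,
                                 const std::string& root) {
    double c = 0.0;
    for (const auto& a : synchronousSet(tbl, root)) c += tbl.at(a).cost;
    return c;
}
```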
3.2. A Simple Example We will use a simple example system through the rest of this paper to illustrate our ideas. As shown in Table 1, the example system consists of three periodic transactions. The table shows, for each transaction, the events and the actions comprising the transaction. For each action, we also show the sub-actions, their computation times as well as which internal events are generated by which sub-action. Note that within each transaction we have included both synchronous (call) and asynchronous (signal) events. Furthermore, each transaction traverses multiple objects, and has multiple priorities (due to different deadlines for different parts of the transaction). We will be primarily interested in showing how to compute worst-case response times. Therefore, we do not show any deadlines in the example system.
3.3. Response Time Analysis: Overview of our Approach

Schedulability analysis in our system is carried out by computing response times of actions. The response time of an action $a_i^T$ is derived relative to the arrival of the external event that triggers the transaction $T$. We use the well-known critical instant/busy-period analysis [12, 11, 7] developed for fixed priority scheduling, but adapt it to our computational model. In our model, events (and therefore actions) have fixed priorities. Moreover, event priorities reflect real application priorities. A thread may or may not have a fixed priority, depending on the implementation strategy. Accordingly, we define the term priority inversion to refer to event priorities, and not thread priorities. Thus, a priority inversion occurs if a lower priority event is processed while a higher priority event is pending. In the same way, a level-$i$ busy period is a continuous interval of time during which events of priority $i$ or higher are being processed.

In our response time analysis for action $a_i^T$, we compute the response time of the action for successive arrivals of the transaction, starting from a critical instant, until the end of the busy period. Let $s_i^T(q)$ denote the worst-case start time for instance $q$ of action $a_i^T$ (i.e., when instance $q$ of the action gets the CPU for the first time), starting from the critical instant (time 0). Likewise, let $f_i^T(q)$ denote the worst-case finish time, starting from the critical instant, assuming the events arrive at the maximum rate. Let $arr_T(q)$ denote the arrival time of instance $q$ of external event $e_T$, i.e., $arr_T(q) = \min \{ t \mid \bar{\eta}_T(t) = q \}$. We iteratively compute $f_i^T(q)$ for $q = 1, 2, 3, \ldots$, until we reach a $q = Q$ such that $f_i^T(Q) \le arr_T(Q+1)$. Then, the worst-case response time of action $a_i^T$ is given by:

$$R_i^T = \max_{q = 1, \ldots, Q} \left( f_i^T(q) - arr_T(q) \right) \qquad (1)$$
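The busy-period iteration behind Equation (1) can be sketched generically. The arrival times and the worst-case finish times are supplied by the caller (how the finish times are computed depends on the implementation scheme, developed in Sections 4 and 5); the function names are ours.

```cpp
#include <algorithm>
#include <cassert>
#include <functional>

// Sketch of the busy-period iteration of Equation (1): examine successive
// transaction instances q = 1, 2, ... starting from the critical instant,
// stop when the finish time of instance Q precedes the arrival of instance
// Q+1 (the busy period ends), and return the largest response time seen.
inline double worstCaseResponseTime(
    const std::function<double(int)>& arr,    // arr(q), arrival of instance q
    const std::function<double(int)>& finish  // f(q), worst-case finish time
) {
    double r = 0.0;
    for (int q = 1;; ++q) {
        double f = finish(q);
        r = std::max(r, f - arr(q));
        if (f <= arr(q + 1)) break;  // busy period ends before next arrival
    }
    return r;
}
```

For example, with a period of 10 and finish times f(1) = 12 and f(2) = 18, the first instance overruns into the second arrival, the busy period closes after the second instance, and the worst response time is 12.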
Response time analysis for a particular design model can only be carried out after some implementation decisions have been made. Such implementation decisions include whether a single-threaded or a multi-threaded implementation is used. In either case, we must also assign priorities to threads. In the multi-threaded case, a mapping of objects to threads must be made, along with a decision on thread priorities. These decisions are part of what we call the "implementation synthesis" problem. While these decisions impact response times, they should not (at least in a well-designed system) affect functional correctness. Armed with the ability to reason about response times, we believe that a solution to the implementation synthesis problem can be automated. Essentially, a solution requires a search through candidate implementation decisions to find one that satisfies the timing requirements. Automated implementation synthesis complements automatic code-generation and allows the developer to focus on the design. However, in this paper we make only brief remarks on how an implementation may be synthesized; an initial approach based on the response time analysis presented in this paper can be found in [15]. In this paper, we focus on how response time analysis can be conducted. In the next section, we consider single-threaded implementations, as it is the simpler case. Then, in Section 5 we build on the results of the single-threaded case and develop the analysis for the multi-threaded case. While our analysis utilizes the concepts of existing fixed priority scheduling theory, we hope to show the reader that the application of those concepts is not straightforward, especially in the multi-threaded case. One important insight that enables the analysis is the realization that in this model events (and not threads) represent the tasks in the system.
Various previous attempts do not make this distinction (see for example [6, 4]), and therefore result in either restricting the analyzable part of object models (e.g., [4]), or using less sophisticated analysis (e.g., the treatment of event sequences in [6]). As we will see in the section on multi-threaded implementations, it is useful to view threads as special “mutex” resources – this insight allows us to use threads and thread priorities in a way that facilitates response time analysis.
4. Single Threaded Implementations

In a single threaded implementation, a single thread processes pending events in priority order. Since there is only one thread, there is only one level of scheduling; the resultant scheduling is simply non-preemptive priority scheduling of events (actions). There is one special case, however: whenever a sub-action makes a synchronous call to some other action, the current action pauses its execution until the synchronously triggered action replies. In other words, synchronously triggered actions called within the same thread are really an extension of the caller action.

We assume that priorities have already been assigned to events; an action inherits the priority of its triggering event, which we denote $\pi(a_i)$ for an action $a_i$. Also, for the purpose of the response time analysis presented below, we assume that a synchronously triggered action inherits its priority from its caller. Note that the priorities of synchronously triggered actions are simply a matter of convenience; they have no significance for scheduling. Therefore, we assume the following to be true: if an action $a_k$ is synchronously triggered by an action $a_i$, then

$$\pi(a_k) = \pi(a_i) \quad (2)$$
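The single-threaded processing model just described can be sketched as a small dispatcher: one priority-ordered event queue for the whole system, with each action run to completion. The controller class, handler table, and event names below are our own illustrative inventions, not part of UML-RT.

```python
import heapq

# Minimal sketch of a single-threaded controller: events are dispatched in
# priority order (FIFO among equal priorities), and each action runs to
# completion non-preemptively. Actions may post further events.
class SingleThreadedController:
    def __init__(self):
        self._queue = []          # heap of (-priority, seq, event)
        self._seq = 0             # monotonic counter: FIFO tie-break

    def post(self, priority, event):
        heapq.heappush(self._queue, (-priority, self._seq, event))
        self._seq += 1

    def run(self, handlers, log):
        while self._queue:
            _, _, event = heapq.heappop(self._queue)
            log.append(event)     # record dispatch order
            handlers[event](self) # the action; may post new events
```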
As is common in fixed priority scheduling literature, we use the term blocking to refer to the effect of lower priority actions on an action’s response time, and interference to refer to the effect of higher or equal priority actions on an action’s response time. Thus, blocking refers to priority inversion.
4.1. Blocking Time

We use $B(a_i)$ to denote the maximum blocking time of an action $a_i$. In single-threaded implementations, since scheduling is non-preemptive, priority inversion is limited to one synchronous set of actions with a lower priority root action; this action must have started executing just before the transaction containing $a_i$ arrives. Let $\sigma(a_j)$ denote the synchronous set of $a_j$ (the action together with all actions it triggers, directly or transitively, through synchronous calls), and let $C(\sigma(a_j))$ denote its total execution time. The maximum blocking time of an action is then given by:

$$B(a_i) = \max \{\, C(\sigma(a_j)) \mid \pi(a_j) < \pi(a_i) \,\} \quad (3)$$

Note that this blocking term may come from any transaction, which is why we omit the superscript denoting the transaction.
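Computationally, Equation (3) is a single pass over the lower-priority root actions. A minimal sketch (the dictionaries of priorities, execution times, and synchronous-call edges are hypothetical inputs, not the paper's notation):

```python
def sync_set_time(action, wcet, sync_calls):
    """C(sigma(a)): execution time of the action plus everything it
    triggers, directly or transitively, through synchronous calls."""
    return wcet[action] + sum(sync_set_time(c, wcet, sync_calls)
                              for c in sync_calls.get(action, []))

def blocking(action, roots, prio, wcet, sync_calls):
    """Equation (3): max of C(sigma(a_j)) over lower-priority roots a_j."""
    return max((sync_set_time(a, wcet, sync_calls)
                for a in roots if prio[a] < prio[action]), default=0)
```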
4.2. Busy Period Analysis

The critical instant for an action $a_i^j$ (action $a_i$ of transaction $T_j$) occurs when (1) all transactions arrive at the same time (we will denote this as time 0), and (2) the root action of the synchronous set of actions that contributes the maximum blocking term $B(a_i)$ has just started executing prior to time 0. Note that this is true because we assume that every transaction is made up of actions with non-increasing priorities, i.e., $\pi(a_i) \ge \pi(a_k)$ whenever $a_k$ is triggered by $a_i$. Furthermore, to get the worst-case response time, all transactions should arrive at their maximum rate, i.e., following the arrival function $A_j(t)$, which bounds the number of arrivals of transaction $T_j$ in the closed interval $[0, t]$.

Since actions are executed in a non-preemptive manner, once an instance of $a_i^j$ starts executing, no other action can interrupt it other than the synchronous calls that it makes. Consequently, the worst-case finish time of the $q$-th instance of $a_i^j$ is

$$F_i^j(q) = S_i^j(q) + C(\sigma(a_i^j))$$

where $S_i^j(q)$ is its worst-case start time and $\sigma(a_i^j)$ is the synchronous set of $a_i^j$. Having resolved the finish time in terms of the start time, we still need a way to compute $S_i^j(q)$. We do so by considering all actions that can execute before the action under consideration, namely the actions that arrive during the interval $[0, S_i^j(q)]$ and can be executed before $a_i^j(q)$. We can find that as follows:

Interference from Other Transactions. For any transaction $T_k$, $k \ne j$, all actions that have higher or equal priority² must be considered for interference. Again, any synchronous calls made recursively from these actions must also be considered; however, they are already included in the calculation because of our earlier assumption that the priority of a synchronously triggered action is the same as that of the caller action. Also, interference is considered for all event arrivals in the closed interval $[0, S_i^j(q)]$. Note that we have to take the closed interval, because if a higher priority action becomes enabled exactly at time $S_i^j(q)$, then $a_i^j(q)$ cannot begin executing.

Interference from the Same Transaction. The interference from the same transaction (i.e., $T_j$) is a little trickier to calculate. We must distinguish between previous instances, i.e., $1, 2, \ldots, q-1$, and all other instances after that. For all past instances $1, 2, \ldots, q-1$, the interference term is similar to that of other transactions, i.e., we must consider all higher or equal priority actions (and their synchronously triggered actions, but again, the latter are already accounted for). On the other hand, for instances $q, q+1, \ldots, A_j(S_i^j(q))$, only those actions that are not successors (as defined by the causes relationship) of $a_i^j$ can contribute to interference. This is true because if $a_i^j(q)$ has not started executing, then none of its future instances, nor any of its successors, could have begun execution either.
Combining all this, $S_i^j(q)$ is given by the lowest value of $t$ that satisfies the following equation:

$$t = B(a_i) + \sum_{k \ne j} \Big( A_k(t) \sum_{\substack{a_l \in T_k \\ \pi(a_l) \ge \pi(a_i)}} C(a_l) \Big) + (q-1) \sum_{\substack{a_l \in T_j \\ \pi(a_l) \ge \pi(a_i)}} C(a_l) + \big(A_j(t) - q + 1\big) \sum_{\substack{a_l \in T_j,\ \pi(a_l) \ge \pi(a_i) \\ a_l \not\succ a_i^j}} C(a_l) \quad (4)$$

where $a_l \not\succ a_i^j$ denotes that $a_l$ is not a successor of $a_i^j$.

The above description works for an action $a_i^j$ that was asynchronously triggered. Let us now consider a synchronously triggered action $a_s^j$, and let $a_r^j$ be the asynchronously triggered action such that $a_s^j$ belongs to $\sigma(a_r^j)$, i.e., to the synchronous set of $a_r^j$. Then we have a chain of actions, starting from $a_r^j$ up to $a_s^j$, that only execute partially in this interval and are blocked waiting for $a_s^j$ to execute. Note that there must be exactly one such action $a_r^j$, so there is no ambiguity.
² Equal priority actions are considered as well to ensure that we get the worst case. However, only some equal priority actions will interfere, because events are queued in FIFO order; taking all equal priority actions into account for interference gives rise to pessimistic analytical results.
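Equation (4) has the usual fixed-point form of response-time analysis and can be solved by iteration. The sketch below makes simplifying assumptions that are ours, not the paper's: each transaction is strictly periodic, so the arrival function over the closed interval $[0, t]$ is $\lfloor t / T_k \rfloor + 1$, and the per-transaction sums over higher-or-equal-priority actions (and over the non-successor subset) are precomputed into `hp` and `hp_nonsucc`.

```python
import math

def start_time(q, j, blocking, periods, hp, hp_nonsucc, horizon=100_000):
    """Lowest t with t = B + sum_{k!=j} A_k(t)*hp[k] + (q-1)*hp[j]
                          + (A_j(t) - q + 1)*hp_nonsucc[j]   -- Equation (4)."""
    t = 0
    while t <= horizon:
        demand = blocking + (q - 1) * hp[j]          # blocking + past instances
        for k, period in periods.items():
            arrivals = math.floor(t / period) + 1    # arrivals in closed [0, t]
            if k == j:
                demand += max(0, arrivals - q + 1) * hp_nonsucc[j]
            else:
                demand += arrivals * hp[k]
        if demand == t:
            return t                                 # fixed point reached
        t = demand
    return None  # no convergence within the horizon: deemed unschedulable
```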
Table 2. Response Times for Single Threaded Implementation (worst-case response times of the last action executing at each priority level: 295, 382, 580, 719, 590, and 416; the per-action labels did not survive extraction).
This changes the interference from instances $q, q+1, \ldots, A_j(S_s^j(q))$ of transaction $T_j$. For instance $q$, only a part of the synchronous set $\sigma(a_r^j)$ has executed, and this should be reflected in the equation. Rather than extend the notation to explicitly define this subset, we simply denote this subset of sub-actions by $pre(\sigma(a_r^j))$ and the computation time associated with the subset by $C(pre(\sigma(a_r^j)))$. For instances $q+1$ onwards, none of the actions in the synchronous set $\sigma(a_r^j)$ can cause interference, since their previous instance ($q$) is blocked. The blocking term, the interference from other transactions, and the interference from previous instances ($1, \ldots, q-1$) of the same transaction are all unaffected, since we have assumed $\pi(a_s^j) = \pi(a_r^j)$.

Based on the above, $S_s^j(q)$ for synchronously triggered actions is given by the lowest value of $t$ that satisfies the following equation:

$$t = B(a_s) + \sum_{k \ne j} \Big( A_k(t) \sum_{\substack{a_l \in T_k \\ \pi(a_l) \ge \pi(a_s)}} C(a_l) \Big) + (q-1) \sum_{\substack{a_l \in T_j \\ \pi(a_l) \ge \pi(a_s)}} C(a_l) + \big(A_j(t) - q + 1\big) \sum_{\substack{a_l \in T_j,\ \pi(a_l) \ge \pi(a_s) \\ a_l \not\succ a_s^j,\ a_l \notin \sigma(a_r^j)}} C(a_l) + C\big(pre(\sigma(a_r^j))\big) \quad (5)$$
4.3. Response Times for the Example System

Let us now revisit our example system and apply the analysis presented above to compute the response times. Table 2 shows the response times found by this analysis. We have only shown the response times of the last action that executes within a transaction at a particular priority level. We can see that the response times of all actions are large due to the action with an execution time of 250: since the implementation is single threaded, it causes blocking for all the actions. The effect of lower priorities is also reflected in the larger response times of the lower-priority actions, due to greater interference. Also interesting is the response time of the lowest priority action, which is relatively lower. This is the result of non-preemptive scheduling: once the action starts executing, it executes as if its priority were raised to the highest priority in the system.
4.4. Discussion

A single threaded implementation provides a low-overhead, non-preemptive scheduling strategy, and is very effective for many applications. However, there are certain situations in which a single threaded implementation may not be suitable. First, if an object makes a blocking call, then not only the object but the entire system is blocked. Second, a single threaded implementation cannot take advantage of multiple processors in a system. Furthermore, single threaded implementations may not be suitable for applications with compute-intensive actions when some events have tight deadlines; this may happen, for example, in many signal processing applications. In such cases, the system cannot respond to any new events once an action starts executing, which may lead to missed deadlines. Our example system illustrates this situation: some actions have large computation times compared to others, and these actions can easily cause missed deadlines. This situation is more likely to occur when synchronous message passing is used, because the non-preemptive section includes not only the action but also all synchronous calls made by that action.
5. Multi-Threaded Implementations When non-preemptive scheduling is inappropriate, multiple threads may be introduced to achieve preemptability of event processing. By doing so, the threads and their priorities become artifacts of implementation, and we can view event priorities as the real application priorities. Thus, assuming that event priorities have already been assigned, the objective is to process higher priority actions in preference to lower priority ones. However, as we argue below, unless the multiple threads and their priorities are managed carefully, a solution with multiple threads may not give rise to any advantages over a single threaded implementation. We will begin with the assumption that objects have been assigned to threads, and consider the assignment of thread priorities. Static Thread Priorities. The simplest approach to manage thread priorities is to assign them static priorities. In fact, this is the approach followed in Rational Rose RT (and other tools as well). While the designer is free to choose any priority for the threads, a good heuristic is to assign a thread a priority that is maximum of all events that it processes. This is similar to highest locker protocol for mutex resources [10]. The response time calculations with static thread priorities are surprisingly hard due to the two level scheduling, and the difficulty of constructing an appropriate critical instant. Therefore, we will make qualitative arguments about the inappropriateness of static thread priorities. One of the problems with static thread priority assignment is that events at a lower priority in a higher priority thread have their priorities boosted up artificially. Due to this, there are possibilities of priority inversion from multiple lower priority actions. This is clearly undesirable in many situations. Another, and potentially more important, problem with static thread priorities is the fickleness it introduces in the design. 
Since blocking effects can be cumulative, the chance that the addition of new low-priority functionality affects the timeliness of time-critical transactions increases. In the single-threaded case, since blocking was
limited to one synchronous-set of actions, new functionality could be safely added so long as the execution time of any new synchronous-set did not exceed the previous upper-bound. Still, however, it may be perfectly acceptable to use static thread priorities in specific situations where it is easy to estimate the amount of priority inversion. Dynamic Thread Priorities. A solution to avoid such (potentially unbounded) priority inversions is to dynamically manage a thread’s priority such that the thread’s priority equals the highest priority pending message in its event queue, including the currently processed event, if any. When thread priorities are dynamically managed like this, it can be shown that priority inversion is bounded. The thread priority can be automatically managed by the real-time execution framework, transparent to the application. This needs to be done at two places. First, when a message is deposited into a thread’s event queue, then the receiver thread should inherit the event priority, if it is higher than the receiver thread’s current priority. Similarly, at the end of event processing, a thread’s priority should be automatically re-adjusted to match the highest priority pending event. In our earlier work, we had implemented this dynamic thread priority management scheme in the ObjecTime Developer tool set. Experiments with the modified tool set showed that the scheme worked as expected [16]. While priority inversion with the above scheme is bounded, we may still get priority inversion from multiple lower priority events. In the worst case, this can occur with one lower priority event from each of the threads. Note that the above dynamic priority scheme is much like the priority inheritance protocols (recall our insight that threads should be treated like mutex resources), and therefore this effect is similar to the chained blocking that can occur in priority inheritance protocol. 
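A sketch of the bookkeeping behind the dynamic thread priority scheme follows, assuming a hypothetical `set_priority` hook standing in for the RTOS priority-change call; the class and method names are ours, not those of the ObjecTime framework.

```python
import heapq

class DynamicPriorityThread:
    """Thread whose priority always tracks its highest-priority pending
    (or in-process) event, as in the dynamic thread priority scheme."""
    def __init__(self, base_priority, set_priority):
        self._queue = []                 # heap of (-event_priority, event)
        self._base = base_priority
        self._set = set_priority         # hypothetical RTOS priority-change call
        self._processing = None          # priority of the event being processed

    def _current(self):
        pending = -self._queue[0][0] if self._queue else self._base
        busy = self._processing if self._processing is not None else self._base
        return max(pending, busy)

    def deposit(self, priority, event):
        if priority > self._current():   # inherit a higher event priority at once
            self._set(priority)
        heapq.heappush(self._queue, (-priority, event))

    def start_event(self):
        prio, event = heapq.heappop(self._queue)
        self._processing = -prio         # thread already runs at >= this priority
        return event

    def finish_event(self):
        self._processing = None          # re-adjust to highest pending event
        self._set(self._current())
```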
It is not surprising then that the work-around for this problem borrows the idea from priority ceiling protocols. Our solution borrows the idea of immediate priority ceiling protocol [19] or highest locker protocol [10]. We describe this modified scheme next. Dynamic Thread Priorities with Preemption Threshold.
In addition to a priority, each action is now also assigned a preemption threshold [20], which we will denote as $\gamma(a_i)$. The idea is that when an action executes, its priority is raised to its preemption threshold. In this way, an action can only be preempted by actions that have priorities higher than its preemption threshold. To avoid confusion, we will refer to an action's priority as its nominal priority. By appropriately setting the preemption threshold, the priority inversion problems can be minimized. The management of thread priorities with preemption threshold can also be done by the real-time execution framework. Specifically, when an event is removed from a thread's event queue, and before it is processed, the thread's priority is raised to the event's preemption threshold. The management of preemption threshold is layered on top of the dynamic thread priority scheme described above. Taken together, this implies that a thread's priority is the highest priority of its pending events when it is not in the middle of processing any event; when it is processing an event (even if it gets preempted), its priority equals the preemption threshold of the event being processed. The use of preemption threshold with synchronous calls requires special care if a synchronous message is being sent to another thread. In this case, the receiver thread should inherit the higher of the nominal priority of the action associated with the message (as in the asynchronous case) and the sender thread's current priority (which is the same as the calling action's preemption threshold), again only if this value is higher than the receiver thread's current priority.

Assumptions. In order to facilitate the response time analysis, we will make some of the same assumptions that we made earlier. Specifically, we assume that an action can only generate events of equal or lower priority. Also, we assume that the priority of a synchronously triggered action is equal to that of the caller action. Additionally, we assume that synchronously triggered actions are handled by the same thread as the caller action. The last assumption can be relaxed, but doing so leads to a more complicated analysis that requires additional notation to develop.
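The threshold rule can be condensed into two small helpers (our own illustrative names, not framework API): an idle thread tracks its highest pending event, while a thread processing an event runs at that event's preemption threshold.

```python
# Sketch of effective thread priority under the dynamic-priority-with-
# threshold rule described above.
def effective_priority(pending, processing_threshold, base):
    if processing_threshold is not None:
        return processing_threshold        # gamma of the event in process
    return max(pending, default=base)      # highest pending event, else base

def can_preempt(arriving_priority, running_threshold):
    """An arriving action preempts only if it exceeds the running threshold."""
    return arriving_priority > running_threshold
```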
5.1. Blocking Time Analysis

Blocking is inevitable in this architecture due to the run-to-completion semantics of objects (capsules), and the run-to-completion processing architecture of a thread. With preemption thresholds, we can bound and minimize the blocking suffered by an action, as we show below. First, we define the ceiling priority of each capsule and each thread as follows:

$$\Pi(o_k) = \max \{\, \pi(a_j) \mid O(a_j) = o_k \,\} \qquad \Pi(t_k) = \max \{\, \pi(a_j) \mid T(a_j) = t_k \,\}$$

where $O = \{o_1, o_2, \ldots, o_n\}$ and $T = \{t_1, t_2, \ldots, t_m\}$ represent the sets of capsules and threads, respectively, within the system, and $O(a_j)$ and $T(a_j)$ denote the capsule and thread to which action $a_j$ belongs. We then add the following constraint on the assignment of preemption thresholds of actions:

$$\gamma(a_i) \ge \max \big( \Pi(O(a_i)),\ \Pi(T(a_i)) \big) \quad (6)$$
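These ceilings are straightforward to compute in one pass; a sketch, with hypothetical maps from actions to their hosting capsule and thread:

```python
# Ceiling priorities of Equation (6) and the smallest admissible
# preemption threshold for each action.
def ceilings(prio, capsule_of, thread_of):
    cap, thr = {}, {}
    for a, p in prio.items():
        cap[capsule_of[a]] = max(p, cap.get(capsule_of[a], 0))
        thr[thread_of[a]] = max(p, thr.get(thread_of[a], 0))
    return cap, thr

def min_threshold(action, prio, capsule_of, thread_of):
    """gamma(a_i) >= max(Pi(O(a_i)), Pi(T(a_i)))  -- Equation (6)."""
    cap, thr = ceilings(prio, capsule_of, thread_of)
    return max(cap[capsule_of[action]], thr[thread_of[action]])
```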
Let us consider an action $a_i^j$, and suppose that the transaction $T_j$ arrives at time 0. First, any lower priority event that has not started executing at time 0 cannot cause blocking. Second, if a lower priority action is in-execution at time 0 and its preemption threshold is lower than $\pi(a_i)$, then it will not cause any blocking, since it will be preempted. Note that, by the constraint imposed by Equation 6, such an action could not have been in the same object or thread as $a_i^j$, and therefore preemption is possible. Based on the above, we can say that if an action $a_k$ causes blocking for $a_i^j$, then it must be the case that $\pi(a_k) < \pi(a_i) \le \gamma(a_k)$.

Now let us suppose that there are two asynchronously triggered actions $a_k$ and $a_l$ having this property of a blocking action. If both of these actions are to cause blocking, then they both must be in-execution. Without loss of generality, assume that $a_k$ started execution first. Then $a_l$ cannot also be in-execution unless $\pi(a_l) > \gamma(a_k)$. However, since $\gamma(a_k) \ge \pi(a_i)$, this would imply $\pi(a_l) > \pi(a_i)$, i.e., $a_l$ cannot block $a_i^j$. The above holds for asynchronously triggered actions. If, however, an asynchronously triggered action $a_k$ can cause blocking, then its entire synchronous set can cause blocking as well. This is true due to the way in which priorities are managed when a synchronous invocation is made.
Based on the above arguments, we conclude that the blocking time of an action in this model is restricted to the synchronous set of a single lower priority action, and is formally given by the following equation:

$$B(a_i) = \max \{\, C(\sigma(a_j)) \mid \pi(a_j) < \pi(a_i) \le \gamma(a_j) \,\} \quad (7)$$
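Like Equation (3), Equation (7) is a single pass over the candidate root actions, now with the additional threshold condition. The input maps are hypothetical, as before.

```python
# Equation (7): only a lower-priority root action whose preemption
# threshold reaches pi(a_i) can block, and at most one synchronous set does.
def blocking_with_threshold(action, roots, prio, gamma, sync_time):
    return max((sync_time[a] for a in roots
                if prio[a] < prio[action] <= gamma[a]), default=0)
```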
5.2. Busy Period Analysis

Calculating the worst-case response time is quite similar to the single threaded implementation, with just a few variations. Note that this is true only for the dynamic thread priority scheme with preemption threshold. The entire process will be briefly explained, focusing on the differences. The critical instant for an action $a_i^j$ is the same as for the single threaded implementation, because, again, we make the assumption that every transaction is made up of actions with non-increasing priorities.

The derivation of the worst-case start time is identical to the single-threaded case, except for the difference in computing the blocking time. This is because the effect of the preemption threshold does not take hold until the action begins executing for the first time. Moreover, with dynamic thread priorities, any higher priority event that arrives prior to $a_i^j$ starting will get to execute. As in the single-threaded case, this is not necessarily true for events with equal priorities, but to get the worst case, we will make the pessimistic assumption.

The main difference in the analysis comes in the time window between the start of action $a_i^j$, i.e., $S_i^j(q)$, and its finish, i.e., $F_i^j(q)$. Since the action may be preempted by higher priority actions in other threads, we cannot say that $F_i^j(q) = S_i^j(q) + C(\sigma(a_i^j))$ as we did in the single threaded case. Instead, we must consider all actions that can cause interference during this interval. First, if an action $a_k$ can cause interference after $a_i^j$ has started executing, then it must have arrived in the interval after $S_i^j(q)$; otherwise, it would already be accounted for in the estimation of $S_i^j(q)$. Second, its priority must exceed the preemption threshold of $a_i$, i.e., $\pi(a_k) > \gamma(a_i)$. Furthermore, it must be the case that $a_k$ executes in a different thread than $a_i$, i.e., $T(a_k) \ne T(a_i)$, and that the two actions execute in different objects, i.e., $O(a_k) \ne O(a_i)$. Finally, the same must be true for the action that triggers $a_k$ as well; otherwise, the event that triggers the execution of $a_k$ cannot be generated. Note that this also eliminates any successors of $a_i^j$, so we do not need to consider them separately. The last condition must actually hold recursively up to the root of the transaction, i.e., the action triggered by the external event. Based on the above, we can write:

$$F_i^j(q) = \min \Big\{\, t \ \Big|\ t = S_i^j(q) + C(\sigma(a_i^j)) + \sum_{k} \big( A_k(t) - A_k(S_i^j(q)) \big)\, I(r_k, a_i^j) \,\Big\} \quad (8)$$

where $r_k$ is the root action of transaction $T_k$, $I(a_l, a_i^j)$ is defined below, and $A_k(t) - A_k(S_i^j(q))$ gives the number of new arrivals of the triggering event of $T_k$ in the interval.
Table 3. Response Times for Multi-Threaded Implementation (worst-case response times of the last action executing at each priority level: 27, 64, 141, 156, 156, and 570; the per-action labels did not survive extraction).
$$I(a_l, a_i) = \begin{cases} C(\sigma(a_l)) + \displaystyle\sum_{a_m \in async(\sigma(a_l))} I(a_m, a_i) & \text{if } T(a_l) \ne T(a_i) \ \wedge\ O(a_l) \ne O(a_i) \ \wedge\ \pi(a_l) > \gamma(a_i) \\ 0 & \text{otherwise} \end{cases}$$

where $async(\sigma(a_l))$ denotes the set of actions triggered asynchronously by the actions in $\sigma(a_l)$. Note that if an action $a_l$ can cause interference, then all of its synchronous successors also cause interference. This is because they have the same priority and threshold as $a_l$, and they are in the same thread as $a_l$, and therefore not in $T(a_i)$. Moreover, since objects are mapped to threads, they cannot be executing in the same object as $a_i$.
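The recursion bottoms out as soon as a triggering action fails one of the three conditions, pruning that action's whole subtree. A sketch (all maps are hypothetical inputs; `async_succ` lists the actions asynchronously triggered from an action's synchronous set):

```python
# Recursive interference term of Equation (8): an action contributes its
# synchronous set if it runs in a different thread and capsule than the
# target and its priority exceeds the target's preemption threshold; its
# asynchronous successors are then examined recursively.
def interference(a, target, prio, gamma, thread_of, capsule_of,
                 sync_time, async_succ):
    if (thread_of[a] == thread_of[target]
            or capsule_of[a] == capsule_of[target]
            or prio[a] <= gamma[target]):
        return 0                               # condition fails: prune subtree
    total = sync_time[a]                       # whole synchronous set interferes
    for b in async_succ.get(a, []):            # asynchronous successors
        total += interference(b, target, prio, gamma, thread_of,
                              capsule_of, sync_time, async_succ)
    return total
```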
5.3. Response Times for the Example System

We can now revisit the example system and see the response times with a multi-threaded implementation. For purposes of illustration, we first assign each object to its own thread. However, this violates the assumption that synchronous calls are made within the same thread; we therefore merge the objects connected by synchronous calls into common threads. We will also assume that each action is assigned the smallest preemption threshold consistent with Equation 6.

Table 3 shows the response times for the example system with the above assignment. As compared to the single threaded case, we can see a considerable improvement in the response times of the higher priority actions. This is obviously attributable to the smaller blocking: as may be expected, each higher priority action now incurs blocking from at most the synchronous set of a single lower priority action. Also, the response time of the lowest priority action has increased, since it now gets interference from higher priority actions even after it starts execution.
5.4. Discussion As the example system shows, multi-threaded implementations can overcome some of the priority inversion problems of single threaded implementations. However, this required careful management of thread priorities. We note that this comes with some costs. First, the cost of inter-thread message passing is more expensive than intra-thread message passing, since one must “lock” the event queues for sending messages. Second, our priority management scheme adds additional costs for message passing (the receiver thread’s priority may need to be
changed) and in each execution loop of a thread (changing priority to threshold, and re-adjusting priority at the end of the loop). Unfortunately, these costs vary a lot from one RTOS to another. For example, setting the priority of another thread can be quite expensive if it requires searching through the entire thread list. The analysis we presented here assumed that objects are mapped to threads. It may be argued that this complicated thread priority scheme is a result of that, and if threads are assigned to transactions (or parts of transactions with the same priority) instead, then it would not be required. While this is true, one must consider that the modified scheme would require “locks” to be used to access objects, to maintain the consistency of objects. In that case, similar overheads would be incurred due to the priority inheritance/ceiling protocol that would need to be employed to avoid priority inversions. In any case, the analysis presented in this paper does not depend on objects being mapped to threads, and so one can easily use the analysis to evaluate different thread architectures and choose the one that works best.
6. Concluding Remarks

In this paper we have presented the results of our work in integrating real-time scheduling theory with object-oriented design. While we find that the basic concepts of fixed priority response time analysis are applicable, the application of those concepts to object design models is far from straightforward. In particular, it requires careful consideration of the implementation of object design models: the mapping of the design to multiple threads, the scheduling of threads and their priorities, the scheduling of events within a single thread, and so on. The main contribution of this paper is the development of response time analysis for object-oriented design models. We have used UML-RT as the modeling language; however, the results developed are generally applicable to any modeling language that uses active objects and explicit communication between objects through message passing. We also show the relationship between the design models, their implementations, and the analysis models. The response time analysis results presented in this paper can be used to automate the design of the implementation, which we call implementation synthesis. In conjunction with automatic code generation, this can greatly shorten the development life-cycle of real-time object-oriented software, as well as allow designers to focus more on design (rather than implementation) issues. In this sense, it is the logical evolution of the automatic code generation technology that is now supported in several real-time object-oriented design tools. We conclude with a final remark that the response time analysis presented in this paper has been implemented. Based on our conversations with leading commercial vendors, we believe that the results of this work will be available in some commercial tools in the near future.
References

[1] M. Awad, J. Kuusela, and J. Ziegler. Object-Oriented Technology for Real-Time Systems: A Practical Approach Using OMT and Fusion. Prentice Hall, 1996.
[2] R. Bell. Code generation from object models. Embedded Systems Programming, 11(3), March 1998.
[3] G. Booch, J. Rumbaugh, and I. Jacobson. The Unified Modeling Language User Guide. Addison-Wesley, 1999.
[4] A. Burns and A. J. Wellings. HRT-HOOD: A design method for hard real-time. Real-Time Systems, 6(1):73–114, 1994.
[5] B. P. Douglass. Doing Hard Time: Developing Real-Time Systems with Objects, Frameworks, and Patterns. Addison-Wesley, 1999.
[6] H. Gomaa. Software Design Methods for Concurrent and Real-Time Systems. Addison-Wesley, 1993.
[7] M. G. Harbour, M. Klein, and J. Lehoczky. Fixed priority scheduling of periodic tasks with varying execution priority. In Proceedings, IEEE Real-Time Systems Symposium, pages 116–128, December 1991.
[8] L. Kabous and W. Nebel. Modeling hard real-time systems with UML: the OOHARTS approach. In Proceedings, International Conference on the Unified Modeling Language (UML'99), 1999.
[9] K. H. Kim. Object structures for real-time systems and simulators. IEEE Computer, pages 62–70, August 1997.
[10] M. H. Klein, T. Ralya, B. Pollak, R. Obenza, and M. G. Harbour. A Practitioner's Handbook for Real-Time Analysis. Kluwer Academic Publishers, 1993.
[11] J. Lehoczky, L. Sha, and Y. Ding. The rate monotonic scheduling algorithm: Exact characterization and average case behavior. In Proceedings, IEEE Real-Time Systems Symposium, pages 166–171, December 1989.
[12] C. Liu and J. Layland. Scheduling algorithms for multiprogramming in a hard real-time environment. Journal of the ACM, 20(1):46–61, January 1973.
[13] J. Rumbaugh, I. Jacobson, and G. Booch. The Unified Modeling Language Reference Manual. Addison-Wesley, 1999.
[14] M. Saksena, P. Freedman, and P. Rodziewicz. Guidelines for automated implementation of executable object oriented models for real-time embedded control systems. In Proceedings, IEEE Real-Time Systems Symposium, pages 240–251, December 1997.
[15] M. Saksena, P. Karvelas, and Y. Wang. Automatic synthesis of multi-tasking implementations from real-time object-oriented models. In Proceedings, IEEE International Symposium on Object-Oriented Real-Time Distributed Computing, March 2000.
[16] M. Saksena, A. Ptak, P. Freedman, and P. Rodziewicz. Schedulability analysis for automated implementations of real-time object-oriented models. In Proceedings, IEEE Real-Time Systems Symposium, December 1998.
[17] B. Selic, G. Gullekson, and P. T. Ward. Real-Time Object-Oriented Modeling. John Wiley and Sons, 1994.
[18] B. Selic and J. Rumbaugh. Using UML for modeling complex real-time systems. White paper, ObjecTime, available from www.objectime.com, March 1998.
[19] K. Tindell, A. Burns, and A. Wellings. An extendible approach for analysing fixed priority hard real-time tasks. The Journal of Real-Time Systems, 6(2):133–152, March 1994.
[20] Y. Wang and M. Saksena. Fixed priority scheduling with preemption threshold. In Proceedings, IEEE International Conference on Real-Time Computing Systems and Applications, December 1999.