Int J Softw Tools Technol Transfer (2006) 8:607–620. DOI 10.1007/s10009-006-0016-8. Special Section on Quantitative Analysis of Real-Time Embedded Systems

Automated model generation for performance engineering of building automation networks Joern Ploennigs · Mario Neugebauer · Klaus Kabitzsch

Published online: 10 August 2006 © Springer-Verlag 2006

Abstract  Large technical systems need to be designed to be both reliable and efficient. Specialized design tools therefore offer a simplified, abstract design and extend details autonomously in the background. Analytic and simulation-based models could improve the quality by testing and dimensioning the design before implementation, but setting up the necessary models is time-consuming and expensive. Therefore, many developers ask for analysis tools which are able to create their models from the information available in the design tools. This paper presents such an automated modeling approach based on an existing design database, using the example of a network analysis for building automation fieldbuses. The process of automated modeling is unfolded, and its potentials and limitations are discussed.

Keywords  Network performance engineering · Automatic model building · Building automation · Traffic modeling

J. Ploennigs (B) · M. Neugebauer · K. Kabitzsch
Dresden University of Technology, Department for Computer Science, Institute of Applied Computer Science, 01062 Dresden, Germany
e-mail: [email protected]

M. Neugebauer
SAP Research Group Dresden, Dürerstrasse 24, 01307 Dresden, Germany
e-mail: [email protected]

K. Kabitzsch
e-mail: [email protected]

1 Introduction

It is commonly accepted that the complexity of future control networks and their provided functionality will

further increase [14]. The possibility to install extensive functions on small distributed controllers allows new comfort and safety features to be integrated nearly everywhere. The growing processing power of the controllers makes it possible to take advantage of control algorithms which can analyze complex data scenarios as in [30]. This not only increases the absolute number of devices in control networks but also their interaction and communication effort. The resulting, exponentially rising quantity of data needs to be transmitted over networks ranging from cheap Ethernet controllers and specialized fieldbuses to low-power, easy-to-install wireless sensor networks. These network classes have a wide range of pros and cons, which do not compete but complement one another well. No single all-purpose network system can fulfill the broadening functional requirements, and mixed network solutions will become established. Large hybrid control networks of this kind are common in building automation. In modern office and public buildings tens of thousands of devices control the lighting and the heating or assure security day and night. Although the networks are complex, they are designed in a short time using commercial-off-the-shelf components and software tools which allow an abstract functional view. These tools further carry out the commissioning, provide simple diagnosis functions, and usually rely on standardized databases. Nevertheless, the tools lack the possibility to evaluate the performance and reliability of the designed network. Thus, overloaded segments are frequently discovered only during implementation and operation, implying expensive redesigns. Finally, undiscovered overloaded segments not only bear an economic risk, but also reduce the quality of control of the whole building and can cause critical errors in unprotected security and safety systems.


The increasing size and complexity of automation networks requires a performance-oriented design of networks. This so-called performance engineering complements the network design process with a permanent performance evaluation of the design and supports optimization, dimensioning, the detection of design errors, and the reduction of construction and operation expenses. However, the most important question for the designer is the quality of control and other processes, i.e. the application he considers has to work. This requires the prevention of message loss and transmission in time and is therefore closely connected to the network quality of service [32]. Additionally, the network utilization is important for economic validations. A well-dimensioned network utilizes its resources and still offers full quality of control during mean and worst-case utilization. Hence, performance engineering has to supply these measures: the mean and worst-case utilization, delay times, and message loss probabilities. The necessary methods for performance evaluation of networks are well known and range from analytical techniques using superposition [21], queuing theory [3] or network calculus [16, 34] to simulative approaches [11, 31, 33]. Besides these scientific results, established commercial tools are also available (e.g., OPNET [24]). But all these approaches require a detailed evaluation model that is commonly hand-made. For example, Hintze et al. [10] developed a simulation method for performance evaluation of industrial fieldbus networks. Their models were described manually using different formal languages and were transferred afterwards into simulation models. This modeling from scratch is suitable for small simulations of experimental systems but not for large networks with a lot of repetition. The usage of predefined components simplifies the modeling in [4], where the application and network behavior is combined in a declarative language.
This common modularization simplifies the modeling but is not sufficient for performance engineering. The advantages of performance engineering unfold with permanently available evaluations during network design. Only then is the performance situation obvious for the designer and able to guide his decisions. This requires the parallel creation and permanent update of the evaluation model during the design process. But already a few minutes of additional engineering time per device result in days of work time for large networks with thousands of devices and will reverse any cost savings. Thus, the key to network performance engineering is the automatic generation of the evaluation model. Automatic modeling is also used in software performance engineering. Woodside et al. [36] proposed to include performance prediction in software design environments. In their approach the program code was automatically interpreted with respect to its performance properties without additional effort for model building. The domain of building automation has some advantages enabling a similar approach. First of all, the existing design databases are a valuable source of information. Further, in contrast to regularly changing office communication networks, a building is planned to work for years with as little maintenance as possible. Because of these long operation times, customers expect interoperable systems such that broken devices can be replaced even after years of operation [6]. This has led to a strong standardization of interfaces and functions of the devices (e.g. [1, 9]), which reduces the diversity of devices and provides default values for model parameterization. Another important difference is that most of the traffic in control networks does not depend on arbitrarily acting humans causing bursty and self-similar traffic [17, 26]. Rather, it is primarily affected by deterministic physical processes and simple control structures [28]. This permits simplified traffic models which can be generated automatically. Table 1 lists these and further differences between office and building automation networks. These features of building automation enable automatic generation and parameterization of network models from available design databases. But there are also difficulties in this domain. Different protocol standards are comparably well established, like LON, EIB/KNX or BACnet [14], and the proposed network performance engineering method should be applicable to all of them. Further, building automation primarily uses commercial-off-the-shelf devices of various manufacturers to speed up the design process, reduce expenses and avoid manufacturer monopolies.

Table 1 Comparison of office and building automation networks

                      Office communication                     Building automation
  Number of elements  ≈ 10^3                                   ≫ 10^4
  Traffic             Bursty and self-similar through          Steady and determined by
                      human impact                             physical processes
  Diversity           Diverse bulk goods                       Highly standardized devices
  Knowledge sources   Human centered                           Substantial design databases
  Continuity          Permanent changes                        Unchanged for years

But,


the implementation details of these devices are often unpublished. This limits the knowledge about the application behavior and prohibits complex models as used, for instance, in [4, 10]. Thus, the automatically generated models are restricted to the information included in the design database and still have to serve their purpose. In Sect. 2 the generated network model is investigated to develop step by step the details of Tables 2, 3, 4 and 5. Therein, the integral parts of the model are collected and assigned to their specific information sources to clarify the process of automatic modeling. A mapping to analytic queuing network analysis is introduced in Sect. 3, the sources of information are explained in Sect. 4 and, finally, a sample network is investigated in Sect. 5.

2 Model structure

2.1 Overview

Before a model can be generated it is essential to know its structural components. The network model is designed for automatic generation and an easy transfer to analytic and simulative performance evaluation. On the one hand this implies that it is restricted in its detail by the available information in the design database, but on the other hand its slim nature facilitates fast generation and analysis. The model is separated into a network model and a traffic model as displayed in Fig. 1. While the first provides the structural details for analysis, the second contributes most of the parameterization. This separation allows a fast update of changed network properties while the network structure can be reused. As a consequence, the network designer can test different parameterizations in a short time. The network model consists of three submodels which are extracted from the design database. These submodels correspond to the OSI layer model [13] and divide into the physical layer model, which describes the

Fig. 1 Process of automated model generation

physical network topology, the transport layer model that summarizes the addressing (layers 3–4) and the application layer model which covers the application interaction of the devices and the layers 5–7. Finally, the network and traffic model are combined in the evaluation model which can be easily mapped to an analytic or simulative performance model as demonstrated in Sect. 3. The performance model implements the details of the network media access control (MAC). This keeps the network and traffic model general to represent different media and OSI compatible protocols, like wireless sensor networks or specialized fieldbusses [14]. The design database covers only a part of the required information for modeling. It has the purpose to collect all design decisions and to prepare the network commissioning and not to permit a detailed performance analysis. Hence, it favors the network topology, addressing and network parameterization but lacks many application details. Other information sources need to be consulted to generate a solvable model. The network designer is of course the most valuable information source, but as already stated, he does not have to know anything about the implementation details of the devices he uses. Furthermore, the modeling in performance engineering should be automatically and autonomously to aid the designer rather than to bother him. This requires the usage of alternative knowledge sources. One alternative is the usage of a separated database containing reusable information like device models, generalized measurements or standardized parameters. Other informational parts can be combined out of different sources or can be approximated with defaults if possible. Otherwise, it is worth a thought if the information is really important or possibly could be neglected without loss of much precision. Only if it is inevitable the designer should be asked explicitly. 
This prioritized use of alternative sources minimizes the editing effort for the designer and allows a high grade of automation. As the user is not excluded completely, he or she can render information more precisely to improve the model. The information is handled as follows in decreasing priority:


S1 Design databases
S2 Separate databases with reusable information
S3 Combine results
S4 Default values from standards or measurements
S5 Neglect if possible
S6 Network designer
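The prioritized scan over the sources S1–S6 can be sketched in a few lines. The following Python snippet is an illustration only; the dictionary-based lookups and all parameter names and values are assumptions, not part of the described tool chain.

```python
# Sketch (not from the paper): resolving a model parameter by scanning
# the information sources S1..S6 in decreasing priority (Sect. 2.1).

def resolve(item, sources):
    """Return (value, source name) for the first source providing `item`."""
    for name, lookup in sources:
        value = lookup.get(item)
        if value is not None:
            return value, name
    raise KeyError(f"{item}: must be supplied by the network designer (S6)")

# hypothetical contents of the individual sources
design_db = {"bandwidth": 78_000}        # S1: design database (bit/s)
device_db = {"router_delay": 0.005}      # S2: separate device database (s)
defaults  = {"bit_error_rate": 1e-6}     # S4: defaults from standards

sources = [("S1", design_db), ("S2", device_db), ("S4", defaults)]

print(resolve("router_delay", sources))    # found in S2
print(resolve("bit_error_rate", sources))  # falls through to S4
```

The user-facing source S6 only appears as the final fallback, matching the rule that the designer is asked last.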

2.2 Physical layer model

The physical layer model represents the hardware connections of the devices, which is, in the classical understanding, the wiring of the devices and, in wireless sensor networks, the characteristics of the connection to base stations or relay nodes [22]. An example of the physical structure for a single room temperature control is shown in Fig. 2. The distributed system contains a temperature sensor, the controller and the valve at the radiator in the room to be controlled. A centralized management system monitors the whole building and observes the room controllers 1 and 2, the latter being installed in another room and accessible by a wireless connection. Each device is assigned to only one channel. The network is split into three physical channels which are interconnected by routers (including gateways, hubs, repeaters, base stations, relays). Channels might use different media which can be coaxial, fiber optic or


wireless, with varying properties such as bandwidth or propagation time. To support the modeling of the specific media access parameters, a port is introduced as an interface between device and channel. This implies that a device possesses only one port while a router possesses multiple ports, as it is connected to multiple channels. However, many devices in wireless sensor networks operate as relay stations and thereby perform router functionality. The AbstractDevice class models this possible twofold functionality of nodes and is related to the class Port by a composition. Although the distinction between router and device may appear unnecessary, the classification is useful in later analysis, as devices and routers are modeled differently (Sect. 2.6). The structure of the resulting physical layer model is displayed in Fig. 3. It is denoted in the Unified Modeling Language (UML) to enable better understanding and an easy transformation to software. The attributes of the objects depend on the information necessary for performance analysis and the available data in the design database. The key elements are listed in the first column of Table 2, while the other columns represent the sources listed in Sect. 2.1. A cross indicates a used data source, and multiple crosses in one row mean that these sources are scanned in decreasing priority from left to right until the first source providing the information is found. The user is always the last to be asked and burdened. The information necessary to build up the structure of the physical layer model is contained completely in the design database and allows an automatic extraction. The media access parameters are commonly standardized in the protocol specification (e.g. [9, 12, 23]) and
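The class structure of Fig. 3 can be rendered directly in code. The following sketch is a minimal Python interpretation under stated assumptions: class and attribute names beyond AbstractDevice, Port, Channel and Router are illustrative, and the composition is expressed by the device owning its ports.

```python
# Minimal rendering of the UML physical layer model (Fig. 3): an
# AbstractDevice owns one or more Ports (composition), each Port attaches
# to exactly one Channel; a Router holds one port per connected channel.
from dataclasses import dataclass, field

@dataclass
class Channel:
    name: str
    bandwidth: float                      # bit/s, illustrative attribute
    ports: list = field(default_factory=list)

@dataclass
class Port:
    channel: Channel
    def __post_init__(self):
        self.channel.ports.append(self)   # back-reference Channel 1 -- 0..* Port

@dataclass
class AbstractDevice:
    name: str
    ports: list = field(default_factory=list)

class Device(AbstractDevice):             # a device possesses exactly one port
    def __init__(self, name, channel):
        super().__init__(name, [Port(channel)])

class Router(AbstractDevice):             # a router connects to multiple channels
    def __init__(self, name, channels):
        super().__init__(name, [Port(c) for c in channels])

a = Channel("A", 1_250_000)
c = Channel("C", 78_000)
d1 = Device("central management", a)
r = Router("router", [a, c])
print(len(d1.ports), len(r.ports))        # 1 2
```

The router/device distinction is kept as two subclasses because, as stated above, both are modeled differently in the later analysis.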

Fig. 2 Physical layer model of a local single room control with a centralized management system

Fig. 3 UML model representing the physical structure

Table 2 Information sources of the physical layer model

  Item                          S1  S2  S3  S4  S5  S6
  Devices                       X
  Channels                      X
  - Bandwidth, arbitration      X           X
  - Cable length                X               X
  Routers                       X
  - Delay times, buffer sizes       X       X
  Physical network structure    X


Fig. 5 UML model representing the logical structure


additional analysis parameters, like router delay times, can be generalized from measurements.¹ The modeling of the physical structure of wireless sensor networks requires more information. As the nodes do not necessarily connect to a certain base station, this connectivity information cannot be extracted from a design database. On the one hand, it can be postulated that the wireless devices need to use a predefined connection, but this will restrict their ability to keep multiple connections. On the other hand, the model already permits a device to connect to multiple channels, but then a probability of use needs to be defined in the model for each channel. In the given example only one wireless channel is active, to which the device d2 connects. However, this flexibility regarding the wireless network topology is considered when the evaluation model (Sect. 2.6) is derived.

Table 3 Information sources of the transport layer model

  Item                        S1  S2  S3  S4  S5  S6
  Addressing                  X
  Logical network structure   X
  Routing tables                      X
  Groups                              X
  Service type of messages    X

Fig. 4 Transport layer model of the example in Fig. 2

¹ Our experiments with routers using the Local Operating Network (LON) protocol [9] resulted in a mean routing delay of approximately 5 ms without network arbitration delays.

2.3 Transport layer model

The transport layer model assumes a hierarchically organized addressing consisting of three levels, as in IP networks. The top level is called domain and contains subnets at the second level. Each port has at least one unique address within one subnet and therewith at least one unique address in the whole network. Figure 4 displays the logical structure of the single room temperature control example. This tripartite address can be used to address a certain device in the network, which is called unicast. A broadcast within a domain or subnet reaches all devices belonging to it and is modeled by assigning the corresponding domain or subnet as destination. To target multiple devices beyond subnet boundaries, they can be combined in a group which is used as a multicast address. Such a group message requires only one packet per channel instead of addressing each member separately and is used to save bandwidth. Figure 5 presents the transport layer model in UML. Special message service types like acknowledged or repeated can be used to increase the reliability of a connection. If a message uses the repeated service it will immediately be sent multiple times to the target in the hope
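The three-level addressing and the destination kinds (unicast, subnet/domain broadcast, group multicast) can be sketched as a small matching function. This is an assumed encoding for illustration, not the paper's data model: addresses are tuples (domain, subnet, node) and a group is an explicit member set.

```python
# Sketch of the tripartite addressing from Sect. 2.3; the tuple encoding
# and the matching rules are assumptions made for this example.

def is_destination(address, target):
    """address: (domain, subnet, node); target selects the receivers."""
    domain, subnet, node = address
    kind, value = target
    if kind == "unicast":              # exact tripartite address
        return value == address
    if kind == "subnet_broadcast":     # all nodes of one subnet
        return value == (domain, subnet)
    if kind == "domain_broadcast":     # all nodes of one domain
        return value == domain
    if kind == "group":                # multicast via explicit member set
        return address in value
    raise ValueError(kind)

sensor = (1, 1, 2)                                     # domain 1, subnet 1, node 2
print(is_destination(sensor, ("unicast", (1, 1, 2))))  # True
print(is_destination(sensor, ("subnet_broadcast", (1, 1))))
print(is_destination(sensor, ("group", {(1, 1, 2), (2, 2, 1)})))
```

A group target spanning several subnets illustrates why multicast saves bandwidth: one packet per channel serves every matching member.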


that at least one copy will arrive. The acknowledged service delays the repetition until an acknowledgment message from the target is found missing; this usually saves bandwidth but takes more time in case of a loss. The data required for building the transport layer model are mainly part of the design database, as shown in Table 3. The model can be applied to both wired and wireless networks. Only the routing tables of the routers and the grouping information may not be part of the design database. But the routing tables can be reconstructed from the physical structure, and the groups can be constructed from the application layer model introduced in the next section.

2.4 Application layer model

Design tools in general use a black-box function block model to enable an abstract outline of the functional interaction of the devices. The network designer can therefore use commercial-off-the-shelf devices without thinking about the implementation and protocol details. The function block model offers the network designer input and output data access points. They commonly have a predefined and standardized variable type from 2 to 32 Bytes, which is sufficient to exchange e.g. temperature values. The network designer simply links these data access points by a symbolic binding. Therewith, messages are exchanged autonomously among the devices as required by the application program. The resulting data flow graph of the example network is shown in Fig. 6, while Fig. 7 depicts the dependencies of the class Binding within the UML model. The target of a binding can not only be a single data access point; to be consistent with the transport layer model, multiple devices combined in a group, subnet or domain are also allowed. A group can easily be formed from bindings with the same source data access point.

Fig. 6 Application layer model of the example in Fig. 2

Fig. 7 UML model representing the logical structure

2.5 Traffic model

The traffic is estimated in our model for each device individually, in terms of the arrival rate distribution and mean arrival rate on application layer, i.e. the number of messages the device application intends to send and not the number of messages the MAC will finally transmit. Although this decomposition approach is expected to be complex for thousands of devices, the device interactions of the application layer model reduce the estimation effort. First, each device that only receives messages can be ignored, which excludes devices with incoming bindings only. Second, many devices process messages, like the room controller in the example. Such devices only generate messages as a reaction to incoming ones. Thus, input and output mean arrival rates are equal. Using the binding information from the application layer model, the arriving messages can be backtracked to their source devices and only their arrival rates need to be estimated. The same can be expressed as an equation forming the generalized device model. Let O be the set of all output data access points, I the set of all inputs, and |M| the size of any set M like O or I. The mean arrival rate λ_o of any device output o ∈ O depends on an external source λ_src;o as well as on the mean arrival rate λ_i of any device input i ∈ I with the gain v_o,i. The corresponding matrix representation is

  λ_O = V λ_I + λ_src;O   with   V = ( v_o,i ),  o = 1 … |O|,  i = 1 … |I|.   (1)

This model is time-invariant and is used to estimate the mean arrival rate on application layer for statistically independent devices. A more complex version can be used to estimate the arrival rate distribution [28], which is not trivial. Each device needs a variable time for processing on application layer which is not entirely statistically independent and depends, among other things, on the controller clock speed, memory access, operating
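A worked instance of the device model (1) may clarify the matrix form. The numbers below are invented for illustration; the computation itself follows the equation directly.

```python
# Worked example of eq. (1): output rates follow from input rates
# through the gain matrix V plus external sources. All values invented.
import numpy as np

V = np.array([[1.0, 0.0],       # output 1 forwards every message of input 1
              [0.0, 0.5]])      # output 2 reacts to every 2nd message of input 2
lam_in = np.array([0.2, 0.4])   # mean input arrival rates (messages/s)
lam_src = np.array([0.0, 0.1])  # additional external source at output 2

lam_out = V @ lam_in + lam_src  # eq. (1): lambda_O = V * lambda_I + lambda_src
print(lam_out)                  # [0.2 0.3]
```

A pure processing device such as a PID controller would correspond to a gain of 1, so its output rate simply equals its input rate.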


system and application performance.² Such a detailed model would permit the analysis of parallel processes. But any implementation details of the application, like detailed source code or simple performance measurements, are commonly unknown to the user and not part of the design database, so that a detailed model cannot be generated automatically. However, even if the delay distribution may change the shape of the resulting arrival rate distribution, the mean arrival rate is unaffected by any constant delay and almost unaffected by delay jitter, and can be modeled using the introduced simple device model (1) without detailed knowledge beside the gain parameter v_o,i. It is usually 1 for processing devices, like PID controllers, which react to each incoming message with an outgoing one. Other cases like multi-step controllers are discussed in [28]. The mean arrival rate of an input data access point can be computed from its connected bindings. The application layer model allows multiple bindings to address the same network variable input, whereby arriving messages will overwrite each other. The arrival rate of the input is then the sum of the connected binding arrival rates. The arrival rate of a binding equals the rate of its source data access point as long as each message reaches its target. In reality, of course, messages can get lost on their way between the application layers. Formalized, this means that the mean arrival rate of any input data access point i is the sum of the mean arrival rates of the connected bindings B_i. The mean arrival rate of each binding b ∈ B_i corresponds to the mean rate of the source output data access point o_b, reduced by p_err;o_b,i, the probability of an erroneous transmission between o_b and i. Finally,

  λ_i = Σ_{b∈B_i} λ_b = Σ_{b∈B_i} (1 − p_err;o_b,i) λ_{o_b}.   (2)

It can be assumed that the error probability p_err;o_b,i for a message transmission is zero if the passed segments use a lightly loaded wired medium [20]. Otherwise, the value needs to be specified. Message errors arise from three major causes: first, buffer overruns between the OSI layers in the sending devices (routers)³ d_c of each channel with a probability of p_ovr;d_c; second, corrupted bits in the message with a probability of p_bit;c per bit and passed channel c; and third, message collisions with p_coll;c for the same channel. This results in the error probability between the application layers

  p*_phy;o_b,i = 1 − Π_{c∈C_{o_b,i}} (1 − p_ovr;d_c)(1 − p_coll;c)(1 − p_bit;c)^{l_Msg}   (3)

with the physical message bit length l_Msg and the set of used channels C_{o_b,i}, which is arranged in Sect. 2.6. The probability of bit corruption varies widely depending on the termination, the wire quality and length, and electromagnetic noise, particularly in wireless sensor networks. It can be hard to estimate a precise value for a specific installation, but defaults between 10⁻⁶ and 10⁻⁵ for wired [15] and 10⁻⁵ and 10⁻³ for wireless channels [35] give good approximations. The bit error probabilities are rather small for wired media, where buffer overruns or message collisions are more dangerous. Their probability can be computed during the later performance evaluation [3, 15]. As the traffic model is used for this evaluation, the consideration of loss probabilities in the model requires a fixed point approach. Experience shows a rapid convergence of the fixed point, even if existence and uniqueness cannot be proven in a strict mathematical sense. However, the introduced errors happen below the application layer, which needs to be considered in the traffic model. Bindings can use special message service types like acknowledged or repeated services on transport layer to increase their reliability. These message service types result directly (repeated) or on failure (acknowledged) in multiple transmitted messages on physical layer to increase the probability that at least one will reach its target. So, the error probability on application layer is reduced to

  p_err;o_b,i = (p*_phy;o_b,i)^r,   (4)

depending on the maximum number of repeats r provided by the assigned message service type. The refinement of the traffic model with loss probabilities is arguable. As long as the loss probabilities are small, they can be neglected. However, message service types cannot be used for all transmissions, bit errors cannot be neglected for wireless media, and the probabilities of collisions and buffer overflows rise fast with a channel utilization larger than 30% [3]. In these cases the loss probability has a significant influence on the arrival rate model and should be included. Finally, only the arrival rate λ_src;o is left to be estimated for all source output data access points. In building automation the processes are dominated by physical environmental changes rather than human actions. The

² The LON controllers, for example, run an event-oriented operating system on application layer where the test of one event alone takes approximately 1 ms. The processing time then varies strongly with the number of events, the code length, etc.

³ Buffer overruns in the receiving devices can be neglected in LON, as the receive buffers are larger and better adjusted to the application delays.
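Equations (2)–(4) compose naturally; the following sketch evaluates them for two invented bindings sharing a path over two wired channels. All probabilities, rates and the message length are assumptions chosen only to make the computation concrete.

```python
# Hedged sketch of the loss model: eq. (3) combines buffer-overrun,
# collision and bit-error probabilities per traversed channel, eq. (4)
# accounts for r transmissions of a repeated service, and eq. (2) sums
# the effective binding rates at an input. All numbers are invented.

def p_phy(channels, l_msg):
    """channels: list of (p_ovr, p_coll, p_bit) per traversed channel, eq. (3)."""
    ok = 1.0
    for p_ovr, p_coll, p_bit in channels:
        ok *= (1.0 - p_ovr) * (1.0 - p_coll) * (1.0 - p_bit) ** l_msg
    return 1.0 - ok

def p_err(channels, l_msg, r=1):
    """Probability that all r transmitted copies are lost, eq. (4)."""
    return p_phy(channels, l_msg) ** r

path = [(1e-4, 1e-3, 1e-6), (0.0, 1e-3, 1e-6)]   # two wired channels

# eq. (2): input rate as the sum over the connected bindings
bindings = [
    {"lam_src": 0.5, "channels": path, "repeats": 1},
    {"lam_src": 0.1, "channels": path, "repeats": 3},  # repeated service
]
lam_i = sum(
    (1.0 - p_err(b["channels"], 112, b["repeats"])) * b["lam_src"]
    for b in bindings
)
print(lam_i)   # slightly below 0.6 messages/s due to the loss probabilities
```

In a full evaluation, p_ovr and p_coll would themselves depend on the computed load, which is exactly why the fixed-point iteration mentioned above becomes necessary.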


number of human actions (e.g. switching a light) is negligible in comparison to a light sensor which creates a message every 100 ms. This further simplifies the estimation, since physical processes are less erratic. On the contrary, they can even be steady. An equidistant sampling with a constant sampling period T_D produces a constant arrival rate of

  λ_src;o = T_D⁻¹ = const.   (5)

To reduce the network load, not every sample but only significant changes are transmitted over the network. They are identified by a parameter delta Δ that each new sample must exceed in difference to the last transmitted one. To allow alive messages, the parameter max-send-time T_U defines the maximum time between two messages, and to prevent a babbling idiot problem, a minimum inter-arrival time can be defined with the min-send-time T_L. This concept is called send-on-delta in building automation [18, 23] and may also be known as deadband or magnitude-driven sampling [25]. The min-send-time T_L and max-send-time T_U limit the inter-arrival time T_I in its lower and upper bounds, T_L ≤ T_I ≤ T_U. The send-on-delta parameter Δ influences the sending behavior between these bounds. If a process is rising linearly with d units per second, then every Δ/d seconds the difference between the actual and the last transmitted value will exceed Δ and a new message will be generated. The arrival rate is then d/Δ. However, physical processes are commonly not linearly rising but rather oscillate with varying frequencies. Further, each signal needs to be discretized before it can be evaluated, which is still done by equidistant sampling. As the sampling period defines the minimum sampling period, the resulting inter-arrival times of the transmitted samples will be a multiple of it. The arrival rate is in fact synchronized with the sampling rate and depends on the oscillation, comparable to the multilevel-crossing problem [2]. To simplify the analysis it is assumed that the sampling period and the Δ are small in comparison to the minimal relevant signal period and amplitude; then the arrival rate λ_src;o(t) can be approximated for a known continuous signal f(t) with

  λ_src;o(t) = 1/T_I = min{ 1/T_L ; max{ 1/T_U ; |f′(t)|/Δ } }   with   f′(t) = d f(t)/dt.   (6)

Table 4 Information sources of the application layer model
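The send-on-delta mechanism is easy to simulate, and doing so for a linearly rising signal checks eq. (6): with slope d and threshold Δ the rate should approach d/Δ inside the T_L/T_U bounds. The signal and all parameter values below are illustrative assumptions.

```python
# Send-on-delta sampling (Sect. 2.5) simulated for a linearly rising
# signal; eq. (6) predicts roughly d/delta messages per second here.

def send_on_delta(samples, dt, delta, t_min, t_max):
    """Count transmissions for equidistantly sampled values `samples`."""
    sent, last_val, last_t = 0, samples[0], 0.0
    for k, v in enumerate(samples[1:], start=1):
        t = k * dt
        due = t - last_t >= t_max              # max-send-time elapsed
        changed = abs(v - last_val) >= delta   # delta exceeded
        if (changed or due) and t - last_t >= t_min:
            sent, last_val, last_t = sent + 1, v, t
    return sent

d, delta, dt, T = 2.0, 0.5, 0.01, 100.0        # slope 2 units/s, illustrative
samples = [d * k * dt for k in range(int(T / dt))]
rate = send_on_delta(samples, dt, delta, t_min=0.0, t_max=60.0) / T
print(rate)   # close to d/delta = 4 messages per second
```

Setting t_min above Δ/d or t_max below it would instead pin the rate to the corresponding bound, which is exactly the clipping expressed by the min/max terms of eq. (6).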

This approximation is quite accurate under the given constraints and generally becomes more precise with a smaller delta and sampling period. An accurate model of the signal behavior f(t) is very important for the quality of the resulting mean arrival rate estimation. But the behavior of each process, room and building is quite different and depends on many parameters and influences that are impossible to model automatically. However, our experiments with precise simulation models [5] showed that the behavior of a physical process within a building depends a lot on the behavior of a comparable outside process; e.g., the illumination of a room changes with the illumination outside. This results in comparable dynamics between an inside and an outside physical process, while the inside processes usually have lower amplitudes. Therefore, the mean arrival rate of an inside process tends to be lower than the mean arrival rate of the comparable outside process. Using this knowledge, the gradients of typical physical weather processes (e.g. temperature, illumination) were calculated and stored in the design database [28]. Based on these results, the worst-case mean arrival rate can be approximated with Eq. 6 for any parameter set T_L, T_U, Δ. This results in an overestimated traffic model which can be improved with better dynamics characteristics from measurements or simulations. It is important for the later analysis that the send-on-delta sampling produces an approximately exponentially distributed arrival rate. The gradients of physical processes are commonly normally distributed around zero, i.e., it is more common that a process changes slowly than instantly. Taking the absolute value of this Gaussian distribution will result in a distribution having exponential character. If the absolute rise |f′(t)| is exponentially distributed, so is the arrival rate |f′(t)|/Δ. Only the min- and max-send-time change the shape of the distribution, but their influence is usually negligible. A detailed investigation of this problem can be found in [28]. The required information to build the device model is primarily obtained from the device database. If a device model is not defined therein, it will be derived from the design database. Subsequently, the network designer or device engineer needs to assign the gain v_o,i and the generalized dynamics |f′(t)| once. If the parameters min-send-time, max-send-time and send-on-delta cannot be obtained from the design database, the recommendations from standards [18] are used. If a parameter

[Table 4 Information sources of the application layer model: items (bindings, message size) against the sources S1–S6]

[Table 5 Information sources of the traffic model: items (device model, gain vi,o of the λ-processors, characteristic deltas f′, parameters TL, TU, Δ, error rates) against the sources S1–S6]

If a parameter is not defined for an output, the corresponding part of the equation is omitted with TU = ∞, TL = 0 or Δ = 1. The designer can replace such defaults with correct values to increase the model precision. It is also possible to extend the device model with further details like application layer delay time distributions if required. Please refer to Table 5 for further details.
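The worst-case estimation of Eq. (6) can be sketched numerically. The gradient statistics (a zero-mean Gaussian, here with a made-up σ = 0.05 units/s) stand in for the stored weather-process characteristics; all parameter values below are hypothetical:

```python
import random
import statistics

random.seed(1)
T_L, T_U, delta = 1.0, 300.0, 0.5   # min-/max-send-time and delta
sigma = 0.05                        # assumed std. dev. of the gradient f'(t)

# Absolute values of normally distributed gradients (half-normal,
# of roughly exponential character, as discussed above):
grads = [abs(random.gauss(0.0, sigma)) for _ in range(100_000)]

# Eq. (6): clamp the instantaneous rate |f'(t)|/delta into [1/T_U, 1/T_L]:
rates = [min(1.0 / T_L, max(1.0 / T_U, g / delta)) for g in grads]
lam = statistics.fmean(rates)
print(f"worst-case mean arrival rate ~ {lam:.3f} messages/s")
```

With these numbers the mean rate stays close to E|f′|/Δ = σ·sqrt(2/π)/Δ ≈ 0.08 messages/s, because the min- and max-send-time bounds rarely bind, matching the remark above that their influence is usually negligible.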

2.6 Evaluation model

The evaluation model combines all previously defined models to derive a model ready for evaluation. Therefore, the results of the traffic model, which corresponds to the application layer, are broken down to the physical layer model to identify the parts of the network which will finally be loaded. This is done by analyzing the route each message takes through the physical structure from one output to another input data access point at the application layer. This way, composed of channels, routers and ports, is called a message class and has a constant message size. Figure 8 shows the UML model of a MessageClass and its associations. As stated above, the traffic model estimates the application layer mean arrival rate and not the physical layer rate. Depending on the message service type, multiple physical messages are generated for each application message. For example, the receiver can acknowledge a message or the sender can repeat a message multiple times. These messages may have different sizes and opposite directions and are therefore modeled by multiple message classes.
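The MessageClass of Fig. 8 can be sketched as a small data structure; the field names and all numbers are illustrative assumptions, not the tool's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class MessageClass:
    name: str
    size_bytes: int                  # constant message size
    rate: float                      # arrival rate from the traffic model (1/s)
    route: list = field(default_factory=list)   # ports, routers, channels passed
    splits: list = field(default_factory=list)  # subclasses behind routers
    probability: float = 1.0         # usage probability of alternative ways

# Binding b1 of Fig. 9: one message at port p1 is split by the router
# into copies on channels B and C (sizes and rates are made up):
c1 = MessageClass("c1", size_bytes=14, rate=0.08, route=["p1", "ch. A", "Router"])
c1.splits = [MessageClass("c1a", 14, c1.rate, route=["ch. B"]),
             MessageClass("c1b", 14, c1.rate, route=["ch. C"])]

# Load (bytes/s) that c1 contributes to channel C via its subclasses:
load_C = sum(s.rate * s.size_bytes * s.probability
             for s in c1.splits if "ch. C" in s.route)
print(load_C)
```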


For example, the binding b3 in Fig. 6 uses the repeated service and generates with each request two messages in the same direction (c3 and c4 in Fig. 9). In contrast, the binding b4 uses the acknowledged service and the message classes c5 and c6 have opposite directions. The bindings b1 and b2 use the same way but have different variable sizes and are modeled separately with the message classes c1 and c2. While at port p1 only one message is generated for binding b1, the message will be split by routers into two messages filling channels B and C. This means that a part of the way is shared and another part is separated. This is modeled by the ability of message classes to split up into subclasses. The splitting also allows the representation of alternative ways. As explained in Sect. 2.2, wireless sensor nodes will not connect to a predefined relay but to the one with the currently lowest error rate. This results in different ways a message may take on the physical layer. They are modeled by alternative message classes with a specific probability of usage. This probability is not contained in the design database so far and needs to be requested from the user if wireless nodes are used. The previously discussed message loss rate also influences the arrival rate in the evaluation model if acknowledged message services are used. A receiver will return an acknowledgment only if the original message from the sender arrived.


Fig. 8 UML model of a message class and its associations

Fig. 9 Evaluation model of the example in Fig. 2. Message class c2 of binding b2 has the same layout as c1 and has been skipped


So the arrival rate of the acknowledgment message class equals the application layer arrival rate λ reduced by the physical message loss probability p_phy;o_b,i. In case of a loss of either the original message or the acknowledgment, the sender will repeat its original message after a timeout. This increases the arrival rate of the original message by the physical loss probability of the original and acknowledgment messages. Again the receiver will only answer if this message arrives. The cycle restarts and will be repeated r times. In the simplest case there is only one receiver and the loss probabilities of the original and acknowledgment messages are equal. Then the arrival rates of the original and acknowledgment message classes are

λorg = λ · Σ_{k=0}^{r} ( 1 − (1 − p_phy;o_b,i)² )^k ,    (7)

λack = (1 − p_phy;o_b,i) · λorg .

In case of a multicast with group addressing the situation is more complex. The message will be repeated if at least one acknowledgment of the group messages is missing. In LON the repeated message will address only members with outstanding acknowledgments using a special group message. The inverse probability of a second repeat now requires that all acknowledgments of this subset arrive. Hence, the connection with the highest error probability determines the number of repeats. Another detail of acknowledged services which could be modeled is the case that the acknowledgment arrives after the acknowledge timeout and provokes an unnecessary reminder message. This indicates that the latency in the network is too large and the timeout is not adjusted to it. However, the approximation of these probabilities requires quantiles or at least higher moments of the transmission delay times. These measures are not easy to compute with either analytical or simulative performance evaluation approaches. It is better to compare the timeout only with the mean delay times, which are easier to compute. If they are of the same magnitude the timeout is definitely too small; if they are not, this case can probably be neglected.

Fig. 10 Queuing model for a single room control with a centralized management system

The usage of loss probabilities in the evaluation model requires the same fixed point approach as the traffic model. In fact, the fixed point analysis wraps the parameterization of both modeling steps while the model structure is reused. Both models could be included in a highly integrated performance evaluation model, but the next section will prove the benefits of the separated models.

3 Analysis with queuing networks

One benefit of the separated models is the independence of specific network protocol implementations. The introduced model parts are oriented on the OSI layer model and are therefore general and applicable to different protocols and media. The protocol details and behavior are modeled within the performance evaluation, which can be either an analytical or a simulative approach of different detail level. The introduced evaluation model can be easily mapped to both, which is another benefit. This section gives an example of the mapping to an analytically solvable queuing analysis model [3, 27], which is suitable for fast computation of mean load and timing performance measures. The queuing network takes advantage of the evaluation model message classes. They have a constant message size and a known arrival rate from the traffic model. Every message class contains the list of passed physical elements. These elements, which are channels, ports or routers, need to be transformed into the queuing model.
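Equation (7) is easy to check numerically. The sketch below assumes a single receiver with equal loss probability p in both directions and up to r retries; the function name and the numbers are ours:

```python
def retry_rates(lam, p, r):
    """Arrival rates of the original and acknowledgment message classes
    per Eq. (7): single receiver, equal loss probability both ways."""
    q = 1.0 - (1.0 - p) ** 2   # P(original or acknowledgment lost in one cycle)
    lam_org = lam * sum(q ** k for k in range(r + 1))
    lam_ack = (1.0 - p) * lam_org
    return lam_org, lam_ack

# A 10% physical loss probability and up to 3 retries inflate the
# original traffic by about 23%:
lam_org, lam_ack = retry_rates(lam=1.0, p=0.1, r=3)
print(lam_org, lam_ack)
```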

Table 6 Mapping from the UML model to a queuing analysis model

UML model        →  Queuing model
Message class    →  Message class
Sending port     →  FIFO finite capacity queue
Router           →  Delay station
Channel          →  Load dependent station


Each sending port can buffer messages if the channel is busy and is therefore mapped onto a FIFO queue. If an entry for the queue length of a device or device class is specified in the design or device databases, a finite capacity queue is used; otherwise the queue capacity is assumed to be infinite. Further, a router needs about 5 ms to process a transmitted message and is therefore described by a delay station. A channel is modeled as a load dependent service station with a varying service rate as described in [3, 27]. Table 6 summarizes these mapping rules and Fig. 10 shows the queuing model of the message class example in Fig. 9. It demonstrates how the simple mapping process yields a model ready for evaluation.
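The mapping rules of Table 6 can be sketched as a small factory function. The dictionary layout and type names are our own shorthand; only the 5 ms router delay and the queue rules follow the text:

```python
ROUTER_DELAY_S = 0.005   # routers need about 5 ms per message

def to_station(element, kind, queue_capacity=None):
    """Map one physical element on a message class route to a queuing station."""
    if kind == "port":       # sending ports buffer messages -> FIFO queue
        cap = queue_capacity if queue_capacity is not None else float("inf")
        return {"name": element, "type": "fifo_queue", "capacity": cap}
    if kind == "router":     # fixed processing time -> delay station
        return {"name": element, "type": "delay", "delay_s": ROUTER_DELAY_S}
    if kind == "channel":    # service rate varies with load
        return {"name": element, "type": "load_dependent"}
    raise ValueError(f"unknown element kind: {kind}")

stations = [to_station("p1", "port", queue_capacity=16),
            to_station("Router 1", "router"),
            to_station("ch. B", "channel")]
print([s["type"] for s in stations])
```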

4 Used sources of information

4.1 Design database

The introduced method of automated modeling is applicable wherever the design information of the control network is available in a machine-readable form. One example of such a design database in building automation is the LNS Network Operating System [7], a platform for the development, integration and monitoring of LonWorks [9] based control networks. Several third-party software tools with different applications in the life cycle of a control network access the LNS database to store and exchange information. It manages the access to the real network for configuration, integration or monitoring issues. Thereby, it is connected continuously to the network and allows the system knowledge to be upgraded during the lifetime. A Component Object Model (COM) [19] interface enables access to this platform from a variety of languages, either to change the database or to interact with the real network. Therewith, most information about the control network is available for extraction. Another example is the design platform ETS3 [8] for EIB/KNX-based networks. The DCOM library Falcon enables extraction of the necessary network information from ETS3. For pure wireless sensor networks there is no comparable design database available. Further development and standardization of design tools will enable more precise models, especially with regard to radio wave propagation models.

4.2 Device database

In order to gather information about the devices that are deployed in the network, a device database has been developed. It pools several device characteristics and defaults from standards and measurements that are important for modeling entire building automation systems. Different types of reusable information are contained in the device database:

– Generalized device models – these models define the default parameters of Eq. (1) and its dependencies for a product class of devices, which are the arrival rate gains between data access points and device-specific defaults for the send-on-delta, min-send-time and max-send-time. Each source is further assigned a generalized physical dynamics. The device model is automatically associated in the evaluation model with each device using the product name and manufacturer from the design database.
– Standardized device models – due to the strong standardization in building automation, many devices of different manufacturers use standardized functional profiles [1, 18]. These profiles define an abstract device behavior and can be used as a master for the device models. Therewith, even unknown devices that use functional profiles can be automatically modeled without a specific generalized device model.
– Generalized dynamics f′(t) – the processes outside of the device impact the arrival rates on the application layer. Generalized analysis results about these processes (e.g. the gradient of the temperature) are saved to enable quick model generation without additional effort for environmental modeling.
– Standardized properties – this includes standardized variable types [18] and media access parameters, like header sizes and transmission timers defined in the network protocol description (e.g. [9, 12]).
– Measured delay times – routers and devices spend time on processing messages. These delay times depend on various parameters, but defaults for analysis have been generalized from measurements.

4.3 Miscellaneous sources

The design and device databases contribute the major part of the information to the system model. Information that is not contained in the databases has to be reconstructed if possible. For example, groups are arranged automatically from multiple bindings of one source. Routing tables used in routers to filter the message traffic are derived from the physical structure. For other parameters, recommended defaults from standards can be assumed. Some parameters have limited influence on the results and can even be ignored, like the cable length.


5 Case study with example network

This final section presents a case study with an example LonWorks network which has been analyzed with the proposed methods using the implementation in the NetPlan tool [21]. Figure 11 displays the physical structure of the example network. The network consists of 7 subnets with 13 channels. There are 25 devices and 12 two-port routers connected to the channels. The device database contains device models for all devices. The 32 bindings cover all available service types and addressing possibilities.

The example network is read from the design database into the analysis tool, and the physical, transport and application layer models are constructed according to the UML models defined before. Merging the application layer model with the device information from the device database generates the traffic model. Combining the submodels in the third step results in the evaluation model, which is computed using the queuing analysis algorithm presented in [3, 27]. The whole process takes a few seconds, and the tool then presents the analyzed network to the user for customization and experiments.

Figure 12 shows the results of the queuing analysis for the example network in the tool. It rates the major bottlenecks according to load and time aspects. The results on the right reveal that channels 1–3, 7 and 8 are critically overloaded.


Fig. 11 Example network

Fig. 12 Load estimation for the example network


A theoretical utilization of more than 100% indicates that only a part of the evaluated application layer traffic could be transmitted and a much higher bandwidth is necessary. With this theoretical utilization the user stays aware of the collision and buffer losses, because they are included in the results. This prevents him from misinterpreting the physical utilization, which does not exceed 100% independent of the induced traffic. Channel 10, with a utilization of 63%, is already highly loaded and suffers losses. By selecting the affected channels, the impacting bindings are listed, which enables an effective analysis of the reasons for the overload. A part of the current work focuses on consulting the user in solving problems using the complex model information [29]. The user can notice the effect of parameter changes immediately, because traffic model generation and queuing analysis are very fast.
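The distinction between theoretical and physical utilization can be illustrated with a small computation; all numbers below (message rate, frame size, channel bit rate) are hypothetical:

```python
# Theoretical utilization: offered load divided by channel capacity.
# Unlike the physical utilization it may exceed 100%, signalling that
# part of the offered application layer traffic cannot be transmitted.
def theoretical_utilization(msgs_per_s, frame_bytes, channel_bps):
    return msgs_per_s * frame_bytes * 8 / channel_bps

rho = theoretical_utilization(400.0, 30, 78_125)   # made-up channel parameters
excess = max(0.0, 1.0 - 1.0 / rho)  # share of traffic the channel cannot carry
print(f"theoretical utilization {rho:.0%}, untransmittable share {excess:.0%}")
```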

6 Conclusion

This paper presents an approach for automatic model building based on existing design databases in the exemplary case of building automation. The model structure was introduced and the used sources of information were unfolded. The proposed method is designed to enable model generation without additional work for the user. The user is not excluded completely and can supply more precise information to improve the model. However, the resulting model is mainly limited by the detail of the electronically available design information. Other sources like knowledge databases, standards and generalized measurements are available. But the resulting model will always contain generalizations and should be handled with care. In our case the traffic model is the most artificial part and therefore the most sensitive one. A single incorrectly chosen characteristic can deliver results far from reality. Here, again, the system designer is requested to work carefully and double-check his design and the performance results. The proposed automatic model generation not only allows automatic load prediction during the network design, but also enables automatic adjustment to changing system designs. During the operation of a network it can be used for diagnosis in conjunction with monitoring by protocol analyzers. The packet occurrence, order, way and content are known in the proposed model and can be compared to the monitored data. This allows the detection of error effects and model-based conclusions about the error causes. Comparable approaches in automatic modeling are feasible for all systems which are too complex to be designed by hand and need tools with substantial design databases.


Missing information does not necessarily need to be provided by users. Other sources like knowledge databases, standards, generalized measurements or reconstruction are available. This disburdens the user while using tools for performance prediction and permits automated network performance engineering to design optimally dimensioned and parameterized control networks.

Acknowledgments The project the present report is based on was promoted by the Federal Ministry of Education and Research under the registration number 13N8177. The authors bear all the responsibility for contents.

References

1. American Society of Heating, Refrigerating and Air-Conditioning Engineers: Standard 135-2001 – BACnet – A Data Communication Protocol for Building Automation and Control Networks (2001)
2. Blake, I.F., Lindsey, W.C.: Level-crossing problems for random processes. IEEE Trans. Inf. Theory 19, 295–315 (1973)
3. Buchholz, P., Plönnigs, J.: Analytical analysis of access-schemes of the CSMA-type. In: WFCS 2004 – 5th IEEE International Workshop on Factory Communication Systems, pp. 127–136. Vienna (2004)
4. Castelpietra, P., Song, Y., Simonot-Lion, F., Attia, M.: Analysis and simulation methods for performance evaluation of a multiple networked embedded architecture. IEEE Trans. Ind. Electron. 49(6), 1251–1264 (2002)
5. Dementjev, A., Kabitzsch, K.: A consulting module in room automation. In: Proceedings of the IFAC Symposium on Telematics Applications in Automation and Robotics, pp. 37–42. Espoo, Finland (2004)
6. Dietrich, D., Loy, D., Schweinzer, H.J.: Open Control Networks. Kluwer, Boston (2001)
7. Echelon Corporation: LNS Network Operating System. http://www.echelon.com/lns (2004)
8. EIB Association: www.eiba.com (2005)
9. European Committee for Standardization: CEN prEN14908: Open Data Communication in Building Automation, Controls and Building Management (in press, 2004)
10. Hintze, E., Kucera, P.: Simulation of RFieldbus networks. In: Proceedings of the 5th IFAC International Conference on Fieldbus Systems and Their Applications (FeT 2003), pp. 115–122. Aveiro, Portugal (2003)
11. Hunstock, R., Rüping, S., Rückert, U.: A distributed simulator for large networks of building automation systems. In: Proceedings of the 2000 IEEE International Workshop on Factory Communication Systems (WFCS 2000), pp. 203–210. Porto (2000)
12. Institute of Electrical and Electronics Engineers: IEEE Std 802.15.4. Tech. rep., IEEE (2003)
13. International Organization for Standardization: ISO/IEC 7498 – Information technology – Open systems interconnection – Basic reference model (1994)
14. Kastner, W., Neugschwandtner, G., Soucek, S., Newman, H.M.: Communication systems for building automation and control. Proceedings of the IEEE 93(6), 1178–1203 (invited paper, 2005)
15. Koopman, P.J.: Tracking down lost messages and system failures. Embedded Syst. Program. 9(11), 38–52 (1996)
16. Le Boudec, J.-Y., Thiran, P.: Network Calculus – A Theory of Deterministic Queuing Systems. Lecture Notes in Computer Science, vol. 2050. Springer, Berlin Heidelberg New York (2001)
17. Leland, W.E., Taqqu, M.S., Willinger, W., Wilson, D.V.: On the self-similar nature of Ethernet traffic. In: Sidhu, D.P. (ed.) ACM SIGCOMM, pp. 183–193. San Francisco, California (1993)
18. Application-Layer Interoperability Guidelines. http://www.lonmark.org/products/guides.htm (2002)
19. Microsoft Corporation: Component Object Model Specification, 0.9 edn. Microsoft Corporation (1995)
20. Miskowicz, M., Sapor, M., Zych, M., Latawiec, W.: Performance analysis of the predictive p-persistent CSMA protocol for control networks. In: 4th IEEE International Workshop on Factory Communication Systems, pp. 249–256. Västeras, Sweden (2002)
21. Neugebauer, M., Plönnigs, J., Kabitzsch, K.: Prediction of network load in building automation. In: FeT 2003 – 5th IFAC International Conference on Fieldbus Systems and their Applications, pp. 269–274. Aveiro (2003)
22. Neugebauer, M., Plönnigs, J., Kabitzsch, K.: A new beacon order adaptation algorithm for IEEE 802.15.4 networks. In: Proceedings of the 2nd European Workshop on Wireless Sensor Networks (EWSN 2005), pp. 302–311. Istanbul, Turkey (2005)
23. Neugebauer, M., Stein, G., Kabitzsch, K.: A new protocol for a low power sensor network. In: Proceedings of the 23rd IEEE International Performance Computing and Communications Conference, pp. 393–399. Phoenix (2004)
24. OPNET Technologies: http://www.opnet.com (2005)
25. Otanez, P.G., Moyne, J.R., Tilbury, D.M.: Using deadbands to reduce communication in networked control systems. In: Proceedings of the American Control Conference, vol. 4, pp. 3015–3020 (2002)
26. Paxson, V., Floyd, S.: Wide area traffic: the failure of Poisson modeling. IEEE/ACM Trans. Netw. 3(3), 226–244 (1995)
27. Ploennigs, J., Buchholz, P., Neugebauer, M., Kabitzsch, K.: Automated modeling and analysis of CSMA-type access schemes for building automation networks. IEEE Trans. Ind. Inf. 2(2), 103–111 (2006)
28. Ploennigs, J., Neugebauer, M., Kabitzsch, K.: A traffic model for networked devices in the building automation. In: WFCS 2004 – 5th IEEE International Workshop on Factory Communication Systems, pp. 137–145. Vienna (2004)
29. Ploennigs, J., Neugebauer, M., Kabitzsch, K.: Fault analysis of control network designs. In: ETFA 2005 – 10th IEEE International Conference on Emerging Technologies and Factory Automation, vol. 2, pp. 477–484. Catania (2005)
30. Pratl, G., Penzhorn, W.T., Dietrich, D., Burgstaller, W.: Perceptive awareness in building automation. In: Proceedings of the ICCC 2005 – IEEE 3rd International Conference on Computational Cybernetics, pp. 259–264. Mauritius (2005)
31. Schwarz, P., Donath, U.: Simulation-based performance analysis of distributed systems. In: International Workshop Parallel and Distributed Real-Time Systems, pp. 244–249 (1997)
32. Soucek, S., Sauter, T.: Quality of service concerns in IP-based control systems. IEEE Trans. Ind. Electron. 9, 1249–1258 (2004)
33. Tomura, T., Uehiro, K., Kanai, S., Yamamoto, S.: Developing simulation models of open distributed control systems by using object-oriented structural and behavioral patterns. In: 4th IEEE International Symposium on Object-Oriented Real-Time Distributed Computing, pp. 428–437. IEEE, Magdeburg (2001)
34. Watson, K., Jasperneite, J.: Determining end-to-end delays using network calculus. In: FeT 2003 – 5th IFAC International Conference on Fieldbus Systems and their Applications, pp. 255–260. Aveiro (2003)
35. Willig, A., Wolisz, A.: Ring stability of the PROFIBUS token-passing protocol over error-prone links. IEEE Trans. Ind. Electron. 48, 1025–1033 (2001)
36. Woodside, M., Hrischuk, C., Selic, B., Bayarov, S.: A wideband approach to integrating performance prediction into a software design environment. In: Proceedings of the First International Workshop on Software and Performance, pp. 31–41. ACM Press, New York (1998)
