Project IST-2001-38314 COLUMBUS
Design of Embedded Controllers for Safety Critical Systems

WPBD: Platform Based Design

Platform Based Approach with Constraints and Export Mechanism

A. Balluchi, M. D. Di Benedetto, A. Ferrari, G. Girasole, F. Graziosi,
F. Parasiliti, R. Petrella, A. Sangiovanni-Vincentelli, F. Santucci,
M. Sgroi, M. Tursini, R. Alesii, S. Tennina

March 19, 2004

Version:             0.3
Task number:         2
Deliverable number:  DPBD1
Contract:            IST-2001-38314 of European Commission

DOCUMENT CONTROL SHEET
Title of document:   Platform Based Approach with Constraints and Export Mechanism
Authors of document: as listed above
Deliverable number:  DPBD1
Contract:            IST-2001-38314 of European Commission
Project:             Design of Embedded Controllers for Safety Critical Systems (Columbus)
DOCUMENT CHANGE LOG
Version  Issue Date      Sections affected  Relevant information
0.1      31 December 03  All                First draft
0.2      9 February 04   All                Second draft
0.3      19 March 04     All                Third draft
Authors and Reviewers

Authors                         Organisation
A. Balluchi                     AQUI
M. D. Di Benedetto              AQUI
A. Ferrari                      PARADES
G. Girasole                     AQUI
F. Graziosi                     AQUI
F. Parasiliti                   AQUI
R. Petrella                     AQUI
A. Sangiovanni-Vincentelli      PARADES & UCB
F. Santucci                     AQUI
M. Tursini                      AQUI
R. Alesii                       AQUI
S. Tennina                      AQUI

Internal reviewers              Organisation
P. Tognolatti                   AQUI
S. Di Gennaro                   AQUI
Abstract

Platform-based Design (PBD) is a relatively new methodology paradigm for the design of embedded systems. It has made significant inroads in the electronic industry (see, for example, the OMAP platform for cellular communication and the Nexperia platform for multimedia). However, the concept means different things to different industrial sectors and different design groups. An attempt at structuring this approach has been put forward by our research group. The basic aspect of the methodology is its meet-in-the-middle view of the design process, where a combination of top-down and bottom-up processes defines an approach that maximizes re-usability and verifiability while maintaining constraints on performance, cost and power consumption. We describe the general aspects of the methodology and give three applications to show how this method can be applied in a number of different industrial domains of great interest, such as wireless sensor networks, automotive controllers and electric motor drives. The point is to show that it is possible to adopt a general design methodology for all embedded system applications, thus forming the basis for a well-structured discipline that yields repeatable results and saves substantial amounts of expensive resources. In particular, the wireless sensor network domain presents several challenging problems: it is characterized by hard real-time constraints, it has to be fault-tolerant and design-error free, and it has to react to a non-deterministic, adversarial environment. Ad hoc wireless sensor networks are designed for environmental monitoring applications, and we emphasize a methodology that favours re-use at all levels of abstraction. In particular, we used the platform-based design paradigm to identify an abstraction layer for the applications that is implementation independent. The goal is to design a sensor node which is able to reconfigure itself and to form a network without any need for expensive infrastructure. The design of automotive control systems is becoming increasingly complex, due to the increasing level of performance required by car manufacturers and the tight constraints on cost and development time imposed by the market. In this report, we illustrate the application of an integrated control-implementation design methodology, recently proposed by our group, to the development of the highest layers of abstraction in the design flow of an engine control system for motorcycles. Finally, we show that an appropriate subset of the layers of abstraction used for the automotive application can also be used for the design of electric drives. The essential gain in this domain is the possibility of designing for non-idealities of the computing platform at the functional level, thus allowing for early detection of errors and short time-to-market while satisfying tight performance constraints.
Contents

1 Introduction                                                          5

2 System Design Methodology                                            12
  2.1 Function and Communication-based Design                          13
  2.2 (Micro-)Architecture                                             16
  2.3 Mapping                                                          18
  2.4 Link to Implementation                                           19
  2.5 Platform-Based Design                                            20
      2.5.1 The Overarching Conceptual View                            20
      2.5.2 (Micro-)Architecture Platforms                             23
      2.5.3 API Platform                                               27
      2.5.4 System Platform-Stack                                      29
      2.5.5 A Formal Interpretation of Platform-based Design           31

3 Platform-based Design for Wireless Ad-hoc Sensor Networks            37
  3.1 Introduction                                                     37
  3.2 A Standard for the Development of Implementation-Independent
      Applications                                                     39

4 A Service-Based Universal Application Interface for Ad-hoc
  Wireless Sensor Networks                                             42
  4.1 Introduction                                                     42
  4.2 Ad-hoc Wireless Sensor Networks - Functional Architecture        46
      4.2.1 The Application                                            47
      4.2.2 The Sensor Network Services Platform (SNSP)                47
      4.2.3 Sensors and Actuators                                      49
  4.3 The Query/Command Services: the Core of the SNSP                 51
      4.3.1 Naming                                                     52
      4.3.2 Query Service                                              53
      4.3.3 Command Service                                            58
  4.4 Auxiliary Services                                               59
      4.4.1 Concept Repository Service                                 59
      4.4.2 Time Synchronization Service                               61
      4.4.3 Location Service (LS)                                      62
  4.5 A Bridge to the AWSN Implementation                              63
  4.6 Summary                                                          64
  4.7 Design of AWSN Using the PBD Paradigm                            65
      4.7.1 Communication Networks: OSI Reference Model                68
      4.7.2 Network Platforms                                          72
      4.7.3 Network Platform API                                       76
      4.7.4 Quality of Service                                         77
      4.7.5 Classes of Communication Service                           79
  4.8 Examples of Network Platforms                                    80
  4.9 Concluding Remarks and Future Work                               82

5 Integrated control-implementation design for automotive
  embedded controllers                                                 84
  5.1 Integrated control-implementation design of a motorcycle ECU     87
  5.2 From System Specification to Functional Decomposition            87
      5.2.1 Functional platform                                        87
      5.2.2 Functional refinement                                      89
      5.2.3 ECU functional design                                      91
  5.3 From Functional Decomposition to Control Strategies              92
      5.3.1 Control platforms                                          92
      5.3.2 Control refinement                                         93
      5.3.3 ECU control strategies design                              94
  5.4 From Control Strategies to Implementation Abstract Model         95
      5.4.1 Implementation platforms                                   96
      5.4.2 Implementation abstract model refinement                   97
      5.4.3 ECU implementation abstract model design                   98
  5.5 Concluding Remarks and Future Work                              100

6 Platform-based design for electric motor drives                     102
  6.1 Introduction                                                    102
  6.2 Platform-Based-Design approach for electrical drives            105
  6.3 An application of PBD design approach: sensor-less control
      of electrical drives                                            107
      6.3.1 Sensor-less control of electrical drives                  107
      6.3.2 Sensor-less control of IPM motors                         109
      6.3.3 Sensor-less drive scheme                                  111
      6.3.4 Signal injection technique                                112
      6.3.5 Kalman filtering                                          114
      6.3.6 Demodulation strategy: carrier recovery                   115
      6.3.7 Adaptive observer for the IPM motor                       118
  6.4 Simulation of the continuous time ideal drive system            121
      6.4.1 Signal injection based estimation engine                  122
      6.4.2 Motor model                                               124
      6.4.3 Coordinate transformations                                128
      6.4.4 Results of the continuous time ideal drive system         129
      6.4.5 Results with off-line estimation                          130
      6.4.6 Results with on-line estimation                           131
  6.5 Introduction of platform specific implementation constraints    134
      6.5.1 Finite precision fixed-point numerical representation     137
      6.5.2 Quantisation of measured values                           138
      6.5.3 Control loop delay (or latency time)                      139
      6.5.4 Actuation delay (due to the presence of the power
            converter)                                                139
      6.5.5 Simulation of the drive system adopting platform
            specific implementation constraints                       141
      6.5.6 Introducing actuation delay and measurements
            quantisation                                              146

7 Conclusions                                                         151
List of Figures

2.1  Overall Organization of the Methodology [1]                       14
2.2  Platforms, Mapping Tools and Platform Stacks [2]                  22
2.3  Layered software structure (Source: A. Ferrari) [2]               28
2.4  System platform stack [2]                                         30
2.5  Architecture and Function Platforms                               33
2.6  Mapping of function and architecture                              34
4.1  Functional model of an AWSN as a set of controllers (a) interacting
     with the environment and among each other; (b) interacting with the
     environment through a set of spatially distributed sensors and
     actuators; (c) interacting with the environment through a unified
     Application Interface (AI). Observe that interactions between the
     controllers themselves are now supported through the same paradigm. 48
4.2  Query Service interactions                                        55
4.3  Query Service execution. Controller/QS interactions are defined by
     the AI, while the QS/Sensor interface is implementation dependent
     and might follow for example the IEEE 1451.2 standard             57
4.4  Mapping Application and SNSP onto a SNIP                          64
4.5  OSI-RM layering structure [3]                                     70
4.6  Process Composition in TSM                                        73
4.7  Examples of NPIs (base) [3]                                       80
4.8  Examples of NPIs (refinement) [3]                                 81
4.9  Wireless NPIs [3]                                                 82
5.1  Functional decomposition                                          88
5.2  General scheme for the functional design                          90
5.3  Refinement of the functional decomposition                        93
5.4  AFR control parameter admissible set J(c) ≤ 0, for settling time
     specification J1 (gray) and overshoot specification J2 (cyan)     95
5.5  Abstract representation of the effects of implementation
     non-idealities                                                    96
5.6  Values of the platform parameter Tc that guarantee settling time
     specification J1 (top) and overshoot specification J2 (bottom),
     for admissible control parameters (KP, KI)                       101
6.1  Sensor-less drive scheme                                         112
6.2  Digital PLL adopted for carrier recovery                         117
6.3  Flux and current space vectors in an IPM synchronous motor       118
6.4  Flux observer                                                    119
6.5  Adaptive magnet flux and speed observer                          121
6.6  The control system under the VisSim environment                  123
6.7  Signal injection based estimation engine                         123
6.8  Signal injection implementation                                  124
6.9  Calculation of quadrature-axis flux                              126
6.10 Calculation of direct-axis flux                                  126
6.11 Calculation of quadrature-axis current                           126
6.12 Calculation of direct-axis current                               127
6.13 Calculation of electromagnetic torque                            127
6.14 Calculation of rotor speed                                       127
6.15 Rotor position equation                                          127
6.16 Floating-point discrete-time integrator                          127
6.17 Coordinate transformation at the motor model input side          128
6.18 Coordinate transformation at the motor model output side         128
6.19 Speed step response (estimator off-line)                         130
6.20 Rotor position estimation error (estimator off-line)             131
6.21 Electromagnetic torque and q-axis current component (estimator
     off-line)                                                        132
6.22 Speed step response (estimator on-line, sensor-less operations)  133
6.23 Rotor position estimation error (estimator on-line)              133
6.24 q-axis and d-axis current components (estimator on-line)         134
6.25 Motor phase current processing including quantisation effects    139
6.26 Simulation model adopting some implementation constraints        141
6.27 Estimated phase angle of the demodulation signal                 143
6.28 Rotor position estimation error when control loop delay is
     neglected                                                        144
6.29 Rotor position estimation error when control loop delay is
     modelled                                                         145
6.30 q-axis current component                                         146
6.31 Simulation model after modelling the actuation                   147
6.32 Compare unit and inverter                                        147
6.33 Speed step response                                              149
6.34 Comparison between reference and actual q-axis current           150
6.35 Comparison between reference and actual q-axis current           150
List of Tables

4.1 Query Service primitives                                   54
4.2 Primitives used for the interaction with Virtual Sensors   58
6.1 Adopted IPM motor parameters                              125
Columbus    IST-2001-38314    WPBD    Page 4
Chapter 1

Introduction

Embedded systems today are at the core of most consumer products, as well as of industrial automation processes and transportation systems. This new market includes small and mobile devices that provide information, entertainment and communication features. These embedded systems require complex design and integration, to be achieved in the short time frame of consumer electronics. The design challenge is the expansion of this spectrum of diversity and the implementation of a set of functionalities satisfying a number of constraints, ranging from performance to cost, emissions, power consumption, weight and form factor. The functionalities to be implemented in embedded systems have grown in number and complexity so much that the development time is increasingly difficult to predict. This increase in complexity, coupled with evolving specifications, has forced designers to look at implementations that are intrinsically flexible. The increase in computational power of processors and the corresponding decrease in size and cost have allowed the transfer of functionalities from hardware to software to achieve the desired flexibility. The overall goal of electronic embedded system design is to balance production costs with development time and cost, in view of performance and functionality considerations. Minimizing production cost is the result of a
balance between competing criteria. If one considers an integrated circuit implementation, the size of the chip is an important factor in determining production cost, and minimizing the size of the chip implies tailoring the hardware architecture to the functionality of the product. As a consequence, one can identify in the integrated circuit world a common hardware denominator (referred to as a hardware platform) that can be shared across multiple applications in a given application domain. Increasing the production volume of such a platform may then eventually yield a (much) larger decrease in overall costs than customizing the chip for each application. Of course, while production volume drives overall cost down, it is important to also consider the final size of the implementation, as well as the functionality and performance the platform should support. Today the choice of a platform architecture and implementation is much more an art than a science. In [1] it is argued that a successful next-generation system design methodology must assist designers in the process of designing, evaluating, and programming such platform architectures, with metrics and with early assessments of the capability of a given platform to meet design constraints. As the complexity of products and their designs increases, development efforts increase dramatically and verifying design correctness becomes problematic. This is a critical aspect of embedded systems, since several application domains, such as automotive or environment monitoring, are characterized by safety considerations that are simply absent from traditional PC-like software applications. Embedded controllers for safety-critical systems present the most challenging problems: they are characterized by hard real-time constraints, they have to be fault-tolerant and design-error free, and they have to react to a non-deterministic, hostile environment.
To dominate these effects and, at the same time, meet the design time requirements, a design methodology that favours reuse and early error detection is essential.
Both reuse and early error detection imply that the design phases must be defined carefully: all activities have to be clearly identified. Thus, a design methodology that addresses such complex embedded systems must start at high levels of abstraction [1]. Integrated circuit designers, for example, work with abstraction layers that are too close to the implementation and, as a consequence, they experience several problems in sharing knowledge among different working groups; in addition, verifying the correctness of the design before building a physical prototype is very difficult. Most IC designers use Register Transfer Level (RTL) languages, assembly or, at best, C to capture the behaviour of their components, but these levels are clearly too low for complex system design. In particular, it is believed that the lack of appropriate methodology and tool support for modeling concurrency in its various forms is an essential factor limiting the ability of commonly used programming languages to express design complexity [1].

Only by taking a global, high-level view of the problem can we devise solutions that are going to have a real impact on the design of embedded systems. On the one hand, starting from a high-level abstraction requires that we define system functionality in a completely implementation-independent way, and that we keep solid theoretical foundations for formal analysis. On the other hand, we need to select a platform that can support the functionality while meeting physical and user constraints on the final implementation. Our ultimate goal is to create a library of functions, along with associated hardware and software implementations, that can be used for all designs. It is important to have multiple levels of functionality supported in such a library, since lower levels, which are closer to the physical implementation, may often change because of advances in technology, while higher levels tend to be more stable across product versions.
Finally, we believe the preferred approaches to the implementation of complex embedded systems should include the following aspects [1].

• Design time and cost are likely to dominate the decision-making process for system designers. Therefore, design reuse in all its shapes and forms, as well as just-in-time, low-cost design-debug techniques, will be of paramount importance. Flexibility is essential to be able to map an ever-growing functionality onto a continuously evolving problem domain and set of associated hardware implementation options.

• Designs must be captured at the highest level of abstraction to be able to exploit all the degrees of freedom that are available. Such a level of abstraction should not make any distinction between hardware and software, since that distinction is the consequence of a design decision.

• The implementation of efficient, reliable, and robust approaches to the design, implementation, and programming of concurrent systems is essential. In essence, whether the silicon is implemented as a single large chip or as a collection of smaller chips interacting across a distance, the problems associated with concurrent processing and concurrent communication must be dealt with in a uniform and scalable manner. In any large-scale embedded systems program, concurrency must be considered as a crucial aspect at all levels of abstraction, in both hardware and software.

• Concurrency implies communication among components of the design. Communication is too often intertwined with the behaviour of the components of the design, so that it is very difficult to separate the two domains. Separating communication and behaviour is essential to dominate system design complexity: if behaviours and communications are intertwined in a design component, it is very difficult to re-use that component, since its behaviour is tightly dependent on its communication with other components of the original design. In addition, communication can be described at various levels of abstraction, thus exposing the potential of implementing communication
behaviour in many different forms, according to the available resources. Today this freedom is often not exploited.

• Next-generation systems will most likely use a few highly complex (Moore's-Law-limited) part types, but many more energy/power/cost-efficient, medium-complexity (O(10M-100M) gates in 50nm technology) chips, working concurrently to implement solutions to complex sensing, computing, and signalling/actuating problems. These chips will most likely be developed as instances of a particular platform. That is, rather than being assembled from a collection of independently developed blocks of silicon functionality, they will be derived from a specific family of micro-architectures, possibly oriented toward a particular class of problems, that can be modified (extended or reduced) by the system developer. These platforms will be extended mostly through the use of large blocks of functionality (for example, in the form of co-processors), but they will likely support extensibility in the memory/communication architecture as well. When selecting a platform, cost, size, energy consumption and flexibility must be taken into account. Since a platform has much wider applicability than an ASIC, design decisions are crucial: a less than excellent choice may result in an economic debacle. Hence, design methods and tools that optimize the platform-selection process are very important.

• Platforms will be highly programmable, at a variety of levels of granularity. Because of this feature, mapping an application onto a platform efficiently will require a set of software design tools that resemble logic synthesis tools more and more. This is believed to represent a very fruitful research area.

Platform-based Design (PBD) is a relatively new methodology paradigm for the design of embedded systems that addresses most of the aspects outlined above. It has already made significant inroads in the electronic industry (see, for example, the OMAP platform for cellular communication and the Nexperia platform for multimedia). However, the concept means different things to different industrial sectors and different design groups. An attempt at structuring this approach has been put forward by our research group. The basic aspect of the methodology is its meet-in-the-middle view of the design process, where a combination of top-down and bottom-up processes defines an approach that maximizes re-usability and verifiability while maintaining constraints on performance, cost and power consumption. We describe the general aspects of the methodology and give three applications to show how this method can be applied in a number of different industrial domains of great interest, such as wireless sensor networks, automotive controllers and electric motor drives. The point is to show that it is possible to adopt a general design methodology for all embedded system applications, thus forming the basis for a well-structured discipline that yields repeatable results and saves substantial amounts of expensive resources. The methodology has a dual use. On the one hand, for relatively well-established disciplines, it can cast best engineering practices in a fairly rigorous framework, thus allowing a considerable reduction in design time and effort, and the wide adoption of a common view of the design process across engineering organizations that may be geographically and intellectually quite distant, as documented by the results in the automotive industry. On the other hand, it can provide breakthroughs in new design domains by providing abstractions and processes that make it possible to create an industry, as in the case of wireless sensor networks, where we are in the process of proposing a standard for the interface between applications and the underlying physical wireless networks.
In this deliverable, we first present the basic tenet of a design methodology based on the separation, or orthogonalization, of concerns, and we introduce the concept of platform-based design (Chapter 2). In Chapter 3, we show an application of the system design ideas presented in the previous chapters and, in particular, we demonstrate how the methodology can be used to:

• Define novel layers of abstraction and interfaces that allow the development of applications semi-independently from the implementation details of the underlying network;

• Build a generalized platform for ad hoc wireless networks. In particular, we present our proposed solution for the Data Link (DL) algorithm in these networks.

In the sequel, we give an overview of TinyOS and nesC, which are, respectively, the operating system and the programming language of the MICA2 motes. In Chapter 5, we present the application of the PBD methodology to the design of automotive systems, using the design of a motorcycle engine controller as a demonstrator of the paradigms we support. In Chapter 6, we present the application of PBD to the design of electric motor drives to minimize time-to-market while satisfying performance constraints. We believe we have described here a general methodology, and the associated design flow, that can have a significant impact on design and on tools.
Chapter 2

System Design Methodology

An essential component of a new system design paradigm is the orthogonalization of concerns (we say orthogonalization, by analogy with orthogonal bases in mathematics, rather than mere separation, to stress the independence of the axes along which we perform the "decomposition"), i.e. the separation of the various aspects of design to allow a more effective exploration of alternative solutions. An example of this paradigm is the orthogonalization between functionality and timing exploited in the synchronous design methodology that has been so successful in digital design. In this case, provided that the signal propagation delays of all combinatorial blocks are within the clock cycle, checking the correct behaviour of the design is reduced to checking the functionality of the combinatorial blocks, thus achieving a major design speed-up with respect to the more liberal asynchronous design methodology. Other, more powerful, paradigms must be applied to make the system design problem solvable, let alone efficiently so. One pillar of a design methodology that has been proposed over the years [7, 8, 9] is the separation between:

• Function (what the system is supposed to do) and architecture (how it does it);

• Communication and computation.
The mapping of function to architecture is an essential step from conception to implementation. A well-known industrial example is Hardware-Software Co-design: the problem to be solved is coordinating the design of the parts of the system to be implemented as software with the parts to be implemented as hardware blocks, avoiding the HW/SW integration problem. However, worrying about hardware-software boundaries without considering higher levels of abstraction may be the wrong approach. HW/SW design and verification happen after some essential decisions have already been made, and this is what makes the verification and synthesis problems hard. SW is really the form a behaviour takes when it is mapped onto a programmable microprocessor or DSP. Motivations for preferring a SW implementation over a HW one may be the performance of the application on a particular processor, or the need for flexibility and adaptivity. The origin of the HW/SW separation problem is the behaviour that the system must implement. The choice of an architecture, i.e. of a collection of components that can be software-programmable, re-configurable or customized, is the other important step in design. The basic tenet of the proposed design methodology is shown in Figure 2.1 and detailed in the next sections.
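The function/architecture separation and the mapping step can be sketched as follows. This is a hypothetical illustration invented for this report: the behaviour, the resource names (`sw_on_dsp`, `hw_block`) and the latency and cost figures are all made up. The point it demonstrates is that the behaviour is captured once, free of any HW/SW commitment, while the mapping is an explicit, checkable decision against architectural estimates.

```python
# Function/architecture separation: one behaviour, several candidate
# architectural resources, and an explicit mapping decision.

def behaviour(samples):
    """The function: a moving average, with no notion of HW or SW."""
    return sum(samples) / len(samples)

# Candidate resources the same behaviour could be mapped onto
# (all figures invented for illustration).
architecture = {
    "sw_on_dsp": {"latency_us": 40.0, "unit_cost": 1.0, "flexible": True},
    "hw_block":  {"latency_us": 2.5,  "unit_cost": 4.0, "flexible": False},
}

def map_function(resource_name, deadline_us):
    """Mapping step: bind the function to a resource and check the constraint."""
    return architecture[resource_name]["latency_us"] <= deadline_us

# The behaviour is unchanged whichever resource it is mapped onto.
assert behaviour([2, 4, 6]) == 4.0
assert map_function("hw_block", deadline_us=10.0)       # meets the deadline
assert not map_function("sw_on_dsp", deadline_us=10.0)  # misses the deadline
```

Deciding late, and explicitly, between `sw_on_dsp` and `hw_block` is exactly the freedom that is lost when HW/SW boundaries are fixed before the behaviour is fully captured.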
2.1 Function and Communication-based Design
Generally, a system implements a set of functions, where a function is an abstract view of the behaviour of an aspect of the system. This set of functions is the input/output characterization of the system with respect to its environment; there is no notion of implementation associated with it. For example, "when the engine of a car starts (input), display the number of revolutions per minute of the engine (output)" is a function, while "when the engine starts, display in digital form the number of revolutions per minute on the LCD panel" is not a function. In the latter case, it has already been decided that the display device is an LCD and that the
Figure 2.1: Overall Organization of the Methodology [1]
format of the data is digital. The notion of function strongly depends on the level of abstraction at which the design is considered. For example, the decision whether to use an LCD display or some other visual indication of the engine revolutions per minute may not be a free parameter of the design. Even in this case, however, it is important to realize that there is a higher level of abstraction at which the decision about the type of signal is taken. This may lead to the development of new paradigms that were not even considered before because of the entry level of the design. The point is that no design decision should ever be made implicitly, and that capturing the design at higher levels of abstraction yields better designs in the end. If there are design decisions to be made, they are grouped in a design phase called function design. The description of the function the system has to implement is captured using a particular language that may
or may not be formal. The languages most widely used today for capturing functional specifications are application dependent. For example, Matlab is typically adopted for control applications. However, these languages often do not have the semantic constructs needed to specify concurrency. The most important aspect of a functional specification is its underlying mathematical model, often called a model of computation. The most significant models of computation that have been proposed are based on three basic concepts: Finite State Machines, Data Flow and Discrete Events [13, 14]. All models have their strengths and weaknesses, and an important differentiating factor is the ability to use each model where it works best. It should be remarked that each model is composable (can be assembled) in a particular way, which guarantees that some properties of the single components are maintained in the overall system. Communication and time representation in each model of computation are strictly intertwined. In fact, in a synchronous system, communication can take place only at precise instants of time, thus reducing the risk of unpredictable behaviours. Synchronous systems are notoriously more expensive to implement and often deliver lower performance, so opening the door to asynchronous implementations may be the winning choice. In this latter case, which is often the choice for large safety-critical system design, particular care has to be exercised to avoid undesired and unexpected behaviours. Striking the balance between synchronous and asynchronous implementations is probably the most challenging aspect of system design. The view of communication in these models of computation is often addressed at a low level of abstraction. It would be desirable to specify abstract communication patterns with high-level constraints that do not yet imply a particular model of communication.
For example, an essential aspect of data communication is loss-lessness: there must exist a level of abstraction high enough to require that communication takes place with no data losses. Kahn process networks
[14] are important Data Flow models that guarantee loss-less communication at the highest level of abstraction by assuming an ideal buffering scheme with unbounded buffer size. Clearly, an unbounded buffer size is a non-implementable way of guaranteeing loss-lessness. When moving towards implementable designs, this assumption has to be removed. A buffer can be provided to store temporarily the data exchanged among processes, but it must be of finite size. Hence, the choice of the buffer size is crucial. Unfortunately, deciding whether a finite-buffer implementation exists that guarantees loss-lessness is not decidable in the general case, but there are cases for which the optimal buffer size can be found. In general, buffer overwriting could occur, and the designer needs additional mechanisms that, composed with the finite-buffer implementation, still guarantee that no loss takes place. For example, a send-receive protocol can be used to prevent buffer overwrites from occurring. Note that in this case the refinement process may be quite complex and involve the use of composite processes. This particular example will be taken up again in the last section of Section 4, in order to better explain our methodology. Approaches to the separation of communication and computation, and to refining the communication specification towards an implementation, have been presented in [15]. In some cases, the designer has been able to determine a synthesis procedure for the communication that guarantees certain properties. Clearly, this formalism and the successive refinement process open a very appealing perspective on system design, with unexplored opportunities in component-based software design.
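As an illustration of this refinement, the following sketch (our own, not taken from [15]; the class and names are invented) pairs a finite buffer with a blocking send-receive protocol, so that the sender waits instead of overwriting and no token is ever lost:

```python
import threading
from collections import deque

class LosslessChannel:
    """Finite FIFO plus a blocking send-receive protocol: the sender waits
    when the buffer is full, so no token is overwritten or dropped."""
    def __init__(self, size: int):
        self.buf = deque()
        self.slots = threading.Semaphore(size)   # free buffer slots
        self.items = threading.Semaphore(0)      # filled buffer slots
        self.lock = threading.Lock()

    def send(self, token):
        self.slots.acquire()          # blocks instead of overwriting
        with self.lock:
            self.buf.append(token)
        self.items.release()

    def receive(self):
        self.items.acquire()          # blocks on an empty buffer
        with self.lock:
            token = self.buf.popleft()
        self.slots.release()
        return token

# Two Kahn-style processes refined onto a 4-place buffer.
ch = LosslessChannel(size=4)
produced = list(range(100))
received = []
producer = threading.Thread(target=lambda: [ch.send(x) for x in produced])
producer.start()
for _ in produced:
    received.append(ch.receive())
producer.join()
print(received == produced)  # the finite buffer loses no data
```

The semaphore pair is the extra mechanism that, composed with the finite buffer, restores the loss-lessness guarantee of the unbounded abstraction.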
2.2 (Micro-)Architecture
In most design approaches, the next stage of the design process involves the evaluation of trade-offs across the architecture/micro-architecture boundary, where the class of structural compositions that implement the architecture is of primary concern. While the word architecture is used with many meanings
and in many contexts (the reader can refer to [1] for examples), we adhere to the definitions introduced in [16]: the architecture defines an interface specification that describes the functionality of an implementation, while being independent of the actual implementation. The micro-architecture, on the other hand, defines how this functionality is actually realized as a composition of modules and components, along with their associated software. The instruction-set architecture of a microprocessor is a good example of an architecture: it defines what functions the processor supports, without defining how these functions are actually realized. The micro-architecture of the processor is defined by its organization and its hardware. These terms can easily be extended to cover a much wider range of implementation options. At this point, the design decisions are made concerning what will eventually be implemented as software and what as hardware. Consistent with the above definitions, a micro-architecture is a set of interconnected components (either abstract or with a physical dimension) that is used to implement a function. For example, the LCD, a physical component of a micro-architecture, can be used to display the number of revolutions per minute of the automotive engine, to which we referred above. In this case, the component has a concrete, physical representation. In other cases, its representation may be more abstract. In general, a component is an element with specified interfaces and explicit context dependency. The micro-architecture determines the final hardware implementation and hence is strictly related to the concept of (hardware) platform [2], which will be presented in greater detail later. The most common micro-architecture for embedded safety-critical designs consists of microprocessors, peripherals, dedicated logic blocks and memories.
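The ISA example can be sketched as an interface/implementation split. The two cores below are hypothetical, invented purely for illustration; what matters is that software written against the interface cannot tell them apart:

```python
from abc import ABC, abstractmethod

class Isa(ABC):
    """The architecture: an interface specification, independent of realization."""
    @abstractmethod
    def add(self, a: int, b: int) -> int: ...

class SingleCycleCore(Isa):
    """One micro-architecture: executes every instruction in one long cycle."""
    def add(self, a, b):
        return (a + b) & 0xFFFFFFFF   # 32-bit wrap-around arithmetic

class PipelinedCore(Isa):
    """Another micro-architecture: same ISA contract, different organization."""
    def add(self, a, b):
        return (a + b) & 0xFFFFFFFF

def program(cpu: Isa) -> int:
    """Software written against the ISA runs unchanged on either core."""
    return cpu.add(cpu.add(1, 2), 3)

print(program(SingleCycleCore()), program(PipelinedCore()))  # identical results
```

The two cores differ in organization (and would differ in cycle count, area and power), but both are valid instances of the same architecture.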
In the case of automotive body electronics, the actual placement of the electronic components inside the body of the car and their interconnections are kept mostly fixed, while the single components, i.e., the processors, may vary to a certain extent. A fixed micro-architecture simplifies the design problem, but limits design
optimality. The trade-off is not easy to strike. In addition, the communication among micro-architecture blocks must be handled with great care: its characteristics can make the composition of blocks easy or difficult to achieve. Standards are useful for achieving component re-use, and a bus is a typical interconnection structure intended to favour re-use. Unfortunately, the specification of standard busses is rarely formal.
2.3 Mapping
The essential design step that connects the different abstraction layers is the mapping process, in which the functions to be implemented are assigned (mapped) to the components of the micro-architecture. For example, the computations needed to display a set of signals may all be mapped to the same processor, or to two different components of the micro-architecture (e.g., a microprocessor and a DSP). The mapping process determines the performance and the cost of the design. As we said above, to measure the performance of a design and its cost in terms of used resources, it is often necessary to complete the design. We look for a more rigorous design methodology. In the mapping step, our choice is dictated by estimates of the performance of each function when implemented on a given micro-architecture component. Estimates can be provided either by the manufacturers of the components or by the system designers. Expert designers use analysis tools to develop estimation models that can be evaluated quickly, to allow for fast design exploration, and yet are accurate enough to choose a good micro-architecture. Given the importance of this step in any application domain, automated tools and environments should effectively support the mapping of functions to micro-architectures. The output of this step consists of one of the following alternatives.

• A mapped micro-architecture, iteratively refined towards the final implementation, with a set of constraints on each mapped component
(derived from top-level design constraints); or
• A set of diagnostics for the selection phase of the micro-architecture and the function set, when the estimation process signals that the design constraints may not be met with the present micro-architecture and function set. In this case, if possible, an alternative micro-architecture is selected. Otherwise, the designer needs to work in the function space by reducing either the number of functions to be supported or their demands in terms of performance.
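A minimal sketch of such an estimate-driven mapping step, producing either a mapping or diagnostics. The component names, cycle counts and deadlines below are invented for illustration:

```python
# Candidate micro-architecture components and their clock rates (illustrative).
components = {"cpu": 100e6, "dsp": 200e6}   # Hz

# Estimated cycle counts for each function on each component, as supplied
# by component manufacturers or derived from estimation models.
est_cycles = {
    "filter":  {"cpu": 500_000, "dsp": 120_000},
    "display": {"cpu":  80_000, "dsp": 300_000},
}
deadline_ms = {"filter": 2.0, "display": 1.0}   # top-level design constraints

def exec_time_ms(func, comp):
    return est_cycles[func][comp] / components[comp] * 1e3

def map_functions():
    """Return a mapping that meets the deadlines, or diagnostics if none does."""
    mapping, diagnostics = {}, []
    for func in est_cycles:
        best = min(components, key=lambda c: exec_time_ms(func, c))
        if exec_time_ms(func, best) <= deadline_ms[func]:
            mapping[func] = best
        else:
            diagnostics.append(f"{func}: cannot meet {deadline_ms[func]} ms")
    return mapping, diagnostics

mapping, diagnostics = map_functions()
print(mapping)   # each function assigned to a component that meets its deadline
```

When a function appears in the diagnostics list instead of the mapping, the designer must either select another micro-architecture or relax the function set, exactly as described above.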
2.4 Link to Implementation
This phase is entered once the mapped micro-architecture has been estimated to be capable of meeting the design constraints, in our case including the safety-critical ones. The next major issue to be tackled is building the components of the micro-architecture. This requires the development of the appropriate hardware or software that enables the programmable hardware platform to perform its task. This step leads the design to its final implementation stage. A hardware block may be found in an existing library or may need a special-purpose implementation as dedicated logic based on an existing library of components. Likewise, the software components may already exist in an appropriate library or may need further decomposition into a set of sub-components. Thus, whether in hardware or software, the need for a customized component exposes us to what reference [1] calls the fractal nature of design, i.e., the design problem repeats itself at every level of the design hierarchy in a sequence of nested function (also called architecture)–micro-architecture–mapping processes.
2.5 Platform-Based Design
Having analyzed the general view of the methodology, we now have to formalize the design methodology as a structured set of elements. In this frame, we will examine the Platform-Based Design (PBD) features, as defined in [2].
2.5.1 The Overarching Conceptual View
Various forms of platform-based design have been used for many years. The basic advantages of PBD are as follows [2]:

• It lays the foundation for developing economically feasible design flows, because it is a structured methodology that theoretically limits the space of exploration, yet still achieves superior results within the fixed time constraints of the design. A platform is an abstraction layer in the design flow that facilitates a number of possible refinements into a subsequent abstraction layer in the design flow. The design process progresses through a series of platforms, and each platform layer defines bounds on what is achievable by mapping into it, while offering faster design and lower risk.

• It provides a formal mechanism to identify the most critical hand-off points in the design chain: the hand-off point between system companies and IC design companies, and the one between IC design companies and manufacturing companies, represent the articulation points of the overall design process. For instance, semiconductor companies need to minimize risks when designing standardized chips. Hence, they need a fairly complete characterization of the application spaces they wish to target, together with the associated constraints in terms of affordable costs and performance levels. By the same token, system companies need an accurate characterization of the capabilities of the chips in terms of performance, such as power consumption, size and timing, as well as “Application Program Interfaces” (APIs) that allow the mapping of their application onto the chip at a fairly abstract level. The APIs must then be supported by a number of tools that ease the possibly automatic generation of the personalization of the programmable components of the chips.

• It eliminates costly design iterations because it enables derivative design, i.e., the technique of building an application-specific product by assembling and configuring platform components in a rapid and reliable fashion. For instance, a hardware platform is a family of architectures satisfying a set of constraints imposed to allow the re-use of hardware and software components. An API platform can then be developed to effectively extend the hardware platform toward the application software, thus enabling quick, reliable derivative design.

• It regards design as a meeting-in-the-middle process, where successive refinements of specifications meet with abstractions of potential implementations.

• It identifies precisely defined layers (platforms) where the refinement and abstraction processes take place. The layers then support the designs built upon them, with the upper layers isolated from lower-level details while letting enough information about the lower levels of abstraction transpire. This allows design space exploration with a fairly accurate prediction of the properties of the final implementation. This information should be incorporated in appropriate parameters that annotate the design choices at the present layer of abstraction.

The general definition of a platform, which we recall from [2], is an abstraction layer in the design flow that facilitates a number of possible refinements into a subsequent abstraction layer (platform) in the design flow. The mille-feuilles of Figure 2.2 is a sketch of the design process as a succession of abstraction layers. The analogy covers also the filling between consecutive
Figure 2.2: Platforms, Mapping Tools and Platform Stacks [2]
layers. This filling corresponds to the set of methods and tools that allow the design to be mapped from one abstraction layer to the next. Often, the combination of two consecutive layers and their filling can be interpreted as a unique abstraction layer with an upper view (the top abstraction layer) and a lower view (the bottom layer). Every pair of platforms, together with the tools and methods used to map the upper layer of abstraction onto the lower one, is called a platform stack. Note that a platform stack may include several sub-stacks if the designer wishes to span a large number of abstractions. This may depend on the sub-groups of competences involved. An example is when the physical-layer designers meet in the middle the constraints of the Data-Link protocol-layer designers in building two prototypes that need to perform a point-to-point communication; we detail this later in this deliverable. For now, let us simply note that the larger the span is, the more difficult it will be to map effectively
the two, but the greater the potential for design optimization and exploration. An essential aspect in the application of the design principle is the careful definition of the platform layers. Platforms can be defined at several points of the design process, and some levels of abstraction are more important than others in the overall design trade-off space. In particular, the articulation point between system definition and implementation is a critical one for quality and time in an embedded system environment. For this reason, we will now be concerned with the definitions of Architecture Platform and of its Instance, from reference [2]; these deal with the low-level layer (hardware specifications), but they bring us to the most important definition of the Programmer’s Model or Application Program Interface (API). In the next chapter, where we consider the Network Platform, we will see how to generalize that definition to the NAPI (Network API, also called NPI).
2.5.2 (Micro-)Architecture Platforms
Integrated circuits used for safety-critical embedded systems will most likely be developed as instances of a particular (micro-)architecture platform. That is, rather than being assembled from a collection of independently developed blocks, they will be derived from a specific family of micro-architectures, possibly oriented toward a particular class of problems, that can be modified by the system developer. The elements of this family are a sort of common hardware denominator that can be shared across multiple applications. Hence, the (micro-)architecture platform concept, as a family of micro-architectures that are closely related to each other, leads to design-time optimization: every element of the family can be obtained quickly by personalizing an appropriate set of parameters of the micro-architecture. For example, in an ad-hoc wireless sensor network environment, like the MICA2 motes which we will consider
later in this deliverable, each node may be characterized by the same family of programmable processors and even the same interconnection scheme, but the memories or the radio transceiver system may be selected from a pre-designed library of components depending on the particular application or communication-protocol constraints. The less constrained the platform is, the more freedom a designer has in selecting an instance (choosing its parameters) and the more potential there is for optimization – if time permits. However, more constraints mean a more constrained standard interface and, consequently, an easier addition of components to the library that defines the architecture platform. Note that the basic concept is that regularity and re-use of library elements allow faster design times at the expense of some optimality. The trade-off between design time and cost on the one hand and design quality on the other must always be kept in mind. Given that the elements of the library are re-used, there is a strong incentive to optimize them. In fact, it makes sense to offer variations of hardware blocks with the same functionality but with implementations that differ in performance, area and power dissipation. It should be remarked that this optimization issue is closely related to the hardware implementation: the system designer, who operates at the top of the stack, is only interested in the functionality of the hardware blocks; the details of their implementation are irrelevant. Thus, we actually realize the orthogonalization of concerns we mentioned before. Architecture platforms are often characterized by the presence of programmable components, so that each platform instance that can be derived from the architecture platform maintains enough flexibility to support a sufficiently wide application space. An architecture platform instance is derived from an architecture platform by choosing a set of components from its library and/or by setting the parameters of its re-configurable components.
Programmability will ultimately come in various forms: software programmability usually indicates the presence of a microprocessor,
DSP or any other software-programmable component, while hardware programmability indicates the presence of reconfigurable logic blocks, such as FPGAs, whose logic function can be changed by appropriate software tools. Some of the new architecture platforms being offered on the market include a mix of the two in a single chip (for example, Altera and Xilinx offer FPGA fabrics with an embedded PowerPC hard processor). Software programmability yields a more flexible solution, since modifying software is in general faster and cheaper than modifying FPGA personalities. On the other hand, logic functions mapped on FPGAs can be executed orders of magnitude faster and with much less power than the corresponding implementation as a software program. Thus, the trade-off here is between flexibility and performance.
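The trade-off can be made concrete with rough figures, invented purely for illustration, for one logic function mapped either onto a software-programmable core or onto reconfigurable logic:

```python
# Relative figures for one logic function; all numbers are illustrative only.
options = {
    "software": {"latency_us": 200.0, "power_mw": 50.0, "change_cost": 1.0},
    "fpga":     {"latency_us":   2.0, "power_mw":  5.0, "change_cost": 20.0},
}

def pick(priority: str) -> str:
    """Select the implementation that minimizes the attribute that matters most."""
    return min(options, key=lambda o: options[o][priority])

print(pick("latency_us"))    # performance-driven choice -> reconfigurable logic
print(pick("change_cost"))   # flexibility-driven choice -> software
```

Which implementation wins depends entirely on which attribute the application prioritizes; neither dominates across all three axes.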
Architecture Platform Design Issues

When seen from the application domain (top view), the constraints that determine the architecture platform are often given in terms of performance and size. For a particular application, it is required that, to sustain a set of functions, a CPU be able to run at least at a given speed and the memory system provide at least a given number of bytes. Coming from the IC manufacturer space (bottom view), production and design costs imply additional platform constraints and consequently reduce the number of choices. The intersection of the two sets of constraints defines the architecture platforms that can be used for the final product: this is neither a top-down nor a bottom-up design methodology. In fact, in a pure top-down design process, the application specification is the starting point, and the sequence of design decisions drives the designer toward a solution that minimizes a cost function. In a bottom-up approach, by contrast, a given instance of the architecture platform is designed to support a set of different applications that are often vaguely defined, and the designer tries to maximize the number of applications its platform instances can support. The
ultimate goal of the new methodology is to define platforms and platform instances in close collaboration with system companies, thus fully realizing the meet-in-the-middle approach. Note that, because of this process, we may obtain a platform instance that is over-designed for a given product; that is, the potential of the architecture is not fully exploited to implement the functionality of the final product. In several applications, an over-designed architecture has been a perfect vehicle to deliver new software products and extend the application space. Thus, it is believed that some degree of such over-design will be positive for embedded systems. To summarize, the design of an architecture platform is the result of a trade-off in a complex space that includes [2] the following aspects:

• The size of the application space that can be supported by the architectures belonging to the architecture platform. This represents the flexibility of the platform;

• The size of the architecture space that satisfies the constraints embodied in the architecture platform definition. This represents the degrees of freedom that architecture providers have in designing their hardware instances.

Once an architecture platform has been selected, the design process consists of exploring the remaining design space within the constraints set by the platform. These constraints are not only on the components themselves, but also on their communication mechanisms. In addition, approaching an implementation by selecting components that satisfy the architectural constraints defining a platform means performing a successive refinement process in which details are added in a disciplined way to produce an architecture platform instance. Application developers work with an ideal architecture platform by first choosing the architectural elements they believe are best for their purposes. Then, they must map the functionality of their application onto the platform instance. The mapping process includes hardware/software partitioning: while performing this step, designers may decide to move a function from a software implementation to a dedicated hardware block. Once the partitioning and the selection of the platform instance are finalized, the designers develop the final and optimized version of the application software. Because of the market forces briefly outlined above (low cost and short development time), many implementations of system functionality are done in software. This implies that an effective platform must offer a powerful design environment for software. Thus, there are two main concerns for an effective platform-based design:

• A software development environment;

• A set of tools that insulate the details of the architecture from the application software.

This brings us to the definition of an API platform.
2.5.3 API Platform
The concept of an architecture platform by itself is not enough to achieve the level of application re-use we require. The architecture platform has to be abstracted at a level where the application software sees a high-level interface to the hardware, called the Application Program Interface (API) or Programmer’s Model. A software layer is used to perform this abstraction (see Figure 2.3). This layer wraps the essential parts of the architecture platform [2]:

• The programmable cores and the memory subsystem, via a Real-Time Operating System (RTOS),

• The I/O subsystem, via the Device Drivers, and
Figure 2.3: Layered software structure (Source: A. Ferrari) [2]

• The network connections, via the network communication subsystem.

In this frame, while a programming language is the abstraction of the Instruction Set, the API is the abstraction of a set of computational resources (the concurrency model provided by the RTOS) and of the available peripherals (the Device Drivers). The API is a unique abstract representation of the architecture platform through the software layer. With such a defined API, the application software can be re-used for every platform instance. Indeed, the Programmer’s Model is a platform itself, which can be called the API platform. As usual, the higher the abstraction level at which a platform is defined, the more instances it contains. For example, to share source code, we need the same operating system but not necessarily the same instruction set, while to share binary code, we need to add the architectural constraints that force the use of the same ISA, thus greatly restricting the range of architectural
choices. The RTOS is responsible for scheduling the available computing resources and the communication between them and the memory subsystem. Note that in several safety-critical embedded system applications the available computing resource consists of a single microprocessor, but in general one can imagine a multiple-core architecture platform, where the RTOS schedules software processes across different computing engines.
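A toy sketch of this layered structure (the class and method names below are our own, not drawn from [2]): the application calls only the API wrapper, never the platform instance directly, so it can be re-used unchanged on any instance that implements the same wrapper.

```python
class PlatformApi:
    """The Programmer's Model: wraps the RTOS and the device drivers
    behind one abstract interface seen by the application software."""
    def __init__(self, rtos, drivers):
        self._rtos, self._drivers = rtos, drivers

    def spawn(self, task):                  # concurrency via the RTOS
        return self._rtos.schedule(task)

    def write(self, device, data):          # I/O via the device drivers
        return self._drivers[device].write(data)

# One possible platform instance: single processor, cooperative scheduling.
class CoopRtos:
    def schedule(self, task):
        return task()                       # run to completion, no preemption

class UartDriver:
    def __init__(self):
        self.sent = []
    def write(self, data):
        self.sent.append(data)
        return len(data)

def app(api: PlatformApi):
    """Application code: portable across every instance exposing PlatformApi."""
    return api.spawn(lambda: api.write("uart", "rpm=3000"))

api = PlatformApi(CoopRtos(), {"uart": UartDriver()})
print(app(api))   # bytes written through the API, independent of the instance
```

Replacing `CoopRtos` with a preemptive scheduler, or the UART with another peripheral driver, changes nothing in `app`: that is the re-use the API platform buys.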
2.5.4 System Platform Stack
The basic idea of the system platform stack is captured in Figure 2.4. The vertex of the two cones represents the combination of the API, or Programmer’s Model, and the architecture platform. A system designer maps an application into this abstract representation, which covers a family of architectures among which one can be chosen to optimize a cost functional (cost, efficiency, energy consumption or flexibility). Mapping the application onto the actual architecture platform instance in the family specified by the API can be carried out (at least in part) automatically, in a two-step sequence, provided that a set of appropriate software synthesis tools is available. Clearly, the synthesis tools have to be aware of the architecture features as well as of the API related to them. Starting from the application platform, we first synthesize the API platform, possibly through a successive refinement process; in a second step, we use the software layer to go from the API platform to the architecture platform. In the design space, there is an obvious trade-off between the level of abstraction of the Programmer’s Model and the number and type of platform instances covered. Remember that the more abstract the Programmer’s Model is, the richer the set of platform instances, but also the more difficult the choice of the optimal architecture platform instance and the automatic mapping onto it. Hence, one can envision a number of system platform stacks, each handled with somewhat different abstractions
Figure 2.4: System platform stack [2]
and tools. In fact, later in this deliverable we will see the Network Platform, which is a generalization of this framework and a typical example of the fractal nature of the design methodology: the design problem repeats itself at every level of the design hierarchy. Generalizing the process, design is seen mainly as a process of providing abstract views, as in a database management system, where each user sees only a part of the stored data and can obtain any kind of data aggregation according to his or her needs. Therefore, an API platform is an abstraction layer above some more complex system, which can be used for designing at a higher level while ignoring several implementation details. Following this model, a structural view of the design is abstracted into the API model, which provides the basis for all design processes that rest upon this layer of abstraction. To choose the right architecture platform, we have to export to the API level an execution model of the architecture platform that estimates the performance of the lower level. Conversely, we can pass constraints from higher levels of abstraction down to lower ones, in order to continue the refinement process and, hence, satisfy the original design constraints. With both constraints and estimates, we may also use a cost function to select a solution among the feasible ones. In summary, the system platform stack is a comprehensive model which includes the view of platforms from both the application and the implementation points of view. It is the vertex of the two cones in Figure 2.4. Note that the system platform effectively decouples the application development process (the upper triangle) from the architecture implementation process (the lower triangle).
2.5.5 A Formal Interpretation of Platform-Based Design
As we have seen, an essential component of PBD is the successive refinement process that leads from the higher layers of abstraction to the final implementation. The refinement process is interpreted as the concretization of a function in terms of the elements of an architecture. The process of design consists of evaluating the performance of different kinds of architectures by mapping the functionality onto their different elements. The implementation is then chosen on the basis of some cost function. In the sequel, we cast the successive refinement design flow in a formal framework described in terms of abstract algebra. This section is not self-contained, in the sense that some of the terminology typical of abstract algebra is not defined; the full theoretical background is beyond the scope of this report. We refer the reader to [10] as the main source of the results presented here, which can nevertheless be grasped intuitively. Both the functionality and the architecture can be represented at different levels of abstraction. For example, an architecture may employ a generic communication structure that includes point-to-point connections for all elements and unlimited bandwidth. At a more accurate level, the communication structure may be described as a bus with a particular arbitration policy and limited bandwidth. Similarly, the functionality could be described as the interconnection of agents that communicate through either unbounded (more abstract) or bounded (more concrete) queues. To characterize the process of mapping and performance evaluation, we use three distinct semantic domains. Two domains, called the architecture platform and the function platform, are devoted to describing the architecture and the function, respectively. The third, called the semantic platform, is an intermediate domain that is used to map the function onto an architecture. An architecture platform, depicted in Fig. 2.5 on the right, is composed of a set of elements, called the library elements, and of composition rules that define the admissible topologies. To obtain an appropriate domain of agents to model an architecture platform, we start from the set of library elements. We then construct the free algebra generated by the library elements by taking the closure under the operation of composition. In other words, we construct all the topologies that are admissible under the composition rules and add them to the set of agents in the algebra. Thus, each agent in the architecture platform algebra, called a platform instance, is a particular topology that is consistent with the rules of the platform. This construction is similar to a term algebra, subject to the constraints of the composition rules. For most architecture platforms, the composition must be constrained, since the number of available resources is bounded. For example, an architecture platform may provide only one instance of a particular processor; in that case, topologies that employ two or more instances are ruled out. Similarly to the architecture platform, the function platform, depicted in Fig. 2.5 on the left, is represented as an agent algebra. Here the desired function is represented denotationally, as the collective behavior of a composition of agents.
However, unlike the architecture platform that is used to select one particular instance among several, the function is fixed and is
Figure 2.5: Architecture and Function Platforms

used as the specification for the refinement process. The specification and the implementation come together in an intermediate algebra, called the semantic platform. The semantic platform plays the role of a common refinement and is used to combine the properties of both the architecture and the function platform. In fact, the function platform may be too abstract to talk about the performance indices that are characteristic of the more concrete architecture, while at the same time the architecture platform is a mere composition of components, without a notion of behavior.

In particular, we assume that there exists a conservative approximation (see [10] for a complete description and characterization of conservative approximations and their use in successive refinement and heterogeneous composition) between the semantic platform and the function platform, and that the inverse of the conservative approximation is defined at the function that we wish to evaluate. The function therefore is mapped onto the semantic platform as shown in Fig. 2.6. This mapping also includes all the refinements of the function that are consistent with the performance constraints, which can be interpreted in the semantic platform.

The correspondence between the architecture and the semantic platform is more complex. A platform instance, i.e., an agent in the architecture
Figure 2.6: Mapping of function and architecture

platform, usually includes programmable elements (microprocessors, programmable logic) that may be customized for the particular function required. Therefore, each platform instance may be used to implement a variety of functions, or behaviors. Each of these functions is in turn represented as one agent in the semantic platform. A platform instance is therefore projected onto the semantic platform by considering the collection of the agents that can be implemented by the particular instance. These, too, can be organized as a refinement hierarchy, since the same function could be implemented using different algorithms and employing different resources even within a particular platform instance.

Note that the projection of the platform instance onto the semantic platform, represented by the rays that originate from the architecture platform in Fig. 2.6, may or may not have a greatest element. If it does, the greatest element represents the nondeterministic choice of any of the functions that are implementable by the architecture.

An architecture and a function platform may be related using different semantic platforms, and under different notions of refinement. The choice of semantic platform is particularly important. The agents in the semantic platform must in fact be detailed enough to represent the performance values of interest in choosing a particular platform instance, and a particular realization (via programmability) of the instance. However, if the semantic platform is too detailed, the correspondence between the platform instance
and its realizations may be impractical to compute. This correspondence is therefore usually obtained by estimation techniques, rather than by analytical methods.

The semantic platform is partitioned into four different areas. We are interested in the area that corresponds to the intersection of the refinements of the function and of the functions that are implementable by the platform instance. This area is marked “Admissible Refinements” in Fig. 2.6. In fact, the agents that refine the function, but do not refine the architecture, are possible implementations that are not supported by the platform instance. The agents that refine the platform instance, but not the function, are possible behaviors of the architecture that are either inconsistent with the function (they do something else) or do not meet the performance constraints. The remaining agents, those not in the image of either map, correspond to behaviors that are inconsistent with the function and are not implementable by the chosen platform instance.

Among all the possible implementations, one must be chosen as the function to be used for the next refinement step. Each of the admissible refinements encodes a particular mapping of the components of the function onto the services offered by the selected platform instance. Of all those agents, we are usually interested in the ones that are closer to the greatest element, since those implementations are more likely to offer the most flexibility when the same refinement process is iterated to descend to an even more concrete level of abstraction. In addition, several different platform instances may be considered, to search among the different topologies and available resources and services. Once a suitable implementation has been chosen, the process continues with the next refinement step.
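In set terms, the selection described above amounts to intersecting the two images in the semantic platform. A minimal sketch (the agent names are hypothetical, and the real domains are agent algebras ordered by refinement, not flat sets):

```python
# Hypothetical agents in the semantic platform.
function_refinements = {"algoA_on_sw", "algoA_on_hw", "algoB_on_sw"}
instance_behaviors   = {"algoA_on_sw", "algoB_on_sw", "unrelated_behavior"}

# Admissible refinements: agents that refine the function AND are
# implementable by the chosen platform instance.
admissible = function_refinements & instance_behaviors

# Agents refining the function but not the architecture are possible
# implementations the platform instance cannot support.
unsupported = function_refinements - instance_behaviors
```

The refinement step then picks one element of `admissible`, preferably one close to the greatest element of the projection.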
The new function platform is obtained as the combination of the semantic platform that provides information on the desired behavior, and the architecture platform, which provides information on the topology and the structure of the mapped implementation. The new
function is then mapped to a new architecture, employing the same device of a semantic platform as an intermediate domain.
Chapter 3
Platform-Based Design for Wireless Ad-hoc Sensor Networks

According to the previous chapter, the Platform-Based Design paradigm can be applied by including the formalization of abstraction levels higher than the API Platform. In particular, this approach can be applied to wireless ad-hoc sensor networks. The use of PBD in this context is also useful to define layers of abstraction that can be used to identify standards that enable the use of this fairly new technology. We begin by discussing the applications and the challenges of AWSNs. Then we move to the definition of the abstraction layers and discuss a potential standard for the application interface. Finally, we develop the PBD paradigm to propose a flow for the design of an AWSN.
3.1
Introduction
Ad-hoc Wireless Sensor Networks (AWSNs) are an essential factor in the implementation of the “ambient intelligence” paradigm, which envisions smart
environments aiding humans to perform their daily tasks in a non-intrusive way [43]. The wide deployment of sensor networks will also dramatically change the operational models of traditional businesses in application domains such as home/office automation [44], power delivery [45], and natural environment control [46].

The potential applications of AWSNs can be distinguished between those that monitor and those that control the environment in which they are embedded. Monitoring applications gather the values of some parameters of the environment, process them, and report the outcome to external users. Control applications, in addition to monitoring, influence the environment so that it achieves a required behavior. Both types of applications require the deployment of sensor networks with a large number of nodes that are able to capture different physical phenomena (sensors), to make control decisions (controllers), and to act on the environment (actuators). The design problem is certainly challenging:

• the nodes of these networks fulfill their data-gathering and control functionality by working cooperatively; hence, inter-node communication, mostly over RF links, plays an essential role;

• the requirement that the network operation should continue for very long periods of time without human intervention makes low energy consumption of paramount importance;

• to meet the stringent power, cost, size, and reliability requirements, the design of sensor networks needs optimizations throughout all the steps of the design process, including application software, the layers of the communication protocol stack, and the hardware platform.

Currently, AWSN designs are rather ad hoc and tailored to the specific application, both in the choice of the network protocols and in the implementation platform. Today, it is virtually impossible to start developing applications without the prior selection of a specific, integrated hardware/software platform.
Coupling applications with specific hardware
solutions slows down the practical deployment of sensor networks: potential users hesitate to invest in developing applications that are intrinsically bound to specific hardware platforms available today.
3.2
A Standard for the Development of Implementation Independent Applications
To unleash the power of AWSNs, standards are needed that favor

• the incremental integration of heterogeneous nodes, and

• the development of applications independent of the implementation platforms.

To address the first concern, several efforts have recently been launched to standardize communication protocols among sensor network nodes. The best known standards are BACnet and LonWorks, developed for building automation [44]. They are geared towards well-defined application areas and are built on top of specific network structures. Hence, they are not well suited for many sensor network applications. ZigBee [48] defines an open standard for low-power wireless networking of monitoring and control devices. It works in cooperation with IEEE 802.15.4 [47], which focuses on the lower protocol layers (physical and MAC). ZigBee defines the upper layers of the protocol stack, from network to application, including application profiles. Yet, it is our belief that efforts like ZigBee, created in a bottom-up fashion, do not fully address the essential issue: how to allow interoperability between the multitude of sensor network operational models that are bound to emerge. In fact, different application scenarios lead to different requirements in terms of data throughput and latency, quality of service, use of computation and communication resources, and network heterogeneity. These requirements ultimately result in different solutions in network topology, protocols, computational platforms, and air interfaces.
The second concern has been partially addressed in the automation and manufacturing community, where networks of sensors (mostly wired) have been widely deployed. To achieve interoperability between different manufacturers, IEEE 1451.2 [49] standardizes both the key sensor (and actuator) parameters and their interface with the units that read their measures (or set their values). In particular, the standard defines the physical interface between the Smart Transducer Interface Module (STIM), which includes one or more transducers and the Transducer Electronic Data Sheet (TEDS) containing the list of their relevant parameters, and the Network Capable Application Processor (NCAP), which controls the access to the STIM.

Realizing that the present efforts lack generality and, to a certain degree, rigor, we propose an approach for the support of true interoperability between different applications as well as between different implementation platforms. We advocate a top-down approach, similar to the one adopted very successfully by the Internet community, and propose a universal application interface, which allows programmers to develop applications without having to know unnecessary details of the underlying communication platform, such as the air interface and network topology. Hence, we define a standard set of services and interface primitives (called the Sensor Network Services Platform, or SNSP) to be made available to an application programmer independently of their implementation on any present and future sensor network platform. Furthermore, we separate the virtual platform defined by the logical specification of the SNSP services from the physical platform (called the Sensor Network Implementation Platform, or SNIP) that implements it and determines the quality and cost of the services.
Just as the definition of sockets in the Internet has made the use of communication services independent of the underlying protocol stack, communication medium, and even operating system, the application interface we propose identifies an abstraction that is offered to any sensor network application
and supported by any sensor network platform. Yet, while similar in concept, the application interface needed for sensor networks is fundamentally different from the one defined in the Internet space. In the latter, the seamless set-up, use, and removal of reliable end-to-end communication links between applications at remote locations are the primary concern. Sensor network applications, on the other hand, require communication services to support queries and commands [50] among the three essential components of the network (sensor, monitor/controller, and actuator), and also need other services for resource management, time synchronization, locationing, and dynamic network management.

TinyDB [51] is the existing approach closest to our effort. It views a sensor network as a distributed database and defines an application-level abstraction based on the Query/Command paradigm to issue declarative queries. However, its main goal is to define the interface and an implementation of a specific service, the query service. Hence, the TinyDB abstraction lacks several auxiliary services needed in many sensor network applications. In addition, several decisions made in the service description seem to have been driven by implementation considerations.

This section is structured as follows. First, we introduce the functional components of a wireless sensor network and create a framework in which we can define the services offered by the distributed Service Platform. Next, we present the services offered by the SNSP, starting from the essential Query/Command Service. A brief discussion of some auxiliary services, such as locationing, timing, and the concept repository, follows. The section concludes by presenting the Sensor Network Implementation Platform and giving some perspectives.
Chapter 4
A Service-Based Universal Application Interface for Ad-hoc Wireless Sensor Networks

4.1
Introduction
Ad-hoc Wireless Sensor Networks (AWSNs) are an essential factor in the implementation of the “ambient intelligence” paradigm, which envisions smart environments aiding humans to perform their daily tasks in a non-intrusive way [43]. The wide deployment of sensor networks will also dramatically change the operational models of traditional businesses in application domains such as home/office automation [44], power delivery [45], and natural environment control [46].

The potential applications of AWSNs can be distinguished between those that monitor and those that control the environment in which they are embedded. Monitoring applications gather the values of some parameters of the environment, process them, and report the outcome to external users. Control applications, in addition to monitoring,
influence the environment so that it achieves a required behavior. Both types of applications require the deployment of sensor networks with a large number of nodes that are able to capture different physical phenomena (sensors), to make control decisions (controllers), and to act on the environment (actuators). The design problem is certainly challenging:

• the nodes of these networks fulfill their data-gathering and control functionality by working cooperatively; hence, inter-node communication, mostly over RF links, plays an essential role;

• the requirement that the network operation should continue for very long periods of time without human intervention makes low energy consumption of paramount importance;

• to meet the stringent power, cost, size, and reliability requirements, the design of sensor networks needs optimizations throughout all the steps of the design process, including application software, the layers of the communication protocol stack, and the hardware platform.

Currently, AWSN designs are rather ad hoc and tailored to the specific application, both in the choice of the network protocols and in the implementation platform. Today, it is virtually impossible to start developing applications without the prior selection of a specific, integrated hardware/software platform. Coupling applications with specific hardware solutions slows down the practical deployment of sensor networks: potential users hesitate to invest in developing applications that are intrinsically bound to specific hardware platforms available today. To unleash the power of AWSNs, standards are needed that favor

• the incremental integration of heterogeneous nodes, and

• the development of applications independent of the implementation platforms.
To address the first concern, several efforts have recently been launched to standardize communication protocols among sensor network nodes. The best known standards are BACnet and LonWorks, developed for building automation [44]. They are geared towards well-defined application areas and are built on top of specific network structures. Hence, they are not well suited for many sensor network applications. ZigBee [48] defines an open standard for low-power wireless networking of monitoring and control devices. It works in cooperation with IEEE 802.15.4 [47], which focuses on the lower protocol layers (physical and MAC). ZigBee defines the upper layers of the protocol stack, from network to application, including application profiles. Yet, it is our belief that efforts like ZigBee, created in a bottom-up fashion, do not fully address the essential issue: how to allow interoperability between the multitude of sensor network operational models that are bound to emerge. In fact, different application scenarios lead to different requirements in terms of data throughput and latency, quality of service, use of computation and communication resources, and network heterogeneity. These requirements ultimately result in different solutions in network topology, protocols, computational platforms, and air interfaces.
The second concern has been partially addressed in the automation and manufacturing community, where networks of sensors (mostly wired) have been widely deployed. To achieve interoperability between different manufacturers, IEEE 1451.2 [49] standardizes both the key sensor (and actuator) parameters and their interface with the units that read their measures (or set their values). In particular, the standard defines the physical interface between the Smart Transducer Interface Module (STIM), which includes one or more transducers and the Transducer Electronic Data Sheet (TEDS) containing the list of their relevant parameters, and the Network Capable Application Processor (NCAP), which controls the access to the STIM.
Realizing that the present efforts lack generality and, to a certain degree, rigor, we propose in this paper an approach for the support of true interoperability between different applications as well as between different implementation platforms. We advocate a top-down approach, similar to the one adopted very successfully by the Internet community, and propose a universal application interface, which allows programmers to develop applications without having to know unnecessary details of the underlying communication platform, such as the air interface and network topology. Hence, we define a standard set of services and interface primitives (called the Sensor Network Services Platform, or SNSP) to be made available to an application programmer independently of their implementation on any present and future sensor network platform. Furthermore, we separate the virtual platform defined by the logical specification of the SNSP services from the physical platform (called the Sensor Network Implementation Platform, or SNIP) that implements it and determines the quality and cost of the services.
Just as the definition of sockets in the Internet has made the use of communication services independent of the underlying protocol stack, communication medium, and even operating system, the application interface we propose identifies an abstraction that is offered to any sensor network application and supported by any sensor network platform. Yet, while similar in concept, the application interface needed for sensor networks is fundamentally different from the one defined in the Internet space. In the latter, the seamless set-up, use, and removal of reliable end-to-end communication links between applications at remote locations are the primary concern. Sensor network applications, on the other hand, require communication services to support queries and commands [50] among the three essential components of the network (sensor, monitor/controller, and actuator), and also need other services for resource management, time synchronization, locationing, and dynamic network management.
TinyDB [51] is the existing approach closest to our effort. It views a sensor network as a distributed database and defines an application-level abstraction based on the Query/Command paradigm to issue declarative queries. However, its main goal is to define the interface and an implementation of a specific service, the query service. Hence, the TinyDB abstraction lacks several auxiliary services needed in many sensor network applications. In addition, several decisions made in the service description seem to have been driven by implementation considerations.

This paper is structured as follows. First, we introduce the functional components of a wireless sensor network and create a framework in which we can define the services offered by the distributed Service Platform. Next, we present the services offered by the SNSP, starting from the essential Query/Command Service. A brief discussion of some auxiliary services, such as locationing, timing, and the concept repository, follows. The paper concludes by presenting the Sensor Network Implementation Platform and giving some perspectives.
4.2
Ad-hoc Wireless Sensor Networks - Functional Architecture
The functionality of an AWSN is best captured as a set of distributed compute functions (typically called controllers or monitors, given that most applications are either control or monitor oriented), cooperating to achieve a set of common goals. AWSNs interact with the Environment in the form of spatially distributed measurements and actuations (Figure 4.1a). The interactions with the environment are accomplished via an array of sensors and actuators, interacting with the controllers via a communication network (Figure 4.1b). The idea behind this paper is to formalize and abstract the interaction and the communications between the Application and the distributed sensors and actuators through the SNSP and its Application
Interface.
4.2.1
The Application
An AWSN Application consists of a collection of cooperating algorithms (which we call controllers) designed to achieve a set of common goals, aided by interactions with the Environment through distributed measurements and actuations. Controllers are components of AWSNs that read the state of the environment, process the information, and either report it or apply a control law to decide how to set the state of the environment. A controller is characterized by its desired behavior, its input and output variables, the control algorithm, and the model of the environment. In addition, to ensure proper operation, a controller places constraints on parameters expressing the quality of the input data, such as timeliness, accuracy, and reliability.
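The characterization of a controller given above can be captured as a simple record. The sketch below is our illustration only (the field names and the example controller are hypothetical, not part of the SNSP definition):

```python
from dataclasses import dataclass, field

@dataclass
class Controller:
    """A controller as characterized in the text: input and output
    variables, a control algorithm, and constraints on the quality
    of the input data (timeliness, accuracy, reliability)."""
    name: str
    inputs: list
    outputs: list
    algorithm: str
    quality_constraints: dict = field(default_factory=dict)

# Hypothetical example: a heating controller reading temperature.
hvac = Controller(
    name="hvac",
    inputs=["temperature"],
    outputs=["heater_setpoint"],
    algorithm="bang-bang",
    quality_constraints={"timeliness_s": 5.0, "accuracy_C": 0.5},
)
```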
4.2.2
The Sensor Network Services Platform (SNSP)
The Sensor Network Services Platform (SNSP) decomposes and refines the interaction between controllers and the Environment, and among controllers, into a set of interactions between control, sensor, and actuation functions. The services that the SNSP offers the Application are used directly by the controllers whenever they interact with each other or with the Environment. This approach abstracts away the details of the communication mechanisms and allows the Application to be designed independently of how exactly the interaction with the environment is accomplished (Figure 4.1c). The Application Interface (AI) is the set of primitives that are used by the Application to access the SNSP services. The SNSP is a collection of algorithms (e.g. location and synchronization), communication protocols (e.g. routing, MAC), data processing functions (e.g. aggregation), and I/O functions (sensing, actuation). The core of the
Figure 4.1: Functional model of an AWSN as a set of controllers (a) interacting with the environment and among each other; (b) interacting with the environment through a set of spatially distributed sensors and actuators; (c) interacting with the environment through a unified Application Interface (AI). Observe that interactions between the controllers themselves are now supported through the same paradigm.
SNSP is formed by:

• Query Service (QS): controllers get information from other components

• Command Service (CS): controllers set the state of other components

In addition, the SNSP should provide at least the following supporting services, essential to the correct operation of most sensor network applications given the ad-hoc nature of the network:

• Timing Service (TSS): components agree on a common time

• Location Service (LS): components learn their location

• Concept Repository Service (CRS): components agree on a common definition of the concepts during the network operation and maintain a repository of the capabilities of the deployed system

This is not an exhaustive list, and more services can be built on top of these basic ones.
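Because more services can be built on top of the basic ones, the SNSP suggests a thin dispatch layer behind the Application Interface. The sketch below is our illustration (the registry mechanism is hypothetical; only the service acronyms come from the text):

```python
class SNSP:
    """Minimal registry of SNSP services, keyed by the acronyms
    used in the text (QS, CS, TSS, LS, CRS)."""

    def __init__(self):
        self._services = {}

    def register(self, acronym, service):
        # Extensibility: more services can be built on the basic ones.
        self._services[acronym] = service

    def get(self, acronym):
        if acronym not in self._services:
            raise KeyError(f"service {acronym} not provided by this SNSP")
        return self._services[acronym]

snsp = SNSP()
for acronym in ("QS", "CS", "TSS", "LS", "CRS"):
    snsp.register(acronym, object())  # placeholder service objects
```

The Application would see only the interface primitives of each registered service, never its implementation in the SNIP.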
4.2.3
Sensors and Actuators
A sensor is a component that measures the state of the Environment. We identify two types of sensors:

1. Simple sensors: devices that directly map a physical quantity into a data value and provide the measure upon request. Devices that measure physical parameters, such as temperature, sound, and light, as well as input devices, such as keyboards or microphones, that allow external users to enter data or set parameters are examples of simple sensors.
2. Virtual sensors: components that overall look like sensors in the sense that they provide data upon an external request. Virtual sensors are defined by the list of parameters that can be read and by the primitives that are used for reading them. Examples of virtual sensors are:

• a sensor that provides an indirect measure of a certain environment condition by combining one or more sensing functions with processing (e.g., transformation, compression, aggregation);

• a controller, when it is queried for the value of one of its parameters;

• an external network providing data through gateways.

Let us consider a sensor network accessing an external information service provided by a global network (such as weather forecasts or energy prices available on an Internet web server). External services are accessed through a gateway that interfaces the sensor network with the global network. Through the gateway, the global network appears to the rest of the sensor network as a virtual sensor or a virtual actuator; thus, it may be queried or set by a command. In the example of the query of the weather forecast over the Internet, the gateway and the rest of the global network define a virtual sensor, which delivers data to the sensor network application.

An actuator is a component that sets the state of the environment. We identify two types of actuators:

1. Simple actuators: devices that map a data value into a physical quantity and may return an acknowledgment when the action is taken. Examples of actuators are devices that modify physical parameters, such as heaters and automatic window/door openers, as well as output devices, such as displays and speakers.
2. Virtual actuators: components that overall look like actuators in the sense that they receive values to set some parameters. Virtual actuators are defined by the list of parameters that can be set and by the primitives that are used for setting them. Examples of virtual actuators are:

• an actuator that provides an indirect way of controlling a certain environment condition by combining one or more physical actuators with processing (e.g., transformation and decompression);

• a controller whose parameters are set by other controllers;

• a network receiving commands to take actions through gateways.
A detailed list of the relevant parameters of sensors and actuators is given in [52]. The most important feature differentiating sensors and actuators from controllers is that the former are purely reactive components; that is, they will only read or set the state of the environment upon a request or command of a controller. Hence, only controllers can initiate events or sequences of events.
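A virtual sensor of the first kind above (sensing functions combined with processing) can be sketched as a purely reactive wrapper: it produces a value only when read, never on its own initiative. The class and sensor names below are hypothetical illustrations:

```python
class SimpleSensor:
    """A simple sensor maps a physical quantity into a data value
    and provides the measure upon request."""
    def __init__(self, readings):
        self._readings = readings
    def read(self):
        return self._readings.pop(0)

class AverageVirtualSensor:
    """A virtual sensor combining several sensing functions with
    processing (here: aggregation by averaging). Purely reactive:
    it only acts upon a read request from a controller."""
    def __init__(self, sensors):
        self._sensors = sensors
    def read(self):
        values = [s.read() for s in self._sensors]
        return sum(values) / len(values)

# Two hypothetical kitchen temperature sensors aggregated into one.
kitchen_temp = AverageVirtualSensor(
    [SimpleSensor([20.0]), SimpleSensor([22.0])]
)
```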
4.3
The Query/Command Services: the Core of the SNSP
To ensure generality and portability, our proposal formalizes only the primitives that allow the Application to access the services of the SNSP, and does not define the architecture of the SNSP itself. In this section, we outline the functionality and the interface primitives for the two core services of the SNSP: Query and Command. Due to space limitations, we provide a detailed description only of the Query Service primitives. Details of the other services are given in [52].
4.3.1
Naming
In AWSN applications, components communicate with each other because of specific features, often without knowing a priori which and how many components share those specified features. For example, a controller may want to send a message to all the sensors measuring temperature in a region, say in the kitchen. In this case, the group of temperature sensors located in the kitchen may be named and addressed for message delivery using the attributes “temperature” and “kitchen”, rather than using the list of IDs of all the individual sensors having those attributes [53].

A name is an attribute-specification and scope pair. An attribute specification is a tuple ((a1,s1), (a2,s2), ..., (an,sn), expr1, expr2, ..., exprl), where ai is an attribute, si is a selector that identifies a range of values in the domain of ai, and exprk is a logical expression defined by attribute-selector pairs and logical operators. For example, a name can be defined by attribute-selector pairs such as (temperature, ≥ 30 C) and (humidity, ≤ 70%), possibly combined by logical operators such as OR. The attributes commonly used for naming in sensor networks are the physical parameters being measured by sensors or modified by actuators (e.g. temperature, humidity).

Attribute specifications are always understood within a scope. A scope is a tuple (O1, O2, ..., On, R1, R2, ..., Rm), where Oi is an organization unit and Rj is a region. A region is a set of locations. We differentiate between two types of regions:

• a zone is a set of locations identified by a common name, e.g. the kitchen or the SF Bay;

• a neighborhood is a set of locations identified by their closeness to a reference point, e.g. all nodes within a radius of 10 m from a given location.

An organization is an entity that owns or operates a group of nodes.
4.3 The Query/Command Services: the Core of the SNSP
Organizations are essential to differentiate between nodes that operate in the same or overlapping regions, yet belong to different organizations (for instance, the police and the fire department). In general, names are not, and need not be, unique. In addition, names may change during the evolution of the network because of the movement of nodes or the modification of some attributes. An essential assumption underlying the SNSP is that all the functions participating in the network (that is, controllers, sensors and actuators) always have a sense of their location within the environment.
4.3.2 Query Service
The Query Service (QS) allows a controller to obtain the state of a group of components. A query is a sequence of actions initiated by a controller (the query initiator), which requests specific information from a group of sensors (the query targets). If the requested information is not available, QS always returns a negative response within a previously set maximum time interval. Figure 4.2 visualizes the interactions of QS with the query initiator and the query target. In sensor network applications, query targets are typically sensors that provide controllers with the requested measures, but in some applications the target may be a group of controllers (considered as virtual sensors) that are asked for their current state.
Queried parameters. In addition to the physical data being measured by a sensor, a controller may query other parameters related to the sensor, such as time (when the measure was taken), location (where the measure was taken), accuracy (how accurate the measure was), and security (whether the measure comes from a trusted source). If no parameter is indicated in a query request, by default the response returns the data measured by the target sensors. The primitives of the Query Service are summarized in Table 4.1.
QSRequestWrite (Target, Parameter, QueryClass, ResponseType, Reliability): initiates a query of the type indicated in QueryClass to obtain Parameter from the components addressed by the name Target. It returns a QueryId as a descriptor of the query, or Error.
QSResponseRead (QueryId): returns the value of the parameter requested by the query identified by QueryId, if available. If it is not available, it returns a special value indicating that the response has not arrived yet.
QSClassSetup (Accuracy, Resolution, Timeliness, MaxLatency, Priority, Loc, TimeTag, Operation, Security, ...): creates a QueryClass, configuring any or all of the parameters in the list. It returns the descriptor of the query class, or Error.
QSClassUpdate (QueryClass, Accuracy, Resolution, Timeliness, MaxLatency, Priority, Loc, TimeTag, Operation, Security, ...): updates any or all of the parameters of a previously defined QueryClass.
QSStopQuery (QueryID): used by a query initiator to stop a query and release the QueryID for use in future queries. It returns OK or Error.
Table 4.1: Query Service primitives
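The interface of Table 4.1 can be modelled as a small class. The sketch below is a hedged, in-memory illustration only: the method names follow the table, but the types, defaults, and bookkeeping are our assumptions and stand in for a real networked implementation.

```python
# Toy model of the Query Service interface of Table 4.1 (hypothetical).
import itertools

PENDING = object()  # special value: "response has not arrived yet"

class QueryService:
    def __init__(self):
        self._ids = itertools.count(1)
        self._classes = {}
        self._responses = {}

    def QSClassSetup(self, **params):
        """Create a QueryClass; returns its descriptor."""
        cid = next(self._ids)
        self._classes[cid] = dict(params)
        return cid

    def QSClassUpdate(self, query_class, **params):
        """Update any or all parameters of an existing QueryClass."""
        self._classes[query_class].update(params)

    def QSRequestWrite(self, target, parameter, query_class,
                       response_type="one-time", reliability="unreliable"):
        """Initiate a query; returns a QueryID descriptor."""
        qid = next(self._ids)
        self._responses[qid] = PENDING
        return qid

    def QSResponseRead(self, query_id):
        """Return the response value, or PENDING if not yet available."""
        return self._responses.get(query_id, PENDING)

    def QSStopQuery(self, query_id):
        """Stop the query and release the QueryID for future reuse."""
        self._responses.pop(query_id, None)
        return "OK"
```

In a real SNSP, QSRequestWrite would dispatch the query to the components addressed by Target; here it only allocates the descriptor.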
Figure 4.2: Query Service interactions

The QS primitives use the following arguments.
QueryID. Multiple queries, coming from the same or different controllers, can occur concurrently. QS uses a QueryID number to relate a query request with the corresponding responses. An arbitrary integer chosen by the QS (for example, the timestamp indicating the time when the query request is sent) can be used as the QueryID. The QueryID assigned to a query is always released as soon as the corresponding query terminates.
Response Type. In a query, the controller specifies the frequency of the responses it expects from the sensors. Three types of response patterns are especially relevant:
• a one-time response;
• periodic responses with interval period p;
• notification of events, whenever an event specified by an event condition occurs.
Reliability. A query is reliable if the query initiator is guaranteed to receive at least one response that has not been corrupted. In all other cases, the query is said to be unreliable. The default case is the unreliable query; a reliable query has to be requested explicitly. Support for reliable queries is provided, within the Query Service, by means of specific reliability-assuring mechanisms.
Query Class. A given query can be subject to a wide range of constraints, such as accuracy and timeliness. One option is to repeat all these constraints with every QueryRequest. This would make the query messages traveling through the network quite heavy, which does not fit well with the energy-efficiency and light-weight requirements typically imposed on sensor networks. The Query Class allows for a one-time definition of the context of a query by defining and constraining the response scope. The following parameters can be set:
• Accuracy and Resolution of the sensor measures;
• Timeliness, MaxLatency and Priority, which define response time constraints on the query;
• Loc and TimeTag, which indicate whether the query response should include time and location tags;
• Operation, such as max, min, or average, which indicates the type of operation to be performed on multiple measures from the same source;
• Security, which indicates whether the data must be secure;
• other constraints.
All the query instances belonging to a certain class must follow the parameters of that class.
QS operation. Figure 4.3 plots a sequence of primitive function calls associated with a query. The QS execution follows the client-server model. First, the query class parameters are initialized using QSClassSetup. Then, the controller calls the QSRequestWrite function to initiate individual queries. QS returns the query descriptor (QueryID) to the controller, which is blocked
waiting for an immediate answer on whether the query can be initiated. If the answer is negative, an error message is returned. If the query is successfully initiated, QS begins the procedure of getting the parameter requested by the application. Observe that the service does not specify how the data is obtained. For instance, instead of getting a new sensor measure, the QS may reuse local information previously gathered, if the queried parameter is still available from a previous query and satisfies the timeliness requirement. In addition, the QS may perform functions such as aggregation to process the data coming from multiple sensors. To access the values of the queried parameter, the controller calls the QSResponseRead primitive, specifying the QueryID parameter. A query with a one-time response or an event notification terminates either when the response arrives or when a timeout of duration max time, set by the query initiator, expires. The timeout prevents a query from staying active for an unnecessarily long time, especially when it is not known a priori how many responses will be received. It also allows the corresponding QueryID to be released. A periodic-response query can be terminated at any time by the application simply by calling the QSStopQuery primitive.

Figure 4.3: Query Service execution. Controller/QS interactions are defined by the AI, while the QS/Sensor interface is implementation dependent and might, for example, follow the IEEE 1451.2 standard.

Additional primitives used by Virtual Sensors. Virtual Sensors necessitate the introduction of two further primitives, for reading the parameter being queried and for writing the corresponding value, respectively, acting as counterparts of the corresponding functions in the controllers. As a special case, controllers acting as virtual sensors use them to interact with other controllers that have queried their parameters. These primitives substitute the implementation-dependent QS/Sensor interface used for simple sensors.
QSRequestRead (parameter): provides a virtual sensor with the name of the parameter being requested.
QSResponseWrite (parameter, value): provides the value for a requested parameter.
Table 4.2: Primitives used for the interaction with Virtual Sensors
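The query lifecycle just described (initiate, poll for responses until one arrives or the initiator's timeout expires, then release the QueryID) can be sketched as a control loop. This is a hypothetical illustration: the three callables stand in for the QS primitives, and the stand-in "sensor" below answers on the second poll.

```python
# Sketch of a query initiator's control flow (cf. Figure 4.3); the qs_*
# callables are hypothetical stand-ins for QSRequestWrite, QSResponseRead
# and QSStopQuery.
import time

def run_query(qs_request, qs_read, qs_stop, max_time=1.0, poll=0.01):
    qid = qs_request()
    if qid is None:                      # query could not be initiated
        return None
    deadline = time.monotonic() + max_time
    value = None
    while time.monotonic() < deadline:
        value = qs_read(qid)
        if value is not None:            # a response has arrived
            break
        time.sleep(poll)
    qs_stop(qid)                         # releases the QueryID in all cases
    return value

# Toy stand-ins: the "sensor" answers on the second poll.
answers = iter([None, 23.5])
value = run_query(qs_request=lambda: 42,
                  qs_read=lambda qid: next(answers, 23.5),
                  qs_stop=lambda qid: "OK")
print(value)  # 23.5
```

Note that the QueryID is released whether the loop ends with a response or with a timeout, matching the text above.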
4.3.3 Command Service
The Command Service (CS) allows a controller to set the state of a group of components. A command is a sequence of actions initiated by a controller (the command initiator), which demands that a group of components (the command targets) take an action. The Command Service operates completely symmetrically to the Query Service defined above; its primitives are hence omitted for the sake of brevity. One major difference should be pointed out: while queries are self-confirming (that is, a set of values is returned), commands are not. Often, there is a need to know whether a command has reached its targets. Hence, we differentiate between commands of the type confirmed (where actuators return an acknowledgment to the controller after the action is taken) and unconfirmed (where they do not send any acknowledgment).
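The confirmed/unconfirmed distinction can be sketched as follows. This is a minimal illustration under our own assumptions (the CS primitives themselves are not defined in the text): a confirmed command collects one acknowledgment per actuator after the action is taken, while an unconfirmed one returns nothing.

```python
# Hypothetical sketch of confirmed vs. unconfirmed commands.

def send_command(targets, action, confirmed=False):
    """Deliver an action to each target actuator; collect acks only
    for confirmed commands."""
    acks = []
    for actuator in targets:
        actuator(action)              # the actuator takes the action
        if confirmed:
            acks.append("ack")        # acknowledgment back to the initiator
    return acks if confirmed else None

log = []
heaters = [lambda a: log.append(("h1", a)), lambda a: log.append(("h2", a))]
print(send_command(heaters, "on", confirmed=True))  # ['ack', 'ack']
```

An unconfirmed call (`confirmed=False`) still delivers the action but gives the initiator no feedback, which is the default trade-off for cheaper network traffic.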
4.4 Auxiliary Services

4.4.1 Concept Repository Service
The Concept Repository Service (CRS) maintains a repository containing the lists of the capabilities of the network and the concepts that are supported. The CRS plays a key role in the network operation because it allows distributed components to refer to common notions of concepts such as names, attributes and regions. Moreover, it maintains agreement on these concepts in the presence of changes occurring dynamically during the network operation (e.g. new nodes join the network, or existing nodes move across region boundaries). In line with the philosophy of this work, we do not prescribe how the CRS is implemented. For instance, the CRS may be centralized or distributed over the network. In either case, the correct operation of the system requires that the repository be updated in a timely manner when parameters change. For example, a node that moves from the kitchen to the living-room must update the region in which it is located and therefore must check which region includes its new coordinates. While the Location Service (Section 4.4.3) gives a node its new spatial coordinates, the CRS provides the node with the association of these coordinates with the "living-room" region. In addition to supporting the operation of a single network, the CRS supports the interoperation of multiple networks (discussed in [52]), since it provides a complete and unambiguous definition of the capabilities of each
network. The CRS holds the definitions of the following concepts:
1. Attributes, used to define names. Examples of attributes, common especially in environment monitoring applications, are temperature, light, and sound. Attributes can be added to the repository either by the application or automatically ("plug-and-play") by the CRS, when the platform is augmented with sensors (or actuators) that read (or write) a new attribute.
2. Regions, used to define the scope of a name. The name of a zone is added to the repository together with the zone boundaries, expressed in terms of spatial coordinates. During the network operation, a component that knows its spatial location can also obtain, through the CRS, the names of all the zones that include its location.
3. Organizations, used to define the scope of a name. Organizations indicate who owns or operates certain groups of nodes (e.g. the police, the fire department). The capability of distinguishing nodes by organization is necessary when there are nodes performing the same function (e.g. sensor nodes of the same type) in the same region, but deployed and used to achieve a different task, or belonging to a different owner.
4. Selectors, logical operators and quantifiers, used to define names and the targets of queries and commands. The following types are the most commonly used:
• selectors: > n, < n, even, odd, =;
• logical operators: OR, AND, NOT;
• quantifiers: all, at least k, any.
The CRS primitives allow the application to Add or Delete an attribute (or a region or an organization), to Get the list of all the defined attributes (regions, organizations), and to Check whether a given attribute (region, organization) is currently present in the repository.
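The Add/Delete/Get/Check primitives can be sketched as a small repository keyed by concept kind. This is an illustrative in-memory model under our own assumptions; the text deliberately leaves the CRS implementation (centralized or distributed) open.

```python
# Minimal sketch of the CRS primitives (hypothetical data model).

class ConceptRepository:
    def __init__(self):
        self._concepts = {"attribute": set(), "region": set(),
                          "organization": set()}

    def add(self, kind, name):
        """Add a concept, e.g. plug-and-play on discovering a new sensor."""
        self._concepts[kind].add(name)

    def delete(self, kind, name):
        self._concepts[kind].discard(name)

    def get(self, kind):
        """List all defined concepts of a given kind."""
        return sorted(self._concepts[kind])

    def check(self, kind, name):
        """Is the concept currently present in the repository?"""
        return name in self._concepts[kind]

crs = ConceptRepository()
crs.add("attribute", "temperature")
crs.add("region", "kitchen")
print(crs.check("attribute", "temperature"))  # True
```

A distributed implementation would have to keep replicas of this state consistent as nodes join, leave, or move, which is exactly the agreement problem the CRS is responsible for.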
4.4.2 Time Synchronization Service
The Time Synchronization Service (TSS) allows two or more system components to share a common notion of time and agree on the ordering of the events that occur during the operation of the system. Typical application scenarios that require time synchronization are "heat the room at 6 pm" or "send me the temperature within 5 seconds". The TSS is used to measure time and check the relative ordering of the events in the system. If the events to be compared belong to the same component (e.g. "retransmit the message if an acknowledgment is not received within 10 seconds"), only local resources such as the clock and timers are used. If they belong to different components, the TSS uses a distributed synchronization algorithm to ensure that their clocks are aligned in frequency and time value. A component can be in a synchronized or non-synchronized state. If it is in the non-synchronized state, time is measured by the local clock and is called individual time. If it is in the synchronized state, the time value is agreed with one or more other components, which share a common reference. Synchronizing multiple components may require a time interval, called the synchronization time, and can be achieved up to a certain specified accuracy. The synchronization scope of a component is defined by the set of components with which it is synchronized. The TSS primitives allow a component to set up the resolution and accuracy of synchronization (TSSSetup), to activate or deactivate synchronization with the components specified in a given scope (TSSActivateSynchronization), to get the time (TSSGetTime), and to set a timer to expire after a given number of time units (TSSSetTimer).
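The TSS does not prescribe a particular distributed synchronization algorithm. As one possible illustration (not the SNSP's method), a simple two-way exchange in the style of Cristian's algorithm estimates a local clock's offset from a reference clock, assuming a roughly symmetric link:

```python
# Illustrative clock-offset estimation via a two-way message exchange.
# Assumption: the one-way network delay is the same in both directions.

def estimate_offset(t_send, t_server, t_recv):
    """Offset of the reference clock w.r.t. the local clock: the server's
    reading is compared with the midpoint of the locally measured
    round trip."""
    round_trip = t_recv - t_send
    return t_server - (t_send + round_trip / 2.0)

# Local clock is 5 units behind the reference; one-way delay is 2 units:
# request sent at local time 100, server stamps 107, reply arrives at 104.
offset = estimate_offset(t_send=100, t_server=107, t_recv=104)
print(offset)  # 5.0
```

The achievable accuracy is bounded by the asymmetry of the link delays, which is one reason the TSS exposes accuracy as an explicit setup parameter.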
4.4.3 Location Service (LS)
The Location Service (LS) collects and provides information on the spatial position of the components of the network. The operation of sensor networks commonly uses location as a key parameter at several levels of abstraction. At the application level, location is used, for example, to define the scope of names in queries (e.g. "send me the temperature measures from the kitchen"). Depending on the use, location information can be expressed as a point in space or by a region where the node is located. A point location, or simply location, is defined by a reference system and a tuple of values identifying the position of the point within the reference system. The LS supports the definition of location in a Cartesian reference system, where a location is expressed as a triple (x, y, z), with x, y and z representing the distances from the origin along the x, y and z axes, respectively. Regions can be easily expressed within this framework. A zone can have the form of a block, a sphere, a cylinder, or a more complex shape. A block is represented in terms of the coordinates of four vertices, while a sphere is represented by its center and radius. A neighborhood is a region defined by proximity to a reference point. Proximity is expressed either in terms of Euclidean distance (a spherical region) or by the number of routing hops. The LS primitives allow a component to set up the resolution, accuracy and reference system (LSSetup), to get the location of a component (LSGetLocation), and to get the list of the regions including a specified location (LSGetRegions). It is often useful for a controller to know what type of nodes, and how many, are located in a given region. It may also be useful to know whether these nodes are static (that is, have not moved for a long time) or mobile. This information can be readily obtained using the primitives defined in the Query Service, by querying for the parameters "location" and "mobility".
To find the number of temperature sensors in the kitchen region, it suffices to launch a query for parameter "location" with name "(temperature, kitchen)" and to determine the cardinality of the returned list.
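The counting example above can be sketched directly. The node list and field names below are hypothetical; the filter plays the role of the query for parameter "location" with name "(temperature, kitchen)", and a helper shows the Euclidean-distance neighborhood test mentioned in the text.

```python
# Illustrative sketch: counting temperature sensors in a region, plus a
# spherical-neighborhood membership test (hypothetical data model).
import math

def in_sphere(center, radius, point):
    """Neighborhood test: Euclidean proximity to a reference point."""
    return math.dist(center, point) <= radius

nodes = [
    {"attributes": {"temperature"}, "region": "kitchen", "loc": (1, 2, 0)},
    {"attributes": {"temperature"}, "region": "kitchen", "loc": (2, 1, 0)},
    {"attributes": {"humidity"},    "region": "kitchen", "loc": (0, 0, 0)},
]

# "query for parameter location with name (temperature, kitchen)"
locations = [n["loc"] for n in nodes
             if "temperature" in n["attributes"] and n["region"] == "kitchen"]
print(len(locations))  # 2
```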
4.5 A Bridge to the AWSN Implementation
The SNSP and the AI, as described, are purely functional entities, totally disconnected from the eventual implementation. The only nod to performance metrics is that queries and commands can specify constraints on timeliness, accuracy, etc. However, one of the differentiating features of AWSNs is that implementation issues, such as the trade-off between latency and energy-efficiency, play a crucial role. Conserving independence from the implementation, while remaining sensitive and transparent to implementation costs, is an essential part of this proposal. To accomplish these goals, a second platform is defined. The Sensor Network Implementation Platform (SNIP) is a network of interconnected physical nodes that implement the logical functions of the Application and the SNSP described earlier. Properly choosing the architecture of the SNIP and the mapping of the functional specification of the system onto it is a critical step in sensor network design. The frequency of the processor, the amount of memory, the bandwidth of the communication link, and other similar parameters of the SNIP ultimately determine the quality and the cost of the services that the network offers. Figure 4.4 visualizes the concept of mapping the Application and the SNSP functions onto the SNIP nodes Ni. The shaded boxes and the dotted arrows associate groups of logical components with physical nodes. An instantiated node binds a set of logical Application or SNSP functions to a physical node. Physical parameters such as cost, energy efficiency, and latency can only be defined and validated after mapping. The operation of the Application or the SNSP may depend upon the value of a physical parameter of the SNIP (for instance, the amount of energy
left in a node, the quality of the channel, etc.). To make this information available in a transparent fashion, an additional service, called the Resource Management Service (RMS), is provided, which allows an application or other services of the SNSP to get or set the state of the physical elements of the SNIP. Typically, the RMS can access physical parameters such as the energy level of a node, the power management state, the quality of the radio channel, the energy cost of certain operations and their execution time, and the transmission power of the radio. The RMS can be accessed using the same primitives offered by the QS and the CS. The main difference is that the parameters being set or queried do not belong to the environment but to the physical nodes of the network.

Figure 4.4: Mapping Application and SNSP onto a SNIP
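The key point, that the RMS reuses the query/command style of access but targets node-level rather than environment-level parameters, can be sketched as follows. The parameter names and state dictionary are hypothetical examples.

```python
# Hypothetical sketch of RMS access: QS-style reads and CS-style writes
# against the state of a physical SNIP node instead of the environment.
node_state = {"energy_level": 0.82, "tx_power_dBm": 0,
              "power_mode": "active"}

def rms_query(parameter):
    """Get the state of a physical element (QS-style access)."""
    return node_state[parameter]

def rms_command(parameter, value):
    """Set the state of a physical element (CS-style access)."""
    node_state[parameter] = value

rms_command("power_mode", "sleep")      # e.g. duty-cycling a node
print(rms_query("energy_level"))  # 0.82
```

A service such as data aggregation could, for instance, call `rms_query("energy_level")` before deciding whether to run on this node or delegate to a neighbor.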
4.6 Summary
A service-oriented platform for the implementation of AWSN applications was presented. We believe that by defining an application interface at the
service layer, it should be possible to develop and deploy sensor network applications while remaining agnostic about the actual network implementation, yet still meeting the application requirements in terms of timeliness, lifetime, etc. Some of the concepts introduced here have broader applicability than AWSNs. For instance, a concept such as the CRS would also be useful in the operation of ad-hoc multimedia networks. In fact, the development of a service-based application interface for this emerging class of applications, in a style similar to the one presented here, seems like a logical next step: the potential success of ambient intelligence hinges on the simple and flexible deployment of both media and sensor networks and on the interoperability between the two.
4.7 Design of AWSN Using the PBD Paradigm
A specification can be loosely defined as a description of a system. It may be informal and consist of just a few graphical sketches or sentences in natural language, or it may be formally described using a language with formal semantics. The design of a system consists of a sequence of steps in which new details are gradually introduced until the physical implementation is reached. Each step identifies a specification at a different level of abstraction. At the most abstract levels, a specification defines the functionality of the system as a whole; at more refined levels, it usually consists of a network of distinct components interacting and cooperating to realize a required collective behavior. The behavior of a system is defined by the sequences of actions that it takes in response to input stimuli from its environment. Actions [54] [4] are the elementary units of behavior that make the system evolve from one state to another. Actions can be of several types: computation (perform arithmetic operations), control (select the next action to be taken), and read/write (read or modify variables). Embedded system specifications, especially in networking and multimedia applications, are heterogeneous and
include a mix of different types of actions. Two actions whose order of execution is not specified are said to be concurrent, as they may occur in any order or at the same time. They are said to be in conflict if only one of them is executed, as the result of a control decision. They are causally ordered if one is always executed before the other. Distributed system specifications usually consist of sequential objects that run concurrently and communicate through other objects, called channels, that store the messages they exchange. A specification is usually associated with a set of constraints, that is, expressions on quantities that are not yet defined in the present specification but will be defined after further implementation steps. In other words, constraints define properties that must be satisfied by valid implementations of a given specification. For example, bounds on the communication delay are usually given at the beginning of the design process, but quantities such as time and power are defined only after an implementation architecture is selected (e.g. choosing the processor that runs the software tasks defines the reaction time of an embedded system implementation). In a distributed system specification, the network supporting the communication among components is one of the major factors of complexity. The initial network specification is defined by the behavior of the interacting components and is associated with cost and performance requirements. As the design proceeds, the network specification is successively refined: the structure is described in terms of the topology (nodes and links), and the behavior in terms of the protocols that rule the exchange of messages among components. Protocol specifications include several types of actions, which are either in sequence ("send an acknowledgment after receiving the packet"), in conflict ("forward the packet to the upper layer if correct, otherwise discard it"), or concurrent ("write to multiple channels").
Constraints on protocol implementations usually concern QoS parameters such as error rate, throughput, maximum delay, duration of a service, etc. [55] [56]. The types of actions in a protocol specification and the types of constraints drive the choice of the
physical resources that define the physical implementation. Protocols can be fully implemented in HW, in SW, or as a mix of the two. HW is the preferred choice for protocols with tight power and real-time requirements. SW implementations are chosen when flexibility matters most and timing and power issues are less critical. In practice, protocols are usually implemented partially as software tasks running on a processor and partially as Application Specific Integrated Circuits (ASICs) and Field Programmable Gate Arrays (FPGAs). Communication networks are commonly designed by decomposing the problem into a stack of layers following the OSI reference model. Each protocol layer, together with the underlying channels, defines a platform that provides communication services to the upper layers and, at the top of the stack, to the application-level components. There is an ongoing debate about the use of the OSI protocol stack since, as we shall see in the next section, its layering structure is standard and application-independent, and therefore rather inefficient in applications where optimization is of paramount importance, such as wireless sensor networks. The approach presented here is to leverage the idea of layering the protocol stack in the design of wireless sensor networks, as in the OSI model, but not to fix the layers themselves a priori. By allowing freedom of choice in the layer structure, we aim to optimize along criteria such as power consumption, reliability and response time, since the applications of wireless sensor networks depend critically on these quantities. As a result, the protocol stack architecture includes only the layers that are relevant for the target application, and the protocol functions, rather than being assigned a priori to specific layers, are introduced in the position where the services they offer are required.
To implement an AWSN, the functionality is mapped onto a Network Platform (NP) that consists of a set of processing and storage elements (nodes) and physical media (channels) carrying all the messages exchanged by the nodes [2]. Nodes and channels are the architecture resources in
the NP library and can be characterized, to a first approximation, by parameters such as processing power and storage size for nodes, and bandwidth, delay and error rate for channels. Usually, choosing a Network Platform requires selecting from its library an appropriate set of resources and a network topology; for each parameter, a broad range of options is usually available. Therefore, the design space to be explored is very wide. Moreover, choosing an NP is especially challenging for distributed wireless networks, because of the inherently lossy nature of the physical channels that connect the nodes. In these cases, when reliable communication is required, as in our target system, it is necessary to introduce additional resources (that is, reliable acknowledgement protocols) to overcome the effects of noise and interference. However, introducing reliable and complex protocols requires adding more processing power and more memory to the nodes. As a result, the protocol constraints often dominate the implementation cost function and the design effort. Therefore, in selecting an NP it is essential to balance the cost of all the different components and to trade off between the use of complex protocols and the adoption of more reliable physical or logical wired/wireless channels. The section is organized as follows: we begin by reviewing the OSI standard and pointing out its strengths and weaknesses. Then we propose a definition of Network Platform that includes quality of service parameters and give some examples.
4.7.1 Communication Networks: OSI Reference Model
A common practice to simplify the design of general-purpose communication networks consists in dividing the problem into different layers of abstraction, according to the Open Systems Interconnection (OSI) Reference Model (RM), which was standardized by the International Organization for Standardization (ISO) in 1983 and has since been widely used as a reference to classify and structure communication protocols [17].
The OSI-RM groups the protocol functions into the following seven layers [18]:
1. Application Layer: defines the type of communication between application processes and links them to the lower layers of the stack. Examples: HTTP, SMTP, FTP.
2. Presentation Layer: defines the format of the data being transferred, performing transformations such as encryption, compression, and conversion between coding formats.
3. Session Layer: coordinates the establishment and maintenance of connections among application processes. It inserts synchronization points into the data stream to allow resynchronization in case of errors.
4. Transport Layer: provides reliable communication among end users through functions such as segmentation and reassembly, and flow and error control. Examples: TCP, UDP.
5. Network Layer: establishes connections across sub-networks and intermediate nodes by choosing a route from source to destination and forwarding packets along this route.
6. Datalink Layer: provides reliable communication over a link by packaging bits into frames or packets and ensuring their correct transmission. It includes a Logical Link sub-layer, with error detection or correction functions such as CRC, FEC and ARQ, and a Medium Access Control sub-layer, which defines a policy for arbitrating access to a shared medium. Examples: HDLC, SDLC, X.25.
7. Physical Layer: takes care of the transmission of a bit stream over a physical channel, defining parameters such as electrical levels and including functions such as modulation and encoding.
Figure 4.5: OSI-RM layering structure [3]

The OSI-RM is based on the concepts of protocol and service, as illustrated in Figure 4.5. A service is a set of operations (primitives) that a layer offers to the upper layers. A protocol layer defines an implementation of the service primitives and, similar to an API, its details are hidden from the upper layers. Consider Layer N in Figure 4.5: this layer provides services that Layer N+1 uses to communicate with its peer in a remote host. Layer N+1 invokes the service primitives available at the interface with Layer N (interfaces are called Service Access Points, or SAPs) and communicates over the virtual channel made available by Layer N and the lower layers. Every message is passed through all the lower protocol layers in the local host and then transmitted to the remote host over the physical medium. At each layer, a frame including the payload data and a header with control information is created and passed to the lower layer. When the frame arrives at the receiving end, each layer removes and interprets the header introduced by the corresponding layer at the sending host and passes the rest of the frame to the upper layer.
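The encapsulation just described can be sketched as pushing one header per layer on the way down the local stack and popping it on the way up at the receiver. Layer names, header contents and the delimiter are purely illustrative; a real stack carries binary headers with structured fields.

```python
# Illustrative sketch of OSI-style encapsulation/decapsulation.
LAYERS = ["transport", "network", "datalink"]

def send(payload):
    frame = payload
    for layer in LAYERS:                        # down the local stack
        frame = f"{layer}-hdr|" + frame         # each layer adds its header
    return frame                                # onto the physical medium

def receive(frame):
    for layer in reversed(LAYERS):              # up the remote stack
        hdr, _, frame = frame.partition("|")    # strip the peer's header
        assert hdr == f"{layer}-hdr"            # peer layers must correspond
    return frame

wire = send("temperature=23")
print(receive(wire))  # temperature=23
```

Note that each layer on the receiving side only inspects the header added by its peer, which is precisely the semantic isolation discussed below.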
The use of the OSI-RM has several advantages. First of all, identifying the layers from the beginning of the design process makes it possible to decompose the problem into a number of independent, easier-to-handle sub-problems delimited by well-defined interfaces. This approach yields high modularity, because the interactions between adjacent layers are limited to the service primitives in the SAPs. Therefore, a layer can easily be replaced with another one, provided it has consistent interfaces and provides comparable services, without the need to modify other layers of the stack. Applying such a modular approach also allows the workload to be partitioned among different teams, whose work can proceed in parallel once they agree on the common interfaces. However, there are also several drawbacks to the use of the OSI-RM [15]. First of all, the layer boundaries preclude any form of cross-layer optimization and therefore limit the theoretically achievable performance. The main problem is that distinct protocol layers often manipulate the same data, and it is inefficient to repeat the load and store of these data from memory for each layer. It is more efficient to read the data from memory once and perform as many manipulations as possible while holding the data in cache or registers [19]. Another issue is the semantic isolation of the protocol layers: each layer knows only the meaning of the data defined at that level. This means that a certain layer is not aware of the meaning of the contents exchanged by the upper protocol layers or arriving from the lower ones. For example, consider the well-known issue of the poor performance of the TCP protocol [20] over a wireless medium. The TCP protocol provides reliable communication by using a window scheme that regulates the transmission rate based on a network congestion control algorithm.
The protocol is based on the assumption that the underlying medium is reliable and that losses due to noise are negligible compared to those due to buffer congestion. This assumption does not hold for a wireless medium. Hence,
when TCP runs over a wireless channel, packet losses due to a low Signal to Noise (plus Interference) Ratio are misinterpreted by the protocol, which reduces the transmission rate to avoid network congestion. As a result, the throughput of TCP over a wireless link is rather low. In conclusion, the OSI-RM is commonly used as a reference, especially because the concepts of layering and service help manage the complexity of the network design problem. However, applications that require protocol functions not included in the OSI-RM (e.g. positioning in wireless sensor networks) and the increasing demand for high-performance protocols have further emphasized the limitations of the OSI approach and the need for highly optimized protocol stacks [3].
4.7.2 Network Platforms
A Network Platform (NP) is a library of resources that can be selected and composed to form Network Platform Instances (NPIs) supporting the communication among a group of interacting objects. An NP library includes resources of different types. One distinction is between logical resources (e.g. protocols, virtual channels, ...), which are defined only in terms of their functionality, and physical resources (e.g. physical links). An orthogonal distinction is between resources that perform or implement computation functions and communication resources that provide logical or physical connectivity. The structure of an NPI is defined by abstracting computation resources as nodes and communication resources as links. Ports interface nodes with links or with the environment of the NPI. Hence, the structure of a node or a link is defined by its input and output ports, and the structure of an NPI is defined by a set of nodes and the set of links connecting them. The behavior of an NPI is formalized using the Tagged Signal Model introduced by Lee and Sangiovanni-Vincentelli. Before diving into the precise definition of Network and Network API Platforms, we summarize the model
to make the report self-consistent.

Figure 4.6: Process Composition in TSM
Preliminaries: the Lee-Sangiovanni-Vincentelli Tagged Signal Model

The Tagged Signal Model [21] is a denotational [22] framework proposed by Lee and Sangiovanni-Vincentelli to define properties of different models of computation. The denotation of a system component modeled as a process is given as a relation on the signals that define its interaction with concurrent processes. Concurrent processes have sets of behaviors, each defined as a tuple of input and output signals. In the TSM the event is the key modeling element. Given a set of values V and a set of tags T, an event e is defined as a member of T × V. Tags are used to model precedence relationships; values represent the data associated with an event. A signal s is a set of events, i.e. a subset of T × V. Functional signals are (partial) functions from T to V. The set of all signals is S = ℘(T × V) (the powerset of T × V), while S^n is the set of all tuples of n signals. A process P is a subset of S^n, where n is the number of input and output signals. A signal tuple s such that s ∈ P is a behavior of P. If n > 1, a process can be seen as a relation between the n signals. The composition of multiple processes is defined as a process Q whose
behaviors are in the intersection of the behaviors of the component processes. Hence, Q = ∩_i P′_i, where each P′_i is derived from P_i by augmenting the signal tuples so that they are defined over the same signal set S^n (P′_i ⊆ S^n). Connections are modeled as processes in which two or more signals must be identical. Figure 4.6 shows the interconnection of two processes P1 and P2 through connections C27 (signals s2 and s7) and C45 (s4 and s5). The projection π_I(s) of a signal tuple s = (s1, s2, ..., sn) ∈ S^n onto S^m is π_I(s) = (s_i1, s_i2, ..., s_im), where I = (i1, i2, ..., im) is an ordered set of indices in the range [1, n] that defines which signals appear in the projection and in which order. Projection simplifies the composition of processes by removing redundant signals. The definition of processes as relations among signals is general and does not distinguish between input and output signals. To introduce this distinction, [21] defines an input to a process P as a constraint A ⊆ S^n imposed on P, so that A ∩ P is the set of acceptable behaviors. B denotes the set of possible inputs of a process. Consider a process P ⊆ S^n with m input signals. Then, for each A ∈ B, A = {s : π_I(s) = s′} for some s′ ∈ S^m. To define a process in terms of a mapping of input signals into output signals, consider an index set I for the m input signals and an index set O for the n output signals. A process P is functional with respect to (I, O) if for every s ∈ P and s′ ∈ P with π_I(s) = π_I(s′), it follows that π_O(s) = π_O(s′). For such a process there is a single-valued mapping F : S^m → S^n such that π_O(s) = F(π_I(s)) for all s ∈ P. Tags are used to define an ordering among events. An ordering relation ≤ on a set T is a relation on T such that, for all t1, t2 and t3:
• t1 ≤ t1 (reflexive),
• t1 ≤ t2 and t2 ≤ t3 imply t1 ≤ t3 (transitive),
• t1 ≤ t2 and t2 ≤ t1 imply t1 = t2 (antisymmetric).
A set with an ordering relation ≤ is called a partially ordered set, or poset
[23]. The ordering of tags induces an ordering of the corresponding events. In untimed models events are partially ordered, while in timed models, such as the Discrete Event model, the set of tags T is totally ordered.
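These notions can be sketched with a small encoding of our own, purely illustrative: a signal is a set of (tag, value) events, a process is a set of signal tuples, composition is the intersection of behaviors, and projection is index selection:

```python
# Illustrative encoding of TSM notions (not from the report): signals as
# frozensets of (tag, value) events, processes as sets of signal tuples.

s1 = frozenset({(1, "a"), (2, "b")})    # a signal: subset of T x V
s2 = frozenset({(1, "a")})

P1 = {(s1, s1), (s1, s2)}               # process over 2 signals: subset of S^2
P2 = {(s1, s1), (s2, s2)}

Q = P1 & P2                             # composition: intersection of behaviors
assert Q == {(s1, s1)}

def project(behavior, indices):
    """Projection pi_I(s): keep only the signals at the positions in I."""
    return tuple(behavior[i] for i in indices)

assert project((s1, s2), (1,)) == (s2,)
```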
Network Platform Defined

NPI components are modeled as processes. The event is the communication primitive. Events model the instances of the send and receive actions of the processes. An event is associated with a message, which has a type and a value, and with tags that specify attributes of the corresponding action instance (e.g. when it occurs). A signal is a totally ordered sequence of events observed at one port¹. The set of behaviors of an NPI is defined by the intersection of the behaviors of its individual component processes. A Network Platform Instance is defined as a tuple NPI = (L, N, P, S), where
• L = {L1, L2, ..., L_Nl} is a set of directed links,
• N = {N1, N2, ..., N_Nn} is a set of nodes,
• P = {P1, P2, ..., P_Np} is a set of ports. A port Pi is a triple (Ni, Li, d), where Ni ∈ N is a node, Li ∈ L ∪ Env is a link or the NPI environment, and d = in if the port is an input port, d = out if it is an output port. The ports that interface the NPI with the environment define the sets P^in = {(Ni, Env, in)} ⊆ P and P^out = {(Ni, Env, out)} ⊆ P,
• S = ∩_{i=1}^{Nn+Nl} R_i is the set of behaviors, where R_i indicates the set of behaviors of a resource, which can be a link in L or a node in N.
¹ We consider only discrete signals, because analog signals, under the conditions defined by the Nyquist Sampling Theorem [24], can be described in terms of equivalent discrete signals.
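The NPI structure (L, N, P, S) above can be encoded, for illustration, as follows; the behavior set S is omitted and all concrete names are ours:

```python
# Illustrative encoding of the NPI structure (L, N, P, S) defined above;
# node/link names are ours, and the behavior set S is omitted for brevity.
from dataclasses import dataclass

ENV = "Env"                                   # the NPI environment

@dataclass(frozen=True)
class Port:
    node: str                                 # Ni in N
    link: str                                 # Li in L, or ENV
    d: str                                    # "in" or "out"

nodes = {"N1", "N2"}
links = {"L1"}                                # directed link N1 -> N2
ports = {
    Port("N1", ENV, "in"),                    # input from the environment
    Port("N1", "L1", "out"),
    Port("N2", "L1", "in"),
    Port("N2", ENV, "out"),                   # output to the environment
}

# Ports interfacing the NPI with its environment (P^in and P^out):
P_in = {p for p in ports if p.link == ENV and p.d == "in"}
P_out = {p for p in ports if p.link == ENV and p.d == "out"}
assert len(P_in) == 1 and len(P_out) == 1
```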
A Service-Based Universal Application Interface for Ad-hoc Wireless Sensor Networks
4.7.3 Network Platform API
An NPI is uniquely identified by its components and by the intersection of their individual behaviors. A more convenient description of an NPI is one where the details of its internal components are abstracted away and only the behaviors observed at the P^in and P^out ports are considered. Hence, one abstraction of the NPI behavior is obtained simply by projecting the behaviors s ∈ S over P^in and P^out: S′ = π_(P^in ∪ P^out)(S). Another abstraction can be defined by taking into consideration the semantics of an NPI. An NPI is an entity whose purpose is to support the transfer of messages among its users connected through the ports P^in ∪ P^out. Therefore, events observed at the input and output ports of an NPI are highly correlated: an event observed at an output port frequently carries the same message as another event observed at an input port, and multiple events at the same port may carry messages related to the same transaction and therefore to the same group of users. This observation makes it possible to identify, within the NPI behaviors, subsets of correlated events that correspond to sequences of message exchanges between the components communicating over the NPI (the NPI users). The basic services provided by an NPI are called Communication Services (CS). A CS consists of a sequence of message exchanges through the NPI from its input to its output ports. A CS can be accessed by NPI users through the invocation of send and receive primitives, whose instances are modeled as events. An NPI Application Programming Interface (API) consists of the set of methods that are invoked by the NPI users to access the CS. For the definition of an NPI API it is essential to specify not only the service primitives but also the type of CS they provide access to (e.g. reliable send, out-of-order delivery, etc.).

A Communication Service (CS) is a tuple (P̄^in, P̄^out, M, E, h, g, ∆), where P̄^in ⊆ P^in is a non-empty set of input ports and P̄^out ⊆ P^out is a non-empty set of output ports.

According to (5.2), the functional specification is guaranteed for controller parameters c such that J(Td, M) = (1 + M)Td + ∆ − D ≤ 0.

AFR control.
A PI controller is designed to achieve zero asymptotic error for a fuel mass step disturbance δq. The control parameters are c = (KP, KI, β), with (KP, KI) the PI tuning parameters and β an anti-windup parameter. The abstract plant model is

AFR(t) = LP(t) ∗ ma(t − τ0) / (qobj(t − τ0) + δb(t)),
where LP(t) is a unitary-gain low-pass filter and τ0 models the induction-to-exhaust delay. Following (5.2), the functional specification is achieved for control parameters in the set depicted in Fig. 5.4.

Fuel injection actuation. A standard piecewise linear approximation of the injector characteristic is assumed: q = 0 if tinj < t0, and q = α tinj if tinj ≥ t0. The gain α depends on the pressure across the injector; hence, α is represented as the sum of a nominal value αN and an uncertain component αU (which depends on the pressure disturbance δp). The control algorithm is the inversion of the nominal piecewise model of the injector. The maximum value of αU is chosen as the control parameter c = δα = max_δp αU.
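In discrete time, a PI controller with the parameters c = (KP, KI, β) named above can be sketched as follows; the controller equations, the use of β as a back-calculation anti-windup gain, and all numeric values are our assumptions, since the report does not give them:

```python
# Sketch of a discrete-time PI controller with back-calculation anti-windup.
# The report only names the parameters c = (KP, KI, beta); the equations,
# saturation limits and numbers below are our illustrative assumptions.

def make_pi(KP, KI, beta, Tc, u_min, u_max):
    state = {"i": 0.0}                         # integrator state
    def step(error):
        u = KP * error + state["i"]            # unsaturated command
        u_sat = min(max(u, u_min), u_max)      # actuator saturation
        # integrate the error plus the anti-windup correction term
        state["i"] += Tc * (KI * error + beta * (u_sat - u))
        return u_sat
    return step

pi = make_pi(KP=4.0, KI=30.0, beta=1.0, Tc=0.01, u_min=0.0, u_max=1.0)
outputs = [pi(0.5) for _ in range(100)]        # constant error saturates the command
assert outputs[-1] == 1.0                      # anti-windup keeps u at the limit
```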
Figure 5.4: AFR control parameter admissible set J(c) ≤ 0, for the settling time specification J1 (gray) and the overshoot specification J2 (cyan).
According to (5.2), the functional specification is guaranteed for the control parameter δα satisfying

δα (tM_inj − t0)^2 / (2 tM_inj) ≤ qM_err.
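The admissible-set constraints J ≤ 0 derived above can be checked numerically, as in this sketch; the numeric values of D, ∆, q_err and the rest are illustrative, not taken from the report:

```python
# Sketch: evaluating the functional constraints J <= 0 above with
# hypothetical numbers (D, Delta, q_err, etc. are illustrative).

def J_sensing(Td, M, Delta, D):
    """Digital input sensing constraint: (1 + M) Td + Delta - D <= 0."""
    return (1 + M) * Td + Delta - D

def J_injection(delta_alpha, t_inj, t0, q_err):
    """Fuel injection constraint: delta_alpha (t_inj - t0)^2 / (2 t_inj) <= q_err."""
    return delta_alpha * (t_inj - t0) ** 2 / (2 * t_inj) - q_err

assert J_sensing(Td=0.002, M=0.5, Delta=0.001, D=0.01) <= 0
assert J_injection(delta_alpha=0.1, t_inj=4.0, t0=0.5, q_err=0.2) <= 0
```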
5.4 From Control Strategies to Implementation Abstract Model
Finally, referring again to Fig. 5.3, we describe the third step of the methodology, in which control strategies are refined into an implementation abstract model.
Integrated control-implementation design for automotive embedded controllers
Figure 5.5: Abstract representation of the effects of implementation non-idealities.
5.4.1 Implementation platforms.
The essential issue in representing implementation platforms abstractly is to determine their effect on the performance of the controlled system. The accuracy of measurements and actuations, and the representation of the fact that computation and communication take time and may be affected by errors, are important in this respect. The main effects of a particular implementation on the behavior of the controlled system must be carefully classified and characterized. They can be represented in terms of perturbations on the controller input/output channels, as illustrated in Fig. 5.5. Disturbances nu, nw, nv and blocks ∆u, ∆w, ∆v represent, respectively, value- and time-domain perturbations due to the implementation, acting on the control inputs u, the feedback outputs w and the reference signals v. Depending on the selected platform, these perturbations can be represented by different models and characterized by abstract parameters p. A set of implementation platforms with the corresponding exported parameters is defined by:
• a number S of different platform structures;
• a set of parameters XPs for each platform structure s ∈ {1, . . . , S};
• a set of platform constraints

Jv(s, p) ≤ 0, for v = 1 . . . V.   (5.3)
For a given platform structure s ∈ {1, . . . , S}, elements p ∈ XPs are referred to as the platform parameters.
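The value-domain perturbations of Fig. 5.5 can be sketched, for instance, as quantization (a ∆ block) plus an additive disturbance n on a sampled channel; the resolution and noise values below are illustrative assumptions:

```python
# Sketch: abstracting implementation effects on a sampled channel as a
# value-domain block Delta (quantization) plus an additive disturbance n,
# in the spirit of Fig. 5.5. Resolution and noise values are illustrative.

def perturb(signal, resolution, noise):
    """Return the implemented signal: quantize each sample, then add n."""
    return [round(x / resolution) * resolution + n
            for x, n in zip(signal, noise)]

w = [10.04, 25.27, 33.30]                  # ideal feedback samples
n_w = [0.0, 0.1, -0.2]                     # per-sample disturbance n_w
w_impl = perturb(w, resolution=0.5, noise=n_w)
assert all(abs(a - b) < 1e-9 for a, b in zip(w_impl, [10.0, 25.6, 33.3]))
```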
5.4.2 Implementation abstract model refinement.
In the product space of control parameters and platform parameters, feasible mappings are given by the set

U = {(r, c, s, p) | r ∈ {1, . . . , R}, c ∈ XCr, s ∈ {1, . . . , S}, p ∈ XPs, such that Ji(r, c, s, p) ≤ 0, for i = 1 . . . N + V}   (5.4)
where the Ji include both conservative expressions for (5.2), accounting for the effects of the implementation platform modeled by (s, p), and the platform constraints (5.3). To select the best mapping, i.e. the best implementation platform, we introduce an objective function H(r, c, s, p) and solve

arg min_{(r,c,s,p) ∈ U} H(r, c, s, p).   (5.5)
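The mapping-selection step (5.4)-(5.5) can be sketched as a plain enumeration over candidate parameters; the feasibility constraint and the objective H below are placeholder stand-ins, not the report's actual Ji or H:

```python
# Sketch of (5.4)-(5.5): enumerate control/platform parameters, keep the
# feasible mappings (all Ji <= 0), and pick the one minimizing H.
# The constraint and objective below are illustrative placeholders.
import itertools

def feasible(KP, Tc):
    """Stand-in for the constraints Ji(r, c, s, p) <= 0 of (5.4)."""
    return KP * Tc <= 0.3                     # hypothetical constraint

def H(KP, Tc):
    """Stand-in objective of (5.5): prefer slower sampling (cheaper platform)."""
    return -Tc

candidates = itertools.product([1.0, 2.0, 4.0], [0.05, 0.1, 0.2])
U = [(KP, Tc) for KP, Tc in candidates if feasible(KP, Tc)]
best = min(U, key=lambda cp: H(*cp))
assert best == (1.0, 0.2)
```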
For layers of abstraction distant from the actual implementation, H does not represent the real cost, since an accurate estimate of it would be difficult to obtain. In these cases, a better solution, as demonstrated in [42], is to adopt a function that measures the "size" of the design space within which platforms at lower levels of abstraction can be selected. If the platform parameters chosen by the optimization process can be easily achieved by platforms at lower levels of abstraction, we minimize the risk of expensive design cycles
that span several platforms, and we offer a better platform choice as we approach the implementation level. The objective function that reflects these principles is called the flexibility function. In some sense, the flexibility function is an auxiliary function that serves the purpose of a more efficient search of the design space. While the macro aspects of this function are easy to establish and can be generalized, the actual choice of flexibility functions is the result of the designer's experience and can be refined during re-design to reflect more accurately the difficulty of achieving the platform parameters. For example, the flexibility function of a discrete-time platform can be an increasing function of the sampling time: the larger the sampling time, the easier it is to find a platform that can support it.
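As an illustration of the last point, one can sketch a flexibility function for a discrete-time platform as a function increasing in the sampling time; the specific shape and the Tc_min value below are our assumptions:

```python
# Sketch of a flexibility function for a discrete-time platform, increasing
# in the sampling time Tc; the shape and Tc_min value are our assumptions.

def flexibility(Tc, Tc_min=0.001):
    """Zero at the tightest sampling time, growing as Tc relaxes."""
    return Tc / Tc_min - 1.0

assert flexibility(0.001) == 0.0
assert flexibility(0.01) > flexibility(0.002)
```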
5.4.3 ECU implementation abstract model design.
In the design of the motorcycle ECU, for each function the main effects of the implementation on the behavior of the controlled system have been modeled as in Fig. 5.5, and the implementation parameters (s, p) have been identified, along with the constraints (5.3). Then, the feasible parameter set U in (5.4) has been computed. For the examples considered above we have:
Digital input sensing. The platform parameter p = Wd denotes the worst-case computation time of the DT debouncing algorithm (∆u in Fig. 5.5). The set U in (5.4) is defined by J(Td, M, Wd) = (1 + M)Td + ∆ + Wd − D ≤ 0.
AFR control. When the PI controller is implemented in the digital system, the main platform parameters that affect the performance are the sampling time Tc and the worst-case execution time WAFR. Hence, p = (Tc, WAFR). Fig. 5.6 reports a section of the set U contained in the composed parameter space (KP, KI, β) × Tc for which the specification is guaranteed.
Fuel injection actuation. The threshold t0 depends on the battery voltage Vbat − δbat. The platform parameters p = (δt0, λ) represent the maximal variation of the injector threshold t0 due to the battery estimation error δbat (δt0), and the accuracy of the injector command (λ). In the composed control and platform parameter space, the performance specification is guaranteed for

(δα (tM_inj − t0)^2 + (α − δα) δt0 (2 t0 + δt0) + 2 λ (tM_inj − t0 − δt0)) / (2 tM_inj) ≤ qM_err.
Additional platform parameters are Wfi, the worst-case execution time, and Tfi, the minimum cycle time of the fuel injection actuation. The execution time parameters can be refined to describe platform structures with different hardware/software partitioning, by writing

WAFR = WAFRhw + WAFRsw
Wfi = Wfihw + Wfisw
p = (WAFRhw, WAFRsw, δt0, λ, Wfihw, Wfisw)

Assuming that the implementation has a single CPU, the constraint that guarantees schedulability with total utilization Ucpu can be expressed as

Jv(1)(Wi, Ucpu, Ti) = Σ_{i=1}^{m} (Wi / Ti) − Ucpu ≤ 0
where Wi is the worst-case software execution time of component i, and Ti is the execution period of that component. For the algorithms considered here this constraint is written as WAFRsw/Tc + Wfisw/Tfi − Ucpu ≤ 0. It is important to note that a different hardware/software partitioning is captured by different values of the platform parameters and different values of the objective function (5.5). A pure software implementation is represented by a zero value for any hardware contribution (Wihw) to the execution time. An interesting implementation platform under investigation
has a fully hardware fuel injection actuation and a fully software AFR control. This implementation is expressed by the following values of the model parameters: WAFRhw = Wfisw = 0, which shows that the scheduling problem is drastically simplified. The definition of a flexibility function for the motorcycle ECU, which will allow us to select a particular implementation platform, is currently under investigation.
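The single-CPU utilization constraint above can be checked as in the following sketch; all numeric values are illustrative assumptions:

```python
# Sketch of the utilization constraint above: the sum of Wi/Ti over the
# software components must not exceed Ucpu. All numbers are illustrative.

def schedulable(tasks, Ucpu):
    """tasks: (worst-case execution time Wi, period Ti) pairs."""
    return sum(W / T for W, T in tasks) <= Ucpu

afr = (0.0008, 0.004)      # hypothetical (W_AFRsw, Tc)
fi = (0.0005, 0.002)       # hypothetical (W_fisw, T_fi)

assert schedulable([afr, fi], Ucpu=0.69)            # both tasks in software
assert schedulable([afr, (0.0, fi[1])], Ucpu=0.25)  # fuel injection in hardware
```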
5.5 Concluding Remarks and Future Work
The application of an integrated control-implementation design methodology to the development of an engine control system for motorcycles has been illustrated. The adopted methodology allowed us to:
1. evaluate, in terms of performance degradation, the main effects of the control algorithm implementation at the first stage of control solution conception;
2. formally express the constraints on the implementation platform that guarantee fulfillment of the system specification.
The work documented in this chapter has been achieved through intensive collaboration between control engineers and hardware/software designers.
Figure 5.6: Values of the platform parameter Tc that guarantee the settling time specification J1 (top) and the overshoot specification J2 (bottom), for admissible control parameters (KP, KI).
Chapter 6

Platform–based design for electric motor drives

6.1 Introduction
Electric motors are widely adopted in all those applications where electrical power has to be converted into mechanical power and vice versa. In recent years we have witnessed a complex evolution in the employment of digitally controlled electrical drives within industrial and consumer applications. From a technological point of view, almost all the classical solutions employing direct current motors have been replaced by those making use of digitally controlled alternating current motors. The introduction of various vector-controlled drives has allowed the dynamic performance of a.c. drives to match, and sometimes even surpass, that of d.c. drives. By using vector control, in fact, it is possible to control separately the flux- and torque-producing components of the supply current. Digitally controlled drives have spread rapidly through the industrial world. Starting from the classical application fields where velocity control is needed (tooling machines, robotics, rolling mills, etc.), the employment of electrical drives is also extending to applications where velocity control
is optional, the reasons being energy-saving characteristics (pumps, fans, compressors, etc.) or better performance requirements (such as the movement of hanging loads). Nowadays we also witness a fast growth in the use of electrical drives in the consumer and domestic fields: consider cooling and conditioning applications, both for domestic and commercial use, as well as washing machines, elevators and portable tools. Furthermore, in domestic applications, the present and future needs for new control strategies, smarter diagnosis features and the possibility of remote interaction among different appliances (domotics) call for on-board intelligent (programmable) systems, such as microprocessor systems. Their presence can also be effectively exploited for the control of the electrical drive itself, to satisfy important system requirements such as high energy efficiency and performance, cost effectiveness and comfort. Among the emerging applications of digitally controlled electrical drives we can mention the automotive area, where many drives are employed on board for widespread tasks. The economic relevance of this phenomenon is considerable, because these markets are wider than the industrial ones. Analysing such different and widespread applications, essentially three types of motors emerge: induction motors, permanent magnet synchronous motors (both brushless a.c. and d.c.) and reluctance motors. These solutions make it possible to meet the requirements for automation and integration of the drive system when controlled by means of dedicated digital devices (microprocessors, digital signal processors, etc.) and specific control algorithms, as a function of both the adopted motor and the specific application.
The development of digital control techniques for electrical drives and power converters provides the possibility of building flexible control systems that can easily be adapted to different applications with as few modifications as possible, or by varying only the software. At the moment,
there are no definitive solutions that achieve this potential. Thus, even if there is room for innovative proposals, a considerable design effort is necessary to characterise the most convenient choices and to attain a valid and cost-effective product within reasonably short times. In this sense, it would be of great interest to leverage hardware and software design methodologies and tools for electrical drives that, starting from the specifications of the actual application, allow design and performance verification, both in a simulation environment and on the actual system, with the lowest impact on development and prototyping times. In fact, due to competition and customer demand, the market for electrical drive systems forces suppliers of control units to shorten development times while at the same time improving system performance. This requires fast implementation of modern and sophisticated control algorithms in the drive system. As a short time-to-market is essential, developing new control methods directly on the embedded control system would be too inefficient and time-consuming. The solution to this quandary, in our opinion, is to apply control design and simulation software with a seamless transition to a real-time test environment. Moreover, the two opposite demands for high performance and cost reduction of drive systems call for a complete functional analysis of hardware/software partitioning and integration in the earlier steps of the design cycle as well. In this sense, Platform-Based Design represents an essential component for design organisation and partitioning. The recursive nature of the methodology makes it possible to deal with the design task at different levels of refinement, for both the hardware and the software design process.
As far as the definition of the hardware architecture is concerned, the essence of the problem is the choice of the system requirements for the development of a flexible platform for the control of electrical drives, with particular attention to the hardware/software integration and partitioning
of the functions to be implemented. The results obtained, in terms of design tools and methodologies, could then be adopted for each specific case, aiming at reducing the global design and development time and thus allowing cost reduction and reliability improvements. As far as the definition of the software architecture is concerned, the reduction of the design cycle and of the prototyping times could be obtained by the definition of an electrical drive simulation package, including a library of optimised dynamical models of the overall drive system. The simulation platform should also be designed to support control system design and to include a target-system code generation feature. The goal of our research activity is to apply concepts of the PBD methodology to the design of electrical drive controllers, with particular attention to the speed and effectiveness of the function-to-implementation transition. The general methodology is applied to the development of sensor-less high-performance control algorithms for interior permanent magnet synchronous motors. Sensor-less control in fact represents the future of electrical drives; it involves complex control techniques that require a high computational burden and hardware/software interaction, and it represents a particularly important and challenging research topic.
6.2 Platform-Based-Design approach for electrical drives
The application of the general Platform-Based-Design approach to the control of electrical drives is quite similar to the general case of process control systems. The line of thought of this section has been specialised to electrical drive control, but it remains valid for the general case of process control as well. In this report, a general drive controller is decomposed into different subsystems and functionalities. A tentative classification has been proposed, based on the interaction of each particular subsystem with the system hardware (i.e. power converter, sensors, etc.), which is a function of both the adopted processing architecture (microcontroller, ASIC, etc.) and the actual application requirements. The actual implementation of each subsystem is not predictable a priori. This means that the possibility of implementing each subsystem in hardware (by means of a dedicated digital system) or in software (by means of a microcontroller) is an essential issue for controller implementation, which has to be evaluated by means of hardware/software co-design concepts and tools. To relate the general design approach to an actual case study, the commonly adopted and accepted decomposition of a drive controller into hardware and software functionalities is considered. Each functionality can be classified in the following manner:
• pure processing subsystems (PPS), which depend only on the adopted processing architecture, but do not depend on the peripheral and interface hardware (PIH) or on the application-specific hardware (ASH);
• peripheral and interface hardware dependent subsystems, which depend on the particular choice of the hardware implementing the interface and communication tasks within the subsystems of the drive; examples of these subsystems include the pulse-width-modulation generation module, the analog-to-digital conversion module, etc.;
• application-specific hardware dependent subsystems, which depend on the actual configuration of the links between the controller and the system hardware of the drive (i.e. inverter, sensors, etc.).
The definition of the data exchange characteristics (protocol) between PPS and PIH and between PIH and ASH is a critical task to be performed in the design stage; it heavily influences the flexibility of the design itself and the possibility of re-using design subsystems for different applications and/or hardware platforms.
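The classification above can be illustrated by tagging example subsystems with their category; the subsystem names below are our own examples, not taken from the report:

```python
# Illustrative tagging of drive-controller subsystems with the classification
# above (PPS / PIH-dependent / ASH-dependent); the names are our examples.

SUBSYSTEMS = {
    "current_controller": "PPS",   # pure processing: algorithm only
    "pwm_generation":     "PIH",   # depends on peripheral/interface hardware
    "adc_acquisition":    "PIH",
    "inverter_interface": "ASH",   # depends on application-specific hardware
}

# Only pure processing subsystems are portable across hardware platforms
# without re-design, through the protocols agreed at the PPS/PIH boundary:
portable = [name for name, kind in SUBSYSTEMS.items() if kind == "PPS"]
assert portable == ["current_controller"]
```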
As will be discussed later, it is important to identify in the early design stages those characteristics of the architectural layer that should be visible to the application layer and those that should be hidden. A characteristic is not only a parameter to be passed through to the underlying layer; it could also mean that a certain dynamical model of a system functionality should be taken into account in the application design stage itself.
6.3 An application of the PBD design approach: sensor-less control of electrical drives

6.3.1 Sensor-less control of electrical drives
The control of electrical drives involves the presence of different transducers, such as shaft encoders or tacho-generators, that are used to provide velocity and/or rotor position information. The presence of these sensors increases the cost and encumbrance of the drive and reduces the robustness of the overall system. For this reason, in the last decades many research efforts have aimed at eliminating mechanical sensors, i.e. at developing sensor-less control methods. Sensor-less drives, in fact, are the future of electrical drives and involve complex control and estimation techniques that generally require high-performance hardware/software platforms. The classical estimation techniques are based on state observers, both deterministic (Luenberger and non-linear observers) and stochastic (Kalman filter). Observation algorithms make use of the analytical model of the motor and allow the estimation of both the rotor velocity and the flux from the motor terminal quantities (currents and voltages). They guarantee a proper state estimation in all the motor operating conditions (torque/speed) except for the low-speed region (or at start-up), where they are unreliable or ineffective. Their implementation is relatively simple by means of microprocessor systems, and the standard control hardware is sufficient. The influence of parameter deviations is one of the most critical aspects. Among the observer-based methods, for a certain period the extended Kalman filter appeared to be the ultimate solution for velocity sensor-less drives, as reported in numerous papers. Unfortunately, this stochastic observer has some inherent disadvantages, such as the influence of the noise characteristics, the computational burden and the absence of design and tuning criteria. This has led to a renewed interest in deterministic approaches, where the structure of the standard Luenberger observer for linear systems is enhanced to permit the simultaneous estimation of the rotor flux and velocity. Moreover, in order to take into account model and parameter uncertainties, adaptive observers have been developed, which make it possible to estimate the velocity and/or other unknown parameters by additional equations and to derive analytical conditions for system stability.
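The standard Luenberger observer mentioned above can be sketched in discrete time for a generic linear system; the two-state plant, output matrix and observer gain below are illustrative placeholders, not a motor model:

```python
# Sketch of a discrete-time Luenberger observer:
#   x_hat+ = A x_hat + L (y - C x_hat)   for an autonomous plant x+ = A x.
# The 2-state plant, C and the gain below are illustrative, not a motor model.

A = [[1.0, 0.1], [0.0, 0.9]]      # hypothetical plant dynamics
C = [1.0, 0.0]                    # measured output y = C x
L_gain = [0.5, 0.4]               # places both error eigenvalues of A - L C at 0.7

def observer_step(x_hat, y):
    """One observer update driven by the output innovation y - C x_hat."""
    innov = y - (C[0] * x_hat[0] + C[1] * x_hat[1])
    return [A[0][0] * x_hat[0] + A[0][1] * x_hat[1] + L_gain[0] * innov,
            A[1][0] * x_hat[0] + A[1][1] * x_hat[1] + L_gain[1] * innov]

x, x_hat = [1.0, 0.5], [0.0, 0.0]          # true state vs. initial estimate
for _ in range(200):
    y = C[0] * x[0] + C[1] * x[1]          # measurement from the true state
    x_hat = observer_step(x_hat, y)
    x = [A[0][0] * x[0] + A[0][1] * x[1],  # plant advances autonomously
         A[1][0] * x[0] + A[1][1] * x[1]]

assert abs(x[0] - x_hat[0]) + abs(x[1] - x_hat[1]) < 1e-3   # estimate converges
```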
In the last few years, magnetic saliency based methods have been proposed that allow standstill and low speed operation. These approaches rely on the motor response to the injection of relatively high frequency test signals, which investigate the motor saliency due to saturation or geometric construction. They need high precision in the measurement and increase the hardware/software complexity with respect to a standard control scheme. Moreover, motors having a low saliency content do not give an appreciable response, whereas enhancing the saliency requires a proper machine design.
Recently, sensor-less drives based on Artificial Intelligence (A.I.) approaches have been proposed. They do not require the knowledge of a mathematical model of the plant and are able to manage system non-linearities. These approaches promise to be robust to parameter deviations and measurement noise, but their computational requirements, development times and the need for expert knowledge in system set-up currently restrict their application to a limited range.
6.3 An application of PBD design approach: sensor-less control of electrical drives
6.3.2
Sensor-less control of IPM motors
Some recent proposals in emerging application fields, such as electric vehicles, have outlined an increasing interest in the so-called "interior" permanent magnet (IPM) synchronous motor, whose basic characteristic is the construction with magnets placed inside the rotor body [69]. IPM motors share with their "non-salient" counterparts (the "cylindrical" or "surface" PM motors, built with magnets placed on the rotor surface) some interesting properties, such as the absence of rotor losses (which makes for a "cool" rotor and increased efficiency) and the high torque-to-weight ratio. Additional features, due to the particular design of IPMs, are the robustness of the rotor structure (mechanically suited to high speed operation) and the presence of magnetic saliency. In fact, from a magnetic point of view, IPM motors exhibit a saliency ratio different from unity, i.e. the direct d-axis inductance is substantially different from the quadrature q-axis inductance, where the d-axis is usually selected to be aligned with the PM flux axis. This characteristic is particularly suited to extending the speed operating region by proper "field weakening" control techniques and it also allows the application of some interesting approaches to position and speed detection (self-sensing or "sensor-less" control). In fact, control of IPM motors requires the knowledge of the rotor position and speed, which is usually obtained by means of mechanical transducers such as encoders or resolvers. Nevertheless, most applications call for compact drive systems, robust to mechanical stress, with limited encumbrance and cost. According to these constraints, the use of mechanical sensors should possibly be avoided, i.e.
the drive becomes sensor-less, meaning that speed and rotor position are evaluated through suitable estimation techniques. Obviously, the performance of the drive system will depend on
the effectiveness of the estimation algorithm. Among the proposals in this field, two kinds of approaches seem to be preferable, depending on the speed operating range required by the application: state observers and signal injection techniques. State observers are preferred for medium/high speed operation. They require the use of a relatively accurate motor model, the measurement of the motor currents (system output) and the knowledge of the feeding voltages (system input). The basic idea is to use the difference between the measured and the estimated state variables to calculate the rotor position and speed, directly or through related variables. Several approaches are reported in the literature, most of them applied to non-salient motors. Both deterministic (Luenberger [70], [71], non-linear/sliding mode [72]) and stochastic (extended Kalman filter [73]) observers have been proposed, which exhibit different peculiarities in terms of algorithm complexity and sensitivity to parameter variations and noise. Adaptive approaches based on the model reference adaptive system (MRAS) theory have also been suggested [74]. Basically, the main limitations of the observer-based solutions concern standstill/low speed operation and safe starting. Signal injection techniques are the latest frontier of research in sensor-less control of IPM motors. These methods take advantage of the constructive magnetic saliency of the machine to detect the rotor position through the injection and back-processing of proper test signals [75], [76], [77]. They offer a solution both for standstill and low speed operation. As drawbacks, they require high precision in the measurements, a certain degree of rejection of noise and disturbances, and high accuracy in signal processing, especially when fully-digital solutions are considered and/or low saliency motors are employed.
The application of the PBD methodology allows the control and estimation tasks to be decomposed into a set of simpler sub-tasks which are independently
designed and modelled. Within each task, only the representative parameters and characteristics are modelled and exported to the upper and lower level sub-systems. The choice of the parameters affects the reliability of the results and the possibility of simulating those phenomena which are important for the design and tuning of the estimation algorithm. This has also led to highlighting some properties and limitations of the basic estimation algorithm that would have been difficult to discover without the introduction of platform-specific implementation constraints, as suggested by the PBD approach. Corrective actions have also been proposed and tested in simulation. The adopted approach tends to reduce the possible gap between the simulation and the experimental description of the control system, thus raising the reliability of the simulation results for the verification of the control and estimation performance on the actual experimental system. The adoption of a rapid-prototyping environment for the description of the system aims at raising the speed and effectiveness of the theoretical-to-actual implementation design flow.
6.3.3
Sensor-less drive scheme
The sensor-less drive scheme is presented in Fig. 6.1. It refers to a fully-digital implementation employing a fixed-point digital signal processor. The field-oriented controller is based on a current-controlled voltage source inverter structure. The current control loops are arranged in the two-phase synchronously rotating reference frame d-q aligned with the rotor magnet flux. Proportional-integral regulators are used for both the current and speed control loops. An adjacent-vector space vector pulse width modulator (AV-SVPWM) is used to apply the voltage commands. The adaptive observer estimates the rotor magnet flux angle θ̂r (needed for the field orientation) and the rotor speed feedback ω̂r (used for the speed control loop). High frequency voltage signals are superimposed on the d-q
Figure 6.1: Sensor-less drive scheme.

voltage commands during low speed and standstill operation. The resulting high frequency current components are processed by a heterodyning technique that produces information on the rotor magnet position. This signal is used to tune the adaptive observer in such critical operating conditions.
6.3.4
Signal injection technique
The signal injection principle is similar to the one proposed by Corley and Lorenz in [75] and relies on the fact that the direct and quadrature axes of the motor are decoupled from each other. Let us assume that a proper high frequency voltage is injected, such that the resistive voltage drops are negligible with respect to the reactive ones. Under this hypothesis, the injected high frequency flux d-q components in the estimated (θ̂r) rotor position reference
frame can be described by:

\[
\begin{bmatrix} \psi_{qsi}^{\hat\theta_r} \\[2pt] \psi_{dsi}^{\hat\theta_r} \end{bmatrix}
= \frac{V_{si}}{\omega_i}\,\sin(\omega_i t)\begin{bmatrix} 1 \\ 0 \end{bmatrix}
\tag{6.1}
\]

where ωi and Vsi are the carrier pulsation and amplitude of the injected voltage, and Vsi/ωi is the magnitude of the corresponding flux. As a result, the high frequency current components in the estimated rotor position reference frame can be expressed as follows:

\[
i_{dsi}^{\hat\theta_r} = I_{i1}\sin(\omega_i t)\,\sin[2(\theta_r - \hat\theta_r)], \qquad
i_{qsi}^{\hat\theta_r} = I_{i0}\sin(\omega_i t) - I_{i1}\sin(\omega_i t)\,\cos[2(\theta_r - \hat\theta_r)]
\tag{6.2}
\]

where

\[
I_{i0} = \frac{V_{si}}{\omega_i}\,\frac{L}{L^2 - \Delta L^2}, \qquad
I_{i1} = \frac{V_{si}}{\omega_i}\,\frac{\Delta L}{L^2 - \Delta L^2}
\tag{6.3}
\]

being

\[
L = \frac{L_q + L_d}{2}, \qquad \Delta L = \frac{L_q - L_d}{2}
\tag{6.4}
\]

the average value and the amplitude of the spatial modulation of the inductance respectively, and Ld and Lq the d- and q-axis inductances. From (6.2) it can be seen that carrier frequency signals are produced on both the d- and q-axis components, which are non-linearly amplitude-modulated by twice the difference between the estimated and the actual (θr) position. Differently from previous solutions, in this proposal both the d- and q-axis components of the high frequency current in the estimated reference frame are processed in order to extract the rotor position estimation error signal. In fact, the amplitude modulation of both components in (6.2):

\[
\varepsilon_d = I_{i1}\sin[2(\theta_r - \hat\theta_r)], \qquad
\varepsilon_q = I_{i0} - I_{i1}\cos[2(\theta_r - \hat\theta_r)]
\tag{6.5}
\]
is evaluated by means of a proper demodulation engine. Thereafter, assuming the constant offset Ii0 is identified, the rotor position error εθ can be expressed as follows:

\[
\varepsilon_\theta = \frac{1}{2}\tan^{-1}\!\left(\frac{\varepsilon_d}{I_{i0} - \varepsilon_q}\right)
\tag{6.6}
\]

where εd and εq denote the error signals extracted from each current component. This approach yields a straightforward relationship between the error signal and the actual rotor position estimation error. Nevertheless, due to the periodicity at twice the mechanical angle, an ambiguity of 180 degrees affects the determination of the rotor position, which requires proper management.
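The error extraction of (6.5)-(6.6) can be sketched numerically. The snippet below is a minimal illustration: the amplitudes Ii0, Ii1 and the true error value are hypothetical, not taken from the drive of this report.

```python
import math

# Demodulated amplitudes (6.5) -> position error (6.6). The amplitudes
# I_i0, I_i1 and the true error value are hypothetical, for illustration.
def position_error(eps_d, eps_q, I_i0):
    """Rotor position estimation error, eq. (6.6)."""
    return 0.5 * math.atan2(eps_d, I_i0 - eps_q)

I_i0, I_i1 = 0.8, 0.3     # illustrative current amplitudes from (6.3)
delta = 0.1               # true error theta_r - theta_hat_r [rad]

eps_d = I_i1 * math.sin(2 * delta)          # eq. (6.5), d-axis amplitude
eps_q = I_i0 - I_i1 * math.cos(2 * delta)   # eq. (6.5), q-axis amplitude

err = position_error(eps_d, eps_q, I_i0)    # recovers delta
```

Since εd/(Ii0 − εq) = tan[2(θr − θ̂r)], the half-arctangent recovers the error exactly, up to the 180-degree ambiguity noted above.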
6.3.5
Kalman filtering
Once the error signal (6.6) has been evaluated, it is used as input to a Kalman filter which provides the rotor position and speed estimates. The employed Kalman filter is based on a stochastic discrete-time model of the mechanical system arranged as follows [80]:

\[
x_{k+1} = A\, x_k + b\,\hat a_k + w_k, \qquad y_k = c\, x_k + \eta_k
\tag{6.7}
\]

where \(x_k = [\theta_r, \omega_r, a_r]_k^T\) stands for the state variable vector (respectively the rotor position, speed and acceleration), \(y_k = \theta_r\) is the "measured" variable, and \(w_k\) and \(\eta_k\) are the modelling and measurement error vectors respectively, with the associated covariance matrices:

\[
Q = \operatorname{var}(w_k), \qquad R = \operatorname{var}(\eta_k)
\]

and

\[
A = \begin{bmatrix} 1 & T & \tfrac{T^2}{2} \\ 0 & 1 & T \\ 0 & 0 & 1 \end{bmatrix}, \qquad
b^T = \begin{bmatrix} \tfrac{T^2}{2} & T & 0 \end{bmatrix}, \qquad
c = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}
\]

are the system matrices. Such a model depends only on the duration of the sampling interval T, whereas the parameter â, standing for the value of acceleration expected in each sampling interval, can be set, as a first approximation, to zero. Since the third-order dynamic system given by (6.7) is linear and time-invariant, a steady-state linear Kalman filter could be utilized. But, in
order to improve the dynamic behavior during transient operation, an on-line adjustment of matrix Q is introduced, [81]. In particular, the coefficient q22 affecting the speed equation is modified according to transient or steady-state conditions. This improvement yields a fast response in speed estimation during transient operation, whereas it assures a smooth signal when steady-state operation is reached. Due to the on-line adjustment of matrix Q, the calculation of the Kalman gain (K) must be done at every sampling time, and the adopted algorithm is the same as that of the Extended Kalman Filter (EKF):
\[
\begin{aligned}
\tilde x_{k+1} &= A\,\hat x_k + b\,\hat a_k \\
\tilde P_{k+1} &= A\,\hat P_k A^T + Q_k \\
K_{k+1} &= \tilde P_{k+1} c^T \left[ c\,\tilde P_{k+1} c^T + R \right]^{-1} \\
\hat x_{k+1} &= \tilde x_{k+1} + K_{k+1}\left( y_{k+1} - c\,\tilde x_{k+1} \right) \\
\hat P_{k+1} &= \tilde P_{k+1} - K_{k+1}\, c\,\tilde P_{k+1}
\end{aligned}
\tag{6.8}
\]

where P stands for the covariance matrix of the prediction errors. In this particular implementation, the estimation error in the fourth step is given by the error signal (6.6):

\[
y_{k+1} - c\,\tilde x_{k+1} = \varepsilon_{\theta,k}
\tag{6.9}
\]
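A compact numerical sketch of this recursion is given below. The sampling interval and the covariances Q and R are illustrative assumptions (the report does not list them), and the on-line q22 adjustment described above is omitted for brevity.

```python
import numpy as np

# Mechanical-model Kalman filter of eqs. (6.7)-(6.8). T, Q and R are
# illustrative values; the on-line adjustment of q22 is omitted.
T = 1e-3                              # sampling interval [s] (illustrative)
A = np.array([[1.0, T, T**2 / 2],
              [0.0, 1.0, T],
              [0.0, 0.0, 1.0]])
b = np.array([T**2 / 2, T, 0.0])
c = np.array([1.0, 0.0, 0.0])

Q = np.diag([1e-8, 1e-4, 1e-2])       # model noise covariance (illustrative)
R = 1e-6                              # measurement noise covariance (illustrative)

x = np.zeros(3)                       # [theta_r, omega_r, a_r] estimate
P = np.eye(3)
a_exp = 0.0                           # expected acceleration, set to zero per the text

def kf_step(x, P, y):
    """One recursion of eq. (6.8): predict, compute gain, correct."""
    x_pred = A @ x + b * a_exp
    P_pred = A @ P @ A.T + Q
    S = c @ P_pred @ c + R                  # innovation covariance (scalar)
    K = P_pred @ c / S                      # Kalman gain
    x_new = x_pred + K * (y - c @ x_pred)   # (y - c x~) plays the role of eps_theta, eq. (6.9)
    P_new = P_pred - np.outer(K, c @ P_pred)
    return x_new, P_new

# Track a rotor turning at constant speed; the filter should recover it.
omega_true = 50.0                     # [rad/s]
for k in range(4000):
    x, P = kf_step(x, P, omega_true * k * T)
```

With noise-free position samples the speed estimate x[1] converges to the true value, illustrating why the constant-acceleration model tracks ramps without steady-state lag.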
6.3.6
Demodulation strategy: carrier recovery
One of the problems that arises from the depicted sensor-less approach is the carrier recovery at the demodulation (receiver) side. In a complete coherent receiver implementation, carrier recovery is required since the receiver typically does not know the exact phase and frequency of the transmitted carrier. In this case, as the high frequency oscillator is the same for the source and the receiver, no uncertainty is introduced on carrier frequency. But a certain delay can exist between the high frequency carrier at the source side and that modulating the high frequency current components at the receiver side. The drive system can in fact be considered as a transmission path introducing an unknown delay in the high frequency signals which can
hardly be analytically predicted. A wrong phase shift of the demodulation signal with respect to the useful one can in fact reduce the amplitude of the error signals (6.5) from which the rotor position information is extracted. This reduces the signal-to-noise ratio of the demodulation scheme, leading to poor accuracy of the estimates or, in the worst case, to the impossibility of extracting the rotor position at all. A suitable carrier recovery algorithm is therefore needed for a correct demodulation of the high frequency current components. A more realistic model for the q-axis high frequency current component must then take the previous considerations into account and can be expressed in the form:

\[
i_{qsi}^{\hat\theta_r} = I_{i0}\sin(\omega_i t + \phi) - I_{i1}\sin(\omega_i t + \phi)\,\cos[2(\theta_r - \hat\theta_r)]
\tag{6.10}
\]

where φ is an unknown phase shift between the injected high frequency voltage (carrier signal) and the corresponding current. When the position estimation error is kept constant, that is when the estimation procedure is inhibited, the amplitude of the high-frequency component of that current is constant and can be expressed in the form:

\[
i_{qsi}^{\hat\theta_r} = a\,\sin(\omega_i t + \phi)
\tag{6.11}
\]

being a its constant amplitude. The demodulation process must be synchronous with this current component, that is, a suitable strategy for carrier extraction must be found (φ must be known). One can notice that the quadrature current component in (6.2) is always present independently of the rotor position estimation error, as the term Ii0 is always non-zero, which allows the carrier recovery process always to be applied. The basic idea is to implement a simple digital phase locked loop (Fig. 6.2): the measured q-axis current component is first multiplied by an auxiliary numerically phase-controlled (φ̂), fixed-frequency (ωi) oscillator (NPCO),
Figure 6.2: Digital PLL adopted for carrier recovery.

whose frequency is equal to that of the injected voltage:

\[
\varepsilon_\phi = \sin(\omega_i t + \phi)\cdot\cos(\omega_i t + \hat\phi)
= \frac{1}{2}\left[\sin(2\omega_i t + \phi + \hat\phi) + \sin(\phi - \hat\phi)\right]
\tag{6.12}
\]

Then the error signal εφ is passed through a low-pass filter in order to remove the (double-frequency) high-frequency content, thus obtaining the low-frequency error signal:

\[
\varepsilon_{\phi,lf} = \frac{1}{2}\sin(\phi - \hat\phi)
\tag{6.13}
\]

In the actual implementation a 4th-order IIR low-pass filter with a cut-off frequency of 800 Hz has been adopted. Note that εφ,lf is a function of the phase difference between the two input signals and is zero when the received carrier and the internally generated wave are exactly matched in phase (and frequency). When this condition is met, the corresponding phase shift φ̂ equals the unknown one φ and a correct carrier recovery is accomplished:

\[
\varepsilon_{\phi,lf} \to 0 \;\Rightarrow\; \sin(\phi - \hat\phi) \to 0 \;\Rightarrow\; \hat\phi = \phi + k\pi, \quad k = 0, \pm 1, \ldots
\]
The previous condition is realised by phase-controlling the auxiliary NPCO. It can be shown that, by adopting a simple proportional plus integral action in the control of the NPCO to force the low-frequency error to zero, the system response is always stable when the phase error is bounded in the range (−π, +π), becoming monotonic when the restricted range (−π/2, +π/2) is considered.
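The phase-locking loop just described can be sketched as follows. The PI gains and the first-order low-pass filter (a stand-in for the 4th-order IIR filter of the report) are illustrative assumptions; the injection frequency and time step match the values given in the text.

```python
import math

# Digital PLL for carrier recovery, as sketched in Fig. 6.2. The PI gains
# and the first-order low-pass stand-in are illustrative assumptions.
f_i = 1500.0                     # injection frequency [Hz], as in the report
w_i = 2 * math.pi * f_i
Ts = 5e-6                        # simulation step [s], as in the report
phi = 0.7                        # unknown carrier phase shift [rad]

phi_hat = 0.0                    # NPCO phase estimate
integ = 0.0                      # integral term of the PI action
kp, ki = 200.0, 2000.0           # PI gains (illustrative)
lp = 0.0                         # low-pass filter state
alpha = 2 * math.pi * 800 * Ts   # ~800 Hz cut-off

for k in range(200000):          # 1 s of simulated time
    t = k * Ts
    carrier = math.sin(w_i * t + phi)              # received q-axis carrier, eq. (6.11)
    mixed = carrier * math.cos(w_i * t + phi_hat)  # mixing product, eq. (6.12)
    lp += alpha * (mixed - lp)                     # -> ~0.5*sin(phi - phi_hat), eq. (6.13)
    integ += ki * lp * Ts
    phi_hat += (kp * lp + integ) * Ts              # phase-control the NPCO
```

With the initial phase error inside (−π/2, +π/2) the loop converges monotonically and phi_hat settles on the unknown φ, as predicted by the stability argument above.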
Figure 6.3: Flux and current space vectors in an IPM synchronous motor.
6.3.7
Adaptive observer for the IPM motor
The adaptive speed and position observer for medium/high speed operation is presented in [78] and [79]. The electrical equations of the IPM synchronous motor in terms of stator fixed axis components α-β are as follows:

\[
v_{\alpha\beta} = R\,i_{\alpha\beta} + \frac{d\psi_{\alpha\beta}}{dt}
\tag{6.14}
\]

\[
\psi_{\alpha\beta} = L_{\alpha\beta}(\theta_r)\, i_{\alpha\beta} + \psi_{M,\alpha\beta}(\theta_r)
\tag{6.15}
\]

where \(v_{\alpha\beta}\), \(i_{\alpha\beta}\) and \(\psi_{\alpha\beta}\) are the vectors of the voltage, current and stator flux components respectively, R is the resistance of the stator windings, \(L_{\alpha\beta}\) is the matrix of the winding inductances and:

\[
\psi_{M,\alpha\beta}(\theta_r) = \psi_M \begin{bmatrix} \cos\theta_r \\ \sin\theta_r \end{bmatrix}
\tag{6.16}
\]

is the vector of the flux linkage components due to the magnet, whose position is measured by the angle θr and whose amplitude is ψM (see Fig. 6.3). The flux model can be expressed in a more useful form in terms of rotor fixed axis components d-q as follows:
Figure 6.4: Flux observer.
\[
\psi_{dq} = T(\theta_r)\,\psi_{\alpha\beta} = L_{dq}\, i_{dq} + \psi_{M,dq}
\tag{6.17}
\]

where \(T(\theta_r)\) is the α-β to d-q transformation matrix, \(i_{dq} = [i_d, i_q]^T\) is the vector of the d-q current components, and:

\[
L_{dq} = \begin{bmatrix} L_d & 0 \\ 0 & L_q \end{bmatrix}, \qquad
\psi_{M,dq} = \begin{bmatrix} \psi_M \\ 0 \end{bmatrix}
\tag{6.18}
\]

being Ld and Lq the direct and quadrature synchronous inductances respectively. According to the previous relations, the voltage equation (6.14) and the flux model (6.17) can be arranged to build a flux observer as follows (Fig. 6.4):

\[
\frac{d\hat\psi_{\alpha\beta}}{dt} = v_{\alpha\beta} - R\,i_{\alpha\beta} + K_{11}\left(\tilde\psi_{\alpha\beta} - \hat\psi_{\alpha\beta}\right)
\tag{6.19}
\]

\[
\tilde\psi_{\alpha\beta} = T(\tilde\theta_r)^{-1}\,\tilde\psi_{dq}
= T(\tilde\theta_r)^{-1}\left(L_{dq}\,\tilde i_{dq} + \psi_{M,dq}\right)
= T(\tilde\theta_r)^{-1}\left(L_{dq}\, T(\tilde\theta_r)\, i_{\alpha\beta} + \psi_{M,dq}\right)
\tag{6.20}
\]

where \(\hat\psi_{\alpha\beta}\) is the stator flux obtained from the voltage model (compare Equations (6.14) and (6.19)), and \(\tilde\psi_{\alpha\beta}\) and \(\tilde\psi_{dq}\) represent the same flux as provided
by the flux model (respectively in terms of α-β and d-q components), K11 is a 2×2 gain matrix used to feed back into the voltage model the difference between the fluxes estimated by the flux model and by the voltage model, and θ̃r is the rotor magnet position calculated as follows:

\[
\tilde\theta_r = \arctan\!\left(\frac{\hat\psi_{\alpha\beta} \wedge \tilde\psi_{dq}}{\hat\psi_{\alpha\beta} \cdot \tilde\psi_{dq}}\right)
\tag{6.21}
\]

where the symbols "∧" and "·" represent the vector and dot products respectively. According to (6.16), the flux linkage due to the rotor magnet can be represented by the dynamical model:

\[
\frac{d\psi_{M,\alpha\beta}}{dt} = \omega_r\, J\,\psi_{M,\alpha\beta}
\tag{6.22}
\]

where

\[
J = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}
\]

From this assumption, an adaptive observer which estimates the rotor magnet flux and the speed can be arranged as follows (Fig. 6.5):

\[
\frac{d\hat\psi_{M,\alpha\beta}}{dt} = \hat\omega_r\, J\,\hat\psi_{M,\alpha\beta} + K_{21}\, e_\psi + K_{22}\, e_i
\tag{6.23}
\]

\[
\hat\omega_r = k_P\, e_\omega + k_I \int e_\omega\, dt
\tag{6.24}
\]

where \(e_i = (i_{\alpha\beta} - \hat i_{\alpha\beta})\) is the difference between the measured current and its estimate, the latter obtained through an inverse flux model; \(e_\psi = (\tilde\psi_{\alpha\beta} - \hat\psi_{\alpha\beta})\) is the flux estimation error obtained from the flux observer; \(e_\omega = e_{i\alpha}\hat\psi_{M\beta} - e_{i\beta}\hat\psi_{M\alpha}\) is the speed error, derived by a Lyapunov approach; K21, K22 are 2×2 gain matrices used in the rotor magnet flux observer; kP, kI are the proportional/integral gains used in the speed identification equation; and ω̂r, θ̂r are the estimated rotor speed and position, the latter given by:

\[
\hat\theta_r = \arctan\!\left(\frac{\hat\psi_{M\beta}}{\hat\psi_{M\alpha}}\right)
\tag{6.25}
\]
Figure 6.5: Adaptive magnet flux and speed observer.

The availability of the current error ei allows a current feedback to be introduced also in the flux observer, in order to improve the robustness of the overall system. Thereafter, equation (6.19) takes the form:

\[
\frac{d\hat\psi_{\alpha\beta}}{dt} = v_{\alpha\beta} - R\, i_{\alpha\beta} + K_{11}\, e_\psi + K_{12}\, e_i
\tag{6.26}
\]

being K12 the 2×2 gain matrix used to feed back the current error.
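As a small numerical check of the magnet flux model (6.22) and of the position reconstruction (6.25), the sketch below integrates the flux dynamics with a forward Euler rule; the speed, flux amplitude and duration are illustrative values, not parameters from the report.

```python
import math

# Forward-Euler integration of the magnet flux model (6.22); the rotor
# position is then recovered from the flux components as in (6.25).
omega_r = 100.0             # rotor speed [rad/s] (illustrative)
Ts = 5e-6                   # integration step [s], as used in the report
psi_M = 0.2                 # magnet flux amplitude [Wb] (illustrative)

psi_a, psi_b = psi_M, 0.0   # flux components at theta_r = 0, eq. (6.16)

for _ in range(2000):       # 10 ms of simulated time
    # d(psi)/dt = omega_r * J * psi with J = [[0, -1], [1, 0]]
    d_a = -omega_r * psi_b
    d_b = omega_r * psi_a
    psi_a, psi_b = psi_a + Ts * d_a, psi_b + Ts * d_b

theta_hat = math.atan2(psi_b, psi_a)   # eq. (6.25): angle ~ omega_r * t
```

The recovered angle matches ωr·t, confirming that (6.22) simply rotates the magnet flux vector at the rotor speed.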
6.4
Simulation of the continuous time ideal drive system
In this section some simulation results of the continuous-time ideal system are presented, in order to show the achievable performance of the drive. The considered drive is based on signal injection only; the adaptive observer was not considered for these results. The chosen simulation environment is VisSim by Visual Solutions. The tool is similar to Simulink, as it is based on a graphical user interface in which the system is represented by means of interconnected blocks. The
reason for choosing this tool is a feature that allows executable code to be automatically generated from the control scheme for a specific fixed-point digital signal processor dedicated to the control of electrical drives (TMS320LF2407). For this purpose, specific libraries are provided to simplify hardware configuration and interfacing within the simulation tool. Moreover, customised fixed-point primitives are available to perform all the basic calculations. Simulation of the sensor-less drive system has initially been performed in a continuous-time fashion. The simulation of the continuous system is certainly an ideal condition, which allows the designer to develop and test a control or estimation algorithm when no actual implementation platform is considered. The significance of the results is limited to the validation of the theory itself, and further investigations have to be carried out when an actual implementation architecture is considered. Details about this topic will be given in the next sections. The layout of the drive control system under the VisSim simulation environment is shown in Fig. 6.6. Within this section, floating-point numerical representation has been adopted for variables and calculations, and the same step of Tz = 5 µs is used for the simulation of the whole control system, including the dynamic model of the signal injection and estimation engine, shown in Fig. 6.7. The effect of the power converter has been neglected and an ideal linear power amplifier has been assumed.
6.4.1
Signal injection based estimation engine
The signal injection detection scheme is implemented in its simplest form, that is, the one introduced by Corley and Lorenz [75]. In Fig. 6.8 the implementation of this estimation scheme is shown. The high-frequency d-axis current component is first passed through a band-pass filter, thus extracting only
Figure 6.6: The control system under VisSim environment.
Figure 6.7: Signal injection based estimation engine.
Figure 6.8: Signal injection implementation.
the component of the motor phase current which contains information about the rotor position. This component is then demodulated (by multiplication with the carrier) and low-pass filtered in order to extract the error signal, which is zeroed by means of a simple PID controller. The output of the controller is the estimated rotor speed, from which the position is obtained by integration. The injection frequency is 1500 Hz.
6.4.2
Motor model
The motor model has been developed directly in the d-q reference frame aligned with the rotor flux in order to treat the parameters Ld and Lq as constants, thus simplifying the model. Otherwise, when considering the α-β reference frame, the variation of the inductances with rotor position should be taken into account. The main motor parameters are reported in Table 6.1. The continuous-time set of equations used to simulate the motor model
Rated/base speed                2000 rpm
Rated/base current              5 A rms
Rated torque                    5 Nm
Pole pairs                      3
Stator resistance (Rs)          1.5 Ω
Direct inductance (Ld)          13.11 mH
Quadrature inductance (Lq)      18.65 mH
Average inductance (L)          15.88 mH
Inductance modulation (ΔL)      2.77 mH
Moment of inertia (J)           10⁻³ N·m·s²/rad
Back-EMF constant               64 mVrms/rpm (Δ)

Table 6.1: Adopted IPM motor parameters
is:

\[
\begin{aligned}
\lambda_d &= \lambda_m + \int_{t_0}^{t} \left(v_d + \omega_r \lambda_q - R_s I_d\right) dt \\
\lambda_q &= \int_{t_0}^{t} \left(v_q - \omega_r \lambda_d - R_s I_q\right) dt \\
I_d &= \frac{\lambda_d - \lambda_m}{L_d} \\
I_q &= \frac{\lambda_q}{L_q} \\
C_e &= \lambda_d I_q - \lambda_q I_d \\
\omega_r &= \frac{p}{J} \int_{t_0}^{t} \left(C_e - C_r\right) dt \\
\vartheta_r &= \theta_{r0} + \int_{t_0}^{t} \omega_r\, dt
\end{aligned}
\tag{6.27}
\]
The continuous-time equations have been discretized by means of a simple fixed-
Figure 6.9: Calculation of quadrature-axis flux.
Figure 6.10: Calculation of direct-axis flux.
step forward Euler integrator. This can be considered a good approximation of continuous integration if the integration step is small enough (5 µs in the present case). Each of the equations in (6.27) has been implemented inside the VisSim visual simulation engine. Both library models and custom-created blocks have been used, and floating-point numerical representation has been adopted for all the calculations. This is acceptable since, in the actual control system, the motor model block will not be present, being replaced by the interfaces with the hardware (inverter, sensors, etc.).
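As a cross-check of this discretisation, the forward-Euler motor model can be sketched directly in code (Python here, instead of VisSim blocks), using the parameters of Table 6.1; the magnet flux linkage lam_m and the applied voltage step are illustrative assumptions, not data from the report.

```python
# Forward-Euler discretisation of the motor equations (6.27), with the
# parameters of Table 6.1; lam_m and the q-axis voltage are illustrative.
Ts = 5e-6                    # integration step [s], as in the report
Rs = 1.5                     # stator resistance [ohm]
Ld, Lq = 13.11e-3, 18.65e-3  # d- and q-axis inductances [H]
J = 1e-3                     # moment of inertia [N m s^2/rad]
p = 3                        # pole pairs
lam_m = 0.1                  # magnet flux linkage [Wb] (illustrative)

lam_d, lam_q = lam_m, 0.0    # flux state of (6.27)
omega_r, theta_r = 0.0, 0.0  # mechanical state

def euler_step(vd, vq, Cr=0.0):
    """One forward-Euler step of (6.27); returns (Id, Iq, Ce)."""
    global lam_d, lam_q, omega_r, theta_r
    Id = (lam_d - lam_m) / Ld
    Iq = lam_q / Lq
    Ce = lam_d * Iq - lam_q * Id        # torque expression as in (6.27)
    d_lam_d = vd + omega_r * lam_q - Rs * Id
    d_lam_q = vq - omega_r * lam_d - Rs * Iq
    lam_d += Ts * d_lam_d               # derivatives evaluated on the old state
    lam_q += Ts * d_lam_q
    omega_r += Ts * (p / J) * (Ce - Cr)
    theta_r += Ts * omega_r
    return Id, Iq, Ce

# Accelerate from standstill with a constant q-axis voltage step
for _ in range(20000):       # 100 ms of simulated time
    Id, Iq, Ce = euler_step(0.0, 10.0)
```

With no load torque the speed settles where the back-EMF balances the applied q-axis voltage, a quick sanity check that the discretised model behaves like a motor.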
Figure 6.11: Calculation of quadrature-axis current.
Figure 6.12: Calculation of direct-axis current.
Figure 6.13: Calculation of electromagnetic torque.
Figure 6.14: Calculation of rotor speed.
Figure 6.15: Rotor position equation.
Figure 6.16: Floating-point discrete-time integrator.
Figure 6.17: Coordinate transformation at the motor model input side.
Figure 6.18: Coordinate transformation at the motor model output side.
6.4.3
Coordinate transformations
As previously stated, the motor model has been developed directly in the d-q reference frame aligned with the rotor flux and has to be interfaced, both in simulation and in the actual implementation, with the vector control algorithm. The implementation of the control system in the d-q reference frame would only be a simplifying condition, as the actual motor drive system is interfaced with the actual motor, which is a three-phase load. Therefore proper coordinate transformations have been developed at the input (voltage, see Fig. 6.17) and output (current, see Fig. 6.18) sides of the d-q motor model, in order to obtain a new motor model expressed in the a-b-c reference frame. The following coordinate transformations have been developed:

• Clarke (a-b-c → α-β)

\[
u_\alpha = u_A, \qquad u_\beta = \frac{2 u_B + u_A}{\sqrt{3}}
\tag{6.28}
\]

• Inverse Clarke (α-β → a-b-c)

\[
u_A = u_\alpha, \qquad u_B = \frac{\sqrt{3}\, u_\beta - u_\alpha}{2}, \qquad u_C = \frac{-\sqrt{3}\, u_\beta - u_\alpha}{2}
\tag{6.29}
\]

• Park (α-β → d-q)

\[
u_d = u_\alpha \cos\vartheta + u_\beta \sin\vartheta, \qquad
u_q = u_\beta \cos\vartheta - u_\alpha \sin\vartheta
\tag{6.30}
\]

• Inverse Park (d-q → α-β)

\[
u_\alpha = u_d \cos\vartheta - u_q \sin\vartheta, \qquad
u_\beta = u_d \sin\vartheta + u_q \cos\vartheta
\tag{6.31}
\]
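A direct transcription of (6.28)-(6.31), useful for checking the round-trip consistency of each transformation pair, might read:

```python
import math

# Direct transcription of the coordinate transformations (6.28)-(6.31).
def clarke(uA, uB):
    """a-b-c -> alpha-beta, eq. (6.28)."""
    return uA, (2 * uB + uA) / math.sqrt(3)

def inv_clarke(ua, ub):
    """alpha-beta -> a-b-c, eq. (6.29)."""
    return ua, (math.sqrt(3) * ub - ua) / 2, (-math.sqrt(3) * ub - ua) / 2

def park(ua, ub, th):
    """alpha-beta -> d-q at rotor angle th, eq. (6.30)."""
    return (ua * math.cos(th) + ub * math.sin(th),
            ub * math.cos(th) - ua * math.sin(th))

def inv_park(ud, uq, th):
    """d-q -> alpha-beta at rotor angle th, eq. (6.31)."""
    return (ud * math.cos(th) - uq * math.sin(th),
            ud * math.sin(th) + uq * math.cos(th))
```

Applying park followed by inv_park at the same angle returns the original α-β pair, and inv_clarke produces a zero-sum three-phase triple; both properties are easy sanity checks on the formulas.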
As for the basic d-q motor model, the coordinate transformations have also been developed using floating-point numerical representation, as the motor model will be used only for simulation purposes and has therefore to provide the highest degree of accuracy and dynamic range. One can notice that a sort of feedback has been introduced inside the motor model, arising from the use of the instantaneous rotor position in the Park and Inverse Park coordinate transformations. That signal is directly the output of the d-q motor model and will not be available to the control system, where a position transducer (or a sensor-less algorithm) is normally adopted for position and/or speed feedback. One can also notice that, as a consequence of the introduced discretisation of the motor equations, the position information is available to the coordinate transformations with a one-simulation-step delay. This will not cause any problem provided that the simulation step is small enough, as already stated.
6.4.4
Results of the continuous time ideal drive system
Some tests have been performed on the continuous-time ideal drive system in order to verify the simulation models of both the vector control and the estimation of rotor position and speed. Moreover, the ideal simulation is useful to highlight some properties of the high-frequency signal injection when secondary aspects of an actual implementation are neglected. Preliminary results refer to off-line simulation of the estimation engine, aimed at assuring the proper estimation of rotor position and speed before closing the control loop with the estimated variables.
Figure 6.19: Speed step response (estimator off-line).
6.4.5
Results with off-line estimation
In Fig. 6.19 a 10 rad/s speed step response is considered and the comparison between estimated and actual rotor speed is shown. Apart from the high overshoot and settling time, which are functions of the chosen parameters of the speed PI controller, one can notice a good dynamic and steady-state behaviour of the estimated speed. The high-frequency residual due to the signal injection is clearly visible. In Fig. 6.20 the same speed step response is considered, but the rotor position estimation error is shown. One can notice that the initial rotor position estimation error goes quickly to zero and remains there until the motor is started. When the step reference is applied and the motor starts to move, a transient error is generated, depending on the dynamic behaviour of
Figure 6.20: Rotor position estimation error (estimator off-line).

the estimation engine. The steady-state error when the motor is running is a function of the speed and is about 0.5 degrees in the considered operating condition. In Fig. 6.21 the behaviour of the electromagnetic torque (Ce) and of the q-axis current component in the estimated reference frame (Iqsti) is shown. The steady-state oscillations reflect the high-frequency injection.
6.4.6
Results with on-line estimation
Once the behaviour of the system has been optimised with the estimation engine off-line, the speed and position control loops have been closed with the estimated variables and the same tests (speed step response) have been performed in order to highlight any differences.
Figure 6.21: Electromagnetic torque and q-axis current component (estimator off-line).
In Fig. 6.22 the speed response is reported. The comparison with Fig. 6.19 shows that a better transient performance is attained and that the same high-frequency oscillations are present on the estimated speed. Moreover, one can notice that the actual rotor speed is not affected by high-frequency ripple, as the speed PI regulator acts as a low-pass filter. The behaviour of the error between estimated and actual rotor position is similar to the previous one and is shown in Fig. 6.23. There is a slight difference in the steady-state error, which is about 3 degrees when the speed is not zero. In the results shown, the carrier recovery algorithm introduced in the previous sections has not been adopted. It would raise the sensitivity of the estimator, thus reducing the steady-state rotor position estimation error. Finally, in Fig. 6.24 the q-axis and d-axis current components in the estimated rotating reference frame are shown. The behaviour of the q-axis current component is similar to the one presented in Fig. 6.21 for the case where the estimator was off-line. On the contrary, the d-axis current component contains the high-frequency ripple due to the signal injection, and its shape is very similar to that of the rotor position estimation error shown in Fig. 6.23 (see also eq. (6.2)).
6.4 Simulation of the continuous time ideal drive system
Figure 6.22: Speed step response (estimator on-line, sensor-less operations).
Figure 6.23: Rotor position estimation error (estimator on-line).
Figure 6.24: q-axis and d-axis current components (estimator on-line).
6.5 Introduction of platform-specific implementation constraints
One of the main tenets of the Platform-Based Design approach is to represent the effects of the actual implementation with an abstract model characterised by idealised parameters. The design problem is thus decomposed into a set of platform mappings, each of them shielding lower-level details. In this view, control design is a platform mapping with as many implementation details as are exposed by the implementation platform. The essential issue in representing implementation platforms abstractly is to determine their effect on the control algorithms: the accuracy of measurements and actuations, and the fact that computation and communication take time, are important in this respect. The task of the control designer following the principle of integrated control-platform design is then to choose algorithms and platform parameters that are robust with respect to the errors introduced by the computation of the control law. As argued above, it is important to classify and characterise carefully the effects that a particular implementation has on the behaviour of the controlled system [82]. The results shown in the previous section refer to an ideal implementation platform, where all the characteristics and limitations of the actual hardware (e.g. a microcontroller or ASIC) that implements the control algorithm have been neglected and thus hidden from the application layer. We will now expose to the application layer those characteristics of an actual implementation that can affect the reliability of the control algorithm, as shown before. In this phase it is important to identify only those implementation (architectural) constraints that can really influence the control system performance and design, hiding those that the designer considers negligible from that point of view. The availability of a rapid prototyping environment that allows both the simulation and the implementation loops to be closed very quickly is a powerful design tool for reducing design cycles, testing times and system cost. Moreover, the trade-off between system requirements (in terms of hardware resources, peripherals, interfaces, etc.) and the obtainable performance and cost can be evaluated before the actual implementation is realised. Once the implementation constraints have been identified, two tasks have to be accomplished:
• the definition of an implementation model for each architectural constraint, for pure simulation purposes;
• the definition of a set of high-level interface parameters for the considered subsystem model, in order to create an architecture-dependent API with a common and unique interface to the Application Layer.
The first task has to take into account and model those characteristics of the actual implementation that could affect the reliability of the control action; these models have to be included in the simulation tool in order to expose such effects and, eventually, compensate for them from an algorithmic or architectural point of view, or both. Three main steps can be identified for deriving an implementation model:
• functional definition: the function of the particular subsystem has to
be refined in terms of a behavioural definition and an analytical I/O model; in this phase it does not matter whether the implementation of a particular block will be in hardware or software, as only the behavioural model is considered. After the introduction of successive refinements, and immediately before the implementation of the system, it is possible to evaluate the implementation architecture for each function, also taking into account the features described in the next steps;
• architectural constraints definition: all the constraints of an actual implementation must be modelled in terms of an analytical I/O subsystem. Examples of architectural constraints are computation imprecision and delay, quantisation in the analog-to-digital conversion of measured data, measurement errors, etc.;
• timing, triggering and communication requirements: the execution of each function has to be time-related and synchronised with the other subsystems of the control algorithm (“software” synchronisation and communication) or with the input/output peripherals (“hardware” synchronisation and communication). This also requires the definition of common data-exchange formats between blocks and the evaluation of the effect of this choice on the performance of the control system.
The simulation model and each implementation instance should share a common set of parameters, to be processed for each functional block, in order to guarantee a seamless transition from the simulation to the implementation of the system. This implies the definition of a set of high-level interface parameters for each considered subsystem model, in order to create a library of architecture-independent APIs sharing a common and unique interface to the Application Layer. The rapid prototyping tool must be able to simulate the drive system with all the architectural constraints, allowing a particular implementation instance to be generated by simple substitution of those functional blocks of the
control system that model hardware functionalities with the corresponding device drivers for the actual instance of that subsystem. This approach ensures that any change in the hardware architecture affects only the choice of the particular device driver, while its interface to the simulation and prototyping environment, as well as its implementation model, remains the same. As stated above, in this preliminary phase it is important to identify only those architectural constraints that can really influence the control system performance, and hide those that the designer considers negligible from that point of view. The following classification covers some common and important cases:
• finite precision fixed-point numerical representation;
• quantisation of measured values and measurement noise;
• control loop delay (or latency time);
• actuation delay (presence of the inverter);
• communication delays.
In the next sections each topic is discussed with reference to the considered design case.
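The substitution mechanism described above can be sketched as follows. This is a minimal illustrative sketch in Python, not the report's actual prototyping environment, and all class and method names are hypothetical: an architecture-independent interface is shared by the simulation model and the device-driver wrapper, so swapping one for the other leaves the Application Layer untouched.

```python
from abc import ABC, abstractmethod

class CurrentSensorAPI(ABC):
    """Common, unique interface seen by the Application Layer."""
    @abstractmethod
    def read_phase_currents(self):
        """Return the two measured phase currents in per-unit form."""

class SimulationModel(CurrentSensorAPI):
    """Implementation model for pure simulation: wraps the ideal motor
    model and may add architectural constraints (quantisation, noise)."""
    def __init__(self, motor_model):
        self.motor_model = motor_model
    def read_phase_currents(self):
        return self.motor_model.phase_currents()

class DeviceDriver(CurrentSensorAPI):
    """Implementation instance: wraps an actual ADC device driver.
    Substituting this block for SimulationModel changes nothing in the
    Application Layer, which only depends on CurrentSensorAPI."""
    def __init__(self, adc):
        self.adc = adc
    def read_phase_currents(self):
        return self.adc.sample(0), self.adc.sample(1)

def control_step(sensor):
    """Application Layer code: written once against the common API."""
    ia, ib = sensor.read_phase_currents()
    return ia, ib
```

The point of the sketch is that `control_step` is oblivious to which implementation is plugged in, mirroring the seamless simulation-to-implementation transition described in the text.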
6.5.1 Finite precision fixed-point numerical representation
Fixed-point numerical representation and processing hardware are the most common features of modern microcontrollers dedicated to the control of electrical drives. The aim is to simulate the effects of the chosen fixed-point accuracy before the control hardware structure and peripherals are defined, and to evaluate the influence of a particular choice on the performance of the drive system. The control algorithm has therefore been developed considering a fixed-point numerical representation for all the involved calculations.
The data exchange between the previously introduced ideal motor model and the control system has to take into account the control algorithm computation imprecision due to the particular choice of the fixed-point accuracy, i.e. the number of bits available for the numerical representation. As a consequence of that choice, all the variables of the controller have been scaled with respect to certain base values, obtaining a per-unit equivalent system. This means that the inputs and outputs of the motor model have also been scaled to per-unit representation. At the present stage only floating-point numerical representation is adopted inside the controller, which allows the performance of the estimation algorithm to be evaluated when a floating-point processing architecture is considered. The extension to a fixed-point processing architecture is possible and is under development.
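The per-unit scaling and the planned fixed-point extension can be sketched as follows. The base values below are invented for illustration, and the Q15 format is only an assumption about how a fixed-point controller might hold per-unit quantities; the report does not specify the format.

```python
# Per-unit scaling: every controller variable is divided by a base value,
# so that 1.0 (per unit) corresponds to the rated quantity.
I_BASE = 10.0    # A, illustrative base current
V_BASE = 300.0   # V, illustrative base voltage

def to_per_unit(value, base):
    return value / base

def from_per_unit(pu, base):
    return pu * base

# Assumed fixed-point extension: a per-unit value in [-1, 1) mapped onto a
# signed Q15 integer, as a fixed-point microcontroller might hold it.
Q = 15

def to_q15(pu):
    # Saturate to the representable range, as fixed-point hardware would.
    return max(-2 ** Q, min(2 ** Q - 1, int(round(pu * 2 ** Q))))

def from_q15(word):
    return word / 2 ** Q
```

For example, a measured current of 5 A becomes 0.5 pu, i.e. the Q15 word 16384; the round-trip error is bounded by one least significant bit (2^-15 pu).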
6.5.2 Quantisation of measured values
In an analog-to-digital conversion, the digital signal is not a faithful reproduction of the analog one; the conversion process therefore introduces a distortion, also called quantisation error. Sometimes, especially in low-cost microcontrollers, this error can cause an unacceptable decrease of the overall control system performance. In the simulation of the sensored and sensor-less control system, a 10-bit quantisation has been introduced on the measured motor phase currents, reflecting the hardware commonly present in commercial microcontrollers. The quantisation of the phase current measurements affects the reliability of the signal-injection-based estimator, as the high-frequency content of the motor phase currents is very small and the quantisation noise can be comparable to the useful signal. The motor phase currents are scaled to per-unit form, then passed through the quantizer and finally fed to the control algorithm (see Fig. 6.25).
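The quantisation step of this processing chain can be sketched as a mid-tread uniform quantiser. The 10-bit resolution is from the text; the symmetric ±2 pu measuring range (twice the base value) anticipates the value used in the tests of Section 6.5.6, and the mid-tread rounding is an assumption.

```python
def quantise(i_pu, bits=10, full_scale=2.0):
    """Mid-tread uniform quantiser over the symmetric measuring range
    [-full_scale, +full_scale] per unit (assumed to be twice the base
    current, as in the tests described later)."""
    lsb = 2 * full_scale / 2 ** bits   # width of one quantisation step
    return round(i_pu / lsb) * lsb

# With 10 bits over +/-2 pu the LSB is about 3.9e-3 pu: comparable to the
# small injected high-frequency current component, which is why the
# quantisation noise degrades the signal-injection estimator.
```

A current component smaller than half an LSB, such as a 0.001 pu high-frequency ripple, quantises to zero and is lost to the estimator, which illustrates the reliability problem noted above.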
Figure 6.25: Motor phase current processing including quantisation effects.
6.5.3 Control loop delay (or latency time)
In a digital control system, the process of sensor reading, control law computation and output actuation takes a certain amount of time that is usually not negligible. A major reason for this delay is the complex sharing of computation and communication resources among several control loops. Several techniques are known to compensate for a constant or varying delay, even at the price of a reduced control system performance. The aim of this work, however, is to evaluate the effects of such a delay on the control and estimation performance of the system. In the considered simulation model the control algorithm is processed every 100 µs, performing the motor phase current measurement, the control and estimation algorithm processing and the calculation of the control action (reference voltage). The actuation of the control action is modelled independently of the controller processing and is analysed further below.
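The measure/compute/actuate latency described above can be sketched as a simulation loop in which the reference computed at step k is only actuated during step k+1. This is an illustrative sketch with hypothetical function names, not the report's simulation scheme.

```python
# Discrete control loop with a 100 us period: the voltage reference
# computed at step k is held back and applied during step k+1,
# modelling the one-period measure/compute/actuate latency.
T_CTRL = 100e-6  # control period, s

def run_loop(measure, control, plant_step, n_steps):
    """measure() -> sampled output; control(y) -> reference voltage;
    plant_step(v, dt) applies voltage v to the plant for one period."""
    v_applied = 0.0              # nothing is actuated before the first update
    for _ in range(n_steps):
        y = measure()
        v_next = control(y)      # reference computed now ...
        plant_step(v_applied, T_CTRL)
        v_applied = v_next       # ... but actuated one period later
    return v_applied
```

Running this against even a trivial plant makes the latency visible: during the first period the plant still sees the initial output, and the reference computed at step 0 only reaches it at step 1.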
6.5.4 Actuation delay (due to the presence of the power converter)
The presence of the power converter (inverter) introduces a second source of control loop delay which, at this stage, is kept distinct from the intrinsic delay caused by the implementation of the controller by
a digital system (discussed in the previous section). This comes from the consideration that different models of the inverter can be provided, e.g. a mean-value model or an instantaneous model. In the following we suppose that the control period is synchronised with the PWM cycle; hence, the reference voltage produced by the digital controller at step k is realised by the inverter at the next step/modulation cycle (k+1). When the mean-value model is considered, the inverter output voltage is modelled as constant over the whole PWM modulation cycle. This means that the instantaneous evolution of the output voltage within the modulation cycle is not modelled (i.e. simulated), and the constant value produced by the digital controller is assumed to be applied to a motor phase for the whole duration of the modulation cycle (k+1). As a result, the instantaneous evolution of the motor phase currents due to the varying phase voltage within the modulation cycle is neglected and only its mean value is considered. When an instantaneous model of the inverter is considered, each single commutation of the three branches is simulated within the modulation cycle and the associated variation of the inverter output voltage is provided as input to the motor model. This model is more accurate than the previous one, in the sense that it makes it possible to monitor the evolution of all the characteristic variables of the system (current, voltage, torque, etc.), but it requires more simulation steps within each control period. Such a model also makes it possible to simulate the non-ideal behaviour of the inverter switches and secondary phenomena which have not been taken into account in the present work (presence of dead-times, variation of the dc bus voltage, asymmetry of commutation, etc.). These phenomena can also be described by means of dynamic models and parameters which could be exported to the upper platform (layer).
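The contrast between the two inverter models can be sketched as follows. This is an illustrative one-phase sketch under simplifying assumptions (ideal switches, a single branch producing ±Vdc/2, no dead-times), not the report's model; both representations agree on the cycle-average voltage, but only the instantaneous one resolves the intra-cycle switching, at the cost of many more simulation steps.

```python
# Mean-value model: the reference from cycle k is held constant over the
# whole modulation cycle k+1; intra-cycle switching is not simulated.
def mean_value_cycle(v_ref_prev):
    """Return the single phase-voltage value applied during the cycle."""
    return v_ref_prev

# Instantaneous model: the same cycle is resolved into sub-steps, so the
# phase voltage toggles between +Vdc/2 and -Vdc/2 within the cycle.
def instantaneous_cycle(duty, vdc, n_sub=100):
    """One branch over n_sub sub-steps: high for a 'duty' fraction."""
    high, low = +vdc / 2.0, -vdc / 2.0
    on = round(duty * n_sub)
    return [high] * on + [low] * (n_sub - on)
```

For a duty cycle d the average branch voltage is (2d - 1)·Vdc/2, which is exactly what the mean-value model would apply in one step while the instantaneous model produces it as the average of n_sub switched values.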
In this work the simplest instantaneous model of the inverter is adopted and only two parameters are exported to the application layer: the resolution of the digital counter used for the generation of the PWM signals and the
dc bus voltage, which is switched to the inverter output as a function of the power switch states.
Figure 6.26: Simulation model adopting some implementation constraints.
6.5.5 Simulation of the drive system adopting platform-specific implementation constraints
In this section, simulation results of the drive system adopting platform-specific implementation constraints are shown and discussed.
Introducing discretization and control loop delay
Preliminary results have been obtained by adopting a discrete control structure for the controller, with different simulation periods for the controller and for the motor model (refer to Fig. 6.26). The former has been chosen as 100 µs, a common value for electrical drive controllers that is obtainable with medium-high performance microcontrollers. The motor model has been simulated with a 5 µs step, a value compatible with the electrical time constant of the motor. A simple linear power amplifier, instead of the inverter, has been adopted for actuation purposes, but the associated delay has been introduced at the simulation/control level. This means that the reference voltages produced by the digital controller are actuated during the whole of the next control period, their value being constant during that period. Both the controller and the motor model have been converted to per-unit form, meaning that all the variables have been scaled to certain base
values. This step can help the designer when fixed-point processing architectures are adopted for the digital controller, and at this stage it introduces no simplifying hypothesis or limitation to the system modelling. It should be recalled that all the calculations are still performed with floating-point numerical representation, both in the controller and in the motor model. A possible further step is the implementation of the controller on a fixed-point processing architecture which, following the PBD approach, would require the simulation of the system with the actual fixed-point numerical representation inside the controller. The simulation of the control system with the previous configuration and implementation constraints made it possible to identify and correct some problems at the estimation engine and control levels which were not observed with the ideal simulation scheme. In particular, it could be noticed that:
• the adoption of the carrier recovery algorithm inside the estimation engine becomes fundamental, as the control (and estimation) delay leads to a wrong demodulation of the high-frequency estimation signals and hence makes position and speed estimation impossible;
• the high-frequency content injected into the motor voltage (and flux) and the corresponding current affect the behaviour of the q-axis and d-axis current regulators, which try to remove that high-frequency content. In other words, the motor control loop tries to eliminate the injected high-frequency flux, leading to a reduced sensitivity or to the impossibility of estimating the rotor position and speed. A simple solution has been adopted, based on a low-pass filtering of the motor phase currents fed to the current control loop. The chosen cut-off frequency is 500 Hz, a value that is supposed to be sufficiently low with respect to the injected current components and high enough not to affect the motor control performance;
• finally, the tuning of the estimator parameters, which depends on the control loop frequency and delays, is performed in the actual operating conditions of the controller, thus allowing the performance of the actual experimental control system to be predicted.
Figure 6.27: Estimated phase angle of the demodulation signal.
In Fig. 6.27 the behaviour of the phase angle of the demodulation signal, as calculated by the proposed recovery algorithm, is shown. The results of Fig. 6.27a are obtained by neglecting the control loop delay, which is instead considered in Fig. 6.27b. One can notice how the calculated phase angle, after the first transient and the successive one corresponding to the motor start-up, reaches two different values; the value is obviously higher when the control loop delay is considered. The need for the carrier phase recovery algorithm is not evident when the control loop delay is neglected, as shown in Fig. 6.28. In fact, the instantaneous value of the rotor position estimation error seems not to be affected by the
adoption of the carrier recovery algorithm. This can be explained by the small value of the demodulation phase shift in this operating condition.
Figure 6.28: Rotor position estimation error when the control loop delay is neglected.
The results of Fig. 6.29 show the importance of the carrier recovery algorithm. The control delay is modelled in both cases, and it is clear that the adoption of the carrier recovery algorithm ensures the convergence of the rotor position estimation error to the correct value (Fig. 6.29a). Finally, Fig. 6.30 shows the shape of the q-axis current component during a speed step transient. The effect of the low-pass filter introduced in the current control loop to filter the high-frequency ripple due to the signal injection is clearly visible. A shape similar to that of Fig. 6.30b is obtained for the electromagnetic torque, which causes only small speed ripple values because the mechanical time constant of the motor is large with respect to the injected frequency.
Figure 6.29: Rotor position estimation error when the control loop delay is modelled.
Figure 6.30: q-axis current component.
6.5.6 Introducing actuation delay and measurement quantisation
The investigation of the effects of implementation constraints on the performance of the sensor-less drive has been completed by introducing the actuation delay and the measurement quantisation. The simple linear power amplifier has been replaced by the inverter, whose dynamical model has been introduced in the simulation scheme. Quantisation in the measurement of the motor phase currents has been modelled as realised by a modern microcontroller. The resulting drive scheme is depicted in Fig. 6.31; the power inverter model and the driving logic (compare unit) are clearly visible.
Figure 6.31: Simulation model after modelling the actuation subsystem.
Figure 6.32: Compare unit and inverter.
Since the instantaneous model of the power inverter has been adopted, the corresponding simulation period has been fixed to 100 ns, which is also the resolution of the PWM output voltage signals. This required reducing the motor model simulation period in order to simulate its behaviour within the whole modulation cycle. This choice greatly increases the time needed to perform each simulation, but produces very accurate results. The interface between the controller and the inverter is represented by the three compare values (expressed in per-unit form), which are then processed by the “compare unit” block. The outputs of the compare unit block are the firing signals for the inverter's power switches (Fig. 6.32). This choice reflects the hardware sub-system commonly present on modern microcontrollers dedicated to the control of electrical drives. The format of the controller's output signals has been chosen to be compatible with the inputs of the compare unit block. This reflects the PBD approach: the application layer (in this case the controller) only needs to know that the input to the actuation hardware is the duty cycle of each inverter branch, while no information about the actual structure of the compare unit block or the inverter is available to that layer. Nor is the actual implementation of that block defined at this stage, as only a behavioural description is given. The compare values out of the controller are generated by the space-vector PWM modulation, which has been implemented as a pure processing sub-system inside the controller itself. An important parameter is the simulation period of the compare unit and inverter blocks, i.e. the resolution of the PWM output voltage signals. This value influences the resolution of the impressed motor phase voltages and is an important parameter to be defined in the drive system. Nevertheless, it is commonly determined by the adopted microcontroller unit, which normally provides both the compare unit block and the carrier wave generation (as hardware sub-systems). Details on the inverter structure and modulation algorithm can be found in [83]. Sampling and quantisation of the motor phase currents have been introduced as realised by the analog-to-digital converter module of a modern microcontroller. The tests have been performed considering a 10-bit equivalent resolution and a measuring range equal to twice the chosen base value for the motor phase current. Both the quantisation resolution and the measuring range affect the reliability of the rotor position estimation, as the high-frequency current content is small compared to the nominal-frequency currents. The adopted implementation constraints allowed the performance of the sensor-less control scheme to be evaluated and compared with the previous results. All the following results are obtained considering the actuation delay (the inverter model); some tests and comparisons are shown to highlight the effect of the quantisation of the motor phase current measurement. In Fig. 6.33, Fig. 6.34 and Fig. 6.35 a step speed transient is considered, and the comparisons between reference and actual speed, reference and actual q-axis current component, and the rotor position estimation error are reported, respectively.
One can notice that the quantisation of the motor phase currents introduces an irregular behaviour in the responses.
Figure 6.33: Speed step response.
Figure 6.34: Comparison between reference and actual q-axis current.
Figure 6.35: Rotor position estimation error.
Chapter 7
Conclusions
In this report we presented the bases of Platform-Based Design (PBD) as a general methodology for embedded system design. In particular, we outlined the notion of a “meet-in-the-middle” methodology and showed how to combine top-down and bottom-up phases to achieve a flow that can yield impressive results in terms of re-use, correctness and design-space exploration. We introduced a formal view of the method, which is used as the underlying paradigm for the Metropolis framework described in a companion report. To demonstrate the use and the power of the method, we chose three application areas:
• Ad-hoc wireless sensor networks;
• Automotive engine control;
• Electric motor control.
In the case of AWSNs, we showed how to use the paradigm to define a potential standard for the interfaces between applications and AWSN architectures. In addition, we applied the paradigm to design a protocol stack for AWSNs. In the case of automotive engine control, we presented a novel example involving the design of an embedded controller for a motorcycle. This example is simple enough to be carried out across different abstraction layers, illustrating the principles without being encumbered by pointless implementation details; yet it has real interest, since it is an industrial application being pursued at Magneti-Marelli for a Piaggio motorcycle. The electric motor control problem is tackled across one main layer of abstraction that delimits the boundary between control algorithms and implementation architecture. Here the essential feature to exploit is the formalisation of the interactions between the implementation parameters and the behaviour of the control algorithm. For each of these applications we can identify a specific design flow; however, each of these flows can be seen as a refinement of a general flow that shares the same abstract view of the process. This “meta-flow” is the incarnation of the PBD principles. The future of this approach lies in defining an environment and a set of tools that can support it seamlessly across all phases of the design process. PBD can be used to cast in a formal framework best-in-class approaches that are in use in industry, but it can also be used to define revolutionary approaches to the design problem.
Bibliography
[1] K. Keutzer, S. Malik, R. Newton, J. Rabaey and A. Sangiovanni-Vincentelli, System Level Design: Orthogonalization of Concerns and Platform-Based Design, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 19, No. 12, December 2000.
[2] A. Sangiovanni-Vincentelli, Defining Platform-Based Design, EEDesign, February 2002.
[3] M. Sgroi, Platform-based Design Methodologies for Communication Networks, PhD thesis, University of California at Berkeley, Fall 2002.
[4] F. Balarin, L. Lavagno, C. Passerone, A. Sangiovanni-Vincentelli, M. Sgroi and Y. Watanabe, Modeling and Designing Heterogeneous Systems, in Concurrency and Hardware Design, Advances in Petri Nets, Springer Verlag, 2002.
[5] F. Balarin, M. Chiodo, P. Giusto, H. Hsieh, A. Jurecska, L. Lavagno, C. Passerone, A. Sangiovanni-Vincentelli, E. Sentovich, K. Suzuki and B. Tabbara, Hardware-Software Co-Design of Embedded Systems: The POLIS Approach, Kluwer Academic Press, 1997.
[6] M.Z. Win and R.A. Scholtz, On the energy capture of ultrawide bandwidth signals in dense multipath environments, IEEE Communications Letters, Vol. 2, No. 9, September 1998, pp. 245-247.
[7] F. Balarin et al., Hardware-Software Co-Design of Embedded Systems: The POLIS Approach, Kluwer Publishing Co., 1998.
[8] J. Rowson and A. Sangiovanni-Vincentelli, System Level Design, EE Times, 1996.
[9] J. Rowson and A. Sangiovanni-Vincentelli, Interface-based Design, Proceedings of the 34th Design Automation Conference (DAC-97), pp. 178-183, Las Vegas, June 1997.
[10] R. Passerone, Semantic Foundations for Heterogeneous Systems, PhD thesis, Department of Electrical Engineering and Computer Science, University of California, Berkeley, February 2004.
[11] M.Z. Win and R.A. Scholtz, On the robustness of ultra-wide bandwidth signals in dense multipath environments, IEEE Communications Letters, Vol. 2, No. 2, February 1998, pp. 51-53.
[12] D. Cassioli, M.Z. Win, F. Vatalaro and A.F. Molisch, Performance of low-complexity RAKE reception in a realistic UWB channel, Proc. IEEE International Conference on Communications (ICC 2002), Vol. 2, 28 April-2 May 2002, pp. 763-767.
[13] M. Sgroi, L. Lavagno and A. Sangiovanni-Vincentelli, Formal Models for Embedded System Design, IEEE Design & Test of Computers, April-June 2000.
[14] E. Lee and A. Sangiovanni-Vincentelli, A Unified Framework for Comparing Models of Computation, IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, Vol. 17, No. 12, pp. 1217-1229, December 1998.
[15] J.L. da Silva Jr., M. Sgroi, F. De Bernardinis, S.F. Li, A. Sangiovanni-Vincentelli and J. Rabaey, Wireless Protocols Design: Challenges and Opportunities, 8th International Workshop on Hardware/Software Co-Design (CODES/CASHE '00), San Diego, CA, May 2000.
[16] C. G. Bell and A. Newell, Computer Structures: Readings and Examples, McGraw-Hill, New York, 1971.
[17] H. Zimmerman, OSI Reference Model: The ISO model of architecture for Open Systems Interconnection, IEEE Transactions on Communications, 28(4), pp. 425-432, April 1980.
[18] A. S. Tanenbaum, Computer Networks, Third Edition, Prentice Hall PTR, 1996.
[19] D. Clark and D. Tennenhouse, Architectural considerations for a new generation of protocols, in Computer Communication Review, ACM SIGCOMM '90 Symposium, Communications Architectures and Protocols, Philadelphia, PA, USA, Vol. 20, pp. 200-208, September 1990.
[20] H. Balakrishnan, V. Padmanabhan, S. Seshan and R. Katz, A comparison of mechanisms for improving TCP performance over wireless links, IEEE/ACM Transactions on Networking, 5:756-769, December 1997.
[21] E. Lee and A. L. Sangiovanni-Vincentelli, A Unified Framework for Comparing Models of Computation, IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, Vol. 17, No. 12, pp. 1217-1229, December 1998.
[22] J. Stoy, Denotational Semantics: the Scott-Strachey Approach to Programming Language Theory, MIT Press, 1977.
[23] B. Davey and H. Priestley, Introduction to Lattices and Order, Cambridge University Press, 1990.
[24] E. Lee and D. Messerschmitt, Digital Communication, Kluwer Academic Publishers, 1988.
[25] A. J. Goldsmith and S. B. Wicker, Design challenges for energy-constrained ad hoc wireless networks, IEEE Wireless Communications, August 2002.
[26] R. C. Shah and J. Rabaey, Energy aware routing for low energy ad hoc sensor networks, University of California at Berkeley, December 2001.
[27] TIMe Electronic Textbook v. 4.0, SINTEF, July 1999.
[28] M. A. Marsan and F. Neri, Reti di TLC, lecture notes.
[29] G. Di Stefano, F. Graziosi and F. Santucci, Distributed positioning algorithm for ad-hoc networks, 2003 International Workshop on Ultra Wideband Systems (IWUWBS), Oulu, 2003.
[30] V. Bhargahavan, A. Demers, S. Shenker and L. Zhang, MACAW: a media access protocol for wireless LANs, ACM SIGCOMM, 1994.
[31] C. Savarese, Robust positioning algorithms for distributed ad-hoc wireless sensor networks, Master of Science thesis, Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, May 2002.
[32] D. Gay, P. Levis, D. Culler and E. Brewer, nesC 1.1 Language Reference Manual, May 2003, available with the TinyOS distribution at http://webs.cs.berkeley.edu.
[33] D. Gay, P. Levis, R. von Behren, M. Welsh, E. Brewer and D. Culler, The nesC Language: A Holistic Approach to Networked Embedded Systems, in ACM SIGPLAN 2003 Conference on Programming Language Design and Implementation (PLDI), June 2003.
[34] J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. E. Culler and K. S. J. Pister, System Architecture Directions for Networked Sensors, in Architectural Support for Programming Languages and Operating Systems, pp. 93-104, 2000.
[35] B. W. Kernighan and D. M. Ritchie, The C Programming Language, Second Edition, Prentice Hall, 1988.
[36] S. R. Madden, The Design and Evaluation of a Query Processing Architecture for Sensor Networks, Ph.D. dissertation, University of California, Berkeley, Fall 2003.
[37] B. Maurer, Introduction to TinyOS and nesC Programming, DSP Labs, Livermore, CA, USA, Crossbow tutorial at http://www.xbow.com, July 2003.
[38] M. Antoniotti, A. Balluchi, L. Benvenuti, A. Ferrari, R. Flora, W. Nesci, C. Pinello, C. Rossi, A. L. Sangiovanni-Vincentelli, G. Serra and M. Tabaro, A top-down constraints-driven design methodology for powertrain control system, in Proc. GPC98, Global Powertrain Congress, Vol. Emissions, Testing and Controls, Detroit, Michigan, USA, 1998, pp. 74-84.
[39] P. J. Antsaklis, A brief introduction to the theory and applications of hybrid systems, special issue on hybrid systems: theory and applications, Proceedings of the IEEE 88(7), 879-1133, 2000.
[40] A. Balluchi, L. Berardi, M. D. Di Benedetto, A. Ferrari, G. Girasole and A. L. Sangiovanni-Vincentelli, Integrated control-implementation design, in Proc. 41st IEEE Conference on Decision and Control, Las Vegas, NV, USA, 2002.
[41] A. Balluchi, M. D. Di Benedetto, A. Ferrari, G. Gaviani, G. Girasole, C. Grossi, W. Nesci, M. Pennesei, and A. L. Sangiovanni-Vincentelli, Design of a motorcycle engine control unit using an integrated control-implementation approach, in Proc. AAC Conference, 2004.
[42] H. Chang, E. Charbon, U. Choudhury, A. Demir, E. Felt, E. Liu, E. Malavasi, A. Sangiovanni-Vincentelli and I. Vassiliou, A Top-Down Constraint-Driven Design Methodology for Analog Integrated Circuits, Kluwer Academic Publishers, Boston/London/Dordrecht, 1997.
[43] F. Boekhorst, "Ambient intelligence: The next paradigm for consumer electronics", Proceedings IEEE ISSCC 2002, San Francisco, February 2002.
[44] D. Snoonian, "Smart Buildings", IEEE Spectrum, pp. 18-23, September 2003.
[45] J. Rabaey, E. Arens, C. Federspiel, A. Gadgil, D. Messerschmitt, W. Nazaroff, K. Pister, S. Oren, P. Varaiya, "Smart Energy Distribution and Consumption: Information Technology as an Enabling Force", White Paper, http://citris.berkeley.edu/SmartEnergy/SmartEnergy.html.
[46] G. Huang, "Casting the Wire", Technology Review, pp. 50-56, July/August 2003.
[47] IEEE 802.15 WPAN(tm) Task Group 4 (TG4), http://www.ieee802.org/15/pub/TG4.html
[48] The ZigBee Alliance, http://www.zigbee.org/
[49] IEEE 1451.2, "Standard for a Smart Transducer Interface for Sensors and Actuators - Transducer to Microprocessor Communication Protocols and Transducer Electronic Data Sheet (TEDS) Formats", IEEE, 1997.
[50] C. Srisathapornphat, C. Jaikaeo, C. Shen, "Sensor Information Networking Architecture", in Proceedings of the International Workshops on Pervasive Computing, Toronto, Canada, August 2000.
[51] S. Madden, "The Design and Evaluation of a Query Processing Architecture for Sensor Networks", Ph.D. dissertation, UC Berkeley, 2003.
[52] M. Sgroi, A. Wolisz, A. Sangiovanni-Vincentelli, J. Rabaey, "A Service-based Universal Application Interface for Ad-hoc Wireless Sensor Networks", White Paper, http://bwrc.eecs.berkeley.edu/
[53] J. Heidemann, F. Silva, C. Intanagonwiwat, R. Govindan, D. Estrin, D. Ganesan, "Building Efficient Wireless Sensor Networks with Low-Level Naming", in Proceedings of the Symposium on Operating Systems Principles (SOSP 2001), Lake Louise, Canada, October 2001.
[54] Action Semantics Consortium, Action Semantics for the UML, http://www.kc.com/assite/home.html, August 2001.
[55] C. Aurrecoechea, A. Campbell and L. Hauw, A survey of QoS architectures, Multimedia Systems, Springer-Verlag, volume 6 (3), pages 138-151, May 1998.
[56] J. Richter and H. de Meer, Towards formal semantics for QoS support, Proceedings of IEEE INFOCOM '98, the Conference on Computer Communications, San Francisco, USA, pages 472-479, March 1998.
[57] K. E. Årzén, B. Bernhardsson, J. Eker, A. Cervin, K. Nilsson, P. Persson and L. Sha, Integrated Control and Scheduling, Department of Automatic Control, Lund University, Internal Report TFRT-758, 1999.
[58] K. E. Årzén, A. Cervin, J. Eker and L. Sha, An Introduction to Control and Scheduling Co-Design, Proceedings of the 39th IEEE Conference on Decision and Control, Sydney, Australia, December 2000.
[59] L. Palopoli, L. Abeni, G. Buttazzo, F. Conticelli and M. Di Natale, Real-time control system analysis: an integrated approach, Proceedings of the IEEE Real-Time Systems Symposium, Orlando, Florida, December 2000.
[60] D. Seto, J. P. Lehoczky, L. Sha and K. Shin, On task schedulability in real-time control systems, Proceedings of the IEEE Real-Time Systems Symposium, 1996.
[61] L. Palopoli, C. Pinello, A. Sangiovanni-Vincentelli, L. El-Ghaoui and A. Bicchi, Synthesis of robust control systems under resource constraints, Hybrid Systems: Computation and Control (HSCC 2002), Stanford, California, USA, Springer-Verlag, March 2002.
[62] H. Kopetz, Real-Time Systems: Design Principles for Distributed Embedded Applications, Kluwer Academic Publishers, 1997.
[63] J. Nilsson, B. Bernhardsson and B. Wittenmark, Some topics in real-time control, Proceedings of the American Control Conference, Philadelphia, USA, 1998.
[64] J. Nilsson, Real-Time Control Systems with Delays, Department of Automatic Control, Lund Institute of Technology, Lund, Sweden, 1998.
[65] L. Xiao, M. Johansson, H. Hindi, S. Boyd and A. Goldsmith, Joint optimization of communication rates and linear systems, Proceedings of the 40th IEEE Conference on Decision and Control, Orlando, FL, 2001.
[66] A. Bicchi, A. Marigo and B. Piccoli, Quantized control systems and discrete nonholonomy, IEEE Trans. on Automatic Control, 2001.
[67] D. F. Delchamps, Extracting state information from a quantized output record, Systems and Control Letters, 1989.
[68] N. Elia and S. Mitter, Stabilization of linear systems with limited information, IEEE Trans. on Automatic Control, 2001.
[69] F. Parasiliti, R. Petrella and M. Tursini, "Sensorless Control of Buried PM Synchronous Motors", in Proc. of the Thirteenth Interactive Seminar, Vol. 2, pp. 147-168, Bressanone (Italy), March 18-20, 2002 (in Italian).
[70] Ph. K. Sattler and K. Störker, "Estimation of speed and pole position of an inverter fed permanent excited synchronous machine", in Proc. of the EPE Conf., pp. 1207-1212, Aachen, 1989.
[71] L. A. Jones and J. H. Lang, "A state observer for the permanent magnet synchronous motor", IEEE Trans. on Industrial Electronics, Vol. 36, No. 3, pp. 374-382, August 1989.
[72] F. Parasiliti, R. Petrella and M. Tursini, "Sensorless speed control of a PM synchronous motor based on sliding mode observer and extended Kalman filter", in Proc. of the Thirty-Sixth IEEE-IAS Annual Meeting, CD-ROM, Chicago, September 30 - October 4, 2001.
[73] S. Bolognani, R. Oboe and M. Zigliotto, "Sensorless full-digital PMSM drive with EKF estimation of speed and rotor position", IEEE Trans. on Industrial Electronics, Vol. 46, No. 1, pp. 184-191, February 1999.
[74] N. Matsui, "Sensorless PM brushless dc motor drives", IEEE Trans. on Industrial Electronics, Vol. 43, pp. 300-308, April 1996.
[75] M. J. Corley and R. D. Lorenz, "Rotor position and velocity estimation for a salient-pole permanent magnet synchronous machine at standstill and high speeds", IEEE Trans. on Industry Applications, Vol. 34, No. 4, pp. 36-41, July/August 1998.
[76] M. Schroedl, "Sensorless control of AC machines at low speed and standstill based on the 'INFORM' method", in Proc. of the Industry Applications Society Annual Meeting, Vol. 1, pp. 270-277, 1996.
[77] A. Consoli, G. Scarcella and A. Testa, "Sensorless control of PM synchronous motors at zero speed", in Proc. of the Industry Applications Society Annual Meeting, Vol. 1, pp. 270-277, 1999.
[78] F. Parasiliti, R. Petrella, M. Tursini, "Sensorless Speed Control of Salient Rotor PM Synchronous Motor Based on High Frequency Signal Injection and Kalman Filter", in Proc. of the ISIE Conf., CD-ROM, L'Aquila (Italy), July 2002.
[79] F. Parasiliti, R. Petrella and M. Tursini, "Speed Sensorless Control of an Interior PM Synchronous Motor", in Proc. of the Thirty-Seventh IEEE-IAS Annual Meeting, CD-ROM, Pittsburgh, October 13-17, 2002.
[80] A. Bellini, S. Bifaretti and S. Costantini, "Identification of the mechanical parameters in high-performance drives", in Proc. of the EPE Conf., CD-ROM, Graz, 2001.
[81] M. Labbate, F. Parasiliti, R. Petrella, M. Tursini, "Speed and torque control of induction machine for hybrid electrical vehicles", in Proc. of the SPEEDAM Conf., Ravello, 2002.
[82] A. Balluchi, L. Berardi, M. D. Di Benedetto, A. Ferrari, G. Girasole, A. L. Sangiovanni-Vincentelli, "Integrated Control-Implementation Design", in Proc. of the IEEE Conference on Decision and Control, 2002.
[83] N. Mohan, T. M. Undeland, W. P. Robbins, "Power Electronics: Converters, Applications, and Design", John Wiley & Sons.