Hw/Sw Codesign of Embedded Systems William Fornaciari and Donatella Sciuto Politecnico di Milano, Dipartimento di Elettronica e Informazione, P.zza L. Da Vinci, 32 20133 Milano, Italy. {fornacia, sciuto}@elet.polimi.it
Abstract. The architecture of systems tailored for a specific application frequently requires cooperation among hardware and software components. The design of these systems is typically a compromise among a number of factors: cost, performance, size, development time, power consumption, etc. To cope with the increasing possibilities offered by today's integration technology and the steady demand for shorter time-to-market, a comprehensive strategy aimed at gathering all the aspects involved in the design is becoming mandatory. This new discipline, called codesign, considers in a concurrent manner all the activities involved in the design of a mixed hw/sw dedicated system: capture of the design specification and requirements, mapping of the design onto the hardware and software domains, system synthesis and design verification. The paper introduces the key factors involved in the design of an embedded system, together with a description of how codesign is overcoming the related problems, opening the way to a new generation of CAD frameworks supporting system-level design.
1. Introduction

Applications in the domain of telecommunication, multimedia, automotive and consumer markets make wide use of embedded systems as an important component of the global product. Embedded systems work as part of a larger system with which they interact for control or for performing specific computing-intensive tasks [1]. Usually, embedded systems are constituted by one or more microcontrollers or microprocessors to provide programmability of the system, and interact with other digital or analog components to receive data from the external environment and possibly to perform fast manipulations. The functionality of an embedded system is often fixed, as defined by the system interaction with its environment. These systems are usually characterized by a large number of operation modes, by a high degree of concurrency and by the capability of quickly responding to exceptions. Programmability is an important feature of these systems, providing the possibility of implementing in software all those tasks that could be changed in future versions, without having to re-design the entire system. Different application domains assign different degrees of importance to the other types of requirements that can be imposed on an embedded system [20]. In particular, we can mention cost, which is of paramount importance in consumer electronics,
performance, real-time response, power consumption, integration level, flexibility, reliability, availability and safety. Embedded systems can be implemented with different levels of integration, from boards to systems on a chip, depending on the type of application. While lower levels of integration, such as boards, allow a reduced realization cost and greater flexibility, higher levels of integration, i.e. systems on a chip, allow lower power consumption and higher performance, but at the price of higher complexity and chip size.

Embedded systems can be characterized by the function they perform. Basically, embedded systems can act as control systems or as data processing systems. Control systems are reactive systems, in the sense that they react to the inputs provided by the external environment. Often, these systems are characterized by real-time requirements: most reactive systems, when they receive their inputs, must execute their function before a given deadline, i.e. within a predefined amount of time defined by hard or soft time constraints. Control-dominated embedded systems include the core of a controller, whose size and capabilities strictly depend on the requirements of the embedded system in terms of performance, power consumption, availability, reliability and safety. Embedded systems whose main function is to perform data processing are usually classified as data-dominated systems and can be found in many telecom and multimedia systems, where they execute specific digital or video signal processing algorithms. Here, the starting point is the algorithm to be performed by the system. Then, in most cases, ASIPs (Application Specific Instruction Set Processors) are included to speed up the execution: the hardware is chosen in order to support the execution of the embedded software application. ASIPs are specific programmable cores, whose instruction set architecture and underlying hardware have been chosen to improve the performance of specific algorithms with specific instruction profiles. Therefore, in general, the main components found in embedded systems include some or all of the following: software, firmware, ASICs (Application Specific Integrated Circuits), general-purpose or domain-specific processors, core-based ASICs, ASIPs, memory, FPGAs (Field Programmable Gate Arrays), analog circuits and sensors.

Hence, the design of complex embedded systems is a difficult problem, requiring designers with skills and experience in order to identify the best solution. Unfortunately, designers have little assistance from electronic design automation tools to perform such tasks. In fact, most system designers work in an ad-hoc manner, given the fact that, in most cases, the products to be designed represent enhancements of prior systems. There are no widely accepted methodologies or tools available to support the designer in the definition of a functional specification and then in the mapping phase onto a system architecture. Therefore, designers usually rely on manual techniques, mainly driven by experience, allowing them to explore only a limited set of alternative solution architectures. An exception is represented by Digital Signal Processing systems, for which there is a longer design tradition and better supporting tools. The problem is worsened by the rapid evolution of most markets in the telecom, multimedia, and consumer electronics application domains, which require a drastic reduction in the time to market.
This implies that the system design time must be reduced and the design must be made flexible enough to be modified in a short time, while still improving the performance and the basic characteristics of the product. This has led to a shift to software of most of the system functionality, with dedicated hardware as a support for those fixed functionalities requiring a greater speedup in performance, implemented on a single chip which integrates a programmable core. This evolution requires a careful choice of such a core, in terms of size, functionality and performance, and a careful management of the entire design process. A concurrent design process is necessary to try to balance all the requirements that the embedded system must comply with. Such a concurrent design process requires synchronization between the team developing the system architecture and the software and hardware design teams, to meet the short deadlines, possibly avoiding any long re-design cycle. This difficult process must be supported by specific EDA (Electronic Design Automation) tools, which can provide help to system, hardware and software designers in the entire design flow. The academic research in this area started only a few years ago, while commercial tools have been made available on the market only recently, covering only a few specific parts of the complete design flow [21]. This research area is today known as hardware/software codesign, providing a global view of the design of embedded systems (mainly completely digital ones). We should note that this field of research derives from the EDA community, and therefore mainly from the hardware perspective, with influence from the software community in the system specification. Basically, the automation of a global hw/sw design approach that concurrently considers hardware and software development and synchronizes the two processes aims at providing a unified view of the system in all the design phases. This allows an easier exploration of different alternative solutions, which can be more easily verified and tested before the actual hardware and software components are designed. System integration is improved with respect to the traditional separate design flows, because the co-design flow maintains an alignment between software and hardware developments. Furthermore, a global framework allows management of the documentation in a global manner, thus keeping track, in a unified format, of all modifications that could occur in later design stages, after the specification phase. The goal of this paper is to provide an overview of this design process, showing the characteristics of the main tasks and the main issues involved. Then, an overview of the main approaches presented in the academic world and of the commercial tools available today will be given, showing the variety of solutions proposed, based on the type of application they consider.
2. Embedded system design flow

For many years, design teams have had to face the realities of combining hardware systems with software algorithms to deliver the required system capabilities and performance. The traditional design flow of embedded systems starts from a requirements document, usually leading to an incomplete specification of the system. Given this document, system architects define the set of system functions that must cooperate in order to satisfy the system requirements.
Modeling and validation of these functions is performed by simulating the system, often through an executable specification described in a programming language and executed with different data inputs in order to validate the functionalities. This, however, does not allow verification of the timing constraints or of any of the other relevant non-functional parameters of the system, such as power consumption, performance of the entire system, safety, reliability and so on. Furthermore, this phase is error prone, since prototyping is usually performed manually, and therefore it is not possible to formally guarantee the equivalence between the specification and the software prototype. Then, the definition of which functions will be implemented in software and which ones will be performed by dedicated hardware components takes place. In the traditional design flow, this initial partitioning is performed manually by the system designer, helped only by his experience. From this step on, separate design paths are followed for the software and hardware components. Interface design requires the participation of both hardware and software designers for its specification. Integration and test of hardware and software are performed after all components have been separately developed. This is the phase when system design faults are actually discovered, thus requiring a re-cycle in the hardware or software design paths. Furthermore, the integration phase, i.e. the verification of the correctness of the interaction between the hardware and software components, has required, until recently, the building of the actual hardware prototype. This strategy penalizes the time to market, since errors are discovered at the very end of the design process. In most cases, these errors could have been identified in earlier stages of the process, if a different organization of the work had been chosen.

The process of designing concurrent hardware/software systems thus consists of consecutive phases of modeling and verification, starting from the specification level, followed by the first refined specification after partitioning, followed by the specification and validation of the hardware, software and interface architectures, down to the implementation level. The application of the correct modeling strategy at the right time is the key to quickly identifying design errors. However, no single approach can be applied in the different phases of the design, and no single modeling technique is suitable for all possible application domains. The solution to this problem lies in the definition of a methodology for system design, from the concept of the product to the final implementation. These tasks are common to all embedded system designs, while the design flows differ in the techniques they apply for the different classes of application domains. In particular, co-design can be subdivided into the following main tasks.

1. Specification capture. The desired system functionalities are expressed in a formal model and the description is validated by simulation or verification techniques. The result is a functional specification, without any implementation details.

2. Exploration. The formal model is used as input to analyze different design alternatives and to identify the best one that satisfies all design constraints, both functional and non-functional. The result of this phase is the definition of the hardware and software architectures and of the partitioning.
3. Hardware, software and interface synthesis. An implementation is created for each component of the system architecture defined in the previous step. The result is the software code for the software components and the RT-level implementation for the hardware parts.

4. Coverification. This step, prior to the physical design phase, allows verification that all components do in fact satisfy all requirements in terms of performance, area, costs, real-time constraints, power consumption, reliability, availability and safety.

The next section will show how co-design methodologies have organized these main phases and which tasks must be performed to obtain a suitable, at least partially automated, design flow for the different classes of embedded systems.
3. Hw/Sw Codesign

Hardware/software codesign is not a recent idea. It has been an industrial practice for a long time, but only recently has it gained attention in the research and EDA communities. For decades, design teams have been faced with the necessity of combining low-cost and flexible computing capabilities, such as those provided by microprocessors, with dedicated hardware (e.g., ASICs) in order to cope with system requirements such as cost, performance, design modifiability, energy saving, size, time to market, etc. Current trends in embedded systems have both exacerbated the problem of keeping hw and sw development closely coupled and opened new potential solutions, since entire systems can now fit on a single chip. In addition, increasing time-to-market pressure is making the availability of virtual (possibly executable) models of the system more and more important, in order to verify compliance with the design constraints during the earlier stages of the design, before building the actual prototypes. The shifting of many of the verification activities toward the top of the design flow opens the possibility of evaluating alternative hw/sw architectural solutions, instead of committing to conservative (but risk-free) realizations, far from the optimal system implementation. Codesign is becoming a very comprehensive term; currently, research teams and EDA producers are addressing only some of its aspects. However, the ultimate goal is to bridge the gap between functional specification, architectural tuning, the development of hardware and software, and system-level verification. In this and in the following section, we will discuss only the aspects related to the R&D process, regardless of the market, product definition, support and any other business-related issue. An approach that also covers these factors is still a long way off. The main application field for hw/sw codesign is the area of embedded systems; the main tasks necessary to support system-level design are depicted in Fig. 1 and the aspects involved are briefly summarized next.
[Figure 1 depicts a general system-level hw/sw codesign flow: System Specification (functional model, requirements), Covalidation (cosimulation, verification), Design Space Exploration (estimation, transformation, partitioning), Design Refinement (memories, interfacing, architectural mapping) and Cosynthesis (software generation, operating system, hw synthesis, interface generation).]

Figure 1. A general system-level hw/sw codesign flow.
3.1 Specification formalism

The description of a system can be performed at any of the abstraction levels, each of which has a specific purpose. For example, describing the system at the component level allows the designer to capture the "pure" functionality, while the logic or Register Transfer (RT) levels are useful to specify also the system structure. Specification formalisms tailored for codesign should increase the possibility for the designer to describe a more conceptual view of the system in terms of executable specification languages, capable of capturing both functionality and design constraints in a readable and simulatable form. Due to the necessity of managing complex systems, the cross dependencies between specification formalisms and EDA tools (both for design specification and synthesis) are becoming crucial, and the use of graphical versions of formal languages is becoming a key factor at the higher levels of abstraction. At the bottom, VHDL and C/retargetable assembler formats seem to be the target level to interface with existing and well-assessed final synthesis flows. Furthermore, a self-documentation capability is also achieved, since the entire design flow is managed with integrated EDA tools, and splitting the system design across different teams and implementation domains should require a reduced integration effort.

The main features of a conceptual model are traditionally partitioned into three classes: control, data and timing. These three main characteristics, sometimes called the "codesign cube", can be used to compare different models of computation and to identify the suitable implementation domain. Finite state machines are probably the most widely used model, both for hardware and software, to represent and design controllers, control units of microprocessors and control logic. This model has recently been extended in terms of hierarchy and communication, in order to prevent the explosion of the state count [18]. Other
formalisms are based on customizations of process-level models, such as CSP, OCCAM, LOTOS, ADA, etc. In some cases, e.g. DSP applications, there may be no need for control, since data are produced at regular rates, processed and sent out in the same manner. As far as data are concerned, the simplest model uses boolean constraints on the variables used to represent the behavior of controllers and control units. In some cases this is not sufficient to provide a clear picture of the functions performed by the rest of the controlled system. To describe also the datapath, complex arithmetic functions can be introduced to specify assignments of values to integers, floating-point values or arrays. However, complex systems cannot be managed at the clock-cycle level, and the computation performed at each step (e.g., a state) is specified by using procedures and processes of high-level languages such as C, C++, Java, VHDL, or Verilog.

Timing-related issues can be captured in terms of constraints on rates, delays and deadlines on specific system activities. Synchronous formalisms greatly simplify such a task, in particular when the description must be mapped onto an actual architecture. There exist other formal approaches, e.g. formal timing diagrams, more focused on the definition and verification of system properties, as well as customizations of process-level formalisms, where the related EDA graphical environments simplify the allocation of timing constraints onto sections of the specification. Models are called synchronous when all the steps of the computation are performed and synchronized by dividing time into regular intervals, as is popular in the DSP community. When the transitions are triggered by external events, the model of time is called event-driven or asynchronous. Such a model is particularly suitable to represent reactive systems, whose behavior is determined as a response to stimuli coming from the environment; as a consequence, the related description formalisms must be able to support exceptions or interrupts. Differences exist when timing-related issues influence not only the performance but also the correctness of the behavior, as happens for real-time systems. In this case, the ideal specification formalism should allow the designer to capture in a unified manner the functionality and the timing constraints defined over "critical sections" of the system's description.
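To make the discussion concrete, the following sketch shows how the three aspects of the "codesign cube" might coexist in a small executable specification. It is a purely illustrative C fragment, not taken from any of the cited tools: the state variable carries the control aspect, a sample counter the data aspect, and a per-sample deadline the timing aspect, checked in an event-driven reaction function.

    /* Minimal sketch (hypothetical): an event-driven finite state machine
       capturing control (states), data (a sample counter) and timing
       (a soft deadline on the processing of each sample). */
    #include <stdio.h>

    typedef enum { IDLE, SAMPLING, PROCESSING } state_t;
    typedef enum { EV_START, EV_SAMPLE_READY, EV_DONE } event_t;

    typedef struct {
        state_t  state;        /* control aspect */
        unsigned samples;      /* data aspect    */
        unsigned deadline_us;  /* timing aspect: per-sample deadline */
    } spec_t;

    /* One reaction to an external event; in an asynchronous model this is
       triggered by interrupts, in a synchronous one it is evaluated once
       per clock tick. */
    static void react(spec_t *s, event_t ev, unsigned elapsed_us)
    {
        switch (s->state) {
        case IDLE:
            if (ev == EV_START) s->state = SAMPLING;
            break;
        case SAMPLING:
            if (ev == EV_SAMPLE_READY) { s->samples++; s->state = PROCESSING; }
            break;
        case PROCESSING:
            if (ev == EV_DONE) {
                if (elapsed_us > s->deadline_us)
                    printf("deadline missed on sample %u\n", s->samples);
                s->state = SAMPLING;
            }
            break;
        }
    }

    int main(void)
    {
        spec_t s = { IDLE, 0, 1000 };
        react(&s, EV_START, 0);
        react(&s, EV_SAMPLE_READY, 0);
        react(&s, EV_DONE, 1200);   /* violates the 1000 us constraint */
        return 0;
    }

A real specification formalism would of course express the deadline declaratively rather than checking it inside the behavior; the fragment only illustrates how control, data and timing can be captured together in one executable description.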
3.2 Target architectures

As recalled in the previous sections, embedded system implementations can be classified according to the chosen architecture, defined by the type and amount of its components, the connectivity and the realization technology. The use of standard parts has been the typical solution for many years, while recently ASIC implementations for volume production and FPGAs are becoming the preferred target technologies. FPGAs do not offer the performance and silicon density of ASICs, but for prototyping and for systems requiring partial modifiability that fall in the range of 100 Kgates/100 MHz they can be a viable solution. The basic elements to be connected range from basic RT components (registers, counters, memories, ALUs, etc.) up to microprocessor cores. Controllers can be hardwired or microprogrammable and are characterized by their Instruction Sets (IS). On the other side, datapaths differ in terms of number and type of functional units and interconnection bus topologies. Each processor consists of one control unit driving one or more datapaths, which can be pipelined or non-pipelined. Two basic types of connections can be envisioned: point-to-point, preferred for short on-chip communication, and bus-based, when the distance is longer or the connection spans chips. Specific data transfers (DMA, channels) can occasionally be managed through dedicated components.

Architectures for codesign typically consist of one or more custom processors connected through busses to a processor acting as the master. Shared-memory protocols are frequently adopted for data exchange. Multiple-processor architectures are also present; in these cases, custom coprocessors are connected via multiple buses or a switching network. Access to the system's resources by each processor is regulated by an arbiter and usually takes place using message passing and data queuing. Current progress in silicon technology has enabled the realization of systems-on-a-chip, and for many applications the "low power" part of the entire system, i.e. the processor surrounded by one or more hardware coprocessors, can be integrated in the same package. For this purpose, a number of companies are marketing core cells (hard or soft macros) for embedded applications, to be integrated and partially customized according to the design needs. Examples are DSP microprocessor cores, standard bus interfaces (PCMCIA, PCI, …), decoders/encoders (MPEG, GSM, …), etc.
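As a small illustration of the master/coprocessor communication just described, the following C fragment sketches the software side of a hypothetical memory-mapped coprocessor attached to the system bus. The base address, register offsets and bit definitions are invented for the example; in an actual flow they would come from the interface synthesis step, and the code would run only on the target platform.

    /* Hypothetical register map of a bus-attached coprocessor. */
    #include <stdint.h>

    #define COPROC_BASE   0x40000000u          /* invented bus address */
    #define REG_CTRL      (*(volatile uint32_t *)(COPROC_BASE + 0x00))
    #define REG_STATUS    (*(volatile uint32_t *)(COPROC_BASE + 0x04))
    #define REG_DATA_IN   (*(volatile uint32_t *)(COPROC_BASE + 0x08))
    #define REG_DATA_OUT  (*(volatile uint32_t *)(COPROC_BASE + 0x0C))

    #define CTRL_START    0x1u
    #define STATUS_DONE   0x1u

    /* Off-load one computation to the coprocessor and poll for completion;
       a DMA- or interrupt-based interface would replace the busy wait. */
    uint32_t coproc_run(uint32_t operand)
    {
        REG_DATA_IN = operand;
        REG_CTRL    = CTRL_START;
        while ((REG_STATUS & STATUS_DONE) == 0)
            ;                                  /* busy-wait on status bit */
        return REG_DATA_OUT;
    }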
3.3 Design space exploration

Comparative analysis of alternative solutions can be accomplished by evaluating different aspects: hw/sw partitioning, architecture tuning and constraint fulfillment. The goal of partitioning is the selection of the parts of the specification to be executed either on the microprocessor or through a custom hw implementation. Software is usually cheap and flexible, but poor in performance if compared with hardware. Many partitioning algorithms have been presented in the literature, mostly derived from existing hw/hw partitioning strategies. The novelty consists of gathering, in a unified goal function, the different constraints and the cost/performance models specific to the hw and sw domains. Additional problems have to be considered during the split of the specification onto the implementation domains, in particular the presence of only a "virtual" task parallelism on the microprocessor, as opposed to hardware, and the bus traffic originated by hw/sw communication. The selection of the architecture relies on selecting both the proper components and the interconnections. In many proposals it is necessary to tune prediction strategies to identify the proper processor for software execution and a suitable style for algorithm implementation. The exploration includes restructuring at the algorithmic level and scheduling of operations. One of the major problems in exploration relates to the early verification of properties and constraints of the system operation. At such an abstraction level, in many cases, it is more important to quickly compare alternative solutions than to obtain absolute precision in the evaluation; approximate prediction-based strategies, both for cost and performance, are currently a hot topic of codesign.
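As an illustration of such a unified goal function, the following toy C program sketches a greedy hw/sw partitioner: starting from an all-software mapping, it moves tasks to hardware only while a weighted combination of execution time, hardware area and hw/sw communication cost keeps decreasing. It is not one of the published partitioning algorithms, and all figures are invented.

    #include <stdio.h>

    #define N_TASKS 4

    typedef struct {
        const char *name;
        double sw_time;   /* execution time if mapped to the processor      */
        double hw_time;   /* execution time if mapped to custom hardware    */
        double hw_area;   /* area cost of the hardware implementation       */
        double comm;      /* hw/sw bus-traffic penalty if moved to hardware */
        int    in_hw;     /* current mapping                                */
    } task_t;

    /* Unified goal function: weighted sum of total time and total area. */
    static double cost(const task_t *t, double w_time, double w_area)
    {
        double time = 0.0, area = 0.0;
        for (int i = 0; i < N_TASKS; i++) {
            time += t[i].in_hw ? t[i].hw_time + t[i].comm : t[i].sw_time;
            area += t[i].in_hw ? t[i].hw_area : 0.0;
        }
        return w_time * time + w_area * area;
    }

    int main(void)
    {
        task_t t[N_TASKS] = {
            { "fir",   9.0, 1.0, 3.0, 0.5, 0 },
            { "fft",  12.0, 2.0, 5.0, 1.0, 0 },
            { "ctrl",  2.0, 1.5, 4.0, 0.8, 0 },
            { "io",    3.0, 2.5, 2.0, 1.5, 0 },
        };
        const double w_time = 1.0, w_area = 1.0;
        int improved = 1;

        /* Start from an all-software solution and greedily move the task
           whose migration to hardware gives the largest cost reduction. */
        while (improved) {
            double base = cost(t, w_time, w_area), best_gain = 0.0;
            int best = -1;
            for (int i = 0; i < N_TASKS; i++) {
                if (t[i].in_hw) continue;
                t[i].in_hw = 1;
                double gain = base - cost(t, w_time, w_area);
                t[i].in_hw = 0;
                if (gain > best_gain) { best_gain = gain; best = i; }
            }
            improved = (best >= 0);
            if (improved) t[best].in_hw = 1;
        }

        for (int i = 0; i < N_TASKS; i++)
            printf("%-5s -> %s\n", t[i].name, t[i].in_hw ? "HW" : "SW");
        return 0;
    }

In real codesign flows the goal function is much richer (deadlines, power, memory) and the search typically relies on clustering, simulated annealing or integer programming rather than on this single-pass greedy loop.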
3.4 Concurrent Synthesis

Cosynthesis consists of obtaining software code and schematics by transforming the optimal high-level design descriptions identified during design space exploration. Synthesis is also the step where domain-specific optimization can be performed, and some manual intervention is again necessary. The goal of codesign is to interface with existing standards and, at the same time, to manage in a unified way both the hw and sw synthesis flows. One of the most important goals is to reduce the impact on the final system integration, which traditionally can account for up to 50% of the global design time, and to possibly generate the hw/sw interface automatically.

Concerning software synthesis, three main components have to be considered: instruction set, compilation and operating system support [19]. Considering the instruction set as a variable of the design space aims at better tailoring the microprocessor to the application needs; ASIP processors and retargetable compilers are good examples of such a research effort. The tuning of an instruction set requires profiling the application code to discover which IS will produce the best performance. However, some limitations must be considered, since datapath resources and connectivity affect the IS implementation. Since the realization of a compiler for a specific microprocessor requires a big effort, researchers are working on retargetable compilation strategies able to generate optimized code for different instruction sets, starting from intermediate descriptions of the code and of the resources available to implement the instruction sets. Operating systems for embedded systems typically require real-time support and compactness. Differently from general-purpose commercial products, they implement scheduling policies conceived for hard real time which, in some cases, can be statically determined. Many customizations are typically present, and their code is deeply intertwined with the application code.

Hardware synthesis is a threefold activity concerning behavioral, RT-level and interface synthesis. The goal of RTL synthesis is to select proper units from libraries to store the variables and to perform the operations assigned to them. To improve performance, pipelining can be applied. Behavioral synthesis is a step forward in terms of abstraction, even if the results are less predictable with respect to RTL synthesis. In this case, algorithms written through high-level formalisms are converted into an internal representation on which scheduling of operations and resource allocation can be globally performed. Allocation aims at finding the suitable amount of resources to trade off cost, performance and power constraints. Scheduling typically produces an assignment of the algorithm operations to the different clock cycles. The generation of interfaces requires the simultaneous design of hw and sw components and the adherence to standard strategies such as DMA or memory mapping under the bus bandwidth constraint. Due to its paramount importance for the global system performance, this activity is hard to fully automate. One of the goals of cosynthesis is to simplify the interfacing with industrial standards and tools, both for hw and sw development. Hence, there is an increasing effort in the research community in using standard formalisms to represent the synthesized system (e.g., VHDL), as well as in refining strategies able to predict the final product of the well-assessed synthesis strategies.
These data can be back-annotated onto the system description and are useful to drive the design space exploration.
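The scheduling step mentioned above can be illustrated with a toy resource-constrained list scheduler: the operations of a small dataflow graph are assigned to clock cycles, with at most one functional unit available per cycle. The graph and the resource limit are invented, and real behavioral synthesis tools use far more elaborate algorithms.

    #include <stdio.h>

    #define N_OPS 5

    typedef struct {
        const char *name;
        int deps[N_OPS];  /* deps[j] = 1 if this op needs op j's result */
        int cycle;        /* assigned control step, -1 = unscheduled    */
    } op_t;

    int main(void)
    {
        /* y = (a+b) * (c+d) + e : two adds, one multiply, one add, one move */
        op_t ops[N_OPS] = {
            { "add1", {0,0,0,0,0}, -1 },
            { "add2", {0,0,0,0,0}, -1 },
            { "mul",  {1,1,0,0,0}, -1 },
            { "add3", {0,0,1,0,0}, -1 },
            { "out",  {0,0,0,1,0}, -1 },
        };
        const int units_per_cycle = 1;   /* resource constraint */

        int scheduled = 0, cycle = 0;
        while (scheduled < N_OPS) {
            int used = 0;
            for (int i = 0; i < N_OPS && used < units_per_cycle; i++) {
                if (ops[i].cycle >= 0) continue;
                int ready = 1;           /* all predecessors already done? */
                for (int j = 0; j < N_OPS; j++)
                    if (ops[i].deps[j] &&
                        (ops[j].cycle < 0 || ops[j].cycle >= cycle))
                        ready = 0;
                if (ready) { ops[i].cycle = cycle; used++; scheduled++; }
            }
            cycle++;
        }

        for (int i = 0; i < N_OPS; i++)
            printf("%-4s -> cycle %d\n", ops[i].name, ops[i].cycle);
        return 0;
    }

With one unit per cycle the five operations spread over five control steps; allocating a second unit would let the two independent additions share the first step, which is exactly the cost/performance trade-off that allocation and scheduling explore together.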
3.5 Coverification

A key issue in codesign is the possibility of concurrently verifying the interaction between hardware and software prior to committing to an implementation. The main characteristics to be inspected are functionality, performance, communication and constraint fulfillment. This activity can be performed at different abstraction levels, with different models and accuracy, and constitutes the background for all other codesign activities. Coverification has several flavors; probably the first efforts were targeted at allowing the designer to verify system consistency via simulation. Different tools are present on the market, targeted at hw or sw: software simulators typically use functional models of the microprocessors and can be cycle-accurate as well as operate at the instruction level, while EDA environments for hw design use more time-accurate models at the register-transfer or gate level. Such heterogeneity in terms of accuracy (and consequently speed) and simulation technology has spurred the growth of cosimulation strategies, able to bring together and coordinate different simulator domains and tools; probably the best effort in this direction is represented by the Ptolemy environment [6]. However, on many occasions the speed of hw and sw simulators is not sufficient for large-scale runs, so that additional hw is used to speed up simulation. This strategy, called coemulation, uses programmable hardware (e.g., FPGAs) to implement the hardware part of the system, while the software runs on the target microprocessor through in-circuit emulation. In this case, the speed of the obtained system is typically one order of magnitude lower than that of the final system, hence it cannot be used for accurate debugging or for the inspection of timing transients. Apart from functional or performance analysis, coverification also covers much more "semantic" analyses of the system; in particular, an active research field aims at enabling the verification of system properties or at formally proving the equivalence of different hw/sw implementations, possibly obtained through restructuring of the original specification.
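The coordination problem underlying cosimulation can be sketched in a few lines: a software model advancing in coarse steps (instructions) and a hardware model advancing in fine steps (clock cycles) must be kept aligned on a common time axis. The fragment below uses deliberately trivial stand-ins for real simulators and a conservative lock-step policy that always advances the model that is furthest behind; the step durations are invented.

    #include <stdio.h>

    typedef struct { long time_ns; } sim_t;

    /* Software simulator stand-in: one instruction, assumed 50 ns. */
    static void sw_step(sim_t *s) { s->time_ns += 50; }

    /* Hardware simulator stand-in: one cycle of a 100 MHz design, 10 ns. */
    static void hw_step(sim_t *s) { s->time_ns += 10; }

    int main(void)
    {
        sim_t sw = { 0 }, hw = { 0 };
        const long horizon_ns = 200;

        /* Conservative lock-step scheduling: advance the simulator that is
           furthest behind, so neither model overtakes the other by more
           than one of its own steps. */
        while (sw.time_ns < horizon_ns || hw.time_ns < horizon_ns) {
            if (sw.time_ns <= hw.time_ns)
                sw_step(&sw);
            else
                hw_step(&hw);
            printf("sw @ %3ld ns | hw @ %3ld ns\n", sw.time_ns, hw.time_ns);
        }
        return 0;
    }

Real cosimulation backplanes add event exchange between the two domains and optimistic or distributed synchronization schemes, but the accuracy/speed tension visible even in this toy loop is the one discussed above.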
4. Current approaches and EDA environments

In the past few years, a number of institutions have started to investigate specific aspects of codesign, and currently some prototype methodologies and CAD environments are becoming available within the research community. A brief taxonomy of the more mature approaches and projects is reported here. The problem of cosimulation has been extensively addressed in the Ptolemy project [6] and in Coware [9], while the verification of formal properties is one of the focuses of the Polis [2] environment. Concerning the capture of system specifications and design requirements, graphical formalisms embedded in prototype environments have been studied in the TOSCA [14, 15] and SpecSyn [3] projects. The activity of exploring different alternative designs at the system level is one of the main added values of SpecSyn, TOSCA and Co-Saw. The synthesis of software for embedded applications is within the scope of SpecSyn, TOSCA, Co-Saw and Polis, while the activity of interfacing hw and sw is the main goal of Chinook [4]. By now, many approaches prefer to interface with existing environments for hw synthesis, e.g. at the VHDL level, instead of defining a new development strategy. Distributed embedded systems are the reference architecture for [5, 6], whose focus is on performance modeling and software generation. Some recent projects, partially funded by the EC, focused on codesign. The effort of INSYDE [8] was in the field of system specification and object-oriented modeling techniques to unify hw and sw representations. SYDIS/COBRA [7] and CASTLE [13] aimed at defining a complete co-design flow, covering specification, verification, partitioning and synthesis. Another framework providing a complete codesign flow is TOSCA, where the issues of modeling and design space exploration have been extensively considered in the SEED project. SYDIS focused on design data integration and considered two target architectures: an ASIP and a single processor joined to a coprocessor. Other important investigations have been carried out to realize the COSYMA and COSMOS/SOLAR [11] codesign systems, whose main focus is on the partitioning and synthesis stages, and in the GRAPE [10] project, where the problem of emulating a system for data-oriented and cycle-static applications has been addressed. Currently, there exists a strong interest from CAD vendors in the codesign discipline; their main effort, up to now, has been in integrating the existing hw and sw design flows starting from the bottom line: hw/sw cosimulation.
5. Concluding remarks

The constant growth of the design complexity gap between chip density and the ability to design complex systems, together with the need to reduce design time, are the main motivations behind codesign. Currently, the length of the design cycle ranges from 6 to 24 months. This gap slows down the development of really new products and the achievement of optimal realizations, and requires broader design expertise and large design teams. Several strategies to control the complexity gap can be conceived, e.g.:

1. Concurrent development of the hardware, software and mechanical parts of the system.

2. Reuse of standard parts developed in-house for other projects as well as of cores provided by third parties (IP macrocells).

3. Designing at the higher abstraction levels, only at the level of virtual prototyping, with the support of design automation tools to provide the implementation. The focus is thus on product specification and solution exploration.

Although codesign relies on all the aspects reported above, the current maturity of methodologies and tools allows the designer to be supported only during activity 1) and, partially, 3). This paper presented an overview of the open problems encountered during the design of an embedded system, and of the emerging methodology called codesign.
The research community is becoming more and more aware that the success of each proposal requires strong cooperation among universities, EDA developers and industry. This concept has also been the trigger of the RASSP Program [16], initially focused on signal processor-based military products, whose goal is to dramatically decrease the overall design time (by at least a 4X factor).
References

1. G. De Micheli, M.G. Sami (editors), Hardware/Software Co-Design, NATO ASI Series, Series E: Applied Sciences, vol. 310, Kluwer Academic Publishers, The Netherlands, 1996.
2. F. Balarin et al., Hardware-Software Co-Design of Embedded Systems: The POLIS Approach, Kluwer Academic Publishers, 1997.
3. D. Gajski, F. Vahid, S. Narayan, J. Gong, Specification and Design of Embedded Systems, Prentice Hall, 1994.
4. P. Chou, G. Borriello, The Chinook Hardware/Software Cosynthesis System, ISSS'95, Cannes, France, 1995.
5. J. Hou, W. Wolf, Process Partitioning for Distributed Embedded Systems, DAC'96, Las Vegas, Nevada, 1996.
6. Ptolemy www site: http://ptolemy.eecs.berkeley.edu/
7. G. Koch, U. Kebschull, W. Rosenstiel, A Prototyping Environment for Hardware/Software Codesign in the COBRA Project, CODES/CASHE'94, Grenoble, France.
8. AAVV, A Formal Approach to Hardware Software Codesign: The INSYDE Project, ECBS'96, March 1996.
9. K. Van Rompaey, D. Verkest, I. Bolsens, H. De Man, CoWare – A Design Environment for Heterogeneous Hardware/Software Systems, Euro-DAC'96, Geneva, 1996.
10. M. Adè, R. Lauwereins, J.A. Peperstraete, Hardware/Software Codesign with GRAPE, Int. Workshop on Rapid System Prototyping, Chapel Hill, North Carolina, June 1995.
11. T. Ben Ismail, A.A. Jerraya, Synthesis Steps and Design Models for Codesign, IEEE Computer, February 1995.
12. J. Henkel, R. Ernst, et al., An Approach to the Adaption of Estimated Cost Parameters in the COSYMA System, CODES/CASHE'94, Grenoble, France, 1994.
13. R. Camposano, J. Wilberg, Embedded System Design, Design Automation for Embedded Systems, Kluwer Academic Publishers, vol. 1, n. 1-2, pp. 5-50, January 1996.
14. W. Fornaciari, F. Salice, D. Sciuto, A Two-Level Cosimulation Environment, IEEE Computer, pp. 109-111, June 1997.
15. SEED www site: http://www.cefriel.it/eda/projects/seed/mainmenu.htm
16. RASSP Project www site: http://rassp.scra.org/
17. J.K. Adams, D.E. Thomas, The Design of Mixed Hardware/Software Systems, DAC'96, Las Vegas, Nevada, 1996.
18. S. Edwards, et al., Design of Embedded Systems: Formal Models, Validation and Synthesis, Proc. of the IEEE, vol. 85, n. 3, March 1997.
19. G. Gossens, et al., Embedded Software in Real-Time Signal Processing Systems: Design Technologies, Proc. of the IEEE, vol. 85, n. 3, March 1997.
20. P. Paulin, et al., Embedded Software in Real-Time Signal Processing Systems: Application and Architecture Trends, Proc. of the IEEE, vol. 85, n. 3, March 1997.
21. G. De Micheli, R. Gupta, Hardware Software Codesign, Proc. of the IEEE, vol. 85, n. 3, March 1997.