
Thesis for the Degree of Ph.D.

Knowledge-Based Integrated Prototyping Approach for Identifying Physical and Behavioral Constraints of Embedded Real-Time Systems

Laxmisha Rai

Department of Electronics, Major in Information and Communication Engineering, The Graduate School

June 2008

The Graduate School Kyungpook National University

CONTENTS

Abbreviations
List of Figures
List of Tables
Abstract
1. INTRODUCTION
2. BASIC CONCEPTS AND RELATED RESEARCH
   2.1. ERTS Development Challenges
   2.2. Knowledge-Based Integrated Prototyping
        2.2.1. Introduction to Knowledge-Based Systems
        2.2.2. Need for Integrated Prototyping
   2.3. Related Research
3. DESIGN CONSIDERATIONS AND CONCEPTUAL OVERVIEW OF AIP
   3.1. Multi-Prototype Based Development Environment for ERTS
   3.2. Design Considerations
   3.3. Proposed Framework of Research
   3.4. Overview of the AIP Architecture
   3.5. Modular, Reusable, and Hierarchical Behavior Modeling
        3.5.1. Case Study 1: Modular and Reusable Behavior Modeling in Reconfigurable Multi-Shaped Robots
        3.5.2. Case Study 2: Widely-Spread Mobile Robots in WSN Environment
        3.5.3. Hierarchical Behavior-Based Reasoning
4. DESIGN DETAILS OF AIP
   4.1. Development of PP Environment
        4.1.1. Development of Physical Prototypes for the Reconfigurable Multi-Shaped Robots
        4.1.2. Development of Physical Prototypes for the Widely-Spread Mobile Robots in WSN Environment
        4.1.3. Development of Fully-Embedded Physical Prototypes for the Widely-Spread Mobile Robots in WSN Environment
   4.2. Development of VP Environment
   4.3. Establishing Reflective Coherence between VP and PP
   4.4. Development of AIP Environment
5. EXPERIMENTAL EVALUATION
   5.1. Testing and Results
        5.1.1. Case Study 1: AIP with PPE and without VPE
        5.1.2. Case Study 2: AIP with PPE and VPE with Reflective Coherence
   5.2. Real-Time Simulation
   5.3. Individual and Group Facts Performance Evaluation
   5.4. Evaluation of Physical and Behavioral Constraints
   5.5. Advantages and Limitations
        5.5.1. Flexibility in Dynamic Skill Selection
        5.5.2. Community Behaviors with Variable Priorities
        5.5.3. Flexibility in Dynamic Reconfiguration of Behaviors
6. CONCLUSIONS AND FUTURE WORK
REFERENCES
ABSTRACT (IN KOREAN)
ACKNOWLEDGEMENTS


ABBREVIATIONS

AI    : Artificial Intelligence
AIP   : Autonomous Integrated Prototyping
AIPE  : Autonomous Integrated Prototyping Environment
API   : Application Programming Interface
APPD  : Average PPD
ARM   : Advanced RISC Machine
BSP   : Board Support Package
CAN   : Controller Area Network
CLIPS : C-Language Integrated Production System
DEVS  : Discrete Event System Specification
DSP   : Discrete Step in Prototyping
EP    : Evolutionary Prototyping
ERTS  : Embedded Real-Time System
ESPS  : Embedded System Prototyping Suite
FIFO  : First-In First-Out
I2C   : Inter-Integrated Circuit
IDE   : Integrated Development Environment
IPC   : Inter-Process Communication
KB    : Knowledge-Base
KBS   : Knowledge-Based System
PCI   : Peripheral Component Interconnect
PP    : Physical Prototyping
PPE   : Physical Prototyping Environment
PPD   : Distance Covered in the Physical Prototyping Environment
PPT   : Elapsed Time in the PPE
RF    : Radio Frequency
RTES  : Real-Time Expert System
RTOS  : Real-Time Operating System
SDLC  : System Development Life Cycle
SPI   : Serial Peripheral Interface
UART  : Universal Asynchronous Receiver/Transmitter
VP    : Virtual Prototyping
VPD   : Distance Covered in the Virtual Prototyping Environment
VPE   : Virtual Prototyping Environment
VPT   : Elapsed Time in the VPE
VPTT  : Transformation Time in the VPE
VRML  : Virtual Reality Modeling Language
WSN   : Wireless Sensor Network

LIST OF FIGURES

Figure 1.1.  Merits of the integrated prototyping approach
Figure 3.1.  Reflective coherence between virtual and physical prototyping
Figure 3.2.  Role of AIP in the system development life-cycle model of ERTS
Figure 3.3.  (a) Layered behavior-based architecture suitable for AIP, and related (b) software and (c) physical/virtual (PPE/VPE) mappings
Figure 3.4.  A schematic of the Snake robot sections and units
Figure 3.5.  A schematic of the Four-Legged robot sections and units
Figure 3.6.  Community network topology for the widely-spread multi-robot system
Figure 3.7.  Hierarchical organization of the skill-set
Figure 3.8.  Different behavior mode combinations
Figure 3.9.  Reusable hardware and software modules with 1-1 mapping
Figure 4.1.  Development environments for AIP, PP, and VP
Figure 4.2.  A view of a reusable robot module commonly used in (a) the Snake robot and (b) the Four-Legged robot
Figure 4.3.  The KNU-LEGO Snake robot
Figure 4.4.  The KNU Four-Legged robot
Figure 4.5.  Components of individual member robots
Figure 4.6.  KNU group robots
Figure 4.7.  Architecture of Mobile-ESPS
Figure 4.8.  ESPS-Mobile development board
Figure 4.9.  Components of an individual ESPS-Mobile robot
Figure 4.10. ESPS-Mobile-based KNU group robots
Figure 4.11. Packet structure for transfers between PPE and VPE
Figure 4.12. Integrated CLIPS main() module
Figure 4.13. The apply_skill() module
Figure 4.14. The gbehf() module
Figure 4.15. Identification scheme for individual and group robots, with N=3
Figure 4.16. The gbehf() module for the ESPS-Mobile kit
Figure 4.17. The ReceiveData() code in mobile robots
Figure 5.1.  Robot move (single_step) rule
Figure 5.2.  Understanding the sequence of robot activities in each robot unit
Figure 5.3.  Different types of Snake locomotion generated with different angle sensor values: (a) Rectilinear, (b) Serpentine, (c) and (d) Concertina
Figure 5.4.  Sequence of movements in the Four-Legged robot
Figure 5.5.  Implementation of the AIP environment
Figure 5.6.  Example of an AIP module for group and individual robot behavior generation
Figure 5.7.  PPE navigation of robots following the AIP rule shown in Fig. 5.6: (a) initial position, (b) all 3 robots moving forward (gbehf 3 6 1), (c) robots 1 and 2 turning right by 90 degrees (gbehr 2 6 4), (d) robot 2 turning left by 45 degrees (ibehl 2 6 2), and (e) robot 3 moving backward (ibehb 3 6 2)
Figure 5.8.  VPE navigation of robots following the AIP rule and then PPE, as shown in Fig. 5.6 and Fig. 5.7 respectively: (a) initial position, (b) all 3 robots moving forward (gbehf 3 6 1), (c) robots 1 and 2 turning right by 90 degrees (gbehr 2 6 4), (d) robot 2 turning left by 45 degrees (ibehl 2 6 2), and (e) robot 3 moving backward (ibehb 3 6 2)
Figure 5.9.  The PP testing environment of AIP
Figure 5.10. Performance results of ibehf() with varying Speed: (a) Speed versus PPD, (b) Speed versus PPT and VPT
Figure 5.11. Performance results of gbehf() with varying Speed and RG=2: (a) Speed versus PPD, (b) Speed versus PPT and VPT
Figure 5.12. Performance results of gbehf() with varying Speed and RG=3: (a) Speed versus PPD, (b) Speed versus PPT and VPT
Figure 5.13. Comparison of different VPTT while executing individual and group facts
Figure 5.14. Comparison of distance covered by physical prototypes with the expected ideal coverage distance while executing group facts
Figure 5.15. The (beh_sequence) rule
Figure 5.16. Comparison of Robot-1 traversal in VPE and PPE after firing the (beh_sequence) rule
Figure 5.17. Skill reasoning manager for behavior management
Figure 5.18. Rules describing the role of skill-selection during reconfiguration
Figure 5.19. Group behaviors with variable priorities
Figure 5.20. Rules describing dynamic reconfiguration of behaviors

LIST OF TABLES

Table 4.1. List of facts
Table 4.2. List of fact parameters
Table 5.1. Angle sensor values for different Snake robot movements
Table 5.2. Angle sensor values for different Four-Legged robot movements
Table 5.3. Individual fact performance with variable Speed
Table 5.4. Group fact performance with variable Speed and RG=2
Table 5.5. Group fact performance with variable Speed and RG=3
Table 5.6. Virtual and physical constraints evaluation with variable Speed and RG=3
Table 5.7. Evaluation of behavioral and physical constraints using the rule shown in Figure 5.15

ABSTRACT

Rapid prototyping methods need autonomous decision-making and analysis during the product development stages so that 'time-to-market' can be reduced well below that of traditional product development methodologies; new prototyping methods are therefore essential. This thesis proposes a novel approach that combines the benefits of the Virtual Prototyping (VP) and Physical Prototyping (PP) methodologies by integrating them into a Knowledge-Based System (KBS) and providing reflective coherence between them. We term this approach Autonomous Integrated Prototyping (AIP). The main contribution of this thesis is an intelligent system architecture that facilitates and guides product development autonomously and simultaneously in both the VP and PP environments at the early stages of embedded system design. The reflective coherence between virtual and physical prototyping, together with the knowledge-based system, enables the exploration and analysis of new behaviors of the developing system. The architecture is applicable to Embedded Real-Time Systems (ERTS), sensor applications, robotics, and ubiquitous applications in which the system must interact with the external environment.


CHAPTER 1

INTRODUCTION

In the future, complex systems such as embedded devices, ubiquitous applications, and robots may replace many existing electronic systems. For example, closing and opening doors, serving dishes in restaurants, and health monitoring in hospitals may be done by static or mobile robots. Embedded systems play a significant role in everyday life. They are diverse and can be found in consumer electronics such as digital cameras, DVD players, and printers; in industrial robots; in advanced avionics such as missile guidance and flight control systems; in medical equipment; and in automotive designs such as fuel injection and auto-braking systems. Embedded system functions may include [Osh03]:

• Monitoring the environment: read data from input sensors; the data are then processed and the results displayed.

• Controlling the environment: generate and transmit commands that control the environment.

• Transforming the information: transform and process the collected data.

A minimal control loop illustrating these three roles is sketched below.
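The following is a minimal sketch, in C, of the sense-process-actuate loop behind these three functions. The function names (read_sensor, actuate, display) are hypothetical stubs for illustration, not part of any toolkit used in this thesis.

    #include <stdio.h>

    /* Hypothetical hardware-access stubs; on a real target a board
     * support package (BSP) would supply these. */
    static int  read_sensor(void)    { return 42; }        /* monitor */
    static void actuate(int command) { (void)command; }    /* control */
    static void display(int value)   { printf("value: %d\n", value); }

    int main(void)
    {
        for (int step = 0; step < 10; ++step) {
            int raw = read_sensor();      /* monitoring the environment   */
            int processed = raw * 2;      /* transforming the information */
            display(processed);
            actuate(processed);           /* controlling the environment  */
        }
        return 0;
    }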

Although interaction with the external world via sensors and actuators is an important aspect of embedded systems, these systems also provide functionality specific to their applications. Embedded systems typically execute applications such as control laws, finite state machines, and signal processing algorithms. These systems must also detect and react to faults in both the internal computing environment and the surrounding electromechanical systems.

A real-time system is a system that is required to react to stimuli from the environment within time intervals dictated by that environment. Generally, real-time systems maintain continuous, timely interaction with their environments. The correctness of a computation depends not only on its results but also on the time at which its outputs are generated. A real-time system must satisfy bounded time constraints or suffer severe consequences. If the consequences consist of degraded performance, but not failure, the system is referred to as a soft real-time system; if the consequences result in system failure, it is referred to as a hard real-time system. There are two types of real-time systems: reactive and embedded. A reactive real-time system is in constant interaction with its environment. An embedded real-time system is used to control specialized hardware that is installed within a larger system.
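To illustrate the bounded-time requirement concretely, the following sketch (a minimal POSIX example, not code from this thesis) times one iteration of a control task and flags a deadline miss. In a soft real-time system such a miss would merely degrade service; in a hard real-time system it would constitute a failure.

    #include <stdio.h>
    #include <time.h>

    #define DEADLINE_NS 2000000L   /* assumed 2 ms budget per control step */

    static void control_step(void) { /* application work would go here */ }

    int main(void)
    {
        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        control_step();
        clock_gettime(CLOCK_MONOTONIC, &end);

        long elapsed_ns = (end.tv_sec - start.tv_sec) * 1000000000L
                        + (end.tv_nsec - start.tv_nsec);
        if (elapsed_ns > DEADLINE_NS)
            fprintf(stderr, "deadline miss: %ld ns\n", elapsed_ns);
        return 0;
    }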

Often referred to as pervasive or ubiquitous computers, embedded systems represent a class of dedicated computer systems designed for specific purposes. Embedded systems have significantly improved the way we live today and will continue to change the way we live tomorrow [Li03]. The exponential growth and importance of embedded systems create vast opportunities for industries to develop innovative systems as early as possible and market them to customers. Most of these systems need specific context-aware software/hardware architectures that make their development faster, reducing 'time-to-market'. Customers also tend to look for products that can act intelligently with or without their presence. New product development approaches are therefore necessary to cope with these rapid changes.


Almost all new product development life-cycles make use of both virtual and physical prototypes. Prototyping can be considered the best activity for minimizing the development risk of embedded systems. Virtual and physical prototyping are two techniques that share many goals but achieve them in very different ways. With Virtual Prototyping (VP), the approach is to create as precise a numerical model as possible in the easiest possible way, whereas Physical Prototyping (PP) creates a physical model [Ian05]. During the early phase of design, virtual prototyping creates an interactive prototype that looks and behaves like the actual product. Customers can then click their way around the design, easily discover potential problems, and request changes early in the development process. Virtual prototyping is the process of using a virtual prototype, instead of a physical prototype, for test and evaluation of specific characteristics of a product design [Cho06]. Virtual prototyping can accelerate production, give a company a competitive edge, help program managers identify program risks, help engineers visualize the interactive results of designs, and allow operational testers to conduct evaluations that aid in the design of tests performed during each phase of product development. Physical prototypes can reflect physical interaction with the real environment using standard sensors, actuators, and reusable bricks (LEGO, Fischer). During the middle phase of design, this improves interdisciplinary communication and supports a concurrent, time-oriented approach and collaboration in balanced development teams, by enabling the design team to work together more effectively during the design process.


However, it is difficult to determine, by system analysis alone, the speed and accuracy of a given model. Project risk can be greatly reduced by applying an incremental model of development: first developing and proving the performance of a simple model, then gradually expanding it to match the desired capabilities. Incremental development also has the advantage of showing visible progress early in the development effort. It is therefore very common for different stages in the Systems Development Life-Cycle (SDLC) to use prototyping to evaluate alternative models.

In addition, in recent years, the complexity of the engineering problems that can be solved through a combination of Artificial Intelligence (AI) and modeling or simulation techniques has been increasing. Many new development efforts therefore demand an intelligent development environment in order to obtain an efficient implementation. Knowledge-Based Systems (KBS) allow the user to manipulate design models, run simulations, and interpret the results more easily than conventional methods do.

In this thesis, we argue that Knowledge-Based Systems are essential in the prototyping of embedded real-time systems, to capture the expert knowledge needed for efficient product development. Most non-deterministic factors cannot easily be handled by traditional design approaches, and expert knowledge can only be gained through learning and experience. Automation mechanisms are not effective unless they act intelligently to model the product, based on customer requirements, in limited time. In the proposed method, the autonomous ability to learn and model the product development is supported by a knowledge-based system, which also chooses the best solution from a non-deterministic problem space.

Knowledge (facts and rules) must be extracted or gathered by a human expert; the correctness of the solutions suggested by the KBS therefore depends on the accuracy of the knowledge-base. A Knowledge-Base (KB) can be modified without regard to the status of the running system, which makes it possible to extend the knowledge-base incrementally along with the product development phases. Compared with traditional problem-solving approaches, non-deterministic approaches always arrive at a valid solution, irrespective of the choices made while traversing the solution space. Another benefit of non-deterministic approaches is that the choices they make act as effective guesses in a search process. This leaves designers free to map real-life problems onto the KBS approach to obtain effective results.

The rationale behind choosing a knowledge-based approach for multi-prototyping of Embedded Real-Time Systems (ERTS) is its non-deterministic behavior: multiple executions of a program with the same input may produce different results. This is essential for generating intelligent actions in a non-deterministic domain and for coping with the incomplete information provided by ERTS. At the early stages of developing ERTS there are many non-deterministic factors, such as behavioral system constraints, that must be considered by the design engineers; examples include unexpected responses from sensors, interaction with the system environment, and managing context-sensitive or location-awareness details. This can be seen in applications such as aviation, biomimetics, and groups of mobile robots behaving like ant-colony organizations. Most of the actions in these applications are difficult to represent using deterministic or procedural approaches, as the systems include a number of sensors and actuators. In addition, the behavior of the system varies with its actions, leading to unpredictable and unexpected responses. In this context, representing non-deterministic behaviors using knowledge-based systems is highly beneficial.

Moreover, by extending the knowledge-base we can achieve autonomous, as well as automatic, prototyping ability, which helps to accelerate the ERTS product development process. For example, in a situation with hundreds of sensors and actuators, human expertise may not be sufficient to evaluate all the combinations and permutations needed to test the system behavior in every situation. A fully integrated prototyping system can therefore provide performance beyond human capability and expertise.

In this dissertation, we suggest an integrated prototyping approach as an ideal framework for ERTS development. In general, using a specialized VP tool, VP engineers interact with the customers and demonstrate the target system. Based on the functionality of the VP, the Evolutionary Prototyping (EP) engineer constructs an evolutionary prototype using domain-specific, reusable software and hardware architecture. The EP engineer then builds a physical prototype with the executables to connect to the EP. Finally, using the PP, the EP engineer works with hardware engineers and customers to check the required system functionality and constraints. In the proposed approach, the KBS is used to integrate the VP primitives and PP primitives simultaneously, with reflective coherence, to identify the non-deterministic behavior of the developing system. We term this approach Autonomous Integrated Prototyping (AIP). Figure 1.1 shows the merits of the integrated prototyping approach. We believe that, despite the many intermediate prototyping steps, a significant amount of time can be saved compared with the conventional design life-cycle, as shown in Figure 1.1.

Figure 1.1. Merits of the integrated prototyping approach: in the conventional design cycle, software and hardware development proceed separately from the specification to integration and test; in the integrated prototyping approach, VP-based specification, EP-based software development, and PP-based hardware development exchange intermediate prototypes, saving time before product release. (VP: Virtual Prototyping; PP: Physical Prototyping; EP: Evolutionary Prototyping.)

This thesis is organized as follows. Chapter 2 describes the basic concepts and related research; Chapter 3 describes the design considerations and conceptual overview of AIP; Chapter 4 describes the design details of AIP; the experimental evaluation is given in Chapter 5; and finally, concluding remarks and future work are given in Chapter 6.


CHAPTER 2

BASIC CONCEPTS AND RELATED RESEARCH


2.1 ERTS Development Challenges

An embedded real-time system consists of a number of components (processes) that run concurrently and communicate with each other under predefined timing constraints. The correctness of such systems is important, since they are used in an increasing number of safety-critical systems. Two techniques can be used to improve the quality of these systems: verification of the specification and testing of the implementation [Nou00]. A real-time system continuously interacts with an external environment through sensors, actuators, or other hardware interfaces.

The challenge of designing embedded systems is to conform their unique characteristics to the specific set of constraints for the application. Robert Oshana [Osh03] lists some of the important features:

• Application-specific systems: embedded system designs are optimized for a specific application, unlike general-purpose processors.

• Reactive systems: reactive computation means that the system (primarily the software component) executes in response to the environment via sensors and controls the environment using actuators, while running at the speed of the environment.

• Distributed: a common characteristic of an embedded system is that it consists of communicating processes executing on several CPUs or ASICs connected by communication links.

• Heterogeneous: heterogeneous architectures often comprise embedded systems because they provide better design flexibility for handling the tight design constraints of embedded systems.

• Harsh environment: many embedded systems do not operate in a controlled environment, so they must be able to withstand excessive heat, vibration, shock, power supply fluctuations, and other physical abuse.

• System safety and reliability: as embedded system complexity and computing power continue to grow, embedded systems are starting to control more safety aspects.

• Small and low weight: many embedded systems must be lightweight for portability.

• Cost sensitivity: sensitivity to cost changes can vary dramatically among embedded systems.

The requirements for an embedded system may be functional or non-functional. The functional requirements depend mostly on the particular embedded system and are largely decided by the customer; however, functional requirements alone are not sufficient. Typical non-functional requirements include performance, cost, physical size and weight, power consumption, efficient use of memory and execution time, and reliability. In real-time embedded systems, certain tasks must be performed within a specified time, so analyzing the tasks to meet such performance constraints is of considerable importance; special operating systems such as Real-Time Operating Systems (RTOS) may be used to enforce strict timing constraints. Moreover, the development of ERTS introduces several challenges for designers. First, they need to meet both timeliness and reliability requirements. Second, since most embedded real-time systems operate in a real environment, such as industrial automation or wireless sensor network applications, the environment in which they operate is unknown at the early stages of development, and testing the uncertainties of an unknown environment and the real-time responses is hard. The designers need to be aware of the non-deterministic behavior of the system during real operation to avoid damage during real-world interaction. Systems such as remotely connected robots also generate numerous dynamically changing intelligent behaviors during field operation and may include tens to hundreds of member robots to accomplish a particular task. Along with these problems, the rapidly growing ERTS industry puts greater pressure on designers to bring their products to market early.

To guarantee timeliness in the system implementation, we expect the system behavior to be predictable, and we would like to ensure in advance that all critical timing constraints are met. However, as complexity increases, traditional design approaches are not well suited to developing time-critical applications; several typical characteristics of traditional design approaches normally lead to unpredictable ERTS behavior [Hua04]. In the conventional system development life-cycle, the different stages are performed separately, resulting in inherent inconsistency among the analysis, design, test, and implementation prototypes. As shown in Figure 1.1, in the conventional design cycle, hardware and software development are carried out almost independently from the specification. Moreover, many embedded systems are designed for applications such as intelligent digital appliances in homes or vehicles, yet current design technologies cannot meet customers' expectations because related research has focused primarily on controlling and monitoring rather than on analyzing the behavior of such systems. An integrated framework is particularly important to support the development of this kind of system, whose complexity would otherwise overwhelm the designer, as embedded real-time systems usually operate in dynamic, continuously changing, and even unpredictable environments.

ERTS development usually deals with designing heterogeneous systems, which is another major challenge for embedded system developers. To achieve co-design, the development team needs experts from many domains rather than from a single area of expertise, and the designers have to exploit the advantages of the heterogeneity of the target system. Co-design is an interdisciplinary activity, bringing together concepts and ideas from different disciplines, e.g., system-level modeling and hardware and software design. Moreover, programming embedded systems is a discipline of its own, demanding that embedded system developers have working knowledge of a multitude of technology areas.

To attain fast and early analysis of ERTS behavior, prototyping of physical and virtual models is highly essential. Prototyping helps to verify some of the basic requirements of the ERTS, such as the reusability and flexibility of the target system models. A target prototyping tool is needed with which various embedded real-time system models can easily be built; to support the various types of target system models, it must offer reusability and flexibility.


It is a tedious task to plan and check all the properties of an industrial ERTS during its development stages. In particular, if the system interacts with other devices and environments, or if it includes many sensors and actuators, the SDLC (in this thesis, SDLC denotes the Systems Development Life-Cycle rather than the Software Development Life-Cycle) needs to be applied intelligently. Traditional product development models no longer seem effective in such circumstances. In addition, models similar to the SDLC are effective for testing interactive features within the system, but traditional models fail to identify, explore, and test the different possible behaviors of the system. For example, how a robot reacts in unexpected circumstances may not be tested at any stage of product development, and identifying such numerous behaviors is a cumbersome task for test engineers. New prototyping methods are therefore essential to overcome these development barriers.

Traditionally, ERTS product development includes steps such as requirements analysis, design modeling, implementation modeling, and verification and validation. Usually, VP or PP is developed in the design modeling stage. However, without proper synchronization and controlled interaction between the VP Environment (VPE) and the PP Environment (PPE), it is difficult to test and monitor behavioral and physical constraints autonomously. In this thesis, we aim to add a new layer on top of VP and PP, so as to exploit their advantages and add an autonomous prototyping ability.


2.2 Knowledge-Based Integrated Prototyping

In this section, we briefly explain the concepts of knowledge-based systems and the need for integrated prototyping methods in ERTS development.

2.2.1. Introduction to Knowledge-Based Systems

In recent years, knowledge-based systems have made their way from research laboratories into the real world. Applications have been, and continue to be, developed in areas as diverse as business, medicine, manufacturing, defense, astronomy, science, and engineering [KBS95]. Such applications perform tasks that include interpretation, prediction, diagnosis, design, planning, monitoring, debugging, repairing, instruction, and control. A knowledge-based system is a software system that can mimic the performance of a human expert in a limited sense and contains a significant amount of knowledge in an explicit, declarative form. The area of KBS development has matured over the past several decades. It started with first-generation expert systems having a single flat knowledge-base and a general reasoning engine, typically built in a rapid-prototyping fashion. These have now been replaced by methodological approaches that have many similarities with general software engineering practice. KBS development is best seen as software engineering for a particular class of application problems, ones that typically require some form of reasoning to produce the required results. In current business practice there is an increasing need for such systems, due to the progression of information technology in our daily work. Typical applications are systems for assessing loans in a bank, for job-shop scheduling in a factory, for configuring an elevator, and for diagnosing problems in a production line.

Knowledge-based systems emulate human expert behavior in a certain knowledge area. The knowledge-base of an expert system encapsulates, in some representation formalism (rules, frames, semantic nets, etc.), the domain knowledge that the system should use to solve a certain problem [Ran07]. A 'knowledge engineer' gathers the expertise about a particular domain from one or more experts and organizes that knowledge into the form required by the particular expert system tool to be used. Consisting of a set of rules and user-supplied data that interact through an inference engine, an expert or knowledge-based system is able to derive or deduce new facts from existing facts and conditions. Related to intelligent front ends, which help the user select the appropriate model, are knowledge-based model support systems, which help the user build the appropriate model. Rozenblit and Jankowski [Ros91] proposed such an approach to simulation modeling of natural systems and emphasized the following advantages: modular model specification facilities, a high degree of model reusability, and support for model selection and coupling. A minimal sketch of this kind of rule-based inference, using the CLIPS C API, is shown below.
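The fragment below embeds a rule engine in a C host program using the classic (pre-environment) CLIPS 6.x C API, to show how new facts are deduced from existing ones. The rule file name and the fact format are hypothetical illustrations, not the actual knowledge-base developed in this thesis.

    #include "clips.h"   /* classic CLIPS 6.x API */

    int main(void)
    {
        InitializeEnvironment();

        /* Load a hypothetical rule file containing, e.g.:
         *   (defrule low-battery
         *     (battery-level ?robot ?v&:(< ?v 20))
         *     =>
         *     (assert (return-to-base ?robot)))
         */
        Load("robot-rules.clp");
        Reset();

        /* User-supplied data enter the system as facts...   */
        AssertString("(battery-level robot-1 15)");

        /* ...and the inference engine deduces new facts,
         * here (return-to-base robot-1).                    */
        Run(-1L);
        return 0;
    }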

2.2.2. Need for Integrated Prototyping

Prototyping is defined by the IEEE as "a type of development in which emphasis is placed on developing prototypes early in the development process to permit early feedback and analysis in support of the development process" [Boo93]. The main reason for using prototypes is economic: scale models and prototype versions of most systems are much less expensive to build than the final versions [Kor02]. Prototypes should therefore be used to evaluate proposed systems when acceptance by the customer or the feasibility of development is in doubt. The need for prototyping has become more urgent as the systems being developed have grown more complex, more likely to have requirements errors, and more expensive to implement.

Some experts [Kor02] think that the use of prototyping-based development methodologies in industry will increase. Prototyping-based methodologies are of interest because they allow users to reduce the cost and time-to-market of a system. For companies producing complex systems (embedded, distributed, real-time, etc.), there is an additional reason: the cost of highly skilled engineers increases rapidly, since demand exceeds the number of people available to fill positions. Automated development approaches could reduce the need for highly skilled engineers, since one of them could manage several 'standard' engineers operating prototyping tools. For companies building critical systems, a prototype-based approach is even more interesting, since it is better able to accommodate formal verification techniques when required; it is now clear that such methods are the only way to provide extremely high levels of reliability in system design and implementation. Prototyping is essential for validating the system requirements as early as possible, and several prototypes may be needed to define the boundaries of system properties or behaviors. The main objectives of prototyping are to help customers and developers understand the requirements of the system. Users can experiment with a prototype to see how the system supports their work, and the prototype can reveal errors and omissions in the requirements. Prototyping can be considered a risk-reduction activity that reduces requirements risks and makes a working system available early in the process. The benefits of prototypes are improved system usability, a closer match to the system needed, improved design quality, improved maintainability, and reduced overall development effort.

A prototype is an executable model of a system that accurately reflects a chosen subset of its properties, such as display formats, computed results, or response times. Prototypes are useful for formulating and validating requirements, resolving technical design issues, and supporting computer-aided design of both the software and hardware components of proposed systems. Rapid prototyping refers to the capability of creating a prototype with significantly less effort than it takes to produce an implementation for operational use.

A prototype can be used to give end-users a concrete impression of the system's capabilities, and prototyping is increasingly used where rapid development is essential. Rapid development of prototypes may require leaving out functionality or relaxing non-functional constraints. Prototyping techniques include the use of very high-level languages, database programming, and prototype construction from reusable components. Prototyping is essential for parts of the system, such as the user interface, that cannot be effectively pre-specified, and users must be involved in prototype evaluation. Virtual prototyping effectively allows testing of the software code, while physical prototyping tests the physical hardware; however, both fail to demonstrate all the unpredictable and non-deterministic behavior of the system during real operation. Evolutionary Prototyping (EP) is an approach to system development in which an initial prototype is produced and refined through a number of stages into the final system. The objective of evolutionary prototyping is to deliver a working system to end-users, so development starts with those requirements that are best understood. EP must be used for systems whose specification cannot be developed in advance.

However, a prototype may not satisfy all of the constraints on the final version of the system. For example, it may provide only a subset of the required functions, be expressed in a more powerful or more flexible language than the final version, run on a machine with more resources than the proposed target architecture, be less efficient in both time and space, have limited capacity, lack full facilities for error checking and fault tolerance, or not have the same degree of concurrency as the final version. Such simplifications are often introduced to make the prototype easier and faster to build. To be effective, partial prototypes must have a clearly defined purpose that determines which aspects of the system must be faithfully reproduced and which can safely be neglected. Prototypes must also be constructed and modified rapidly, accurately, and cheaply; they do not have to be efficient, complete, portable, or robust, and they do not have to use the same hardware, system software, or implementation language as the delivered system. Because of these limitations of the individual prototyping approaches, there is a need to integrate several prototypes and use them simultaneously, with reflective coherence, for efficient product development.

2.3 Related Research

Prototyping and applications of real-time expert systems have been widely discussed by many researchers over the years. The researchers in [Tse97] proposed replacing hardware prototypes with computational or virtual prototypes of systems and of the processes they may undergo. They believe that, by replacing hardware with computational prototypes, there is tremendous potential for greatly reducing product development time, manufacturing facility ramp-up time, and product development cost. In addition, there have been a number of attempts to use AI techniques during prototyping. Earlier researchers discussed the role of Real-Time Expert Systems (RTES) for the control of system prototyping [Arz93]; they believed that RTES environments are very well suited as rapid prototyping environments for control language development. In [Ryb99], the authors studied the role of an RTES for the control of an electro-physical complex, realizing their findings with Gensym's G2. One advantage of this approach is quick configuration of the sub-systems of the electro-physical complex.

In another paper [Juu95], researchers proposed the use of rule-based techniques for modeling methods. They proposed the concept of adaptive interfaces, tools that combine traditional simulation and intelligent methods, thereby enhancing the capabilities of simulation systems. They felt that an integrated set of compatible intelligent tools is good for increasing the qualitative knowledge in simulation applications, and they applied their techniques to uncertainty processing in simulation environments. However, this approach is basically suitable for simulation (i.e., VPE) in manufacturing applications. A few researchers have applied rule-based approaches to rapid prototyping methods; for example, one paper [Sas03] explores a method for using rapid prototyping devices to physically construct details of Palladio's unbuilt villa designs.

Raymond K. Wong [Won95] proposes another approach: modeling and simulation with roles. This paper emphasizes a prototype system that supports the execution of roles and simulation, and hence validation and verification. The research aimed, with role model execution based on (rule-based) expert system techniques, to effectively simulate such tasks and analyze performance. For example, a robot at different times may be used (and hence act) differently: as a sensor to identify defects in products, as an assembly worker to assemble product components, or as an 'intelligent' retriever that retrieves the right parts from the conveyor. Roles serve as another level of abstraction over the manufactured objects, so that the behavior of the objects can be naturally partitioned and hence decomposed. The different situations are expressed in the CLIPS (C Language Integrated Production System) expert system.


Regarding simulation-based approaches, [Hu05] describes a simulation-based software development methodology for distributed, dynamically reconfiguring real-time systems. For example, a dynamic distributed real-time system might include hundreds of computing nodes, smart sensors, and actuators, and continuously reconfigure itself in an uncertain or even hostile environment. Without the support of model continuity, it is very difficult to manage the software's complexity during the development of systems of this kind. The methodology is based on the Discrete Event System Specification (DEVS) modeling and simulation framework [Zei00], and provides a 'Modeling-Simulation-Execution' process that includes several stages for developing real-time software.

An approach including both physical and virtual parts is proposed by Wilhelm Bruns [Bru99]. In this approach, artifacts that have one real physical part and several virtual parts are coupled by bidirectional double links of control and view, enabling a synchronous update of all parts if one of them is changed by user action or internal events. The bidirectional double links allow the control of virtual parts by grasping and pointing to real parts, and the viewing of virtual parts by light projection into the real scene, and vice versa. The concept has been demonstrated with prototypes in the application areas of pneumatic circuit design and flexible assembly systems. Other researchers, such as in [Duf07], have integrated real-world robots, multi-agent development tools, and VRML (Virtual Reality Modeling Language) visualization tools into a coherent whole, and have experimented with the interplay between virtual and real environments for social robot experimentation.


However, most of the earlier works focused on virtual and real-world interactions rather than on integrating the two with a live reflective coherence. In addition, some of the integration efforts are unidirectional in nature, and there is no proper integration mechanism for exploiting the advantages of both. In this thesis we present a new approach that integrates the VPE and PPE effectively by means of a knowledge-based system. Moreover, almost all existing prototyping methodologies are applicable in industries where product behavior is assumed to be homogeneously static rather than heterogeneously dynamic. We suggest that the proposed AIP approach is very effective in complex manufacturing applications, such as sensor/actuator-based systems, robotics, embedded real-time systems, and context-aware computing, where the product may behave differently based on responses from its surroundings.


CHAPTER 3

DESIGN CONSIDERATIONS AND CONCEPTUAL OVERVIEW OF AIP


3.1 Multi-Prototype Based Development Environment for ERTS

Many system designers execute a life-cycle of hardware/software co-design, in which both the hardware and the software are developed simultaneously. Understanding the relation between hardware and software functionality, and the boundaries between the two, helps ensure that requirements are designed and implemented completely and correctly [Osh03]. Early in the requirements definition and analysis phase, system developers, in close cooperation with the design engineers, allocate requirements to hardware or software, and sometimes to both. This allocation is based on early system simulation, prototyping, and behavioral modeling results, as well as on experience and the other trade-offs mentioned earlier. Once this allocation has been made, detailed design and implementation begin. Various analysis techniques are applied to embedded real-time systems development when both the hardware and the software are designed concurrently, including:

• Hardware/software simulation.

• Hardware/software co-simulation.

• Schedulability modeling, such as rate monotonic analysis (see the bound noted after this list).

• Prototyping and incremental development.
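As a pointer for the schedulability item above: classical rate monotonic analysis rests on Liu and Layland's utilization bound (a standard result, not derived in this thesis), which guarantees that n periodic tasks with computation times C_i and periods T_i are schedulable under rate-monotonic priorities if

    \sum_{i=1}^{n} \frac{C_i}{T_i} \le n\left(2^{1/n} - 1\right)

For n = 2 the bound is about 0.828, and as n grows it approaches ln 2 ≈ 0.693.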

Simulation can be used at various levels of abstraction to make early evaluations of performance. Low-level simulations model bus bandwidths and data flows and are useful for evaluating performance. High-level simulations can address the interaction of functions, support hardware/software trade-off studies, and validate designs. Using simulation, complex systems can be abstracted down to their fundamental components and activities. Simulation can help address functional concerns (data and algorithms), behavioral concerns (process sequencing), and performance concerns (resource utilization, throughput, and timing).

A more realistic approach to studying the implementation details of ERTS is to develop prototypes step by step, both virtual and physical. Building ERTS prototypes poses difficult challenges to designers, as they need to focus on the embedded and real-time properties of the final product. The need for multi-prototyping arises when the system development is heterogeneous. For example, in a recent survey on present and future requirements in embedded real-time systems conducted by Kaj Hanninen et al. [Kaj06], it was found that present requirements are fulfilled using considerably homogeneous development methods. The survey showed a need for more sophisticated development support comprising additional tool support and more domain- and application-specific development platforms, resulting in a more heterogeneous development environment and a more resource-efficient runtime structure.

In addition, in most ERTS applications, system development differs substantially from traditional SDLC methods. Several prototypes need to be developed at each stage, and initial prototyping needs reusability to enhance productivity and decision-making. Reusability of physical and virtual prototyping modules efficiently supports the development of design models as quickly as possible. The motivation for this research is shown in Figure 3.1, which depicts the multi-prototyping approach to building ERTS using virtual and physical modeling, with reflective coherence between them.

Figure 3.1. Reflective coherence between virtual and physical prototyping: the user/customer drives the requirement, design, implementation, and V&V models; virtual prototyping and physical prototyping are linked by reflective coherence, supported by software-architecture-based evolutionary prototyping.


3.2 Design Considerations

The fundamental objective of this work is to develop an effective mechanism to control different prototyping methods autonomously by integrating them through a knowledge-based system, thereby enhancing the power of existing prototyping methods by identifying the physical and behavioral constraints of embedded real-time systems at an early stage of development, so as to minimize development risk. In this thesis, we propose that prototyping tools with autonomous abilities are very effective in the manufacturing industry for reducing 'time-to-market' and developing products with numerous capabilities. The objective includes developing reflective coherence between the VPE and PPE to explore different behaviors during manufacturing; this also supports monitoring the status of product development remotely. AIP enhances the capability of system development by synchronizing the VPE and PPE development approaches and reducing the effort of actual implementation. As we are targeting embedded real-time systems and robotics, designers face various challenges in limiting or expanding the behaviors of the systems based on sensor responses. In summary, the requirements include the following:

• Development of an intelligent system architecture to facilitate and guide product development autonomously.

• Establishment of reflective coherence between the VPE and PPE to enable the knowledge-based system to explore and test new behaviors in the virtual and physical worlds. An advantage of reflective coherence is the ability to monitor the progress of development on a PC remote from the physical location of the product.

• Effective study and implementation of the physical and behavioral constraints and capabilities of embedded real-time systems (soft and hard), the role of multiple sensors and actuators, interactions with peer or neighboring systems, interaction with and adaptation to the environment, and intelligent behavior generation in robotics and ubiquitous applications during the product development stages.

• Provision of an effective and handy rule-based architecture for intelligent command interpretation and concurrent execution in remotely controlled real-time systems, so that developers can easily test behaviors. For example, we aim to develop simple facts and rules with which designers or customers can interact with both the VPE and PPE remotely.

3.3 Proposed Framework of Research

In earlier years, customers focused mainly on the appearance of a product rather than its behavior. Customers may also look for new behaviors after they have exhausted the existing features; it is natural for customer preferences to change year by year. Nowadays, developing complex systems is made easier with component- or object-based development, but system designers remain skeptical about the development of complex embedded systems that can behave differently in different situations, such as robots, artificial animals, and biomimetic systems. Such systems pose great challenges to both hardware and software developers for the testing and exhibition of intelligent behavior or natural-language understanding. Also, as the ubiquitous world is changing very rapidly, adding new designs to existing ones and customizing behavior in embedded systems and robotics, while satisfying customer needs and keeping behavior flexible, present further challenges; developers find it difficult to decide which behaviors should be added or deleted. Another problem is testing the usability, stability, flexibility, and context-awareness of the system from the hardware, software, and behavior perspectives. In the testing stage of the SDLC, usually only the parameters or physical constraints are tested, but this is not sufficient to check all the possibilities for making the product attractive to customers. Such applications also demand an intelligent method of applying the system behavior to software and hardware modules so as to increase flexibility and usability. For example, some of our earlier works [Rai07, Rai08a, Rai08b] describe such implementations, where intelligence plays a great role in the reusability of both hardware and software modules.

Given these challenges, one can see that traditional development methodologies demand new prototyping scenarios that address issues such as communication protocols, real-time and embedded features, virtual and physical modeling, and rapid prototyping methods, among others. This demands an intelligent prototyping environment that connects all the methodologies together to improve 'time-to-market'. Novel prototyping methods are therefore necessary.


In addition, novel prototyping methods are also required to address the reusability of hardware and software modules during development. As a result, there is scope for developers to achieve flexibility in reusing the existing physical prototyping environment and virtual prototyping environment infrastructure.

AIP utilizes the benefits of both virtual and physical prototyping toolkits by integrating them into an intelligent knowledge-based system that provides reflective coherence between them. Developers thus have the flexibility to reuse the existing physical prototyping environment (PPE) and virtual prototyping environment (VPE) infrastructure. We aim to implement a generic framework that combines the ideas of virtual prototyping with design by simulation, physical prototyping, and autonomous reasoning abilities.

Moreover, there is an increasing need to reduce the cost of building virtual and physical models and to test the various features of final products. To make system development more realistic and faster, there is also a potential need to use virtual and physical prototypes simultaneously, with reflective coherence between them. However, combining virtual and physical prototypes during the product development stage is not sufficiently beneficial without proper intelligent integration and monitoring abilities. We believe it is more appropriate to link such multi-prototyping methods directly to intelligent systems such as a rule-based KBS. We aim to integrate the knowledge-based system directly into the multi-prototyping systems, rather than merely controlling the prototyping as in many earlier approaches. Three basic advantages derive from our approach: (1) intelligent modeling of VP and PP, (2) autonomous prototyping ability using a rule-based production system, and (3) simultaneous analysis of multiple physical and virtual prototypes. We term this approach Autonomous Integrated Prototyping (AIP): the system models itself based on its autonomous decisions by monitoring the virtual and physical prototypes. This relieves designers of much of the pressure involved in traditional design approaches.

The proposed approach is a spiral-SDLC-like method, where gradual development is performed at each stage (virtual, physical, and evolutionary) along with intelligent prototyping. In each step, we can gradually test the product's behavior from simple to complex. AIP is defined in terms of rules or facts, along with the graphics of physical representations and interactions with VP and PP, as shown in Figure 3.2. The rules are written in CLIPS [CLIPS] to perform intelligent behavior generation and dynamic reasoning, so as to make the system behavior more realistic and hence improve the simulation environment.


As in Figure 3.2, AIP integrates both VP and PP and thereby enhances the ability of design modeling. AIP provides a user/designer interface that allows the developer to monitor and control the development process in both virtual and physical environments. AIP interacts with both VPE and PPE, simultaneously or separately (bidirectionally and directly), using rules and facts. This has the advantage of testing VPE using a set of rules, with or without PPE, and vice versa. Similarly, a user in the PPE may change the system behavior (for example, a robot's location from one position to another), which may be reflected through VPE first and then to AIP (indirectly), or directly in VPE and AIP. Any change in the VPE or PPE is updated in the knowledge-base accordingly, as the KBS has the ability to make decisions based on sensing and actuation. In summary, we aim to develop a prototyping suite where different system properties or behaviors are autonomously tested, updated, and decided by AIP based on its knowledge-base of product specifications and requirements.

Figure 3.2. Role of AIP in system development life-cycle mode of ERTS.

3.4 Overview of the AIP Architecture

As shown in Figure 3.2, the AIP includes VP and PP primitives and a knowledge-base as its major components. The knowledge-base is updated as the number of VP and PP primitives increases, by adding new rules or facts to support these additional primitives. An overview of the AIP architecture is shown in Figure 3.3. Figure 3.3(a) shows the basic framework for achieving AIP with a reasoning layer. The software mapping (Figure 3.3(b)) shows the framework suitable for the AIP programming environment, and the physical and virtual mapping (Figure 3.3(c)) shows the framework suitable for the development of the PP as well as the VP environment.

Figure 3.3. (a) Layered behavior-based architecture which is suitable for AIP, and related (b) software and (c) physical/virtual (suitable for PPE/VPE) mappings. (Layer correspondences: reasoning layer - KBS module - ERTS; ERTS behavior/skill-set layer - threads/functions - sections/group robots; skill layer - functional primitives - units/member robots; skill-primitive and sensor/actuator driver layers - high-level instructions for sensor and actuator management - sensor and actuator blocks.)

As shown in Figure 3.3(a), the reasoning layer is responsible for behavior management during system interaction with the outside world. In the case of the software mapping, the reasoning layer is mapped to the knowledge-based system module, the behavior layer to threads (or functions), and the skill layer to functional primitives, respectively. The skill-primitive layer is merged with the skill layer in our design. Skill primitives are elementary actions that reach predefined goal states, and their sequences are used to perform system tasks [Liu03]. In the case of the hardware mapping, the ERTS behavior layer is mapped to robot sections or group robots, and the skill layer to robot units or individual member robots, respectively. The details of sections and units are described in Section 3.5.

Figure 3.3(b) shows the software and programming environment for AIP. An ERTS can be effectively represented using a KBS module which includes several threads or functions. Many knowledge-based systems, such as CLIPS, allow us to integrate user-defined modules as KBS modules and compile them together. The functionalities of the different subsections or sections of an ERTS may be represented by different threads and functions, which in turn include several functional primitives for sensor and actuator operations. These functional primitives in turn include a number of Application Programming Interfaces (APIs) or instructions for sensor and actuator management. For example, there are several APIs which are responsible for actions in the virtual and physical environments, such as: switch_init(), sensor_init(), motor_init(), motor_test(), motor_stop(), motor_set(), sensor_get(), and LED_init(). These APIs form the software mapping for achieving different actions through skill primitives.
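As a concrete illustration, the following is a minimal sketch of how a few of the APIs named above can be composed into a single skill primitive. The parameter lists of the kit APIs, the constant DIR_CLOCKWISE, and the function unit_rotate_to() are assumptions made only for this sketch; the actual toolkit signatures may differ.

/* Assumed declarations of the kit APIs named above; the parameter
 * lists are guesses made only for this sketch. */
extern void motor_init(void);
extern void sensor_init(void);
extern void motor_set(int unit, int dir, int speed);
extern void motor_stop(void);
extern int  sensor_get(int unit);

#define DIR_CLOCKWISE 1

/* Hypothetical skill primitive: rotate one unit's motor until its
 * angle sensor reports the target value, then stop. */
void unit_rotate_to(int unit, int target_angle, int speed)
{
    motor_init();
    sensor_init();
    motor_set(unit, DIR_CLOCKWISE, speed);
    while (sensor_get(unit) < target_angle)
        ;                       /* poll the angle sensor */
    motor_stop();
}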

Figure 3.3(c) shows the physical and virtual mappings to the AIP layers and the programming environment shown in Figures 3.3(a) and 3.3(b). In general, an ERTS can be divided into different subsections or subunits. For example, a multi-shaped robot may be subdivided into different sections depending on its role, and a set of group robots includes several sub-groups of robots. These make good examples of embedded real-time systems in real operation. The sub-units or individual sections are again divided into smaller units comprising a number of sensors and actuators. The same holds for a sub-group of a larger group of robots, which may include several members. Finally, each unit comprises several sensors and actuators.

While constructing a physical prototype of an ERTS such as a mobile robot, the PP engineer needs to construct it from individual sensor and actuator blocks such as LEGO [LEGO] and Fischer Technik [FISH] blocks. The LEGO and Fischer blocks help PP engineers to evaluate the APIs and functional properties of the developing system. Most of these blocks are already in use as a design tool for the preliminary-stage evaluation of industrial projects, and they can be used to simulate very complex embedded real-time applications. Moreover, these blocks can also be virtually represented in the VP environment using several VP primitives, so that VP and PP engineers can establish a 1-1 mapping between VP and PP primitives. In summary, Figure 3.3(c) forms the basic framework for the representation of virtual and physical primitives in the VP and PP environments, respectively.

3.5 Modular, Reusable, and Hierarchical Behavior Modeling

There are several issues that need to be addressed in the development of an integrated prototyping environment. The most important of these is the establishment of reflective coherence between multi-prototyping environments and flexibility in reusing software and hardware modules of the developing system. To understand the ability of reusable and reconfigurable hardware and software modules, one should consider the examples described in this section, which show the steps towards developing an integrated knowledge-based system.

In this section, we will demonstrate the domain-specific applications developed in our lab as two case studies. In the past few years, we have been successful in developing intelligent architectures using rule-based systems for multi-shaped robots and for group behavior generation in mobile robots [Rai06, Rai07, Rai08a, Rai08b]. We will briefly demonstrate the steps involved in developing the knowledge-bases related to these embedded real-time systems.

In the first case study, we will explain the reusable and reconfigurable hardware and software modeling of multi-shaped mobile robots, such as Snake and Legged robots. In the second case study, we will demonstrate widely-spread mobile robot group behavior generation in a Wireless Sensor Network (WSN) environment. In both case studies, we will show the steps involved in physical prototyping and the steps towards developing facts and rules. In the first case study, we will demonstrate integrated prototyping with PP; in the second, integrated prototyping with both PP and VP with reflective coherence.

3.5.1 Case Study 1: Modular and Reusable Behavior Modeling in Reconfigurable Multi-Shaped Robots

Figure 3.4. A schematic of the Snake robot sections and units.

To understand behavior modeling in robots, we start with robots such as the Snake and Legged robots. By understanding the locomotion of Snake and Legged robots, we can arrive at many conclusions. First, snakes move by pushing their body against the environment and, to achieve this, the different sections (Figure 3.4) of a snake's body must generate serpentine locomotion or lateral undulation.


In Legged robots, however, there is a need to synchronize leg movements to generate purposeful locomotion. In both cases, generating purposeful movement is a challenging task. To generate serpentine locomotion, the robot must generate left and right movements. As shown in Figures 3.4 and 3.5, the Snake and Four-Legged robots are divided into smaller modules called units. In our work, we coined two terms: robot units and sections [Rai05, Rai08a]. The combination of many units makes a section.

Figure 3.5. A schematic which explains a Four-Legged robot's sections and units.

As a result, a Snake robot may have N sections (1 ≤ k ≤ N) and each section may have M units (1 ≤ p ≤ M); the total number of units is N×M. Similarly, an N-Legged robot may have a maximum of N sections (1 ≤ k ≤ N) and each section may have M units (1 ≤ p ≤ M). The total number of units required for any movement is N×M, which is the same as for the Snake robot. This model is the same as the earlier one, but the number of sections and units needs to be chosen carefully when reconfiguring a Snake robot into a Legged robot or into any other type of robot. The "KNU-Snake Robot" is the name of the robot developed in our lab. It has four modules, and we reconfigured it into a Four-Legged robot without making any changes in software [Rai08a]. This implies that it is easier to reconfigure a six-module Snake robot into a Six-Legged robot than into a four- or eight-legged one. This feature is important for reusing the same software and hardware modules and for creating a symmetrically and diagonally synchronized legged robot model.

Legged robots can be used to travel over any terrain, but they are very difficult to construct. The most challenging part of locomotion control in a Legged robot is maintaining stability and balancing weight. In some legged robots [Eri98], a set of orthogonal balances is used to drive the body of the robot along all the possible modes of motion while the feet stay on the ground. However, we used the balance and forward approach, where the robot moves forward and backward by synchronizing movements between legs to balance itself.

3.5.2 Case Study 2: Widely-Spread Mobile Robots in WSN Environment

In this case study, we will demonstrate another application of widely-spread mobile robots in the WSN environment. While designing robots which exhibit community behaviors, engineers need to understand the behavior of herds of sheep, flocks of birds, or schools of fish. In general, their motion is random, but the scope of their movement is within a known area. Flocking or herding behaviors are formed from the collective behavior of each member. Each group/community member follows actions similar to those of its neighboring members.

So, our requirement is to generate such community behavior collectively, as well as incrementally, member by member. This is because merely regenerating the behavior of a herd or flock of natural entities is not sufficient to fully utilize the advantages of co-operative mobile robots in a WSN environment. This motivated us to control the robot community collectively and also to control individual members specifically. Intelligent group behavior requires proper synchronization between the co-working robots.

Figure 3.6. Community network topology for widely-spread multi-robot system. (A PC connects through a gateway to the ZigBee coordinator (C); routers R1-Rn relay wireless communication to the mobile robots MR1-MRn.)

The robots need to work together as a group rather than behaving in their own way. Figure 3.6 shows the architecture, where each robot is directly mapped as an end node along with its sensors/actuators. This benefits network designers, as it reduces the effort of configuring the network topology, and robot engineers, as it allows them to quickly generate group behaviors [Rai07, Rai08b]. The mobile robots include many sensors and actuators, which form a network of sensors. Each robot is connected as an end node within the vicinity of a router and has the ability to move around the router.

3.5.3 Hierarchical Behavior-Based Reasoning

In the KNU-Snake robot and the Four-Legged robot, the basic motion primitive is a unit module (Figures 3.4 and 3.5) and is mapped to a "skill" in the layered architecture. We have chosen a unit as a "skill" and a section module as an "ERTS behavior." This is because the different robot (or system) sections are implemented by following the ERTS specifications. The reason for selecting the section module as the ERTS behavior is to map exactly the behavior of multiple threads or modules (such as C functions) to multiple skills. This is also the same with the widely-spread group robots (Figure 3.6). A set of group robots with multiple sensors and actuators is implemented as an ERTS. In general, one or more sets of skills form a well-defined behavior. Again, these skills may have many Primitive Skills (PS) as in [Mil05]. In the present architecture, we are not concerned with primitive skills, because the basic skills required by any mobile robot are move-forward, move-backward, move-left, and move-right. In software terms, a section module is nothing but a thread or C function, and it may have many procedural primitives. So, the behavior of the robot directly depends on the unit modules, which are mapped to skills.

For example, in a Snake robot that has 3 sections, each with 2 units, we can conclude the following related parameters: number of behaviors generated = 3, number of sections = 3, number of skills = 2, and number of robot units per section = 2. In a group of 3 robots, we arrive at a similar set of parameters. These parameters may change depending on the expected behavior of the robot. In the above example, the number of types of behaviors generated is three, which indicates that three types of behaviors can be generated concurrently.
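The section-to-thread mapping can be sketched as follows. This is a minimal illustration assuming hypothetical skill functions and POSIX threads; in our implementation, the corresponding tasks run under RT-Linux rather than plain pthreads.

#include <pthread.h>

/* Hypothetical skill primitives for one unit (not the actual kit API). */
extern void skill_move_left(int section, int unit);
extern void skill_move_right(int section, int unit);

#define UNITS_PER_SECTION 2

/* One section module realized as a thread: it applies the skills of
 * its units repeatedly, contributing one behavior. */
static void *section_behavior(void *arg)
{
    int section = *(int *)arg;
    for (;;) {
        for (int unit = 1; unit <= UNITS_PER_SECTION; unit++) {
            skill_move_left(section, unit);
            skill_move_right(section, unit);
        }
    }
    return NULL;
}

/* Launch one thread per section; with 3 sections this yields the
 * three concurrent behaviors mentioned above. */
int start_sections(void)
{
    static int ids[3] = { 1, 2, 3 };
    pthread_t threads[3];
    for (int i = 0; i < 3; i++)
        if (pthread_create(&threads[i], NULL, section_behavior, &ids[i]) != 0)
            return -1;
    return 0;
}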

Figure 3.7. Hierarchical organization of skill-set. (Sections/group robots map to the skill-set; units/member robots (Unit-1 to Unit-M) map to individual skills; each unit encompasses sensors (S) and actuators (A).)

Figure 3.7 shows the hierarchical structuring of the skill-set. As shown, skill-sets are usually responsible for behaviors or actions related to the robot sections or individual robots. Many skills form a skill-set. The skills are responsible for actions related to a particular unit in a section or to a particular member (individual actions) of the group robots. Many sensors and actuators are encompassed in a unit or individual mobile robot.

This architecture follows the subsumption architecture which was proposed by Rodney Brooks [Bro86]. Subsumption has been widely influential in autonomous robotics and elsewhere in real-time AI. Subsumption architecture is a way of decomposing complicated intelligent behavior into many simple behavior modules, which are in turn organized into layers. Each layer implements a particular goal of the agent, and higher layers are increasingly more abstract. Each layer’s goal subsumes that of the underlying layers. For example, a robot’s lowest layer could be “move single step”, on top of it would be the layer “move forward”. Each of these horizontal layers accesses all of the sensor data and generates actions for the actuators as shown in the Figure 3.7.

Similarly, the lower layers co-ordinate to achieve the overall goal of the higher layers. Feedback is given mainly through the environment. As can be seen, each higher level contains each lower level of competence as a subset. The important part of this is that each layer of control can be built as a completely separate component and simply added to existing layers to achieve the overall level of competence [Bro90]. One of the advantages of this structuring is the generation of different behavior mode combinations, as shown in Figure 3.8.


Figure 3.8(a) shows the situation whereby two behaviors are synchronized to generate a more powerful behavior. Figure 3.8(b) describes the behavior chaining mode, where sequences of behaviors are selected to accomplish a task. Figure 3.8(c) shows the behavior selection mode. It is very clear that, in the behavior selection mode, many types of existing behaviors can be reused and called upon by the reasoning manager of the reasoning layer. The behavior selection mode is usually a combination of behavior synchronization and chaining. Also, different types of behaviors can be selected in different sequences.

Figure 3.8. Different behavior mode combinations.

In addition, reusability is essential in the design of multi-shaped robots, both in terms of software and hardware. Figure 3.9 shows the design of the reusable software and hardware mapping with a 1-1 mapping. The robot sections and units are reusable hardware blocks; the facts and the apply_skill() modules [more details are given in Section 4.4] are reusable software blocks. This architecture has the flexibility to exhibit different behaviors on different sections (or units) of the robot. For example, a Snake robot's belly-section may need more units than the head or tail sections, because more force is needed in climbing a tree or crawling over steps. The basic behavior in our robot is modeled as a thread or function, and these types of robotic behaviors are synchronized to generate purposeful goals. In our earlier work, we proposed multi-thread-based synchronization of locomotion control in Snake robots, where threads are responsible for robot behaviors [Rai05]. However, it is possible to implement this using a modular approach by incorporating the powerful management in the reasoning manager.

Figure 3.9. Reusable hardware and software modules with 1-1 mapping. (The figure maps the multi-shaped robot hierarchy of sections, units, and sensors/actuators to the rule-based application with its apply_skill facts, the reusable apply_skill() software module, and the underlying sensor/actuator control calls such as set_rotsen_handler() and set_motor().)

CHAPTER 4

DESIGN DETAILS OF AIP


The steps involved in the design of the AIP approach are: (1) development of the PPE, (2) development of the VPE, (3) establishment of reflective coherence between the VPE and PPE, and (4) development of an AIP Environment (AIPE). Step (4) includes the integration of both VPE and PPE into the AIPE. Figure 4.1 shows the development environments for AIP, PP, and VP, and their interconnections.

Figure 4.1. Development environments for AIP, PP, and VP. (The AIP development environment provides autonomous decision making, a user interface, and a real-time expert system (RTES) running real-time and non-real-time tasks on an RT-Linux/Linux dual kernel. It reaches the PP development environment, robots 1..N with sensors and actuators on a CAN bus, over a ZigBee network, and the VP development environment, a GLG/Java simulation server built with the GLG toolkit, over TCP/IP.)

4.1 Development of PP Environment

As shown in Figure 4.1, the PPE comprises multi-shaped robots or a set of coordinated mobile robots connected in a WSN environment. The CAN protocol is employed for the sensor-actuator network, and the interface between the robots and the outside world is achieved using the ZigBee protocol. Both CAN and ZigBee are essential for keeping the real-time properties required by embedded applications. The CAN and ZigBee protocols are briefly explained below.

The CAN (Controller Area Network) is a fast serial data communications bus for real-time applications, which was developed by Bosch in the early 1980s and became an international standard in 1994. It was especially developed for data exchange between electronic controllers in automobiles [Law97]. The CAN is used for communication between the sensors and actuators deployed in the robot; this is required to satisfy the real-time characteristics. The CAN can guarantee hard real-time communication between various sensors and actuators. It is easy to add additional sensors or actuators to a CAN network, and with that the control method is simplified. In a distributed CAN network, each sensor or actuator node can be made to perform in complex conditions.
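To make the idea concrete, the sketch below sends one actuator command as a CAN frame. Our prototypes use an RT-Linux CAN driver, so this sketch, written against the standard Linux SocketCAN interface, is only an illustration; the CAN identifier scheme, the payload layout, and the interface name "can0" are assumptions.

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <linux/can.h>
#include <linux/can/raw.h>

/* Send one actuator command as a raw CAN frame. */
int can_send_motor_cmd(int unit, int dir, int speed)
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (s < 0) return -1;

    struct ifreq ifr;
    strcpy(ifr.ifr_name, "can0");          /* assumed interface name */
    ioctl(s, SIOCGIFINDEX, &ifr);          /* resolve interface index */

    struct sockaddr_can addr;
    memset(&addr, 0, sizeof(addr));
    addr.can_family = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    struct can_frame frame;
    frame.can_id = 0x100 + unit;           /* assumed: one ID per actuator node */
    frame.can_dlc = 2;
    frame.data[0] = (unsigned char)dir;
    frame.data[1] = (unsigned char)speed;

    int n = write(s, &frame, sizeof(frame));
    close(s);
    return (n == sizeof(frame)) ? 0 : -1;
}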

ZigBee [ZigBee] was developed by the ZigBee Alliance and is based on the IEEE 802.15.4 standard. It is a low-data-rate, low-power-consumption, low-cost wireless networking protocol targeted towards automation and remote control applications. ZigBee is expected to provide low-power connectivity for equipment that needs a battery life of several months to several years but does not require data transfer rates as high as those enabled by Bluetooth. ZigBee-compliant wireless devices are expected to transmit 10-75 meters, depending on the RF environment and the power output.

4.1.1 Development of Physical Prototypes for the Reconfigurable Multi-Shaped Robots

The physical prototype of the Snake robot was developed using a KNU-LEGO-based embedded real-time toolkit [LEGO, Jun02]. Each unit of the robot is represented by a hardware module embedded with a motor and an angle sensor. A hardware interface was developed to support the many sensors and actuators of the KNU-LEGO kit, which includes touch, light, and angle sensors, motors, and an embedded computer system. The PCI-104 is an embedded computer standard and is commonly used with LEGO systems; it is also suitable for the development of autonomous embedded robot-control applications. The proposed architecture is implemented in an RT-Linux-based dual-kernel environment, which supports the execution of both real-time and non-real-time tasks. FIFOs, shared memory, and IPC (Inter-Process Communication) modules support communication between real-time and non-real-time tasks.

Figure 4.2. A view of a reusable robot module (angle sensor, unit, and joint motor) which is commonly used in (a) the Snake robot, and (b) the Four-Legged robot.

The angle sensors are used to co-ordinate sine-wave sequences in the Snake robot and the legged gait in the Four-Legged robot. In both cases, the number of motors needed is equal to 4 (N×M). Figure 4.2 shows a reusable hardware module used in both the Snake and Four-Legged robots with minor modifications. The shape of the module is important in making it a generic unit for both Snake and Legged robots. We also designed and tested a reusable hardware module with the CAN protocol, which supports real-time communication with the actuators and sensors. Figure 4.3 shows the actual physical prototype of the Snake robot, and Figure 4.4 provides a photograph of the Four-Legged robot, which is physically reconfigured from the Snake robot.

Figure 4.3. The KNU-LEGO Snake robot.

Figure 4.4. KNU-Four-Legged robot.

4.1.2 Development of Physical Prototypes for the Widely-Spread Mobile Robots in WSN Environment

For the second case study, to develop group behavior, physical prototypes of the robots are necessary. We have developed the physical prototypes in two different ways. In the first approach, the physical prototypes of the robots were developed using a LEGO kit carrying CAN and ZigBee boards (Figure 4.5). In the second approach, we have also implemented fully-embedded prototypes using the Embedded System Prototyping Suite (ESPS), which is explained in Section 4.1.3.

Figure 4.5. Components of individual member robots (ZigBee module, motor, angle sensor, and CAN board).

Figure 4.6. KNU group robots.

The CAN-based sensor network is directly responsible for interacting with the sensors and actuators in the robot. In the implemented prototype of the widely-spread mobile robots, the CAN module in each robot is networked with 2 angle sensors and 2 motor-drive actuators. We have followed the model proposed in [Luc07] to develop the member robots and to accurately study the behavior of the robot movements. Figure 4.5 shows the components of the individual member robots, and Figure 4.6 shows the group of three robots in operation.

4.1.3 Development of Fully-Embedded Physical Prototypes for the Widely-Spread Mobile Robots in WSN Environment

The fully-embedded version of the physical prototypes of the robots was developed with the mobile version of the ESPS (Embedded System Prototyping Suite) kit [ESPS]. This kit was developed in our lab to support several commonly used sensors and actuators; it includes touch, light, and angle sensors, and motors. The basic architecture of the ESPS supports communication over UART, Ethernet, ZigBee, and CAN, as shown in Figure 4.7. The development board in ESPS-Mobile uses an ARM-processor-based embedded system. ARM CPUs are most frequently used in the mobile and embedded markets, where low power consumption is a critical requirement. ARM-based embedded processors provide solutions for real-time systems in mass storage, automotive body and power-train, industrial, and networking applications. The ARM architecture enjoys the widest choice of embedded Operating Systems (OS) for system development; ARM enables this choice by partnering with many leading suppliers of embedded OS and development environments [ARM, MakT]. FreeRTOS is the operating system [RTOS] used, along with Board Support Packages (BSP). FreeRTOS is an open-source Real-Time Operating System for the SAM7X. The BSP is required for I2C (Inter-Integrated Circuit), SPI (Serial Peripheral Interface), etc.

Figure 4.7. Architecture of Mobile-ESPS. (Hardware: UART, CAN, Ethernet, and ZigBee on the control board; software: FreeRTOS and the BSP beneath the user applications.)

FreeRTOS is the right choice in these applications compared with RT-Linux: the kernel and RAM footprints are measured in kilobytes and bytes rather than megabytes as in RT-Linux. FreeRTOS has portable code, is especially suitable for small microcontrollers, and is applicable to embedded systems. However, it provides only basic features and is hard to scale beyond the target platform. Other advantages of FreeRTOS are that a number of tasks can share the same priority, which provides application design flexibility. In embedded systems, a board support package is implementation-specific support code for a given board that conforms to a given operating system. It is commonly built with a bootloader that contains the minimal device support needed to load the operating system and device drivers for all the devices on the board. The core architecture of the CPU is based on the ARM7.
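The following sketch illustrates the shared-priority feature: two tasks created at the same priority are time-sliced by the FreeRTOS scheduler. The task names and bodies are placeholders, and the API shown follows recent FreeRTOS releases; the 2008-era SAM7X port uses slightly different type names.

#include "FreeRTOS.h"
#include "task.h"

static void vSensorTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        /* read the angle sensors here */
        vTaskDelay(pdMS_TO_TICKS(10));
    }
}

static void vMotorTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        /* drive the motors here */
        vTaskDelay(pdMS_TO_TICKS(10));
    }
}

int main(void)
{
    /* Both tasks share priority tskIDLE_PRIORITY + 1, so the scheduler
     * round-robins between them on every tick. */
    xTaskCreate(vSensorTask, "SENS", configMINIMAL_STACK_SIZE, NULL,
                tskIDLE_PRIORITY + 1, NULL);
    xTaskCreate(vMotorTask, "MOTR", configMINIMAL_STACK_SIZE, NULL,
                tskIDLE_PRIORITY + 1, NULL);
    vTaskStartScheduler();   /* never returns if the scheduler starts */
    return 0;
}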

Figure 4.8. ESPS-Mobile development board.

The ESPS-Mobile development board shown in Figure 4.8 is based on the ARM cross-development tool chain and the Atmel AT91SAM7S-EK evaluation board. Modifications were made to this evaluation board so that it can support the CAN and ZigBee protocols. The specifications of this board include an ARM7-TDMI CPU (Atmel AT91SAM7X256), a maximum 55 MHz clock speed, 256 KB of Flash, 64 KB of SRAM, three sensor inputs with a 10-bit ADC, three bidirectional DC motor drivers, an 8×2 character LCD, and JTAG. Open source tools are used to develop embedded software for the Atmel AT91SAM7S family of microcontrollers.

Figure 4.9. Components of individual ESPS-Mobile robot (ESPS-Mobile board, angle sensor, and motor).

Figure 4.10. ESPS-Mobile KNU group robots.


Three major open-source software tools make up the ARM cross-development system: the Eclipse IDE, YAGARTO, and OpenOCD [Lyn07]. Eclipse provides the Integrated Development Environment (IDE), which includes a superior source code editor and symbolic debugger. YAGARTO provides a recent version of the GNU C/C++ compiler suite natively compiled for Windows. OpenOCD interfaces the Eclipse symbolic debugger with the JTAG port available on the Atmel ARM7 microcontrollers. The board also includes two serial ports, a USB port, a JTAG connector, two buffered analog inputs, two pushbuttons, three LEDs, and a prototyping area. The board may be powered from either a battery or an external DC power supply (7 V to 12 V). The ESPS-Mobile development board is a powerful, flexible, and easy-to-use embedded prototyping suite based on the Atmel ARM7 SAM7X256. It is designed to interface easily to additional circuitry in order to connect with the real world. The firmware for this board is organized into a general library and specific project directories. It is based on FreeRTOS and includes networking, USB, ADC, PWM, CAN, RS232, SPI, motor control, EEPROM, and high-resolution timing. Figure 4.9 shows the components of the robot developed for physical prototyping, and Figure 4.10 shows the ESPS-Mobile-based group robots.


4.2 Development of VP Environment

The development of the VPE includes programming a simulation environment using the GLG toolkit [GLG] to reflect the navigation of the robots in the PPE, and also to act accordingly in the VPE as per the autonomous decisions of AIP. The GLG toolkit is used to create sophisticated real-time animated drawings without much effort. The design of a GLG drawing allows developers to edit drawings simply and quickly, without re-programming. In addition, this feature allows the application programmer (or VPE developer) to concentrate on data collection and management aspects instead of display. Along with the GLG toolkit, C/C++, Java, and ActiveX can be used to develop standalone or web-based applications. In the present implementation, the VPE is implemented only for the second case study, the widely-spread mobile robots in the WSN environment.

4.3 Establishing Reflective Coherence between VP and PP

In the VPE, the simulation server shows the actual movements of the robots in the PPE. The information to the server is received from the PPE through a TCP/IP socket. A TCP/IP socket connection is also used between the VPE and the AIPE, and wireless ZigBee is used between the AIPE and the PPE; however, wired connections are also acceptable. Reflective coherence is established as soon as the VPE (server) and PPE (client) are ready for communication. The client program in the PPE is actually a task in RT-Linux. This task is in turn mapped to a fact in AIP, which is discussed in Section 4.4.

Figure 4.11. Packet structure transferred between PPE and VPE (fields: fact code, RG/RN, Speed, Dist/Angle).

The data transfer between PPE and VPE is bidirectional, as mentioned in Section 3.3. While transferring data from the PPE to the VPE, the PPE sends a packet with the data fields shown in Figure 4.11. The fact code identifies the type of movement, such as forward, backward, left, or right. RG/RN is the parameter identifying group, subgroup, or individual robot navigation. Speed is the robot speed, and Dist is the distance parameter in group or individual facts. Angle is the parameter used to turn the robots by a particular angle in the left or right direction.
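A plausible C rendering of this packet is sketched below; the field types and widths are assumptions, since Figure 4.11 only names the fields.

/* Sketch of the PPE/VPE packet of Figure 4.11; types are assumed. */
struct aip_packet {
    int fact_code;   /* movement type: forward, backward, left, or right */
    int rg_rn;       /* group / subgroup / individual robot selector     */
    int speed;       /* Speed parameter                                  */
    int dist_angle;  /* Dist for moves, Angle for turns                  */
};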


In the VPE, the packet is received and modified into a slightly different form with (Xi, Yi) co-ordinates, where the subscript 'i' indicates the identifying number of a particular robot, i.e., (X1, Y1) for robot-1, (X2, Y2) for robot-2, etc. The exact transformation to (Xi, Yi) co-ordinates from the Speed and Dist parameters (in the PPE) is calculated based on the formula given in [Luc07]. The same packet is transferred when the VPE triggers the PPE. For example, any movement in the VPE is also responsible for physical navigation in the PPE, and this information is also updated in the AIPE. However, any unexpected movements in the VPE or PPE are decided autonomously by the AIPE, so that it has all the authority necessary to avoid any damage to the physical system caused by either the VP engineer or the PP engineer. During the transfer of a packet from the VPE to the PPE, the (Xi, Yi) co-ordinates are transformed back to the Speed and Dist parameters shown in Figure 4.11, to keep uniformity in packet transfer between the PPE and VPE and to reduce the overhead during seamless data transfer.
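The sketch below shows one plausible shape of the PPE-to-VPE update. The actual transformation follows the formula in [Luc07]; the linear displacement model and the scaling constant k here are purely assumptions for illustration.

#include <math.h>

struct robot_pose { double x, y, heading; };   /* heading in radians */

/* Update robot i's VPE co-ordinates from the Speed and Dist fields of
 * an incoming PPE packet, assuming a linear displacement model. */
void vpe_update(struct robot_pose *p, int speed, int dist)
{
    const double k = 2.5;            /* assumed cm per (speed x dist) unit */
    double d = k * speed * dist;     /* displacement magnitude in cm */
    p->x += d * cos(p->heading);
    p->y += d * sin(p->heading);
}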


4.4 Development of AIP Environment

The goal of the AIPE is to co-ordinate the VPE and PPE with intelligent reasoning. The AIPE is implemented using the CLIPS expert system [CLIPS]. CLIPS is a productive development and delivery expert system tool which provides a complete environment for the construction of rule- and/or object-based expert systems. CLIPS is now widely used throughout government, industry, and academia. The origins of the C-Language Integrated Production System (CLIPS) date back to 1984 at NASA's Johnson Space Center. CLIPS provides a cohesive tool for handling a wide variety of knowledge, with support for three different programming paradigms: rule-based, object-oriented, and procedural. These capabilities are similar to those found in languages such as C, Java, Ada, and LISP. CLIPS is written in C for portability and speed and has been installed on many different operating systems without code changes; the operating systems on which CLIPS has been tested include Windows XP, MacOS X, and Unix. CLIPS can be ported to any system which has an ANSI-compliant C or C++ compiler and comes with all source code, which can be modified or tailored to meet a user's specific needs. CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, Java, FORTRAN, and Ada, and can be easily extended by a user through the use of several well-defined protocols.

In addition to being used as a stand-alone tool, CLIPS can be called from a procedural language to perform its function and then return to the calling program. Likewise, procedural code can be defined as external functions in CLIPS; when the external code completes execution, control returns to CLIPS. In our experiment, we have used CLIPS version 6.23.
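As a sketch of this calling pattern, the fragment below embeds CLIPS using the 6.2x environment API, loads a rule file (the file name "aip_rules.clp" is a placeholder), asserts a fact from C, and runs the inference engine.

#include "clips.h"

int run_kbs(void)
{
    void *env = CreateEnvironment();
    if (EnvLoad(env, "aip_rules.clp") != 1)   /* load the knowledge-base */
        return -1;
    EnvReset(env);                            /* install initial facts */
    EnvAssertString(env, "(behavior-is spread-random)");
    EnvRun(env, -1L);                         /* fire rules until the agenda is empty */
    DestroyEnvironment(env);
    return 0;
}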

A program written in CLIPS consists of rules, facts, and objects. The inference engine decides which rules should be executed, and when. A rule-based expert system written in CLIPS is a data-driven program where the facts, and objects if desired, are the data that stimulate execution via the inference engine. CLIPS follows a pattern-matching process, where each rule searches for facts that satisfy the rule's conditions, placing the rule on the agenda. Rules become activated whenever all the patterns of a rule are matched by facts. The pattern matching is always kept current and occurs regardless of whether facts are asserted before or after a rule has been fired. The pattern-matching process also maintains and updates these rules after each execution cycle.

The rules are defined using defrule constructs. These rules control high-level decisions and can call upon C-compiled procedures. In case study 1, apply_skill() is the external module in our implementation, and it is used to dynamically generate various types of behavior using rules in the Snake and Legged robots. The rules have access to C-coded external modules (such as apply_skill), which control the individual modules of the robot. The decision to use different skills can be initiated by sensor values. For example, a light sensor connected to the head of the robot is responsible for the activation of a particular rule and thus a certain type of behavior. The pseudo code of the skill module apply_skill() is shown in Figure 4.13; it is a generic skill module which can be applied with different parameters.


The apply_skill() module has five parameters, as shown in Table 4.1, and it can be inserted as a fact and applied to the robot as shown below:

(apply_skill 1 2 10 1 11)

The above fact turns the motor in Unit 2 of Section 1 by 45 degrees clockwise with a motor Speed of 11. The fact can be applied dynamically with different parameters in order to generate different types of robot behavior. In practice, skills are executed based on various conditions. For example, if the robot is facing an obstacle, then it has to change its direction; in such cases, the reasoning layer may decide to move backward by firing the appropriate rules. This provides the robot with the ability to autonomously acquire new capabilities based on the selection of different skills. In this way, learning abilities, such as interacting with the environment using sensors and actuators, are incorporated into the proposed framework.
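For example, a light-sensor-triggered rule can be installed and exercised from C as sketched below. The rule name, fact name, and threshold are illustrative, not the exact ones in our knowledge-base; apply_skill is assumed to be registered via DefineFunction2 as in Figure 4.12.

#include <stdio.h>
#include "clips.h"

/* Install an illustrative rule: if the light value exceeds 50, apply
 * a turning skill to unit 1 of sections 1 and 2. */
void add_obstacle_rule(void *env)
{
    EnvBuild(env,
        "(defrule avoid-obstacle "
        "  (light-value ?v&:(> ?v 50)) "
        "  => "
        "  (apply_skill 1 1 10 0 10) "
        "  (apply_skill 2 1 10 0 10))");
}

/* Feed a sensor reading into the knowledge-base and run the engine. */
void report_light_value(void *env, int value)
{
    char fact[64];
    sprintf(fact, "(light-value %d)", value);
    EnvAssertString(env, fact);
    EnvRun(env, -1L);
}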


……
int main(int argc, char *argv[])
{
   void *theEnv;
   theEnv = CreateEnvironment();
   RerouteStdin(theEnv, argc, argv);
   CommandLoop(theEnv);
   return(-1);
}
…….
void UserFunctions()
{
   extern double apply_skill();
   extern double ibehf();
   extern double ibehb();
   extern double ibehr();
   extern double ibehl();
   extern double gbehf();
   extern double gbehb();
   extern double gbehr();
   extern double gbehl();
   DefineFunction2("apply_skill", 'v', PTIF apply_skill, "apply_skill", "55iiiii");
   DefineFunction2("ibehf", 'v', PTIF ibehf, "ibehf", "33iii");
   DefineFunction2("ibehb", 'v', PTIF ibehb, "ibehb", "33iii");
   DefineFunction2("ibehr", 'v', PTIF ibehr, "ibehr", "33iii");
   DefineFunction2("ibehl", 'v', PTIF ibehl, "ibehl", "33iii");
   DefineFunction2("gbehf", 'v', PTIF gbehf, "gbehf", "33iii");
   DefineFunction2("gbehb", 'v', PTIF gbehb, "gbehb", "33iii");
   DefineFunction2("gbehr", 'v', PTIF gbehr, "gbehr", "33iii");
   DefineFunction2("gbehl", 'v', PTIF gbehl, "gbehl", "33iii");
}
…….

Figure 4.12. Integrated CLIPS main() module.


apply_skill(){
   int Section, Unit, Dir, Angle, Speed;
   …….
   …….
   if(set_rotsen_handler(Unit, test2, Sign*Angle) …

Figure 4.13. Pseudo code of the apply_skill() skill module (fragment).

(apply_skill 1 1 10 1 10)
(apply_skill 1 2 10 0 10)
(apply_skill 2 1 10 1 10)
(apply_skill 2 2 10 0 10)
(apply_skill 1 1 20 0 10)
(apply_skill 1 2 20 1 10)
(apply_skill 2 1 20 0 10)
(apply_skill 2 2 20 1 10)
(apply_skill 1 1 20 1 10)
(apply_skill 1 2 20 0 10)
(apply_skill 2 1 20 1 10)
(apply_skill 2 2 20 0 10)
(apply_skill 1 1 10 0 10)
(apply_skill 1 2 10 1 10)
(apply_skill 2 1 10 0 10)
(apply_skill 2 2 10 1 10))

Figure 5.1. Robot move (single_step) rule.

Each group of four skills below is applied, in order, to Section-1 Unit-1, Section-1 Unit-2, Section-2 Unit-1, and Section-2 Unit-2, starting from the initial position:

(apply_skill 1 1 10 1 10) (apply_skill 1 2 10 0 10)
(apply_skill 2 1 10 1 10) (apply_skill 2 2 10 0 10)
   Unit 1 of sections 1 and 2 turns right at an angle of 45 degrees, and unit 2 turns left by the same angle.

(apply_skill 1 1 20 0 10) (apply_skill 1 2 20 1 10)
(apply_skill 2 1 20 0 10) (apply_skill 2 2 20 1 10)
   Unit 1 of sections 1 and 2 turns left at an angle of 90 degrees, and unit 2 turns right by the same angle.

(apply_skill 1 1 20 1 10) (apply_skill 1 2 20 0 10)
(apply_skill 2 1 20 1 10) (apply_skill 2 2 20 0 10)
   Unit 1 of sections 1 and 2 turns right at an angle of 90 degrees, and unit 2 turns left by the same angle.

(apply_skill 1 1 10 0 10) (apply_skill 1 2 10 1 10)
(apply_skill 2 1 10 0 10) (apply_skill 2 2 10 1 10)
   Unit 1 of sections 1 and 2 turns left at an angle of 45 degrees, and unit 2 turns right by the same angle, returning to the initial position.

Figure 5.2. Understanding the sequence of robot activities in each robot unit.

The robots used wheels to aid their movements. We tested the Snake robot for sine-wave movement and the Legged robot for gait movement. The Snake robot was able to generate different kinds of snake movements, as shown in Figure 5.3, including rectilinear, serpentine, and concertina movements. The different motion schemes (a, b, c, d) were generated with different angle sensor values. Table 5.1 lists the various angle sensor values for the different robot movements with 4 units, generated by setting different angle sensor values for the motors (M1-M4). The different movements were generated because different skill sets were applied to the robot. For example, the rectilinear type of movement (Figure 5.3.a) is considered to be a special skill for robots, and it can be dynamically changed into a serpentine locomotion skill (Figure 5.3.b). The robot can effectively apply its different skills based upon different situations and applications.

TABLE 5.1
ANGLE SENSOR VALUES FOR DIFFERENT SNAKE ROBOT MOVEMENTS

                  M1   M2   M3   M4
AS Value (5.3.a)   3    3    3    3
AS Value (5.3.b)   9    9    9    9
AS Value (5.3.c)  12    9    6    3
AS Value (5.3.d)   3    6    9   12

Figure 5.3. Different types of snake locomotion generated with different angle sensor values: (a) rectilinear; (b) serpentine; (c) and (d) concertina.

TABLE 5.2
ANGLE SENSOR VALUES FOR DIFFERENT FOUR-LEGGED ROBOT MOVEMENTS

                  M1   M2   M3   M4   Position
AS Value (5.4.b)  14   14   14   14   Forward
AS Value (5.4.c)  26   26   26   26   Backward
AS Value (5.4.d)  26   26   26   26   Forward
AS Value (5.4.a)  14   14   14   14   Initial

Figure 5.4. Sequence of movements of the Four-Legged robot.

In the Legged robot, leg movements proceed forward-backward instead of left-right as in the Snake robot. In the balance and forward approach, the robot's legs move forward and backward, but the distance moved in the forward direction is much greater than that moved backward. Moving backward is necessary for balance against gravitational forces. Greater forward distance movement can be achieved by providing a higher angle sensor value to the motors.

Figure 5.4 explains the sequence of leg movements in the Four-Legged robot. Figure 5.4(a) shows robot legs in their initial position, the legs moving in a forward direction with a greater angle sensor value (AS Value) in Figure 5.4(b), and in Figure 5.4(c) the legs moving backward in order to balance against gravitational forces with a lesser AS Value. This procedure continues until the robot reaches the desired location. Table 5.2 shows the various angle sensor values in robot units and the related movements.

5.1.2 Case Study 2: AIP with PPE and VPE with Reflective Coherence

For the second case study, we have implemented the AIP environment along with the PPE and VPE, as shown in Figure 5.5. The AIP screen shows an executing fact (gbehf 3 6 1), which is responsible for the navigation of three robots (group behavior) in a forward direction with Speed parameter = 6 and Distance parameter = 1. As soon as the above fact is asserted, the robots move in a forward direction, as shown in the PPE screen, and at the same time this is displayed in the VPE. The initial positions of the robots are assumed with a minimum gap, and the robot directions are set initially. The AIP approach provides selected navigation in a known environment; the entire environment can be programmed using a knowledge-base. The appropriate rules can also be fired if the robots are expected to return to their original positions. Figure 5.6 shows the example AIP module, written using rules, to test the interaction of the AIPE with the PPE and then the VPE. The exact navigation can be improved with precise calibration. The rules in AIP guide the robots to go in a particular predefined direction, and also guide the robots to think as a group. We have tested the robots traversing a known environment and returning to their initial positions using the autonomous reasoning module in AIP shown in Figure 5.6; the captured images of the movements in the PPE and VPE are shown in Figures 5.7 and 5.8, respectively (videos of the physical prototypes of the two case studies can be viewed at: http://rtlab.knu.ac.kr/robots.htm).

Figure 5.5. Implementation of AIP environment. (The AIP screen interacts with the virtual prototyping environment (VPE), which receives data from the physical prototyping environment (PPE).)

Figure 5.6 describes an AIP rule for group behavior, where 3 robots move forward, the first 2 robots then turn right by 90 degrees, the second robot turns left by 45 degrees, and the third robot moves backward. This rule is an example showing the flexibility of applying individual and group facts.


The rule shown in Figure 5.6 is fired when we assert the following:

> (assert (behavior-is spread-random))

In a typical movement with a speed parameter of 6 and a distance parameter of 10, the robot covers a distance of 1.5 meters. To map the transformation data from physical to virtual, we tested the physical navigation of the robots in a space of 100 cm × 100 cm, using the floor as a physical test bed. Initially, each robot was kept at a particular position (Xi, Yi), where 'i' is the robot number (for example, 1, 2, 3). The positions of the robots in the VPE are calculated based on the Dist and Speed parameters provided by the AIP rule. Figures 5.7 and 5.8 show the captured screens of the navigation in the physical and virtual environments following the AIP module presented in Figure 5.6.

(defrule group_behavior
   (behavior-is spread-random)
   =>
   (gbehf 3 6 1)   ; All 3 robots move forward with speed = 6 and distance value = 1
   (gbehr 2 6 4)   ; First 2 robots turn right with angle = 90 degrees and speed = 6
   (ibehl 2 6 2)   ; The second robot turns left with angle = 45 degrees and speed = 6
   (ibehb 3 6 2))  ; The third robot moves backward with speed = 6 and distance value = 2

Figure 5.6. Example of AIP module for group and individual robot behavior generation.


Figure 5.7. PPE navigation of robots following the AIP rule shown in Figure 5.6: (a) initial position, (b) all 3 robots moving forward (gbehf 3 6 1), (c) robots 1 and 2 turning right by 90 degrees (gbehr 2 6 4), (d) robot 2 turning left by 45 degrees (ibehl 2 6 2), and (e) robot 3 moving backward (ibehb 3 6 2).

5.2 Real-Time Simulation

To enhance the capability of the VPE, we have proposed the concept of Discrete Step in Prototyping (DSP), an approach of sending information packets at discrete intervals from the PPE to the VPE and vice versa. However, we have implemented only simple behavior generation from the AIPE to the PPE to the VPE, rather than all possible combinations of direct and indirect bidirectional prototyping as mentioned in Section 3. The data packets are sent dynamically between the PPE and VPE, so that the VPE shows the actual information about the PPE.

Figure 5.8. VPE navigation of robots following the AIP rule and the PPE shown in Figures 5.6 and 5.7, respectively: (a) initial position, (b) all 3 robots moving forward (gbehf 3 6 1), (c) robots 1 and 2 turning right by 90 degrees (gbehr 2 6 4), (d) robot 2 turning left by 45 degrees (ibehl 2 6 2), and (e) robot 3 moving backward (ibehb 3 6 2).

As most of the robots can navigate only in 2-dimensional space, only forward, backward, left, and right movements are considered in AIP, along with their corresponding facts. In the VPE, we assume that the robots R1, R2, and R3 start from initial positions (X1, Y1), (X2, Y2), and (X3, Y3) and move to final positions (X11, Y11), (X22, Y22), and (X33, Y33), respectively. In every single DSP, data is sent to the GLG simulation server at fixed time intervals. The GLG server program immediately computes the corresponding location from the VPE data received from the PPE and displays the robotic actions on the screen, as shown in Figure 5.8.
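A minimal sketch of the DSP sender is given below; the server address, port, step interval, and raw-byte packet format are assumptions made only for illustration.

#include <stddef.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Send one buffer (e.g., an encoded AIP packet) to the GLG simulation
 * server once per discrete step. */
int dsp_send_loop(const void *pkt, size_t len, int steps)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in srv;

    memset(&srv, 0, sizeof(srv));
    srv.sin_family = AF_INET;
    srv.sin_port = htons(5000);                      /* assumed port */
    srv.sin_addr.s_addr = inet_addr("127.0.0.1");    /* assumed host */
    if (s < 0 || connect(s, (struct sockaddr *)&srv, sizeof(srv)) < 0)
        return -1;

    for (int i = 0; i < steps; i++) {
        send(s, pkt, len, 0);        /* one packet per DSP step */
        usleep(100 * 1000);          /* fixed 100 ms interval (assumed) */
    }
    close(s);
    return 0;
}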


5.3 Individual and Group Facts Performance Evaluation

The AIP environment is also tested for performance by varying parameters such as Speed (of the motors) and the number of robots (RG). We tested the goal completion time with different facts: individual, subgroup, and group. The testing calculates the elapsed time from the AIPE to the PPE and then from the PPE to the VPE. Since the PP and VP are executed in different environments, we calculated the times separately. This testing was done only for the second case study, where the robots' locomotion time and distance were recorded on a 100 cm × 100 cm floor test bed. The robots were made to traverse by firing individual (ibehf) and group (gbehf) facts with varied Speed and RG parameters, and the elapsed times were recorded in both the PP and VP environments. The results are shown in Tables 5.3, 5.4, and 5.5. Distance is measured in centimeters (cm) and time in milliseconds (ms).
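The elapsed times were obtained with timestamps taken around each fact execution. A minimal sketch of such a measurement, with a hypothetical fire_fact_and_wait() helper, is given below.

#include <stdio.h>
#include <sys/time.h>

static double now_ms(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
}

extern void fire_fact_and_wait(const char *fact);   /* hypothetical helper */

void measure_ppt(void)
{
    double t0 = now_ms();
    fire_fact_and_wait("(gbehf 3 6 1)");   /* fire the fact, wait for the goal */
    printf("PPT = %.3f ms\n", now_ms() - t0);
}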

Figure 5.9. The PP testing environment of AIP.


TABLE 5.3
INDIVIDUAL FACT PERFORMANCE WITH VARIABLE SPEED

fact            Speed  PPD(R1)(cm)  PPT(ms)  VPTT(ms)  VPT(ms)
(ibehf 1,1,10)  1      120          10003    17.985    10022.368
(ibehf 1,2,10)  2      135          10010    18.198    10029.447
(ibehf 1,3,10)  3      141          10009    17.980    10028.918
(ibehf 1,4,10)  4      144          10004    18.101    10023.351
(ibehf 1,5,10)  5      150          10006    17.990    10025.365
(ibehf 1,6,10)  6      154          10003    17.898    10022.191

TABLE 5.4
GROUP FACT PERFORMANCE WITH VARIABLE SPEED AND RG=2

fact            Speed  PPD R1(cm)  PPD R2(cm)  APPD(cm)  PPT(ms)  VPTT(ms)  VPT(ms)
(gbehf 2,1,10)  1      120         124         122.00    20013    21.063    20035.431
(gbehf 2,2,10)  2      138         133         135.50    20015    21.438    20037.885
(gbehf 2,3,10)  3      142         146         144.00    20017    21.687    20040.605
(gbehf 2,4,10)  4      148         148         148.00    20015    21.703    20038.054
(gbehf 2,5,10)  5      150         149         149.50    20020    21.796    20043.161
(gbehf 2,6,10)  6      154         150         152.00    20017    21.813    20040.004

TABLE 5.5
GROUP FACT PERFORMANCE WITH VARIABLE SPEED AND RG=3

fact            Speed  PPD R1(cm)  PPD R2(cm)  PPD R3(cm)  APPD(cm)  PPT(ms)  VPTT(ms)  VPT(ms)
(gbehf 3,1,10)  1      120         127         125         124.00    30030    32.614    30063.677
(gbehf 3,2,10)  2      136         131         137         134.67    30030    31.187    30062.625
(gbehf 3,3,10)  3      141         147         142         143.34    30028    31.203    30060.890
(gbehf 3,4,10)  4      145         148         147         146.67    30025    30.953    30057.656
(gbehf 3,5,10)  5      152         150         152         151.34    30027    32.875    30061.671
(gbehf 3,6,10)  6      154         151         156         153.67    30021    31.665    30053.783

Figure 5.10. The performance results of ibehf() with varying speed: (a) Speed versus PPD, (b) Speed versus PPT and VPT.

As shown in Tables 5.3, 5.4, and 5.5, PPD represents the distance covered in the physical prototyping environment, PPT represents the time elapsed in the PPE, VPT indicates the total elapsed time in the VPE, and VPTT indicates the parameter transformation processing time in the VPE. The VPTT includes the TCP socket connection time from the PPE to the VP server, reading the input buffer of incoming PP packets, and the transformation of the Speed and Dist parameters (from the PPE) into (Xi, Yi) co-ordinates. The VPT comprises the VPTT and the PPT, plus a few milliseconds of execution delay; for example, in the first row of Table 5.3, VPT (10022.368 ms) ≈ PPT (10003 ms) + VPTT (17.985 ms) plus about 1.4 ms of delay. The PPT includes the time starting from the firing of the fact (or rule), the parameter transfer to the individual or group robots through the ZigBee coordinator, the establishment of the connection with the VP server, and the parallel PPE data transfer to the VPE.

Table 5.3 shows the performance of the individual fact (ibehf) with the Speed parameter varied from 1 to 6; this speed range is based on the duty-cycle clock. From the analysis of the results, it is very clear that the distance covered by the robots is proportional to the speed when applying individual as well as group facts, as shown in Figures 5.10(a), 5.11(a), and 5.12(a).

Figure 5.11. The performance results of gbehf() with varying speed and RG=2: (a) Speed versus PPD, (b) Speed versus PPT and VPT.

The testing was carried out in ideal conditions, with fully charged batteries, and assuming no change in environmental conditions. The ZigBee broadcasting range was tested with a single coordinator node and three end nodes carried by the mobile robots. From the results, the maximum distance at which a ZigBee end node can join the single coordinator is 8 meters. After joining within the 8-meter zone, the end nodes (carried by the robots) continue to work up to 15 meters (7 meters more). The coordinator is fixed and located 1 meter above the ground.

The performance of individual fact (ibehf) execution is shown in Table 5.3, and the related graphs are shown in Figure 5.10. Figure 5.10(b) shows the time elapsed in the PPE (PPT) and VPE (VPT) environments. It is very clear that VPT is higher than PPT, as it includes the VPTT. The results of the testing carried out for group behavior are shown in Tables 5.4 and 5.5. These tests were carried out for a subgroup fact (with RG=2), as shown in Figure 5.11, and a complete group fact (with RG=3), as shown in Figure 5.12.

Figure 5.12. The performance results of gbehf() with varying speed and RG=3: (a) Speed versus PPD, (b) Speed versus PPT and VPT.

In the group/subgroup fact testing, initially one robot is made to finish its goal, and later its subordinates are made to follow this first, leader robot; this is a kind of leader-follower action. APPD shows the average PPD covered by the robots. Figure 5.13 shows the comparison of the different VPTTs while executing different facts. By analyzing these results, we can note that the time elapsed in executing different facts is directly proportional to the number of physical and virtual prototyping primitives.


Figure 5.13. Comparison of different VPTT while executing individual and group facts.


5.4 Evaluation of Physical and Behavioral Constraints

Table 5.6 shows the results of the group forward behavior in both the VPE and PPE environments, considering limited physical and behavioral constraints. This shows the limitations of the PPE environment when compared to the VPE or to ideal conditions. For example, after firing the fact (gbehf 3 5 4), the distance covered in the physical environment is 81 cm instead of the expected 90 cm, an error of 10%. With this, PPE engineers can analyze the physical limitations of the physical prototype in order to correct its errors. Figure 5.14 shows the error of the physical movements relative to the expected ideal coverage distance.

TABLE 5.6
VIRTUAL AND PHYSICAL CONSTRAINTS EVALUATION WITH VARIABLE SPEED AND RG=3

fact           Speed  PPD(PPE)(cm)  VPD(VPE)(cm)  Ideal Coverage(cm)
(gbehf 3,1,4)  1      22            2.5           25
(gbehf 3,2,4)  2      48            5.0           50
(gbehf 3,3,4)  3      63            7.0           70
(gbehf 3,4,4)  4      73            8.5           85
(gbehf 3,5,4)  5      81            9.0           90
(gbehf 3,6,4)  6      86            9.5           95


Figure 5.14. Comparison of distance covered in physical prototypes with expected ideal coverage distance while executing group facts.

Figure 5.15 shows an example rule to test the traversal of robot-1 by applying different individual facts. The goal of this rule is to traverse the robot in a square-wave fashion starting from position (10, 10) in a grid of 100 cm × 100 cm. Table 5.7 summarizes the sequence of movements starting from the initial position (10, 10). Figure 5.16 shows the related robot-1 traversal in both the VPE and PPE environments. As is clear from Figure 5.16, the VPE shows the perfect behavior, traversing the expected path in a square-wave sequence; in the PPE, however, the traversal deviates from the expected positions because of the physical limitations of the robots. In this way, prototyping engineers can identify the physical and behavioral constraints of the developing system.
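The deviation can be quantified as the Euclidean distance between the expected (VPE) and measured (PPE) positions. The sketch below computes it for the final row of Table 5.7, reading both co-ordinate pairs in centimeters on the common axes of Figure 5.16 (an assumption about the unit labels in the table).

#include <math.h>
#include <stdio.h>

/* Euclidean distance between the expected (VPE) and measured (PPE)
 * positions of a robot, in centimeters. */
static double position_error_cm(double vx, double vy, double px, double py)
{
    return sqrt((vx - px) * (vx - px) + (vy - py) * (vy - py));
}

int main(void)
{
    /* Final row of Table 5.7: VPE (70, 30) versus PPE (76, 48). */
    printf("final position error = %.1f cm\n",
           position_error_cm(70, 30, 76, 48));   /* about 19.0 cm */
    return 0;
}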


(defrule beh_sequence
   (beh sq_wave)
   =>
   (ibehf 1 2 3)
   (ibehr 1 2 4)
   (ibehf 1 2 3)
   (ibehr 1 2 4)
   (ibehf 1 2 3)
   (ibehl 1 2 4)
   (ibehf 1 2 3)
   (ibehl 1 2 4)
   (ibehf 1 2 3)
   (ibehr 1 2 4)
   (ibehf 1 2 3))

Figure 5.15. The (beh_sequence) rule.

TABLE 5.7
EVALUATION OF BEHAVIORAL AND PHYSICAL CONSTRAINTS USING THE RULE SHOWN IN FIGURE 5.15

fact              VPE-x (×10cm)  VPE-y (×10cm)   PPE-x (cm)  PPE-y (cm)
Initial Position  10             10              10          10
(ibehf 1,2,3)     10             30              15          28
(ibehr 1,2,4)     Turn Right 90 degrees
(ibehf 1,2,3)     30             30              33          33
(ibehr 1,2,4)     Turn Right 90 degrees
(ibehf 1,2,3)     30             10              42          17
(ibehl 1,2,4)     Turn Left 90 degrees
(ibehf 1,2,3)     50             10              58          23
(ibehl 1,2,4)     Turn Left 90 degrees
(ibehf 1,2,3)     50             30              62          38
(ibehr 1,2,4)     Turn Right 90 degrees
Final Position    70             30              76          48

[Figure 5.16: traversal plot. Axes: X-Distance (cm, 0-80) versus Y-Distance (cm, 0-60); series: VPE (x10 cm) and PPE (cm) paths.]

Figure 5.16. Comparison of Robot-1 traversal in VPE and PPE after firing the (beh_sequence) rule. The rule shown in Figure 5.15 can be fired by asserting the (beh sq_wave) fact.


5.5. Advantages and Limitations

In most earlier product-development approaches, designers described the system as a combination of software and hardware components. In this thesis, we instead describe the system under development in terms of rules and facts corresponding to its behaviors, and we explore the various testing possibilities this opens up. This also helps developers to delineate the system boundaries easily and gives a customer edge by satisfying customers' needs. For example, in the domain-specific case described in this thesis, the architecture is highly useful for manipulating a community of ubiquitous devices or robots spread over a large geographical area with the support of a WSN: a single fact is sufficient to gather all the community members together or to make a member behave as a leader or a follower. A further advantage is easy simulation of robot navigation in both the VPE and PPE without the use of any major sensors. The remainder of this section describes practical applications during complex behavior generation, considering the domain-specific architecture described in this dissertation. The present architecture, however, does not support collision management; collision detection and avoidance would be possible with ultrasonic proximity detectors, but these would make the robots bulky.

In summary, AIP describes the system in terms of behaviors, or facts, so that it can be modeled as a knowledge-based system. AIP can interact with both the VPE and PPE simultaneously and autonomously, which frees designers from repeatedly monitoring the development process. Designers can also monitor the status of development, or of system components such as sensors and actuators, from remote sites using the VPE. AIP allows the user to program rules and to explore a variety of possible new behaviors of the developing system.

In addition, AIP is close to the real-world implementation, and AIP tools can help end users explore and test system behaviors. What ultimately remains in the customer's mind is the flexibility of the product's behavior rather than the product itself, so AIP helps to reduce the gap between the prototype and the real-world implementation.
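As a minimal sketch of the "single fact" idea above (the trigger fact (command gather) and the parameter values are illustrative, not part of the implemented fact set), one rule can command the whole community through a single group primitive:

(defrule gather_all
   (command gather)                      ; hypothetical trigger fact
   =>
   (gbehf 3 4 4)                         ; one group fact moves all three robots (RG=3) forward
   (printout t "Community gathered" crlf))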

5.5.1. Flexibility in Dynamic Skill Selection

Behaviors are executed in parallel, and this applies at all levels, from behavior design to hardware implementation. The proposed architecture allows the developing system to adapt its behaviors in order to manage failures and unexpected events, which is important for real-time, response-based applications. The necessary information is received through the sensors.

In order to perform high-level goals, the robot must be capable of autonomous reasoning about the information it has regarding the outside world. In such cases, a flexible reasoning mechanism can operate in a practical problem domain to ensure the proper functioning of the mobile robot. As shown in Figure 5.17, the CLIPS skill reasoning manager is a decision engine that selects the type of locomotion needed (i.e., sinusoidal or legged). Depending on the decision results, a behavior is enabled (in software terms, the corresponding thread or program modules are executed) or disabled. The disabled-behaviors block indicates behaviors that are not currently active, while the enabled-behaviors block indicates active behaviors. These modules are responsible for generating purposeful locomotion. Each behavior block lists all the skills required by the mobile robot. The basic building block of behavior-based control is a skill; combinations of skills generate the required behavior. New sets of behaviors are stored in the knowledge base and called by the reasoning manager, and the selection of skills is based on the robot's context and the required behavior. This architecture adds the flexibility of choosing different skills and behaviors dynamically as the robot operates, which yields two main advantages: flexibility in robot configuration (e.g., from snake to four-legged robot), and the ability to choose different types of behaviors dynamically (e.g., from backward to forward motion).

Figure 5.18 describes the role of skill selection during reconfiguration. Let the total number of skills for the snake and legged robots be 10, i.e., {S1, S2, S3, S4, S5, S6, S7, S8, S9, S10}. We assume that {S1, S2, S3, S4} are skills related to the legged robot and {S5, S6, S7, S8} to the snake robot. In this situation, all skills except {S9, S10} are enabled by the reasoning manager. It is also assumed that the value (lsv) of the light sensor connected to the robot is responsible for switching the role from legged to snake robot and vice versa. The rules shown in Figure 5.18 are then applied to manage the reconfiguration.


[Figure 5.17: block diagram. A CLIPS skill reasoning manager (decision making) sits above two behavior pools: enabled robot behaviors (e.g., Behaviors B and D) and disabled robot behaviors (e.g., Behaviors A, C, X, and Y). Behaviors draw on a set of skills (Skill-1, Skill-2, Skill-3, ..., Skill-M), which connect to sensors (S) and actuators (A).]

Figure 5.17. Skill reasoning manager for behavior management.

(defrule fire_legged_skills
   (get_light_sensor ?lsv)
   =>
   (if (< ?lsv 200) then                 ; lsv < 200: legged movements
      ; legged skills S1-S4 applied here (calls assumed analogous to those below)
      (printout t "Now Legged Movement" crlf))   ; message text assumed
   (bind ?lsv 0))

(defrule fire_snake_skills               ; rule name assumed
   (get_light_sensor ?lsv)
   =>
   (if (> ?lsv 200) then                 ; lsv > 200: sinusoidal movements
      (apply_skill 1 1 9 1 10)           ; S5, S6
      (apply_skill 1 2 9 0 10)
      (apply_skill 2 1 9 1 10)           ; S7, S8
      (apply_skill 2 2 9 0 10)
      (printout t "Now Sinusoidal Movement" crlf))
   (bind ?lsv 0))

Figure 5.18. Rules describing the role of skill selection during reconfiguration.
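Because these rules match get_light_sensor as an ordinary fact, a simulated reading can be injected from the CLIPS shell (the value 250 is illustrative; with the rules above, any value greater than 200 should fire the sinusoidal branch):

CLIPS> (assert (get_light_sensor 250))
CLIPS> (run)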


5.5.2. Community Behaviors with Variable Priorities

In unexpected situations, intelligent actions may be required from only a few sections or members of the robot community. The proposed AIP method provides scope for changing the roles of robots or sections by assigning different priorities to different behavior patterns, which greatly enhances the power of robot control and behavior. For example, in some cases only one robot (robot 3) may need to move forward (leader actions) while the rest of the robots (1 and 2) go backward (follower actions). Figure 5.19 describes the role of priority in rule firing with a simulated light-sensor value: one robot performs the forward action, and the remaining robots move backward to allow the leader robot to act.

(deffacts init
   (priority first)
   (priority second))

(defrule leader_actions                  ; higher-priority rule, leader actions
   (declare (salience 300))              ; with salience declaration
   (priority first)
   (get_light_sensor ?lsv)
   =>
   (if (< ?lsv 192) then                 ; threshold branch assumed
      ; leader action: Robot 3 moves forward (specific call assumed, e.g. (ibehf 3 4 4))
      (printout t "Leader is Moving Forward" crlf))   ; message text assumed
   (bind ?lsv 0))

(defrule follower_actions                ; lower-priority rule, follower actions (header assumed)
   (priority second)
   (get_light_sensor ?lsv)
   =>
   (if (> ?lsv 192) then
      (gbehb 2 4 4)                      ; Robots 1 and 2 move backward with speed 4 and distance 4
      (printout t "Followers are Moving Backward" crlf))
   (bind ?lsv 0))

Figure 5.19. Group behaviors with variable priorities.

When multiple activations are on the agenda, CLIPS automatically determines which activation is appropriate to fire.
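In CLIPS, salience values range from -10000 to 10000 with a default of 0; among activations of equal salience, the shell's current conflict-resolution strategy (depth, by default) decides the firing order.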

5.5.3. Flexibility in Dynamic Reconfiguration of Behaviors

The proposed architecture supports flexibility in adapting different group behaviors within the community to adjust to and manage unexpected events and failures, which is important in real-time response applications. Figure 5.20 describes the switching of roles from disciplined behavior to random behavior. Here it is assumed that the light-sensor values (lsv) of the light sensors embedded in the robots are responsible for switching roles between disciplined actions (moving forward only) and random actions (such as left, right, and forward movements with different Speed and Angle parameters).

(defrule fire_disciplined_actions
   (get_light_sensor ?lsv)
   =>
   (if (< ?lsv 200) then                 ; lsv < 200: disciplined actions (branch assumed)
      ; disciplined action: the community moves forward only (specific call assumed, e.g. (gbehf 3 4 4))
      (printout t "Performing Disciplined Actions" crlf))   ; message text assumed
   (bind ?lsv 0))

(defrule fire_random_actions             ; rule name assumed
   (get_light_sensor ?lsv)
   =>
   (if (> ?lsv 200) then                 ; lsv > 200: random actions
      (gbehf 3 4 5)
      (gbehr 3 4 4)
      (printout t "Performing Random Actions" crlf))
   (bind ?lsv 0))

Figure 5.20. Rules describing dynamic reconfiguration of behaviors.


CHAPTER 6

CONCLUSIONS AND FUTURE WORK


In the traditional ERTS development methodology, designers described the target system's behavior and constraints only through documentation (e.g., requirement specifications), and it was impossible to check all the physical and behavioral limitations of the target system effectively in the early stages of development. For this reason, ERTS development projects were always time consuming. In this thesis, we propose a mechanism for defining physical constraints and system behaviors in the early stages of the ERTS development life-cycle by adopting the AIP concept. Earlier prototyping approaches, moreover, focused only on functional interaction within the system and on appearance rather than behavior, yet system behavior is the most important criterion for determining flexibility, usability, and reactiveness in ERTS applications. We present a methodology for mapping such behaviors into facts and rules, which can be expressed simultaneously in corresponding virtual and physical entities in the VP and PP environments. In this way, most decisions during the manufacturing process can be made autonomously rather than through repeated monitoring or manual intervention.

In this thesis, we present a novel approach to rapid prototyping, called Autonomous Integrated Prototyping (AIP), that combines virtual and physical prototyping environments, provided with reflective coherence, with a knowledge-based system. AIP enhances the system-development process by supporting VP and PP autonomously. The advantages of an AIP environment are that not only can the correctness of an ERTS model be tested, but the performance of the system can also be evaluated, and different control schemas can be tested and experimented with easily.


Moreover, rules and facts can be written to test the behavior of the system together with neighboring or interacting systems. In summary, the fundamental idea of this thesis is to enhance rapid prototyping with autonomous decision making, thereby reducing time-to-market effectively.

The thesis especially targets embedded real-time systems and robotics, which pose numerous challenges to designers because of the complexity of the behaviors involved in their heterogeneous development methods and because they comprise numerous components such as sensors and actuators. The fundamental advantages of AIP can be summarized as intelligent modeling of VP and PP with reflective coherence, autonomous prototyping using a knowledge-based production system, and simultaneous analysis of multiple physical and virtual prototypes. The proposed approach is demonstrated with a domain-specific AIP toolkit developed for multi-shaped robots and for mobile robots in a WSN environment, in which complex behaviors were generated using rules. We used the real-time communication protocol CAN as a sensor network to interconnect the sensors and actuators in the robots. The CLIPS expert system was found to be the most appropriate expert system for the proposed architecture because of its portability, integration, extensibility, interactive development, and cost effectiveness.

The thesis does not deal with a practical comparison against other prototyping systems, such as standalone VP and PP, as these have been in use for many years. Applications of AIP in different manufacturing settings are also not discussed, and we have not yet studied all embedded real-time systems and their behavioral representations. In the future, we hope to classify devices and different embedded real-time systems according to their behavioral patterns. In addition, in the current version the CLIPS shell is used to fire the rules, which is not very user friendly for customers. We therefore plan to develop an easier user interface for firing, asserting, and retracting rules and facts, so that different VP and PP primitives can be reconfigured and tested easily.



An Integrated Knowledge-Based Prototyping Methodology for Verifying the Physical Environment and Behavior Patterns of Embedded Real-Time Systems

Laxmisha Rai

Department of Electronics, Major in Information and Communication Engineering, The Graduate School, Kyungpook National University

(Supervised by Prof. Soon Ju Kang)

Abstract (in Korean)

Prototyping is the best way to minimize the development risk of embedded systems. In most new-product development processes, virtual prototyping (VP) and physical prototyping (PP) are part of the development life-cycle, and engineering complexity is now increasing to the point where problems can be solved only by combining artificial intelligence (AI) with modeling and simulation techniques. Consequently, many kinds of new-product development sites require an intelligent development environment for efficient implementation.

When automated decision making and analysis are introduced into the stages of rapid-prototyping-based product development, time-to-market can be reduced substantially compared with traditional development methodologies; applying such automation, however, requires a new prototyping method. This thesis therefore presents a method that maximizes the advantages of both methodologies by integrating virtual and physical prototyping into a knowledge-based system (KBS) on the basis of reflective coherence. In this thesis, this method is called Autonomous Integrated Prototyping (AIP). For AIP, we developed an intelligent system architecture that facilitates the automated development of VP and PP prototypes in the early stages of the embedded-system development process and allows the two approaches to proceed simultaneously. Reflective coherence between the virtual and physical prototypes, maintained through the KBS, makes it possible to investigate new behavior patterns of the system under development and to analyze behavioral differences caused by the differences between VP and PP. With traditional design methods, most non-real-time elements cannot be replaced easily, and expertise can be acquired only through learning and experience; therefore, without intelligent methods it is impossible to build an efficient automation architecture that can conceive a product reflecting customer requirements within a limited time. To this end, the proposed method has the autonomous ability to learn about and plan the product development process using the KBS, and it helps select the best option even in non-real-time problem domains. In this way, the KBS serves as an essential element of the expert-system-based prototyping methodology for efficiently developing embedded real-time system products.

This thesis considers applications of the proposed method in embedded real-time systems and robotics. These fields are likely to combine various development methods, often employ large numbers of sensors and actuators, and therefore exhibit highly complex behavior patterns, posing many problems that designers must solve. In such application fields, AIP enables intelligent modeling of VP and PP based on reflective coherence, automated prototyping using a knowledge-based production system, and simultaneous analysis of multiple physical and virtual prototypes. The proposed method was demonstrated with an AIP toolkit for developing multi-shaped robots and mobile robots in a WSN environment. We used the real-time communication protocol CAN as the sensor network interconnecting the sensors and actuators within the robots, and for the demonstration complex behavior patterns were generated according to the rules of the KBS. The CLIPS expert system was adopted in the proposed approach in consideration of its portability, integration, extensibility, interactive development, and cost effectiveness.

ACKNOWLEDGEMENTS

I wish to acknowledge my sincere gratitude to all those who have assisted me during my Ph.D. program.

First of all, I am grateful to my adviser, Prof. Soon Ju Kang, Ph.D., for his unflinching support, valuable assistance, and ideas at every stage of my research in the Ph.D. program. His insight into scientific research and the way he carries it out have greatly inspired me and will continue to guide me throughout my career. As my supervisor, he constantly encouraged me to remain focused on achieving my goal. His patience, inspiring comments, observations, and superb managerial skills inspired me to perform beyond my capacity. I am greatly thankful to him for the opportunity to work with him and his talented team of researchers.

My special thanks to Prof. Heung-Moon Choi, Prof. You Ze Cho, Prof. Jong Hee Park, Prof. Dong Ho Lee, Prof. Kwang Seon Ahn, Prof. Sangwook Kim, Prof. Miyoung Shin, Prof. Woo Jin Lee, and Prof. Sam Myo Kim of the School of EECS, Kyungpook National University, for their teaching and valuable guidance during my Ph.D. program.

I would like to express my appreciation for the encouragement and support of Prof. Bae Geon-seong (Dean, EECS), Prof. Young-Ki Cho, Prof. Hong-Bae Park, and Prof. Jong Tae Park of the School of EECS, KNU; Prof. Byung-Man Kim of Kumoh National Institute of Technology; Lee Sung Won (CTO of Qian Hong Research and Consulting Company Ltd., Beijing); Mo Young Ju of the Korea IT Industry Promotion Agency (Beijing, China); Prof. S. Rajaram, Prof. K.C. Navada, Prof. Harischandra Hebbar, and Manamohana K of Manipal University (India); Prof. Mohan Kumar V (TAPMI, Manipal); Dr. Ajith Kumar (KNU); Ms. Gao Yan (Henan, China); Mr. Siddalingappa (IBM, India); and Mr. Sumesh P (Motorola, India), for directly or indirectly helping at various stages of my Ph.D. program.

I acknowledge the support of Kyungpook National University's graduate department, the Korean government's IITA (Institute of Information Technology Assessment) scholarship committee, and the ITRC (Information Technology Research Center) support program supervised by the IITA, Ministry of Information and Communication, Republic of Korea, for providing scholarships and project funding during my doctoral studies.

I would like to thank all my colleagues in the Real-Time Systems Lab: Gi-Hoon Jung, Dae-Ho Bae, Dong-Kyu Lee, Jae-Shin Lee, Sung-Ho Park, Tae-Hyeon Kim, Baek-Gyu Kim, Yoon-Mo Yeon, Kyung-Min Han, Dinh Trong Thuy, Byung-Chul Kim, Moo-Jin Kang, and Woo-Jung Kim, and the alumni members: Dr. Jun-Ho Park, Hwa-Yeong Chae, Hyo-Mun Jung, Myung-Jin Lee, Ju-Yong Oh, Seung-Ryun Lee, Won-Eui Hong, and Dong-Hyouk Kim, as well as the staff members of the graduate office of the School of EECS and the International Affairs Office at Kyungpook National University, who have been kind enough to advise and help in their respective roles.


Last, but not least, I would like to thank my teachers, classmates, friends, parents, my brothers Neeraj and Dinesh, and my family members for their constant support and moral encouragement since my primary school days.

-Laxmisha Rai 26 June 2008

