Blekinge institute of technology Dissertation series no. 2004:05
On the nature of open computational systems
ONLINE ENGINEERING
MARTIN FREDRIKSSON
Department of interaction and system design Blekinge institute of technology Sweden
Blekinge institute of technology Dissertation series No. 2004:05 ISSN 1650–2159 ISBN 91–7295–045–5 Published by Blekinge institute of technology © Martin Fredriksson, 2004 Jacket illustration – In the loop – by Societies of computation laboratories © Tomas Sareklint, 2004 Printed by Kaserntryckeriet Karlskrona, Sweden, 2004
Dedicated to Sophia, my family, and the engineers at Societies of computation laboratories.
This thesis is submitted to the Faculty of technology at Blekinge institute of technology, in partial fulfillment of the requirements for the degree of Doctor of philosophy in computer science.
Contact information Martin Fredriksson Department of interaction and system design School of engineering Blekinge institute of technology Box 520 372 25 Ronneby Sweden
Online engineering
ABSTRACT
Computing has evolved from isolated machines, providing calculative support of applications, toward communication networks that provide functional support to groups of people and embedded systems. Perhaps one of the most compelling features and benefits of computers is their overwhelming computing efficiency. Today, we conceive distributed computational systems of an ever-increasing sophistication, which we then apply in various settings – critical support functions of our society just to name one important application area. The spread and impact of computing, in terms of so-called information society technologies, has obviously gained a very high momentum over the years and today it delivers a technology that our societies have come to depend on. To this end, concerns related to our acceptance of qualities of computing, e.g., dependability, are increasingly emphasized by users as well as vendors. An indication of this increased focus on dependability is found in contemporary efforts of mitigating the effects from systemic failures in critical infrastructures, e.g., energy distribution, resource logistics, and financial transactions. As such, the dependable function of these infrastructures is governed by means of more or less autonomic computing systems that interact with cognitive human agents.
However, due to intricate system dependencies as well as being situated in our physical environment, even the slightest – unanticipated – perturbation in one of these embedded systems can result in degradations or catastrophic failures of our society. We argue that this contemporary problem of computing is mainly due to our own difficulties in modeling and engineering the involved system complexities in an understandable manner. Consequently, we have to provide support for dependable computing systems by means of new methodologies of systems engineering. From a historical perspective, computing has evolved, from being supportive of quite well defined and understood tasks of algorithmic computations, into a disruptive technology that enables and forces change upon organizations as well as our society at large. In effect, a major challenge of contemporary computing is to understand, predict, and harness the involved systems' increasing complexity in terms of constituents, dependencies, and interactions – turning them into dependable systems. In this thesis, we therefore introduce a model of open computational systems, as the means to convey these systems' factual behavior in realistic situations, but also in order to facilitate our own understanding of how to monitor and control their complex interdependencies.
Moreover, since the critical variables that govern these complex systems' qualitative behavior can be of a very elusive nature, we also introduce a method of online engineering, whereby cognitive agents – human and software – can instrument these open computational systems according to their own subjective and temporal understanding of some complex situation at hand.
TABLE OF CONTENTS

PREFACE ..... V

Part 1 INTRODUCTION

Chapter 1 OUTLINE OF THESIS ..... 1
1.1 Introduction ..... 1
1.2 Challenges in dependable computing ..... 2
1.3 Contributions from the author ..... 4
1.4 Guidelines to the reader ..... 6
1.5 Concluding remarks ..... 8

Chapter 2 DEPENDABLE COMPUTING SYSTEMS ..... 11
2.1 Introduction ..... 11
2.2 General concerns ..... 13
2.3 Cognitive agents ..... 16
2.4 Concluding remarks ..... 21

Chapter 3 METHODOLOGY OF COMPUTING ..... 23
3.1 Introduction ..... 23
3.2 Framework of instruments ..... 25
3.3 Principles ..... 28
3.4 Models ..... 30
3.5 Methods ..... 33
3.6 Technologies ..... 35
3.7 Concluding remarks ..... 36

Part 2 THEORY

Chapter 4 ISSUES OF COMPLEXITY ..... 41
4.1 Introduction ..... 41
4.2 Evolution of systems ..... 43
4.3 Isolation ..... 45
4.4 Adaptation ..... 47
4.5 Validation ..... 49
4.6 Concluding remarks ..... 52

Chapter 5 OPEN COMPUTATIONAL SYSTEMS ..... 55
5.1 Introduction ..... 55
5.2 Model for isolation ..... 60
5.3 Environment ..... 62
5.4 Domain ..... 65
5.5 System ..... 68
5.6 Fabric ..... 70
5.7 Concluding remarks ..... 73

Part 3 PRACTICE

Chapter 6 ONLINE ENGINEERING ..... 77
6.1 Introduction ..... 77
6.2 Method of adaptation ..... 80
6.3 Articulation ..... 82
6.4 Construction ..... 84
6.5 Observation ..... 86
6.6 Instrumentation ..... 88
6.7 Concluding remarks ..... 89

Chapter 7 ENABLING TECHNOLOGIES ..... 93
7.1 Introduction ..... 93
7.2 Architecture for validation ..... 95
7.3 SOLACE ..... 97
7.4 DISCERN ..... 100
7.5 Concluding remarks ..... 102

Part 4 CONCLUSION

Chapter 8 NETWORK ENABLED CAPABILITIES ..... 109
8.1 Introduction ..... 109
8.2 Experimenting with dependability ..... 112
8.3 TWOSOME ..... 114
8.4 Benchmark ..... 118
8.5 Concluding remarks ..... 122

Chapter 9 SUMMARY OF THESIS ..... 125
9.1 Introduction ..... 125
9.2 Experiences ..... 127
9.3 Assessment ..... 130
9.4 Future challenges ..... 133

Part 5 REFERENCES

Appendix A GLOSSARY ..... 141
Appendix B NOTES ..... 145
Appendix C BIBLIOGRAPHY ..... 151
PREFACE
Great discoveries and improvements invariably involve the cooperation of many minds.
A. G. Bell
I was first introduced to the wonderful world of computing in the early 1980s. At that point in time, I considered computing a mere area of personal indulgence, i.e., game playing. However, it would not be long until my curiosity regarding the construction of software came into focus. How did people go about creating those entertaining artifacts, computer games? Unfortunately, it was not until the late 1980s that I had the opportunity to get some real experience of constructing software. Eventually, I managed to get my hands on a Commodore Amiga 500, and this was really one fine piece of machinery. Not only did it support the notions of multitasking and graphical user interfaces; under the hood, it even had specifically tailored chipsets that relieved the central processing unit from media intensive tasks. By means of a few hardware reference manuals and assorted construction tools, I soon indulged myself in the activity of creating and integrating computer graphics, audio, and software into quite entertaining artifacts of my own – demos.1 However, when you get sufficiently skilled in programming, it is typically not long before you start asking yourself certain questions regarding the limits of computing. Moreover, you naturally also wonder if there are other individuals out there who ponder the same questions. Through social networking and so-called bulletin board systems, I was soon to find out that it most certainly was only the individual who set those limits, and that there were plenty of other individuals concerned with pursuing the same questions. In fact, there was a whole underground movement out there, who constantly tried to surpass the known limitations of some particular piece of computing machinery and, at the same time, demonstrate their excellence in a visually tractable and entertaining manner – the scene.2
The most important things that I learned during this time of my life were that technological innovation is pointless if one cannot demonstrate its superiority in an easily accessible format and, moreover, that innovation is best served by a whole team of dedicated individuals. Experiencing those two lessons in such a concrete and practical manner has led me to believe that almost any creative result in computing necessarily has to be produced as a team effort; emphasizing the vivid graphical and aural experience of real world computing phenomena. The potential benefit of such a mindset should not come as a surprise to anyone and yet, during my time as a student of software engineering as well as computer science, I have come to understand that there is a somewhat peculiar reluctance to acknowledge its importance. During my time as a research student, people have continuously told me that the only thing I ought to concern myself with is to write one scientific report after another. Then, as soon as I had mustered myself to produce a sufficiently large number of reports, my research education would come to an end and I would become a full-fledged researcher. I consider this state of affairs to be most unfortunate and, in essence, a philosophy of research education that will lead us to a dead end. In my humble opinion, we ought to educate future researchers with the sole intent of making them skilled in the art of science as well as engineering. That is, I strongly believe that a research student needs to experience as many aspects of this fine art as possible and, preferably, to acquire proficiency in funding, investigations, authoring, demonstrations, development, experimentation, and innovation as a team effort. This has always been, and still is, my fundamental conviction regarding what I consider the most important activities involved in computer science as well as software engineering research.
Consequently, when I met Professor Rune Gustavsson in the mid 1990s, who held a similar conviction close at heart, I decided to pursue the opportunity of an education in his research group – Societies of computation. In doing so, one of the first challenges I was introduced to was the group's need for a demonstration facility. In my own opinion, this was indeed a very promising start to this thing they call research education. In essence, the group's research programme needed a facility, including personnel, which could provide for demonstrations of proficiency in development, experimentation, and innovation. It would not be long until our first laboratory was born – Societies of computation laboratories.3 Ever since the start of my research education, until this very day, it is almost exclusively through the challenges and opportunities in managing this laboratory that I have gained experience as a student of research. Of course, as is required, the monographic thesis presented herein is based on a number of publications that were authored or co-authored by myself (see Appendix B – Notes – for their individual abstracts4–10), and I sincerely hope that they convey at least some of my experiences over the years. However, this sufficient number of publications was not produced as a matter
of focusing on refining my proficiency in authoring, but rather from demonstrating real world phenomena of computing – as a cooperative effort involving a whole team of dedicated individuals and organizations. I would therefore like to express my everlasting gratitude toward all parties involved, for helping me acquire the experience and confidence that I certainly will come to depend on as a practitioner of research. Firstly, I would like to thank the agencies and organizations, in no particular order, that certainly made the trip as efficient and pleasant as I could ever have hoped for: The knowledge foundation (KKS), The Swedish agency for innovation systems (VINNOVA), Societies of computation (SOC), Societies of computation laboratories (SOCLAB), and Kockums AB (KAB). Secondly, there were past or present individuals in these organizations that, by means of most capable suggestions, guidance, and help, have been instrumental in ensuring that I would reach my intended destination. In particular, the individuals that immediately come to my mind are Rune Gustavsson and Paul Davidsson (SOC); Markus Andersson, Jimmy Carlsson, Anders Johansson, Johan Lindblom, Fredrik Nilsson, Jonas Rosquist, Robert Sandell, Tomas Sareklint, Christian Seger, Björn Ståhl, Björn Törnqvist, and Tham Wickenberg (SOCLAB); Pär Dahlander, Ola Gullberg, Jens-Olof Lindh, and Tom Svensson (KAB). Finally, I have reached the destination of one very interesting journey and, to my great relief, without any alarming indications of a dead end in sight. I therefore feel quite confident and intrigued when I see a whole plethora of alternative routes and possible experiences that lie ahead.
Martin Fredriksson – August 2004 – Ronneby
Part 1
INTRODUCTION
1 OUTLINE OF THESIS
We cannot too carefully recognize that science started with the organisation of ordinary experiences.
A. N. Whitehead
1.1 INTRODUCTION
Computing has evolved from isolated machines, providing calculative support of applications, toward communication networks that provide functional support to groups of people and embedded systems. Perhaps one of the most compelling features and benefits of computers is their overwhelming computing efficiency. Today, we conceive distributed computational systems of an ever-increasing sophistication, which we then apply in various settings – critical support functions of our society just to name one important application area. The spread and impact of computing, in terms of so-called information society technologies, has obviously gained a very high momentum over the years and today it delivers a technology that our societies have come to depend on. To this end, concerns related to our acceptance of qualities of computing, e.g., dependability, are increasingly emphasized by users as well as vendors. An indication of this increased focus on dependability is found in contemporary efforts of mitigating the effects from systemic failures in critical infrastructures, e.g., energy distribution, resource logistics, and financial transactions. As such, the dependable function of these infrastructures is governed by means of more or less autonomic computing systems that interact with cognitive human agents. However, due to intricate system dependencies as well as being situated in our physical environment, even the slightest – unanticipated – perturbation in one of these embedded systems can result in degradations or catastrophic failures of our society. We argue that this contemporary problem of computing is mainly due to our own difficulties in modeling and engineering the involved system complexities in an understandable manner. Consequently, we have to
provide support for dependable computing systems by means of new methodologies of systems engineering. From a historical perspective, computing has evolved, from being supportive of quite well defined and understood tasks of algorithmic computations, into a disruptive technology that enables and forces change upon organizations as well as our society at large. In effect, a major challenge of contemporary computing is to understand, predict, and harness the involved systems’ increasing complexity in terms of constituents, dependencies, and interactions – turning them into dependable systems. In this thesis, we therefore introduce a model of open computational systems, as the means to convey these systems’ factual behavior in realistic situations, but also in order to facilitate our own understanding of how to monitor and control their complex interdependencies. Moreover, since the critical variables that govern these complex systems’ qualitative behavior can be of a very elusive nature, we also introduce a method of online engineering, whereby cognitive agents – human and software – can instrument these open computational systems according to their own subjective and temporal understanding of some complex situation at hand. To this end, the model and method advocated herein should merely be considered as examples of certain instruments that we need in addressing the notion of complex computing phenomena. Therefore, in this introductory chapter’s first section – Challenges in dependable computing – we introduce some general concerns that, in effect, are addressed throughout the whole thesis. In particular, we emphasize the dichotomy of computer science and software engineering, as the basic means to attain more confidence in complex and embedded computing systems. In the following section – Contributions from the author – we therefore elicit those instruments that could be considered as applicable in dealing with the complex phenomena at hand. 
In fact, the coherence and complementary nature of these instruments reflect what should be considered as the thesis’ major emphasis, i.e., putting theory into practice. Consequently, in the following section – Guidelines to the reader – we present the general structure of the thesis as well as the specific topics of each chapter. Finally, in this chapter’s last section – Concluding remarks – we emphasize that almost all of the material presented herein has evolved as a matter of experience from designing as well as developing enabling technologies and systems. However, without any further ado, let us start with some general concerns and challenges in dependable computing systems.

1.2 CHALLENGES IN DEPENDABLE COMPUTING
During the last centuries, the scientific approach towards understanding of our physical world has been tremendously successful in areas such as physics, chemistry, and biology.
As such, science relies on the continuous establishment of certain methodological instruments, i.e., principles, models, methods, and technologies; aiming at revealing the very nature of some particular phenomenon under investigation. Moreover, we conduct and evaluate real world experiments in order to, on the one hand, verify the functionality of methodological instruments in isolation and, on the other hand, to validate the applicability of comprehensive methodologies as such. In effect, the engineering community relies on science to establish the soundness of these methodological instruments as well as related work practices, in order to facilitate the construction of dependable systems with desirable qualities, no matter how complex or delicate their behavior might seem. Moreover, we consider the qualitative behavior of such systems to be more than the sum of their parts, i.e., there are constitutive qualities of an integrated whole that are difficult or even impossible to derive by means of analyzing isolated entities of some system. Due to this elusive nature of qualities in complex systems, a plausible answer to our concern regarding dependability is probably not to be found in the deployed systems as such, but rather in the methodological approaches we apply in dealing with them – from a scientific as well as engineering perspective. An illustrative example of such a state of affairs is typically found in the evolution of present day transportation systems, which has been going on for more than a century. In this particular case, systemic qualities, such as security and safety in transport, are an overall concern of our society at large. However, one does not address the concerns involved by emphasizing the qualities of one component in isolation, e.g., a car, but rather by means of emphasizing certain criteria of invariants that are relevant at all times, e.g., traffic regulations.
As we have previously indicated, we consider any scientific methodology to involve fundamental instruments such as principles, models, methods, and technologies. It is by means of such instruments that we can design and implement qualitative systems. Moreover, in this context, we conduct scientific experiments in order to verify not only the isolated instruments, but also in order to validate the soundness of whole methodologies as such. In the field of computing, however, such methodological trials and experiments are quite seldom the primary subject of study. In any field of research, or development for that matter, the lack thereof is indeed most unfortunate. In fact, without the continuous development and validation of comprehensive and scientific methodologies, the engineering communities are left with no other alternative than to provide us with systems developed according to the art of best practice, i.e., when principles and models derived by science are nowhere to be found. In this respect, one should indeed question our confidence in the notion of dependable computing systems. The main difference between the physical world and the world of computing lies in the latter being designed and implemented, in contrast to the former, which is there waiting for exploration. In effect, the dichotomy of science and engineering – already established
in the disciplines of natural science – is less understood or established in the field of computing. Consequently, if our primary concern is to acquire a certain level of confidence in that computing systems of today can be ascribed with the quality of being dependable, we must establish the conception of a comprehensive methodology that is grounded in the systems as such. Obviously, we cannot prove the applicability of such a methodological approach by means of formal models and methods. Instead, we must try to establish the soundness of such a methodological stance by means of concrete trials and real world experimentation. The principal challenges of concern that we address in such an experimental endeavor are as follows.

PROBLEM 1.1
In what way can we understand the nature of dependable computing systems?
PROBLEM 1.2
In what way can we harness the complexity of dependable computing systems?
The hypothesis that we emphasize in this thesis could therefore be stated as follows. Firstly, in order to address certain crucial challenges of dependable computing systems, we need to develop and establish a methodological approach that – in a comprehensive manner – addresses the dichotomy of science and engineering in the field of computing. Secondly, since the applicability of such a methodological approach is impossible to establish by means of formal verification in isolation, we can instead validate its soundness by means of assessing the quality of service as provided for by its resulting products – developed and assessed by means of the methodological instruments.

SOLUTION 1.1
We aim at revealing system complexity – constituents, dependencies, and interactions – by means of a model of open computational systems, and we aim at governing the qualitative behavior of these systems by means of a method of online engineering.
1.3 CONTRIBUTIONS FROM THE AUTHOR
From the perspective of challenges in dependable computing systems, the author’s intent with this thesis is to present a comprehensive set of methodological instruments – developed by means of real world trials and experimentation. Certain principles, models, methods, and technologies are therefore discussed and should, from the perspective of this particular thesis, be considered as the major contributions from the author. However, the author would like to emphasize that the thesis as such should be interpreted as an even more important contribution than these components in separation. That is, the thesis provides for a context, in which all instruments of the methodology fit quite naturally
and, in addition, it provides for a context in which an important example and evaluation of the methodology in its applied form can be introduced. The first contribution of this thesis outlines a framework of computing that, in essence, comprises a number of principled areas of investigation that could guide us in reasoning about theory and practice of dependable computing systems. In effect, the author advocates a principled approach toward theory and practice of computing, in which we consider computer science and software engineering as two interrelated disciplines that are addressing the same academic and industrial school of prowess – dependable computing systems. As such, our framework provides for guidance in emphasizing various concerns of computing professionals, i.e., models, methods, and technologies. The second contribution of this thesis involves a model of open computational systems that aims at conveying the essential characteristics of complexity in contemporary computing systems – constituents, dependencies, and interactions. As such, the model involves four abstraction layers that each address particular concerns in modeling of open and fielded computational systems. Moreover, the foremost benefit of this model is its capability to capture the factual constituents of physical environments and cognitive domains of online computational systems. The third contribution brought forward in this thesis is a principled method and approach toward establishment of open computational systems – online engineering. The method emphasizes four iterative activities in dealing with complexity in open computational systems. As such, these activities all aim for an evolutionary development of system qualities, in such a manner that we can avoid the divide-and-conquer as well as debug-repair approaches toward engineering and verification of complex systems.
Instead, the method advocated herein focuses on the continuous in situ refinement of behavior and qualities in open computational systems, according to temporary concerns and individual aspects of cognitive agents – human and software. The fourth contribution of this thesis involves an outline of certain enabling technologies, as required in modeling and development of open computational systems. A software architecture that emphasizes the activities of exploration and refinement simulation is consequently introduced. As such, the architecture’s constituent technologies form the basis in applying the methodological approach advocated throughout this thesis. With such technological instruments at hand, we are properly equipped to conduct real world trials and experiments in the field of dependable computing systems. The final contribution of this thesis involves a practical example of applying our advocated methodology. In essence, this particular contribution constitutes the summary and assessment of a real world experiment, involving the model of open computational systems as well as the method of online engineering. As such, the experiment aims to establish the soundness of our approach – toward a better understanding regarding the
dichotomy of science and engineering in the field of dependable computing systems. As the means to an end, the application domain of the experiment was that of network-enabled capabilities, in which concerns such as situational awareness and information fusion are of the essence. The author’s main experience in performing this methodological experiment was, indeed, that addressing qualitative behavior in complex computational systems is a daunting challenge in every respect. Just consider the time and effort it took us to gain confidence in the qualitative and dependable behavior of modern day transportation systems. However, for better or worse, in doing so one can gain immense support from a methodology that at least includes the essential instruments required.
1.4 GUIDELINES TO THE READER
In order to emphasize the intent and specific contributions of this thesis, a general overview of the material presented herein as well as guidelines from the author to a potential reader would perhaps be appropriate. In essence, the author’s intent with this thesis is, on the one hand, to frame the complex challenges of dependable computing systems and, on the other hand, to exemplify those aspects of the frame that the author believes could be of benefit for a broader public of computing practitioners. Consequently, the material presented herein has been divided into five consecutive parts in order to reflect these intentions: Introduction, Theory, Practice, Conclusion, and References. Excluding this introductory chapter, the first part of this thesis includes two chapters: Dependable computing systems and Methodology of computing. As an introduction to the first part of the thesis, the second chapter introduces the reader to contemporary concerns, approaches, and challenges in the field of dependable computing systems. As such, this introduction depicts the field of computing as both highly innovative and in great need of addressing some major concerns. Consequently, the aim of this chapter is to strengthen certain problem statements that we, in effect, address throughout all chapters of this thesis. In the third chapter of the thesis, we therefore introduce a framework that addresses the previous chapter’s problem statements, by means of certain methodological instruments at hand. The thesis’ second part includes two chapters: Issues of complexity and Open computational systems. As a follow-up to the first part of the thesis, the fourth chapter introduces certain general concerns regarding complexity, due to interactions at different levels of system abstraction. In particular, the chapter highlights notions such as openness, survivability, and quality of behavior as fundamental concepts to consider in dependable systems.
Consequently, in the thesis’ fifth chapter, we introduce a model of dependable systems that aims to capture the above concepts of openness and systemic quality. The
model depicts four abstraction layers, each with its particular constructs and relations. The aim of this chapter, with its focus on a model of open systems in nature, is to act as a foundation for investigations as well as design of open and physically grounded computational systems. The third part of this thesis includes two chapters: Online engineering and Enabling technologies. As such, this part of the thesis delves further into certain practical issues that one faces in dealing with open computational systems. Therefore, the sixth chapter of the thesis introduces a method for attaining approximate quality of service in complex systems of contemporary computing. The method, as such, characterizes control as a matter of continuous system evolution and, in effect, the aim of this particular chapter is to identify and discuss certain fundamental activities that are involved in so-called online engineering. The seventh chapter of this thesis introduces a number of enabling technologies of our methodological approach. These technologies, as such, are not only the practical result of many years of development, but they also represent the essence of our attempt to develop a comprehensive methodology for dependable computing systems. The aim of this chapter is, consequently, to summarize and discuss the architectural framework that, in fact, is the foundation of all material in this thesis. The fourth part of the thesis includes two chapters: Network enabled capabilities and Summary of thesis. In order to conclude the previous parts of the thesis, including this introductory chapter, the eighth chapter outlines a synthesis of what we consider as a first example of putting our theory into practice. As such, the example depicts a real life system in the domain of network enabled capabilities.
The aim of this chapter is, consequently, to evaluate the model of open computational systems and the method of online engineering, but also to put our methodological approach to the test in a relevant application domain of societal concern. Finally, in the ninth chapter of the thesis, we introduce the reader to an important discussion regarding our results as well as an assessment and comparison with related work. In this particular chapter of the thesis, we also present what should be considered as relevant directions of future work in the field of dependable computing systems. As such, we emphasize the challenges of establishing qualitative performance envelopes of open computational systems as well as the conduct of empirical investigations in doing so. The fifth and final part of this thesis includes three appendices: Glossary, Notes, and Bibliography. As such, this part of the thesis is intended to function as a practical reference for those readers that call for more information regarding some particular topic. Consequently, the first appendix enumerates a number of basic terms and their general interpretation as applicable in the context of this thesis. When such a term appears for the first time throughout the chapters of this thesis, it is marked with a ‘*’ symbol. The second appendix includes the author’s various notes as they appear throughout the thesis’ chapters. The third and final appendix includes a list of publications that should provide
for additional and complementary information regarding certain topics addressed in the thesis.
1.5 CONCLUDING REMARKS
As previously indicated, the material presented herein constitutes the author’s intent and subsequent results from framing the complex challenge of dependable computing systems. As such, one could obviously question the author’s ambition in that the problems at hand are of such a magnitude that any results presented would be of little value. However, the material must not be interpreted as a rigorous claim that any technical problems have been fully solved, but merely as an initial attempt to introduce a comprehensive methodology toward dependable computing systems, i.e., a methodology that includes all the essential instruments needed by a practitioner of computer science and software engineering. In effect, the author considers a methodology to guide us in matters of a theoretical as well as practical nature. To consider methodological instruments in isolation, devoid of contextual dependencies, is of little benefit to a practitioner – scientist as well as engineer – who has to face the actual consequences of applying some particular instrument in a real world context. As the means to an end, the methodological approach advocated herein has therefore evolved as a matter of experience. It has continuously been refined over a number of years, by means of experimental trials, where all of its constituent instruments have been subjected to evaluation and supplemental refinement. Obviously, the experience from such an endeavor is difficult, if not impossible, to convey in terms of a discourse where the experience of others is the general origin of thought. Instead, the author’s currently acquired experience and understanding of a particular issue at hand – qualitative system evolution – has provided for the point of departure and subsequent goal of the material introduced herein. As such, this thesis is intended to convey a seldom acknowledged fact of scientific research and development.
People make mistakes and it is from our experience in making those mistakes that we can engage in corrective actions. To this end, the foremost contributions brought forward in this thesis have all been, and still are, subjected to this continuous trial-and-error evolution of theory and practice. However, as previously indicated, this thesis is based on the current understanding of not only the author himself but also the practical experience and efforts of many individuals in his own research and development group. In summary, the material presented herein should merely be interpreted as an initial effort to provide for a first outline of what we consider an applicable and comprehensive methodology toward dependable computing systems. As such, the methodology and its
constituent instruments should be of particular interest to those practitioners of computer science and software engineering that value experimental research and development over scholastic proficiency. Obviously, in the particular context of this thesis, it is therefore very important to stress the general stance taken toward a better understanding of the dichotomy of computer science and software engineering. Of course, there is a difference between the disciplines of computer science and software engineering. One might be tempted to interpret this difference as a matter of exclusive concern with theory and practice, i.e., computer science exclusively encompasses the theoretical part of computing whereas software engineering only deals with the practical part. However, this depiction of the difference between the disciplines is by no means correct. Naturally, they both concern themselves with theory as well as practice of computing. Instead, in the context of this particular thesis, we consider the principal difference between computer science and software engineering a matter of differing emphasis in the disciplines’ everyday conduct. On the one hand, we argue that software engineering emphasizes organizational theory and work practice, in attaining quality assurance of computing systems’ function. On the other hand, we argue that computer science emphasizes complexity theory and simulation practice, in attaining knowledge discovery of computing systems’ nature. In a sense, the material presented in this thesis could therefore be interpreted as aiming to even the balance between these two disciplines, i.e., to better understand the attainability of complex and dependable computing systems from the perspective of computer science as well as software engineering. It is in order to make this intent and goal explicit that the thesis was named Online engineering: On the nature of open computational systems.
However, without any further ado, let us start our methodological journey toward dependable computing systems.
2 DEPENDABLE COMPUTING SYSTEMS
The cheapest, fastest, and most reliable components of a computer system are those that aren’t there. – G. Bell
2.1 INTRODUCTION
From a historical perspective, there are many great ideas that have contributed to the birth of computing. However, the one idea that probably has had the most profound impact on our everyday lives, as well as society at large, is that of algorithmic computation [76]. The general idea is, in essence, to remove the need for human-based computing – performing computations according to a fixed set of rules and without the authority to deviate from these rules in any detail – by means of a mechanical counterpart. However, when the activity of human computation (reasoning) is projected onto a physical machine, it is important to understand that a fundamental change of the prerequisites has occurred – the notion of semantic awareness and rationality is transformed into that of syntax interpretation and performance. That is, by means of skills in observation, reasoning, and instrumentation, a human can decide to deviate from some pre-assigned set of rules whereas a machine cannot. The capability and performance of digital computers is limited to the way their physical constituents and mechanical components* interact – information storage, instruction execution, and program control – and the extent to which they are capable of interpreting and manipulating discrete representations of information. At first, the qualitative envelope of such machinery can indeed seem quite limited but, taking the performance of physical machinery into account, the distributed digital computer* certainly provides for an awesome quality of service that far surpasses its human counterpart in specific computations. In this respect, the limiting factors of digital computers and their quality in performance are actually those capabilities, i.e., awareness and rationality, which their human operators employ to instrument the internals of computational systems.* Still, irrespective of this natural limitation, we have found that the number of application areas where computing machinery can provide for an essential and much appreciated quality of service is ever increasing. At first, during the Second World War, we came to depend on its service in such isolated application domains as cryptanalysis and trajectory prediction. Then, with the advent and subsequent incorporation of programming paradigms and digital communication [68], we eventually came all the way to depend on computing machinery in critical support functions of society, e.g., energy distribution, resource logistics, and financial transactions. However, as was the case in past decades, instead of isolated machine instrumentation being the sole limiting factor of computing’s quality of service, we are now facing new factors of limitation, e.g., system complexity and behavioral semantics. Over the years, we have engineered ourselves into a situation where, on the one hand, we have become dependent on computing in critical support functions of society and, on the other hand, the quality of service provided by computing machinery, e.g., dependability, is severely hampered by limitations in our own understanding thereof. Even though the service of computing certainly should be considered as highly innovative and beneficial for humanity – individuals, organizations, and society – we need to deal with certain critical issues of great concern [62]. As a result, from contemporary trends of computing – developing distributed and interdependent service-oriented systems of an ever-increasing complexity – we would be well advised to consider the following concerns:
PROBLEM 2.1
What do we mean by dependable computing systems?
PROBLEM 2.2
Can we design and maintain dependable computing systems?
From a scientific perspective, there are typically three instruments at hand which would support us in providing an answer to these particular questions, i.e., ontology (theory of reality), epistemology (theory of cognition), and methodology (science of method). The aim of this particular chapter is, consequently, to further discuss the issue of dependable computing systems from the perspective of these scientific instruments, but also to strengthen and clarify certain problem statements that we, in effect, will address throughout the remaining chapters of this thesis. In the first section of this chapter – General concerns – we depict computing machinery as embedded in nature and, consequently, discuss the implications from such a stance. In particular, to consider computing machinery as a physical phenomenon introduces us to the occurrence of natural as well as unforeseen events and stimuli that inevitably will affect system quality as well as our ability to comprehend and control their operation. Consequently, in the following section
– Cognitive agents – we discuss the implications of a contemporary approach toward modeling and coordinating complex computational systems by means of cognitive agents.* Moreover, in that particular section, we also discuss the implications from taking on a cognitive agents approach, i.e., introducing operators that are capable of perceiving as well as experiencing their physical environment. Finally, in the last section of this chapter – Concluding remarks – we introduce the paradox of dependable computing systems and our understanding of the fundamental challenge at hand. As such, we consider practitioner confidence in the methodological approaches as the foremost challenge at hand. Any methodological approach advocated should at least emphasize requirements of a theoretical as well as practical nature, i.e., the dichotomy of science and engineering. To this end and without any further ado, let us start with a brief introduction to our general concerns of dependable computing systems.
2.2 GENERAL CONCERNS
The applicability of computing is gaining momentum and we are constructing, operating, and using this technological service in situations of ever-increasing sophistication and complexity. As a result, numerous paradigms and approaches toward computing are currently flooding the communities of computer science and software engineering, e.g., autonomic and ubiquitous computing. However, in this somewhat chaotic situation, one might wonder what essential problems all of these approaches actually address. We argue that the common set of concerns they all implicitly share involves considering computational systems to be (1) embedded in nature, (2) constituted by programmable entities, and (3) governed by cognitive agents. However, with respect to the first property in this list – embedded in nature – there are certain implications that call for our attention. As is the case with all systemic phenomena in nature, embedded systems are under the influence of physical events and stimuli that have precedence over the best of our rational intentions in designing, constructing, and deploying some particular service of computing. Since the occurrence of these disruptive events is an unfortunate and unavoidable fact, one must necessarily take their occurrence as well as their impact into account when constructing the involved systems, but also in operating their continuous behavior in a dependable manner.
DEFINITION 2.1
Dependability: The trustworthiness of a computing system which allows reliance to be justifiably placed on the service it delivers.11
Moreover, since the cardinality of system constituents and interactions tends to grow, as a matter of more sophisticated ways of service provisioning and safeguarding against disruptive events, the complexity of constituents, dependencies, and interactions in these
systems also tends to grow at an ever-increasing pace. In effect, the sustainable behavior of embedded systems, as well as our understanding thereof, is no longer easily comprehended or harnessed, neither on an individual basis nor from a group perspective. The general vision of the future in computing involves a universal communication network* of automatically integrated physical devices of computing – sensors, computers, and actuators. Consequently, the involved systems are characterized as being immersed in a physical environment of pervasive communication and ubiquitous computing. The physical manifestation of these systems is considered to be of a temporal nature, in that the various devices and their constituent services are integrated with each other in an automated and ad-hoc fashion. Simply power on a particular device and, in a dynamic manner, it will automatically be integrated into a universe of interaction and service provisioning [15].12 An innovative vision indeed, but, more importantly, what are the implications? The envisioned emergence of ubiquitous computing obviously introduces us to many interesting and unique opportunities – quality of computing at its finest. However, when the notion of ubiquitous computing was introduced in the beginning of the 1990s it was not introduced as a vision of some distant future, but rather in order to frame an already existing application domain in great need of more appropriate models, methods, and technologies than those currently available. At the time, even though the existence or attainability of this universal system of ubiquitous computing was somewhat questionable, the response from various computer science and software engineering communities was immediate – a plethora of competing models, methods, and technologies started to emerge, e.g., multi-agent systems, grid computing, ambient intelligence, and semantic web.
Considering the exponential growth of the Internet and its impact on our perception of computing’s potential in providing for quality of service, the attainability of ubiquitous computing should indeed not be considered as fiction. However, as advocated by some of its proponents, further analysis of this topic implies certain profound implications. On the one hand, if we consider our physical environment as constituted by an ever-increasing number of computing devices – automatically integrated by means of pervasive communication – the summative performance envelope of these computational systems would be immense. On the other hand, due to their complex and situated nature, the constitutive capability envelope will continue to elude us. We have to realize that ubiquitous computing implies an unfolding of the standard chain of command and control in computational systems. As previously described, the quality of an isolated computer is limited to some particular operator’s performance in instrumenting its internal constituents. However, when additional machines and operators are integrated into this chain of command, the previously isolated computer is all of a sudden subject to command and control instrumentations that emanate from potentially unknown as well as unanticipated origins. We are dealing with the complex dynamics of open systems. Moreover, when pervasive communication enters the scene, these computational systems and their performance are suddenly under the influence of not only operator and machine interactions, but physical environment* interactions as well. That is, by means of introducing sensor and actuator technologies into the command and control loop of computing systems, we have immersed the service of computing into nature itself and must now face the interesting consequences. As a matter of ubiquitous computing, the primary concern of computing has started to move away from mere computation, i.e., manipulation of isolated information, and instead we are emphasizing interaction [80] and coordination [50] of distributed computations. The prevailing model of computation is the Turing machine from 1936. As such, the model was not intended to convey aspects of some natural phenomenon. Instead, it was a mathematical model that addressed the Entscheidungsproblem, put forward by the mathematician David Hilbert [74]. From a contemporary perspective, theoretical computer science has since the 1930s devoted much effort to investigating complementary models of computation. Particular strands involve investigations regarding computational power, equivalence, and strengths in verification. Moreover, the practical applications of those models typically included investigations regarding algorithm complexity, distributed computations, deadlock avoidance, fairness, and so on. However, all of these efforts address particular theoretical concerns and refinements regarding the general model of computation. There is, obviously, also a somewhat more practical aspect to these efforts of progress in the field of computing – programming and operating the machinery.
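As an aside, the Turing machine model just mentioned is easily made concrete. The following sketch is not part of the thesis: the simulator, its function name, and the example transition table (a hypothetical machine that increments a binary number on the tape) are illustrative assumptions only.

```python
# Minimal Turing machine simulator: a finite control stepping over an
# unbounded tape according to a transition table that maps
# (state, read symbol) -> (next state, written symbol, head move).
from collections import defaultdict

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Run until a halting configuration (no applicable rule) is reached."""
    cells = defaultdict(lambda: blank, enumerate(tape))  # sparse, unbounded tape
    head = 0
    for _ in range(max_steps):
        key = (state, cells[head])
        if key not in transitions:
            break  # no rule applies: the machine halts
        state, symbol, move = transitions[key]
        cells[head] = symbol
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells[i] for i in range(lo, hi + 1)).strip(blank)

# Illustrative machine: walk to the rightmost digit, then carry 1s to 0s
# until a 0 (or the blank past the left end) can be flipped to 1.
INCREMENT = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("done", "1", "R"),
    ("carry", "_"): ("done", "1", "R"),
}

print(run_turing_machine(INCREMENT, "1011"))  # 1011 + 1 = 1100
```

The point of the sketch is how little machinery the model requires: the entire semantics of the computation resides in the finite transition table, with no reference to any physical or environmental context.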
The first (American) digital computer – ENIAC – was designed by Eckert and Mauchly, with von Neumann subsequently using the abstract Turing machine as a blueprint for his stored-program architecture of computing machines. The digital computer executed stand-alone numerical algorithms in a way that faithfully mimicked the formal transitions defined by the Turing machine [75]. Since the semantics of these numeric calculations were faithfully mirrored by means of formal symbol manipulation, there was no semantic gap between the abstract model and the concrete architecture. The involved problems were, however, of a practical nature – how to engineer and maintain the digital computer’s operations as well as how to implement the algorithms in a correct and pragmatic way. Besides these practical problems, problems of semantics entered the scene when the involved algorithms were to be executed on different machines. The real semantics, i.e., semantics regarding the behavior of physical machinery, could then differ due to implementations of the same algorithm on different hardware platforms* – a problem of interpretation. The introduction of formal languages, abstract machines, and compiler techniques from the 1950s through the 1970s, combined with advancements in
software engineering during the same period, led to the idea that formally defined and verified algorithms could, in principle, be implemented on different hardware platforms and still deliver the same computational behavior. In 1968, however, NATO sponsored a historic meeting with leading academic as well as industrial participants in order to discuss what they dubbed the software crisis. They saw software systems rapidly growing in size and complexity, at the same time as they were providing for computing services in application areas where failures could cost lives and ruin businesses. They believed that the fundamental notion behind programming – construction of programs that implement mathematical functions – could not cope with the complexity and fuzziness of requirements in embedded (safety-critical) systems. In short, they advocated a new discipline – software engineering – to remedy the situation. However, we have come to learn the hard way that we still have a long way to go toward such a principled discipline of (software) systems engineering. We are still infatuated and captured by the tradition, as well as mindset, where programs are considered in terms of mathematical functions, and programming is the fundamental practice in translating these functions into qualitative systems. To this end, we have only partially embraced the mindset that emphasizes software as systems of systems, necessarily modeled and designed under the severe constraint of unanticipated environmental events. However, even though industrial actors have emphasized the latter mindset for quite some time, the academic community persists in focusing on the former [12]. Meanwhile, more semantically complex applications have emerged since the 1960s. Notable examples are found in the areas of artificial intelligence and knowledge engineering.
Techniques supporting knowledge representations – formal concepts and logics – were developed, as well as knowledge acquisition techniques to capture the relevant semantics of system behavior in some particular domain under investigation [69]. The limits of symbol processing, when conveying meaningful and understandable semantics of system behavior to human users, were tested in several applications. Typically, we were successful in closed domains where the involved concepts had an unambiguous formal meaning and were not susceptible to ambiguous interpretations by different users [72].
2.3 COGNITIVE AGENTS
Today the techniques of artificial intelligence and methods of knowledge engineering have partly been incorporated into a research and development area of computing that typically is referred to as multi-agent systems. As such, the area primarily emphasizes the notion of computational entities – agents – that are endowed with mental capabilities and
social competencies. If one feels confident enough in trusting the sustainable behavior of such autonomous entities, one can even empower them with responsibilities and authority – on behalf of their human operators. However, in this respect, an important challenge that so far has attracted little, if any, attention from the academic community involves to what extent the involved models* of these autonomous agents, and the subsequent multi-agent systems they form (described by formal semantics), depict the factual behavior that they are said to exhibit in their natural habitat – physical environments. The main reason for advocating autonomous agents and multi-agent systems is the involved models’ high level of abstraction as well as the agents’ natural capacity to exhibit capabilities similar to those of their human counterparts. In this respect, the notion of autonomous agents can best be characterized as a matter of human actors in their technological form. There is, however, something of a puzzle here. As we previously stated, digital computers already occupy such a role of performing complex and tedious tasks of computation. Why then do we introduce the notion of autonomous agents? In essence, these computational entities are not introduced in order to take over the role of the computer, but rather to take over the role of operators. Consequently, the paradigm of multi-agent systems aims to recapture the loss that occurs in projecting the role of a human computer onto a mechanical counterpart, i.e., involving cognitive capabilities such as awareness and rationality of reasoning. As previously stated, this projection typically involves the notion of authority being transformed into that of a mechanical capability to perform computations. However, in doing so, we still require the performance of computations to be supervised by an operator and this is where the notion of autonomous agents appears.
These computational entities are empowered with the authority to carry out such cognitive actions that their human counterparts – operators – otherwise would be responsible for. In every respect, autonomous agents and multi-agent systems therefore aim to provide for an operational solution toward automation and control of complex computational systems, where cognitive capabilities are of the essence. In practice, the agent paradigm emphasizes the notion of a piece of software exhibiting certain behavioral qualities of cognition. With a focus on automation of cognitive tasks, these qualities typically involve the capability of an agent to autonomously perceive its environment through sensors as well as to act on it by means of actuators. Moreover, this particular focus of the agent paradigm also introduces the need for additional capabilities that often are associated with cognitive entities – learning and reasoning. Consequently, agents are required to exhibit the capability of discovering facts and rules about their surrounding environment, i.e., facts and rules regarding other agents that proliferate in the agent’s environment [36]:
DEFINITION 2.2
An agent operates in some physical or computational environment. An agent is itself a physical system of some sort. Even a pure software agent is embodied on a computer that gives a home to the agent’s internal structures (data structures, if you will) and enacts its program.
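The perceive-and-act cycle described above can be illustrated with a small sketch. The example below is not from the thesis: the Environment and ThermostatAgent names, the thermostat scenario, and its thresholds are illustrative assumptions only, chosen to make the sense/reason/act loop of an embodied agent explicit.

```python
# Minimal sense/reason/act loop: an agent embodied in an environment,
# perceiving it through a sensor and acting on it through an actuator.

class Environment:
    """A trivially simple physical environment: a single temperature."""
    def __init__(self, temperature):
        self.temperature = temperature

    def apply(self, action):
        # Actuation changes the state of the environment.
        self.temperature += {"heat": +1.0, "cool": -1.0, "idle": 0.0}[action]

class ThermostatAgent:
    """Keeps the environment near a setpoint via a sense/reason/act cycle."""
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def sense(self, env):
        return env.temperature  # the percept

    def reason(self, percept):
        # Deliberation: pick an action from the percept and internal goal.
        if percept < self.setpoint - 0.5:
            return "heat"
        if percept > self.setpoint + 0.5:
            return "cool"
        return "idle"

    def step(self, env):
        action = self.reason(self.sense(env))
        env.apply(action)
        return action

env = Environment(temperature=17.0)
agent = ThermostatAgent(setpoint=20.0)
for _ in range(5):
    agent.step(env)
print(env.temperature)  # the agent has driven the temperature toward 20.0
```

Even this toy loop makes the embodiment point of the definition concrete: the agent’s "mental" state is just a data structure hosted on a computer, and its behavior is meaningful only relative to the environment that closes the loop.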
When dealing with the agent paradigm, it is implicitly understood that the environment as such is of either a physical or a computational nature. However, first class citizens in this world of agents are the agents themselves. In effect, the models applied in cognitive activities such as sensing and reasoning typically emphasize normative behavior and mental constructs of cognitive agents – humans. In ubiquitous and dependable computing, on the other hand, the environment at hand is in a similar manner also of a physical nature, but the first class citizens involved are primarily physical artifacts – machines – not normative entities of a computational nature. Consequently, one would be well advised to note that there is a semantic difference between the systemic nature depicted by models of the agent paradigm and the models of physical systems. The former primarily conveys constituents and behavior of normative and sociological phenomena [9], whereas the latter conveys the behavior of causal and natural phenomena. In addressing the ideas and intent of the agent paradigm, a common approach is however to assimilate conceptions and metaphors related to the notion of system organization and control in the area of sociology [25] and recently from the area of biology as well [52; 55; 56]. As argued by Malsch [44], there is a danger in such an approach since it more often than not focuses on construction of cognitive agents according to models and metaphors of sociological origin. In fact, the notion of agents as sociological entities, i.e., endowed with human capabilities such as beliefs, desires, and intentions, was at first only intended to identify the required capabilities in reasoning about sociological systems and contexts.
Today, however, this intention and model still prevails as the major approach toward the construction of autonomous and cognitive agents, even though the problems addressed in, e.g., dependable computing, bear little resemblance to systems of a sociological nature. In fact, most application areas advocated by industry and society are of a technological nature – embedded systems. It is important to understand that organization and control activities of cognitive agents in physical systems will not be attained as long as the idea of single agents with predetermined motives, interests, and intentions prevails. The general message brought to light by such statements can be considered as twofold. First, we must avoid making use of metaphors in the construction of complex computational systems if the sought-for system behavior does not conform to the interaction principles of the metaphors in question. Secondly, since we consider complex systems, control over system behavior results from the coordination of multiple agents with cognitive capabilities, as opposed to operations governed by a single agent with normative performance constraints.
DEPENDABLE COMPUTING SYSTEMS
Still, we are ourselves cognitive agents and, as such, we notice what goes on around us. This unique capability helps us in our everyday activities by suggesting what events we might expect and even how to prevent unwanted outcomes. In effect, the capability to observe and, subsequently, to understand our surroundings actually fosters our very survival. However, this peculiar expedient works only imperfectly. There are surprises – unanticipated events – and they are unsettling. We rely on our capability to perceive events in our surroundings and, by means of internal models of cause-and-effect scenarios, we feel confident in what to expect. Still, since unanticipated events keep occurring, what could possibly be wrong with our expectations? We are faced with the problem of error. These are errors regarding our knowledge of the external world, i.e., how we come to understand things. Philosophers such as Thales, Socrates, and Plato devoted much thought along these lines. A crucial breakthrough came in the thirteenth century, when Roger Bacon introduced experiments and observations as key components in the construction and instrumentation of our knowledge regarding the external world and how it evolves. In fact, this approach was ingeniously geared so that maximization of cognitive experience was attainable. Philosophers and researchers such as Copernicus and Galileo later refined Bacon's fundamental approach toward knowledge exploration and discovery – experimental research – and we entered the era of natural science. In many respects, the introduction of experimental research can, at least in the domain of natural science, be considered as the introduction of what we today call methodology.* Meanwhile, other philosophers addressed the deeper issues of knowledge itself – epistemology.
In that particular case, the key ideas were typically Hobbes’ materialistic views, i.e., that our sensations are the effects enforced upon us by the otherwise unknowable world, Renk’s and Descartes’ thoughts about mind and matter, Locke’s notion of knowledge as a result of the coherence of ideas, and Bishop George Berkeley’s intriguing stance that nothing exists except that which can be perceived. Hume focused on the issue of identifying and validating whether two objects observed on different occasions are indeed the same, and thus brought up the nature of, and difference between, concepts such as identity and similarity. Locke, Berkeley, and Hume were the classic British empiricists, and all three agreed that our lore about the world is a fabric of ideas – based on sense impressions. As Wittgenstein observed about 150 years later, even simple sensorily acquired qualities are elusive unless they are mapped onto constructs of public language. For example, an individual might come to understand that many environmental events are recurrences of the same subject’s qualities, despite a substantial accumulation of slight differences between observations. Consequently, public naming and cognitive inspection are essential capabilities in the approximation of physical identity and similarity. Public words anchor ideas and are the basis for common awareness of natural phenomena.
COGNITIVE AGENTS
Still, how do we know that the words we use to express our ideas and perceived phenomena conjure up similar, or even identical, notions in the minds of others? According to Jeremy Bentham, we can use contextual definitions. That is, to explain a term we (only) need to explain all sentences in which we propose to use the term. Bertrand Russell later used such contextual definitions as a tool toward realizing the dream of empirical epistemology – the explicit construction of the external world from sense impressions. With this line of reasoning in mind, it is memories of perceived phenomena that link our past experiences with present ones and, consequently, induce our expectations. However, memories as such are in most cases not memories of quantitative sensory intake, but rather abstract qualitative posits of things and events perceived in the physical world. In essence, this understanding of human knowledge and its tight coupling with perceptions of the physical world introduces us to a shift from phenomenalism toward physicalism. Moreover, if one intends to pursue such a physicalistic philosophy, there are two directions suggesting themselves. One of them is typically the approach pursued by theoretical physicists, emphasizing conceptual clarity and economy. The other direction, embracing the intricate complexity of some physical subject of study, is that of naturalism [59]. As such, the direction of naturalism emphasizes the important idea of nature’s complexity and its precedence over logic’s economy and rationale. Obviously, we are dealing with matters of a somewhat philosophical nature, but it should be noted that the stance of naturalism in many respects corresponds to the fundamental mindset advocated herein by the author. The complex systems we deal with in the contemporary field of computing are affected by natural events and stimuli to a greater extent, and with greater consequence, than by human beliefs, desires, intentions, and subsequent actions.
Therefore, in order to emphasize the notion of a naturalistic mindset, we should primarily aim at grounding certain basic concepts in our physical world. However, to do so involves a quite intricate puzzle. Since the perception of events and stimuli is a private experience, i.e., each unique event and stimulus can be experienced by a temporally ordered set of observers, by what means can we coordinate actions among individual observers? Firstly, it requires that two observers witness one particular scene together and, subsequently, that the same observers witness yet another similar scene together. Now, after the second observation, if both observers can agree upon the occurrence of a similar event recurring in both situations, then they can also agree upon the identity of a common (grounded) concept that depicts the particular event in question. Finally, since the two observers now share a particular concept and its grounded semantics, the observers have established a mutual standard of perceptual similarity and identity. In summary, we consider the philosophy of naturalism to capture the essential characteristics of contemporary computing systems. However, in doing so, we also have to face
the consequences of dealing with natural phenomena. That is, dealing with natural phenomena involves the problem of public harmony in grounded semantics, i.e., the individual’s capability to communicate privately experienced events and stimuli in terms of publicly accepted cognitive concepts. To this end, the process* of standardizing these cognitive concepts must, on the one hand, necessarily be conducted as a public activity toward the common goal of understanding each other and, on the other hand, emphasize the fundamental capability of every cognitive agent to perceive its physical environment.

2.4 CONCLUDING REMARKS
With the previous discussions of this chapter in mind, let us return to the concerns at hand. In summary, the summative performance of computing machinery – embedded in nature – continues to grow. Meanwhile, its quality* of being dependable continues to elude us. These embedded systems involve a vast number of constituents as well as an intricate web of dependencies and interactions. Consequently, we run into problems of comprehension that affect our confidence in their dependability. Yet, our societies at large are increasingly infatuated with computing’s possible usefulness and benefit. As this peculiar state of affairs continues to evolve into even more complex and unsettling situations, the establishment of a principled approach toward dependable computing should be of fundamental concern to us. To this end, we must not fool ourselves into thinking that we have the situation under control [60]. On the contrary, we have to acknowledge and deal with the following paradox of dependable computing systems.

PROBLEM 2.3
On the one hand, we have come to depend on the quality of service provided by computing machinery and its performance. On the other hand, due to its complexity, we find it difficult to establish its quality of being dependable.
Since we are dealing with systems embedded in our society, the practical implications from deviations in predicted behavior can be catastrophic in nature. Understanding and harnessing their complexity by means of a methodological approach is therefore perhaps one of the more challenging endeavors at hand. In essence, we have come to a point of no return. We feel confident in that we actually have the capability to build these complex systems and, as a matter of human nature, we will continue to do so. Meanwhile, we are discontent with those methodological instruments* that ought to induce a feeling of trust regarding our own capability to construct and maintain control in these complex systems. With this paradox in mind, we argue that our lack of trust in the involved systems stems
from the fact that we have yet to identify a comprehensive methodology that, in an applicable manner, enables us to deal with the qualitative notion of dependable computing.13 Certain attempts to establish parts of such a methodological approach can typically be found within the research area of autonomous agents and multi-agent systems. The attractiveness of these efforts is mainly grounded in the high level of abstraction and descriptive power that they provide in the analysis and design of complex computational systems [53; 54]. Indeed, similar models of abstraction are called for when it comes to describing the complex information systems of the future. However, even though there have been certain attempts by the academic community to develop comprehensive (agent) methodologies, the academic stance has hitherto remained almost devoid of industry's emphasis regarding models of dependable computing systems, which typically focuses on technologies such as peer-to-peer computing [43], grid computing [17], and web services [71]. In essence, we strongly believe that agent-oriented approaches to system development come with a natural level of abstraction and therefore have something valuable to offer. However, any comprehensive methodology toward ubiquitous and dependable computing necessarily has to be grounded in concerns and solutions of relevance in the area of embedded systems as well. Many efforts at establishing methodological instruments within the agent paradigm focus on isolated models of normative behavior in sociological systems and on traditional methods of software engineering [28]. These efforts are, of course, worthwhile in themselves, but they have clear limitations when it comes to their contribution in application areas dealing with technological and embedded systems of societal concern. Instead, we believe that more applicable approaches and mindsets have been identified in the area of proactive computing.
As such, comprehensive methodologies are called for that emphasize (1) pervasive coupling of networked systems to their environments, (2) proactive modes of operation in which humans are above the loop, and (3) bridging the gap between control theory, computer science, and software engineering [73]. As the means to an end, and with the previous discussions of this chapter in mind, we therefore propose to frame the paradox of dependable computing systems in terms of such a methodological approach.
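As a minimal illustration of points (1) and (2) above – a system pervasively coupled to its environment, with humans above rather than in the loop – consider the following sketch. It is our own illustrative construction, not taken from the cited literature; all names, readings, and thresholds are hypothetical.

```python
import random

# Illustrative sketch (all names and thresholds hypothetical):
# an embedded controller senses its environment and acts autonomously,
# while a human operator stays "above the loop" -- reviewing the trace
# and retuning policy, rather than approving each individual action.

def sense(t):
    """Hypothetical sensor: nominal readings, with a disturbance at t == 7."""
    return 20.0 + 5.0 * random.random() + (15.0 if t == 7 else 0.0)

def run(steps=10, threshold=24.0):
    """In-loop operation: sense, decide, and log without human intervention."""
    trace = []
    for t in range(steps):
        reading = sense(t)
        action = "cool" if reading > threshold else "idle"
        trace.append((t, round(reading, 1), action))
    return trace

# Above the loop: a human inspects the trace afterwards and may adjust
# `threshold` for future runs -- supervision of policy, not of actions.
trace = run()
print(trace[7])  # the disturbance at t == 7 always triggers "cool"
```

The design point of the sketch is exactly the proactive-computing stance: the human never blocks an individual control decision, but remains responsible for the quality of the policy the loop executes.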
3 METHODOLOGY OF COMPUTING
Science has to be understood in its broadest sense, as a method for comprehending all observable reality, and not merely as an instrument for acquiring specialized knowledge. – A. Carrel
3.1 INTRODUCTION
In the early days of computing, the application domains addressed were mainly those of a scientific or administrative nature. As such, these application domains did not really involve the problems of dependable and plentiful computing as we encounter them today. The implementation of computational systems could be conceived in a rather straightforward manner, i.e., as applications of symbol processing with formal semantics. Moreover, since the involved programs executed on isolated computing machinery, if the algorithm was correct, the implementation valid, and the hardware working just fine, then the notion of dependable computing systems was indeed achievable – simply make sure that your implementation correctly mapped onto your functional algorithm. Consequently, a plethora of models, methods, and technologies were conceived that today comprise the very essence of research fields such as computer science and software engineering. In the contemporary field of computing, however, we still primarily concern ourselves with the basic problem of mapping algorithms onto programmatic constructs. Even though the resulting processes from implementing these algorithms can take on a distributed form, and even though they are capable of communicating with each other, the focus on algorithm design, program implementation, and formal verification still prevails throughout the communities of computer science and software engineering. However, in this context, one ought to notice that the associated areas of applicability are somewhat different. The application domains of today not only involve algorithmic
computation but, more importantly, they also involve the complex and distributed operation of embedded systems. Moreover, since this particular application domain of computing is concerned with those continuous operations that take place in the habitat of nature, we are no longer concerned with mere problems of algorithmic behavior. We are concerned with problems of coordination and control in complex systems that operate in physical environments – governance of organized complexity by means of mixed team operations [78]. In this respect, when the involved computational systems are embedded in nature, their structures* and processes are typically of an open and dynamic nature – a priori unknown to some extent – whereas algorithms perform in closed environments of a logical nature. Of course, one could, by means of previous experience, have a good idea of what to expect from embedded systems that perform in these open environments, but when we are dealing with nature the involved systems are typically susceptible to the occurrence of unpredictable events with unknown consequences and impact. Moreover, the coherent and sustainable operation of computing systems that are situated in open and physical environments is not easily established by means of formal instruments. Instead, we need methodological instruments that are specifically geared toward dealing with systems that are immersed in nature and operated by various teams and intelligent systems [37].14 As indicated by Highsmith, such a requirement involves the development of new methodological instruments [33] (p. 21):

SOLUTION 3.1
The first major strategy for managing high change through creation of a collaborative structure requires deployment of methods and tools that apply increasing rigor to the results, that is, to the work state rather than to the work flow.
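One illustrative reading of applying rigor "to the work state" is to check invariants against a system's observed state, rather than against the process that produced it. The sketch below is ours, not Highsmith's; all names, invariants, and thresholds are hypothetical.

```python
# Illustrative sketch (names hypothetical): "rigor to the work state"
# read as runtime invariant checks on a system's observed state, rather
# than checks on the work flow that produced that state.

def violated(state, invariants):
    """Return the names of all invariants the current state violates."""
    return [name for name, holds in invariants.items() if not holds(state)]

# Hypothetical invariants for a small embedded node.
INVARIANTS = {
    "battery_ok":   lambda s: s["battery"] > 0.1,
    "temp_in_band": lambda s: 0.0 <= s["temp"] <= 85.0,
    "peers_known":  lambda s: len(s["peers"]) >= 1,
}

state = {"battery": 0.05, "temp": 40.0, "peers": ["node-b"]}
print(violated(state, INVARIANTS))  # -> ['battery_ok']
```

Such checks can be evaluated continuously against a running system, independently of how its state came about – which is the sense in which rigor attaches to results rather than to work flow.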
Consequently, with the ultimate goal of providing for such quality of service in complex computational systems that we, with a sufficient amount of confidence, dare to immerse them in our societal fabric,* we propose a methodological framework for the establishment of systemic qualities in embedded systems. In essence, we advocate an approach that aims at the establishment of such qualities by means of continuous exploration and refinement of complex phenomena. Moreover, as opposed to emphasizing formal verification, our methodological approach focuses on empirical validation, by means of real-time inspection of some dynamically isolated system – in situ. To this end, the aim of this particular chapter is to explicate and discuss what we consider the general rationale and methodological instruments of such an approach. Consequently, in the first section of this chapter – Framework of instruments – we introduce a general framework of methodological instruments, i.e., one geared toward exploration and refinement of complex computing phenomena. In particular, we discuss the need for a framework* that emphasizes the continuous interplay between knowledge
acquisition and quality assurance. This discussion is then continued in the next section – Principles – where certain principal concerns of computer science as well as software engineering are introduced, i.e., mechanics and design. Moreover, in order to address the concerns at hand, the following section – Models – delves further into the main requirements involved in experiencing complex phenomena, i.e., models that are geared toward observation and instrumentation. In response to these requirements, certain considerations of yet another pivotal instrument of our framework are introduced. That is, in the following section – Methods – we emphasize the overall mindset that has led us to introduce the methodological approach advocated herein, i.e., attaining qualitative behavior in complex phenomena as a matter of iterating over experiences and adaptations. To this end, we emphasize the need for specifically tailored technologies.

3.2 FRAMEWORK OF INSTRUMENTS
As discussed in the previous chapter of this thesis, we are concerned with the identification of an applicable and methodological approach toward ubiquitous and dependable computing. The most important issue that we aim to address with such an approach is to increase our own confidence that we understand what qualitative system behavior to expect in complex embedded systems. As such, we argue that there have indeed been certain efforts in a similar direction, but most of them come with certain limitations, due to a minimal emphasis on isolated instruments of the advocated approaches. That is, many contemporary methodologies of computing comprise only a subset of what we consider to be the pivotal instruments of a comprehensive approach toward empirical investigations. These efforts are, of course, worthwhile in themselves but have clear limitations when it comes to their contribution in application areas that deal with complex and embedded systems. However, concerns similar to those addressed herein have previously been identified in particular areas, e.g., proactive computing, where three fundamental concerns are of the essence: grounded semantics, cognitive agents, and methodological integration. As the means to an end, we therefore propose to address these particular concerns of dependable computing systems in terms of an integrated and comprehensive methodological approach. In this context, one should perhaps pose a central and natural question: Is there an uncomplicated framework and methodological approach toward dependable computing systems? We argue that there is. However, we also have to acknowledge the fact that it is quite limited in its implicit application domain. In essence, we are not concerned with algorithmic complexity in general, but rather with complexity in those embedded and distributed systems that provide for some critical support function in societal contexts, e.g., energy, healthcare, and defense.
In such systems the essential complexity is due to
unpredictable interactions. Moreover, the framework advocated herein rests on one underpinning in particular: it should guide us in establishing qualitative system behavior as a matter of exploration and refinement. In other words, if we are concerned with our confidence in the dependable behavior of complex computational systems, an applicable methodology must not emphasize isolated instruments of understanding and development. Instead, as advocated throughout this thesis, our fundamental concern must be interpreted as a matter of a comprehensive methodology. As such, dealing with the science of method, one necessarily has to identify the implicit requirements that such a comprehensive approach assumes. The main difference between natural phenomena and computational systems lies in the latter being designed and implemented, in contrast to the former, which are there waiting for exploration. Moreover, the dichotomy of science and engineering – already established in the disciplines of natural science – is less understood or established in the field of computing. In general, science and engineering typically evolve in a symbiotic manner, i.e., the products emanating from one community function as the necessary input to the other. Consequently, if our primary concern is to acquire a certain level of confidence that the computing systems of today can be ascribed the quality of being dependable, we should establish the conception of a comprehensive methodology that embraces the concerns of science as well as engineering. Obviously, the correctness of such a dualistic approach is difficult to prove by means of formal models and methods – verification.
Instead, we could try to establish its soundness by means of real-world exploration and refinement – validation.* Consequently, the basic underpinnings of our advocated approach involve the explication of two concerns in particular, i.e., the scientific method of knowledge acquisition and validation as well as the engineering procedure of quality assurance and assessment. As such, we consider the scientific method to incorporate the full spectrum of activities ranging over observation of phenomena, formulation of theories, prediction of new observations, and performance as well as analysis of experiments. Moreover, as a complement to the scientific method, we consider the method of engineering to incorporate activities such as design of structures, implementation of processes, and validation of qualities. Traditionally, it is often the case that we make a distinction between the disciplines of science and engineering. However, they are in effect complementary, in that the former aims at the establishment of behavioral principles in complex phenomena, whereas the latter strives to apply these principles in the creation of new phenomena. The bottom line is that, even though our intentions in applying these complementary approaches may differ, they should help us not only in understanding phenomena but also in harnessing and sustaining their qualitative behavior. In this respect, we cannot have one without the other. Science and engineering might be separate disciplines, but they necessarily evolve
and make significant progress together, as a joint venture with common goals and intentions. At this point, we are taking on a naturalistic stance, and one should perhaps note that some systems are at first not necessarily a part of nature, but rather introduced by means of human intervention. Still, as a result of deploying them in our physical environment, they become subject to the common events and stimuli of their natural habitat. Therefore, in order to understand and harness the causes and effects of computing systems’ dependable behavior in physical environments, we would be well advised to establish a methodology of computing that resembles those applied in the disciplines of science and engineering of physical systems. To this end, we therefore argue that a methodology can be considered comprehensive if the applicability of its constituent instruments provides support for those theoretical as well as practical requirements that practitioners of science and engineering call for. In the following section, we introduce the instruments of our methodological approach toward dependable computing systems. From a general perspective, the methodology we advocate depicts the somewhat traditional inductive–deductive approach of scientific investigations as tightly coupled with the concerns of engineering procedures. Furthermore, it should be noted that we assume that the term system in this context refers to a universal and open system, and not to some closed subsystem in isolation from its physical environment. However, more importantly, our methodological approach should be interpreted as a framework of instruments, i.e., instruments that typically are found in most any method of science and engineering. Consequently, we propose a framework of methodological instruments that encompasses the basic notions of principles, models, methods, and technologies.
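The iterative application of these four instruments can be caricatured as a simple feedback loop. The sketch below is purely illustrative: the `iterate` function, its names, and the refinement step are our hypothetical constructions, not part of the framework itself.

```python
# Illustrative sketch (names hypothetical): the iterative application of
# the four methodological instruments around some phenomenon under study.
# Each full pass refines the account of the phenomenon, mirroring the
# exploration-and-refinement cycle described in the text.

INSTRUMENTS = ("principles", "models", "methods", "technologies")

def iterate(phenomenon, cycles=2):
    """Apply all four instruments per cycle; refine the account afterwards."""
    history = []
    for cycle in range(cycles):
        for instrument in INSTRUMENTS:
            history.append((cycle, instrument, phenomenon))
        phenomenon = f"refined({phenomenon})"  # hypothetical refinement step
    return history

trace = iterate("embedded system behavior")
print(len(trace))  # -> 8 (4 instruments x 2 cycles)
```

The point of the caricature is only that the loop never terminates in principle: each cycle's output becomes the next cycle's subject of investigation.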
Moreover, as indicated in Figure 3.1, this framework emphasizes the iterative application of its constituent instruments. First, some particular set of principles is typically used in order to characterize certain concerns, i.e., regarding some particular subject of investigation. We can then describe and characterize such a phenomenon more exactly by means of the particular models and aspects at hand. These characteristics then provide the basis for addressing our particular concerns, by means of specifically geared methods. However, not only the subjects of investigation but also the methods at hand require support from enabling technologies, in order to make all possible aspects of the involved systems’ constituents, dependencies, and interactions susceptible to exploration and refinement activities. Finally, it should perhaps be stressed that the methodological framework introduced herein in every respect addresses the dichotomy of computer science and software engineering as the necessary means toward dependable computing systems. In this context, the methodology that we advocate should be interpreted as simultaneously concerned with exploration and refinement of systemic computing phenomena. Of
FIGURE 3.1 INSTRUMENTS
[Figure: a cycle over four instruments – (1) principles, (2) models, (3) methods, (4) technologies – surrounding some systemic phenomenon of computing.]
We consider a methodological framework of instruments that, addressed in a procedural and iterative manner, helps us in dealing with qualitative concerns of dependable computing systems. At the center of this process we always have some systemic phenomena of computing: (1) characterized using principles, (2) described using models, (3) addressed using methods, and (4) made susceptible by means of technologies.
course, other comprehensive approaches address the cumbersome process of producing qualitative behavior in complex computational systems. However, they are mainly focused on the general concerns of application areas other than the one addressed herein, e.g., knowledge-based systems as opposed to critical support functions, and therefore differ from the general intent of the approach advocated in this thesis.15 Our framework of instruments should necessarily be considered as a never-ending feedback loop of methodological refinement. We return to this matter in the final section of this chapter – Concluding remarks.

3.3 PRINCIPLES
In merely six decades, computing has now come to occupy and serve a critical role in everyday science, engineering, and business. However, even with this tremendous success in mind, there is a matter other than its history and impact that calls for our immediate attention – understanding. Simply put, if we are ever going to feel confident in our dependence on computing machinery and, thereby, achieve the goal of dependable computing systems, we have to come to terms with its very nature. Even though we are increasingly infatuated by the marvels of computing, and even though we venture into increasingly delicate areas of applicability, the single most important issue we have to face
is that of understanding the nature of complexity in computing. This dilemma is by no means new, nor introduced by the field of computing alone – on the contrary. On many occasions throughout history, we have projected and applied recently attained knowledge onto new areas of applicability. However, it is only after we have successfully applied this knowledge for the first time that we can come to understand the implications of our actions – opportunities and challenges emerge. To successfully instrument and control artificial systems is obviously a very intriguing activity to perform, but an inherently complex one. In this respect, computing is no different from any established field of science or engineering, except on one account in particular – principles. In the natural sciences, e.g., physics, chemistry, and biology, the fundamental concern and instrument of knowledge acquisition has consistently been focused on the identification, understanding, and application of behavioral principles – deduced from natural phenomena. Up until now, in the field of computing, the sole manifestation of such natural phenomena has mainly corresponded to the behavior of isolated computing machinery and their involved processes of computation and communication. Consequently, in scientific investigations and engineering endeavors of computing, the prevailing principles of systemic behavior have been deduced from the abstract mathematical models involved, i.e., Turing’s model of computation and Shannon’s model of communication. Now, with the introduction of pervasive communication and sensor technologies, computing systems can no longer be considered as isolated artifacts that perform within the capability and performance envelopes of the abstract models alone. Instead, the behavior of computing systems is also under the unpredictable influence of nature itself, and we should therefore complement our set of principal concerns accordingly.
That is, by means of immersing our computing systems into nature, we are dealing with embedded systems – unavoidably introducing problems of complexity into the command and control loop of computing systems. On the one hand, the occurrence of events in our physical environment introduces us to problems of complexity and unanticipated issues of performance and reliability. On the other hand, the involvement of multiple system operators introduces us to problems with respect to evolvability and dependability. At this point, one should perhaps note that there is a difference between the principles of natural and artificial systems. The major difference between the two lies in what we consider the origin of behavior. In natural systems, we are primarily concerned with principles that depict an emergence of behavior. In the domain of artificial systems, on the other hand, we focus on principal areas of interaction mechanisms [79]. As such, these mechanisms are supposed to give rise to some sought-for behavior. If we relate this line of reasoning to the field of computing, i.e., dealing with artificial systems, there are indeed certain principal areas of mechanics. As advocated by Denning, these principles of computing involve concerns of a theoretical as well as practical nature. In fact, he emphasizes that the principles of computing could be categorized as a matter of mechanics, design, and practice [11]. The major contribution of such a classification scheme is that we can suddenly characterize computer science and software engineering as two strands of the same discipline – computing. However, the bottom line of Denning’s framework is that computer science is the strand of computing that emphasizes mechanical principles, whereas software engineering typically focuses on principles of design. As such, the principles of mechanics involve certain focal areas of concern, i.e., computation, communication, coordination, automation, and recollection. The principles of design, on the other hand, emphasize certain fundamental qualities of software. According to Denning, these qualities typically include general conventions, i.e., simplicity, performance, reliability, evolvability, and dependability. As such, the principles of computing are geared toward attaining dependable programs, applications, and systems (see Figure 3.2). However, as previously indicated, this methodological instrument of principles alone cannot constitute a comprehensive methodology as such. It merely helps us in identifying our basic concerns regarding computing systems in general. We need the capability to describe the in situ systems in an appropriate manner as well.

FIGURE 3.2 CONCERNS
[Figure: two strands of principles – mechanics: computation, communication, coordination, automation, recollection; design: simplicity, performance, reliability, evolvability, dependability.]
In the field of computing, involving computer science as well as software engineering, we typically have two strands of principles. On the one hand, computer science is primarily concerned with algorithmic efficiency (mechanics) and, on the other hand, engineering is concerned with system quality (design).

3.4 MODELS
Science is above all concerned with the everlasting mission of human beings to understand the inner workings of their surroundings as well as themselves, i.e., the essence of nature.
METHODOLOGY OF COMPUTING
As such, the evolution of understanding and turning ideas into facts is above all based on our experiences from interacting with some particular subject of study – phenomena. Moreover, this continuous process of knowledge acquisition and validation typically involves two components in particular: behavioral principles and models deduced as a matter of experimental validation. In this context, it is important to acknowledge that it is mainly the invariant and contingent factors of phenomena, as well as the constraining principles that govern their behavior, that are our subjects of study. In fact, the core of mathematics as well as physics is the study and identification of invariants. As a matter of previous experience and individual interests or concerns, our instruments of exploration, i.e., models, methods, and technologies, remain in constant flux. In a sense, the methodological instruments at hand are constantly subject to refinement; in order for us to continuously strive toward more accurate understandings of nature. In its focus on mathematical models of computation, computer science emphasizes a strikingly different research focus compared to the natural sciences. In the latter case, the driving force is to understand different aspects of our given nature and, as such, we have been quite successful in setting up applicable models to capture the essential properties of particular phenomena. Moreover, in the context of the natural sciences, we have also excelled in constructing instruments that enable us to make measurements in controlled experiments. To this end, theories have been put forward that can be refuted or confirmed by means of applying such instruments in the context of conducting controlled experiments. However, regarding the involved models, no scientist would ever claim that his or her model perfectly maps onto phenomena in nature. 
Instead, the model is merely considered as an applicable abstraction of nature, with respect to some particular concern in mind. To this end, our goal of understanding some particular subject of study will always involve an extensive number of different models – chosen on an individual and subjective basis – as a matter of individual or collective understanding of their applicability. If there are discrepancies between some of the involved models, all aiming at a comprehensive view of the same subject of study, we must necessarily consider them as a collection of competing theories. It is not until we have verified a certain theory to be a truthful depiction of some particular phenomenon that we have established a generic representation of nature. What then should we consider as a truthful depiction of a computational system? The pragmatic stance taken in our methodological framework is the following. A truthful depiction of some systemic phenomenon is a representation that is considered as applicable by practitioners of science as well as engineering. However, it is important that both disciplines agree upon this matter. If one of them starts questioning the rationale and truth of the other's models, we are in essence dealing with a refuted theory and not a truly applicable model.16 In the natural sciences as well as in related disciplines of engineering, the soundness of theories and technologies is established as a
matter of asking nature itself, i.e., performing in situ experiments. In this thesis, we advocate that a similar approach should be taken in computer science and software engineering. Theories enable system analysts to specify which elements and concepts of the models are of particular relevance in dealing with certain concerns and questions, but they are also used to identify certain assumptions about these elements and their possible dependencies. Thus, within the boundaries of a particular framework, theories involve certain (strong or weak) assumptions that are applied when analysts inquire into and diagnose a particular phenomenon, to explain the involved processes and predict their outcomes. Typically, several theories are compatible with any given framework. However, it can be difficult to determine true alternatives. To develop and use models, explicit and precise assumptions about a limited set of parameters and variables are required. As previously indicated, we are making the explicit assumption that all instruments of our methodological framework, including models, are ultimately concerned with exploration and refinement of systemic phenomena. Consequently, the foremost requirement of our methodology involves observation and articulation. We need to see what we think – so we know what to say. Firstly, since our primary concern with dependable computing is the inherent complexity of the involved systems, we need to acquire the means to observe phenomena. That is, we need the capability to observe some particular phenomenon and its constituents, dependencies, and interactions. Unfortunately, because we are dealing with embedded systems of a computational nature, this is easier said than done. As with many other natural phenomena, embedded systems are closed to us in the sense that they are difficult to perceive with the naked eye.
In this context, one should remember that many advances in the natural sciences have come as a result of new technological instruments being developed, e.g., instruments that extended our perception into the cosmos as well as into the subatomic world. Secondly, once we have acquired the capability to observe phenomena of embedded systems by means of cognitive inspection, our primary concern typically involves the articulation of our observations. By means of cognitive inspection, we can typically isolate the occurrence of systemic events and stimuli, e.g., structures and processes, in some particular phenomenon. However, in order to communicate any concern of ours regarding these phenomena, we must be able to articulate them by means of commonly agreed upon conceptual structures. Moreover, as indicated in previous chapters of this thesis, such conceptual structures must necessarily be created and agreed upon by means of an iterative process performed by some temporal set of cognitive agents. In terms of being merely one out of many methodological instruments, we argue that models in computer science and software engineering necessarily should be conceived
and treated with the same implicit care as is the case in the natural sciences. That is, models of embedded systems should be established with the primary intent of observing as well as articulating natural phenomena and, thereby, they should facilitate our understanding of their dependability. Our utmost concern regarding the methodological instrument of applicable models is that those currently applied in computer science and software engineering, e.g., emphasizing complex information systems, do not depict or convey the nature of physical systems. Instead, they focus on computational abstractions of little or no resemblance to the phenomena of our concern. Consequently, the methodological framework advocated herein involves a proposal for such a specifically tailored model. We will return to it in Chapter 5 – Open computational systems.
3.5 METHODS
The methodological instrument of methods supports us in the practical activities of analysis, synthesis, and adaptation of system behavior, with respect to the principles and models at hand. Investigations of mechanical principles for coordination presuppose, for instance, methods that are geared toward observation and subsequent articulation of systemic phenomena. Moreover, the methodological instrument of methods also supports us in design, implementation, and maintenance of dependable systems. However, in the field of computing, methods of such a dualistic character are still largely missing. Methods of science typically focus on understanding aspects of our natural environment, by means of establishing models and principles that capture relevant aspects of systemic phenomena. In engineering, those principles and models are tailored into commonly agreed upon conventions, allowing us to build artificial systems with intended functionality and qualitative behavior. We therefore claim that identification of new methods has to begin with the assessment of implicit assumptions of current software engineering paradigms, as well as of the principled conventions behind other civil engineering paradigms that deal with complex systems. Yet another consideration of great concern in this context is in what way the involved methods address the fundamental notion of quality, as provided for by some envisioned system. In many cases, the primary application of complex computational systems is to provide enhanced support for critical functions of our society, e.g., healthcare, energy, defense, and business. As previously indicated, our capability to predict and control the outcome of some system's behavior is very much dependent on the lifetime of given preconditions and goals. At one extreme, we have closed systems with a potential lifetime of the preconditions approaching infinity.
That is, complexity due to some particular system's constituents is stable while, at the same time, complexity due to dependencies and interactions can be in a constant state of flux. In that case, formal methods of validation and verification will sometimes suffice. At the other extreme, however, we have open systems with a potentially infinitesimally short lifetime of the preconditions. In that particular case, all aspects of system complexity are in a potential state of flux. Constituents, dependencies, and interactions continuously evolve and, in effect, call for command and control as a matter of online* exploration and refinement by means of cognitive agents and network enabled capabilities [6; 7]. In essence, we argue that when we are dealing with complex computational systems of this nature, we need to achieve control by other means than offline formal methods. We argue that it is by means of such methods that we need to address the pivotal notion of systemic qualities, i.e., the processes involved in identifying and assembling a particular set of software entities in such a temporal configuration that their constitutive behavior provides for some beneficial quality. As such, our methodological instrument of methods must necessarily be geared toward achieving an intended quality as perceived by an open set of cognitive agents. A principled approach toward command and control of such processes is therefore also a key issue in attaining the goal of dependable computing systems. The bottom line is that the notion of dependability, as well as to what extent we experience confidence in qualities of computing systems, involves our capability to explore and refine, in an online manner, the qualitative concerns of some common area of interest. With this line of reasoning in mind, let us assess contemporary software engineering.
Traditional software engineering starts with requirements analysis based on some particular model of important aspects of the envisioned system. The next phases of these traditional methods comprise the design, followed by implementation and testing. In effect, design documents play the role of modeling some (non-existing) system and, more often than not, are of little value in real time exploration and refinement of an online system. A key question: are methods that emphasize offline modeling and formal verification relevant when it comes to maintaining crucial qualities in the behavior of an online system? The answer is, of course, yes. They are relevant under the premise that the particular system in question is well understood as well as of a closed nature. However, in those situations where the system in question is of an open nature, the answer is rather the opposite. It is well known that systemic properties such as trustworthiness or sustainability are non-functional. That is, they are neither decomposable into similar qualities of subcomponents nor composable from components exhibiting those qualities. Systemic properties of open systems can thus only be validated by the factual behavior of the system. The powerful principle of functional decomposition has to be complemented accordingly by some other means, in order to harness critical systemic requirements of open computational systems.
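To make the contrast between offline verification and online validation concrete, consider the following minimal sketch. It is our own illustration, not taken from the thesis, and all names and numbers in it are hypothetical: a systemic property, here end-to-end availability, is validated by sampling the factual behavior of the running aggregate, rather than derived offline from the components' individual specifications.

```python
import random

# Hypothetical sketch: a systemic property (end-to-end availability) is
# validated online, from sampled factual behavior, rather than derived
# offline from component specifications.

class Component:
    def __init__(self, name, failure_rate):
        self.name = name
        self.failure_rate = failure_rate

    def responds(self):
        # Simulated in situ behavior; in a real open system this would
        # be a measurement, not a model.
        return random.random() > self.failure_rate

def observed_availability(components, samples=1000):
    """Online validation: sample the factual behavior of the aggregate."""
    ok = sum(all(c.responds() for c in components) for _ in range(samples))
    return ok / samples

random.seed(42)  # deterministic, for the sake of the example
system = [Component("sensor", 0.01), Component("link", 0.05), Component("server", 0.02)]

# The systemic quality is a property of the whole, observed in operation;
# it is not simply composable from the components' individual figures.
print(f"observed availability: {observed_availability(system):.2f}")
```

The point of the sketch is only that the measured figure belongs to the aggregate's factual behavior; decomposing it back onto the individual components would discard exactly the systemic character the text argues for.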
To this end, mathematical models of computation and communication emphasize algorithms and formal semantics and are naturally the theoretical backbone for information processing. However, we have come to understand that a major challenge in conceiving the notion of dependable computing systems involves a concern of a somewhat more practical nature – grounded semantics. This concern appears when we address application domains where the involved systemic qualities necessarily must be dealt with in an online manner, by means of cognitive inspection and adaptation, i.e., when there is a gap between formal and behavioral semantics. To address this challenge we propose a method of continuous exploration and refinement of systemic qualities in an approximate manner. We will return to this proposal in Chapter 6 – Online engineering.
3.6 TECHNOLOGIES
No matter if we address methodological approaches that emphasize scientific investigations alone, or engineering endeavors for that matter, they all require support from enabling technologies, e.g., the Hubble telescope or the CERN particle accelerator. That is, in the case of scientific investigations, we typically require technologies that enable us to observe the complex phenomena of our concerns. This is also the case in engineering, where we aim for construction and assessment of phenomena, e.g., computer simulations of stress in some artifact’s structural properties. However, in both cases, the enabling technologies necessarily need to conform with certain constraints at hand, i.e., we cannot circumvent nature in development of applicable technologies. In the natural sciences, these constraints of technological innovations and development thereof typically adhere to the laws of nature. However, in the case of computer science and software engineering, the situation is somewhat different. As indicated in previous sections of this chapter, principles in the field of computing differ from those in the natural sciences in that the latter deals with spontaneous and de facto behavioral principles, whereas the former deals with principle areas of concern, i.e., mechanics and design. As such, the methodological instrument of technologies is required to emphasize and provide support for these principles of convention, rather than being constrained by principles of spontaneous behavior in nature. Consequently, in the field of dependable computing systems, we argue that the enabling technologies primarily are required to provide support for computing’s principles of mechanics. In the particular case of our methodological framework, emphasizing the dichotomy of computer science and software engineering, there is an additional requirement imposed onto the enabling technologies. 
As such, they must not only support a common set of principles, but a common set of models and methods as well. To this end, the main intent with the methodological framework advocated herein is to establish qualitative
behavior in complex computing phenomena, as a matter of continuous exploration and refinement – a methodology toward qualitative approximation. Consequently, in our particular case, the methodological instrument of technologies should not be considered as necessarily geared toward system operation or administration as such, but rather toward cognitive experience and experimentation. Cognitive inspection and, subsequently, experience of embedded systems implies the use of mediation artifacts, e.g., technological inventions such as the microscope. Moreover, such artifacts also require certain functionality in order to provide for the capability of cognitive agents to perform experiments, i.e., visualization, measurement, and instrumentation. It is therefore fundamentally important to remember that cognitive agents who concern themselves with the activity of exploration or refinement – empirical investigations – typically come in two strands, i.e., human and software. In effect, we should consider enabling technologies as required to support human as well as software entities in the activities of exploring and refining systemic phenomena – in situ. We will return to the rationale as well as architectural proposal for this methodological instrument of ours in Chapter 7 – Enabling technologies.
3.7 CONCLUDING REMARKS
The continuous evolution of complex computational systems is in many ways a matter of iterative exploration and refinement of some particular subject under study. As such, the methodological operators involved are typically concerned with various aspects and qualities of the products resulting from some particular methodology. Moreover, as indicated in previous chapters of this thesis, we argue that our confidence in attaining some sought for quality of embedded systems is probably best achieved by means of continuously refining the methodological approaches at hand, as opposed to focusing on the complex systems as such. To this end, during the development of the particular methodological approach advocated herein, we have come to understand that, instead of trying to identify all dimensions* and complex dependencies of some particular system at hand, as the basis for comprehension and control, one should consider the methodological approach as such to be the primary subject of feedback and control. Moreover, and this is a key statement: the systemic phenomena we are dealing with are difficult to investigate by means of structural snapshots of some sought for behavior. In every respect, it is something of an oxymoron to aim for the establishment of qualitative processes of system behavior by means of verifying the involved structures' formal adherence. In effect, we should perhaps question the rationale behind methodological approaches that aim for formal verification of structures when our understanding, or lack thereof, is concerned with the processes exhibited by some particular subject of study.
FIGURE 3.3 FEEDBACK
[Figure: a loop connecting observation, articulation (model), construction, and instrumentation of a system.]
There is a general feed-forward loop of exploration and refinement in attaining qualities of embedded systems. Observation of a system, existing a priori in nature, results in the articulation of a model. The model can, subsequently, be used in construction as well as instrumentation of the previously observed system.
In this respect, since we are dealing with distributed systems of computation and communication, we would instead emphasize a methodological approach that focuses on exploration and refinement of complex behavior in real time – online. In retrospect, we have come to understand, through practical experience from actually developing complex computational systems, that the framework of principles and conventions advocated by Denning [11] can quite naturally guide us in empirical investigations of computing. If one would start by identifying some particular application domain of concern, e.g., network based defense, the framework could be implemented as an iterative sequence of investigations. Starting with one of the basic principles of mechanics – coordination – the whole sequence of design principles should then necessarily be tested before one continues with the next principle of mechanics. This way, one would most probably achieve an increase in confidence regarding the quality and, subsequently, the dependability of some particular mechanism before one starts experimenting with and proclaiming benefits of an even more complex one – automation. In summary, we have come to understand that dealing with complex computational systems is mainly a matter of performing empirical investigations regarding some complex phenomenon at hand. Moreover, in doing so, we need to specifically address the dichotomy of computer science and software engineering; not from the somewhat traditional perspective of algorithm complexity and standardization of work processes, but rather from the perspective of continuous exploration and refinement of computational systems embedded in nature, as a cooperative team effort [64].17 To this end, the methodological approaches we apply, geared toward increasing our confidence in that we
actually have attained such qualitative behavior and control that we dare to immerse these systems into our societal fabric, must also be subject to a continuous process of exploration and refinement, as are the systems themselves. However, without further ado, let us continue with our primary issue at hand, i.e., addressing the common concerns of science and engineering in the field of dependable computing systems – complexity.
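The feed-forward loop of Figure 3.3 can be read as pseudocode. The following sketch is our own illustration, with hypothetical names: observation of an in situ system yields an articulated model, which in turn drives construction and instrumentation of that same system, iterated as a matter of continuous exploration and refinement.

```python
# Illustrative sketch (names are ours, not the thesis's) of the loop in
# Figure 3.3: observe a system, articulate a model from the observation,
# then instrument the system under guidance of that model.

def observe(system):
    # Perceive the system's current, factual state.
    return dict(system)

def articulate(observation, model):
    # Fold the observation into an explicit, communicable model.
    model.update(observation)
    return model

def instrument(system, model):
    # Act back on the system, guided by the model: here, a trivial
    # adaptation that nudges an observed quantity toward a desired quality.
    if model.get("load", 0.0) > 0.5:
        system["load"] = round(system["load"] * 0.5, 3)
    return system

model, system = {}, {"load": 0.9}
for _ in range(3):  # continuous exploration and refinement, abbreviated
    model = articulate(observe(system), model)
    system = instrument(system, model)

print(system)  # the adapted system settles below the quality threshold
```

The design point the sketch makes is that it is the loop itself, not any single snapshot of the system, that is the subject of feedback and control.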
Part 2
THEORY
4 ISSUES OF COMPLEXITY
In the development of the understanding of complex phenomena, the most powerful tool available to the human intellect is abstraction. Abstraction arises from the recognition of similarities between certain objects, situations, or processes in the real world and the decision to concentrate on these similarities and to ignore, for the time being, their differences.
C. A. R. Hoare
4.1 INTRODUCTION
As previously indicated, we consider computing systems of today to involve such complexity that, in order to regain confidence in their continuous and dependable operation, we have to identify new models that appropriately depict the factual nature of the involved systems' behavior as well as the forces and events they are subjected to. As such, these models should primarily convey the systems' inherent complexity in a similar manner as those of natural phenomena. In doing so, it is important that affirmation and establishment as well as refutation of the involved models can be performed as a matter of empirical investigations, as is the case in almost any discipline of the natural sciences. To this end, we consider the primary origin of complexity in computational systems to be the natural result of their dynamic behavior, i.e., when constituents, dependencies, and interactions amount to a continuously evolving and qualitative whole that is difficult to comprehend as a matter of cognitive inspection and reasoning. In this context, when interaction and stimuli exchange take place between a particular system and its environment, we often tend to denote the system as being open. However, characterizing some particular system as being open in such a manner also tends to make openness a context-dependent notion – a system is typically characterized with respect to some temporal observer's unique set of referential criteria in cognition. Throughout this thesis, we therefore emphasize the following stance regarding the notion of openness.
DEFINITION 4.1
As a matter of referential criteria of cognition – aspects – an in situ system can be observed by an agent that possesses the capability of perception. In doing so, the agent's individually selected criteria of perception will result in the identification of certain distinct entities and relations. The interpretation of behavioral semantics thereof is the private concern of that particular agent alone. If the perceived entities and relations in any way can be forced to change, as a matter of external system stimuli, we consider the system aggregate to be of an open nature; possible to characterize as performing in a physical–temporal–conceptual state space.
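Definition 4.1 lends itself to a small sketch. The following is our own illustration, with hypothetical class and variable names: openness is judged relative to an agent's individually selected aspects of perception, so the same stimulus may open the system for one observer and leave it closed for another.

```python
# A minimal reading of Definition 4.1 (illustrative names, ours). An agent's
# aspects, i.e., its referential criteria of perception, select which
# entities and relations of an in situ system it identifies; if external
# stimuli can force the perceived state to change, the aggregate is open.

class Agent:
    def __init__(self, aspects):
        self.aspects = set(aspects)

    def perceive(self, system):
        # Identification is relative to the agent's selected criteria.
        return {k: v for k, v in system.items() if k in self.aspects}

def is_open(system, agent, stimulus):
    """Open if some external stimulus changes what this agent perceives."""
    before = agent.perceive(system)
    after = agent.perceive(stimulus(dict(system)))
    return before != after

observer = Agent(aspects={"entities", "relations"})
system = {"entities": {"a", "b"}, "relations": {("a", "b")}, "hidden": 0}

add_entity = lambda s: {**s, "entities": s["entities"] | {"c"}}
touch_hidden = lambda s: {**s, "hidden": 1}

print(is_open(system, observer, add_entity))   # True: change is perceivable
print(is_open(system, observer, touch_hidden)) # False: outside the aspects
```

The second stimulus shows the context dependence the definition insists on: the very same system counts as closed under these aspects, since nothing the observer perceives is forced to change.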
Moreover, we argue that dealing with complexity is primarily a concern of quality, i.e., to define and establish systemic properties that can be observed and / or engineered. That is, if we aim at understanding the nature of some particular phenomenon we are currently interested in, we first need to isolate the phenomenon by means of cognitive inspection. Once we have successfully isolated some systemic phenomenon and identified its temporal constituents, dependencies, and interactions, adaptation takes over – focusing on systemic refinement of qualities. This process of approximating systemic qualities is typically performed repeatedly, with the sole intent of eventually being able to establish and validate certain qualitative concerns at hand. To this end, we therefore need to identify the general issues of complexity that practitioners of science as well as engineering in the field of computing might have to deal with in this common endeavor toward dependable computing systems. That is, we consider the scientist as well as the engineer to be the principal observers that are concerned with some complex phenomenon at hand. Consequently, in response to the previous chapter's discussion on a methodological framework of practical instruments, this chapter aims to emphasize certain general concerns in dealing with complex computing phenomena. In doing so, this chapter's first section – Evolution of systems – introduces the notion of open systems and in what way our conception thereof differs in various disciplines of research and development. However, we argue that these differences in fact convey a common denominator in perceiving complex systems as such, i.e., bounding spaces. Therefore, in the following section – Isolation – we delve further into the rationale and requirements in identifying such bounding spaces.
In particular, we argue that, considering systems in general, it is by means of exploration and subsequent isolation of systemic phenomena that we can start to address the real issue at hand, i.e., refinement. Consequently, in the next section – Adaptation – we discuss the notion of software and what it actually means to modify its behavior. However, more importantly, we argue that adaptation of systemic qualities is an activity performed by cognitive agents, i.e., agents of a human as well as computational nature. Therefore, in the following section – Validation – we argue that the one feature of any complex phenomenon that all agents are interested in exploring as well as refining is that of systemic patterns,* i.e., constitutive phenomena characteristics.
4.2 EVOLUTION OF SYSTEMS
We consider the behavior of natural systems to be spontaneous, yet governed by physical laws and constraints, and we consider the behavior of artificial systems to be the result of engineering the technological behavior of artifacts. Both types of systems manifest themselves in terms of processes and structural evolvement, but behavior in the former system type is not considered as being of a manufactured nature. Moreover, if we consider the complexity of in situ computational systems as a basic starting point for our investigation of dependable computing systems, we necessarily have to come up with a common conceptual framework that deals with observation of natural systems as well as construction of artificial systems. In pursuit of such a conceptual framework, we believe that general systems theory could serve as an appropriate point of departure. Contemporary research and development in conceiving complex computational systems primarily focus on the mechanical principles of communication and coordination. As such, practical application areas typically involve the manifestation of auctions, scheduling, and provisioning of services in the form of coordination mechanisms. However, in the case of application areas where coordination mechanisms are subjected to context-sensitive dependencies, the presence of unanticipated events in their physical environment introduces us to quite challenging issues regarding complexity and dependability. In those areas of applicability, there is a general assumption involved that calls for computational systems most appropriately described as being of an evolving nature, i.e., the involved structures and processes of some particular system are not completely established at design time. Instead, these systems are supposed to reflect upon contextual dependencies, with respect to temporal situations, and adapt accordingly. In this process, the loss of system quality is obviously not an acceptable system behavior.
We characterize this class of artificial and evolving systems as being of an open nature. The behavior of such systems is typically of a temporal nature, due to the possible occurrence of factors that necessarily are difficult to anticipate prior to the involved systems being constructed and deployed. If we aim at a dependable system behavior, driven by frequent context switches, we therefore need to explicate the notion of evolution and, more importantly, we need to address the notions of openness and sustainability of invariant qualities. In other words, a better understanding with respect to in what way complex computational systems can be isolated, adapted, and validated in an online manner is sought for. From the perspective of dependable computing systems and quality of service,* those models that deal with the notion of openness play a fundamental role in ascribing some particular system as dependable. At a first classical glance, the notion of some system being open typically implies that it is subjected to a flow of substance, i.e., stimuli
exchange between the system and its environment. This implicit assumption is of primary concern when it comes to dynamics in physical systems. At a second glance, the notion of any system being open is solely dependent on the intent and capability of its observer. That is, to consider a system as open is to ponder the extent to which it is susceptible to observation and instrumentation, as a matter of some cognitive agent's intervention. The concept of open systems has taken on a prominent role in disciplines that deal with complex phenomena, e.g., systems comprised of cognitive agents. Issues related to system organization and construction are addressed up front in these areas of research and development. Naturally, there is little consensus regarding a common definition of what an open system is. In the following, we therefore try to clarify the notion of open systems as it is currently applied in certain relevant disciplines of science and engineering. These perspectives, or rather their differences, are then summarized in order to present an initial understanding of what the notion of open systems means in our particular context. From a general point of view, open systems in the habitat of nature are considered as systems moving closer to or further away from an equilibrium state, i.e., showing tendencies to increase or decrease in orderliness [58].18 Moreover, in that context, disorder is considered possible to export into the surrounding environment and, hence, some system's structural order can increase rather than decrease – at the same time as the system involves processes of change and evolution. Consequently, from the perspective of organization, an open system can develop structural properties that are the very opposite of moving into an equilibrium state.
We can characterize the development of such far-from-equilibrium properties as attributed to a flow of mass or energy between some natural system and its environment [49]. In effect, it is primarily the physical boundary, between the origin and destination of such a flow, that characterizes a system as open or closed. Another perspective on open systems is that found in various research disciplines related to software engineering and computer science. The concept of an open system is then primarily considered in terms of a dynamic set of interacting entities and its associated simplicity or performance quality. However, the interactions that take place between the constituents of such a set are not of an open nature, in the sense that each interaction must be constructed according to some particular syntax and semantics, i.e., an interaction protocol, agreed upon prior to the construction and subsequent observation of the systemic phenomenon in question. Consequently, an open system in the field of computing could be characterized as a dynamic set of interacting entities, constrained by means of a priori established interaction protocols. Obviously, the difference between these definitions of open systems lies in the very origin of the systems under study. The common denominator of such systems is their
inherent complexity, no doubt. However, the first perspective deals with system complexes that are a priori a part of nature, and behavior is considered spontaneous and emergent. The second perspective, on the other hand, deals with systems of a manufactured nature, i.e., the involved phenomena convey an artificial behavior, as a result of certain conventions of mechanics and design. Still, both system types share the characteristic feature of being open. As such, an open system is, in general terms, a system that can be stimulated by means of some external force. Moreover, such forces of stimuli are, in the case of open systems, also possible to export back into some particular system's environment. Consequently, it is this flow of substance, e.g., energy or information, that renders a system as being of an open nature. However, at this point one should notice that the notion of flow actually assumes a transport of substance between two distinct entities in some physical space. That is, no matter whether we are dealing with energy or information, the flow of substance necessarily has to pass through some sort of spatial medium and associated boundaries. No matter whether such boundaries are of a conceptual or physical nature, it is within their delimiting space that system evolution takes place, i.e., constituents, dependencies, and interactions continuously evolve within some form of bounding space, enabling us to refer to a phenomenon's unique state and identity. In summary, we have come to understand that if we are interested in broadening our understanding of dependability and evolution in complex computational systems, the conceptual framework we are looking for must address openness from a general perspective, involving the notion of open systems in nature as well as in computing. To this end, such a theoretical framework has previously been advocated in the form of general system theory.19

4.3 ISOLATION
In the theory of general systems, systemic phenomena are described in terms of complexes of interacting entities. As such, the physical–temporal–conceptual state of some particular system aggregate can be considered unique by means of four basic attributes, i.e., the number of entities and entity* types as well as the number of dependencies and dependency types (see Figure 4.1). Moreover, these different attributes of an aggregate constitute the basis for two types of system characteristics, i.e., summative and constitutive. Summative system characteristics are those that may be obtained by means of summation, e.g., the number of entities or dependencies, whereas constitutive system characteristics are those that depend on temporal and unique configurations within the complex.
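The four basic attributes and the summative/constitutive distinction can be sketched in code. The sketch below is illustrative only; the class and field names (`SystemAggregate`, `Entity`, and so on) are our assumptions, not constructs from the thesis.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Entity:
    name: str
    kind: str  # entity type

@dataclass(frozen=True)
class Dependency:
    source: str
    target: str
    kind: str  # dependency type

@dataclass
class SystemAggregate:
    entities: list = field(default_factory=list)
    dependencies: list = field(default_factory=list)

    def summative(self) -> dict:
        # Summative characteristics: obtainable by summation alone,
        # i.e., structural snapshots of the four basic attributes.
        return {
            "entities": len(self.entities),
            "entity_types": len({e.kind for e in self.entities}),
            "dependencies": len(self.dependencies),
            "dependency_types": len({d.kind for d in self.dependencies}),
        }

s = SystemAggregate(
    entities=[Entity("a", "sensor"), Entity("b", "sensor"), Entity("c", "relay")],
    dependencies=[Dependency("a", "c", "reports-to"), Dependency("b", "c", "reports-to")],
)
print(s.summative())
# Constitutive characteristics, by contrast, depend on the particular temporal
# configuration of constituents and cannot be read off from these counts.
```

Note that two aggregates with identical summative counts can differ completely in their constitutive characteristics, which is precisely why summation alone does not capture system quality.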
FIGURE 4.1 CONSTITUENTS – The figure depicts the primitive constituents of a system, i.e., entities and dependencies, each characterized by cardinality and type. As a matter of primitive constituents, system complexity is primarily due to an evolution of entities and their dependencies. As such, system entities and dependencies typically change in cardinality and type. Moreover, the prerequisites for identifying and enumerating these cardinalities and types are very much dependent on our capability to isolate the boundaries of some particular system under investigation.
In essence, the constitutive characteristics render some particular system complex and difficult to understand by means of formal reasoning. Constitutive system characteristics and qualities are the resulting products of continuous processes, whereas summative characteristics are more appropriately described as structural snapshots. In other words, constitutive qualities are accumulated over time, whereas summative qualities only reflect parts of system states at a single point in time. To this end, we have come to understand that this line of reasoning in fact depicts the major challenge of methodological approaches in computing. It seems quite difficult to advocate approaches where summative qualities are established and verified, when our most important concern involves validation of constitutive system qualities. Above all, the notion of complex computational systems involves the exploration and refinement of physical systems, as opposed to programming of abstract machines. Conducting empirical investigations of these complex systems therefore requires us to focus on fundamental issues of relevance in observing physical phenomena. As such, investigations of open systems have been fundamental in our scientific understanding of nature. Moreover, as previously indicated, an important model of openness in natural systems considers energy flow as the primary source of evolving system states. In this respect, we are dealing with complex computational systems, i.e., physically grounded systems of interacting entities, but instead of considering a flow of energy as the primary dimension of system evolution, we consider a flow of information. Dynamics in such artificial systems is the direct result of entities that transfer information
between each other over some physical space. However, as opposed to the primitive entities of natural systems, the event that triggers an interaction between two entities in our case is not due to the laws of nature. Instead, the catalyst of dynamics in complex computational systems is observation of physical phenomena. That is, as opposed to involving physical system isolators and boundaries, agents in our complex computational systems act upon events and facts that are isolated and deduced by means of cognitive inspection and conceptual boundaries. Making observations of physical phenomena necessarily requires the capability of isolation. That is, we must know where to look for what, at the same time as we must have applicable instruments of observation at hand. Perhaps the foremost issue of concern in empirical investigations of embedded and evolving systems is therefore the identification of some particular system aggregate's boundaries in the previously mentioned physical–temporal–conceptual state space. Hence, we argue that the boundaries of systems in nature must be considered from two different perspectives – physical as well as conceptual. That is, the boundaries of some particular system are primarily a cognitive construct that lies in the eye of some beholder. Isolating some systemic phenomenon therefore depends solely on a particular observer's capability to identify the system's conceptual boundaries. Consequently, a physical system can only be isolated for further study if an observer first articulates the system's conceptual boundaries and then, subsequently, applies them in identifying its physical boundaries. To this end, it is within the space of such physical boundaries that a cognitive agent will find the unique and yet temporal constituents, dependencies, and interactions that can then be subjected to refinement – adaptation.

4.4 ADAPTATION
As previously indicated, we consider the methodological instruments of models and methods to help us identify as well as deal with the universal elements and concepts of some particular subject under investigation. That is, models provide the basic ontology applied in some method of discourse. As such, models help us organize diagnostic and prescriptive inquiry as well as indicate lists of common variables that can be used in subsequent activities of analysis, synthesis, and adaptation of systemic phenomena. Therefore, the elements and concepts outlined by a model should primarily be geared in such a way that they help the cognitive agent identify central questions and concerns that need to be addressed, e.g., in pushing some systemic phenomenon back into such a state that it continuously exhibits some preferred quality. Moreover, a model should also provide for the fundamental aspects of some systemic phenomenon's structures, processes, and patterns of behavior [45; 30]. The notion of
systemic structures corresponds to temporal configurations of constituents, dependencies, and interactions. The notion of processes reflects the physical evolution of the structures involved and, finally, we have the somewhat tricky idea of patterns. That is, the notion of patterns in some particular system corresponds to the abstract manifestation of its continuously evolving structures and processes – as perceived by an autonomous and cognitive agent with unknown referential criteria as well as unknown capabilities of inspection. From a pragmatic perspective, even though we are primarily addressing exploration and refinement of computing systems, it can be said that no matter what type of system or specific phenomenon we are currently dealing with, it must be possible to both observe and instrument it using the same model. If this is not the case, the basis for all affirmation and refutation of theories in empirical investigations is lost, i.e., the joint progress of science and engineering is quite difficult to uphold. Consequently, before one starts to address such pivotal concerns as system evolution and adaptation, it is crucial that we come to terms with the rationale behind the methodological instruments involved. That is, if we intend to set up models of mutual benefit for science and engineering, we must first agree upon the invariant sustainability criteria that both disciplines assume in addressing their common subject of study. If we do not agree upon such criteria, we will most certainly end up in a situation where the results from one community are of little or no benefit to the other. As indicated in previous chapters of this thesis, we are primarily concerned with dependable computing systems. However, due to the involved systems' inherent complexity, as well as their immersion in the very fabric of society, we argue that they have to be continuously explored and refined by means of cognitive agents.
As such, these agents can be of a human as well as computational nature. In other words, some of the agents that are in charge of operating these systems are considered an intrinsic part of the very phenomena we are concerned with. The implicit rationale for this state of affairs is, however, in every respect intentional. We consider the most important sustainability criterion of dependable computing systems to be that of survivability. Given the natural application domains of embedded systems, i.e., providing support for critical functions of society, there is no situation more alarming than that where the whole system breaks down and all the qualities we have come to depend on are lost. In a sense, we strive to uphold the very existence of some complex configuration of software. It is therefore important to come to terms with the nature of software and what the notion of survivability means in our particular context. As advocated by Bassett, software can most appropriately be considered as exhibiting two pairs of dual properties [4]. On the one hand, software is a set of well-formed data that satisfies syntactic rules. In the form of data, software can be reproduced in an exact and indefinite manner. On the other hand, software is a function traversing discrete states that
satisfy semantic rules. In the form of functions, software can be repeated in an exact and indefinite manner. Moreover, as argued by Bassett, there are of course many structures and processes in nature that satisfy some of these properties, but there is in fact only one phenomenon that simultaneously encompasses all four – software. In doing so, software also exhibits the property of reflexivity, i.e., software's ability to modify software. It is with such a line of reasoning in mind that we can understand what survivability – the ability to adapt – means in our context of open and embedded systems. As a matter of cognitive agents – human and software – that continuously observe and isolate phenomena in their environment, the software system that proliferates within some particular bounding space must be adapted in such a way that the existence of its data as well as its function is sustained. Of course, one could allow for graceful and acceptable degradations, but the bottom line prevails – survival is attained as a matter of adaptation. It is with the above mindset that one could assess and question the rationale behind contemporary approaches in the field of software engineering. Notable examples include the paradigm of service-oriented programming and associated architectures* as well as grid computing. These approaches make a quite clear separation between basic resources, such as information, and the software that processes the available resources – services. Moreover, the involved services are then envisioned as temporarily combined and integrated, in order to provide for an adaptive system behavior. However, there is an implicit assumption here that renders these approaches somewhat questionable, at least in our particular pursuit of dependable computing systems. The general rationale of service-oriented computing is that flexible reuse of software – at the time of design and construction – will lead to an increase in software quality.
However, even though this assumption has to some extent been proven applicable, it does not address the pivotal concern at hand – flexible adaptation at runtime. Instead, we argue that an applicable approach toward dealing with complexity in computational systems necessarily has to focus on adaptable behavior at runtime [31]. As such, some particular system's continuous struggle for survival would preferably be dealt with as a matter of adaptation, resulting from online assessments of system qualities – validation.

4.5 VALIDATION
From a cognitive perspective, all phenomena can be considered as distinct if perceivable, as such, by some cognitive agent. That is, the unique state of some phenomenon can be considered as distinct if relevant aspects, according to the cognitive preferences of some agent, are susceptible to inspection. However, the previously discussed notions of isolation and conceptual boundaries introduce us to the somewhat difficult activity of
actually articulating the factual behavior of some system. Since cognitive agents isolate systemic phenomena in terms of individual and preferential criteria, no agent can possibly be considered as possessing the true knowledge of some particular phenomenon's factual evolution of structures and processes.20 Instead, as is the case when we are dealing with models, an observation only comprises those parts of some phenomenon that we have beforehand articulated as most important and relevant, due to some individual's intent of actually performing an observation. Hence, an important question to ask ourselves is what systemic properties in general all cognitive agents could be interested in observing and, subsequently, capable of articulating. As we have previously discussed, the general properties of any given system involve its structures and processes. However, it is not the structures or processes as such that are our primary concern in exploratory observations, but rather some phenomenon's physical–temporal–conceptual patterns and the evolution thereof. Moreover, these patterns can only be deduced if we, prior to some observation, have established and articulated the essential characteristics we are looking for. As such, we could therefore consider patterns to be the temporal as well as conceptual manifestation of some particular phenomenon's structures and processes [8]. Hence, if we are interested in system evolution and the validation thereof, we are primarily concerned with exploration of patterns, presupposing that they are abstract derivations of behavior in some natural or artificial system. With this understanding at hand, let us return to the general notion of some system and its structure. As previously indicated, we consider the structure of some particular system to involve the four properties of entity cardinality and type as well as dependency cardinality and type.
Moreover, we have described patterns to be the abstract manifestation of these structures. In effect, patterns in open systems are the conceptual networks – ontologies – that individual agents apply in observation and reasoning about phenomena. However, we should perhaps emphasize that patterns typically depict structures, and not the other way around. To this end, it is the patterns of system behavior that our cognitive agents apply in observation and articulation of some particular phenomenon of their concern. For example, a number of required entities have shut down for unknown reasons and the intended pattern of behavior starts to degenerate. However, certain instances of the required entity types are still available and, consequently, the involved system structure can be adapted in such a manner that the observed pattern once again will reflect an acceptable system behavior. The general issue at hand is, naturally, that of validation. That is, if we succeed in establishing mechanisms for phenomena isolation and adaptation, we must understand that neither is performed in just about any way we please. In fact, the activities of isolation and adaptation should only be performed if we feel confident in our capability to validate the occurrence of an emergent situation as well
FIGURE 4.2 QUALITIES – The figure depicts quality dimensions such as reach, richness, and value. Assessment of some particular agent's context is constrained by certain dimensions (e.g., richness, reach, and value) that are characteristic of some specific domain under study (e.g., economics of information). The outer area illustrates criteria along a number of relevant dimensions. An acceptable quality should always perform within the inner area.
as being confident in our capability to instrument that situation in an appropriate and predictable manner. In every respect, we are dealing with the need for an objective notion of quality, i.e., to what extent did the shutdown of certain entities in our above example affect the overall system's quality of service? Moreover, we also need an objective notion of quality in order to feel confident that our available countermeasures indeed will achieve an acceptable increase in system quality. Even though we consider it the single most difficult issue of complexity in dependable computing systems, we necessarily have to address the notion of systemic quality in an objective and generally applicable manner. As such, we consider the notion of quality to be a constitutive property of some systemic phenomenon, i.e., it is attained in a cumulative manner by some individual agent that continuously observes and experiences the behavior of some particular system. Therefore, we consider the notion of quality, at least in our current context of dealing with complexity, as difficult to depict by means of a single component's structure or intrinsic processes. On the other hand, we have already introduced and discussed the notion of patterns, i.e., conceptual snapshots of system behavior, according to the cognitive preferences of some agent, which actually maps quite naturally onto our need for depicting quality. However, patterns are in essence objectively depicted structures, constituted by concepts and relations,* whereas qualities are more appropriately described as a set of subjectively perceived dimensions of satisfaction, according to some agent's preferential criteria, e.g., reach and richness (see Figure 4.2) [16; 6].
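One way the inner acceptability area of Figure 4.2 could be made concrete is as interval bounds per quality dimension, chosen by some agent according to its own preferential criteria. The dimension names below follow the figure; the numeric bounds and function names are purely illustrative assumptions.

```python
# Acceptable bounds (the "inner area") per quality dimension -- an agent's
# own preferential criteria; the values here are illustrative only.
ACCEPTABLE = {
    "reach": (0.6, 1.0),
    "richness": (0.5, 1.0),
    "value": (0.7, 1.0),
}

def acceptable(quality: dict) -> bool:
    """A quality vector is acceptable only if every dimension the agent
    cares about falls within its inner-area bounds; dimensions missing
    from an observation default to 0.0 and thus fail the check."""
    return all(lo <= quality.get(dim, 0.0) <= hi
               for dim, (lo, hi) in ACCEPTABLE.items())

print(acceptable({"reach": 0.8, "richness": 0.9, "value": 0.75}))  # True: within the inner area
print(acceptable({"reach": 0.8, "richness": 0.4, "value": 0.75}))  # False: richness falls outside
```

The subjectivity discussed above lives entirely in the choice of dimensions and bounds: two agents observing the same system may legitimately hold different `ACCEPTABLE` tables.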
The one thing that really matters, when it comes to quality of service, is that some individual agent, due to its subjective preferences and concerns, typically selects the involved dimensions. It is in terms of these dimensions that an agent needs to quantify and reason about its unique concerns and then transform these concerns into actions. In essence, when we are dealing with complex systems, quality is possible to achieve, but probably not in a universal and objective sense. Instead, we should strive to provide mechanisms that enable all agents to measure whatever general aspects of the involved phenomena they require. In summary, the qualitative validation of system behavior assumes that as many aspects as possible of systemic phenomena are susceptible to isolation, validation, and adaptation.

4.6 CONCLUDING REMARKS
Perhaps the foremost issue of complexity addressed throughout this chapter of the thesis is the notion of systems and their everlasting goal of survival. In clear contrast to the otherwise traditional focus of software engineering, establishing quality and dependability in complex computational systems is in many ways a matter of fostering their very survival, and only secondarily a matter of emphasizing the supportive function they are supposed to provide. Obviously, it is better to provide an acceptable degree of computational service than none at all. As such, with our intent of exploring and refining system qualities, such as dependability, it is more important that we are able to keep some particular system alive, i.e., focusing on system adaptation and validation, than to emphasize adaptation of its essential functionality. With this basic mindset at hand, we have characterized the primary origin of complexity in our particular application domain to be the notion of openness. That is, the involved systems are continuously subjected to change in that their constituents, dependencies, and interactions evolve over time. This property of evolution can be the result of an intended programmatic behavior, but also of intervention by cognitive agents or unanticipated events that occur in some physical environment involved. Consequently, one way of dealing with issues of complexity is to endow cognitive agents with the capability of adaptation, i.e., to instrument phenomena at runtime, according to some predefined and qualitative goal. However, doing so introduces two concerns in particular. On the one hand, since our cognitive agents can be of a human as well as computational nature, observation and isolation of systemic phenomena must be articulated by means of constructs that are generally applicable to all agents in a given group.
On the other hand, since we are dealing with open systems, agents can potentially strive to uphold qualitative system dimensions that stand in conflict with each other. However, we consider these two concerns of cognitive agents, i.e., the
need for general as well as diverse modeling constructs and coordination, simply as an indication of requirements involved in development of our methodological instruments. In doing so, we advocate a general systems perspective on structures, processes, and patterns of some particular system of our concern. As such, this perspective focuses on certain system primitives, i.e., entities and dependencies. To this end, however, the foremost concern regarding this general systems perspective is our need to establish a somewhat more refined model that effectively addresses the notion of cognitive isolation and subsequent adaptation. Such a model must necessarily form the basis for scientific exploration as well as engineering refinements regarding embedded computational systems. In the next chapter of this thesis, we therefore delve further into an attempt to identify such a model – open computational systems.
5 OPEN COMPUTATIONAL SYSTEMS
When evaluating a model, at least two broad standards are relevant. One is whether the model is consistent with the data. The other is whether the model is consistent with the ‘real world’. – K. A. Bollen
5.1 INTRODUCTION
Having framed the general issues of complexity in dependable computing systems, i.e., isolation, adaptation, and validation, we have now come to a point where a model of primitives in complex computing machinery can be explicated and discussed. As previously indicated, we consider models to be one of the basic instruments of any methodological framework that, in essence, addresses the concerns of knowledge acquisition as well as quality assurance. Moreover, in our particular case, these concerns are in fact typical examples of issues that practitioners in computer science and software engineering have to deal with in their everyday conduct. Furthermore, even though we consider the methodological instrument of principles to guide us in identifying relevant questions and problems as well as indicating appropriate ways to pursue them, one should interpret the essential intent of models as guiding us in the articulation and understanding of phenomena. Moreover, whereas we argue that any method for dealing with complex computing phenomena must emphasize the manner in which we perform activities of exploration and refinement, we still need the methodological instrument of models. It helps us convey our principal concerns as well as the constituents, dependencies, and interactions of some particular phenomenon that we are investigating. It is from this perspective that we require applicable models, since they are supposed to effectively depict the very essentials in reasoning about phenomena as well as about the associated theories involved. Of course, one needs to characterize the notion of applicable models in this context. The stance taken in this particular thesis is that we
consider a model applicable if it appropriately conveys the essential – real world – characteristics of some particular phenomenon under investigation. To this end, we introduce the notion of cognitive frustums* in order to frame the conceptual constituents of some open system at hand (see Section 5.3 – Environment). Nevertheless, we are primarily concerned with the dependable behavior of complex computing phenomena, i.e., embedded systems providing support for critical functions of society. In this context, we consider an applicable model to convey the general constituents, dependencies, and interactions of these systems. As discussed in the previous chapter, the model at hand should therefore also be geared in such a way that it emphasizes the general activities involved in dealing with these systems, i.e., isolation, adaptation, and validation [20]. To this end, it is important that we emphasize the requirement of making all relevant aspects of complex phenomena susceptible to cognitive inspection and subsequent refinement. Consequently, we introduce the notion of physical parcels* in order to frame the factual constituents of some particular phenomenon at hand (see Section 5.3 – Environment). However, even though the general intent of some particular model is to properly depict the fundamental cognitive and physical constructs present in all phenomena of our concern, one should perhaps take some precautions in interpretation here. That is, a model primarily aims at conveying the constructs and aspects of all systems of a particular type. In this respect, one should not (re)apply models, geared toward one class of systems, in contexts where the type of phenomena investigated does not involve the same behavioral principles that one is concerned with.
Examples of such questionable application of models are typically found in paradigms where models of sociological phenomena are applied in investigations of the collective behavior of technological systems. In effect, we consider the model introduced in this particular chapter to provide support for certain requirements in addressing complex phenomena of dependable computing systems. More specifically, the model introduced herein explicitly addresses the fundamental concerns of grounding and cognition in those embedded systems where computation as well as communication are considered the basic elements of control in open computational systems. In doing so, our model has been geared in such a way that the practitioner, e.g., a system designer or analyst, should be able to develop and investigate factual system behavior as a matter of prior experiences. This is in fact our main reason for introducing the concepts of frustums and parcels. As such, frustums enable us to model the behavior of some particular system at different levels of abstraction, and physical parcels can subsequently be identified along the unique dimensions of the frustums involved. In effect, one could consider frustums to depict our expectations or experiences, whereas parcels correspond to the factual behavior of an in situ system.
OPEN COMPUTATIONAL SYSTEMS
Let us therefore return to our previous discussion on factual versus experienced system behavior. In particular, we characterized these aspects of behavior as conforming to different levels of abstraction, i.e., structures, processes, and patterns. In this context, since we are concerned with grounding of behavioral semantics and our experience thereof, we introduce a layered model of open computational systems, where grounding and cognition are explicitly accounted for. As such, the different levels of abstraction that our model involves are domain, system, fabric, and environment. In the following, to support and illustrate our reasoning, we introduce some formal notations and their intended readings. We aim at investigating some system s_i that belongs to a set S (s_i ∈ S). The constituent elements of some system are distinct entities e_j that belong to the set E (e_j ∈ E). The representation of a system of entities as well as the representation of a system of systems can now be expressed as follows.

(1) S = Σ_j e_j, where e_j ∈ E and the summation is subject to restrictions addressed below.

(2) S = ⊕_i s_i, where s_i ∈ S, again with restrictions addressed below.
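To make the two composition rules concrete, the following sketch models them in code. This is purely illustrative and not part of the thesis formalism; all names (`compose_entities`, `compose_systems`, and the `admissible` predicate standing in for the composition restrictions) are assumptions.

```python
# Hypothetical sketch of the two composition rules: a system S is a
# restricted sum (Σ) of entities e_j in E, and a system of systems is
# a restricted combination (⊕) of systems s_i in S.

def compose_entities(entities, admissible):
    """Form a system from entities e_j, subject to a restriction
    predicate (the 'summation restrictions' of the text)."""
    return frozenset(e for e in entities if admissible(e))

def compose_systems(systems, admissible):
    """Form a system of systems (the ⊕ operation), again subject
    to composition restrictions."""
    return frozenset().union(*(s for s in systems if admissible(s)))

# Example: only entities with an established fabric connection may
# enter the composition.
entities = {"sensor", "actuator", "offline-node"}
system = compose_entities(entities, lambda e: e != "offline-node")
print(sorted(system))  # ['actuator', 'sensor']
```

The restriction predicate is deliberately left abstract here, mirroring the text's remark that the composition restrictions are addressed later.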
Moreover, the state σ of entities, systems, or systems of systems conforms to the following quadruple.

(3) σ = (location, reach, richness, mission)
It should be noted that we consider the system state σ to be time-dependent and, consequently, the behavior of a system, e.g., a system of entities or a system of systems, corresponds to a trajectory in an associated state–time space. The coordinates of states have a certain range in the four abstraction levels of a model M. More precisely, we introduce the notion of state projections onto the different levels of M. In doing so, one could consider these state projections a matter of some observer's particular viewpoint and range, with respect to the different frustums of open computational systems.

• σ_domain = mission, where mission is an isolated part of the domain frustum.

• σ_system = richness, where richness is an isolated part of the system frustum.

• σ_fabric = reach, where reach is an isolated part of the fabric frustum.

• σ_environment = location, where location is an isolated part of the environment frustum.
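The state quadruple and its projections onto the model levels can be sketched as follows. This is a minimal illustration under assumed names (`State`, `project`); the coordinate values are invented examples.

```python
# Hypothetical sketch of the state quadruple (location, reach,
# richness, mission) and its projections onto the four levels.
from collections import namedtuple

State = namedtuple("State", ["location", "reach", "richness", "mission"])

# Each model level isolates one coordinate of the state.
PROJECTIONS = {
    "domain": lambda s: s.mission,
    "system": lambda s: s.richness,
    "fabric": lambda s: s.reach,
    "environment": lambda s: s.location,
}

def project(state, level):
    """Project a state onto one level (frustum) of the model M."""
    return PROJECTIONS[level](state)

sigma = State(location=(56.2, 15.3), reach=100.0,
              richness="radar-track", mission="surveillance")
print(project(sigma, "domain"))  # surveillance
```

A trajectory in state–time space would then simply be a time-indexed sequence of such `State` values.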
The intended reading of the state σ of an entity e_j is as follows. The purpose and intent of its behavior is described at the domain level. By means of certain conceptual structures at the domain level, the description of the entity's behavior is coupled with its behavioral structures at the system level. Moreover, these behavioral structures are, subsequently, coupled with yet another set of constructs at the fabric level. As such, the constructs present at the fabric level interconnect the information processing units at the system level with the sensors and actuators* manifested at the environment level. Now, in order to validate an entity's factual behavior, i.e., according to its mission specification in a parcel at the domain level, one can visualize its behavior as it manifests itself in a parcel at the environment level. Given this depiction of entity states, we can model the state of a system by means of proper combinations of frustum constructs. In this context, the concepts of reach and richness should be interpreted as the physical constraints imposed on the entity along the dimensions of information communication and processing (see Section 4.5). Indeed, one should note that composing a system from a set of entities, or from a set of systems, under the restrictions involved can become a very intricate and complex activity. However, given our shorthand formalism, we can now state the properties and challenges of dependability in a more precise manner. We note that the concept of a system has two interpretations in particular. One is introduced in terms of the above definitions and another is introduced by the way we denote the respective abstraction levels of our model, i.e., environment, fabric, system, and domain.
The main reason for this overlap in definition is that the notion of a system in the field of computing traditionally corresponds to some set of software components and their associated networking elements (similar to the system and fabric levels of our model). The same overlap in definition also holds for the concept of an entity. The intended reading conforms either to the above notation or, in specific cases where the context is given, to the projection of an entity onto a particular level of our model. Nevertheless, the notion of dependable computing systems can now be more precisely stated given our descriptive formalism. In fact, we are dealing with two aspects of openness. One of them can be derived from the natural sciences and the other from the disciplines of computer science and software engineering. We consider the notion of open computational systems to comprise a combination of both, as an effect of our state descriptions. Consequently, we are dealing with the following aspects of openness:

• Flow – A system is open if and only if we have a flow of substance absorbed and/or emitted from a particular system parcel. In the case of open computational systems, we are dealing with a flow of information, i.e., signals that carry messages and mechanisms that enable connectivity between entities. A system is thus open with respect to information flow if and only if it allows for signal processing. It should be noted that the notion of signal processing might involve constructs on all levels of our model, but it must at least involve constructs at the fabric (conjunction) and system (interaction) levels. Obviously, the other two levels of our model, i.e., environment and domain, introduce additional constraints on an acceptable and/or possible information flow, i.e., locations and missions. These constraints on information flow are, in fact, the key factors that frame our notion of dependable computing systems.

• Structural – This perspective of openness involves the deletion or addition of entities and, consequently, changes to the structures and processes of open computational systems. Deletion or addition of entities is typically restricted by the composition laws above. In practice, these laws of composition require appropriate mechanisms of mediation at the fabric level as well as proper capabilities of computation at the system level. Our model thus refines the traditional notion of a plug-and-play type of openness. That is, since our model introduces the qualitative constraints of mission and location, the model of open computational systems emphasizes openness with respect to changes of structures and processes – system organization and autonomic reconfiguration thereof (see Chapter 8 – Network enabled capabilities).
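The flow aspect of openness can be illustrated as reachability over fabric-level links: a parcel is open with respect to information flow if a signal can cross its boundary. The following is a sketch under assumed names (`can_signal`, `is_flow_open`), not the thesis's own formalism.

```python
# Hypothetical sketch: flow openness as signal connectivity. A system
# parcel is open w.r.t. information flow if some entity inside it can
# exchange a signal with an entity outside it, via fabric-level links.

def can_signal(links, src, dst):
    """True if a signal can be transported from src to dst over the
    directed fabric links, possibly via intermediate entities."""
    seen, frontier = set(), [src]
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        frontier.extend(b for a, b in links if a == node)
    return False

def is_flow_open(links, parcel, universe):
    """Openness check: any signal path crossing the parcel boundary."""
    inside, outside = set(parcel), set(universe) - set(parcel)
    return any(can_signal(links, i, o) or can_signal(links, o, i)
               for i in inside for o in outside)

links = [("a", "b"), ("b", "c")]
print(is_flow_open(links, parcel={"a", "b"}, universe={"a", "b", "c"}))  # True
```

The same reachability notion also anticipates the definition of dependencies between systems given below: a sequence of signals possibly traversing other systems.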
At this point, we are now able to define the concept of interaction as well as the concept of dependency between entities and systems. An interaction between two entities is possible if and only if a signal can be transported between the entities. A similar definition is applicable for systems. A dependency between systems is possible if and only if a sequence of signals can be transported between the systems (possibly traversing other systems in the process). Moreover, with these definitions in mind, one should also note that critical dependencies occur when a particular signal, or sequence thereof, causes the state of an entity, or system, to move out of its intended and acceptable state space. With the above introduction and intended reading of certain fundamental concepts in mind, we have now reached the point where our model of open computational systems can be introduced. Consequently, in this chapter's first section – Model for isolation – we return to our previous discussion on factual versus experienced system behavior. In particular, we characterize these aspects of behavior as conforming to different levels of abstraction, i.e., structures, processes, and patterns. In this context, since we are concerned with grounding of behavioral semantics and our experience thereof, we introduce a layered model of open computational systems where grounding and cognition are explicitly accounted for. In the following section – Environment – the first conceptual constructs of our model are presented and discussed. As such, these concepts aim at physical isolation of some embedded system, by means of identifying certain unique viewpoints. Moreover, these viewpoints, or frustums, can subsequently be used in filtering out those physical constructs of phenomena that we are interested in exploring. In the next section – Domain – we therefore introduce those applicable concepts and dependencies that most effectively characterize the semantic features of some isolated system. In essence, the concepts and relations introduced at this particular level of abstraction correspond to the notion of online ontologies. In the following section – System – we elicit those concepts and dependencies that we consider to convey the origin of factual and grounded behavior. As such, these constructs should be considered as the physical counterparts of those present at the domain level of our open computational systems. The next section – Fabric – concludes this chapter's discussion regarding our model of open computational systems and its primitive levels of abstraction. As such, we introduce the general concepts and dependencies of communication infrastructures. In essence, the systemic phenomena we are concerned with should be considered as an intrinsic part of these infrastructures and, therefore, our model necessarily has to convey their fundamental aspects as well.

5.2 MODEL FOR ISOLATION
A key aspect of our embedded systems is their essential character of being open to change. Even though we consider an important property of these systems to be their embedding in physical environments, it is in fact their property of being open to change that is the major problem at hand. As we discussed in the previous chapter, change in some particular system takes place on different levels of abstraction. On the one hand, we have change and dynamics present at the structural level. In that particular case, it is mainly the dependencies among entities that are subjected to change. Moreover, at the structural level, there are also situations in which entities are introduced into or withdrawn from some system, resulting in dynamics. On the other hand, we have dynamics present at the procedural level of abstraction. In those cases, dynamics are mainly a matter of state changes, due to a flow of substance between entities, e.g., energy and information. Finally, there is a third level of abstraction that can be used in depicting system dynamics – patterns. As such, we consider patterns to be a temporal manifestation of some particular system's structures and processes, i.e., conceptual structures that are deduced by some cognitive agent as a matter of observation. In this respect, there are three levels of abstraction that could be considered in the observation of open computational systems, i.e., structures, processes, and patterns. However, at this point, one should be aware of an important difference between these abstractions.
Two of them – structures and processes – are of such a public nature that anyone can observe them, by means of appropriate instruments, whereas one of these abstraction levels – patterns – only exists as a private construct in the mind of some observer, e.g., a designer or user. That is, structures and processes reflect the factual behavior of systemic phenomena, whereas patterns reflect the cumulative experience of cognitive agents. As we have discussed in previous chapters, this is where one of the challenges in dependable computing systems dwells. Indeed, even though the continuous and, at times, unanticipated evolution of systemic structures and processes should be considered complex, the major concern at hand stems from those situations where multiple agents are supposed to explore and refine complex phenomena by means of models that possibly are inconsistent with the factual behavior at hand. In essence, it is due to this line of reasoning that we advocate our model of open computational systems. We argue that an applicable model of these systems primarily has to realize the notion of public harmony, i.e., provide for grounded behavioral semantics. To this end, we must bridge the gap between the offline formalism and the online naturalism of embedded systems in such a way that observation and instrumentation of the involved phenomena can be grounded in a commonality that is shared by all agents involved – a model of primitive concepts and dependencies. As such, these concepts and dependencies must be relevant and applicable for all cognitive agents involved. With such an aim, we introduce a layered perspective of open computational systems, in order to account for an appropriate set of abstractions in modeling and maintaining the primary cause and effect of these systems' qualitative behavior in general, i.e., the continuous evolution of structures, patterns, and processes [19–21; 24]:

• Environment – As we have previously argued, the foremost feature of open computational systems is their embedding in an open and physical environment. No matter what systemic constituents, dependencies, interactions, or events some particular agent would be interested in exploring or refining, everything should be grounded in real world semantics.

• Fabric – At the next level of abstraction, we have the fabric of interaction. As such, this aspect of some systemic phenomena involves a network of communication nodes* that mediate a flow of substance, i.e., information. Consequently, it is at this level that the basic principles of computation and communication are relevant to pursue.

• System – As an intrinsic part of the fabric, certain computational entities proliferate at the system level. Some of these entities are mere software components, whereas others manifest themselves as cognitive agents, i.e., capable of exploring and refining systemic phenomena. It is at this abstraction level of our model that we consider the mechanical principles of coordination and automation.

• Domain – The final aspect of our model involves the domain of observation. As such, it should be considered as a network of conceptual structures – readily available for exploration and refinement by entities at the system level – depicting some qualitative phenomena of concern.
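The four abstraction levels above can be rendered as a simple ordered enumeration, where any construct at a higher level is grounded through the levels beneath it. This is a hypothetical sketch; the `Level` enum and `grounding_path` helper are not from the thesis.

```python
# Hypothetical sketch of the four abstraction levels, ordered from
# the most concrete (environment) to the most abstract (domain).
from enum import IntEnum

class Level(IntEnum):
    ENVIRONMENT = 0  # physical grounding: space, medium, entities
    FABRIC = 1       # network of communication nodes mediating information
    SYSTEM = 2       # computational entities and cognitive agents
    DOMAIN = 3       # conceptual structures under observation

def grounding_path(level):
    """Levels through which a construct at `level` is grounded,
    down to the physical environment."""
    return [l.name.lower() for l in sorted(Level) if l <= level][::-1]

print(grounding_path(Level.DOMAIN))
# ['domain', 'system', 'fabric', 'environment']
```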
Now, before we continue with our discussion regarding the primitive concepts and dependencies at each level of abstraction in our model, one should stress certain features of models in general. Firstly, one should be aware of the fact that a model is merely a conceptual apparatus that we apply in depicting phenomena. As such, one should never interpret the intent of some particular model to be a complete depiction of phenomena. Instead, models should necessarily be considered as approximations, according to certain context-dependent dimensions. In our particular case, one could consider our model's different abstraction levels to conform to such a notion of dimensions. Secondly, along each dimension of some model, one typically explicates certain key concepts and dependencies. As such, these conceptual constructs are, in a similar manner as with phenomena, merely approximations of the most influential variables involved. Finally, taking these considerations into account, it stands clear that the essential feature of models is, in a sense, to function as a pair of sunglasses – they help us polarize and filter the perceptual phenomena of our concern.

5.3 ENVIRONMENT
The first layer of our model is that of a physical environment of evolution. From the perspective of open systems, the notion of a system's physical environment is perhaps the most fundamental aspect, as it explicitly aims at identifying the physical constituents involved in phenomena evolution. Consequently, if exploration and refinement of system behavior is our goal, identifying the physical environment and its ramifications will increase our understanding of potential factors that might shape the behavior of some particular system of our concern. In our context, we consider a physical environment to involve two features in particular – space and medium. It is within such a spatial volume that physical entities can interact with each other, i.e., by means of some particular medium. Moreover, one should note that, in dealing with physical environments, the foremost property of any entity is that of being distinct, due to spatial locale and extent. That is, any entity – computational or not – necessarily resides at some particular location in a physical environment. However, one should be careful not to misinterpret the property of spatial extent. In essence, this property deals with physical as opposed to logical extent. In the case of computational entities, one might interpret the notion of extent in terms of communication range (reach), but this latter property is more appropriately dealt with at the fabric level of our model. Embedded in a physical environment, entities affect each other by means of temporal interactions. Moreover, when an entity is subject to such stimuli, a sequence of changes can occur in its current state, possibly giving rise to a situation where the subject itself imposes some new stimuli on another entity in the environment. Such a continuous flow of stimuli between a set of entities is characterized as the behavior of a system, and defined as a set of dependencies between some set of distinct entities. That is, temporal interactions can take place between distinct entities, through the physical media present in some environment. As such, we denote temporal interactions in terms of dependencies. In essence, the notion of dependencies addresses behavioral relations between two distinct entities, i.e., when state changes in one entity are reflected in the state changes of another via a signal. Obviously, there is a strong connection between the notion of dependencies and that of factual interactions between two physical entities. However, as opposed to interactions, where the focus lies on local and physical stimuli exchange, the concept of dependencies between entities reflects the fundamental idea of conceptual relations. Moreover, even though two entities do not necessarily have an established and direct physical connection to each other, they can share an indirect chain of interactions, i.e., signals, via some set of other entities.
Now, we have identified the general concepts involved in systemic phenomena: space, medium, entity, interaction, and dependency. These are all conceptual constructs of relevance in articulating physical phenomena. As such, let us therefore return to the discussion regarding our model's first level of abstraction – the environment. As previously indicated, the main intent of our model is to guide the practitioner in addressing systemic phenomena of a computational nature – open computational systems. To this end, our first and common concern in dealing with such phenomena is to isolate the constituents, dependencies, and interactions involved. However, in doing so, one has to remember that the phenomena at hand are of an open nature. Given our state description of entities, i.e., σ = (location, reach, richness, mission), phenomena isolation is typically considered a matter of physical boundaries, or restrictions, along the dimensions of entity location and reach. In this context, it is somewhat more difficult to grasp how the dimensions of richness and mission correspond to a physical boundary. However, these dimensions are more appropriately considered as the cognitive boundaries of some system.
Phenomena constituents, dependencies, and interactions of open computational systems are dynamic in nature. Moreover, with multiple observers involved in simultaneous exploration and refinement activities, the invariant and physical boundary of an open computational system is only possible to identify and maintain if we take the complete state of the system into consideration. At this point, we should therefore emphasize the notion of cognitive boundaries in the isolation of physical phenomena. The main idea is that cognitive agents are able to articulate their concerns regarding the behavior of a particular set of entities. This being the case, they are in fact able to describe the cognitive boundary of a system. Such a description can then be translated into a mission specification. With this intent in mind, we introduce two additional concepts of our model: frustums and parcels (see Figure 5.1).

• Frustum – The concept of a frustum is of geometrical origin, i.e., the portion of a solid that lies between two parallel planes cutting the solid. In our particular context, we emphasize this definition as well as its standard usage in visualization and perception, i.e., a view frustum. As such, we make use of the concept in order to depict a solid volume – a cognitive bounding space defined by some distinct agent – that is cut by the levels of our model, i.e., domain, system, fabric, and environment.

• Parcel – To this end, we argue that phenomena depicted as systems of systems require a more appropriate term. As such, we introduce the concept of parcels to depict what we consider physical parts of a greater whole. That is, a parcel should be considered as the physical bounding space, within an environment, that is dynamically identified by means of a distinct agent's definition of some particular frustum.
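The relationship between frustums and parcels might be sketched as a filter per abstraction level: the frustum is the agent-defined cognitive filter, and the parcel is the set of in-situ entities it currently selects. All names below are illustrative assumptions, not constructs from the thesis.

```python
# Hypothetical sketch of frustums and parcels: an agent's cognitive
# bounding space is cut by the four model levels into frustums; each
# frustum dynamically selects a physical parcel of matching entities.
LEVELS = ["domain", "system", "fabric", "environment"]

def frustums(agent_filter):
    """One frustum per level: the agent's cognitive filter,
    restricted to that level of abstraction."""
    return {lvl: (lambda e, lvl=lvl: e["level"] == lvl and agent_filter(e))
            for lvl in LEVELS}

def parcel(entities, frustum):
    """The parcel is the set of in-situ entities currently falling
    inside one frustum; it changes as entities appear and disappear."""
    return [e["id"] for e in entities if frustum(e)]

entities = [
    {"id": "corvette-1", "level": "environment", "mission": "surveillance"},
    {"id": "radio-link", "level": "fabric", "mission": "surveillance"},
    {"id": "weather-buoy", "level": "environment", "mission": "metrology"},
]
f = frustums(lambda e: e["mission"] == "surveillance")
print(parcel(entities, f["environment"]))  # ['corvette-1']
```

Because the parcel is recomputed from whatever entities are currently present, this sketch also reflects the point made below: parcels are identified dynamically, as online phenomena.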
The introduction of these additional concepts of open computational systems, i.e., frustums and parcels, enables us to effectively address one of the most difficult issues at hand in dependable computing systems. Complexity in these embedded systems that society has come to depend on is in many ways due to their open and dynamic nature. However, as such, complexity has to be dealt with as a matter of online phenomena. That is, the physical manifestation and quality of these systems' behavior is simply not susceptible to offline verification. By means of introducing the concept of frustums, we can effectively model the open computational systems involved as a matter of online phenomena. Each abstraction level of our model cuts the cognitive bounding space of some particular agent into a number of frustums that, in effect, contain the manifestations of certain general concepts and relations of particular relevance. Moreover, as we will discuss in the following sections of this chapter, the concepts present at each abstraction level have explicit relations that cross the different frustums involved. This way, our model should be considered as a filtering tool that cognitive agents can apply in observation and subsequent articulation of those online phenomena we call open computational systems. Finally, one should note that, as a matter of multiple agents being involved in observing the same phenomenon, different frustums could partially intersect with each other. In effect, different agents can observe the same parcel from different points of view – cognitive domains.

5.4 DOMAIN
In summary, we consider the philosophy of naturalism to capture the essential characteristics of contemporary computing systems. However, in doing so, we also have to face the consequences of dealing with natural phenomena. That is, dealing with natural phenomena involves the problem of public harmony in grounded semantics, i.e., the individual's capability to communicate privately experienced phenomena in terms of publicly accepted cognitive concepts. To this end, the process of standardizing these cognitive concepts must, on the one hand, necessarily be conducted as a public activity toward the common goal of understanding each other and, on the other hand, the process must emphasize the fundamental capability of every cognitive agent to perceive their physical environment. Since the perception of phenomena is a private experience, i.e., each distinct phenomenon can be experienced by a temporally ordered set of observers, by what means can we attain public harmony among individual observers? As previously described, the development of such harmony in perception requires two observers to witness one particular scene together and, subsequently, to witness yet another similar scene together. After the second observation, if both observers can agree upon the occurrence of a similar phenomenon recurring in both situations, then they can also agree upon its identity – a common concept that depicts the particular phenomenon in question. That is, since the two observers now share a particular concept and its grounded semantics, the observers can be considered successful in establishing a common standard of perceptual similarity and identity – public harmony. However, this state of affairs in developing public harmony in grounded semantics assumes that at least two observers share some common area of concern.
That is, to take part in the development of public harmony toward grounded semantics – standardizing an ontology – assumes that the cognitive agents involved share an interest in some particular domain of mutual concern. It is within such cognitive domains that certain concepts and dimensions are of particular relevance to identify, define, and agree upon.
FIGURE 5.1 ISOLATION
When we aim at investigating complex computing phenomena, we first identify a cognitive bounding space of some agent (A_i) that subsequently can be divided into four frustums (F_i). Each frustum involves certain concepts and dependencies that help us in identifying the actual constituents as temporarily encompassed by some physical parcel (P_i) at various levels of abstraction in open computational system behavior (B_i).
For example, consider the domain of network-enabled capabilities, e.g., network-based defense. As such, the domain is of mutual concern to a quite extensive set of cognitive agents, and the identification, definition, and agreement regarding certain fundamental concepts and dimensions are therefore of mutual benefit. At this point, one would, however, be well advised to refine the scope of the domain in question. Is it supposedly geared toward online exploration, refinement, or operations? If one were to emphasize operations at this point, several concepts would reveal themselves as unique as well as relevant in depicting our common domain of concern – distributed operations in network-centric defense. Primitive concepts would typically correspond to phenomena such as organizations, capabilities, artifacts, and attributes. Moreover, these concepts are also quite easy to exemplify in terms of certain real world instances, e.g., defense, propulsion, corvette, and speed. Of course, with different domains in mind, the concepts of one domain can be considered as a relevant dimension of some other. Finally, we should consider the notion of two conceptual instances as being dependent on each other, e.g., a particular propulsion instance and that of a corvette. It is by means of this construct that we can depict constitutive characteristics, e.g., that a particular corvette is a mobile entity. In summary, the most abstract level of our model is that of the domain for observation. As such, the general concepts and relations constituting this frustum – from the perspective of a cognitive agent – are those of domains, concepts, and manifestations as well as dimensions and dependencies (see Figure 5.2). If the reader is interested in the rationale behind these general constructs, certain writings of Gärdenfors might be an appropriate source of inspiration [32].

FIGURE 5.2 DOMAIN FRUSTUM
The general constituents and dependencies in a frustum at the domain level include those of cognitive domains, concepts, and ports. Moreover, we introduce the notion of dimensions between some particular concept and a second-order domain as well as cognitive dependencies between two conceptual instances.
• Domain – The cognitive construct of a domain addresses the need for scoping in conceptual structures, i.e., a starting point for elicitation of the relevant concepts involved. As such, it should however be noted that a particular domain can only be explicated if there exists at least one cognitive agent who considers it relevant. Moreover, each domain can incorporate internal as well as external concepts, where the internal ones are those of first-order relevance, according to some particular agent. Second-order concepts are those external ones that are connected to the domain in question by means of dimensions.

• Concept – Each domain can contain a number of first-order concepts that effectively depict the most relevant features of the domain in question, as a matter of public harmony in grounded semantics, i.e., according to some temporally ordered set of cognitive agents. Moreover, each first-order concept of some particular domain can be considered as a second-order concept of some other domain, by means of the cognitive relation of dimensions. As is the case with concepts in general, these dimensions are also the product of the development of public harmony.

• Manifestation – In a similar manner as is the case with concepts' first-order relation to some particular domain, each concept can comprise a number of so-called manifestations. As such, they depict instances of concepts, e.g., the concept of capability can have a manifestation named propulsion. Moreover, as previously indicated, two such instances can have an indirect relation with each other – a dependency – that, in effect, enables us to model constitutive characteristics of physical phenomena.

From a practical perspective, the domain frustum and its intrinsic constructs are perhaps one of the most important parts of our methodological instrument of models – open computational systems. In effect, the constructs involved serve us in the most fundamental of activities in the exploration and refinement of open computational systems, i.e., articulation and construction. As such, the foremost benefit of the domain frustum is that it provides us with those cognitive constructs that effectively depict summative as well as constitutive characteristics of some particular phenomenon. Moreover, as we will discuss in the next chapter of this thesis – Online engineering – these constructs also introduce the possibility of implementing the required online mechanism of cognitive exploration, i.e., one can make use of distinct constructs in the domain frustum to effectively explore it in search of other constructs that subsequently will lead us to the isolation of related phenomena. However, it should be noted that the main intent of the domain frustum is essentially twofold: to provide support for cognitive exploration and subsequent refinement as well as to encompass physical phenomena with grounded semantics. In summary, we have introduced the general concepts and relations present at the domain level of our model. To this end, the constructs present in one particular frustum are considered as connected with the constructs in its neighboring frustums. Consequently, let us continue with the discussion on the constituents at the system level – depicting the factual behavior of our complex and open computational systems.

5.5 SYSTEM
The second frustum of our model involves the actual system that a cognitive agent is concerned with in activities such as exploration and refinement. Consequently, it is in this frustum of open computational systems that we consider some factual behavior to take place. Therefore, in light of the previous discussion on general systems theory, let us define the notion of a system in our context more precisely. We can consider a system to be a temporally ordered set of physical entities that interact with each other. As such, since we are dealing with open systems, the occurrence of entities as well as interactions can render itself as being of an unanticipated nature. In effect, even though we have constructed or identified portions of some particular phenomenon and its constituents, new entities and interactions can appear at the same time as already existing entities can
disappear. It is the cause and effect of such dynamics, as well as their unanticipated nature, that renders the behavior of the involved systems a matter of intention. Therefore, when we are dealing with open systems, one should acknowledge that there is a difference between intended and factual behavior. Consequently, as the means to an end, we do not consider open computational systems to be the physical manifestation of an abstract model, but rather the other way around. They are grounded in the physical world and, therefore, their behavior can only be approximately deduced as a matter of more or less effective models. To this end, one should therefore question what concepts and dimensions most effectively capture relevant aspects of open computational systems in general. As we have previously indicated, the most important requirement to consider in doing so should necessarily be that of general applicability. Of course, since we are dealing with computing as such, it would perhaps be easiest to involve concepts such as operating system processes or programmatic concepts such as components or services. However, such an approach would merely address concerns of the software engineering community. Instead, since we persist in arguing that the fundamental concern at hand is addressing the dichotomy of computer science and software engineering, we advocate a somewhat more general approach. That is, we consider any system to be a population of a partially unknown and temporally ordered set of physical entities. We denote such physical entities as primitives. Moreover, due to temporal interactions, an entity can also be considered as a combination of two or more primitive entities that interact with each other. We denote such temporal entities as composites. However, these composites are in fact the same construct as already present in the domain frustum, i.e., two conceptual manifestations connected to each other by means of a cognitive dependency.
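The notion of a system as a temporally ordered and partially unknown population of interacting entities can be sketched as follows. This is a minimal illustration under assumed names (OpenSystem, appear, disappear, and the example identities), not a construct taken from the thesis:

```python
# A minimal sketch of a system as a temporally ordered set of interacting
# entities. All names here are hypothetical illustrations.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Entity:
    identity: str  # renders the entity distinct as well as unique


@dataclass
class OpenSystem:
    """A partially unknown, temporally ordered population of entities."""
    entities: List[Entity] = field(default_factory=list)
    interactions: List[Tuple[str, str, int]] = field(default_factory=list)
    clock: int = 0

    def appear(self, entity: Entity) -> None:
        # New entities can appear at any time -- openness.
        self.entities.append(entity)

    def disappear(self, identity: str) -> None:
        # Already existing entities can disappear just as unexpectedly.
        self.entities = [e for e in self.entities if e.identity != identity]

    def interact(self, a: str, b: str) -> None:
        # Interactions are ordered by the time at which they occur.
        self.clock += 1
        self.interactions.append((a, b, self.clock))


system = OpenSystem()
system.appear(Entity("sensor-1"))
system.appear(Entity("actuator-1"))
system.interact("sensor-1", "actuator-1")
system.disappear("sensor-1")  # factual behavior diverges from intended
```

The point of the sketch is only that the population and its interaction history are both temporal: no snapshot of `entities` is guaranteed to remain a correct description of the system.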
Nevertheless, if an entity is supposed to interact with another entity, there are two additional features present in our model's system frustum, i.e., identity and interface. As previously discussed, when it comes to cognitive inspection in some particular domain, the foremost feature of a physical entity is that we necessarily must be able to perceive it as being distinct. As such, if one intends to interact with some particular entity, the first prerequisite involved is that of identity, i.e., the property that renders an entity not only distinct but unique as well. Moreover, if one is able to identify a unique entity and wishes to interact with it, the next prerequisite involved is that of an interface, i.e., a channel for transportation and flow of substance – information. To this end, we denote such interfaces in terms of ports. In summary, the system frustum of our model is now complete. It involves general concepts and relations such as entities and ports as well as interactions (see Figure 5.3).
FIGURE 5.3 SYSTEM FRUSTUM – The general constituents and dependencies in a frustum at the system level include those of physical entities and ports. Moreover, we introduce the notion of interaction between unique ports of two distinct entities.
• Entity – The fundamental concept present in system frustums is that of entities. As such, entities are computational in nature, i.e., they exhibit a programmatic behavior of some factual nature and intended quality. Moreover, in our particular context, entities can seek to interact with other entities in order to provide for a coordinated as well as automated behavior. In doing so, they typically need to explore their environment, in which case we denote them as cognitive agents.
• Port – Every entity provides for a finite set of ports. In the case of open computational systems, ports are used in transferring information between entities, i.e., to interact with each other. As such, each port can have a corresponding construct in the preceding domain frustum by means of the manifestation construct. Consequently, a physical interaction between two entities corresponds to a conceptual dependency between two manifestations. This connection between ports and manifestations provides for the notion of grounded system behavior semantics.
As indicated in Figure 5.3, the system frustum is connected not only to the domain frustum, but to the fabric frustum as well. We will delve further into that particular frustum's general concepts and dependencies in the next section of this chapter.

5.6 FABRIC
Each entity present at the system level of open computational systems typically performs computations and takes part in communications with other entities. Moreover, some of
the involved entities also perform cognitive tasks such as qualitative exploration and refinement. However, in doing so, they all require the general support of mediation – processing and distribution of information between various system peers. It is due to such a general requirement of system parcels that we introduce the notion of a fabric frustum. As previously indicated, in dealing with physical environments, the primitive features of space and media come with an inherent differentiation in precedence. That is, when we are primarily concerned with an environment parcel, we should focus on spatial locale and extent. However, when it comes to fabric parcels the situation is slightly different. In fact, we are dealing with the opposite situation. Within the boundaries of a fabric parcel, interaction media and communication range are of the essence. As such, we consider the fabric to be an integral part of open computational systems' physical environment. All entities that proliferate in some particular system parcel require the support of mediation. Consequently, we consider the fabric of an open computational system to explicitly support the general existence as well as fundamental capabilities of all computational entities, e.g., computation and communication. In essence, these capabilities correspond to mechanical functions, as conveyed by our methodological instrument of principles. To this end, a fabric parcel should be considered as constituted by mediators of computing. Moreover, these mediators depend on each other as a matter of temporal connections – conjunctions. That is, in order to facilitate the mechanical principle of communication in a seamless manner, the mediators in some fabric parcel need to be connected with each other in an ad-hoc and automatic manner. As we have discussed in previous chapters of the thesis, this requirement is of principal concern in the fields of proactive and ubiquitous computing.
However, the requirement also implies yet another function of mediators – to act as the bridge between the world of computing and the physical world of nature. To this end, two additional concepts necessarily need to be considered in fabric frustums, i.e., sensors and actuators. That is, even though the mediators that are present in some fabric parcel at first glance seem to primarily function as the facilitators of whatever exists in system parcels, they need to provide for yet another function of mediation. In essence, since we are dealing with embedded systems that supposedly should provide support for various critical functions of society, e.g., energy, healthcare, and defense, the systems and agents involved require some means of deducing information about phenomena in the physical environment, just as they need the means to induce changes in such physical environments. For example, in modeling constituents and dependencies in some fabric parcel, we need the ability to convey the presence of artifacts such as global positioning system (GPS) sensors as well as various propulsion actuators.

FIGURE 5.4 FABRIC FRUSTUM – The general constituents and dependencies in a frustum at the fabric level include those of physical mediators, sensors, and actuators. Moreover, we introduce the notion of conjunctions between unique mediators.

• Mediator – The basic constituents of an interaction fabric are mediators. Their principal function is to facilitate the flow of substance – information – in open computational systems. As such, the essential features provided for include system entity proliferation, environment observation as well as instrumentation, and mediator conjunction.

• Sensor – In order to acquire information about some mediator's understanding of the surrounding physical environment, e.g., locale, media, events, and artifacts, one can make use of sensors in various forms. However, they should necessarily be considered as providing for a supportive mediator function and not as some physical constituent of system parcels in open computational systems.
• Actuator – In a similar manner as is the case with sensors, the notion of actuators is preferably conveyed as a constituent concept in fabric frustums. As such, actuators function as a semantic bridge between system parcels and the factual behavior present in a physical environment. That is, the actuators in some fabric parcel enable system entities to actually change phenomena in nature.
In summary, we consider a fabric frustum to depict the general concepts and dependencies of mediation infrastructures in computing (see Figure 5.4). As such, there are in principle three abstract requirements imposed on such an infrastructure: local entity interaction, automatic infrastructure formation, and remote entity interaction. If the constituents of fabric parcels support such mechanisms, we stand a good chance of constructing a physical environment of seamless information flow that localized entities can take for granted, i.e., they will be able to observe and interact with local entities as well as entities that in practice are situated at some remote location.
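The three abstract requirements above suggest that mediator conjunctions form a graph in which remote entity interaction amounts to reachability along a chain of conjunctions. A minimal sketch under assumed names (Fabric, conjoin, and the node identifiers are all hypothetical):

```python
# A minimal sketch of a fabric parcel: mediators joined by ad-hoc,
# symmetric conjunctions, where remote interaction is possible exactly
# when a chain of conjunctions connects two mediators.
from typing import Dict, Set


class Fabric:
    def __init__(self) -> None:
        self.conjunctions: Dict[str, Set[str]] = {}

    def add_mediator(self, name: str) -> None:
        self.conjunctions.setdefault(name, set())

    def conjoin(self, a: str, b: str) -> None:
        # Conjunctions form automatically and symmetrically (ad-hoc).
        self.conjunctions[a].add(b)
        self.conjunctions[b].add(a)

    def reachable(self, start: str, goal: str) -> bool:
        # Remote entity interaction: a chain of conjunctions must
        # connect the two mediators.
        seen, frontier = set(), [start]
        while frontier:
            m = frontier.pop()
            if m == goal:
                return True
            if m not in seen:
                seen.add(m)
                frontier.extend(self.conjunctions[m])
        return False


fabric = Fabric()
for mediator in ("node-a", "node-b", "node-c", "node-d"):
    fabric.add_mediator(mediator)
fabric.conjoin("node-a", "node-b")
fabric.conjoin("node-b", "node-c")
```

Here `node-d` has no conjunctions, so entities proliferating on it cannot be observed or interacted with from the other mediators – the seamless information flow the text describes presupposes that the conjunction graph stays connected.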
5.7 CONCLUDING REMARKS
In dealing with open computational systems, we necessarily have to consider computing phenomena that are constitutive in nature. In principle, we are dealing with imperfect knowledge of system behavior in physical environments, where entities can come together and interact with each other in unforeseen ways, i.e., unpredictable events (signals) can occur. We propose a layered model to cope with these situations. In particular, our model enables us to make explicit what should be considered as an acceptable state space evolution. Consequently, the model of open computational systems refines the notion of openness in such a way that design, implementation, and maintenance of complex systems can be performed with the intent of respecting invariant criteria of sustainable system behavior. However, as previously described, our primary concern is to provide for isolation of phenomena so that the notion of dependable computing systems can be addressed. To this end, the various abstraction levels – frustums – accounted for by our model aim at identifying certain key concepts of particular relevance at each perceivable level of some open computational system. Moreover, these general concepts involve certain dependencies that not only connect concepts within one particular frustum, but also connect concepts between the various levels. In doing so, one can start at one point in these conceptual structures – patterns – and walk up and down the frustums, so to speak, when identifying the physical constituents of some particular parcel or combination thereof. Given this feature of our methodological instrument, one should in every respect consider this as a model for isolation. By means of introducing the environment level, we have explicated the need for grounding of behavioral semantics.
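The idea of walking up and down the frustums when isolating the constituents of some parcel can be sketched as a traversal of cross-level links. The four levels mirror the model, while every concrete construct named below (propulsion, thrust-port, radio-1, uav-parcel) is a hypothetical example:

```python
# A minimal sketch of the model as a mechanism for isolation: start at
# one construct and walk the cross-level links to collect every
# connected construct, i.e., the constituents of an isolated parcel.
from typing import Dict, List

LEVELS = ["domain", "system", "fabric", "environment"]

# Cross-level links between constructs in neighboring frustums
# (e.g., a domain manifestation grounds a system port).
links: Dict[str, List[str]] = {
    "domain:propulsion": ["system:thrust-port"],
    "system:thrust-port": ["domain:propulsion", "fabric:radio-1"],
    "fabric:radio-1": ["system:thrust-port", "environment:uav-parcel"],
    "environment:uav-parcel": ["fabric:radio-1"],
}


def isolate(start: str) -> List[str]:
    """Walk the links from one construct and collect every construct
    connected to it, across all levels."""
    seen: List[str] = []
    frontier = [start]
    while frontier:
        construct = frontier.pop()
        if construct not in seen:
            seen.append(construct)
            frontier.extend(links.get(construct, []))
    return sorted(seen)


parcel = isolate("domain:propulsion")
```

Starting from a single domain concept, the traversal recovers related constructs at the system, fabric, and environment levels – which is what the text means by using the conceptual structures as a model for isolation.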
Moreover, by means of different levels of abstraction, we have created a separation of concerns, i.e., certain principles of mechanics and design are more or less relevant to investigate at different levels of our model for open computational systems. Finally, and perhaps most importantly, we introduce the notion of a domain level in the involved systems. In effect, this enables us to consider the existence of domain-specific semantics that, on the one hand, can be made publicly available online and, on the other hand, are linked with the physical phenomena they depict.

DEFINITION 5.1
Open computational systems are physical complexities of interacting cognitive entities; human and software. By means of network enabled capabilities, such cognitive entities strive to attain controlled and sustainable system behaviors, according to preconditions and constraints of potentially infinitesimal duration.
In summary, the model introduced in this particular chapter of the thesis should in every respect be considered as our suggestion of a methodological instrument that can guide
cognitive agents – human and software – in the process of exploring complex phenomena in open computational systems. In a sense, the model could therefore be described as a tool for knowledge acquisition of online phenomena. However, having said that, we have also come to understand, from personal experiences in applying the model, that it is an effective instrument in prototyping of embedded computing systems as well. That is, since the model emphasizes the notion of physical environments, one can easily make use of it in discussions with concerned parties that only have partial experience in the field of complex computing machinery. As such, the environment level of our model helps us in identifying those physical artifacts and events that some concerned party considers as crucial in establishing relevant scenarios in real life situations, i.e., dependable computing systems in the service of those critical functions of society. Moreover, with such an understanding of relevant scenarios at hand, the knowledgeable engineer suddenly has a somewhat easy task to perform – turning the scenario into parcel constructs according to the conceptual structures of the domain, system, and fabric frustums. To this end, however, we are dealing with the subsequent application of our methodological instrument of methods – Online engineering – and that is the main topic of this thesis' next chapter.
Part 3
PRACTICE
6 ONLINE ENGINEERING
Although this may seem a paradox, all exact science is dominated by the idea of approximation. – B. Russell
6.1 INTRODUCTION
As a matter of their inherent nature, we consider open computational systems to appropriately frame the primitive constituents and dependencies of complex computing phenomena. In this context, the model of open computational systems enables us to frame and identify the dominant aspects of online phenomena – grounded in physical environments and possible to isolate by means of frustum explorations. However, as we have previously discussed, models must necessarily be considered as mere approximations of real phenomena. From the perspective of natural science, models aim to convey the essential dimensions of an already existing and spontaneous phenomenon. In the field of computing, however, such phenomena are of an artificial nature. That is, computing systems do not come into existence by themselves; we have to articulate and construct our phenomena before we can conduct any scientific investigations regarding their qualitative properties. In almost any engineering discipline, nature has the final say, and it is impossible to build an artifact whose functionality or behavior requires the circumvention of natural laws. If we relate this line of reasoning to dependable and open computational systems, it could be argued that the time has come for us to take on a similar mindset. That is, either we create artifacts that in an abstract form remind us of their physical equivalence, or we focus on developing artifacts that by virtue are considered as subjected to the forces of nature. An alternative way of looking at this state of affairs is as follows. Either one feels confident enough in the designs we apply, in that they supposedly convey the invariant and complete set of system constituents as well as interactions ever to be
involved in some complex phenomenon of computing, or one feels confident in the methods applied. That is, feeling confident in methods that are geared in such a way that the dynamic and unanticipated behavior of complex phenomena is acknowledged and eventually harnessed. By means of the methodological framework advocated in this thesis, we argue that the latter mindset is the most rational choice at hand. We are dealing with physical phenomena of computing, including the occurrence of unanticipated events, and it is therefore difficult to argue that there ever will be a design that perfectly anticipates the in situ nature of these systems. Instead, let us embrace the perplexing features of open computational systems and concern ourselves with the methodological challenges at hand. Throughout the history of humankind, we have been quite successful in harnessing the essential features of nature. Apparently, even the most complex phenomena can be effectively dealt with, if we address them in an approximate and iterative manner. First, we need to identify the most dominant variables of some phenomenon in question. Moreover, some standard way of articulating such dimensions of complexity is needed in doing so, i.e., in order for us to be capable of communicating our findings. Second, we need the capability of reproducing the phenomenon at hand. To this end, the capability of construction is required, i.e., appropriate instruments as well as experienced expertise. Third, when we have reached a point where the complex phenomenon in question can be articulated as well as constructed, we typically perform a procedure that requires us to be in possession of cognitive capabilities – observation. As cognitive beings, we can thereby gain certain experiences with respect to the phenomenon at hand, i.e., identifying potential inconsistencies as well as inadequacies in the articulation, construction, and observation activities.
Finally, we feed our cumulative experiences from performing these activities back into the methodologies we apply as well as into the systems under study. In doing so, they can be instrumented in such a way as to become even more effective services toward knowledge acquisition and quality assurance than ever before. This line of reasoning is of fundamental importance in addressing the dichotomy of science and engineering in computing. In fact, we would like to argue that it is perhaps one of the greatest challenges at hand in attaining the degree of confidence as well as dependability in open computational systems that we seek. To this end, we have previously advocated a methodological framework of dependable computing systems, where the principal domain of application is embedded systems that provide for critical functions of society, e.g., energy, healthcare, and defense. In the context of this thesis, we have now come to a point where the methodological instrument of methods can be introduced and discussed. We advocate a method that primarily aims to conform to standard approaches toward empirical investigations in natural science [18; 24; 27]. However, in addressing the
dichotomy of science and engineering in computing, the method also strives to be equally applicable in engineering of complex computing phenomena.

SOLUTION 6.1
If we construct our embedded systems according to the same models as we apply in observing them, the method of articulating and instrumenting some system's factual behavior is rather straightforward, i.e., iterative quality refinement as a matter of approximate in situ exploration – online engineering of open computational systems.
Moreover, at this point it is important to emphasize the general stance taken in this thesis regarding the notion of cognitive agents. In principle, we consider such agents to be of a human as well as computational nature. In the current context of this chapter, one should therefore consider the method as geared in such a way that it is intended to be equally applicable in the case of human agents as well as in the case of their computational counterparts. Consequently, in this chapter we introduce our methodological instrument of methods. In doing so, this chapter's first section – Method of adaptation – comprises a discussion regarding the crucial notion of intent in performing system adaptation. In particular, we introduce three strands of cognitive agents that typically are involved in such activities, i.e., exploration, refinement, and operation. Moreover, the principal aspects of open computational systems that these agents intend to deal with are introduced. In the following section – Articulation – we note that, in harnessing the complexity of open computational systems, the previously discussed aspects can be reduced to a common requirement of framing one's cognitive experiences. As such, these experiences of individual agents thereafter need to be mapped onto frustum constructs. Consequently, in this chapter's next section – Construction – we discuss the procedure of turning such constructs into physical system constituents. In particular, we note that there is an implicit difference between construction of artificial systems in general and those of a computational nature. In effect, we argue that the architectural designs involved must be made available online in the form of behavioral models.
Moreover, in a subsequent section – Observation – we come to the conclusion that all cognitive agents involved in dependable computing systems in fact concern themselves with the notion of exception states, i.e., to appropriately deal with the occurrence of unanticipated system events. As a result of this conclusion, the following section – Instrumentation – discusses the notion of quality as a matter of counteracting exception states in dependable computing systems. In essence, we argue that an open computational system can be considered as dependable if it is governed by cognitive agents that effectively manage to move a system, disrupted by some unanticipated event, back into its intended physical–temporal–conceptual state space envelope. Finally, in this chapter's last section – Concluding remarks – we summarize the intended applicability as well as limitations of our method. In particular, we emphasize that implementing it is an activity to be performed by cognitive agents that, in
effect, necessarily relies on technological support in doing so. However, without any further ado, let us address our proposal for a methodological instrument of methods – online engineering.

6.2 METHOD OF ADAPTATION
In the thesis’ previous chapter, we introduced a model of open computational systems that, in essence, aimed for isolation of systemic phenomena. The model emphasized two principle constructs in doing so, i.e., frustums and parcels. Moreover, the constructs where related to each other in such a way that any cognitive agent should be able to choose some temporal and conceptual frustum – aspect – and subsequently use it as a mechanism for isolation, i.e., in order to frame and identify the physical constructs of some in situ parcel. As such, the notions of frustums and parcels account for the previously described physical–temporal–conceptual state space of open computational systems. Furthermore, it should perhaps be noted that the introduction of parcels also was motivated by the fact that no physical phenomena can be assumed to come with a de facto physical bounding space. That is, in dealing with open computational systems, cognitive agents must apply a top–down approach in isolating online phenomena. In effect, our model was geared in such a way that any cognitive agent – human as well as software – should be able to deduce approximate patterns of behavior in temporarily isolated parcels. However, in this context of physical environments, constituted by an unknown and continuously changing set of interacting constructs, we cannot explain qualitative behavior in terms of static patterns. Such patterns can only be considered as a temporarily correct approximation of the behavior at hand. An approximation of behavior in our context therefore corresponds to the most dominant state space variables, as identified at one particular point in time. Obviously, if other dominant phenomena and events are introduced into our temporarily isolated parcel, those patterns currently held by a cognitive agent can no longer be considered as a correct approximation of behavior and therefore need to be updated accordingly. 
When a cognitive agent finds itself in such a situation, i.e., some already observed pattern of behavior needs to be updated, there is a decision to be made. This decision has to do with the agent's intent in monitoring the parcel at hand. On the one hand, the deviation in behavior could have been anticipated and possibly even correspond to an event that the agent wanted to occur. On the other hand, the event could have been unanticipated, in which case it certainly would not have been of an intentional nature. The occurrence of unanticipated events in some parcel could be characterized as the qualitative dimension of a concern. However, if this is the case, one needs to understand
why the event should be considered as a matter of concern. We argue that this has to do with the very nature and fundamental intent of some agent performing isolation of parcels. In doing so, we consider cognitive agents of open computational systems to come in three strands. Firstly, we have those agents that primarily concern themselves with exploration of systemic phenomena. In this particular case, it is mainly the mere occurrence of unanticipated events that is of interest, i.e., what behavioral principles could one expect in open computational systems? The occurrence of unanticipated events should be considered as an indication of the need for currently applied frustums to be refined – discovery. Secondly, we have those agents that primarily concern themselves with refinement of systemic qualities. These agents seek to counteract the negative effects that unanticipated events impose on some parcel's qualitative behavior – sustainability. Finally, we have those cognitive agents that concern themselves with the continuous operation of qualitative functions in embedded systems, supposedly providing some service of societal concern. In this particular case, the occurrence of unanticipated events must be dealt with from an organizational perspective. Some agents need to reconfigure available resources within parcels in such a manner that certain organizational missions and goals are fulfilled – coordination. In summary, we are dealing with three kinds of agent concerns, i.e., exploration, refinement, and operation. Moreover, even though the common prerequisite of all these concerns is that of phenomena isolation, they all share yet another commonality – adaptation. That is, with each type of agent's general intent in mind, they all require the capability to adapt certain aspects of the open computational systems involved.
In doing so, we argue that this general requirement of adaptation can be formulated as a process of four distinct activities, i.e., articulation, construction, observation, and instrumentation. Moreover, as indicated in Figure 6.1, we consider the various components subjected to adaptation in such a process to involve those concerns, aspects, frustums, parcels, and events that are of particular relevance to some unique agent. Still, as previously indicated, we consider these agents to come with certain disparate intentions in performing some kind of adaptation. Therefore, in the following section of this chapter, we aim for a discussion regarding these activities with respect to the different types of agents involved. Moreover, one should perhaps note that, at various points in these discussions, there seems to be an implicit need for technological support in conducting the activities of adaptation – especially in those cases where the agents are of a computational nature. This is of course a correct observation. However, this implicit requirement will be dealt with in the next chapter of the thesis – Enabling technologies. Therefore, without any further ado, let us return to the outline of our methodological instrument of online engineering.
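The four activities of adaptation, iterated under cognitive feedback, can be sketched as a simple loop. Every function body and the parcel representation below are hypothetical placeholders for what a concrete agent would supply, not the thesis' formal method:

```python
# A minimal sketch of the adaptation process: articulation, construction,
# observation, and instrumentation, iterated over an isolated parcel.
from typing import Dict, List

Parcel = Dict[str, List[str]]


def articulate(parcel: Parcel) -> List[str]:
    # Map cognitive experience onto frustum constructs (an aspect).
    return sorted(parcel["observed"])

def construct(constructs: List[str]) -> Parcel:
    # Turn frustum constructs into physical system constituents.
    return {"observed": constructs, "events": []}

def observe(parcel: Parcel) -> List[str]:
    # Detect unanticipated events in the isolated parcel.
    return [e for e in parcel["events"] if e not in parcel["observed"]]

def instrument(parcel: Parcel, unanticipated: List[str]) -> Parcel:
    # Feed experience back: refine the parcel to cover the new events.
    parcel["observed"].extend(unanticipated)
    return parcel


def adapt(parcel: Parcel, rounds: int = 1) -> Parcel:
    """One or more iterations of articulation, construction,
    observation, and instrumentation."""
    for _ in range(rounds):
        constructs = articulate(parcel)
        rebuilt = construct(constructs)
        rebuilt["events"] = parcel["events"]
        parcel = instrument(rebuilt, observe(rebuilt))
    return parcel


parcel = {"observed": ["propulsion"], "events": ["propulsion", "overheating"]}
adapted = adapt(parcel)
```

The loop illustrates the cognitive feedback of Figure 6.1: an event not covered by the current articulation (here, the hypothetical `overheating`) is folded back into the constructs on the next iteration.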
FIGURE 6.1 COGNITIVE FEEDBACK – Methods toward qualitative behavior of open computational systems should emphasize the iterative exploration, refinement, and operation of systemic phenomena. As such, cognitive agents address certain concerns in terms of individual aspects, e.g., discovery, sustainability, and coordination, and deal with these by means of frustums and isolated parcels. In essence, it is the in situ dynamics of parcels that agents aim to harness.
6.3 ARTICULATION
As previously indicated, we consider all agents involved in upholding the qualitative behavior of some system as primarily concerned with systemic exploration, refinement, or operation. Our model of open computational systems therefore comprises four levels of abstraction, or separation of concerns, i.e., domain, system, fabric, and environment. Moreover, in order to elicit the physical constituents and interactions of some phenomenon, we have explicitly emphasized the first-order nature of the domain level. The practical reason for doing so is that, irrespective of some particular agent's primary function or unique intentions, all agents share the common need to explicate certain aspects of phenomena. The frustum construct of our model was therefore introduced in order to provide for cognitive agents' need to isolate those physical constructs that they, for whatever reasons, intend to adapt. Such an activity of isolation is, however, appropriately characterized as an observation being performed. As such, we will return to this in a subsequent section of this chapter. In our current context, one should instead emphasize that the construct of frustums exhibits one feature in particular: they are accessible online. As such, frustums function as an individual agent's point of departure in exploration, refinement, and operation of grounded semantics. Moreover, as a matter of these constructs being available online, they are accessible to just about any cognitive agent involved. Therefore, we should consider them as mere aspects, i.e., aggregates, of the same physical phenomenon. In every respect, frustums are the means to an end for cognitive agents to make their concerns explicit – articulation.
ONLINE ENGINEERING
83
Depending on the different types of agents that we are dealing with, the activity of articulation can be a consequence of different agents' intentions. For instance, if we were dealing with an exploratory agent, the activity of articulation would typically focus on identifying those conceptual structures that, for the time being, are understood as most dominant in some particular phenomenon of concern. If the agent's principal concern instead were that of refinement, the activity of articulation would rather focus on identifying those physical constructs that convey the current behavior of some system aggregate. Finally, if we are dealing with an operator agent, the activity of articulation typically emphasizes the need to identify both conceptual and physical constructs so that they, in different configurations, reflect the intended behavior of coordination or automation. The common denominator in all these cases is that the agents involved require the capability to identify and articulate their experience of a natural phenomenon. It should perhaps be noted that all agents – human as well as software – typically do so in terms of individual aspects. However, these individual experiences of natural phenomena are subsequently used by agents in performing their unique functions. To this end, we argue that our model of open computational systems provides for such a general requirement and activity of articulation, i.e., the need to articulate individual experiences of phenomena and map them onto the conceptual structures of frustums:

DEFINITION 6.1
articulation = experience → frustum
For example, as cognitive agents, we perform this activity of articulation in science as well as engineering. In particular, when it comes to software engineering, articulation work is in many ways the essence of requirements elicitation and architectural design, whereas it is perhaps more appropriately characterized as a matter of process modeling in the field of computer science. In the case of exploratory agents, we therefore consider articulation work to aim for the creation of applicable models, whereas in the case of agents concerned with refinement, such articulation work emphasizes the creation of applicable designs. Moreover, when it comes to agents of operation, the case is actually a bit of both, i.e., modeling and design, in that they aim for conveyance of mission objectives and plans. However, even with these slight differences in wording, all agents necessarily need to articulate their individual perspective – aspect – of some particular concern and context at hand. We do so in order to increase the likelihood of an effective outcome in dealing with those phenomena that trouble us. As previously indicated in the thesis, such articulation work is, in essence, based on the individual experiences of those agents involved, aiming for a better understanding of some particular situation at hand. Nevertheless, even though articulation work as such might be considered an abstract and, at times, theoretical activity, the main reason for any agent to take part in it is to effectively deal
with the concerns at hand, in a most concrete and practical manner. To this end, we consider the activity of articulation to involve the identification and definition of frustum constructs at all abstraction levels of open computational systems – domain, system, fabric, and environment. A practical example of this articulation procedure will be further discussed in Chapter 8 – Network enabled capabilities.

6.4 CONSTRUCTION
When it comes to construction of physical and artificial systems, the process as such is based on a quite straightforward procedure. First, we identify the conceptual domain, or application, of the requested behavior. Secondly, we identify the primitive concepts as well as relations of the application domain and, eventually, we end up with a design. Finally, we map the design concepts and relations onto physical equivalences. Once we have implemented these equivalences, and they start interacting with each other, the physical embodiment of our designed behavior is in place. If the system in question is a stand-alone application, this methodological approach to construction of artificial systems exhibits no apparent drawbacks. However, if our application is intended to be embedded in the habitat of nature, we encounter a key drawback in this procedure. The somewhat traditional approach toward construction of artificial systems, as previously outlined, assumes the existence of a one–to–one mapping between the design and model of some particular system's behavior, i.e., a mapping that is assumed to be a correct representation of an a priori existing phenomenon with verified qualities. However, if we are to assume that a particular design is a correct representation of an established behavior in some system, the system must necessarily be of a closed nature from a stimuli point of view. In principle, the construction of most any artificial system relies upon the assumption that the envisioned product is never subjected to any unanticipated and dominant stimuli from the surrounding environment. It is only under such circumstances that we can assume a design to be a true representation of some particular system behavior. Moreover, it is only when this assumption can be held as true that we can make use of the corresponding design as a model in assessment of the artificial system's factual behavior.
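The three-step procedure above – domain, then design, then physical equivalences – can be sketched as follows. This is a hypothetical illustration only; the names `Design`, `Component`, and `instantiate` are not constructs defined in the thesis:

```python
# A minimal sketch of the traditional construction procedure:
# 1) identify the conceptual domain, 2) derive a design of primitive
# concepts and relations, 3) map the design onto physical (here:
# in-memory) equivalences that interact with each other.
from dataclasses import dataclass, field

@dataclass
class Design:
    concepts: set            # primitive concepts of the application domain
    relations: set           # pairs (source, target) relating concepts

@dataclass
class Component:
    name: str
    links: list = field(default_factory=list)

def instantiate(design):
    """Map each design concept onto a physical equivalence (a component)
    and wire up the designed relations as interactions."""
    components = {c: Component(c) for c in design.concepts}
    for source, target in design.relations:
        components[source].links.append(components[target])
    return components

# Example: a toy 'sensor feeds monitor' domain.
design = Design(concepts={"sensor", "monitor"},
                relations={("sensor", "monitor")})
system = instantiate(design)
# Note the closed-system assumption: once deployed, the design remains a
# correct model of behavior only as long as no unanticipated external
# stimuli reach the system.
```

The sketch makes the drawback discussed above concrete: nothing in `system` retains an online link back to `design`, so the design cannot be consulted or adapted once the components start interacting.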
From this perspective of dependency constraints between designs and models, it stands clear that the previously described approach to construction of artificial systems exhibits a quite fundamental drawback. In general, it can be said to assume that the system to be constructed will be of a closed nature. More specifically, the approach assumes that no part of the target system will be subjected to a dominant and external influence that was not accounted for in designing the system. Consequently, the assumption does not hold if the target system is considered to be of an open nature, i.e., a
system that possibly can be under the influence of unaccounted for stimuli from the physical environment it is embedded in – which is the particular case with open computational systems. In the process of constructing and, in the end, deploying an artificial system, we tend to focus solely on achieving the sought-for behavior of some particular system, and therefore neglect to consider whether or not the corresponding design reflects the factual behavior of an open system. Does the architectural design map onto a valid model of behavior? If the target system in fact is of an open nature, this necessarily means that the original design only encompasses an incomplete set of the possible state space transitions that the in situ system will undergo during its lifetime – according to the empirically established model. Consequently, if we aim at constructing an open system, it is imperative that we understand the requirements involved in transforming a design into a physical equivalence, i.e., system behavior.

SOLUTION 6.2
The most important aspect of transforming designs into systems is not primarily that the original design is preserved throughout construction of the target system, but rather that it is preserved and made accessible for adaptation during the system's in situ evolution, in the form of an online accessible model of behavior.
It is important that we make this distinction since the design of a system is of a closed nature, whereas the system as such can be of an open nature. In the construction of artificial systems that by nature are open, we should therefore strive for a decoupling of conceptual structures and their physical equivalence, while at the same time both are opened up and made accessible in an online manner. In effect, this is why our model of open computational systems includes the particular constructs of frustums and parcels. As described in the previous section of this chapter, the first activity in the method of online engineering is that of articulation. As such, it primarily aims for mapping individual agent experiences onto frustum constructs. Moreover, due to the individual intentions of the various agents involved, the characteristic feature of these frustums can be described in terms of models, designs, or even plans. Naturally, the next phase in online engineering is that of construction, where we aim at implementing the constructs involved. That is, during the construction phase of online engineering, we consider the agents involved to turn their individual experiences and expectations into concrete manifestations of system behavior. In practice, this activity of construction therefore aims to instantiate the most dominant aspects of an open computational system, as identified at one point in time, in the form of frustums and parcels.

DEFINITION 6.2
construction = frustum + parcel
In essence, the construction phase of online engineering takes all frustum constructs, as identified during articulation, as input to the process of transforming them into concrete instantiations at the environment, fabric, system, and domain levels of open computational systems – in that particular order. In effect, the construction phase of online engineering is mainly a matter of standard procedure in software engineering. However, instead of discarding the possibility of online access to the original design, we instantiate this as well in its concrete form at the domain level of our systems. A practical example of this construction procedure will be further discussed in Chapter 8 – Network enabled capabilities.

6.5 OBSERVATION
As indicated on numerous occasions throughout this thesis, we consider phenomena in nature to occur as a matter of spontaneous processes. Moreover, even though we are concerned with artificial systems in such environments, we are primarily dealing with constructed phenomena. Initially, we consider these systems to be engineered from the outset of an articulated design and, hopefully, we feel confident that the resulting systems act according to our initial intentions. However, as we have previously discussed, this confidence is severely hampered when the system in question is all of a sudden subjected to external stimuli – unaccounted for in the original design. Consequently, we have proposed to design and implement these systems in accordance with the same model that we advocate in observation of their dynamic behavior. In doing so, we address the dichotomy of computer science and software engineering so that practitioners of both fields can articulate their concerns in a similar manner – the model of open computational systems. The foremost challenge that faces us in dealing with open computational systems, i.e., no matter if we consider ourselves agents concerned with exploration, refinement, or operation, is that of unaccounted for stimuli. That is, stimuli of such dominant dimensions that the intended behavior as well as continuous operation of some particular system is more or less negatively affected. Of course, at the very core of these events lies the fact that we did not anticipate them when originally designing, implementing, and deploying some particular system. However, instead of advocating new design patterns that supposedly would be better at facilitating our anticipation of these unforeseen events, we advocate a model and method that help us in dealing with their in situ impact.
In essence, we therefore advocate a method of online engineering that, first of all, emphasizes the articulation of open and online phenomena and, secondly, requires that the subsequent construction of such phenomena be geared in conformance with the model of open computational systems. Finally, and most importantly, all model constructs
identified during the articulation phase, i.e., domain, system, fabric, and environment frustums, must be constructed and made readily accessible in an online manner. When an agent has completed these particular activities, it goes into an online phase. An online phenomenon is now readily available for cognitive inspection – observation. At this point, it is perhaps important to note that the observable phenomena at our hands were necessarily constructed by cognitive agents, each of which comes with its individual concerns, aspects, and intentions. Hence, if we consider a physical environment that was empty prior to the construction phase, the only aspects of the now embedded and open computational system that are available for cognitive inspection are those articulated by the agent that constructed the phenomenon in the first place. Even so, as we have previously stated throughout this chapter, all constructs resulting from the articulation phase have been implemented during the construction phase and are now available for observation. Moreover, as we discussed in the previous chapter of the thesis, in order to observe some particular phenomenon, an agent must first isolate it, which is done by first selecting an appropriate domain frustum and subsequently applying it to identify all constituents and relations of the agent's concern. We consider the resulting volume of constituents and relations to be that of a physical parcel.

DEFINITION 6.3
observation = frustum → parcel
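The frustum–parcel isolation of Definition 6.3 can be sketched as a simple observation cycle. Consider an exploratory agent monitoring mediators in a network; all identifiers below are hypothetical illustrations, and the 512-mediator goal configuration follows the network-monitoring example discussed in this section:

```python
# Sketch of an exploratory agent's observation cycle: a frustum is
# applied to isolate a parcel from the environment, and the quantities
# found in the parcel are qualified against the agent's goal state.
def observe(frustum, environment):
    """Frustum-parcel isolation: select the constituents of concern."""
    return [c for c in environment if frustum(c)]

def qualify(parcel, goal_count):
    """Compare the observed quantity against the goal configuration."""
    return "nominal" if len(parcel) >= goal_count else "exception"

is_mediator = lambda c: c["type"] == "mediator"

# First observation: all 512 mediators of the configuration are present.
env_t1 = [{"type": "mediator", "id": i} for i in range(512)]
status_t1 = qualify(observe(is_mediator, env_t1), 512)

# A subsequent observation: only 256 mediators remain, so a quality
# reduction has occurred and the agent enters an exception state that
# calls for refinement, i.e., instrumentation.
env_t2 = [{"type": "mediator", "id": i} for i in range(256)]
status_t2 = qualify(observe(is_mediator, env_t2), 512)
```

Note that the agent only ever sees the aspect its frustum isolates; constituents not matched by `is_mediator` are invisible to this particular observation, which is the point made above about observation being bound to one online accessible domain frustum.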
Now, with some dynamically identified parcel at its hands, the agent can analyze the current situation it is concerned with – however, only from the particular aspect of an online accessible domain frustum. Given the individual intent of the agent – exploration, refinement, or operation – the quantities manifesting themselves, in terms of the parcel's constituents and relations, can be considered as various forms of system qualities. For example, let us consider an exploratory agent with the intent of network monitoring. By means of continuously performing frustum–parcel isolations, i.e., observation, the agent has come to understand that at the time of one observation there are 512 mediators present in some particular configuration, whereas at the time of a subsequent observation there are only 256 mediators left. The goal state of the agent is to keep all mediators up and running according to some particular configuration and, therefore, some form of quality reduction has occurred. At this point, one should consider the agent to go into an exception state. That is, the system needs to be refined, due to the agent's general function as well as individual experiences. In effect, agents need the capability to observe their evolving environment and understand the current context they are situated in. They need this capability in order to execute the most appropriate actions according to their intended function. When it comes to open computational systems, this calls for the quantities in some parcel to be qualified, i.e., measured and analyzed, in a dynamic manner. If we relate this line of reasoning to dependable computing systems, it is in fact the effective management of
exception states that renders a system dependable. This stands in clear contrast with the somewhat traditional mindset of the agent paradigm, i.e., that the qualitative behavior of systems can be ascribed to offline formal properties. In the context of this thesis, we therefore argue that the qualitative behavior of systems is equally important to consider, i.e., their dependable function in dealing with disruptive events. In essence, we argue that all agents therefore need the capability to adapt their current context, in terms of the same constructs as they applied in observing it. To this end, all agents need the capability to instrument open computational systems at run time, which is the final activity in our method of online engineering.

6.6 INSTRUMENTATION
The first three activities in our method of online engineering were those of articulation, construction, and observation. When an agent – human as well as software – performs these activities, it is primarily by means of our model of open computational systems. As such, the main emphasis of this particular model is the notion of online phenomena that perform within the constraints of a physical–temporal–conceptual state space. Moreover, as a result of unanticipated events and stimuli, the intended behavior of such phenomena can deviate in such a manner that the quality of service provided reaches unacceptable levels. Unfortunately, whereas the occurrence of such events is unquestionable, so is the fallacy of standard divide-and-conquer approaches toward establishment of system qualities in software engineering. To this end, we therefore advocate a complementary method of online engineering, in which we emphasize the establishment of qualitative system behavior as a matter of iterative in situ refinement. As previously discussed in this chapter, we consider such systemic refinements to rely on three activities in particular, i.e., articulation, construction, and observation. Moreover, in each activity the notion of an applicable and online accessible model is of the essence. At first, one needs it in order to characterize some sought-for phenomenon. Then, the model is needed in order to transform such phenomena characteristics into a programmatic behavior. Finally, it is needed in order to explore the dominant aspects of some online phenomenon under study. As indicated at an early stage in this chapter, the method of online engineering is in essence geared toward in situ systemic refinements. That is, when we have appropriately isolated and understood our concerns regarding some systemic phenomenon, we are primarily interested in instrumenting it in such a way that it once again can perform within the state space we originally intended.
Moreover, in doing so, we would also like to understand the impact of our refinements, i.e., to what extent our refinements increased the currently provided quality of
service – or lack thereof – from an online perspective. In effect, to instrument an open computational system is to perform the activities of articulation, construction, and observation all over again. However, in such a second iteration, there already exist model constructs in the physical environment, as a result of the original iteration. These can be changed, removed, or created by any cognitive agent with access to them.

DEFINITION 6.4
instrumentation = experience → frustum
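Taken together, Definitions 6.1–6.4 suggest an iterative loop, sketched below. All identifiers are hypothetical illustrations of the method's four activities, not an implementation prescribed by the thesis:

```python
# The online engineering cycle: articulate, construct, observe, and,
# when observation reveals a quality deviation, instrument -- i.e.,
# perform the same activities again against the live system.
def articulation(experience):
    """experience -> frustum (Definition 6.1)."""
    return {"concepts": set(experience)}

def construction(frustum):
    """frustum + parcel (Definition 6.2): instantiate the articulated
    concepts as physical constituents, keeping the frustum online."""
    parcel = {c: "online" for c in frustum["concepts"]}
    return frustum, parcel

def observation(frustum, parcel):
    """frustum -> parcel (Definition 6.3): isolate the constituents
    of concern as they currently perform."""
    return {c: s for c, s in parcel.items() if c in frustum["concepts"]}

def instrumentation(experience, frustum, parcel):
    """experience -> frustum (Definition 6.4): a second iteration in
    which existing model constructs are adapted, not rebuilt."""
    frustum["concepts"] |= set(experience)
    for c in experience:
        parcel.setdefault(c, "online")
    return frustum, parcel

# One iteration: articulate and construct, observe, then refine the
# running system with a newly experienced concern.
frustum = articulation(["sensor", "mediator"])
frustum, parcel = construction(frustum)
snapshot = observation(frustum, parcel)
frustum, parcel = instrumentation(["logger"], frustum, parcel)
```

The essential design point of the sketch is that `instrumentation` operates on the same online constructs (`frustum`, `parcel`) that articulation and construction produced, rather than discarding them and starting over offline.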
Consequently, validation and establishment of stable system behavior is at the core of online engineering. The method therefore emphasizes an iterative approach where real time observation and instrumentation of systemic qualities is of the essence. Furthermore, if observation indicates that the behavior of some particular system is unstable or about to fail, agents need the capability to instrument the system in such a way that it at least can be kept online. A complete shutdown should be considered as a complete failure. In this respect, if a negative system quality has been observed, the activities of articulation, construction, and observation are simply performed once again. This time, the susceptible constructs of an online phenomenon can all be subjected to adaptation and refinement, in such a way that the system is moved back into its intended physical–temporal–conceptual state space envelope. However, to instrument a system in such a way that it can be considered dependable requires the involvement of cognitive agents that are effective in management of unanticipated exception states. As such, appropriate management of these states is in every respect dependent on the agents' capacity to deal with complex cognitive phenomena.

6.7 CONCLUDING REMARKS
In summary, the material presented in this particular chapter of the thesis aimed for a discussion regarding an applicable method of adaptation. However, it should be noted that, as such, the method is implicitly assumed to deal only with adaptation of online phenomena. Obviously, in doing so it does not address adaptation in general, which is often the challenging case advocated throughout many paradigms of computer science and software engineering. For example, in the field of proactive computing, it is stressed that, in order to harness the complexities of dependable computing systems, one needs to implement computational entities – agents – that are capable of bringing humans out of the loop. In effect, it is argued that, in order to deal with certain principles of mechanics in complex computing machinery, we need to design and implement autonomous agents that, in an automated fashion, are capable of relieving humans in their traditional role of operating the involved systems. With the intricate features of such a requirement as well
FIGURE 6.2 APPROXIMATING QUALITY – the iterative cycle of (articulation), (construction), (observation), and (instrumentation).

Methods toward qualitative behavior of open computational systems are concerned with the iterative exploration, refinement, and operation of systemic phenomena. As such, cognitive agents in general address certain concerns in terms of individual aspects, e.g., discovery, sustainability, and coordination.
as the dichotomy of computer science and software engineering in mind, the author would like to stress that taking humans out of the loop in these systems perhaps is not such a good idea after all. As initially indicated in this chapter's first section, we argue that cognitive agents are of the essence in dealing with complexity. As is the case in proactive and autonomic computing, we also envision the involved agents to address the fundamental challenge of harnessing complexity in contemporary computing machinery. In our case, we do so in terms of open computational systems that are harnessed by means of cognitive agents that perform the method of online engineering. However, in doing so, we consider these agents to be of a human as well as computational nature. Consequently, a critical feature of these cognitive agents is the notion of their functional intent, i.e., exploration, refinement, and operation. We claim that these particular functions are of crucial importance in providing for dependable behavior in complex and embedded systems of computing. Still, the functions are in fact capabilities of a cognitive nature, i.e., they require the explicit capability of an agent to understand its physical–temporal–conceptual context and act accordingly. Some of these contexts are of such a complex nature that agents of computation simply will not do. To this end, the author would like to stress the need for methodological instruments in computing, e.g., methods, which primarily emphasize the principal intent of bringing certain cognitive agents back into the loop. Throughout this thesis we therefore advocate instruments that should be considered as equally applicable by those particular agents – no matter if the agents that implement them are of a human or computational nature. In
this particular chapter, we have therefore advocated a method of adaptation that involves the general agent intents of exploration, refinement, and operation as well as the common activities of articulation, construction, observation, and instrumentation. Moreover, we claim that performing these activities in an iterative manner eventually will lead to a particular agent feeling (yes; only if it is a human agent) confident that some approximate quality of concern has been attained (see Figure 6.2). However, in doing so, we should also acknowledge the fact that all agents require different forms of technological support in attaining such confidence. That is, all cognitive agents – human and software – require technologies in performing their intended function as well as in fulfilling their intended missions and plans. Consequently, we will address this issue head on in the following chapter – Enabling technologies.
7 ENABLING TECHNOLOGIES
The real voyage of discovery consists not in seeking new landscapes, but in having new eyes. – M. Proust
7.1 INTRODUCTION
The model of open computational systems could be considered as the somewhat theoretical underpinnings of the methodological approach and mindset advocated throughout this thesis. The more practically oriented instrument of our approach is the method of online engineering. However, there is still one pivotal instrument of the methodology that we have not discussed yet. Ironically, this particular instrument is perhaps the most important one of them all, but the introduction and discussion of its applicability would not make much sense without the background of the other instruments. Naturally, the instrument in question is that of enabling technologies. Enabling technologies of some particular methodology typically aim to support the practitioner(s) in applying the involved models and methods. In our everyday conduct of computer science and software engineering, we need technological support when we model systems, design architectures, and perform experiments. One should consequently consider enabling technologies as one of the pivotal instruments in any methodology of computing. However, the involved technologies are always conceived with some particular intent in mind. For example, we develop simulators to verify certain models and we develop whole development environments to facilitate the design, implementation, and debugging of complex computing systems. With such technologies in mind, the general intent of developing them is to provide for a much-needed service in either the field of computer science or in the field of software engineering. However, we typically do not develop technologies that simultaneously address the concerns of computer science and software engineering. We argue that the reason for
this state of affairs is a natural consequence of not considering scientific models and engineering designs as the same instrument. However, as argued in the previous chapter of this thesis, in the context of open computational systems this is a crucial prerequisite. If we are to address the dichotomy of computer science and software engineering, we stress that all cognitive agents involved – human and software – must apply the same model, and its corresponding method, in establishment of dependable computing systems. It is with this reasoning in mind that we propose our methodological instrument of enabling technologies. In essence, the enabling technologies toward online engineering of open computational systems aim to provide support for the technological requirements introduced by those cognitive agents that intend to apply the methodology in its full form. We have previously identified these agents to primarily concern themselves with exploration, refinement, and operation of complex computing phenomena. With this understanding at hand, and even though the primary requirement of these agents is cognitive inspection, one should note that some of the agents are of a computational nature. Consequently, the mere existence and proliferation of these agents is even more important to support. However, since we consider software agents to be part of the very phenomena they intend to explore, refine, and operate, one should consider this requirement in a similar manner as those involved in supporting the existence of open computational systems as such, i.e., technical platform requirements. To this end, it is important that we explicate at what level of abstraction the involved requirements operate. For example, if one were to focus on development of a simulation engine, certain technological concerns such as discrete versus continuous resolution of time must be appropriately managed and implemented.
If we instead were to focus on the development of a compiler, general concerns such as lexical analysis and type checking could be of the essence. However, in both cases, the requirements involved focus on an isolated application. In our particular case, we are concerned with requirements of a more general nature, i.e., the requirements of a methodology. It should however be noted that, irrespective of application domain, all requirements considered in development of a methodology's enabling technologies should operate on the same level of abstraction. An erratic application of this rationale would typically correspond to the activity of looking for bacteria by means of an electron microscope, and then performing surgery by means of a kitchen knife, with the sole intent of removing the bacteria. In essence, all application domains are constrained by their unique and implicitly articulated levels of abstraction. In our particular case we are dealing with the application domain of a comprehensive methodology toward dependable computing systems. As such, we claim that our fundamental levels of abstraction involve those identified in our model of open computational systems, i.e., domain, system, fabric, and environment, and that the principal intent of our technologies is to provide support for the method of online engineering, i.e., performing articulation, construction, observation, and instrumentation. Moreover, all technologies as well as their requirements are naturally also geared toward the needs of some particular group of users. In our case, this set corresponds to all cognitive agents involved in online engineering, i.e., agents that emphasize exploration, refinement, and operation of open computational systems. In summary, the requirements imposed on our enabling technologies therefore involve levels of abstraction, practical activities, and intended users. Since the general users of the involved technologies are those cognitive agents that concern themselves with issues of complexity, the fundamental design rationale applied in developing our enabling technologies is that of validation. To this end, we envision a software architecture that features cognitive exploration and qualitative refinement of open computational systems. Consequently, in this particular chapter of the thesis, we aim to introduce an outline of this architecture as well as a discussion regarding the rationale of its design. In this chapter's first section – Architecture for validation – we introduce a historical perspective on the evolution of this architecture. In particular, we emphasize that the technologies involved have been developed in parallel with the methodological framework advocated throughout this thesis. However, the architecture outlined herein should by no means be considered as complete or comprehensive, but rather as a reflection of our current progress in development. In the following section – SOLACE – we therefore introduce the historical background as well as certain experiences in development of the first architectural component. As such, this particular component supports us in experiments regarding the mechanical principles of computing. Moreover, it is on top of this component, or platform, that all systems are implemented.
In the next section – DISCERN – we introduce yet another historical background and discussion on experiences gained from development of enabling technologies. However, this time the topic of choice is our architectural component for exploration of open computational systems. Finally, in this chapter's last section – Concluding remarks – we discuss certain insights gained during development of our architecture for validation as an activity parallel to that of methodological framework development.

7.2 ARCHITECTURE FOR VALIDATION
If appropriately implemented, the mechanical principles of computing and communication enable us to provide for effective symbol manipulation and information sharing. Scientists and engineers nowadays use the resulting artifacts to develop computational systems of an unprecedented scale and complexity. However, as the complexity of the systems we monitor, evaluate, and instrument increases, so unfortunately does our uncertainty whether or not a particular system will behave in a predictable manner – given certain preconditions and systemic goals.
An indication of the problem at hand manifests itself in the following two interrelated and implicit assumptions involved in design and maintenance of complex networked systems. On the one hand, networked systems that provide society with supportive functions are in many cases of a more or less physical nature. In effect, they are simultaneously subjected to physical stimuli and able to influence the state of their surrounding environment. On the other hand, as a direct consequence of their physical and networked nature, these systems have no clear boundaries. That is, the overall behavior of these systems varies from one point in time to another, as a result of the occurrence of possibly unanticipated events – open computational systems. If this is the case, we cannot validate the factual behavior and semantics of some particular system in an offline manner. Instead, we need an approach that explicitly addresses validation of behavior in continuously evolving systems. Moreover, observing and instrumenting these open computational systems cannot involve stopping some particular phenomenon in question merely to study its complex structure at a given point in time. Instead, we need methodological instruments that convey the essence of system behavior in constant flux – online engineering. To this end, we have advocated a comprehensive methodology that, on the one hand, emphasizes the dichotomy of computer science and software engineering and, on the other hand, emphasizes exploration, refinement, and operation of in situ behavior in complex computing systems. In effect, we need enabling technologies that, above all, provide support for cognitive agents to create, change, and inspect the real-time behavior of supposedly dependable computing systems – qualitative system validation. 
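The demand that a system in constant flux be inspected without halting it can be made concrete with a small sketch. The following Python fragment is purely illustrative – the names EventBus and Observer are our own and belong to no platform discussed in this thesis – and shows one common way to let observers sample a running system without ever blocking or stopping it:

```python
from collections import deque

class EventBus:
    """Decouples a running system from its observers: publishing never
    blocks, and attaching an observer never stops the system."""
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def publish(self, event):
        # Fire-and-forget: each observer buffers the event for later analysis.
        for obs in self._observers:
            obs.record(event)

class Observer:
    """Keeps a bounded trace of recent events -- a window onto a system
    in flux, inspected without halting the phenomenon itself."""
    def __init__(self, window=3):
        self.trace = deque(maxlen=window)

    def record(self, event):
        self.trace.append(event)

# A toy system evolving while being observed online.
bus = EventBus()
probe = Observer(window=3)
bus.attach(probe)
for state in ["s0", "s1", "s2", "s3", "s4"]:
    bus.publish(state)

print(list(probe.trace))  # the three most recent states
```

The observer only ever sees a bounded, most-recent window of the evolving state, which is precisely the kind of non-disruptive access the argument above calls for.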
In pursuit of such features of enabling technologies, one should necessarily also stress the concerns identified in our methodological instrument of principles, i.e., computation, communication, coordination, automation, and recollection. We argue that, no matter which application domain of distributed computing systems we are currently dealing with, these principles necessarily have to be addressed in the above order. For example, before addressing concerns regarding coordination, one must necessarily have dealt with the concerns of computation and communication in an appropriate manner. It is from this perspective that we introduce the subsequent chapter of this thesis – Network enabled capabilities. As such, the material discussed in that particular chapter is concerned with the coordination of various system entities and services, by means of cognitive agents – human as well as software. However, with such a principal concern at hand, we argue that it cannot be effectively addressed or dealt with until the enabling technologies for computation and communication have been developed. We believe that this kind of iterative development of all methodological instruments is crucial in attaining the sought-after quality of dependable computing systems. That is, one must necessarily start all methodological endeavors in dependable computing by identifying certain fundamental principles of concern, e.g., coordination. These
principles must then subsequently be supported in terms of applicable models and associated methods. However, it is not until we have developed technologies that enable the application of these models and methods that we can actually perform experiments with respect to the particular concern at hand. Consequently, in 2001 the author got involved in a research project that addressed the notion of trustworthy and distributed systems of a service-oriented nature (SOTDS).23 The project emphasized two strands of research and development in computing: methodologies and systems. In other words, the project aimed to develop experimental systems in the field of dependable computing and use these to validate the applicability of a comprehensive methodology. The attentive reader should note that this is exactly the kind of approach advocated throughout this thesis – putting theory into practice. In essence, we considered the experimental systems developed in the project as subjected to a kind of methodological benchmarking, i.e., addressing the dichotomy of computer science and software engineering. We will delve further into an applicatory example [22; 23] of such systems in the next chapter of the thesis. In our current context, however, it should be noted that in order to provide for methodological benchmarking activities, as envisioned by the SOTDS project, the development of enabling technologies is a crucial activity to perform. The general intent and rationale of such technologies is, as previously indicated, to provide us with the supportive and principal functions of computation as well as communication. Moreover, the technologies are required to provide such functionality that certain models and methods can be assessed and, consequently, in our particular case they also need to provide for different forms of cognitive exploration, refinement, and operation. 
It is in essence these requirements that must be fulfilled by our methodological instrument of technologies. In summary, we therefore consider these functional requirements as involved at four levels of importance: principles, models, methods, and users. Consequently, in the following sections of this chapter, we will delve further into a historical perspective of insights gained from developing such functional support at the various levels of importance involved. In particular, we will describe one set of architectural support for distributed computing and one set of support for cognitive inspection and interaction.

7.3 SOLACE
In the context of the Service-oriented trustworthy distributed systems (SOTDS) project, the author acted as the principal investigator in development of certain computing platforms. As such, these platforms were primarily required to provide support for fundamental concerns of distributed software systems. Of course, there is a whole plethora of such
concerns, as indicated in previous chapters of this thesis. However, in development of such platforms, one must necessarily first consider those concerns that precede the principal question at hand. For example, if we are about to perform scientific investigations and experiments with respect to the mechanical principles of coordination, one must first attain confidence that technological issues, with respect to the preceding principles of computation and communication, have been resolved. Consequently, since the principal concern of the SOTDS project was that of coordination, an applicable platform for computation and communication had to be established. To this end, an extensive development effort was initiated by the engineering division at Societies of computation laboratories (SOCLAB)24 in 2001. In line with technological trends at the time, our development initiative emphasized the use of certain industry-standard technologies for distributed computing, e.g., service gateways.25 However, since these particular technologies were conceived with a somewhat different intent and rationale than our own, they had to be complemented with additional functionality. Consequently, our first initiative toward applicable technologies of dependable computing systems was that of a Service-oriented architecture for communicating entities (SOLACE) [21; 24]. As previously stated, the first step toward addressing the pivotal notion of coordination in distributed computing systems is to establish applicable support for the mechanical principles of computation and communication. In the particular case of SOLACE, we therefore identified a suitable candidate platform for such support. However, it would soon turn out that it merely provided sufficient support for computation, not communication. As such, the platform we utilized supported the deployment and management of services, e.g., software components. 
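To give a rough feel for what such a service gateway offers, consider the following minimal sketch in Python. It is an assumption-laden illustration of our own making – real service gateways are far richer – and captures only the localized register/lookup/retire life cycle of services on one particular computer:

```python
class ServiceGateway:
    """Minimal sketch of localized service management on a single
    computer: services are registered, looked up, and retired at run time."""
    def __init__(self):
        self._services = {}

    def register(self, name, service):
        # Deploy a service under a local name.
        self._services[name] = service

    def lookup(self, name):
        # Resolve a local name to a service, or None if absent.
        return self._services.get(name)

    def unregister(self, name):
        # Retire a service; management is purely local to this gateway.
        return self._services.pop(name, None)

gateway = ServiceGateway()
gateway.register("logger", lambda msg: f"log: {msg}")
log = gateway.lookup("logger")
print(log("started"))            # log: started
gateway.unregister("logger")
print(gateway.lookup("logger"))  # None
```

Note that everything here happens on one machine: exactly the "isolated mediator" assumption that, as argued below, left communication between computers unsupported.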
However, even though the general requirement of support for computation thereby was provided for, and could be administered in a remote fashion, the support for communication was insufficient. The main reason for this state of affairs was that the candidate platform apparently had been developed with requirements of isolated mediators in mind. Consequently, the platform at hand needed to be complemented with an enabling technology that supported mechanisms such as mediator conjunction and port interactions as well (see Figure 5.4 and Figure 5.3). In more precise terms, the basic platform we utilized lacked support for automatic network formation and, consequently, it also lacked support for seamless entity communication. This was the primary reason for us to introduce the platform of SOLACE, i.e., to provide system entities with the technological support of seamless computation and communication. Our second concern regarding the requirements imposed on our enabling technologies was, as previously mentioned, support for the methodological instrument of models. Obviously, this was not a requirement addressed in any way by the original candidate we had come to utilize. In effect, the functional support for our model of open
FIGURE 7.1 NODES
[Figure: the layers environment, computer, fabric, system, and domain, together with the relations stimuli, signal, conjunction, interaction, and dependence.] The layered architecture of SOLACE is deduced from the model of open computational systems. In addition, the architecture includes an intermediary layer of computer–to–computer signaling, which corresponds to the traditional ISO-OSI model.
computational systems had to be supported completely by the SOLACE platform alone. However, with functional support for seamless computation and communication in place, certain constructs provided by the original candidate could now be utilized in a quite effective manner. The notion of service interfaces actually corresponds to that of our ports in some dynamically isolated system parcel. Moreover, the frustum construct of system entities stood in direct correlation with the concept of a distinct service. Still, all other frustum constructs, and subsequent parcel instantiations, had to be provided for by the SOLACE platform. This leads us to our enabling technology’s support for those activities emphasized by the method of online engineering, i.e., articulation, construction, observation, and instrumentation. As we discussed in the previous chapter of this thesis, there are typically two iterations of this method that need to be performed. At first, we are dealing with an environment that is devoid of any constructs of open computational systems. Then, with the intent and rationale of some human agent(s), the activities of articulation and construction are undertaken in order to fill this void with constructs of a computational nature, i.e., a system is about to be created. In doing so, the articulated constructs at all levels of abstraction in our model, i.e., frustums, are to be transformed into concrete instantiations, in the form of parcel constructs. Consequently, the SOLACE platform needed to provide engineers with the capability of populating the various parcels with the previously articulated frustum constructs. In effect, at the environment level these populations naturally correspond to physical and networked computers. As an intrinsic part of each computer, the notion of mediators takes on the form of the SOLACE platform as such. 
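A highly simplified sketch of this population of parcels might look as follows. The class names below are hypothetical and merely mirror the model's levels of abstraction – entities correspond to deployed services, one mediator corresponds to the SOLACE instance on each computer, and the domain level is derived across all mediators:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """System level: corresponds to one deployed service."""
    name: str

@dataclass
class Mediator:
    """Fabric level: one SOLACE-like instance per physical computer."""
    host: str
    entities: list = field(default_factory=list)

@dataclass
class Environment:
    """Environment level: the physical, networked computers."""
    mediators: list = field(default_factory=list)

    def domain(self):
        # Domain level: a shared information space spanning all mediators.
        return {e.name for m in self.mediators for e in m.entities}

env = Environment(mediators=[
    Mediator("node-a", [Entity("sensor")]),
    Mediator("node-b", [Entity("planner"), Entity("sensor")]),
])
print(sorted(env.domain()))  # ['planner', 'sensor']
```

The point of the sketch is only the containment structure: entities live inside mediators, mediators inside the environment, and the domain is not stored anywhere in particular but derived from the whole.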
Moreover, as a part of each mediator, we have a number of system entities that, in effect, correlate with the candidate platform’s concept of services. Finally, all those frustum constructs present at the domain level were provided for in terms of a
shared and distributed information space that was managed by our distributed mediators (see Figure 7.1). With all these practical prerequisites in place, the SOLACE platform now manifested itself as a more applicable candidate than the one originally identified. However, as we will discuss in the last section of this chapter – Concluding remarks – when one has successfully implemented a candidate technology, new challenges of a theoretical as well as practical nature arise. Nevertheless, without any further ado, let us continue with our introduction of yet another enabling technology involved – supporting cognitive exploration, refinement, and operation.

7.4 DISCERN
Ever since the start of the Service-oriented trustworthy distributed systems (SOTDS) project, we were aware of the fact that one of the major issues involved in dealing with complex phenomena is that of perception. That is, as human beings, we are quite used to grounding our processes of reasoning and analysis in sense experiences. Moreover, in doing so, we are able to deduce qualitative aspects of fairly complex phenomena in an instantaneous manner. For example, consider the case where a person utilizes his or her sight and hearing in order to deduce the fact that some town square is occupied by a crowd of people engaged in a meeting. Without the need for quantitative and continuous monitoring of the situation, the observer can instantly understand that there are many people – a crowd – and that everyone, except a speaker, is just sitting there listening – a meeting. The cognitive agent in question could deduce these qualitative properties, or grounded semantics, instantaneously. Consequently, in 2002 yet another development effort was initiated by the engineering division at Societies of computation laboratories (SOCLAB). The intent and rationale of this particular effort was to provide for an enabling technology that effectively addressed requirements such as demonstration and comprehension of systemic qualities. That is, the development effort emphasized the notions of cognitive inspection and exploration of open computational systems. At this point, one should perhaps note that the cognitive agents involved in utilizing such a technology were only considered as those of a human nature, whereas in the case of SOLACE both human as well as computational entities were involved. Nevertheless, our second initiative toward applicable technologies of dependable computing was that of a Distributed interaction system of complex entity relation networks (DISCERN) [21; 24]. 
Whereas SOLACE addressed principal concerns such as computation and communication, one could think that DISCERN would address some principles of standard software engineering, e.g., performance or reliability. However, we argue that the time to
FIGURE 7.2 PERSPECTIVES
[Figure: the layers phenomenon, computer, SOLACE, DISCERN, and aspect, together with events, communication, mediation, visualization, and sessions.] Natural computing phenomena are communicated through computers and the events thereof are mediated by means of SOLACE. The exploratory technology of DISCERN can then connect to various SOLACE mediators in a network and visualize different localized aspects of the phenomena at hand by means of sessions.
do so is yet to come. In our efforts of developing DISCERN, we came to understand that before one starts to address such requirements as performance or reliability, it is even more important to fully comprehend the context in which those requirements are called for. That is, technological requirements do not make much sense if called for in a context where their applicability is not yet understood. For example, what does high-performance phenomena isolation mean? Consequently, the principles addressed in development of DISCERN were instead geared toward a first experience in visualization of open computational systems. As such, the major requirement that DISCERN needed to fulfill was to provide such functionality that different aspects of the same phenomenon could be perceived and interacted with in real time. Moreover, it was required to do so in accordance with the abstraction levels defined by our model of open computational systems, i.e., visualization of environment, fabric, system, and domain parcels. In fact, it is due to this particular requirement that the notions of frustum and parcel constructs first appeared in our model of open computational systems. That is, as human beings we needed some way of isolating system phenomena so that they could be rendered and interacted with on screen in a dynamic manner. Moreover, in order to do so, one necessarily needs to respect the idea that natural phenomena perform irrespective of some cognitive agent’s intention of observing them. Consequently, in parallel with development and evolution of some particular system, we needed a technological instrument that, without disrupting the phenomena of our concern, could provide for real-time rendering of isolated phenomena in open computational systems. As indicated in the previous chapter of this thesis, observation of phenomena typically involves more than one agent. With this in mind, we must also consider the fact that by
means of multiple agents involved, there suddenly exists a whole plethora of possible aspects of the same phenomenon. In DISCERN we dealt with these requirements in a quite trivial, yet effective, manner. A user of our instrument could select a predefined session that contained constructs corresponding to those normally found at the domain frustum level of open computational systems. Moreover, since we were dealing with observation of open computational systems, the corresponding parcel constructs of an online phenomenon were readily available for exploration by means of SOLACE. Consequently, DISCERN could be configured to provide for different session configurations – aspects – which then could be selected by a user in order to dynamically deduce and visualize parcel constituents and relations in real time (see Figure 7.2).

7.5 CONCLUDING REMARKS
In this particular chapter of the thesis, we have introduced an architecture for validation of systemic qualities. As such, the architecture involves two interrelated components. On the one hand, we have a technological platform that provides support for the model of open computational systems as well as the method of online engineering – SOLACE. On the other hand, we have a visualization and interaction tool that provides support for cognitive exploration of open computational systems – DISCERN. During the years of developing these enabling technologies of our methodology, they have proven themselves a continuous source of inspiration. However, such inspiration typically comes in two strands, i.e., opportunities and challenges. Let us therefore conclude this chapter with a brief summary of our experiences from developing the two technologies. As we mentioned at an early stage in this chapter, development of such a methodological instrument as technologies requires that it be consistent with the other instruments at hand. Consequently, since the methodology introduced in this thesis was a development effort in itself that started at the same time as our initiative of SOLACE, the first experience gained from developing our enabling technologies was in fact a better understanding of the dichotomy of computer science and software engineering. With the ultimate goal of addressing complex and embedded computing phenomena in the SOTDS project, one soon realizes that the challenge at hand involves talking the same language, applying the same instruments, and, above all, sharing the same experiences. With such an understanding in mind, the importance of the role that enabling technologies for visualization play in our context must not be underestimated. In fact, we argue that this particular type of technology is crucial in our individual expectations of behavior in complex phenomena as well as in our sharing of experiences as a group – public harmony.
At this point, the author must admit that there have in fact been so many opportunities and challenges experienced during the development of our enabling technologies that it is simply out of this thesis’ scope to introduce them all. Instead, certain outstanding examples of a technological nature will be discussed. If we start with SOLACE, perhaps the most important challenge dealt with was an implicitly assumed design pattern of service-oriented computing. As we previously mentioned, our platform was developed on top of a platform candidate that in itself emphasized the notion of open services gateways. To this end, we have come to understand that such an emphasis in fact is a design pattern that focuses on the management of localized services at one particular computer, i.e., evolvability of coordination. If one aims for distributed systems of services, additional functionality must be provided for; most notably the notion of a distributed name space of services. In standard engineering terms, such functionality is referred to as a lookup service. However, at this point one should note that, in a similar manner as that of localized service management, the notion of a lookup service is often conceived as a centralized component. That is, the lookup service is localized at one singular point in some network of nodes, and all services involved are required to register their existence and capabilities with this particular component. Now, if we consider our principal concern at hand, i.e., dependable computing systems, just imagine the negative impact on coordinated system behavior if an unanticipated event of physical dimensions occurs, e.g., resulting in the disappearance of the centralized lookup service. The whole system involved would cease to function. It is in order to anticipate the impact of such events that we introduced the notion of domain parcels, i.e., to provide for dependable coordination. 
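The difference between a centralized lookup service and a decentralized name space can be sketched as follows. This is an illustrative Python fragment under our own simplifying assumption of full replication on every registration – not SOLACE's actual implementation – showing that discovery survives the disappearance of any single node:

```python
class Mediator:
    """Each mediator holds a full replica of the domain-level name space,
    so service discovery survives the loss of any single node."""
    def __init__(self, name, network):
        self.name = name
        self.registry = {}
        self.network = network
        network.append(self)

    def register(self, service, provider):
        # Replicate the registration to every mediator in the network.
        for peer in self.network:
            peer.registry[service] = provider

    def lookup(self, service):
        # Any node can answer a query from its local replica.
        return self.registry.get(service)

network = []
a, b, c = (Mediator(n, network) for n in ("a", "b", "c"))
a.register("routing", provider="a")

network.remove(a)           # node "a" disappears unexpectedly
print(b.lookup("routing"))  # "a" -- discovery still works from any node
```

With a centralized lookup service, removing the single registry node would have made the "routing" entry unreachable; here every surviving mediator can still resolve it.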
In effect, the parcel constructs at the domain level of SOLACE are distributed throughout some network and possible to query from any singular point in the network. As such, we have therefore attained the notion of a decentralized system, not only from the perspective of physical locale, but from the perspective of conceptual discovery as well. However, even though this solution has proven itself quite effective in dealing with the critical event of node shutdowns, there is yet another challenge that we experienced in developing the enabling technology of SOLACE. Even though we had attained the notion of a dependable coordination mechanism, in accordance with the model of open computational systems, there was a more primitive issue of reliable communication manifesting itself, i.e., network formation. In essence, there exists yet another implicit assumption in service-oriented computing: that the type of protocol one should apply in mediator conjunction is that of so-called broadcast protocols [57]. At design time of open computational systems, we are typically not aware of the temporal computer network topology available at run time. Therefore, the fabric level of our systems, i.e., mediators and conjunctions, is supposedly formed as a matter of automation. In the case of SOLACE, this principal mechanism of automation was implemented as a traditional TCP/IP broadcast protocol
FIGURE 7.3 STATE SPACE
By means of DISCERN, visualization of some open computational system’s state space was attainable. In particular, one could deduce structural properties of systems, including their constituents and relations on four levels of abstraction, i.e., environment, fabric, system, and domain.
that was able to identify only those computers available within one particular mediator’s subnet. In effect, our open computational systems could only comprise 256 mediators at most. Of course, this is not in any way an acceptable property of supposedly open and dependable computing systems. In retrospect, we have come to understand that mediators of open computational systems should apply a completely different approach toward reliable communication, i.e., peer–to–peer networks. When it comes to DISCERN, the challenges dealt with are of a somewhat different nature than those found in our enabling technology of SOLACE. In essence, the main challenge involved manifested itself as a problem of visually tractable formats of the physical–temporal–conceptual state space of our open computational systems (see Figure 7.3). As we previously mentioned, DISCERN was developed with the sole intention of facilitating the cognitive experience of human agents. However, it should be noted that our development thereof was in fact the first attempt ever to render the state space of a phenomenon that we had never seen before. Still, we had come to understand that the visualization of an open computational system’s state space should conform to the various levels of abstraction identified by our model. Consequently, with a fair amount of software engineering prowess at hand, we managed to visualize the parcel constituents and relations present at all abstraction levels of an open computational system – simultaneously. However, in retrospect, we have come to understand that this feature of DISCERN was not really called for in every situation of phenomena exploration. In fact, only particular combinations of the frustums can be considered as relevant to
particular agents in certain situations. For example, just because we were able to visualize the environment, fabric, system, and domain frustums simultaneously did not mean that the experience gained by agents, with respect to situational awareness, would increase. On the contrary, too much information at once can in many situations result in confusing agent experiences. Consider the case where two agents are concerned with the organizational perspective of a network-based defense system. Suppose that one of the agents is primarily concerned with the coordination of defense artifacts in a marine environment, whereas the other agent – a smart mine – is intended to wreak as much havoc in the same environment as possible. The former agent’s situational awareness is not increased to any greater extent just because it is able to observe and reason about the systemic constructs present in some dynamically isolated fabric, system, and domain parcels. In fulfilling its intended function of system operator, it would typically suffice if it had the capability to articulate, construct, observe, and instrument constructs present at the environment level of an open computational system, i.e., coordinating physical artifacts. Any more information than this would simply confuse the agent and, in effect, result in a less dependable behavior of the system as a whole. In the case of the latter agent, however, the effective application of these online engineering activities would be greatly improved if all its attention could be focused on the fabric, system, and domain parcels. Consequently, we will delve further into this applicatory example of online engineering and open computational systems in the next chapter of the thesis – Network enabled capabilities.
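As a closing illustration of the subnet limitation discussed in these concluding remarks, the following sketch (a hypothetical helper of our own, using only the Python standard library) shows why discovery based on local broadcast is confined to the sender's subnet, and hence why a /24 network bounds the mediator population to 2^8 addresses:

```python
import ipaddress

def reachable_by_broadcast(sender, peers, prefix=24):
    """A local broadcast only reaches hosts sharing the sender's subnet."""
    subnet = ipaddress.ip_network(f"{sender}/{prefix}", strict=False)
    return [p for p in peers if ipaddress.ip_address(p) in subnet]

peers = ["192.168.1.7", "192.168.1.200", "192.168.2.5"]
print(reachable_by_broadcast("192.168.1.1", peers))
# only the two 192.168.1.x hosts respond; the host on 192.168.2.5,
# although one hop away, is invisible to subnet-local discovery
```

A peer-to-peer formation scheme, by contrast, would let mediators relay knowledge of one another across subnet boundaries instead of relying on a single broadcast domain.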
Part 4
CONCLUSION
8 NETWORK ENABLED CAPABILITIES
The strongest arguments prove nothing so long as the conclusions are not verified by experience. Experimental science is the queen of sciences and the goal of all speculation. R. Bacon
8.1 INTRODUCTION
The applicability of a methodological approach is, by all means, difficult to establish as a matter of formal methods and proofs. Therefore, as previously indicated, we consider it more appropriate to validate its applicability by means of concrete trials and real-world experiments – theory in practice. This way, as a result of continuously performing experimental trials, the soundness of not only the methodology as such, but also its constituent instruments, can be further established as a matter of practical experience. In principle, we argue that an increase in our confidence regarding the methodological approaches we apply will help us in attaining confidence in the material they produce – dependable computing systems. Throughout this thesis we have advocated a methodological framework that emphasizes certain principal concerns of computing, the model of open computational systems, the method of online engineering, and a set of enabling technologies. However, none of these methodological instruments really matter if we do not have an applicatory domain at hand, in which all of the methodological instruments can be trialed simultaneously. That is, all instruments need to be trialed simultaneously so that we can consider a positive outcome from an experiment as an indication of our methodology being of a comprehensive and applicable nature. As the means to an end, we therefore need a concrete subject of study that encompasses all aspects of our framework. In pursuit of such an applicatory subject of study, we have identified the domain of network enabled
capabilities as a suitable point of departure in performing our methodological benchmarking. In this context, there is currently an international development process taking place that greatly affects the intent and rationale of most any nation’s defense organization – a revolution in military affairs [1]. One could characterize this development as an increased emphasis on our national defense agencies’ organizational capability in terms of information technology. The general assumption at play is that most any defense organization should be able to increase its current capability by means of better exploitation of information and network technology. Of course, the rational underpinnings of this revolution in military affairs are in many ways what has characterized most any commercial organization for quite some years now. That is, if one increases quality in service provisioning along certain dimensions of organizational relevance, i.e., information reach and richness, the perceived value and benefit of the organization will increase [16]. However, due to the nature of defense organizations, we are not primarily dealing with the benefit of customer services, but rather with capabilities such as situational awareness and information fusion. To this end, the notion of a revolution in military affairs is now pursued by a great many nations, each with its own concept and particular emphasis in mind. For example, in the United States of America the initiative is called Network-centric warfare (NCW) [1] and in Sweden it goes under the name of Network-based defense (NBF) [86]. The concept adopted in this thesis, however, is that advocated by the United Kingdom – Network enabled capabilities (NEC) [61]. The main reason for doing so is simply that we consider it to most effectively encompass the rationale and intent of NCW as well as NBF, but also the principal concerns of other critical support functions of society, e.g., energy and healthcare. 
In principle, the concept of network enabled capabilities emphasizes the idea that a better exploitation of information and network technology will increase some organization’s capability of effective performance. As such, it is assumed that a wide range of actors can perform in a more effective manner if, on the one hand, they are supported by enabling technologies and, on the other hand, they are empowered with cognitive capabilities. Naturally, in this context one is tempted to focus solely on various qualitative dimensions of network technology as such. After all, it is the most prominent feature and principal facilitator of our subject of study. However, we consider such an exclusive focus to be something of an oxymoron. As we have argued throughout this thesis, we consider enabling technologies to be a subordinate instrument of some methodological approach at hand. Obviously, it is crucial that this particular instrument is part of the methodologies we advocate, but in isolation it will not provide for the kind of confidence we seek in dependable computing systems. Only a positive experience from applying some comprehensive methodology can achieve this.
NETWORK ENABLED CAPABILITIES
Consequently, with this notion of network enabled capabilities in mind, we argue that it is the application domain’s general concerns that should primarily be addressed, by means of the methodological approach at hand, in which enabling technologies are only one out of many instruments, in deliverance of some qualitative and dependable system behavior. Now, with this principal argument in mind, one could question what the major concerns of network enabled capabilities really are. As a matter of fact, no one seems to know for sure – only that an implementation of the concept as such indicates that our national defense organizations will attain an increase in capability as well as efficiency [86]. The natural explanation for this peculiar state of affairs is probably twofold. On the one hand, our general experience of computing machinery’s awesome performance in information manipulation and distribution, i.e., computation and communication, is by now considered a quite established fact. On the other hand, no one has yet seen the phenomenon of network enabled capabilities in all its supposedly splendid and efficient might. There are, however, indications within the research and development communities of agent interaction and associated coordination strategies of what kind of capabilities the individual as well as the group require. This is in fact why we consider the application domain of network enabled capabilities a suitable candidate for our methodological benchmarking activity at hand. The general concerns involved call for a better understanding of a particular type of dependable computing system’s nature and complexity, as well as for a better understanding of appropriate ways to harness the complex behavior of these systems – all in the service of a critical support function of society. These particular concerns perfectly match those initially identified in this thesis (see Section 1.2 – Challenges in dependable computing).
Consequently, the main intent of the material presented in this chapter of the thesis is to discuss our experimental development of an applicatory system and demonstrator of network enabled capabilities. As such, the system and our construction thereof is used as a reference case for assessment of the methodology advocated throughout the thesis – applied in its full form. This particular chapter of the thesis is structured as follows. In the next section – Experimenting with dependability – we present the rationale of developing the experimental system discussed herein. In particular, we are concerned with the notion of dependable coordination and the conduct of cognitive agents in the provisioning thereof. In the following section – TWOSOME – we outline the envisioned system as well as its principal constituents and function. Then, in the next section – Benchmark – an example of applying the model of open computational systems, as well as the method of online engineering, in constructing the particular system in question is introduced. Finally, in this chapter’s last section – Concluding remarks – we introduce certain insights gained, with respect to our experiences from developing this particular system, as related
to the notion of dependable coordination. However, without any further ado, let us start our discussion with a historical background of the experiment at hand.

8.2 EXPERIMENTING WITH DEPENDABILITY
Due to our current progress in developing the means for computation as well as communication, information manipulation and distribution can be conducted in increasingly sophisticated ways. Moreover, as the performance envelope of the respective technologies increases, so does the pace at which we envision new and innovative ways of combining them. Perhaps our general intent in doing so is that computing systems, with their awesome performance in dealing with information, can provide a much-appreciated service in various application domains of great benefit to humanity, e.g., critical support functions of society. However, in the wake of this progress follows a peculiar effect. As the involved systems of computing operate at an ever-increasing speed as well as scale, we simultaneously call for new approaches to command and control thereof [73]. That is, computing systems far surpass our own performance envelope in conducting mechanical operations, at the same time as we far surpass them in processes of cognition. In other words, computing systems are fantastic when it comes to automation of processes where information is the key asset dealt with, whereas human beings are equally fantastic when it comes to taking appropriate actions in processes where qualitative dimensions are of the essence [45]. If we consider these outstanding features of computing systems and human beings in combination, one could argue that the general concern at hand, i.e., in understanding and harnessing dependable computing systems, is in what way these capabilities can be integrated with each other. Perhaps even more important is the concern of how these capabilities can be integrated with each other in such a way that we do not lose our confidence in their interplay and subsequent behavior – dependable coordination.
In this context, even though there is an ongoing debate regarding what an intelligent agent really is, the importance of coordinating agent behavior is unquestionable. If coordination is not properly conceived, the intended harmonious behavior of cognitive agents – human as well as software – will end up in wasted effort, resource loss, or mission failure. As argued by Durfee, we face the challenge of developing a better understanding of how far different approaches scale along various dimensions of complex coordination problems [13]. In the context of dependable computing systems and network enabled capabilities, we argue that coordination necessarily must be considered as a joint effort performed by human as well as software agents. Each of these types of cognitive agents plays a pivotal role in the harmonious and dependable behavior of open computational systems. Therefore, in
order for us to understand in what way their individual capabilities and competences should be applied in the most appropriate manner, we necessarily need to develop experimental systems in which, due to the occurrence of complex situations, all involved agents are physically challenged to perform at the boundaries of their constitutive performance envelopes. Consequently, given our overall concern of dependable coordination, we should aim for experiments where in situ coordination of cognitive agents is the principal mechanism of system governance. Moreover, each agent involved in such a system requires the capability to perceive the temporal state of its physically grounded environment as well as the capability to effectuate those particular actions it considers most appropriate, for the time being. Even though these are the general prerequisites of agent capabilities, we should perhaps emphasize yet another required capability: to share experiences with each other by means of a common terminology. Since the cognitive agents involved come in two fundamentally different strands, i.e., human and software, and we still expect them to collaborate in fulfillment of some particular mission, it is important that they all make use of a common denominator in communicating their experiences. This is yet another reason why we advocate the notion of online availability of parcel constructs at the domain level of open computational systems (see Section 5.3 – Environment); to act as a facilitator for grounded semantics. Nevertheless, in our pursuit of an applicatory system with which to experiment with network enabled capabilities, as well as an attempt to establish the soundness of our methodological framework, the attentive reader should note that we are in fact addressing the dichotomy of computer science and software engineering in this chapter of the thesis.
That is, the foremost instrument of our methodological framework is a certain set of principles, of equal importance to the science as well as engineering communities involved (see Section 3.3 – Principles). We consider the fundamental concerns of dependable computing systems to be those of mechanics and design, and we have by now identified an applicatory domain in which these concerns can be appropriately addressed. Moreover, within the context of such a mechanical principle as coordination, it is now time to address the principal concerns of design, i.e., simplicity, performance, reliability, evolvability, and dependability. In line with Durfee’s statement regarding challenges in agent coordination [13], we argue that it is along these dimensions that coordination of network enabled capabilities should be addressed and understood; in combination with the principal dimensions advocated by the application domain as such, i.e., situational awareness and information fusion.
8.3 TWOSOME
In 2002, the author got involved in yet another research and development project that, in effect, addressed the notion of Trustworthy and sustainable operations in marine environments (TWOSOME). As such, the project aimed to develop an experimental system to be used in validation of the methodological approach now advocated throughout this thesis. However, in doing so, the project also emphasized those dimensions of network enabled capabilities as stated in the previous section of this chapter. That is, we aimed for the development of an open computational system in which we could explore dependable coordination along the dimensions of situational awareness and information fusion. Of course, since the practical development of such an experimental environment was required, the intended gain from conducting the project was not only to validate the theoretical aspects of our methodological framework, but also to gain experience from actually engineering the type of systems we advocate. In summary, the TWOSOME initiative was conducted as a part of the SOTDS project (see Section 7.2). Moreover, the research and development activities that took place as a consequence of this initiative were a common effort performed by the Societies of computation (SOC) research group, the engineering division at Societies of computation laboratories (SOCLAB), and research personnel at Kockums AB (KAB). It should perhaps also be noted that, at the time, the concepts of network enabled capabilities and network-based defense were even less understood than they are today. Consequently, with its emphasis on visual demonstration and susceptibility, the TWOSOME project was to become an immensely effective facilitator of concrete discussion regarding opportunities and challenges in the field of network enabled capabilities. Nevertheless, the open computational system of TWOSOME was intended to frame a physical environment in which certain critical events could occur.
Consequently, different missions were to be carried out by a coordinated group of cognitive agents, in order to counteract the events that occurred. In such coordinated missions, each agent involved was required to exploit its capability in exploration, refinement, and operation of the in situ systemic qualities at hand, i.e., situational awareness and information fusion. Moreover, if the coordinated group’s constituent capability to carry out some particular mission was negatively affected by yet another critical event occurring, it was required to remedy the situation in such a way that the general service provided for was sustained, i.e., addressing the principal concern of dependability along the dimensions of evolvability, reliability, performance, and simplicity. Consequently, at the core of the open computational system of TWOSOME was the development of a multi-agent system, where interacting and coordinated entities and
FIGURE 8.1 ENVIRONMENT
In the open computational system of Trustworthy and sustainable operations in marine environments (TWOSOME), the cognitive agents of two organizations – attack and defense – aim for dependable coordination in creating or removing physical threats at some particular environment location.
services temporarily came together in a physical environment, in order to fulfill certain missions under dynamic and hostile conditions. From an organizational perspective, the systemic phenomenon involved could be characterized as a matter of creating and removing physical threats at some particular environment location – attack and defense. As such, TWOSOME involved a scenario that took place within a particular environment parcel of imaginary locale and topology, with the physical dimensions set to 1 * 1 * 1 nautical miles (see Figure 8.1). Within this volume, a number of natural constructs formed a marine environment where our systemic phenomenon could take place. Moreover, within this environment parcel, there were also certain artifacts involved that, as such, constituted the two organizations’ cognitive agents at hand.
• Mainland – The environment parcel involves mainland at two different locations. The two landmasses form a natural harbor within the parcel, in which naval vessels can start and end arbitrary operations. In particular, an operations centre is localized at one of the landmasses. It gathers intelligence and acts upon its currently available information history.
• Island – Localized between the two landmasses, the environment parcel includes an island that brings about two separate navigation channels. The depths of the channels are defined by the seabed topology, in relation to the sea surface. The navigation channels involved supposedly offer the safe passages for different marine vessels.
• Operations centre – The environment parcel includes an operations centre that can gather intelligence, by means of sensors, and, as a result of recently acquired information about a critical event, decide that removing an identified threat would be of strategic importance. Consequently, an operations centre can create missions, delegate these to other agents, and then continue with its surveillance of the environment parcel.
• Transporter – Mission performance is positively affected if a multi-purpose vessel, e.g., capable of transportation as well as surveillance, is involved. Therefore, the aforementioned operations centre can delegate a mine sweep mission to a multi-purpose vessel that, in turn, transports a group of defenders from one location to another and, upon arrival, delegates the mission to the deployed defenders.
• Attacker – An attacker, or mine, is considered to come with a predefined and fixed mission: to detonate when certain acoustic and magnetic signatures are identified in its surrounding environment. Consequently, a mine is equipped with sensors that recognize the occurrence of particular vessel types. By means of combining currently available sensor information, a mine can make informed decisions about whether or not to detonate, given its current environment state history and identified vessel signatures.
• Defender – The defender is a small autonomous vessel that emanates different acoustic and magnetic signatures in order to exhibit characteristics similar to those of general naval vessel properties, e.g., propeller cavitations and engine acoustics. In doing so, the defense vessel has the ability to trigger the detonation mechanisms of artifacts such as mines. This signaling feature of the vessel can also be combined with those of neighboring vessels and, consequently, create complex signatures.
• Sensor – The sole purpose of a sensor is to acquire information concerning its surrounding environment and to pass it forward, in an appropriate format, to those artifacts that depend on such information in order to fulfill their particular missions.
The first critical event that can occur in our open computational system is that an attacker – a smart mine – is positioned within the environment parcel and set to detonate whenever the presence of a particular vessel class is identified in the surroundings. Correspondingly, three coordinated defenders – autonomous vessels emanating acoustic and magnetic signatures – are assigned the role of sweeping different environment locations and aim at removing potential attackers, i.e., making mines detonate in a harmless way. The dependable and network enabled capability of this coordinated group of defenders is to outsmart the mine by means of providing fake signatures of real vessels. In TWOSOME, this group of defenders therefore explicitly accounts for the concern of dependable coordination, in that the defenders continuously strive to sustain the capability of emitting a particular signature at hand. For example, due to the occurrence of such critical events as mediator shutdowns, this involves the continuous monitoring and sustenance of the group’s temporarily available capabilities. In TWOSOME, an attacker is deployed somewhere in the physical environment and thereby triggers three different sensors. Their acquired information is then passed on to the operations centre that, subsequently, delegates a mine sweep mission to the transporter. Thereafter, the transporter deploys the defenders, at a location estimated by the operations centre, and travels back to its origin (see Figure 8.2). It is at this point that we can start to understand the opportunities and challenges of dependable coordination of network enabled capabilities. Each defender possesses the capability of computation, communication, and coordination. Moreover, each defender possesses the capability to emit simple or complex signatures of standard vessel signals, e.g., propeller cavitations and engine acoustics.
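To make the attacker's decision process concrete, the sketch below fuses incoming sensor readings against known vessel signatures before deciding whether to detonate. The signature components, vessel classes, and class names are illustrative assumptions for this sketch; TWOSOME's actual implementation is not reproduced here.

```python
from dataclasses import dataclass, field

# Hypothetical signature model: a vessel signature is a set of
# acoustic/magnetic components, e.g. {"propeller", "engine"}.
KNOWN_VESSEL_SIGNATURES = {
    "minesweeper": {"propeller", "engine"},
    "frigate": {"propeller", "engine", "magnetic"},
}

@dataclass
class SmartMine:
    """Fuses incoming sensor readings and decides whether to detonate."""
    target_classes: set                        # vessel classes the mine is set to attack
    observed: set = field(default_factory=set) # signature components seen so far

    def observe(self, component: str) -> None:
        # Each sensor reading contributes one signature component.
        self.observed.add(component)

    def decide(self) -> bool:
        # Detonate if the fused observations match a targeted vessel class.
        for vessel, signature in KNOWN_VESSEL_SIGNATURES.items():
            if vessel in self.target_classes and signature <= self.observed:
                return True
        return False

mine = SmartMine(target_classes={"frigate"})
for reading in ("propeller", "engine", "magnetic"):
    mine.observe(reading)
assert mine.decide()  # the fused components now match the frigate signature
```

The point of the sketch is merely that detonation is an informed decision over fused observations, not a reaction to any single reading – which is exactly what the defenders exploit below.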
A group of defenders can thereby produce a combined signature that is difficult to distinguish from that produced by such real vessels that are threatened by an attacker. The physical configuration of the involved defenders is, however, very sensitive to perturbations in the surrounding environment. Since the quality of the coordinated team of defenders corresponds to their effectiveness in deceiving some potential attacker, i.e., making it detonate and thereby removing the threat, it is essential that each defender produce an appropriate complex signature at the appropriate location. In order to do so, each defender must be able to observe and instrument its current context, with respect to its temporal mission and available resources. For example, if a particular defender's capability of producing some specific signature suddenly is removed, the whole team needs to reconfigure and reuse its currently available services. This state of affairs requires the group to reconfigure in an online manner. Obviously, the question is how the group is supposed to reconfigure in such a way that our dependence on their coordinated behavior is not reduced.
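One way to picture this online reconfiguration is as a reassignment problem: the group must keep every component of the mission's composite signature covered by some defender still capable of emitting it. The following is a deliberately simplified, hypothetical rendering of that idea – the vessel names, capability sets, and greedy strategy are assumptions for illustration, not TWOSOME's actual coordination mechanism.

```python
# Sketch of online reconfiguration: the group must keep emitting the
# mission's composite signature; when a defender loses a capability,
# the remaining capabilities are reassigned. Illustrative only.

def assign(mission_signature, defenders):
    """Greedy assignment of required signature components to defenders.

    defenders: dict mapping vessel name -> set of components it can emit.
    Returns a dict component -> vessel name, or None if the composite
    signature can no longer be sustained by the remaining group.
    """
    assignment = {}
    load = {name: 0 for name in defenders}
    for component in mission_signature:
        capable = [n for n, caps in defenders.items() if component in caps]
        if not capable:
            return None  # capability lost beyond repair: mission degraded
        # Prefer the least-loaded capable defender.
        chosen = min(capable, key=lambda n: load[n])
        assignment[component] = chosen
        load[chosen] += 1
    return assignment

defenders = {
    "d1": {"propeller", "engine"},
    "d2": {"engine", "magnetic"},
    "d3": {"propeller", "magnetic"},
}
mission = {"propeller", "engine", "magnetic"}
print(assign(mission, defenders))  # every component is covered

# Critical event: d2 shuts down; the group reconfigures online.
del defenders["d2"]
print(assign(mission, defenders))  # still covered by d1 and d3
```

A real deployment would of course also weigh vessel location and signal quality; the point here is only that losing one vessel need not mean losing the composite signature, as long as the remaining capabilities still cover it.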
FIGURE 8.2 DEFENDERS
In the open computational system of Trustworthy and sustainable operations in marine environments (TWOSOME), a transporter (left) has just deployed three defenders (center) in order to sweep a channel where a potential threat – a smart mine – has been detected (right).
8.4 BENCHMARK
As we have argued throughout this thesis, we consider a better understanding of the dichotomy of computer science and software engineering to be of pivotal concern in harnessing the complexity of dependable computing systems. Consequently, we consider this dichotomy of the fields to manifest itself as a need for engineering to apply the models of science in order to design complex, yet dependable, computing systems. Moreover, we have argued that the essential nature of the involved systems calls for an in situ mindset in establishing their qualitative and sustainable behavior – online engineering. To this end, we advocate a methodological framework and approach toward exploration, refinement, and operation of dependable computing systems. The general intent and rationale of this particular framework is to provide us with certain theoretical as well as practical instruments, geared in such a way as to guide us in conducting empirical
NETWORK ENABLED CAPABILITIES
119
investigations on those systemic qualities that we have come to depend on. For example, in the above application domain of network enabled capabilities, the methodological instrument of principles tells us that our scientific concern at hand is that of coordination mechanics and that our subsequent concern of engineering is the design thereof. Moreover, with the subordinate principles of design at hand – dependability, evolvability, reliability, performance, and simplicity – the methodological framework has helped us to identify certain general dimensions of relevance in performing the empirical investigations to come. At this point, we should perhaps stress that these general dimensions necessarily have to be complemented with those of principal concern in the particular application domain one addresses for the time being. For example, in the domain of network enabled capabilities the general dimensions of our concern should be understood as subordinate to those of situational awareness and information fusion. With some particular set of relevant dimensions at hand, our methodological framework now stresses the application of yet another instrument – models. However, as stated in Chapter 5 – Open computational systems – one should never interpret the intent of models as aiming for a complete depiction of phenomena. Instead, they should be considered as approximations along certain context-dependent dimensions – aspects. To this end, considering our applicatory experiment of TWOSOME, the methodological framework at hand suggests that certain fundamental aspects of the systemic phenomenon we intend to investigate should be identified before the instrument of models is applied. In doing so, we emphasize the aspect of a defense organization as well as that of an attacker. In summary, the methodological framework has now guided us in explicating a principal concern – coordination – and two aspects thereof – defense and attack.
As stated in Chapter 6 – Online engineering – the involved aspects of our systemic phenomenon should now be mapped onto model constructs of open computational systems – articulation. As such, this activity involves the explication of frustum constructs at the environment, fabric, system, and domain levels. In fact, at this point the author would like to stress the articulation of frustum constructs in accordance with the above order of abstraction levels. As indicated in Chapter 5 – Open computational systems, if we start with articulating the frustum constructs in such an order that the most concrete one comes first, discussions regarding experiment ramifications are positively facilitated. For example, even though the involved partners in the TWOSOME project came from different backgrounds and, consequently, at times lacked a common terminology when it came to articulation of system constituents or mediator requirements, there already existed such sought-for common ground, in terms of the envisioned experiment’s environment. For a human being, it is always much easier to rely on a terminology that is grounded in physical phenomena and tangible artifacts – public harmony.
Consequently, as a first iteration over the activities of our method of online engineering, we explicated all frustum constructs required, in accordance with the envisioned environment parcel as outlined in the previous section of this chapter: operations centre, transporter, attacker, defender, and sensor. Moreover, being the constituents of an environment parcel, each artifact and its required capabilities of computation and communication were appropriately modeled as, respectively, system and fabric parcel constructs. Finally, taking the grounded semantics of environment, fabric, and system constructs into account, the behavioral semantics of TWOSOME was articulated at the domain level of our model. For example, a defender was articulated at the domain level as belonging to the domain of defender as well as involving the three concepts of engine, propeller, and magnetism. Each concept was then articulated as manifested in terms of a particular signature. In a similar manner, the concept of location was also articulated as belonging to the defender domain, with corresponding manifestations in the form of coordinates. Now, crossing the abstraction levels of open computational systems, these domain frustum constructs of the defender were articulated as having corresponding port and entity constructs at the system level. As such, those system frustum constructs are considered as the physical embodiment of the behavioral semantics modeled at the domain level. For example, the manifestation of a signature is embodied by a system port that actually is considered to emanate the particular signature at hand. Finally, once again crossing the abstraction levels of open computational systems, the system frustum constructs and their need for mediator support were articulated at the fabric level.
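The articulation of the defender described above – domain constructs (concepts and their manifestations) paired with their system-level embodiment (entities and ports) – could be rendered schematically as follows. The class names and values below are illustrative placeholders; the actual construct representation in SOLACE is not reproduced here.

```python
from dataclasses import dataclass

# Schematic rendering of the articulation step: domain-level constructs
# (concept, manifestation) paired with their system-level embodiment
# (port on an entity). This mirrors the model's abstraction levels;
# it is not the actual SOLACE construct API.

@dataclass
class Manifestation:
    concept: str   # domain-level concept, e.g. "propeller"
    value: object  # e.g. a signature descriptor or a coordinate

@dataclass
class Port:
    entity: str               # system-level entity embodying the behavior
    manifests: Manifestation  # the domain construct this port realizes

# Articulating one defender across the domain and system levels:
defender_domain = "defender"
concepts = ["engine", "propeller", "magnetism", "location"]

ports = [
    Port("defender-1", Manifestation("engine", "engine-signature")),
    Port("defender-1", Manifestation("propeller", "cavitation-signature")),
    Port("defender-1", Manifestation("magnetism", "magnetic-signature")),
    Port("defender-1", Manifestation("location", (57.1, 15.6, 0.0))),
]

for port in ports:
    print(defender_domain, port.manifests.concept, "->", port.entity)
```

Each port is the system-level embodiment of one domain-level manifestation, so observing the domain parcel and observing the running system traverse the same constructs – which is what makes the later observation and instrumentation activities possible.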
For instance, if each defender was envisioned as a distinct artifact at the environment level of our systemic phenomenon, it also required an individual mediator to be articulated in the fabric frustum. With those enabling technologies discussed in the previous chapter at hand, the articulation phase of our first iteration now went into the activity of construction. This being the first iteration over the activities of online engineering, the construction phase merely aims to implement all constructs articulated as constituting the fabric, system, and domain parcels. In effect, the construction of an open computational system, such as TWOSOME, addresses the concrete implementation of parcel constructs at all levels of abstraction. For example, the previously articulated domain, concept, and manifestation constructs of a defender, as well as its port and entity constructs, were implemented in the form of a software agent, performing its capabilities by means of mediator support from SOLACE (see Chapter 7 – Enabling technologies). In our first iteration over the principal activities of online engineering, the method now goes into an in situ state of affairs, as soon as the construction phase is finished. That is, when all frustum constructs have been implemented, in the form of parcel constructs, and the systemic phenomenon is deployed, we start performing the online activities of
observation and subsequent instrumentation. In the particular case of TWOSOME, the envisioned scenario of dependable coordination is now readily available for exploration, refinement, and operation. As we have previously described, the scenario of TWOSOME involved two separate aspects of the same systemic phenomenon. On the one hand, the critical event of a smart mine being deployed in some physical environment occurs, calling for the performance of a mine sweeping mission. On the other hand, the performance of such a mission is necessarily conducted in a coordinated and dependable manner. In both cases it is imperative that the dimensions of situational awareness and information fusion are considered in such a manner that the system aggregates, i.e., as a result of the different aspects involved, can be investigated along the dimensions of dependability, evolvability, reliability, performance, and simplicity. In the particular case of TWOSOME, we consider all these dimensions in terms of the network enabled capabilities exhibited by cognitive agents. From the perspective of cognitive agents, the constitutive performance envelope of some mission being carried out can be considered as involving four activities of cognition: observation, orientation, decision, and action. That is, all agents continuously observe their environment and, when a critical event has been identified, they need to orient themselves in relation to the event. They need to do so in order to make an appropriate decision regarding which actions to take in accordance with the event that occurred. It is the performance of all agents, as well as any group they are part of, in conducting this continuous loop of cognition that is the subject of our investigation. There are in essence two such loops involved in TWOSOME: that of the team of defenders and that of the smart mine.
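The continuous loop of cognition just described – observation, orientation, decision, and action – can be sketched generically. The handlers below mimic the defender team's loop; the event names and functions are illustrative assumptions, not TWOSOME's actual interfaces.

```python
# Generic observe-orient-decide-act loop, as described for both the
# smart mine and the team of defenders. The handlers are illustrative.

def cognition_loop(observe, orient, decide, act, steps):
    """Run the continuous loop of cognition for a fixed number of steps."""
    for _ in range(steps):
        observation = observe()          # sample the environment
        situation = orient(observation)  # relate the event to oneself
        action = decide(situation)       # choose the most appropriate action
        if action is not None:
            act(action)                  # effectuate the chosen action

# The defender team's loop: detect a loss of capability, then reconfigure.
events = iter(["nominal", "nominal", "capability-lost"])
log = []
cognition_loop(
    observe=lambda: next(events),
    orient=lambda obs: {"degraded": obs == "capability-lost"},
    decide=lambda s: "reconfigure" if s["degraded"] else None,
    act=log.append,
    steps=3,
)
print(log)  # prints ['reconfigure']
```

The mine's loop has the same shape, with vessel-signature identification in place of degradation detection and detonation in place of reconfiguration; it is the continuous performance of these loops that the experiment investigates.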
The latter loop involves observing the marine environment and, upon identification of certain vessel signatures, the locale of a potential target as well as its distinctive signature types are the subjects of a decision process. If the target can be characterized as adhering to a particular vessel type, the observer will act accordingly – detonation. The former loop also involves observing the marine environment, but upon identification of a group-wide degradation event, i.e., loss of capability, it is the locale of group participants and their individual capabilities that are the subjects of a decision process. If the group can be characterized as degraded in its constitutive capability, the observer will act accordingly – reconfiguration. In TWOSOME, the loops of cognition involved are performed by means of the agents continuously observing the temporal structures present in some dynamically isolated domain parcel. From the perspective of the mine the involved parcel structures correspond to observed vessel signatures (see Figure 8.3), and in the case of the team of defenders they correspond to signature configurations. Moreover, all cognitive agents are also able to appropriately instrument these parcel constructs; as the means to an end in upholding the qualitative dimensions of the other parcels’ dependable behavior. No matter if the cognitive agents involved are of a human or computational nature, the structures of
an open computational system are possible to dynamically isolate, adapt, and validate. Consequently, in terms of the open computational system of TWOSOME, we have exemplified one way of addressing the problems as initially stated in the thesis. That is, as stated in Chapter 6 – Online engineering; if we construct our embedded systems according to the same models as we apply in observing them, the method of articulating and instrumenting some system’s factual behavior is rather straightforward, i.e., iterative quality refinement as a matter of approximate in situ exploration – online engineering of open computational systems.

8.5 CONCLUDING REMARKS
Within a particular methodological framework, principles and models are developed using the practical tools and methods at hand. All these instruments of a methodology aim to comply with scientific and engineering rigor. Moreover, to uphold such rigor, it is essential that there exist a priori systems to be investigated in pursuit of qualitative behavior of natural as well as artificial phenomena. Applicatory contexts where we are concerned with such investigations are those real world situations where manipulation and distribution of information simply is not feasible to perform in either a purely manual or a purely automated manner, but rather requires a combination of both. In this particular chapter of the thesis, we have therefore introduced such an area of investigation – network enabled capabilities. The continuous and qualitative behavior of systems in the area is necessarily considered as understood and harnessed by means of cognitive agents. However, in doing so, one should note that the agents come in two strands – human and software. In effect, we are dealing with investigations where the constitutive quality of some systemic phenomenon results from cognitive agents cooperating toward achieving certain mission objectives. In some cases, the coordinated processes only involve agents of a computational nature, whereas only those of a human nature are involved in others. However, due to the complex nature of the processes involved, in combination with the occurrence of unanticipated or critical events, there are also cases where agents of a human as well as a computational nature need to collaborate in order to solve some complex situation at hand. It is in those situations that the need for synchronization arises. That is, there exist situations in open computational systems that call for an in situ synchronization of understanding – public harmony – among the cognitive agents involved.
Of course, since synchronized systems only operate within the performance envelope of their least efficient actor, one should necessarily consider in what principled manner these systems could be designed so that synchronization delay is minimized.
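The two loops of cognition described above share the same observe-decide-act shape. The following sketch is purely illustrative; the function names, signature values, and capability numbers are our own assumptions and are not taken from TWOSOME:

```python
# Illustrative observe-decide-act cycles for the mine (attacker) and the team
# of defenders. All names and values are hypothetical, not part of TWOSOME.

def mine_loop(observed_signatures, targeted_signature):
    """Attacker loop: observe vessel signatures, decide, act (detonation)."""
    for signature in observed_signatures:       # observe the domain parcel
        if signature == targeted_signature:     # decide: targeted vessel type?
            return "detonate"                   # act
    return "wait"

def defender_loop(group_capabilities, required_capability):
    """Defender loop: observe group-wide capability, decide, act (reconfiguration)."""
    total = sum(group_capabilities.values())    # observe participants' capabilities
    if total < required_capability:             # decide: degradation event?
        return "reconfigure"                    # act
    return "sustain"

print(mine_loop(["fishing", "destroyer"], "destroyer"))        # -> detonate
print(defender_loop({"a": 2, "b": 1}, required_capability=5))  # -> reconfigure
```

The point of the sketch is only that both loops differ in what they observe and how they act, not in their overall cyclic structure.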
FIGURE 8.3 ATTACKER

In the open computational system of Trustworthy and sustainable operations in marine environments (TWOSOME), a mine (right) has just identified two distinct vessel signatures (left) and is now trying to decide if they correspond to a targeted vessel's signature.
By means of our specifically geared instruments of models and technologies, we claim that our methodological framework addresses this particular issue in a quite effective manner. We argue that human agents are the limiting factor of performance in dependable computing systems, since we require all decisions and actions involved in synchronization to necessarily stem from a sufficiently understood observation of some complex situation at hand. Moreover, since our model facilitates a common terminology of grounded semantics and since our technologies facilitate cognitive inspection thereof, we claim that our methodological framework effectively increases human agents’ capability to understand complex situations at hand. In particular, we enable human agents to perceive the complex situations involved, according to the same model of grounded behavior semantics as the computational agents apply in their continuous operations. Consequently, since human agents thereby are empowered with the capability of understanding observed situations under the same premises as those constraining the computational agents involved, delays in synchronization activities among agents of both types are effectively reduced.
This approach of ours toward dependable coordination stands in clear contrast to those approaches of coordination where the model alone is considered as the facilitator of effective and dependable system behavior. In terms of recent classifications of coordination approaches in multi-agent systems [67], the dependable coordination of open computational systems should be considered to account for the exploitation of a coordination strategy that simultaneously spans subjective as well as objective approaches. The need to thoroughly understand the dualistic relationship between these two approaches has previously also emerged in the complex coordination scenarios of computer-supported cooperative work and workflow management systems [65; 3; 10]. On the one hand, when it comes to the complexity of subjective coordination, the involved processes and required mechanisms are imposed exclusively upon the individual (software) agents involved. Approaches of subjective coordination are realized by means of high-level agent communication languages and conversation protocols [70] that are supported by the software agents alone. On the other hand, in the case of objective coordination, the support for coordination is delegated to computational entities other than those participating in the coordinated behavior. In contrast, coordination activities in open computational systems are strongly context-dependent in that the involved systems are situated in unpredictable physical environments. Such premises of dependable coordination are addressed by neither subjective nor objective approaches and their particular protocols toward coordination. In sufficiently complex situations, one needs to integrate multiple approaches [64] and typically strive for context-driven system behavior.
Therefore, we focus on the possibility for any agent – human or software – to take the appropriate actions in a collaborative and dynamic manner; not constrained by their relation to some particular coordination mechanisms, but primarily as a result of observing their in situ temporal context.
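The distinction between subjective and objective coordination drawn above can be made concrete in a few lines. This is a minimal sketch under names of our own choosing, not an implementation of any cited system: in the subjective case the protocol lives inside the agents, which converse directly; in the objective case a separate medium (here, a tuple-space-like blackboard) mediates:

```python
# Minimal contrast between subjective and objective coordination.
# All class and message names are illustrative assumptions.

class SubjectiveAgent:
    """Subjective coordination: the conversation protocol is carried
    by the agents themselves, which address each other directly."""
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def tell(self, other, performative, content):
        other.inbox.append((self.name, performative, content))

class Blackboard:
    """Objective coordination: a computational entity *other than* the
    participants mediates the interaction (a tuple-space-like medium)."""
    def __init__(self):
        self.tuples = []

    def out(self, tup):
        self.tuples.append(tup)

    def take(self, pattern):
        # Match a tuple against a pattern where None is a wildcard.
        for tup in self.tuples:
            if len(tup) == len(pattern) and all(
                p is None or p == t for p, t in zip(pattern, tup)
            ):
                self.tuples.remove(tup)
                return tup
        return None

# Subjective: a and b coordinate by direct conversation.
a, b = SubjectiveAgent("a"), SubjectiveAgent("b")
a.tell(b, "request", "cover-sector-3")
print(b.inbox)                        # [('a', 'request', 'cover-sector-3')]

# Objective: agents never address each other; the medium decouples them.
board = Blackboard()
board.out(("task", "cover-sector-3"))
print(board.take(("task", None)))     # ('task', 'cover-sector-3')
```

Neither strand, by itself, reacts to the physical context the agents are situated in, which is the gap the context-dependent coordination of open computational systems is meant to close.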
9 SUMMARY OF THESIS
The outcome of any serious research can only be to make two questions grow where only one grew before. – T. Veblen
9.1 INTRODUCTION
To immerse oneself in the field of science is in many ways a matter of identifying problems and, subsequently, establishing applicable solutions thereto. However, there is a downside to this somewhat simplified picture of science. One does not contribute to the progress of science, or engineering for that matter, by means of identifying just about any problem of choice. Instead, the problems we address must, on the one hand, be justified as a matter of impact and, on the other hand, they must be scoped in terms of a relevant and applicable framework as well. It is with such a mindset that the intent and goal of this thesis was stated. That is, dependable computing systems affect critical functions of society and we therefore need to identify a methodological framework that enables us to address relevant problems at hand in an applicable manner. With this reasoning in mind, the material presented in this thesis should not be interpreted as an intent to claim that the final solution to problems of dependable computing systems has been introduced. The author's intent was simply to identify a sound framework that should help us in addressing more appropriately stated problems within this particular domain of our concern. In essence, we believe that by means of identifying such a framework, and in establishing its soundness, our confidence that dependable system qualities can be attained would be strengthened. As the means to an end, we have therefore introduced a methodological framework that we consider applicable in the subsequent identification of problems within the particular domain of dependable computing systems. Moreover, we have also introduced an applicatory example of such complex phenomena that we typically face within this
domain. As such, the example has helped us in establishing the soundness of our framework, in that it effectively could be addressed and discussed in terms of our methodology. Still, we consider this example to be just one out of many reference cases. Throughout this thesis, we have advocated a methodological framework that was geared in such a way that it should be reusable by other practitioners in the field of dependable computing systems. Moreover, as a matter of personal experience, we argue that our particular framework in fact also addresses the dichotomy of computer science and software engineering. Even though it unfortunately is out of the scope of this thesis, it should be noted that the applicatory example of TWOSOME (see Chapter 8 – Network enabled capabilities) is actually only one out of several benchmarks that our framework has been subjected to. Repeatedly, the soundness of our methodology has proven itself, by means of the ease at which it appropriately addresses the principal concerns of computer science as well as those of software engineering within the field of dependable computing systems. However, at this point we should acknowledge a very important and natural constraint that is imposed on this particular framework. It has merely been geared in such a way that it should be applicable in endeavors of empirical investigations and experimental development thereof. In other words, we do not consider our methodology as applicable in contexts where commercial productification or manufacturing is of the essence. Of course, with the features of certain methodological instruments in mind, e.g., the method of online engineering as well as its enabling technologies, there are aspects of the framework that are of high relevance in the aforementioned activities. For example, in productification, features such as rapid prototyping could possibly be derived from our methodological instruments.
The emphasis on continuous establishment of systemic qualities could possibly also be of relevance in manufacturing. However, as initially argued, our methodological framework is necessarily applied in its full form with merely two principal intents in mind – empirical investigations and experimental development. Consequently, in order to further discuss this limitation of our methodological framework, as well as the principal contributions brought forward in this thesis, the last chapter of the thesis is structured as follows. In the first section – Experiences – we return to our initial challenges of dependable computing systems and discuss in what way the various instruments of our methodology have contributed in addressing these challenges. In the following section – Assessment – our contributions are discussed from the perspective of related approaches and, consequently, their individual limitations as well as complementary benefits are introduced. Finally, in this chapter's last section – Future challenges – our experiences from the methodological endeavor presented herein are summarized. In particular, new challenges in empirical investigations and experimental development are presented. However, without any further ado, let us summarize the
initial challenges of dependable computing systems as well as our contributions in addressing them.

9.2 EXPERIENCES
The general premise of the material presented throughout this thesis is the existence of complex and dependable computing systems. We consider a particular class of these systems to involve those upholding and providing for certain critical functions of our society, e.g., energy, healthcare, and defense. Due to their intricate interdependencies, as well as their being situated in our physical environment, even the slightest perturbation in one of these embedded systems can result in catastrophic failures of our society. However, we argue that our concerns regarding their dependability are mainly due to our own problem of investigating and developing their complex behavior in an understandable manner. In an attempt to address this major concern of dependability in complex and embedded computing systems, we claim that new methodological approaches need to be established that provide guidance and support in investigation as well as development activities. Moreover, in doing so, the methodologies involved should emphasize an even balance between computer science and software engineering. To this end, we consider an applicable methodology to necessarily comprise fundamental instruments such as principles, models, methods, and technologies. Moreover, we claim that the involved instruments must be geared in such a way that empirical investigations as well as experimental development activities are facilitated. After all, we consider the general challenges in dependable computing systems to involve (1) in what way we can understand their nature and (2) in what way we can harness their complexity. Consequently, with the intent of framing the involved systems' general nature as well as the control thereof, we noted that they all could be considered as embedded in nature, constituted by programmable entities, and governed by cognitive agents.
Of course, these characteristic features of the phenomena at hand are of a somewhat simplistic nature but, even so, we consider them to at least point us in appropriate directions of more exact problem statements. Moreover, we claim that perhaps the most important feature involved is that of cognitive agents, considered as the main facilitators in understanding and harnessing the complex phenomena of our concern. However, with this emphasis on cognitive agents – human and software – certain additional challenges are introduced. If the general premise at hand is to govern system behavior by means of coordinated cognitive agents, then one should question whether all agents involved really understand each other. Of course, we argue that this is not the case. However, one could in fact facilitate an increase in common understanding – public harmony – by means of empowering all involved agents with the capability to
experience their physical environment in a similar manner. The main idea advocated at this point is that, whatever abstract constructs we make use of in communication and interaction among cognitive agents, the involved semantics must necessarily be grounded in experience of physical and tangible phenomena. In other words, we propose to endow all individual agents with the capability to communicate privately experienced events and stimuli in terms of publicly accepted cognitive constructs. With this general mindset at hand, we aim for a methodological approach that, by means of certain specifically geared instruments, should help all cognitive agents in performing the previously stressed empirical investigations and experimental development activities. Consequently, with the ultimate goal of providing for such qualitative computing systems that we, with a sufficient amount of confidence, dare to immerse in our societal fabric, we advocate our particular methodological framework. In essence, the framework emphasizes the identification and establishment of systemic qualities, by means of cognitive agents that continuously perform activities such as phenomena exploration and refinement. The most important issue that we aim to address with this methodological framework is, consequently, to increase our own confidence that we understand what qualitative system behavior to expect from complex embedded systems. The basic underpinnings of our advocated approach therefore involve the explication of two concerns in particular, i.e., the scientific method of knowledge acquisition and validation as well as the engineering procedure of quality assurance and assessment. Traditionally, it is often the case that we make a distinction between the disciplines of computer science and software engineering.
However, in this thesis we consider them as complementary, in that the former aims at the establishment of behavioral principles in phenomena, whereas the latter strives to apply these principles in the creation of new phenomena. Even though our intentions in applying these complementary approaches may differ, they should help us not only in understanding phenomena but also in harnessing and sustaining their dependable behavior. To this end, our methodological framework emphasizes the iterative application of its constituent instruments. First, some particular set of principles in computing mechanics and design are used in order to characterize certain concerns of a phenomenon at hand. We can then use our model of open computational systems to depict and understand various aspects of this phenomenon. These aspects then provide the basis for dealing with our particular concerns, i.e., by means of applying our method of online engineering. However, not only the subjects of investigation, but also the methods at hand, require support from enabling technologies. The reason for this is that we want all dominant phenomenon constituents, dependencies, and interactions to be susceptible to some cognitive agent's in situ activity of exploration and refinement.
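The iterative application of instruments just described (principles that characterize a concern, a model that depicts it, a method that explores and refines it) can be caricatured as a short pipeline. This is only a schematic rendering under names and numbers of our own choosing; no actual instrument of the framework is implemented here:

```python
# Schematic rendering of the iterative methodology: principles characterize a
# concern, the model depicts it, and online engineering refines it in situ.
# All names, aspect labels, and the refinement rule are illustrative.

def characterize(phenomenon):                  # principles (mechanics, design)
    return {"concern": f"dependability of {phenomenon}"}

def depict(concern):                           # model of open computational systems
    return {"aspects": ["physical", "temporal", "conceptual"], **concern}

def explore_and_refine(model):                 # method of online engineering
    quality = 0.0
    for _ in model["aspects"]:                 # one refinement pass per modeled aspect
        quality += (1.0 - quality) / 2         # iterative quality refinement
    return quality

model = depict(characterize("energy distribution"))
print(model["concern"])            # dependability of energy distribution
print(explore_and_refine(model))   # 0.875
```

The toy refinement rule merely illustrates the idea of approximate, iterative convergence toward a quality goal rather than a one-shot design decision.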
By means of immersing our supposedly dependable computing systems into nature, we are introduced to problems of complexity. On the one hand, the occurrence of unanticipated events in our physical environment introduces us to problems of performance and reliability. On the other hand, the involvement of multiple agents introduces us to problems with respect to evolvability and dependability. However, this being the case, we argue that the latter problem necessarily is the major challenge at hand. Natural events are to a great extent unavoidable, whereas problems involving the lack of certain agent capabilities are not. We have to realize that, since our primary concern with dependable computing systems is their inherent complexity, we need to empower all cognitive agents involved – human as well as software – with the functional means to observe and understand complex phenomena. Without such a capability, the idea of empirical investigations is simply not attainable. The bottom line is that the notion of dependable computing systems, as well as to what extent we experience confidence in qualities thereof, involves our capability to explore and refine the qualitative aspects of some common area of interest – in situ. In order to do so, we need enabling technologies to facilitate our common situational awareness as well as information fusion thereof. No matter if we address methodological approaches that emphasize scientific investigations alone, or engineering endeavors for that matter, they all require support from enabling technologies. As such, they must not only support a common set of principles, but a common set of models and methods as well.
To this end, even though the main intent with the methodological framework advocated in this thesis is to understand and harness the qualitative behavior of dependable computing systems as such, we have come to understand that the major constraint imposed on the notion of dependability is a matter of the final instrument at hand – enabling technologies. Since all dependable computing systems necessarily are of an artificial nature, i.e., primarily articulated and constructed by human agents, our conception of an event – technological as well as natural – that would render their intended behavior as erroneous stems from a lack of understanding by the same agents that built them in the first place. This being the case, one could of course blame the agents for not anticipating or understanding the occurrence of certain critical events. However, we argue that the situation at hand is a bit more complicated than that. To create a dependable system behavior, no matter how complex, necessarily calls for established principles, models, and methods – all deduced as a matter of prior experience. However, all experiences involved in such a methodological endeavor can only occur if appropriately facilitated by enabling technologies and relevant problem statements. Examples of this state of affairs are numerous throughout the history of natural science, e.g., the identification of bacteria – by means of the microscope – due to the critical event of people dying from unknown diseases. To this very end, in our pursuit of establishing an understandable behavior of dependable computing systems, we have developed certain enabling technologies. The
first technology to be developed was that of SOLACE, facilitating the existence of systems to be investigated in accordance with our model of open computational systems and developed according to our method of online engineering. The second technology to be developed was that of DISCERN, enabling the observation and instrumentation of in situ open computational systems. However, we would like to stress that these particular technologies also involved certain limitations. Even though they were completely consistent with the model and method at hand, it was only from our practical experiences of the phenomena at hand that we would come to understand the actual limitations of dependability. The foremost challenge at hand is not primarily to understand or harness the behavior of complex computing systems, but rather to represent their evolving behavior in a correct and tractable manner. This was in fact the major shortcoming of our enabling technologies and, subsequently, our whole methodological framework; visualization of system evolution was not sufficiently provided for. We will return to this issue in Section 9.4 – Future challenges.

9.3 ASSESSMENT
Throughout this thesis, we have advocated a methodological framework to be applied in activities of computer science as well as software engineering, i.e., empirical investigations and experimental development. As such, the framework emphasizes the understanding and governance of open computational systems as a matter of cognitive agents that perform the activities of online engineering. Being of a human as well as computational nature, these cognitive agents are considered as applying our advocated methodology in its full form, i.e., its constituent principles, models, methods, and technologies. However, at this point, it is important to stress that they do so in order to address concerns of dependable behavior in complex computational systems per se. We do not believe that the isolated instruments of a methodology provide for any significant applicability if taken out of their intended context. Even so, from this perspective one should perhaps consider in what way our methodological framework exhibits any similarities, or discrepancies for that matter, with other approaches toward dependable computing systems. At this point, the author would again like to stress the fact that the particular methodology advocated herein is specifically geared toward an increased understanding of the dichotomy of computer science and software engineering, i.e., to even the balance between theory and practice in the establishment of behavioral qualities in complex and distributed computing systems. However, this emphasis of our methodology stands in clear contrast to almost any other methodology within the field. Either one tends to focus on particular concerns of computer science, e.g., communication and coordination, or
FIGURE 9.1 COMPARISON

When it comes to complex computing systems, the extent of emphasis provided for by three different approaches toward understanding and governing systemic qualities could be compared in terms of principles (mechanics, design), models (physical, temporal, conceptual), and methods (articulation, construction, observation, instrumentation). The approaches we consider are multi-agent systems (MAS), service-oriented systems (SOS), and open computational systems (OCS).
some methodology at hand focuses on concerns related to software engineering, e.g., performance and reliability. Nevertheless, the involved approaches are in fact possible to depict and assess as a matter of the support they provide in terms of the same instruments that our own methodology emphasizes (see Figure 9.1). No matter if our concerns with dependable computing systems exhibit an emphasis on engineering or a principal focus on science, we must necessarily acknowledge that our common problem addressed is that of dependable behavior in complex computing systems. It is from this perspective that one should understand that no isolated instrument of some methodology can ever provide comprehensive support as such. It is only the methodological approach applied in its full form that can possibly address concerns of this nature in a comprehensive manner. Nevertheless, we consider two approaches in particular as related to the methodology advocated herein (OCS), i.e., multi-agent systems (MAS) and service-oriented systems (SOS). In a sense, the former approach could be characterized as a contemporary methodology advocated by the computer science community, whereas the latter primarily is an approach advocated by the software engineering community.
With respect to our methodological instrument of principles, there is a large portion of contemporary research and development efforts within the agent paradigm that emphasizes qualitative system behavior as a matter of distributed computation. A noteworthy example of such an emphasis includes the realization of optimal solutions to complex problems, e.g., computational markets dealing with coordination of resource and commodity consumption [82; 83; 87]. Within the domain of service-oriented systems, on the other hand, the general emphasis on principles involved is more concerned with design than with mechanics. For example, one stresses the notion of abstract architectures that facilitate the conception of services, to be assembled and adapted in an online manner [51; 46]. However, in doing so, the envisioned application domains of these services are not explicated; merely that they will ease the development and maintenance of dynamic software systems. Of course, with their respective emphasis on mechanics and design, the paradigms of MAS and SOS both stress the notion of qualitative system behavior. Still, without an equally balanced emphasis on the principal concerns of mechanics as well as on those of design, it is easier said than done to attain some comprehensive quality of service. In practice, we require models that allow us to quantify the envisioned qualities of some particular system's behavior at run time, as opposed to models that enable us to quantify design time qualities alone. Consequently, if we instead consider the methodological instrument of models, there exists yet another portion of contemporary research and development efforts within the MAS and SOS paradigms. However, it should be noted that in both cases, it is primarily design models of isolated components that are in focus.
Of course, both paradigms emphasize qualitative behavior of software but, as an effect of not considering the comprehensive and run time behavior of distributed computing systems per se, the involved models tend to focus on schemas of notation, e.g., modeling languages and communication protocols [35; 41]. Even though these models are of utmost importance in describing the structures, interfaces, and interactions of most any distributed computing system, they merely convey offline aspects of some sought-after system behavior. Once the involved systems are deployed, we need models that effectively capture the in situ behavior of our concern. In the case of open computational systems, we have therefore tried to explicate the need for a model that, on the one hand, captures the factual state space of some complex computing system and, on the other hand, is of equal applicability in offline articulation and construction work as well as in online observation and instrumentation activities. When it comes to our methodological instrument of methods, this is perhaps where the largest portion of contemporary research and development efforts in the SOS and MAS paradigms is found. Noteworthy examples of such methods typically include viewpoint-oriented modeling [2] and aspect-oriented programming [14]. In these cases, there is indeed an emphasis on flexible methods for articulation and construction work
that, appropriately applied, will ease the adaptation of complex systems. However, as is the case with models of service-oriented systems, the methods involved are not typically geared toward in situ adaptation of systemic qualities, but rather offline component design, implementation, and modification. As we stated in Section 2.3 – Cognitive agents – the notion of software agents has proven itself a quite effective metaphor in reasoning about design and implementation of complex computing phenomena. Noteworthy examples of the metaphor involve concepts such as societies [18; 30; 37], ecologies [34], and ecosystems [29; 48; 53]. However, as argued by Malsch, we should necessarily acknowledge the fact that these concepts are merely metaphors that help us in reasoning about complex phenomena [44]. They are not approximate models of real world phenomena in computing. Nevertheless, these concepts are actually used as the principal facilitator in many endeavors toward agent-oriented software engineering [39; 40; 84; 85]. This state of affairs stands in clear contrast with the intent and rationale of our methodological instrument of methods. As discussed throughout this thesis, we consider our method of online engineering to involve cognitive agents that require capabilities such as articulation, construction, observation, and instrumentation of physical phenomena. However, in applying these capabilities, they do so according to a model that we in every respect consider as an applicable approximation of real world computing phenomena, not a metaphor thereof.

9.4 FUTURE CHALLENGES
In summary, a methodological framework for online engineering of open computational systems has been introduced in this particular thesis. As such, the framework was intended to be of a comprehensive and configurable nature, i.e., encompassing those methodological instruments that the practitioner needs in empirical investigations as well as in experimental development of dependable computing systems. However, this framework should not be interpreted as a competitor to the already available methodologies of multi-agent systems (MAS) and service-oriented systems (SOS) engineering. On the contrary, the methodology introduced herein (OCS) was intended to serve as a bridge between these two paradigms, since we consider both alternatives to offer a great deal of practical guidance in articulating requirements on model design as well as development of complex computing systems. Moreover, we consider the applicability and soundness of those methodological instruments advocated by the paradigms to be of mutual interest in the establishment of dependable computing systems. In effect, we have tried to incorporate in OCS certain qualities of the methodologies for MAS and SOS that were considered as applicable and sound in our own context of addressing the dichotomy of computer science and software engineering, e.g., agents and aspects.
However, with such an experience at hand, one should also acknowledge that the foremost negative experience from actually doing so was that neither paradigm seemed to emphasize the notion of systemic qualities from a comprehensive and in situ perspective. That is, even though both paradigms claim to effectively address pivotal concerns such as quality of service and the establishment thereof, there seems to be an implicit assumption that qualitative behavior of complex systems can be deduced from particular instruments alone. Instead, we argue that establishment of dependable system behavior is the effective result of applying all instruments of some specifically geared methodology. Consequently, in addressing the notion of dependable computing systems, we have introduced what we consider a comprehensive and configurable methodology. However, this does not mean that the methodological framework advocated in this thesis by any means should be interpreted as complete when it comes to enabling technologies or methods. We have only provided a framework that encompasses the essential instruments required in empirical investigations and experimental development of dependable computing systems [81]. It is through the practical experience from developing this comprehensive methodology that we have come to understand that, even though the challenges of understanding and harnessing complexity in dependable computing systems were initially appropriate to address, our problem statements should be further refined. Of course, we could only have identified these new challenges as a result of actually putting our theory into practice. In particular, it should be noted that the new problems are related to our experience from performing online engineering of open computational systems.
By means of our experiences from using DISCERN (see Section 7.4) in exploration of online computing phenomena, we have come to understand that our model of open computational systems is of an applicable nature. It emphasizes the most dominant dimensions of those physical phenomena of computing that we are concerned with. Moreover, we have experienced it as equally applicable in analysis as in design of some dependable computing system. However, in doing so, we have also come to understand that the foremost challenge at hand is to formalize this model and turn it into an applicable instrument; effectively conveying the continuous evolution of state space transitions. Such a formal representation of state space evolution would enable us to investigate the behavioral semantics of (1) different system aggregates at the same point in time, (2) the same system aggregate at different points in time, and (3) different system aggregates at different points in time. In effect, a formal representation of state space evolution in open computational systems would facilitate empirical investigations regarding critical system dependencies and interactions. Moreover, such a formal representation could also be considered as the practical means at hand to establish applicable design patterns of software engineering.
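To make the intent of such a formal representation concrete, the following sketch records state space evolution as time-stamped observations per system aggregate and supports the three kinds of queries enumerated above. All names are hypothetical illustrations under our own assumptions; this is not the thesis' formalism.

```python
# Minimal sketch: a state space history as (time, aggregate, state) observations.
# "Aggregate" here stands in for any level of an open computational system.

class StateSpaceHistory:
    def __init__(self):
        self.observations = []  # list of (time, aggregate, state) triples

    def record(self, time, aggregate, state):
        """Append one observed state space transition."""
        self.observations.append((time, aggregate, state))

    def states(self, time=None, aggregate=None):
        """Observations filtered by point in time and/or system aggregate."""
        return [(t, a, s) for (t, a, s) in self.observations
                if (time is None or t == time)
                and (aggregate is None or a == aggregate)]

history = StateSpaceHistory()
history.record(0, "fabric", "booting")
history.record(0, "system", "idle")
history.record(1, "fabric", "online")

# (1) different system aggregates at the same point in time
print(history.states(time=0))
# (2) the same system aggregate at different points in time
print(history.states(aggregate="fabric"))
# (3) different aggregates at different points in time: no filters at all
print(history.states())
```

A real formalization would of course replace this flat log with a semantics for the transitions themselves; the sketch only shows the query surface such a representation would have to offer.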
SUMMARY OF THESIS
As a matter of developing and applying SOLACE (see Section 7.3) in construction and governance of online computing phenomena, we claim that our method of online engineering is of an applicable nature. In an equally balanced manner, the method emphasizes activities of relevance for any cognitive agent that is concerned with the dependable behavior of an open computational system. However, we have also come to understand that the phenomena we are dealing with are of an even greater complexity than initially anticipated. As such, even though we consider our model of open computational systems to effectively capture the most dominant dimensions of state space evolution in dependable computing systems, the designers involved necessarily require the effective support of enabling technologies in actually implementing the envisioned systems at hand. Consequently, if one were to formalize the representation of an envisioned state space – a design pattern – the challenge at hand would be to develop enabling technologies that facilitate the automation of turning articulated frustum constructs into online parcel constituents. Of course, to investigate the systemic qualities of any artificial phenomenon requires our subject of study to exist in the first place. It is not until we have some phenomenon at hand that we can address certain concerns of ours. However, as argued throughout this thesis, these concerns are in essence based on grounded semantics of our sense impressions from experiencing a physical and tangible environment. Consequently, this is why the most important dimensions of our model should be understood as the environment and domain levels. These dimensions are typically not considered in contemporary methodologies. Specifically, interactions between these levels stabilize the behavior at the system and fabric levels of an open computational system and, hence, enable the sustainable behavior of dependable systems.
Moreover, since many of the agents involved in understanding and harnessing the complexity of dependable computing systems are of a human nature, this state of affairs explicitly calls for an enabling technology that facilitates the visual experience of these systemic abstraction levels. This is why DISCERN was implemented in the first place – to facilitate the cognitive inspection of online phenomena. However, when it comes to human agents’ cognitive inspection of online phenomena in DISCERN, we consider it tractable only in terms of the physical–conceptual state space represented, i.e., structures in open computational systems (see Figure 7.3). Now, if we were to require that the complete state space be visually tractable – i.e., involving processes and patterns of the complete physical–temporal–conceptual state space as well – DISCERN did not provide sufficient support. In fact, we did try to implement a prototype for real-time visualization of the complete state space evolution in open computational systems (see Figure 9.2). Unfortunately, at that time, our model of open computational systems had not been formalized and an appropriate visualization of state space transitions was, consequently, yet to be established.
FIGURE 9.2 EVOLUTION
Screenshot from the prototype visualization in Distributed interaction system for complex entity relation networks (DISCERN) of an open computational system’s evolving state space.
In addition, with our comprehensive methodology at hand, we have come to understand that, in order to appropriately facilitate empirical investigations as well as experimental development, the foremost challenge and opportunity in dependable computing systems is the capability to induce state space transitions in an online fashion. That is, with appropriate solutions toward (1) formal representation, (2) automated implementation, and (3) visualized transitions of open computational systems’ state space, we envision the possibility to establish systemic qualities – dependability – as a matter of inducing critical events. With the ultimate goal of deploying and providing for critical support functions of our society, the envisioned systems all come with certain implicit state space constraints. For example, in the application domain of network-enabled capabilities and network-based defense, certain performance constraints are involved when it comes to situational awareness and information fusion. If we were able to appropriately represent, implement, and visualize the in situ evolution of such an open computational system’s complete state space, we would also be able to establish its invariant quality and dependability, by means of inducing such critical events that the system necessarily must be resilient against in performing within its envisioned performance envelope. Finally, an implicit assumption of the methodological framework advocated in this thesis is that, even though we aim for in situ exploration and refinement of online phenomena, the environment parcels are initially empty. In other words, the methodology presupposes that the phenomena we aim to study are constructed according to the method of online engineering, in accordance with the model of open computational systems (OCS). Only then can we engage in empirical investigations and experimental development of some online phenomenon. But what if we would like to conduct an empirical investigation of a phenomenon that was not constructed in accordance with our methodological framework? Indeed, this is an important problem to address. Already deployed systems will obviously not be redeveloped from scratch just in order to conform to the model of open computational systems. In this context, we should consider in what way we can observe the structures and processes of already deployed systems and, in effect, perform the method of online engineering with the intent of creating a replica thereof. Moreover, this procedure of phenomenon replication could also be performed the other way around. That is, when we already have an open computational system at hand, in what way could we then subsequently integrate it with already deployed systems? We argue that both procedures require a simulator of the physical environment, to be used as an intermediate deployment platform. At least, we have come to understand that the lack thereof severely complicates the procedure of deploying open computational systems in a piece-by-piece fashion. To summarize our current problem definitions:
PROBLEM 9.1
Formalize the representation of state space evolution in open computational systems (articulation).
PROBLEM 9.2
Automate the implementation of open computational systems from state space representations (construction).
PROBLEM 9.3
Visualize the transitions of state space evolution in open computational systems (observation).
PROBLEM 9.4
Induce the occurrence of state space transitions in open computational systems (instrumentation).
PROBLEM 9.5
Replicate the behavior of any online phenomenon as an open computational system.
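The instrumentation called for in Problem 9.4 can be hinted at with a minimal fault-injection-style sketch: perturb a running system with an induced critical event and observe whether it returns to its envisioned performance envelope. Every name below is illustrative, and the recovery step is a stub rather than an actual resilience mechanism.

```python
import random

class Component:
    """Stand-in for one constituent of an open computational system."""
    def __init__(self, name):
        self.name = name
        self.online = True

    def fail(self):     # the induced critical event
        self.online = False

    def recover(self):  # placeholder for a real resilience mechanism
        self.online = True

def induce_and_observe(components, seed=0):
    """Fail one randomly chosen component, trigger recovery, and report
    (degraded, restored) relative to the all-components-online envelope."""
    rng = random.Random(seed)   # seeded, so the induced event is repeatable
    victim = rng.choice(components)
    victim.fail()
    degraded = not all(c.online for c in components)
    victim.recover()
    restored = all(c.online for c in components)
    return degraded, restored

parts = [Component("sensor"), Component("fabric"), Component("uplink")]
print(induce_and_observe(parts))  # (True, True): degraded, then restored
```

The point of the sketch is only the protocol – induce, observe, compare against the envelope – not the trivial recovery logic.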
With the above problem statements at hand, we have come to the end of this thesis. However, in order not to disappoint the potential reader and, in particular, those who value experimental research and development, the author would simply like to conclude the material with the following information. By the time of this thesis’ publication, the above problem statements have already been partially addressed and yet another reference case of open computational systems has been conceived. Consequently, the enabling technologies presented herein have also been subjected to a major revision and, in summary, we have only come to the end of this thesis per se – the nature of open computational systems is yet to be further explored and experienced.
Part 5
REFERENCES
A GLOSSARY
ACTUATOR
A device responsible for invoking a mechanical action, such as one connected to a computer by a sensor link.
AGENT
A human or artificial instrument by which a guiding intelligence achieves a result as a matter of delegation.
ARCHITECTURE
An abstract representation of a supportive set of software components and their organization.
COMPONENT
A separate part of a system with predefined interfaces for environment interaction.
COMPUTER
A programmable device that executes instructions according to the specification of internally stored programs of arithmetic and logic operations. A computer is typically also capable of communicating with other computers by means of a network link.
DEPENDABLE
The ability of a system or component to perform its required functions under stated conditions for a specified period of time.
DIMENSION
An abstract relation that binds distinct concepts and associated domains; a physical property.
DOMAIN
A conceptual area of activity, concern, or function. In mathematics the notion of a domain is typically interpreted as an open connected set that contains at least one point, whereas in computer science it is interpreted as a group of networked computers that share a common communications address.
ENTITY
Something that exists as a particular and discrete unit; the existence of something considered apart from its properties.
ENVIRONMENT
The concept of those external conditions that, in combination, affect and influence the growth, development, and survival of an organism. In computer science and software engineering the notion of an environment is typically interpreted as the conditions that affect the performance of a system or function.
FABRIC
A complex underlying structure; a structural material.
FRAMEWORK
A structure for supporting or enclosing something else; the set of assumptions, concepts, values, and practices that constitutes our way of viewing real world phenomena.
FRUSTUM
The part of a solid, such as a cone or pyramid, between two parallel planes cutting the solid. In the model of open computational systems, these parallel planes correspond to the abstraction levels explicated by the model.
INSTRUMENT
Something used to facilitate work; the means by which something is done.
METHODOLOGY
The collection of theories and practices used by those who work in a particular discipline. In philosophy of science the notion of a methodology is interpreted as the science of method.
MODEL
A schematic description of a system, theory, or phenomenon; accounting for known or inferred properties which, consequently, may be used in further studies of certain characteristics at hand.
NETWORK
A fabric of interaction that connects several remotely located computers by means of communication links.
NODE
A junction or connection point in a network, e.g., a computer.
ONLINE
The real time mode of operation in which information flows between computer nodes in a network.
PARCEL
A volume of a fluid, e.g., information, considered as a single entity within a greater volume of the same fluid.
PATTERN
A representative sample of an observed phenomenon; a composite of conceptual features that are characteristic of an individual or a group, e.g., behavior.
PLATFORM
The practical means of supporting a software system in its appropriate performance of an intended function.
PORT
A computational entity’s input/output channel for information flow.
PROCESS
A series of actions and changes that characterize a system’s evolution. In computer science and software engineering the notion of a process is typically interpreted as the part of a computer architecture that performs program instructions.
QUALITY
The degree or state of excellence in a complex phenomenon’s essential characteristics.
RELATION
A conceptual or physical association between two or more things; relevance of one to another.
SENSOR
A device that is capable of receiving a particular type of signal or stimulus.
SERVICE
The performance of work or duties for a superior or as a servant; typically manifested as an individual agent or group of entities.
SOFTWARE
Programs, procedures, and rules pertaining to the operation of a computer.
STRUCTURE
The organization of components or entities such that they form a qualitative whole; the arrangement of entity relations in a complex system.
SYSTEM
An isolated set of interacting, interrelated, or interdependent entities forming a complex and qualitative whole.
VALIDATION
To establish the practical evidence which provides assurance that a specific process will consistently produce a product meeting its intended function and/or quality.
VERIFICATION
To demonstrate the practical evidence which provides assurance that a specific process is consistent, complete, and correct in its production of a particular function and/or quality.
B NOTES
1
“A demo is the result of the cooperation of multiple young programmers, music and graphics artists. They work as a group (demogroup) on a demonstration program (demo) in which they show their skills at graphics and algorithmic programming, computer generated graphics and music. With these demos the groups then compete with others at large parties in various competitions all over the world.” – http://www.scene.org/demoscene.php
2
“The demoscene is as important for the computer industry as street soccer is for the professional world of soccer. It is the breeding place for very talented programmers, musicians and graphicians. Stimulating young artists (programmers, musicians and graphicians) to measure their skills with others and to learn from each other.” – http://www.scene.org/demoscene.php
3
In order to address the opportunities and challenges of NEC, Societies of computation laboratories (http://www.soclab.bth.se) focuses on engineering of sustainable and distributed software systems, by means of combining models of open multiagent systems with methods of online engineering and quality assurance. That is, we have to support and maintain high-level mission goals at runtime in physical environments and situations where connectivity types, actor roles, and resource availability continuously change. Perhaps even more importantly, the behavior of these systems has to be trustworthy and dependable for all parties involved. The application areas where we investigate these complex opportunities and challenges currently involve Network-based defense (NBF) and Critical infrastructures (CRIS).
4
A methodological perspective on engineering of agent societies [18] – We propose a new methodological approach for engineering of agent societies. This is needed due to the emergence of the Embedded Internet. We argue that such communication platforms call for a methodology that focuses on the concept of open computational systems, grounded in general system theory and natural systems from an engineering perspective. In doing so, it stands clear that forthcoming research in this problem domain initially has to focus on cognitive primitives, rather than domain-specific interaction protocols, in the construction of agent societies.
5
Theory and practice of behavior in open computational systems [19] – Design and maintenance of future open computational systems calls for a reassessment of current methodological approaches, theories, and practice. We have identified shortcomings in contemporary approaches, in terms of a too strong focus on exclusive models of system behavior. However, we argue that the same set of approaches also exhibits a commonality in the powerful abstraction of domains. In effect, we advocate the incorporation of this conception by means of a methodological focal point at all levels of behavior in open computational systems. We illustrate this perspective in practice by describing the general outline of explanatory and regulatory principles of behavior in a service-oriented layered architecture for communicating entities (SOLACE).
6
Sustainable coordination [21] – Coordination, accounting for the global coherence of system behaviour, is a fundamental aspect of complex multi-agent systems. As such, coordination in multi-agent systems provides a suitable level of abstraction to deal with system organisation and control. However, current coordination approaches in multi-agent systems are not always fully equipped to model and support the global coherence of open computational systems, i.e., multi-agent systems that are situated in complex and dynamic physical environments. We therefore emphasise the critical roles of observation and construction to sustain coordination in open systems. We present the methodological framework VOCS (Visions of open computational systems), exemplified in terms of a naval multi-agent system scenario (TWOSOME) and the tools explicitly developed and used in construction and observation of this system, SOLACE and DISCERN.
7
Quality of service in network-centric warfare and challenges in open computational systems engineering [23] – The concept of open computational systems represents physical and dynamic environments populated with selforganizing networks of cognitive entities, i.e., open systems of human users and software technology continuously deliberating in a physical environment. The foremost challenge with this class of systems is that of automated sustainability, i.e., online assertion of systemic qualities. To that end, we introduce the application domain of trustworthy and sustainable operations in marine environments and our experience of applicable models in experimenting with quality of service in network-centric warfare. Finally, we conclude the paper with a discussion regarding challenges in the domain of open computational systems.
8
Sustainable information ecosystems [29] – Fundamental challenges in engineering of large-scale multiagent systems involve qualitative requirements from, e.g., ambient intelligence and network-centric operations. We claim that we can meet these challenges if we model our multi-agent systems using models of evolutionary aspects of living systems. In current methodologies of multi-agent systems the notion of system evolution is only implicitly addressed, i.e., only closed patterns of interaction are considered as origin of dynamic system behaviour. In this paper we argue that service discovery and conjunction, by means of open patterns of interaction, are the basic tools for sustainable system behaviour. In effect, we introduce a framework for sustainable information ecosystems. Consequently, we describe basic principles of our methodology as well as a couple of applications illustrating our basic ideas. The applications co-exist on our supporting agent society platform SOLACE and their respective behaviour is visualized using our system analysis tool DISCERN. The paper is concluded with a summary and a number of open research issues in the area.
9
Online engineering and open computational systems [24] – We strongly believe that agent-oriented approaches to system development come with a natural level of abstraction and therefore have something valuable to offer. However, any comprehensive agent-oriented methodology necessarily has to be grounded in issues and solutions of relevance in contemporary research and development areas such as grid computing and autonomic computing – in order to realize the visions of ambient intelligence. Current efforts of agent-oriented software engineering, however, mostly focus on traditional methods of software development, provide implementations of stand-alone agent systems, or deliver isolated experimental platforms. These efforts are, of course, worthwhile in themselves but have clear limitations when it comes to their contribution to and fulfillment of visions such as ambient intelligence. Consequently, in this chapter we introduce the methodological approach of online engineering. As such, this methodology has explicitly been designed to meet what we conceive as the major challenges and limitations in contemporary approaches of agent-oriented software engineering. In fact, we argue that these limitations are primarily due to a strong focus on current practice in software engineering, rather than on engineering of grounded open computational systems. In this respect, online engineering provides us with the models, methods, and tools to facilitate the necessary transition from programming of abstract machines toward development of grounded physical systems, e.g., from software engineering to engineering of open computational systems.
10
Process algebras as support for sustainable systems of services [31] – Process algebras are indispensable tools for modeling concurrent processes in theoretical computer science. We propose a novel use of process algebra as a backbone in designing and maintaining complex open distributed information systems. Our π-calculus approach allows us to create and maintain service-based, mission-oriented tasks with intended behaviors and with support for observing and maintaining mission-critical systemic criteria.
11
For more information on the notion of dependable computing, see the homepage of the International federation for information processing (IFIP) and the working group on Dependable computing and fault tolerance – http://www.dependability.org/wg10.4.
12
“Add reliable wireless communications and sensing functions to the billions of physically embedded computing devices around the world for a new universe of ubiquitous networked computing.” [15], p. 39.
13
“Very little has been done by way of adopting lessons learned in the mainstream computer science systems and theory communities or in rethinking the fundamentals of the various engineering disciplines in light of cheap, plentiful, and networked computation.” [73], p. 50.
14
“... intelligent systems do not function in isolation – they are at the very least a part of the environment in which they operate, and the environment typically contains other such intelligent systems.” [37], p. 81.
15
An important contribution in the area of comprehensive methodologies for complex information systems, i.e., knowledge-based systems, has previously been introduced by Schreiber et al. [66].
16
Unfortunately, this state of affairs is becoming all too commonplace in computing. For example, the computer science community that advocates multi-agent systems receives few indications that the software engineering community finds the involved theories and models applicable in real-world contexts.
17
“In cooperative work settings characterized by complex task interdependencies, the articulation of distributed activities requires specialized artifacts which, in the context of conventions and procedures, are instrumental in reducing the complexity of articulation work and in alleviating the need of ad hoc deliberation and negotiation ...” [64], p. 163.
18
Structural disorder in a system is described in terms of entropy, corresponding to a measurement of some subject’s structural organization. The notion of entropy originally stems from the second law of thermodynamics, which states that the molecular disorder of a closed system can only increase until it reaches its maximum, i.e., total disorder.
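As an informal illustration of disorder as a measurable quantity, the discrete (Shannon) entropy of a state distribution behaves analogously to the thermodynamic notion: it is low when one configuration dominates and maximal when all configurations are equally likely. This sketch is an illustration only, not an instrument of the thesis.

```python
import math

def shannon_entropy(probabilities):
    """H = -sum(p * log2 p) over a discrete distribution; higher means more disorder."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A highly ordered system: one configuration dominates.
ordered = [0.97, 0.01, 0.01, 0.01]
# Total disorder: all four configurations equally likely.
disordered = [0.25, 0.25, 0.25, 0.25]

print(shannon_entropy(ordered))     # ≈ 0.24 bits
print(shannon_entropy(disordered))  # 2.0 bits, the maximum for four states (log2 4)
```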
19
“Thus, there exist models, principles, and laws that apply to generalized systems or their subclasses, irrespective of their particular kind, the nature of their component elements, and the relations or ‘forces’ between them. It seems legitimate to ask for a theory, not of systems of a more or less special kind, but of universal principles applying to systems in general.” [5], p. 32.
20
“Because it exists, an entity must be self-contained; that is, its behavior, under every possible scenario, must be completely defined within itself. Unless the entity shares its behavior with a different entity, no one has knowledge of its unique behavior.” [26], p. 20.
21
A full-fledged description of challenges and frameworks on this high level of abstraction is beyond the scope of this thesis and requires much more advanced mathematical modeling techniques, e.g., category theory [42]. However, we are convinced that the path we have outlined will be fruitful to pursue in future work.
22
In practice, the constructs of entities and ports can effectively be dealt with by means of process algebra techniques, e.g., Milner's calculus of communicating systems [47]. However, since this topic is out of the thesis’ scope, we refer to [31] for a discussion regarding the general idea and its applicability.
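As a toy illustration of the idea – not the process-algebra machinery referred to above – entities can be modeled as processes whose pending actions on ports synchronize with complementary actions of another entity, in the spirit of Milner’s CCS. All names here (`Entity`, `step`) are invented for the illustration.

```python
def complement(action):
    """CCS-style complementary action: 'port' pairs with '~port'."""
    return action[1:] if action.startswith("~") else "~" + action

class Entity:
    def __init__(self, name, actions):
        self.name = name
        self.actions = list(actions)  # pending actions on the entity's ports

def step(a, b):
    """Perform one synchronized transition if the entities' next actions are
    complementary; both entities advance, and the action name is returned."""
    if a.actions and b.actions and b.actions[0] == complement(a.actions[0]):
        b.actions.pop(0)
        return a.actions.pop(0)
    return None  # no synchronization possible

producer = Entity("producer", ["port", "done"])
consumer = Entity("consumer", ["~port"])
print(step(producer, consumer))  # prints "port": the two synchronize on port/~port
print(step(producer, consumer))  # prints "None": consumer has no actions left
```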
23
Principle investigator of the Service-oriented trustworthy distributed systems (SOTDS) project was Professor Rune Gustavsson, also heading the research programme of Societies of computation (SOC) at Blekinge institute of technology (BTH). As such, the project was funded during 2001–2004 by The Knowledge Foundation (KKS). More information regarding this particular project is, at the time of this thesis’ publication, available at http://science.soclab.bth.se/projects.
24
As such, SOCLAB is part of the research programme of Societies of computation (SOC), Department of interaction and system design (AIS), School of engineering (TEK), at Blekinge institute of technology (BTH). More information about the general activities of Societies of computation laboratories (SOCLAB) is readily available at the following URL – http://www.soclab.bth.se. Moreover, SOCLAB is divided into three divisions, i.e., those of science, engineering, and education. Their respective home pages are available at: http://science.soclab.bth.se, http://engineering.soclab.bth.se, and http://education.soclab.bth.se.
25
An applicatory example of such technologies as well as the standardization efforts thereof is, at the time of this thesis’ publication, available at http://www.osgi.org.
26
Each computational entity must reason “... about its local actions and the (anticipated) actions of others to try and ensure the community acts in a coherent manner ...” [38], p. 187.
27
A coordination strategy is “... on one hand a strategy aiming at coordination technology reducing the complexity of coordinating cooperative activities by regulating the coordinative interactions, and on the other hand a strategy that aims at radically flexible means of interaction ... (leaving users) to cope with the complexity of coordinating their activities ...” [65], p. 1.
28
Kockums stands for leading-edge, world-class naval technology – above and below the surface. They design, build and maintain submarines and naval surface vessels that incorporate the most advanced stealth technology. Other successful products include the Stirling Air Independent Propulsion (AIP) system, the Kockums Submarine Rescue Systems and mine clearance systems. The Submarine Division is based in Malmö (Sweden) and the Surface Vessel Division, including all production facilities, in Karlskrona (Sweden). Kockums is part of the German Howaldtswerke-Deutsche Werft AG (HDW) Group. More information regarding Kockums AB is, at the time of this thesis’ publication, available at http://www.kockums.se.
C BIBLIOGRAPHY
1
Alberts, D., Garstka, J., Hayes, R., and Signori, D. (2001) Understanding information age warfare. CCRP Publication Series.
2
Andrade, J. et al. (2004) A methodological framework for viewpoint-oriented conceptual modeling. In IEEE Transactions on software engineering, vol. 30(5), pp. 282–294. IEEE Press.
3
Bardram, J. (1998) Designing for the dynamics of cooperative work activities. In proceedings of conference on Computer supported cooperative work, pp. 179–188. ACM Press.
4
Bassett, P. G. (2004) Adaptive components: Software engineering’s ace in the hole. In executive report on Agile project management, vol. 5(5), pp. 1–26. Cutter Consortium.
5
Bertalanffy, L. (1988) The meaning of general system theory. In General system theory: Foundations, development, applications, pp. 30–53. George Braziller.
6
Boertjes, E., Akkermans, H., Gustavsson, R., and Kamphuis, R. (2000) Agents to achieve customer satisfaction: The COMFY comfort management system. In proceedings of 5th international conference on Practical application of intelligent agents and multi-agent technology (PAAM), pp. 75–94. The Practical Application Company Ltd.
7
Bussmann, S. (2000) Self-organising manufacturing control: An industrial application of agent technology. In Andre, E. and Sen, S. (eds.) proceedings of 4th international conference on Multiagent systems, pp. 87–94. IEEE Press.
8
Capra, F. (1996) A new synthesis. In The web of life: A new scientific understanding of living systems, pp. 157–176. Anchor Books.
9
Castelfranchi, C. (2000) Engineering social order. In Omicini, A., Tolksdorf, R., and Zambonelli, F. (eds.) Engineering societies in the agents’ world, Lecture notes in artificial intelligence (LNAI), vol. 1972, pp. 1–18. Springer Verlag.
10
Dayal, U., Hsu, M., and Ladin, R. (2001) Business process coordination: State of the art, trends and open issues. In Apers, P. M. G., Atzeni, P., Ceri, S., Paraboschi, S., Ramamohanarao, K., and Snodgrass, R. T. (eds.) proceedings of 27th conference on Very large databases (VLDB), pp. 3–13. Morgan Kaufmann Publishers.
11
Denning, P. J. (2003) Great principles of computing. In Communications of the ACM, vol. 46(11), pp. 15–20. ACM Press.
12
Denning, P. J. (2004) The field of programmers myth. In Communications of the ACM, vol. 47(7), pp. 15–20. ACM Press.
13
Durfee, E. H. (2001) Scaling up agent coordination strategies. In IEEE Computer, vol. 34(7), pp. 39–46. IEEE Press.
14
Elrad, T., Filman, R. E., and Bader, A. (2001) Aspect-oriented programming: Introduction. In Communications of the ACM, vol. 44(10), pp. 29–32. ACM Press.
15
Estrin, D., Govindan, R., and Heidemann, J. S. (2000) Embedding the Internet: Introduction. In Communications of the ACM, vol. 43(5), pp. 38–41. ACM Press.
16
Evans, P. and Wurster, T. (1997) Strategy and the new economics of information. In Harvard business review, pp. 71–82, September.
17
Foster, I., Kesselman, C., Nick, J., and Tuecke, S. (2002) The physiology of the grid: An open grid services architecture for distributed systems integration. In Open grid service infrastructure (OGSI), Global grid forum.
18
Fredriksson, M. and Gustavsson, R. (2002) A methodological perspective on engineering of agent societies. In Omicini, A., Zambonelli, F., and Tolksdorf, R. (eds.) Engineering societies in the agents' world, Lecture notes in artificial intelligence (LNAI), vol. 2203, pp. 10–24. Springer Verlag.
19
Fredriksson, M. and Gustavsson, R. (2002) Theory and practice of behavior in open computational systems. In proceedings of 3rd international symposium on From agent theory to agent implementation, 16th European meeting on Cybernetics and systems research, April 3–5, Vienna, Austria.
20
Fredriksson, M. and Gustavsson, R. (2002) Methodological principles in construction and observation of open computational systems. In proceedings of 1st international joint conference on Autonomous agents and multiagent systems (AAMAS), pp. 692–693, ACM Press.
21
Fredriksson, M., Gustavsson, R., and Ricci, A. (2003) Sustainable coordination. In Klusch, M., Bergamaschi, S., Edwards, P., and Petta, P. (eds.) Intelligent information agents: The AgentLink perspective, Lecture notes in artificial intelligence (LNAI), vol. 2586, pp. 203–233, Springer Verlag.
22
Fredriksson, M. and Gustavsson, R. (2003) Trustworthy and sustainable operations in marine environments. In proceedings of 25th international conference on Software engineering (ICSE), pp. 806–807, IEEE Press.
23
Fredriksson, M. and Gustavsson, R. (2003) Quality of service in network-centric warfare and challenges in open computational systems engineering. In proceedings of 1st international workshop on Theory and practice of open computational systems (TAPOCS), 12th International workshops on Enabling technologies: Infrastructure for collaborative enterprises (WETICE), pp. 359–364, IEEE Press.
24
Fredriksson, M. and Gustavsson, R. (2004) Online engineering and open computational systems. In Bergenti, F., Gleizes, M., and Zambonelli, F. (eds.) Methodologies and software engineering for agent systems: The agent-oriented software engineering handbook, pp. 377–391. Kluwer Academic Publishers.
25
Garijo, F. J. and Boman, M. (1999) Multi-agent system engineering. Lecture notes in artificial intelligence (LNAI), vol. 1647. Springer Verlag.
26
Ghosh, S. and Lee, T. S. (2000) Principles of modeling complex processes. In Modeling and asynchronous distributed simulation: Analyzing complex systems, Microelectronic systems, pp. 19–30. IEEE Press.
27
Gustavsson, R. and Fredriksson, M. (2001) Coordination and control in computational ecosystems: A vision of the future. In Omicini, A., Zambonelli, F., Klusch, M., and Tolksdorf, R. (eds.) Coordination of Internet agents: Models, technologies, and applications, pp. 443–469. Springer Verlag.
28
Gustavsson, R., Fredriksson, M., and Rindebäck, C. (2001) Computational ecosystems in home health care. In Dellarocas, C. and Conte, R. (eds.) Social order in multiagent systems, Multiagent systems, artificial societies, and simulated organizations, vol. 2, pp. 201–220. Kluwer Academic Publishers.
29
Gustavsson, R. and Fredriksson, M. (2003) Sustainable information ecosystems. In Garcia, A., Lucena, C., Zambonelli, F., Omicini, A., and Castro, J. (eds.) Software engineering for large-scale multi-agent systems: Research issues and practical applications, Lecture notes in computer science (LNCS), vol. 2603, pp. 127–142, Springer Verlag.
30
Gustavsson, R. and Fredriksson, M. (2004) Humans and complex systems: Sustainable information societies. In Olsson, M. O. and Sjöstedt, G. (eds.) Systems approaches and their application: Examples from Sweden. Kluwer Academic Publishers.
31
Gustavsson, R. and Fredriksson, M. (to appear) Process algebras as support for sustainable systems of services. In Viroli, M. and Omicini, A. (eds.) Algebraic approaches for multi-agent systems, Journal of Applicable algebra in engineering, communication and computing (AAECC). Springer Verlag.
32
Gärdenfors, P. (2000) Conceptual spaces: The geometry of thought. MIT Press.
33
Highsmith III, J. A. (2000) Adaptive software development: A collaborative approach to managing complex systems. Dorset House Publishing Co.
34
Huberman, B. A. and Hogg, T. (1993) The emergence of computational ecologies. In Nadel, L. and Stein, D. (eds.) Lectures in complex systems, SFI Studies in the sciences of complexity, vol. 5, pp. 185–205. Addison–Wesley.
35
Huget, M-P. (2003) Agent UML class diagrams revisited. In Kowalczyk, R., et al. (eds.) Agent technologies, infrastructures, tools, and applications for e-services, Lecture notes in artificial intelligence (LNAI) vol. 2592, pp. 49–60, Springer Verlag.
36
Huhns, M. N. and Singh, M. P. (1998) Cognitive agents. In IEEE Internet computing, vol. 2(6), pp. 87–89. IEEE Press.
37
Huhns, M. N. and Stephens, L. M. (1999) Multiagent systems and societies of agents. In Weiss, G. (ed.) Multiagent systems: A modern approach to distributed artificial intelligence, pp. 79–120. MIT Press.
38
Jennings, N. (1996) Coordination techniques for distributed artificial intelligence. In O’Hare, G. M. P. and Jennings, N. (eds.) Foundations of distributed artificial intelligence, pp. 187–210. Wiley.
39
Jennings, N. (1999) Agent-based computing: Promise and perils. In proceedings of 16th international joint conference on Artificial intelligence, pp. 1429–1436. Morgan Kaufmann Publishers.
40
Jennings, N. (2000) On agent-based software engineering. In Artificial intelligence, vol. 117(2), pp. 277–296. Elsevier.
41
Labrou, Y., Finin, T., and Peng, Y. (1999) Agent communication languages: The current landscape. In IEEE Intelligent systems, vol. 14(2), pp. 45–52. IEEE Press.
42
Lawvere, F. W. and Schanuel, S. H. (2002) Conceptual mathematics: A first introduction to categories. Cambridge University Press.
43
Loo, A. W. (2003) The future of peer-to-peer computing. In Communications of the ACM, vol. 46(9), pp. 56–61. ACM Press.
44
Malsch, T. (2001) Naming the unnamable: Socionics or the sociological turn of/to distributed artificial intelligence. In Jennings, N. and Sycara, K. (eds.) Autonomous agents and multi-agent systems, vol. 4, pp. 155–186. Kluwer Academic Publishers.
45
Maturana, H. and Varela, F. (1980) Autopoiesis and cognition: The realization of the living. Kluwer Academic Publishers.
46
McKinley, P. K., et al. (2004) Composing adaptive software. In IEEE Computer, vol. 37(7), pp. 56–64. IEEE Computer Society Press.
47
Milner, R. (1980) A calculus of communicating systems. Lecture notes in computer science, vol. 92. Springer Verlag.
48
Nardi, B. A. and O’Day, V. L. (1999) Information ecologies: Using technology with heart. MIT Press.
49
Nicolis, G. and Prigogine, I. (1989) Complexity in nature. In Exploring complexity: An introduction, pp. 5–45. W. H. Freeman and Co.
50
Omicini, A. and Zambonelli, F. (1999) Coordination for Internet application development. In Autonomous agents and multi-agent systems, vol. 2(3), pp. 251–269.
51
Papazoglou, M. P. and Georgakopoulos, D. (2003) Service-oriented computing. In Communications of the ACM, vol. 46(10), pp. 24–28. ACM Press.
52
Parunak, V. D. (1997) Go to the ant: Engineering principles from natural agent systems. In Annals of operations research, vol. 75, pp. 69–101. Kluwer Academic Publishers.
53
Parunak, V. D., Sauter, J., and Clark, S. (1998) Toward the specification and design of industrial synthetic ecosystems. In Singh, M., Rao, A., and Wooldridge, M. (eds.) Intelligent agents IV: Agent theories, architectures, and languages, Lecture notes in artificial intelligence (LNAI), vol. 1365, pp. 45–60. Springer Verlag.
54
Parunak, V. D., Brueckner, S., Sauter, J., and Matthews, R. (2000) Distinguishing environmental properties and agent dynamics: A case study in abstraction and alternate modelling technologies. In Omicini, A., Tolksdorf, R., and Zambonelli, F. (eds.) Engineering societies in the agents’ world, Lecture notes in artificial intelligence (LNAI), vol. 1972, pp. 19–33. Springer Verlag.
55
Parunak, V. D. and Brueckner, S. (2001) Entropy and self-organization in agent systems. In proceedings of the 5th international conference on Autonomous agents, pp. 124–130. ACM Press.
56
Parunak, V. D., Brueckner, S., and Sauter, J. (2002) Digital pheromone mechanisms for coordination of unmanned vehicles. In proceedings of 1st international joint conference on Autonomous agents and multiagent systems (AAMAS), pp. 449–450. ACM Press.
57
Petersson, K. (2002) Handbook for application development with the OpenSIS. Technical report, no. EMW/FT/K–NBF:025. Ericsson.
58
Popper, K. R. (1992) Further remarks on reduction, 1981. In The open universe: An argument for indeterminism, pp. 163–174. Routledge.
59
Quine, W. V. (1995) From stimulus to science. Harvard University Press.
60
Randell, B. (2002) Challenges and directions for dependable computing: Some reflections. In proceedings of 41st meeting on Challenges and directions for dependable computing, International federation for information processing working group on Dependable computing and fault tolerance (IFIP WG10.4), January 4–8, Saint John, Virgin Islands, USA.
61
Robinson, K. (2004) Breaking down the barriers. In Kincaid, B. (ed.) Journal of RUSI Defence systems, Royal united services institute for Defence and security studies (RUSI), vol. 7(1), pp. 62–65. Sovereign Publications.
62
Rinaldi, S. M., Peerenboom, J. P., and Kelly, T. K. (2001) Identifying, understanding, and analyzing critical infrastructure interdependencies. In IEEE Control systems magazine, pp. 11–25. IEEE Press.
63
Schmidt, K. and Bannon, L. (1992) Taking CSCW seriously: Supporting articulation work. In international journal of Computer supported cooperative work, vol. 1(1–2), pp. 7–40. Kluwer Academic Publishers.
64
Schmidt, K. and Simone, C. (1996) Coordination mechanisms: Towards a conceptual foundation of CSCW systems design. In international journal of Computer supported cooperative work, vol. 5(2–3), pp. 155–200. Kluwer Academic Publishers.
65
Schmidt, K. and Simone, C. (2000) Mind the gap: Towards a unified view of CSCW. In proceedings of 4th international conference on Design of cooperative systems (COOP).
66
Schreiber, G. et al. (1999) Knowledge engineering and management: The CommonKADS methodology. MIT Press.
67
Schumacher, M. (2001) Objective coordination in multi-agent system engineering: Design and implementation. Lecture notes in artificial intelligence (LNAI), vol. 2039. Springer Verlag.
68
Shannon, C. E. and Weaver, W. (1998) The mathematical theory of communication. University of Illinois Press.
69
Shapiro, S. C. (ed.) Encyclopedia of artificial intelligence. Wiley Interscience.
70
Singh, M. (1998) Agent communication languages: Rethinking the principles. In IEEE Computer, vol. 31(12), pp. 40–47. IEEE Press.
71
Stafford, T. S. (2003) E-services. In Communications of the ACM, vol. 46(6), pp. 26–34. ACM Press.
72
Steels, L. and McDermott, J. (1993) The knowledge level in expert systems: Conversations and commentary. Academic Press.
73
Tennenhouse, D. (2000) Proactive computing. In Communications of the ACM, vol. 43(5), pp. 43–50. ACM Press.
74
Turing, A. M. (1936) On computable numbers, with an application to the Entscheidungsproblem. In proceedings of London mathematical society, ser. 2, vol. 42, pp. 230–265.
75
Turing, A. M. (1946) Proposed electronic calculator. In report of National physical laboratory.
76
Turing, A. M. (1950) Computing machinery and intelligence. In Mind, vol. 59(236), pp. 433–460. http://www.loebner.net/Prizef/TuringArticle.html.
77
Waldo, J. (1999) The JINI architecture for network-centric computing. In Communications of the ACM, pp. 76–82. ACM Press.
78
Weaver, W. (1948) Science and complexity. In American scientist, vol. 36, pp. 536–544. Rockefeller Foundation.
79
Wegner, P. (1997) Why interaction is more powerful than algorithms. In Communications of the ACM, vol. 40(5), pp. 80–91. ACM Press.
80
Wegner, P. (1998) Interactive foundations of computing. In Theoretical computer science, vol. 192(2).
81
Wegner, P. (1999) Toward empirical computer science. In The Monist, vol. 82(1).
82
Wellman, M. P. (1993) A market-oriented programming environment and its application to distributed multicommodity flow problems. In journal of Artificial intelligence research, vol. 1, pp. 1–23. Morgan Kaufmann Publishers.
83
Wellman, M. P. and Wurman, P. R. (1998) Market-aware agents for a multiagent world. In Robotics and autonomous systems, vol. 24, pp. 115–125. Elsevier Science.
84
Wooldridge, M., Jennings, N. R., and Kinny, D. (2000) The GAIA methodology for agent-oriented analysis and design. In journal of Autonomous agents and multi-agent systems, vol. 3(3), pp. 285–312. Kluwer Academic Publishers.
85
Wooldridge, M. and Ciancarini, P. (2001) Agent-oriented software engineering: The state of the art. In Ciancarini, P. and Wooldridge, M. (eds.) Agent-oriented software engineering, Lecture notes in artificial intelligence (LNAI), vol. 1957, pp. 1–28. Springer Verlag.
86
Ydén, K. (2003) Krigsvetenskap och nätverksbaserat försvar [Military science and network-based defence]. Research report, no. 83353:1. Swedish National Defence College.
87
Ygge, F. and Akkermans, H. (2000) Resource-oriented multi-commodity market algorithms. In journal of Autonomous agents and multi-agent systems, vol. 3(1), pp. 53–72. Kluwer Academic Publishers.
88
Zambonelli, F. and Parunak, V. D. (2002) Signs of a revolution in computer science and software engineering. In Petta, P., Tolksdorf, R., and Zambonelli, F. (eds.) Engineering societies in the agents’ world, Lecture notes in computer science (LNCS), vol. 2577, pp. 13–28. Springer Verlag.