NEXT GENERATION OF OPERATING SYSTEMS DESIGN BASED ON KNOWLEDGE ABSTRACTION

Mauro Marcelo Mattos, Dr.
FURB – University of Blumenau – Campus IV, R. Braz Wanka, 238 – CP 1507 – CEP 89035-160
[email protected]
ABSTRACT

This work presents an overview of an endogenous, self-adaptive and self-reconfigurable approach to operating system design that is being built to improve the system's usability. It is based on (i) a hyper-dimensional world model, (ii) the DEVS formalism as a runtime environment and (iii) the concept of plans instead of programs.

KEYWORDS

Knowledge-based operating systems, Operating systems design, Pervasive computing.
1. INTRODUCTION

As computing continues to become more accessible, security problems increase. Networking and user friendliness also imply increased vulnerability; sharing and protection are conflicting goals. It is in this sense that Hebbard (1980) states: "It's important to note that probably no large operating system using current design technology can withstand a determined and well-coordinated attack, and that most such documented penetrations have been remarkably easy". Since the operating system normally represents only a small portion of the total base of software that runs on a particular system, Deitel (1984) states: "it is much easier to make a system more secure if security is designed into the system from the beginning rather than retrofitted to an existing system. Security measures should be implemented throughout a computer system." Another disturbing trend generating considerable interest in information security is the accelerating pace of security exploitation. According to Waters (2004), "Three years ago, the time between discovery of a vulnerability and the exploit was maybe 500 days," she explained. "Now it's down to fewer than 40 days. As soon as a flaw is discovered, someone is ready to launch an attack that is going to exploit that flaw." When we add to this context the fact that today's commercial operating systems are descendants of some version of old Unix, it is not difficult to anticipate that these problems will persist as long as we continue building operating systems as we do today. Traditional operating systems support the notion of a hardware abstraction level in which each application is supposed to have its own processor (and other resources). This situation, together with the fact that, in general, all commercial operating systems are based on the multitasking concept, contributes to problems that were identified 30 years ago (Linde, 1975).
These problems range from security to usability, including a lack of adequate behaviour under fluctuating execution conditions and concerns about user privacy. Besides this, two other concepts contribute to making matters worse: (i) the operator concept and (ii) the program concept (Mattos, 2003). The operator function was necessary during the first years of computing, since computers were big and difficult to use. At that time, operators were responsible for turning the machine on and off, starting programs, collecting reports, restarting programs, and so on. This scenario changed as computers became smaller, cheaper and faster, as they are today. However, what had been a real need in the past is still with us today, as if there were no other way to interact with computers. In fact, today we are all operators: all of us who use some kind of computer (desktops, palmtops, cellular phones). We are trained today to press virtual buttons the same way that former operators were trained to press real buttons on the panels of those old mainframes. This aspect has various consequences, the principal one being the program concept. A program can be thought of as the programmer's hands extended, virtually, inside our machines. The programmer has knowledge about some specific domain and knows how to establish the correct sequence of steps in order to solve a problem. In this scenario, we are users of such routines; in other words: operators. In this context, programs are the means by which programmers implement their procedural knowledge about the problem domain. These limitations have been a central driving force behind the creation of a new operating system based on knowledge abstraction. The main goal is to bring together knowledge from artificial intelligence, robotics and physics in order to produce a new intelligent hybrid operating system that copes with those problems. This work presents an overview of a knowledge-based approach to next-generation operating system design that is being built based on Mattos (2003). The framework is based on (i) a hyper-dimensional world model, (ii) the DEVS formalism (Ziegler and Sarjoughian, 2002) as a runtime environment and (iii) the concept of plans instead of programs. This paper is organized as follows: section 2 presents an overview of the architecture we are building; section 3 discusses an aspect related to knowledge acquisition in an operating system context; the paper ends with some conclusions and further work.
2. KNOWLEDGE-BASED OPERATING SYSTEM

According to Mattos (2003), several research works described in the literature have aimed to develop a complete knowledge-based operating system. Other approaches basically consist of applying artificial intelligence techniques, by making kernel implants, in order to obtain better user interfaces on traditional operating systems. However, one important aspect to be considered is that all of them have failed to specify clearly what knowledge means in the operating system context. This aspect has transformed those supposedly new operating system projects into traditional operating system architectures with a lot of specialized libraries running over some multitasking platform. The first step in our work was therefore to define what a knowledge-based operating system is: "an embodied, situated, adaptive and autonomic system based on knowledge abstraction which has identity and intelligent behavior when executed" (Mattos, 2003). Knowledge in this context is conceived as a set of logical-algebraic operational structures that enables one to organize the functioning system according to laws of interconnection and behavior. The identity aspect results from the characteristics of embodiment, situatedness, adaptiveness and autonomy. Those characteristics enable the system to perceive, in an individualized manner, a set of events occurring at some instant of time. The intelligent behavior results from the previous characteristics plus the relationship the system has with the surrounding environment. The last important aspect to be considered is that all of this "intelligent behavior" is possible only while the system is running. This means that if (or when) the system is turned off and later turned on again, we are faced with a symbol-grounding problem (updating the surrounding characteristics in order to present intelligent behavior in this "new possible world").
The whole system is built inside a shell (an endogenous characteristic); outside is the real world. A hyper-dimensional world model (fig. 1) enables the entire system to perceive evolving and/or fluctuating execution conditions. We have identified three dimensions on which such a new operating system paradigm must be based: (i) the physical dimension, (ii) the logical dimension and (iii) the temporal dimension. The physical dimension describes the physical components and their structural relationships. The logical dimension describes the functional characteristics of each physical component; this description is called the functional context of a device (DFC). A state machine describes the dynamic aspects of each component's behavior. By joining together the functional contexts of all physical devices described in the physical dimension, we obtain the world's physical context (WPC), according to the operation:

WPC = DFC1 ⊕ DFC2 ⊕ . . . ⊕ DFCn

where the operation ⊕ denotes concatenation. As the WPC describes the physical availability of the system's world model, the next logical level is referred to as the device's logical context (DLC), which describes the logical availability of that device. Here, too, a state machine describes the dynamic aspects of the component's behavior.
Figure 1. Hyper-dimensional World Model.

Combining the logical contexts of all logical devices described in the logical dimension (fig. 2), we obtain the world's logical context (WLC), according to the operation:

WLC = DLC1 ⊕ DLC2 ⊕ . . . ⊕ DLCn
Figure 2. Overall Architecture of the System
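The composition of device contexts into world contexts can be sketched in a few lines. This is an illustrative assumption: the paper defines only the ⊕ (concatenation) operation, so the `DeviceContext` class and `compose` function below are hypothetical names, not part of the proposed system.

```python
# Minimal sketch of world-context composition via concatenation (⊕).
# DeviceContext, compose, wpc and wlc are illustrative names only.

from dataclasses import dataclass

@dataclass
class DeviceContext:
    device: str   # device identifier
    state: str    # current state of the device's state machine

def compose(contexts):
    """WC = C1 ⊕ C2 ⊕ ... ⊕ Cn, where ⊕ is concatenation."""
    return tuple((c.device, c.state) for c in contexts)

# Functional (physical) contexts of each device ...
dfcs = [DeviceContext("disk", "idle"), DeviceContext("nic", "active")]
wpc = compose(dfcs)   # world's physical context (WPC)

# ... and, one level up, logical contexts compose the same way:
dlcs = [DeviceContext("filesystem", "mounted"), DeviceContext("socket", "open")]
wlc = compose(dlcs)   # world's logical context (WLC)
print(wpc, wlc)
```

The same operation serves both levels, which is why the WPC and WLC formulas have identical shape.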
One important aspect to consider is that a change in the availability of one logical device affects the whole system. This situation is immediately sensed by all logical devices of the world model, in such a way that any fluctuation in the availability of resources will be noticed by all applications running in the system (including the OS). The main advantage of this structure is that, besides summarizing all possible states of the world model, this information is instantaneously made available to all applications running in the system. This is because the DLC is in fact a sub-state of its DFC: any hardware event that changes the DFC state is sensed immediately, the DLC is affected at once, and the change is propagated to the applications running in the system. The temporal dimension is related to the fact that any intelligent being perceives the flow of time in a natural manner. In general, our software constructions perceive time by calling a specific system API service that returns the date or time of day as a number to be used in later comparisons. In a real production environment, there are many hardware interrupts during an application's time slice, which makes it difficult to guarantee that an application will not be interrupted by hardware events once it receives the processor. One consequence of this is that the application does not know what happened in its previous execution or what its behavior was in previous executions. Thus there is no history, and if there is no history there is no knowledge, mainly because knowledge is acquired from the history of past events: we learn from our previous experiences. As a means of resolving this question, the proposed model has adopted the DEVS formalism (Ziegler and Sarjoughian, 2002) as a run-time environment.
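The propagation chain described above (hardware event → DFC state → DLC sub-state → running applications) can be sketched as an observer chain. This is a hedged illustration, not the paper's implementation; all class names, states and the subscription mechanism are assumptions.

```python
# Illustrative sketch of event propagation: a hardware event changes a
# DFC's state, the dependent DLC is updated immediately (it is a
# sub-state of its DFC), and all subscribed applications are notified.

class DFC:
    def __init__(self, name):
        self.name, self.state, self.dlc = name, "ok", None

    def hardware_event(self, new_state):
        self.state = new_state
        if self.dlc:                              # DLC is a sub-state of
            self.dlc.on_dfc_change(new_state)     # its DFC: affected at once

class DLC:
    def __init__(self, dfc):
        self.state, self.subscribers = "available", []
        dfc.dlc = self

    def on_dfc_change(self, dfc_state):
        self.state = "available" if dfc_state == "ok" else "degraded"
        for app in self.subscribers:              # propagate to applications
            app(self.state)

disk = DFC("disk")
fs = DLC(disk)
seen = []
fs.subscribers.append(seen.append)   # an "application" watching the DLC
disk.hardware_event("failing")       # hardware event ...
print(seen)                          # ... noticed by the application
```

The key property the paper relies on is that no polling is involved: the notification happens at the moment the DFC state changes.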
This model, associated with the parallel functional decision trees specified in Schaad (1998), enables the system to substitute the simulation behavior of the DEVS specification and turn it into a run-time environment in which applications can instantaneously sense not only the current internal world situation but also the flow of time. An application can thus know, without recording log files, what happened in past executions and decide which path to follow based on this historical information and the current situation of the world. Another important aspect is that, by using the DEVS formalism as a run-time environment, it is possible for applications to sense the "presence" of other applications running at the same instant of time. This condition is perceived as a delay (or advance) relative to previous executions at a particular point of the 'program' (or plan, in the proposed model). As the run-time model lets applications notice immediately that they are delayed (or advanced) in comparison with previous executions, applications can themselves choose the logical paths that better adapt them to a particular world situation. It is important to point out that this implies that the programmer has to be aware of other possibilities instead of just implementing the fastest (or most efficient) algorithm that solves some problem in the application domain.
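The delay/advance sensing idea can be sketched as follows: at a given plan point, the application compares its elapsed time with the times recorded in previous executions and picks a branch accordingly. The history store, the checkpoint name and the 20% thresholds below are assumptions for illustration; they are not part of the DEVS formalism or the paper's model.

```python
# Hedged sketch of adaptive branching based on timing history at a plan
# point. "checkpoint_A" and the 20% tolerance are illustrative choices.

history = {"checkpoint_A": [0.8, 0.9, 1.0]}   # elapsed seconds in past runs

def choose_branch(point, elapsed):
    past = history.get(point)
    if not past:
        return "default"              # no history: no knowledge yet
    avg = sum(past) / len(past)
    if elapsed > avg * 1.2:
        return "fast_path"            # delayed vs. previous runs: adapt
    if elapsed < avg * 0.8:
        return "thorough_path"        # ahead of schedule: afford more work
    return "default"

print(choose_branch("checkpoint_A", 1.5))   # delayed -> fast_path
```

This is exactly the programmer obligation the paragraph above mentions: the plan must offer alternative paths for the run-time comparison to choose between.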
3. KNOWLEDGE ACQUISITION IN THE O.S. CONTEXT

It is well known that the knowledge acquisition process is a significant obstacle to the construction of knowledge-based systems. The key question is how we may effectively acquire the knowledge that will be implemented in the knowledge base. In some cases, the knowledge engineer may be able to gather adequate and sufficient information from non-human sources such as textbooks and manuals. Usually, however, one needs to consult a practicing expert. In an operating system environment, this is not an easy task. It is usually done by hooking the OS API calls and recording logs for further analysis, which is a time- and resource-consuming process. The main drawbacks of this approach are: (i) the information-gathering process impacts overall performance, influencing other applications that are not involved in the application context being considered; (ii) this impact on performance also interferes with the application being considered; and (iii) this scenario is probably different from the one where the application was developed (Mattos, 2003). In our proposed model we make use, at run time, of a dynamic model that describes the software to be executed. This can be achieved by combining two technologies: (i) a scheduling strategy based on the DEVS formalism, and (ii) parallel functional decision trees (Schaad, 1998), which implement new concepts able to address the coherence-reactivity trade-off and the problems of active perception and context-dependent plan use.
This framework can make it possible for the operating system to acquire knowledge about what is going on at some point in the execution of an application. The proposed application model consists of representing the application's dynamic model as a state machine. Time-stamping the dynamic model as the code is being executed makes it possible for the operating system to acquire an application's profile in a natural way, i.e. based not on external characteristics such as memory use, execution time, etc., but on internal ones derived from the dynamic aspects of its execution. Each time an application changes its execution flow, the system records the time at which it happened at the appropriate position in the running model. This model shows some important characteristics:
• By using the information collected and updated in real time, after several runs the operating system can know in advance, based on the actual state of the system, what kind of impact such an application could cause if started at that moment. Instead of informing the user that the system does not have enough resources available to run the application (as is done now), the message can rather help the user, for example, by suggesting closing some application that is not so important at this time;
• When an error occurs, the system marks the model with a flag. In the future, when this application starts running again, the system can follow its path in order to prevent the problem from damaging other applications. This trace can also be used by the application developer to fix the bug;
• A more effective and wider use of formal techniques can be employed in order to obtain better-quality software products;
• Security restrictions can be developed and applied to the system's world model in such a way that safer environments can be obtained;
• As the application either knows or can query the system's internal state, it can change its own parameters in order to achieve better quality of service;
• Instead of working with problem reports collected by call centers, developers can now make use of a kind of black-box technology similar to that used by the airline industry in planes. When a crash happens, it is possible to consider not only the application's internal state but also the system's internal state when debugging;
• Simulation can be applied to excite and train the model, as is done in neural networks' learning processes. Then, instead of debugging the code, it is possible to follow its behavior before delivering the code. The trained model, supplied to the operating system, will improve confidence in the program's behavior. If, or when, a problem occurs, the operating system can record it in the model, and the model can be used by software developers to identify the failure point more precisely. This improves the debugging process, since it is no longer necessary to generate so many execution logs or to spend time analyzing them.
Considering that a knowledge-based operating system has some kind of identity property, it is possible to promote the system to the category of an agent. Being an agent, the theory of organizations in Multi-Agent Systems (MAS) can be adopted in order to facilitate interactions with other agents living in surrounding environments. A Multi-Agent System organization is usually conceived of as a global set of constraints that steers the agents' behavior towards the socially intended one.
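The time-stamped dynamic model described above can be sketched as a state machine that records every change of execution flow and supports the error-flag marking used for tracing. The `DynamicModel` class and its method names are hypothetical; the paper does not prescribe an API.

```python
# Minimal sketch of the application model: a state machine that is
# time-stamped as the code executes, yielding a profile based on the
# dynamics of execution rather than on external metrics.

import time

class DynamicModel:
    def __init__(self):
        self.transitions = []   # (from_state, to_state, timestamp)
        self.flags = {}         # states marked when an error occurred
        self.state = "start"

    def transition(self, new_state):
        # each change of execution flow is stamped the moment it happens
        self.transitions.append((self.state, new_state, time.monotonic()))
        self.state = new_state

    def mark_error(self):
        # black-box style marker: flags the state where the error occurred
        self.flags[self.state] = "error"

m = DynamicModel()
m.transition("loading")
m.transition("processing")
m.mark_error()
print(m.flags)
```

After several runs, the accumulated timestamps in `transitions` would be the raw material for the profile and impact prediction described in the first bullet above.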
In the MAS realm, an organizational model normally achieves this goal by focusing on the structure, the functioning, or the deontic aspects of the MAS's organization. The structural aspect defines the agents' relations through the notions of roles and links. The functional aspect describes how a MAS achieves its global goals, i.e., how these goals are decomposed (into plans) and distributed to the agents (as missions). The deontic aspect describes the permissions and obligations that a role has for its missions. In this context, any classical model used to describe or design an organization (agent-centered or organization-centered) can be adopted to enable the agents' cooperation in improving the efficiency of some particular solution for a problem that:
• the agent does not have enough resources (or knowledge) to solve, or
• even having enough knowledge and/or resources, the agent knows that its neighbors have the capacity to solve faster.
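The three organizational aspects just listed can be encoded as plain data structures. This encoding is an assumption for illustration only (it is not a standard MAS library, and the role, mission and norm below are invented examples).

```python
# Illustrative encoding of the three aspects of a MAS organization:
# structure (roles and links), functioning (goals decomposed into plans,
# distributed as missions) and deontics (permissions/obligations).

from dataclasses import dataclass, field

@dataclass
class Role:                         # structural aspect
    name: str
    links: list = field(default_factory=list)

@dataclass
class Mission:                      # functional aspect
    goal: str
    plan: list                      # decomposition of the goal into steps

@dataclass
class Deontic:                      # deontic aspect
    role: Role
    mission: Mission
    modality: str                   # "permission" or "obligation"

scheduler = Role("scheduler")
balance = Mission("balance load", ["measure", "migrate", "verify"])
norm = Deontic(scheduler, balance, "obligation")
print(norm.modality)
```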
4. CONCLUSIONS AND FURTHER WORK

This is an ongoing research project aimed at developing an operating system architecture that supplies appropriate support for the development of more intelligent applications and that allows interaction between the user and the computational resources in a more transparent way. In order to achieve those objectives, this work has presented an overview of an endogenous, self-adaptive and self-reconfigurable approach to operating system design based on (i) a hyper-dimensional world model, (ii) the DEVS formalism as a runtime environment and (iii) the concept of plans instead of programs. Besides supplying the operating system with a dynamic model produced during the analysis and design phases, the system adopts the DEVS formalism as a run-time environment in order to define a kind of biological clock (as exists in humans). This approach enables applications to instantaneously perceive the presence of other applications sharing the machine's resources at any time during execution. We have briefly described an application model that proposes to use information produced during the analysis and design phases of the software development process, supplying the operating system with a dynamic model in order to provide a semantic aggregate. Our hope is that with this information it will be easier to follow an application's path during its execution in order to build a private application profile. However, there are other critical directions in which we must refine our design: (i) automatic self-construction of the internal world model by the operating system, in order to achieve a kind of machine identity; (ii) specification of internal knowledge representation and handling, in order to make sharing and communication between different machine identities possible; and (iii) consideration of the security and privacy aspects of the project.
According to our research, a new generation of operating systems depends on resources that should be provided by the operating system's nucleus, in such a way as to unify control with knowledge of what the applications are accomplishing. In doing so, several of the aspects discussed above can be addressed in a more natural way. This should contribute to the development of more efficient applications: more user-oriented, instead of the application-oriented type available today. Moreover, current trends indicate a movement towards incorporating artificial intelligence concepts into the several stages that comprise an operating system architecture. This can make operating systems more user-friendly and really enable them to do what they are supposed to do: act as a resource administrator, as stated in the original concept.
REFERENCES

Deitel, H.M., 1984. An Introduction to Operating Systems. Addison-Wesley Publishing Company.

Hebbard, B. et al., 1980. A Penetration Analysis of the Michigan Terminal System. Operating Systems Review, Vol. 14, No. 1, pp. 7-20.

Linde, R., 1975. Operating Systems Penetration. AFIPS Conference Proceedings, Vol. 44.

Mattos, M.M., 2003. Fundamentos conceituais para a construção de sistemas operacionais baseados em conhecimento. Doctoral Thesis. UFSC - Federal University of Santa Catarina, Brazil.

Schaad, R., 1998. Representation and Execution of Situated Action Sequences. Doctoral Thesis. Universität Zürich.

Waters, J.K., 2004. Gates Set to Kick Off Security Conference. [Online] Available at www.adtmag.com/article.asp?id=8970, 2/23/2004.

Ziegler, B.P. and Sarjoughian, H.S., 2002. DEVS Component-based M&S Framework: An Introduction. Proc. of the 2002 AI, Simulation and Planning in High Autonomy Systems, F.J. Barros and N. Giambiasi (Eds.), Lisbon, Portugal.