Autonomous Perception and Decision Making in Building Automation



Rosemarie Velik, Member, IEEE, and Gerhard Zucker, Member, IEEE

Abstract—System complexity has reached a level where it is hard to apply existing information analysis methods to automatically derive appropriate decisions. Building automation is on the verge of being unable to extract relevant information and control a building accordingly. Many different industries in today's automation could provide information by means of different sensors, but the ability to integrate this information is missing. This paper describes approaches for coping with this increased complexity by introducing models for perception and decision making that are based on findings in neuroscience and psychoanalysis, scientific disciplines that are far removed from engineering but nevertheless promise valuable contributions to intelligent automation.

Index Terms—Autonomous decision making, autonomous perception, bionics, building automation, cognitive automation.

I. INTRODUCTION

Classical building automation is mainly concerned with monitoring the environment (e.g., temperature and humidity) and adjusting it to predefined value ranges targeting comfort and energy preservation. However, as outlined in [1], in the future, this will shift toward more complex applications needing a detailed perception of what is going on in a building and a selection of adequate reactions based on this perception. More and more sensor information will be needed for processing. Existing approaches are challenged by this abundance of data and by the question of how to respond to it. New concepts are needed to handle these upcoming demands. In recent years, research has started to focus on bionic principles for designing new concepts in this area. The aim of modern building automation is to build technical systems that are capable of perceiving their environment and reacting adequately to the situations occurring in it. Artificial intelligence has long tried to achieve humanlike abilities in decision making and combinational logic. The authors, however, believe that a key to these abilities is not found in neural networks [2] or fuzzy logic [3], but rather at much higher levels. It is the ability to perceive objects, events, scenarios, and situations in an environment.

In the year 2000, under the supervision of Dietmar Dietrich, an interdisciplinary research team of scientists was formed at the Institute of Computer Technology (ICT), Vienna University of Technology, which has since then focused on developing next-generation intelligent automation systems. The main application domain for these systems lies in the field of building automation. A second domain is autonomous robots. For processing and interpreting vast amounts of sensory data and making decisions based on them, we use the human mind as an archetype for model development. The motivation to use the human mind and its information-processing principles came from the consideration that humans perform so well in perceiving their environment and (re)acting adequately to it. Existing technical solutions cannot, by far, compete with human cognitive capabilities [4]. By adopting these concepts, technical systems shall operate with an efficiency similar to that of human operators.

Over the last ten years, we have suggested different models for perception and decision making. In this paper, we give an overview of the results of this research, focusing on the three main models [Artificial Recognition System–PerCeption (ARS-PC), NeuroSym, and Artificial Recognition System–PsychoAnalysis (ARS-PA)] that were defined in the research field of cognitive automation. The first two are concerned with autonomous perception; the last deals with autonomous decision making in technical systems. Inspirations for model development are taken from the research disciplines of neuroscience, psychology, and psychoanalysis, since these disciplines contain the necessary knowledge to model abilities like perception, recognition, and decision making as they are found in the human mind.

II. PERCEPTION

In the following, the two models of perception are presented. The first one (ARS-PC) is based on symbolic information processing, and the second one (NeuroSym) introduces a new information processing principle based on so-called neurosymbols.

A. Model for Symbolic Perception


The first model that is presented is rooted in the field of building automation for automatic surveillance systems. Assuming that buildings are equipped with a large number of diverse sensors—something that can already be found in office buildings today and is an expected development for the near future—we need to process the information coming from these sensors. The ARS-PC model is a layered model for sensor data processing and was introduced in [5] and [6].




Fig. 1. Overview of ARS-PC model.

Sensor data are processed bottom–up in three layers to perceive different scenarios that occur in buildings. The layers are labeled microsymbol layer, snapshot symbol layer, and representation symbol layer (see Fig. 1). In these layers, information is processed in terms of symbols, which are called microsymbols, snapshot symbols, and representation symbols. They all exist simultaneously, but on different levels of abstraction. A symbol is seen as a representation of a collection of information. The model uses static definitions; thus, symbols and correlations between symbols are predefined. This means that the system is only capable of recognizing known information patterns of the sensors. Symbols can be created; they have properties, which can be updated; and they can be deleted. In Fig. 1, symbols are shown as cuboids of different sizes, indicating that their level of sophistication increases with each layer. The number of symbols differs from layer to layer: At the lowest layer, a large number of microsymbols occur, while at the representation layer, there exist only a few symbols, each of which represents a lot of information of higher quality. The three types are defined as follows.

1) Microsymbols: Microsymbols are formed from sensor input data. They form the basis of the symbol alphabet and bear the least amount of information. Similar to the many different sensations that the human mind has to process every moment, a microsymbol is created from a few single sensor inputs (e.g., motion detectors or light barriers) at a specific instant of time. Microsymbols are created whenever the real world changes and this change causes sensors to trigger. In the targeted application, microsymbols deliver basic information like where movements and objects have been detected.

2) Snapshot Symbols: A group of microsymbols is combined to create one snapshot symbol. These symbols represent a part of the world at a certain moment of time. The combined snapshot symbols represent how the system perceives the world at a given time instant. Whenever the system perceives a situation or an object of interest, it creates a corresponding snapshot symbol. However, it is important that snapshot symbols are solely created from information that represents the current state of the outside world.

This results in an increased creation of snapshot symbols, as there are no associations between new snapshot symbols and previously created ones. For example, if the system perceives a person moving around, multiple symbols at different positions and with different time stamps are created. Each of these symbols only exists in the single instant of its respective detection. Establishing associations between these symbols happens at the next symbol level.

3) Representation Symbols: The third level of symbolization is the representation of the world (i.e., the perceived environment). Similar to snapshot symbols, representation symbols are used to present what the system perceives. The fundamental difference is that representation symbols are created and updated by establishing associations between snapshot symbols. Thereby, the representation level contains not only the information on how the world is perceived at the current instant but also the history of this world representation. Compared to the lower symbol levels, there exist only a few representation symbols, and these are seldom created or destroyed; only their properties are updated regularly. On the representation level, the system has information about the current state of the world together with the history of recent events. Following the aforementioned example, the representation level is supposed to contain only one person symbol as long as there is only one person physically present. All occurrences of snapshot symbols for this person are—if possible—associated with one representation symbol. As the positions of the different snapshot symbols vary, the representation symbol experiences a series of updates. It is important to note that the world representation does not contain the entirety of all available sensory information but just what is defined as relevant. If, for example, a person walks around, the world representation does not record the exact positions at which the person has placed his/her feet. Instead, it presents just a position for this person, which may be more or less accurate.

The representation layer can be regarded as the interface to applications. Applications are required to monitor the world representation in order to obtain the information needed to fulfill their specific tasks. The ARS-PC approach relieves applications from handling large amounts of sensory information and provides a condensed and filtered composition of all this information in a highly reusable way. When an application is running, it searches the existing world representation for scenarios that the application knows (e.g., an elderly person has collapsed on the floor). The events that are required for the scenario to take place can be found on the representation level. The application then augments the representation by noting that it has found a scenario. It does so by creating a scenario symbol. This makes it possible to study the output of applications later. Additionally, an application can create higher level scenarios by linking together lower level scenarios of other applications. That way, the hierarchy can be extended even further by having lower level applications looking for simple scenarios and higher level applications using these scenarios to find more complex scenarios.
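To make the layered symbolization concrete, the following Python sketch gives a minimal reading of the mechanism described above. All class names, the position-based grouping, and the distance-based association rule are our own illustrative assumptions, not the published ARS-PC implementation.

```python
import math
from dataclasses import dataclass, field

@dataclass
class MicroSymbol:           # lowest layer: one triggered sensor at one instant
    sensor_id: str
    position: tuple          # (x, y) of the triggering sensor
    timestamp: float

@dataclass
class SnapshotSymbol:        # middle layer: a perceived object at one instant
    position: tuple
    timestamp: float

@dataclass
class RepresentationSymbol:  # top layer: a persistent object with history
    position: tuple
    history: list = field(default_factory=list)

    def update(self, snapshot: SnapshotSymbol) -> None:
        """Representation symbols persist; only their properties are updated."""
        self.history.append(snapshot)
        self.position = snapshot.position

def to_snapshot(micros: list) -> SnapshotSymbol:
    """Combine a group of microsymbols into one snapshot symbol."""
    xs = [m.position[0] for m in micros]
    ys = [m.position[1] for m in micros]
    t = max(m.timestamp for m in micros)
    return SnapshotSymbol((sum(xs) / len(xs), sum(ys) / len(ys)), t)

def associate(snapshot, representations, max_dist=1.5):
    """Attach a snapshot to a nearby representation symbol, or create one."""
    for rep in representations:
        if math.dist(rep.position, snapshot.position) <= max_dist:
            rep.update(snapshot)
            return rep
    rep = RepresentationSymbol(snapshot.position, [snapshot])
    representations.append(rep)
    return rep
```

In this toy version, a person walking past several floor sensors yields a new snapshot symbol per instant but only one representation symbol whose position and history are continuously updated, mirroring the person example above.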


Fig. 2. Overview of NeuroSym model.

The ARS-PC model is based partly on findings from brain research and partly on engineering methods. The concepts taken from brain sciences are the hierarchical processing of information in different layers and the fact that information is processed in terms of symbols. In [7], this model was extended by introducing additional neuroscientific and neuropsychological concepts. The author of [7] adapted symbolic information processing in a way that is more compliant with the neuroscientific model of information processing in the perceptual system of the human brain as described by Luria [8]. He softened the rule that microsymbols and snapshot symbols can only contain information perceived in one instant of time, because this rule resulted in implementation difficulties, and therefore allowed the processing of sensory information within a certain time window. Additionally, he suggested processing sensor data from the same sensor type separately at first and combining this information with information derived from other sensor types only later, which is in accordance with the neuroscientific archetype. Furthermore, in [7] and [10], a technical model for emotions was introduced, which can influence perception based on the model of Panksepp [11].

B. Model for Neurosymbolic Perception

Today, neural networks are used for control tasks and predictions [12] and also in multilevel architectures [13]. A modified application of neural networks is used in a revised model that refined the existing model and extended it with further insights about the human brain by integrating the research disciplines of neuroscience and neuropsychology; these disciplines provide the most insights into the structural organization of, and the information flow within, the perceptual system of the human brain. The model integrates neural networks with symbolic perception in a novel way.

Fig. 2 shows an overview of the NeuroSym model as it is described in [4], [14], and [15]. Perception starts with sensor values, which are processed in a so-called neurosymbolic network and result in a perception of what is going on in the environment. The perception process is assisted by mechanisms of memory, knowledge, and focus of attention. The neurosymbolic network is the central element of the model


and is concerned with neurosymbolic information processing. For the current discussion—due to limited space—we will focus only on the description of this module. For a detailed description of the other parts, see [16] and [17].

The basic information processing units of the neurosymbolic network are neurosymbols. The inspiration for using neurosymbols as elementary information processing units came from the following consideration: In the brain, information is processed by neurons. The mental correlates of neurons that fire in timed patterns are symbols (e.g., a face, a person, or a melody). Neural and symbolic information processing can be seen as information processing in the brain on two different levels of abstraction. The interesting question is whether there exists a correlation between these two levels. In fact, several studies [8], [18], [19] report on neurons in the brain that react exclusively to certain perceptions, e.g., when a face is perceived in the environment. This fact can be seen as evidence for such a correlation and was the motivation for using neurosymbols as basic information processing units.

Neurosymbols show certain characteristics of neurons and others of symbols. Neurosymbols represent perceptual images—symbolic information—like persons or faces. Each neurosymbol has an activation grade, which indicates whether the perceptual image it represents is currently present in the environment. Neurosymbols have a number of inputs and one output. Via the inputs, information about the activation grades of other neurosymbols is received. These activation grades are then summed up. If this sum exceeds a certain threshold value, the neurosymbol is activated, and information about its activation grade is transmitted via the output to other neurosymbols.

To perform complex tasks, neurosymbols have to be structured into neurosymbolic networks (see Fig. 3). A crucial question is what this interconnection of neurosymbols should look like, because a random connection of neurosymbols will not lead to the desired result. To answer this question, the structural organization of the perceptual system of the human brain, as described by Luria [8], is taken as the archetype. The starting point for perception is the sensory receptors of the different modalities (visual, acoustic, somatosensory, gustatory, and olfactory perception). This information is processed in three levels. In the first two levels, the information of each sensory modality is processed separately and in parallel. In the third level, the information of all sensory modalities is fused, resulting in a multimodal perception of the environment. In the first level, simple features are extracted from the data coming from the sensory receptors. To give an example for the visual system of the human brain, in this first level, neurons respond to features like edges, lines, colors, movements of a certain velocity in a certain direction, etc. In the second level, a combination of extracted features results in a perception of all aspects of the particular modality. For the visual system, perceptual images like faces, persons, or other objects are perceived. Finally, on the highest level, the perceptual aspects of all modalities are merged. This results, for example, in a person consisting of his visual features, his voice characteristics, and his odor. In analogy to the modular hierarchical structure of the perceptual system of the human brain, neurosymbols are combined into neurosymbolic networks.
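The activation mechanism just described can be captured in a few lines. The following Python sketch is our illustrative reading of it (the class name and the numeric ranges are assumptions, not part of the published model): a neurosymbol sums the activation grades arriving at its inputs and becomes active when the sum exceeds its threshold.

```python
class Neurosymbol:
    """A perceptual image (e.g., 'face') with neuron-like activation behavior."""

    def __init__(self, name: str, threshold: float):
        self.name = name
        self.threshold = threshold
        self.inputs = []         # upstream neurosymbols feeding this one
        self.activation = 0.0    # activation grade; 0.0 means "not perceived"

    def connect(self, source: "Neurosymbol") -> None:
        """Receive the source's activation grade via one of our inputs."""
        self.inputs.append(source)

    def update(self) -> float:
        """Sum the input activation grades; activate if the threshold is exceeded."""
        total = sum(src.activation for src in self.inputs)
        self.activation = total if total > self.threshold else 0.0
        return self.activation
```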



Fig. 3. Neurosymbolic network.

Again, sensor data are the starting point for perception. The input data are processed across several hierarchical levels into increasingly complex neurosymbolic information until they result in a multimodal perception of the environment. Neurosymbols of different hierarchical levels are labeled differently according to their function: Neurosymbols of the first level are called feature symbols, neurosymbols of the next two layers are labeled subunimodal and unimodal symbols, and neurosymbols of the highest level are referred to as multimodal symbols. Concerning the sensory modalities, we can use sensor types that have an analogy in human sensory perception, like video cameras for visual perception (the most active sensor domain, see, e.g., [20]), microphones for acoustic perception, tactile sensors for tactile perception, and chemical sensors for olfactory perception. Additionally, other sensors that have no analogy in the human senses can be integrated—like sensors for electricity or magnetism. Sensor data and neurosymbols are connected by means of forward and feedback connections. These connections are not fixed structures but can be modified by learning, thus allowing flexibility and adaptation of the system. A method for also learning temporal correlations between symbols is presented in [9].
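As an illustration of this layering, a fragment of such a network for recognizing a person from visual and acoustic input might be wired as follows, reusing the Neurosymbol class sketched above. All symbol names and thresholds are assumptions, and the subunimodal stage is omitted for brevity.

```python
# Builds on the Neurosymbol class from the previous sketch.

# Feature level: modality-specific low-level features.
edge   = Neurosymbol("edge", 0.0)
motion = Neurosymbol("motion", 0.0)
pitch  = Neurosymbol("voice_pitch", 0.0)

# Unimodal level: each sensory modality is interpreted separately.
figure = Neurosymbol("visual_figure", 0.5)
voice  = Neurosymbol("voice", 0.3)
figure.connect(edge)
figure.connect(motion)
voice.connect(pitch)

# Multimodal level: the modalities are fused into a single percept.
person = Neurosymbol("person", 2.5)   # high threshold: needs both modalities
person.connect(figure)
person.connect(voice)

# One bottom-up perception cycle: sensors set the feature activations,
# then activation propagates level by level toward the multimodal symbol.
edge.activation, motion.activation, pitch.activation = 1.0, 1.0, 1.0
for symbol in (figure, voice, person):
    symbol.update()
print(person.activation)   # 3.0: 'person' is perceived multimodally
```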

III. DECISION MAKING

Based on perceptions of the building environment, the system has to act and react accordingly. How this can work is demonstrated with the ARS-PA model. The aim of ARS-PA is to make decisions on how to react adequately to objects, events, scenarios, and situations. Inspiration for model development is taken from neuropsychoanalysis, since the psychoanalytic model that was created by Sigmund Freud and has been developed ever since contains the most consistent functional description of the human mind. It has been described in detail in [21]–[23]. The decision-making process is a multilayered process and contains a number of feedback loops. The basis for the ARS-PA model are neuropsychoanalytic research findings about the human mental apparatus. The model includes concepts of emotions, drives, and episodic and semantic memory, as well as Sigmund Freud's Id–Ego–Superego personality model. According to the model, decisions are taken based on perceived images of the world and internal states of the system, which are represented by concepts corresponding to emotions, drives, and memory of different kinds.

Fig. 4 shows an overview of the developed model. It consists of various interconnected modules. The arrows indicate informational and control flows between the different units. The architecture considers two key ideas of neuropsychoanalysis. The first is the fact that human intelligence is based on a mixture of low- and high-level mechanisms. Low-level responses are predefined and may not always be accurate, but they are quick and provide the system with a basic mode of functioning in terms of built-in goals and behavioral responses. The second key idea of the model is the usage of emotions as evaluation mechanisms on all levels of the architecture. Through emotions, the system can learn values along with the information it acquires. In this respect, the introduction of an episodic memory containing emotionally evaluated previous experiences is a very important feature of the model.

In the architecture, there exist several types of memories. Image memory is the most basic one and is used extensively by the external perception module while processing external input data. It also stores what is currently perceived in the environment in a symbolic form. For the symbolization of perceptual information, knowledge stored in semantic memory is needed. It contains facts and rules about the environment, e.g., what kinds of objects there are, how they are related to each other, what the physical laws of the world are, etc. In contrast, episodic memory consists of previously experienced episodes, which have been given an emotional rating. An episode is a sequence of situations. The Superego manages a special kind of memory that contains rules for social behavior. Necessary information for the execution of routine behaviors is stored in the procedural memory. Working memory is conceptualized as an active, explicit kind of short-term memory that supports higher level cognitive operations by holding goal-specific information and streamlining the information flow to the cognitive processes.


Fig. 4. Overview of ARS-PA model.

The whole decision-making and behavior-selection process runs as a loop: External stimuli originating from the environment are processed by the perception module using knowledge stored in the image and the semantic memory. The resulting representation of the current situation is passed on to the basic emotions module of the predecision unit. The predecision module consists of the drives module and the basic emotions module. If one of the internal variables of the perception module is about to exceed its limits, it signals this to the drives module, which, in turn, raises the intensity of a corresponding drive. If a drive intensity passes a threshold, an action tendency is invoked. If the basic emotions module does not release a competing action tendency, the decision to execute an action is passed on to the execution unit.

The basic emotions module gets its input from the perception module and the drives module. It connects stereotypical situations with action tendencies that are appropriate with a high probability. An important task of the basic emotions module is to label the behavior or action that the system has finally carried out as "good" or "bad." This rating is based on the perceived consequences (mainly on the internal states) of the executed actions. Successful actions are rewarded with lust; unsuccessful behavior leads to avoidance. Through basic emotions, the system can switch between various modes of behavior based on the perception of simple but still characteristic external or internal stimuli. This helps to focus the attention by narrowing the set of possible actions and the set of possible perceptions. The system starts to actively look for special features in the environment while suppressing others. If the predecision module does not trigger a response, perceived situations are handed over to the decision module.
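A minimal sketch of this predecision logic, under our own naming assumptions (the paper specifies the threshold behavior but no concrete data structures), could look as follows: each drive tracks an internal variable, and crossing a threshold invokes an action tendency unless the basic emotions module has released a competing one.

```python
from dataclasses import dataclass

@dataclass
class Drive:
    name: str                  # e.g., a hypothetical "energy" drive
    intensity: float = 0.0
    threshold: float = 1.0

    def action_tendency(self):
        """An action tendency is invoked once the intensity passes the threshold."""
        return f"satisfy_{self.name}" if self.intensity > self.threshold else None

def predecide(drives, competing_tendency=None):
    """Reactive path of the predecision unit: a competing action tendency
    released by the basic emotions module takes precedence; otherwise the
    strongest above-threshold drive determines the action tendency."""
    if competing_tendency is not None:
        return competing_tendency
    active = [(d.intensity, d.action_tendency()) for d in drives
              if d.action_tendency() is not None]
    return max(active)[1] if active else None   # None: defer to decision module
```

A None result corresponds to the case where the predecision module does not trigger a response and the perceived situation is handed over to the decision module.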

In the decision module, again, an emotional rating takes place—this time, by the complex emotions module. Here, current situations are matched with one or more social emotions like contempt, shame, compassion, etc. Additionally, current desires influence the decision process. The decision module heavily interacts with the episodic memory and the Superego. The episodic memory is searched for situations similar to the current one, including their emotional ratings. If no similar situation can be found, the acting-as-if module is activated, which mentally simulates different responses to a situation as well as their potential outcomes. After a final decision on how to react to a certain situation has been taken, the corresponding behaviors/actions have to be carried out physically. While actions carried out by the predecision unit are of a reactive nature, actions coming from the decision unit are of a more complex nature and draw on patterns from the procedural memory. One important fact is that the higher level decisions from the decision module can inhibit (suppress) the execution of actions selected by the predecision module.
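To complement the sketch above, the deliberative path could be outlined as follows. This is purely illustrative; the Episode structure, the similarity measure, and the simulate callback are our assumptions, not the published design.

```python
from dataclasses import dataclass

@dataclass
class Episode:
    situation: object
    action: str
    emotional_rating: float          # assigned earlier by the emotion modules

    def similar_to(self, situation) -> bool:
        return self.situation == situation   # placeholder similarity measure

def decide(situation, episodic_memory, candidate_actions, simulate):
    """Deliberative path of the decision module (illustrative sketch)."""
    # Search the episodic memory for similar, emotionally rated episodes.
    matches = [ep for ep in episodic_memory if ep.similar_to(situation)]
    if matches:
        # Reuse the action of the best-rated similar episode.
        return max(matches, key=lambda ep: ep.emotional_rating).action
    # Acting-as-if: mentally simulate each candidate response and pick
    # the one whose predicted outcome rates best.
    return max(candidate_actions, key=lambda action: simulate(situation, action))

def select_action(reactive_tendency, deliberative_action):
    """Higher level decisions can inhibit the reactive action tendency."""
    return deliberative_action if deliberative_action is not None else reactive_tendency
```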



IV. IMPLEMENTATION

To validate the developed models, different test platforms were used. To verify the concepts developed during the ARS-PC and NeuroSym projects, the kitchen of the department was taken as the test environment [25], [26]. For this purpose, the kitchen was equipped with about a hundred sensors of different types: tactile floor sensors, motion detectors, door contact sensors, and cameras. From these sensor data, different scenarios had to be perceived following the information processing principles proposed in the two models. The scenarios were activities of persons, e.g., a person making coffee, a meeting being held, a person manipulating diverse objects in the room, an unattended child being present next to the hot stove, etc. The models proved suitable for detecting all defined scenarios. By these measures, the kitchen became a system capable of autonomously perceiving what is going on in it.

In the course of time, the kitchen test environment turned out to allow only limited testing due to its spatial restriction and the relatively small number of sensors. High costs and assembly effort did not allow enlarging the physical test environment. To overcome this problem, a simulator was developed, which makes it possible to generate sensor values in order to perceive scenarios in a virtual office environment [27]. The reason for simulating the sensor values is, on the one hand, the already mentioned cost reduction for testing in comparison to real physical installations. On the other hand, the simulator makes it possible to evaluate which sensors are necessary to detect scenarios most effectively and efficiently and where they should be mounted.

For the ARS-PA model, several attempts were made to apply it to building automation. However, for a first implementation, it turned out to be advisable to test the model in an autonomous agent environment. Difficulties occurred particularly when applying concepts like emotions, drives, etc., to a building, which consists, in fact, of dead matter and has no living body with internal states that need to be represented by emotions and drives. Even when ignoring the required embodiment of emotions and drives, the question arose as to what emotions and drives could be useful for a building. Therefore, to be able to test the concepts developed in the ARS-PA model, the so-called Bubble World was developed [1]—a system that employs software agents as the containers for the automation system. Agents have often been used in automation [28] and have a long history in both computer science and automation. The Bubble World is a virtual simulated environment with virtual autonomous agents called Bubbles. These agents can navigate through a 2-D world. They can perceive their environment through sense organs. They can detect the presence of other agents, energy sources, and obstacles. The current goal of the agents is to survive in the environment by finding energy sources and filling up their energy level. Agents compete in different groups and try to find an optimum strategy in diverse (unknown) situations [22], [29], [30]. The results of the Bubble World simulation will provide a base for an implementation in building automation systems that have capabilities beyond the kitchen environment, i.e., systems that are not only able to detect and evaluate scenarios but also employ the implementation of the human psychic apparatus to improve decision making.
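A toy version of such an agent's sense-decide-act cycle might look like this; all names and numeric values are ours, and the actual Bubble World implementation in [22], [29], [30] differs.

```python
import math
from dataclasses import dataclass, field

@dataclass
class World:
    energy_sources: list = field(default_factory=list)   # (x, y) positions

class Bubble:
    """Toy agent: survive by finding energy sources in a 2-D world."""

    def __init__(self, position, energy=10.0):
        self.position = position
        self.energy = energy

    def perceive(self, world, sense_range=5.0):
        """Sense organs: detect energy sources within sensing range."""
        return [s for s in world.energy_sources
                if math.dist(self.position, s) <= sense_range]

    def step(self, world):
        """One sense-decide-act cycle."""
        self.energy -= 0.1                    # merely existing costs energy
        visible = self.perceive(world)
        if not visible:
            return                            # a real agent would explore here
        target = min(visible, key=lambda s: math.dist(self.position, s))
        self.move_toward(target)
        if math.dist(self.position, target) < 0.5:
            self.energy += 5.0                # fill up the energy level
            world.energy_sources.remove(target)

    def move_toward(self, target, speed=1.0):
        dx, dy = target[0] - self.position[0], target[1] - self.position[1]
        dist = math.hypot(dx, dy) or 1.0      # guard against zero distance
        self.position = (self.position[0] + speed * dx / dist,
                         self.position[1] + speed * dy / dist)
```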

V. DISCUSSION AND OUTLOOK

As the simulation results showed, the models covered in this paper are promising approaches for more efficient autonomous perception and decision-making processes, and many new insights can be gained from them. As this area is a completely new field of research, model development is no straightforward process. Research in this area brings along many unexpected results, and many lessons need to be learned that seem obvious in hindsight but are by far not clear at the beginning. As these results are most interesting for further research and other research groups, the most important issues are discussed in the following.

1) ARS-PC: To sum up, the ARS-PC model is a model for bottom–up information processing of sensory data for scenario detection. Bottom–up information processing means that the starting point for processing is sensor values, which are then processed in several steps. Information processing is performed in terms of symbols arranged in three layers. Associations between symbols of different hierarchical layers are defined by rules and are not subject to learning. The model proved to be successful for the detection of around 30 defined test scenarios (e.g., person makes coffee, unattended child near hot stove, meeting, elderly person fell down, etc.). In the extended version of the model, it was suggested to process information in a modular hierarchical fashion, which is in accordance with neuroscientific research findings. When introducing emotions into the model, it turned out to be difficult to define useful emotions and correlations between emotions and symbols for the control systems of buildings, a problem that is still a topic of ongoing research.

2) NeuroSym: In the NeuroSym model, research findings from neuroscience and neuropsychology are used for model design. To perceive the environment, sensor data are processed in a neurosymbolic network, and the perception process is facilitated by knowledge and the focus of attention. Through this model, a new, more schematic understanding of the information processes involved in human perception was gained, and a solution to the well-known binding problem—one of the key issues in understanding the brain—is suggested. Although this approach is new and promising, some questions are still open. Improvement could particularly be achieved for the concepts of knowledge and focus of attention. The problem so far is that only little is known about the representation and processing of abstract knowledge in the brain, about the mechanisms of the focus of attention, and about the levels on which these two concepts interact with perception. Therefore, using insights from brain research as an archetype for model development is only feasible to a certain extent. Further research in the areas of neuroscience and neuropsychology that deals with these issues is needed.

3) ARS-PA: The ARS-PA model can be summarized as a technical model for the decision making of intelligent systems based on concepts like emotions, drives, and episodic and semantic memory, as well as Sigmund Freud's Id–Ego–Superego personality model. The model provides significant schematic and analytical insights into processes taking place in the mind; this is unseen so far in its clarity. While working on the integration of neuropsychoanalysis and engineering, it became clear that psychoanalysis alone does not, by far, cover all necessary topics for a successful implementation [24].


Neuropsychoanalysis, which builds the bridge between neuroscience and psychoanalysis, is a promising field to provide more answers. Currently, the discipline of neuropsychoanalysis has not yet matured far enough to give all the answers that are needed. Therefore, it is necessary to integrate concepts from different brain research areas. Sigmund Freud's Id–Ego–Superego model is a purely psychoanalytical concept [31], [32], while episodic and semantic memory are terms used in neuropsychology [33]. The concept of emotions occurs in both disciplines [34], [35] and thus has to be merged into a common concept that can be implemented in a technical system.

On the way to creating a building automation system that employs the neuropsychoanalytic model, it also became clear that mechanisms like emotions and drives need grounding in a body. This body does not have to be a biological body but can be a virtual representation. Nevertheless, embodiment is necessary for a system to develop humanlike abilities. Thus, a model of the body has to be created. As a matter of fact, an integration of the model in the first envisioned application of building automation could not be achieved, because buildings do not have a visceral body. Instead, simulated autonomous agents were used as the testing platform. The setting of parameters like the level of certain emotions and drives had to be assumed by the system engineer and was not provided by values coming from the bodies of the agents. An integration of the agent's body with the building automation system is therefore necessary.

As an overall summary for all models, it can be said that a cooperation between engineers and brain researchers turned out to be more than recommendable. Applying knowledge about the structure and the function of the brain allows, on the one hand, building more efficient and more "intelligent" technical systems. On the other hand, the actual implementation of neurological and psychoanalytical models in technical systems can also help to identify inconsistencies in brain theories and find blind spots not considered so far.

REFERENCES

[1] D. Dietrich, G. Fodor, G. Zucker, and D. Bruckner, Simulating the Mind: A Technical Neuropsychoanalytical Approach. Berlin, Germany: Springer-Verlag, 2008.
[2] H. K. Lam and F. Leung, "Design and training for combinational neural-logic systems," IEEE Trans. Ind. Electron., vol. 54, no. 1, pp. 612–619, Feb. 2007.
[3] C.-F. Juang, C.-M. Lu, C. Lo, and C.-Y. Wang, "Ant colony optimization algorithm for fuzzy controller design and its FPGA implementation," IEEE Trans. Ind. Electron., vol. 55, no. 3, pp. 1453–1462, Mar. 2008.
[4] R. Velik and D. Bruckner, "Neuro-symbolic networks: Introduction to a new information processing principle," in Proc. Conf. Ind. Informatics, Jul. 2008, pp. 1042–1047.
[5] G. Pratl, "Processing and symbolization of ambient sensor data," Ph.D. dissertation, Vienna Univ. Technol., Vienna, Austria, 2006.
[6] G. Pratl, D. Dietrich, G. Hancke, and W. Penzhorn, "A new model for autonomous, networked control systems," IEEE Trans. Ind. Informat., vol. 3, no. 1, pp. 21–32, Feb. 2007.
[7] W. Burgstaller, "Interpretation of situations in buildings," Ph.D. dissertation, Vienna Univ. Technol., Vienna, Austria, 2007.
[8] A. Luria, The Working Brain—An Introduction in Neuropsychology. New York: Basic Books, 1973.


[9] D. Bruckner, Probabilistic Models in Building Automation—Recognizing Scenarios with Statistical Methods. Berlin, Germany: VDM-Verlag Dr. Müller, 2008.
[10] W. Burgstaller, R. Lang, P. Poerscht, and R. Velik, "Technical model of basic and complex emotions," in Proc. Conf. Ind. Informatics, 2007, pp. 1033–1038.
[11] J. Panksepp, Affective Neuroscience: The Foundations of Human and Animal Emotions. London, U.K.: Oxford Univ. Press, 1998.
[12] C.-H. Lu and C.-C. Tsai, "Adaptive predictive control with recurrent neural network for industrial processes: An application to temperature control of a variable-frequency oil-cooling machine," IEEE Trans. Ind. Electron., vol. 55, no. 3, pp. 1366–1375, Mar. 2008.
[13] C. Guo, Q. Song, and W. Cai, "A neural network assisted cascade control system for air handling unit," IEEE Trans. Ind. Electron., vol. 54, no. 1, pp. 620–628, Feb. 2007.
[14] R. Velik, R. Lang, D. Bruckner, and T. Deutsch, "Emulating the perceptual system of the brain for the purpose of sensor fusion," in Proc. Conf. Human Syst. Interactions, May 2008, pp. 657–662.
[15] R. Velik, "A bionic model for human-like machine perception," Ph.D. dissertation, Vienna Univ. Technol., Vienna, Austria, 2008.
[16] R. Velik, A Bionic Model for Human-Like Machine Perception. Berlin, Germany: VHS-Verlag, 2008.
[17] R. Velik and D. Bruckner, "A bionic approach to dynamic, multimodal scene perception and interpretation in buildings," Int. J. Intell. Syst. Technol., vol. 4, no. 1, pp. 1–9, 2009.
[18] E. Goldstein, Wahrnehmungspsychologie. Berlin, Germany: Spektrum Akademischer Verlag, 2002.
[19] E. Goldstein, Sensation and Perception. Belmont, CA: Wadsworth, 2007.
[20] T. Bucher, C. Curio, J. Edelbrunner, C. Igel, D. Kastrup, I. Leefken, G. Lorenz, A. Steinhage, and W. von Seelen, "Image processing and behavior planning for intelligent vehicles," IEEE Trans. Ind. Electron., vol. 50, no. 1, pp. 62–75, Feb. 2003.
[21] P. Palensky, B. Palensky, and A. Clarici, "Cognitive and affective automation: Machines using the psychoanalytic model of the human mind," in Simulating the Mind—A Technical Neuropsychoanalytical Approach, D. Dietrich, G. Fodor, G. Zucker, and D. Bruckner, Eds. Vienna, Austria: Springer-Verlag, 2009, pp. 178–227.
[22] C. Roesener, "Adaptive behavior arbitration for mobile service robots in building automation," Ph.D. dissertation, Vienna Univ. Technol., Vienna, Austria, 2007.
[23] B. Palensky, "Introducing neuro-psychoanalysis towards the design of cognitive and affective automation systems," Ph.D. dissertation, Vienna Univ. Technol., Vienna, Austria, 2008.
[24] P. Palensky, D. Bruckner, A. Tmej, and T. Deutsch, "Paradox in AI—AI 2.0: The way to machine consciousness," in Proc. 1st IEEE IT Revolutions, 2008, pp. 194–215.
[25] S. Goetzinger, "Scenario recognition based on a bionic model for multilevel symbolization," M.S. thesis, Vienna Univ. Technol., Vienna, Austria, 2006.
[26] A. Richtsfeld, "Szenarienerkennung durch symbolische Datenverarbeitung mit Fuzzy-Logic," M.S. thesis, Vienna Univ. Technol., Vienna, Austria, 2007.
[27] H. Hareter, G. Pratl, and D. Bruckner, "A simulation and visualization system for sensor and actuator data generation," in Proc. Conf. Fieldbus Syst. Appl., 2005, pp. 56–63.
[28] A. Colombo, R. Schoop, and R. Neubert, "An agent-based intelligent control platform for industrial holonic manufacturing systems," IEEE Trans. Ind. Electron., vol. 52, no. 1, pp. 322–337, Feb. 2006.
[29] T. Deutsch, A. Gruber, R. Lang, and R. Velik, "Episodic memory for autonomous agents," in Proc. Conf. Human Syst. Interactions, Krakow, Poland, 2008, pp. 621–626.
[30] R. Lang, H. Zeilinger, T. Deutsch, R. Velik, and B. Mueller, "Perceptive learning—A psychoanalytical learning framework for autonomous agents," in Proc. Conf. Human Syst. Interactions, Krakow, Poland, 2008, pp. 639–644.
[31] S. Freud, "Triebe und Triebschicksale," in Studienausgabe Band 3: Psychologie des Unbewussten. Berlin, Germany: Fischer Taschenbuch Verlag, 1915.
[32] S. Freud, "Das Ich und das Es," in Studienausgabe Band 3: Psychologie des Unbewussten. Berlin, Germany: Fischer Taschenbuch Verlag, 1923.
[33] E. Tulving, Elements of Episodic Memory. New York: Oxford Univ. Press, 1983.
[34] A. Damasio, Descartes' Error—Emotion, Reason, and the Human Brain. Baltimore, MD: Penguin, 1994.
[35] J. Laplanche and J. B. Pontalis, Das Vokabular der Psychoanalyse. Berlin, Germany: Suhrkamp, 1973.



Rosemarie Velik (M’06) received the M.Sc. degree in electrical engineering and information technology, with specialization in automation and control engineering, and the Ph.D. degree in technical sciences from Vienna University of Technology, Vienna, Austria, in 2006 and 2008, respectively. She is a Research Fellow with the Biorobotics Department, Fatronik-Tecnalia, Donostia-San Sebastian, Spain, and a former Assistant Professor with the Institute of Computer Technology, Vienna University of Technology. Her current research interests are in the area of cognitive automation, bionics, and neuroscience.

Gerhard Zucker (M’05) received the M.Sc. degree in electrical engineering and information technology, with specialization in computer technology, and the Ph.D. degree in technical sciences from Vienna University of Technology, Vienna, Austria, in 1998 and 2006, respectively. He was a Research Fellow with the Institute of Computer Technology, Vienna University of Technology. He is currently a Research Fellow with the Energy Department, Austrian Institute of Technology, Vienna. His research focuses on communication technology in industrial and building automation, sustainable building technologies, and the integration of classical building automation with psychological and neuropsychoanalytical models.
