Light Control Architecture for Multimodal Interaction in Physical and Augmented Environments

Jaakko Hakulinen, Markku Turunen, Tomi Heimonen
University of Tampere, Kalevantie 4, FI-33014 University of Tampere
[email protected], [email protected], [email protected]

ABSTRACT
In this paper, we introduce a software architecture for interactive lighting applications. The architecture is based on a layered model in which the low-level control of lighting hardware is separated from a spatial model and the application logic that integrates user interactions. We provide a detailed description of the design of the different layers and give examples of how the architecture has been used to develop spatial multimodal interactive environments.

Author Keywords
Interactive lighting, software architecture, user experience.

ACM Classification Keywords
H5.m. Information interfaces and presentation: Miscellaneous.

INTRODUCTION
Lighting hardware is becoming increasingly controllable through software. This makes lighting a promising modality in multimodal interaction with our everyday physical environments. Such interaction is not limited to specific high-tech environments; interactive lighting can be part of everyday life in homes, schools, and offices. For example, the lighting in an office room could change in response to user interaction with the environment [3]. From our own work with lighting as an interactive element, we have identified five categories of interactive lighting:
1. Contextual lighting, e.g., optimization of apartment lighting to save energy and provide an efficient and pleasant living/working environment
2. Expressive lighting, e.g., stage lighting, spatial games, home theater usage (e.g., Philips Ambilight)
3. Informative lighting, e.g., guidance in indoor environments, visualization of energy consumption
4. Explicit control of lighting, e.g., speech and gesture interfaces to control lighting
5. Augmented lighting, e.g., simulations, telepresence, and augmented environments featuring both physical and virtual lighting.
We see lighting as one element in the interaction between humans and technology. Our approach is application driven, i.e., we incorporate lighting in different interactive systems and study how it forms a part of the interaction. Our case studies range from traditional entertainment applications (stage lighting) to more intelligent uses of lighting in interactive everyday spaces, such as elevators and classrooms. In terms of multimodal interaction, we use lighting together with
spoken and gestural interaction, physical (embodied) interaction, spatial audio, and three-dimensional graphics, among others. Currently, there is a lack of software solutions for incorporating lighting into interactive applications. There are low-level light control protocols and product-specific higher-level solutions, but no comprehensive software architectures. In particular, little work takes into account both low-level control and interaction-level needs. We introduce a layered architectural model for interactive lighting. The architecture is not fixed to a particular lighting scenario, but is designed to take into account the needs of different applications. We describe the layered architecture, called "Interactive Lighting Engine", and its components, including:
• Low-level light control layer
• Spatial model, which lays the foundation for more advanced lighting control
• Functional and user experience layer
• Interaction and application logic layer
We present how the implementation of the architecture has been used in various case study prototypes and real-world installations. We conclude by presenting future development plans for the architecture.

INTERACTIVE LIGHTING ENGINE
The Interactive Lighting Engine provides a layered software architecture for light control. At the core of the engine is the "Interactive Spatial Model", a model of the physical environment in which the light fixtures are located. In addition, a model of a virtual (3D) environment can be included using the same underlying spatial model, creating a linkage between the two. The architecture is implemented as a set of services running on a PC connected to external devices, such as dedicated light control hardware and input and output devices (e.g., sensors, video cameras, loudspeakers, microphones, interaction devices, and projectors). Next, we present the layers of the architecture in detail.
Low-level Light Control
The lowest architectural level abstracts the actual light control hardware and protocols in the form of a software server component. The low-level server provides an XML-based network interface, listening for connections on a specified socket and reading zero-byte-terminated XML documents. Each document can consist of multiple light requests. Example 1 shows a document that controls the color of one light fixture ("robohead") and the rotation of another, moving head light ("microspot"). The parameters depend on the request type.
The requests can also contain two time-related parameters: duration and delay. Delay means that the request should take effect after the specified delay. The meaning of duration depends on the request type. For a color request, it tells the server to fade to the specified color over the given duration (see Example 1). In a rotate request, the duration specifies how long the rotation should take; in the case of the example, 1000 milliseconds. Requests to switch the gobo filter ignore the duration, since a filter can only be in one position at a time. Finally, in chase and fluctuate requests, the duration tells how long to maintain the effect. New requests override old requests by light and type: color, fluctuate, and chase requests override each other, while rotation, gobo, focus, and prism effect are each a class of their own.
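The override rule just described can be sketched as a keyed request store: color, fluctuate, and chase share one override class, while the remaining request types each form their own. This is an illustrative sketch, not the actual server code; the class names and the `RequestStore` type are assumptions.

```python
# Illustrative sketch of the per-light override rule: a new request
# replaces any active request for the same light and override class.
OVERRIDE_CLASS = {
    # color-producing requests override each other
    "color": "color", "fluctuate": "color", "chase": "color",
    # the remaining request types are each a class of their own
    "rotate": "rotate", "gobo": "gobo", "focus": "focus", "prism": "prism",
}

class RequestStore:
    def __init__(self):
        self._active = {}  # (light_id, override_class) -> request dict

    def submit(self, light_id, req_type, **params):
        key = (light_id, OVERRIDE_CLASS[req_type])
        self._active[key] = {"type": req_type, **params}

    def active(self, light_id):
        return [r for (lid, _), r in self._active.items() if lid == light_id]

store = RequestStore()
store.submit("robohead", "color", rgb=(255, 0, 0), duration=2000)
store.submit("robohead", "chase", colors=[(255, 0, 0), (0, 0, 255)], frequency=2)
store.submit("robohead", "rotate", pan=90, tilt=45, duration=1000)
# the chase request has overridden the color request; rotate is kept separately
```

After the three submissions, only the chase and rotate requests remain active for "robohead", matching the override semantics above.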
<lights>
  <light id="robohead" type="color" duration="2000">255 0 0</light>
  <light id="microspot" type="rotate" duration="1000">90 45</light>
</lights>
Example 1. Control document with requests for two lights.
Our reference implementation uses DMX512 to control lights through an Enttec DMX USB Pro adapter. Since DMX was originally developed for stage lighting, DMX-compatible lights provide great flexibility for building prototypes. Various home control platforms (e.g., [5]) are further potential control solutions. Example 1 shows how lights are identified with string-type identifiers. The identifiers are mapped to controls in a separate profile file. In the case of DMX-controlled lights, the profile file specifies, for each light object, the DMX channel(s) of each controllable feature. For example, for lights whose output color can be controlled using the RGB model, channel numbers for red, green, and blue are specified. For lights with only dimmer control and selectable color filters, the profile file specifies the color of each filter. The server accepts RGB-type requests and calculates the best matching filter and dimmer settings. Some spot-type lights have controllable focus and gobo filters, and prism effects may also be available. For moving head lights, the profile specifies how many axes can be controlled and the range of rotations in degrees. In addition to the XML-based protocol, the server offers an OSC-based protocol with the same functionality. Each request, i.e., the "light" tag in the XML protocol, is mapped to a single OSC message. The light id and request type are part of the message address, and the parameters are carried in its content.
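A client for the XML protocol described above might look as follows. This is a minimal sketch under stated assumptions: the document structure (a root element containing "light" elements with the id and request type as attributes and the parameters as text content) is inferred from the description and Example 1, and the host and port are placeholders, not values from the paper.

```python
import socket
import xml.etree.ElementTree as ET

def build_document(requests):
    """Build a control document with one <light> element per request.
    Only the 'light' tag is named in the text; the root tag and
    attribute names here are assumptions for illustration."""
    root = ET.Element("lights")
    for light_id, req_type, params, extra in requests:
        el = ET.SubElement(root, "light", {"id": light_id, "type": req_type, **extra})
        el.text = " ".join(str(p) for p in params)
    return ET.tostring(root)

def send_document(doc, host="localhost", port=7000):
    """Send a zero-byte-terminated XML document to the low-level server
    (or to the lighting simulator, which speaks the same protocol)."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(doc + b"\x00")  # the zero byte terminates the document

# Fade "robohead" to red over 2 s and rotate "microspot" over 1 s,
# mirroring the two requests of Example 1.
doc = build_document([
    ("robohead", "color", (255, 0, 0), {"duration": "2000"}),
    ("microspot", "rotate", (90, 45), {"duration": "1000"}),
])
```

Because the simulator replaces the low-level controller at the same interface, such a client can be pointed at either endpoint without changes.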
Low-level Control Requests
The current set of requests includes the following:
• Color (RGB value)
• Chase (switch from color to color; parameters include a list of colors and the frequency of color changes)
• Fluctuate (alter the light color smoothly around a specified color; parameters include the color, the range of alteration, and the frequency and amplitude values for two components specifying the actual change)
• Rotate (adjust the rotation of the different axes of a moving head light; parameters include pan and tilt angles, and a secondary pan for three-axis lights; rotations are specified in degrees)
• Gobo (select a gobo filter; parameter is a byte value passed directly to DMX)
• Focus (adjust the focus; parameter is a byte value passed directly to DMX)
• Prism effect (parameter is a byte value passed directly to DMX)
The request type set presented above has been implemented as needed in our case studies and is not meant to be comprehensive. It can be easily extended, so more features can be built based on the needs of concrete use cases.

Spatial Lighting Model
The Spatial Lighting Model maps the light fixtures into space. The profile file also provides information for this level, including spatial information (i.e., the 3D coordinates of the light in the space) and the light type. The coordinates specifying the light locations can be freely chosen with respect to the location of the origin and the orientation. In addition to the location, the way the light spreads in space is specified. The light types include:
• Omnidirectional lights (e.g., simple incandescent light bulbs)
• Spotlights, i.e., lights which point in a certain direction
For spotlights, the orientation of the light and the spread angle are specified. Currently, the model assumes an open space enclosed by walls. These assumptions hold true for most single-room spaces, but more complex rules must be implemented for spaces consisting of connected rooms or rooms containing large furniture or architectural shapes. This is addressed in our future work.

Spatial Lighting Requests
In addition to the Spatial Lighting Model, this level maintains a set of lighting requests consisting of spatial directives. The requests are registered via an API to a component that feeds them to the Low-level Light Control component. The Spatial Lighting Model is independent of the low-level light control method and protocol used. The requests are conceptually on a higher level than the low-level requests; they need not be connected to individual lights, and they incorporate spatial coordinates. The requests also have a built-in dynamic component: they consist of a list of values and the durations of those values. In addition, the requests have a looping property. This way, meaningful patterns can be expressed in a single request.
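The information this layer keeps per fixture, and the shape of a spatial request with its dynamic component, can be sketched as a small data model. The field names are illustrative; the text specifies only that location, light type, spotlight orientation and spread angle, a list of values with durations, and a looping property are stored.

```python
from dataclasses import dataclass, field

@dataclass
class LightFixture:
    """Spatial information for one fixture, as read from the profile file."""
    light_id: str
    position: tuple            # 3D coordinates in the room's coordinate system
    kind: str = "omni"         # "omni" (e.g., bare bulb) or "spot"
    orientation: tuple = None  # pointing direction, spotlights only
    spread_deg: float = None   # spread angle in degrees, spotlights only

@dataclass
class SpatialRequest:
    """A spatial lighting request with a built-in dynamic component:
    a list of (parameters, duration) steps, optionally looping."""
    req_type: str              # "direction", "area", "ambient" or "spot"
    steps: list = field(default_factory=list)  # [(params, duration_ms), ...]
    loop: bool = False

par = LightFixture("par1", (1.0, 2.5, 0.0))
spot = LightFixture("microspot", (4.0, 2.5, 0.0), kind="spot",
                    orientation=(0.0, -1.0, 0.0), spread_deg=20.0)
# a looping two-step ambient pulse, expressed as a single request
pulse = SpatialRequest("ambient",
                       steps=[(((40, 40, 40),), 500), (((0, 0, 0),), 500)],
                       loop=True)
```

The looping two-step request shows how a repeating pattern can be carried by one registered request rather than a stream of low-level commands.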
The currently implemented request types include direction, area, ambient, and spot. A directional light request specifies the direction and size of a lit area in degrees and the RGB value of the color with which the area should be lit. For example, we can have a request: "show red light on the left-hand side in an area that is 45 degrees wide". The direction requests are relative to the perceiver: the model must have information about a perceiver, i.e., a person present in the lit space. The spatial model realizes directional requests by finding the lights falling inside the lit area. A normal distribution is used to calculate the intensity of the lights close to the required
direction, so that if a request travels smoothly around the room, the light intensities change gradually. An area light request lights the floor of the room. It receives the coordinates of the center of the lit area, its diameter, and the RGB value of the color to use, e.g., "show blue light onto an area 2 meters wide that is 3 meters from the front wall". To realize these location requests, the spatial model looks up the lights whose output falls on the specified area and selects the most appropriate ones. An ambient light request specifies the ambient lighting level of the room. In practice, the ambient RGB value is added to the color value of each fixed light in the space. A spot light request controls a specified moving head light. Its parameters include the RGB value of the color to use and the 3D coordinates of the point to light. For example: "show yellow light on the head of the person at the center of the room". The location and moving head geometry of the light are used to rotate it to the correct orientation, so that the lit area falls on the specified point. Any number of these requests can be active at the same time, and the spatial model compiles them together into a set of requests for the lower-level control layer. Ambient, direction, and area requests are combined to control the fixed lights, and a single light may contribute to several requests of these three types at the same time. This provides dynamic and sophisticated, yet tightly controlled and synchronized, lighting to application developers. We are currently working to optimize the algorithm to include the moving head lights in the unified model of direction, area, and ambient requests. Since the rotation speed of the light hardware is limited, we cannot swap the mapping of a single moving head light "in flight", but in an optimal case we could swap it to a different request once a fixed light, or a more optimally positioned moving head light, can take over the request.
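The realization of a directional request can be sketched as follows: for each fixed light, compute its bearing as seen from the perceiver and weight its intensity with a normal-distribution falloff around the requested direction, so that a travelling request changes intensities gradually. The falloff width and all names are illustrative assumptions; the paper states only that a normal distribution is used.

```python
import math

def angular_diff(a, b):
    """Smallest signed difference between two bearings, in degrees."""
    return (a - b + 180.0) % 360.0 - 180.0

def directional_intensities(lights, perceiver, direction_deg, width_deg):
    """Weight each light by a Gaussian of its angular distance from the
    requested direction, as seen from the perceiver's 2D position.
    lights: {light_id: (x, y)}; returns {light_id: intensity in [0, 1]}."""
    sigma = width_deg / 2.0  # illustrative choice of falloff width
    out = {}
    for lid, (lx, ly) in lights.items():
        bearing = math.degrees(math.atan2(ly - perceiver[1], lx - perceiver[0]))
        d = angular_diff(bearing, direction_deg)
        out[lid] = math.exp(-(d * d) / (2.0 * sigma * sigma))
    return out

# Perceiver at the origin; "show light on the left" as 90 degrees, 45 wide.
weights = directional_intensities(
    {"par1": (0.0, 2.0), "par2": (2.0, 0.0), "par3": (0.0, -2.0)},
    (0.0, 0.0), direction_deg=90.0, width_deg=45.0)
```

A light directly in the requested direction receives full intensity, while lights at 90 and 180 degrees away receive sharply decreasing weights, which is what makes a sweeping request fade smoothly from fixture to fixture.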
Functional and User Experience Layer
Building on top of the Low-level Light Control and Spatial Lighting Model layers, it is possible to control lighting at a higher level and to model lighting from functional and user experience viewpoints. An example of functional lighting control is turning people's attention to a certain location or direction, for example, to give guidance in indoor environments. An example of an experience-level function is setting the ambient light in a location to reflect different moods (e.g., calm or exciting feelings). A key element for creating sophisticated functional and user experience related lighting solutions is the notion of "lighting patterns". We define a lighting pattern as "a sequence of spatial light requests"; such patterns are re-usable and can be employed, with the right parameters and dynamic adaptation (e.g., adapting to the current lighting environment), across applications to convey similar functional and user experience elements. For example, a sequence of ambient, area, and directional light requests can capture the user's attention and move it from one place to another, either as a warning or as a suggestion, depending on the parameters defined at the application layer. In this work, both well-known techniques (e.g., the McCandless Method [4]) and novel solutions can be used to create a rich and expressive "lighting language". We have found this approach highly successful in other areas of multimodal interaction, such as haptic and auditory feedback [6]. Another purpose of the Functional and User Experience Layer is to abstract the lower-level layers from interaction and application layer control, and in this way allow general, re-usable lighting
components to be created. In many ways, this is the most promising but also the most challenging part of the architecture. For human-computer interaction, it is the most crucial layer, and the one on which our work will mainly focus in the forthcoming case studies.

Interaction and Application Layer
Sophisticated interactive lighting applications can be created using the lower-level layers. Each layer can be called directly, depending on the application's needs. For example, if we want explicit control of individual lights in a light control UI (e.g., based on gesture or speech control, as in one of our cases), it is useful to communicate directly with the Low-level Light Control Layer. On the other hand, if we want to communicate at a more abstract level with the same application, input such as speech commands can be mapped to spatial lighting requests. In the first case, a relevant example is speech input such as "Turn the toilet lamp on", while in the second case the user might say something like "I would like to have dim reading lighting here". Further practical examples include mapping user positions, obtained from sensors and machine-vision-based techniques (e.g., Microsoft Kinect), to spatial light requests so that the lights of the room react to the users' positions. So far, our applications have used the architecture in the ways presented in the examples above, i.e., by using the Low-level Light Control Layer or the Spatial Lighting Model directly from the Interaction and Application Layer. Our future work focuses on the possibilities of creating more sophisticated and re-usable lighting patterns using the Functional and User Experience Layer.

CASE STUDIES
The architecture is used in our daily research work and has been applied in several concrete case studies.

Stage Lighting Automation
We implemented an installation for the Finnish Science Centre Heureka utilizing the architecture.
The installation, which is used regularly in real science theater shows, controls the lighting, sounds, and visuals in a production that includes various fire-related experiments. In this installation, the low-level light controller is used to control the lights. The central logic of the system sends XML documents to the light controller at the correct times. Some features of the low-level light control, namely the possibility to specify duration-based requests such as chase and fluctuate, were implemented to support this type of use.

Mapping Between Real and Virtual Worlds in the Laboratory
Dedicated infrastructural support is necessary for developing interactive lighting applications. Similarly to the "incubation environment" described in [2], we have created an environment to support iterative design. Our "Spatial Interaction Laboratory" contains a lighting setup controlled by the architecture. It consists of nine LED PAR RGB lights and two different moving head lights, controlled using an Enttec DMX USB Pro controller. The natural light and the reflective properties of the room can be controlled with black and white curtains, both of which can cover all the walls of the laboratory. The lights are used together with projectors, sensors, cameras, and audio equipment for sound capture and for spatial and positional audio output in various prototypes, often employing virtual 3D technologies. The goal of
this environment is to offer a "Smart Lighting Environment", which allows the construction of different test-beds and living-lab setups for studying smart and interactive lighting solutions. We have used the laboratory to study speech-based control of lighting, in particular for use in apartments designed for the elderly. The low-level and spatial model layers of the architecture were used to build a prototype that mixes the virtual and real worlds, to help people get a better feeling of what it would be like to control lighting with speech. In the very first prototype, the speech recognition results were simply mapped to the XML messages sent to the low-level lighting controller. However, to better reflect the real use case, we added a virtual representation layer. A 3D model of an actual apartment was built for the lighting simulator (see below). User locations in the laboratory were displayed in the lighting simulator by mapping real-life dimensions to the virtual world at a scale of 1:2 (Figure 1). Speech commands controlled the virtual lights in the apartment. Finally, the real lights in the laboratory were mapped, with directional light requests, to the virtual lights in relation to the user's position in the virtual model. Thus, speech commands controlled not only the virtual lights in the visualization, but also the real lights in the physical space.
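The real-to-virtual mapping above can be sketched as a pair of coordinate transforms. The 1:2 scale comes from the text; the direction of the scaling (one lab meter mapping to two model meters) and the function names are assumptions for illustration.

```python
LAB_TO_MODEL = 2.0  # 1:2 scale (assumed: one lab meter maps to two model meters)

def lab_to_model(p):
    """Map a tracked position in the laboratory to apartment-model coordinates."""
    return tuple(c * LAB_TO_MODEL for c in p)

def model_to_lab(p):
    """Inverse mapping, e.g., for driving the real lights from virtual light
    positions relative to the user's position in the virtual model."""
    return tuple(c / LAB_TO_MODEL for c in p)

user_model_pos = lab_to_model((1.5, 2.0, 0.0))
```

With such a transform pair, the same spatial model can serve both worlds: tracked users are projected into the apartment model, and virtual light positions are projected back into the laboratory for the directional light requests.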
Figure 1. Simulation of Speech Controlled Lighting

Lighting Simulator
In order to aid interactive development of applications with interactive lighting, we implemented a lighting simulator that provides a virtual 3D visualization of a physical space. The simulator can be used for rapid prototyping of interactive lighting scenarios without having to assemble a physical setup. At the architecture level, the simulator replaces the low-level lighting controller. One or more 3D models are provided to the simulator in addition to the light fixture information. The simulator displays the 3D scene and places each light in this 3D world. As it receives the XML requests, the visualization updates the lights, thus providing a real-time simulation of the actual lighting.

CONCLUSIONS AND FUTURE WORK
The layered architecture for interactive lighting control has made it possible to implement a variety of interactive prototypes and a production-level solution. As we have based our research on practical iterative development, many issues remain for future work. For example, the current spatial model only works for a single space at a time. We aim at a solution that can incorporate multiple connected spaces, each with its own lights and multiple simultaneous users. Another important goal is to incorporate natural and other light sources currently not under the control of the architecture. The aim is to provide more accurate and natural rendering and a consistent lighting experience even when these external light sources change. This work needs to include techniques such as machine vision, sensors, and information found in different databases to create a live loop that automatically adjusts the rendering parameters, similarly to [1]. The spatial model could also include information on the color and reflective properties of the surface materials in the space to enable more accurate lighting calculations. The technical details on the light fixtures need to include features such as maximum intensity, white balance, and speed of operation (rotation speed, luminosity change speed) when different lights are used together. Finally, our future work will focus on the HCI aspects of interactive lighting. In practice, we will apply the architecture to a variety of pervasive computing scenarios, including:
• Guidance and ambient lighting in complex built environments (e.g., office buildings, shopping malls)
• Visualization of energy consumption data in homes
• Intelligent lighting for active learning spaces
• Ambient lighting for public multi-user displays
An important part of these scenarios is to study interactive lighting in the context of multimodal interaction. For example, in our current and future projects we are studying the use of embodied interaction, speech, gestures, and audio, and we use these within the same underlying spatial model to facilitate the creation of fully multimodal interactive spaces with interactive lighting.
ACKNOWLEDGMENTS
This work was funded by the Finnish Funding Agency for Technology and Innovation ("Space, Theatre & Experience – Novel Forms of Evental Space" project) and the European Institute of Innovation & Technology (EIT ICT Labs).

REFERENCES
1. Bhardwaj, S., Özçelebi, T., and Lukkien, J.J. Smart lighting using LED luminaries. In Proc. PerCom Workshops 2010, IEEE (2010), 654-659.
2. Magielse, R., and Ross, P.R. A design approach to socially adaptive lighting environments. In Proc. CHItaly 2011, ACM Press (2011), 171-176.
3. Magielse, R., Ross, P., Rao, S., Özçelebi, T., Jaramillo, P., and Amft, O. An interdisciplinary approach to designing adaptive lighting environments. In Proc. Intelligent Environments 2011, IEEE (2011), 17-24.
4. McCandless, S. A Method of Lighting the Stage, Fourth Edition. Theatre Arts Books, New York, NY, USA, 1958.
5. ThereGate. http://www.therecorporation.com/en/platform
6. Turunen, M., Melto, A., Hella, J., Heimonen, T., Hakulinen, J., Mäkinen, E., Laivo, T., and Soronen, H. User expectations and user experience with different modalities in a mobile phone controlled home entertainment system. In Proc. MobileHCI 2009, ACM Press (2009), Article 31, 4 pages.