Spatial Control Framework for Interactive Lighting
This is a pre-print of the definitive version of the article in the ACM Digital Library, located at: http://dx.doi.org/10.1145/2523429.2523462
Jaakko Hakulinen, Markku Turunen, Tomi Heimonen
Tampere Unit for Computer Human Interaction, School of Information Sciences, 33014 University of Tampere, Finland
+358 50 586 6358, +358 40 533 9689, +358 50 337 1378
ABSTRACT
Lighting has become an integral element in interactive environments as a result of advances in LED and home automation technology. New models for controlling diverse sets of lighting fixtures are needed. Traditional lighting control, where users manipulate the state of individual light fixtures, is not efficient or expressive in situations where a large number of lights with complex parameters need to be configured. We present a multi-tiered software framework that enables the creation of new kinds of user interfaces for controlling lighting and the construction of intelligent solutions that automatically control lights. At the low level of the framework, individual fixtures are controlled over a unified protocol that abstracts the exact technical capabilities of the lighting hardware. At the spatial lighting service level, control is expressed using concepts like area and direction. The framework has been used in several real-world installations, e.g., to create physical exercise games for children and to design light-based indoor guidance.
Categories and Subject Descriptors H.5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous. D.2.11 Software Architectures (Domain-specific architectures)
1. INTRODUCTION
As lighting hardware has become increasingly controllable through software, lighting has become a viable component of interactive systems in intelligent environments. One major motivation for lighting control is energy consumption: by controlling lighting, unnecessary energy expenditure can be eliminated. Energy conservation is also addressed by using more efficient lighting technology and by better incorporating natural
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. MindTrek 2013, October 1-4, 2013, Tampere, FINLAND. Copyright 2013 ACM 978-1-4503-1992-8/13/10…$10.00.
light. Since LED technology combines energy efficiency with the potential for fine control over lighting, it goes hand in hand with interactive solutions. The incorporation of natural light, on the other hand, requires that we consider this uncontrollable light source in the interactive solutions.

Traditionally, lighting has been studied from the safety and work efficiency point of view. Optimized lighting can improve safety and work efficiency and increase the pleasantness of spaces. In addition to basic illumination, lighting can provide information to users, for example, guiding them in the correct direction. This view has also been expanded to cover psychological aspects and wellbeing. Magielse and Ross [11] emphasize another interesting aspect, the social effect of light. They argue that we should understand this effect and be able to adapt lighting to support different social situations.

We see lighting as one element in the interaction between humans and technology in pervasive computing environments. Our approach is application driven, i.e., we incorporate lighting in different interactive systems and study how it can contribute to the interaction. Currently, there is a lack of software solutions for easily incorporating lighting into interactive applications. On one hand, there are building automation systems, with protocols like DALI, C-Bus, and X10. However, these low-level solutions consider light fixtures independently or in fixed groups. On the other end of the spectrum are stage lighting solutions, where large numbers of light fixtures with many control variables are synchronously controlled in complex patterns. Traditionally these are specified using low-level, fixture-based parameters, but alternative solutions exist [26]. Still, lights are mostly handled on a per-fixture basis in most of these systems, with higher-level solutions being introduced in some prototypes [1].
Modeling and simulation tools for both regular and stage lighting are also available, and prototypes have combined virtual and scale model presentations [8]. Our research addresses the gap that exists in both building automation and stage lighting solutions when it comes to higher-level concepts for controlling lighting and for building applications on top of. In particular, solutions that take into account both low-level
control and interaction modeling needs are sorely lacking. In this paper, we identify the key criteria for reusable lighting components and introduce our spatial lighting framework, which seeks to address these challenges. In the following, we first review related research, especially the ways lighting has been used in interactive systems, both under the explicit control of users and as an automatically controlled feature. We also review software architectures for spatially aware systems. Next, we describe our lighting control framework and provide examples of how it has been applied in practice. We conclude by discussing the contributions of our work.
2. Related Work
2.1 Interactive Lighting Control
Interactive lighting can be explicitly controlled by users, react to users' presence and actions via automation, or fall somewhere in between. This spectrum has been explored by van Essen et al. [5]. In the following, we cover examples from this continuum.
2.1.1 Explicit control
In explicit control, traditional light switches have been replaced with, for example, centralized controllers, which usually allow the selection of predefined lighting profiles to match different needs. Commercial systems commonly use buttons or touchscreens, while research prototypes have been built to enable different modalities, such as gesture input [1], or a combination of buttons, speech and gestures [17], to control a collection of different devices, with lighting being just one of the features. Personal mobile devices have also been applied to such tasks, which can
provide both personalized and shared control of lighting [10]. At the same time, concepts where individuals can control large numbers of public lights have been created (e.g., [21]). While most systems assume that an expert specifies the profiles, enabling users to do this themselves, using natural names in the process, widens the flexibility and applicability of the solutions. Ramírez Chang and Canny [23] provide a simultaneous naming and configuration solution to tackle this challenge in the context of a speech interface. One natural way to identify the fixtures one wants to control is by pointing at them. Seitinger et al. [24] allowed control of individual, movable light fixtures by pointing a flashlight at them. Delamere et al. [4] addressed the issue of controlling a large number of lights through pointing by enabling the fine selection of individual fixtures by analyzing hand rotation in addition to the pointing direction. Pointing has also been combined with gestures [19].
2.1.2 Automated control
Automated control of indoor [25], outdoor [22], and road lighting [6] can save energy and, when well designed, improve safety and comfort. While the basic approach of activating lights when people are present can already provide the above-mentioned benefits, automated control can also contribute to more complex solutions. Lighting can act as an ambient display [14],[16],[29], or as a more explicit spatial information presentation tool [15]. While straightforward solutions can work without explicitly modeling the space, to automatically control a large number of lights in a space, particularly when the space is shared between
many people, some sort of model of the space is necessary. Petrushevski [20] provides requirements for such a solution, such as the need for context awareness beyond the spatial context. The modeling of space has been addressed in many ubiquitous computing applications, and this knowledge can be applied to solutions that include lighting. Crepaldi et al. [5] presented a ubiquitous computing software architecture that was applied to lighting control. What this architecture lacks, however, is an explicit spatial model. Lighting also has one requirement that is not usually addressed in general ubiquitous applications: recording the locations of light fixtures is not enough; one must also know what area they can light, and to what extent.
3. INTERACTIVE LIGHTING ENGINE
A reusable interactive lighting platform can help the development of lighting applications and also provide reusable models and concepts that support designers incorporating lighting into intelligent environments. Such a platform should:
- Provide different levels of abstraction for controlling lighting, allowing developers to focus on interaction.
- Enable lighting control design that is independent of the exact setup of the light fixtures, enabling the creation of more generic lighting designs.
- Be able to update lighting in real time with smooth transitions, to enable lighting patterns for unobtrusive ambient display.
- Have spatial models of the environment, including the capabilities of the lighting hardware, to enable automated reasoning and lighting calculations.
- Be able to control a large number of lighting fixtures and consider uncontrollable light sources such as natural light.
In the following, we present the Interactive Lighting Engine, which provides a layered software framework for lighting control, as depicted in Figure 1. At the core of the framework is a spatial model of the light fixtures located in an environment. This enables the framework to control the lighting using spatial concepts like area and direction. In addition, a 3D model of the real or a virtual environment can be used in conjunction with the spatial model, linking the two. The framework is implemented as a set of services running on a PC connected to lighting control hardware. The framework is intended to work as part of a larger interactive system. Next, we present the spatial model and the layers of the framework in detail.
3.1 Spatial Model Configuration
The spatial configuration specifies, for each fixture, an ID, a controller instance, information on control channels, the type of light and the spatial properties of the light, most importantly its location and orientation. Based on this spatial information, the engine can do calculations to adjust the lights correctly. The coordinates specifying the light locations can be freely chosen with respect to the location of the origin and orientation, so that it is easy to use a shared coordinate system when the framework is part of a larger, spatially aware system. In addition to the location, the way the light spreads in space is specified. The light types include omnidirectional lights (e.g., light bulbs) and spotlights, i.e., lights that point in a certain direction with a specific beam-spread angle. For moving head lights, the configuration specifies how many axes can be controlled, the range of rotations in degrees and the zero-angle orientation. The Controller, which is specified for each fixture, is a software component responsible for operating a physical lighting control
device. The control channels refer to low-level control of the lights, and their exact content is control-solution specific, for example DMX512 channel numbers. The spatial configuration maps each light fixture to one or more rooms, which at this level are simply names and may or may not refer to real-life rooms but, in general, do refer to (possibly overlapping) physical spaces of some extent.
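As a concrete illustration, a single fixture entry of such a spatial configuration could be sketched as the following Python structure. All field names here are hypothetical assumptions for illustration; the framework defines its own configuration format.

```python
# Hypothetical sketch of one spatial-configuration entry. Field names are
# illustrative assumptions, not the framework's actual schema.
FIXTURES = [
    {
        "id": "robohead",
        "controller": "dmx-usb-pro-1",    # controller instance operating the hardware
        "channels": [1, 2, 3],            # e.g. DMX512 channel numbers
        "type": "moving_head",            # omnidirectional | spot | moving_head
        "location": (2.0, 3.5, 2.8),      # shared coordinate system, metres
        "orientation": (0.0, -90.0),      # zero-angle pan/tilt orientation, degrees
        "rotation_range": (540.0, 270.0), # pan/tilt range, degrees
        "beam_angle": 25.0,               # beam-spread angle for spot-type lights
        "rooms": ["lab", "stage"],        # possibly overlapping named spaces
    },
]

def fixtures_in_room(room, fixtures=FIXTURES):
    """Return the fixtures mapped to a named room."""
    return [f for f in fixtures if room in f["rooms"]]
```

A lookup such as `fixtures_in_room("lab")` reflects the room mapping described above: a fixture may appear in several rooms, and a room simply names a physical space of some extent.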
3.2 Low-level Light Control Service
The lowest architectural level abstracts the actual light control hardware and protocols in the form of a software server component. The server provides two alternative protocols, one XML based and the other OSC based, to control all attached lighting hardware in a uniform manner. This layer can be used directly by applications that need precise control of individual light fixtures. Example 1 shows the kind of control requests the server understands; they set the color of one light fixture ("robohead") and the rotation of another, a moving head light ("microspot").

Example 1: A control document with two requests.

The Low-level Service encapsulates the actual lighting control hardware into controller components. New kinds of lighting hardware can be added by implementing a controller. A controller abstracts away some low-level features of the lights, such as color system mappings, shutter control and rotation calculations. Multiple controllers can be used under a single Low-level Service. Our reference implementation uses DMX512 to control lights with Enttec DMX USB Pro adapters, and Arduino microcontrollers to control LEDs.
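A control document of the kind in Example 1 could be assembled as sketched below with Python's standard library. The element and attribute names are assumptions chosen for illustration; the framework defines its own XML schema.

```python
# Sketch of building a two-request control document (cf. Example 1).
# Element and attribute names are assumed for illustration only.
import xml.etree.ElementTree as ET

def control_document(requests):
    """Build an XML control document from a list of request dicts."""
    root = ET.Element("control")
    for req in requests:
        el = ET.SubElement(root, req["type"], fixture=req["fixture"])
        el.text = " ".join(str(v) for v in req["values"])
    return ET.tostring(root, encoding="unicode")

doc = control_document([
    {"type": "color", "fixture": "robohead", "values": [255, 0, 0]},
    {"type": "rotate", "fixture": "microspot", "values": [90.0, 45.0]},
])
```

The resulting string would then be sent to the server over its XML socket protocol (or the equivalent requests over OSC).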
3.2.1 Low-level Control Requests
Low-level control requests affect individual light fixtures. The current set of requests includes the following:
- Color request sets the color of a light fixture.
- Chase request causes frequent switching from color to color.
- Fluctuate request alters the light color smoothly around a specified color.
- Rotate request adjusts the rotation of a moving head light. Parameters are either rotation values or a 3D location at which the light should point.
- Generic request passes the given value without any processing to a controller.
It is also possible to create smooth transitions and more complex patterns without sending control requests at high frequency by specifying a duration and delay for the requests.
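The idea behind duration-based requests can be sketched as follows: a single request carrying a target color and a duration is expanded server-side into interpolated frames, so the client never has to stream updates at high frequency. The function name and frame representation are assumptions for illustration.

```python
# Illustrative expansion of a single duration-based color request into
# timed frames (linear interpolation). Names are assumptions.
def expand_transition(start, end, duration, steps):
    """Return (time, rgb) frames fading from start to end over duration seconds."""
    frames = []
    for i in range(steps + 1):
        u = i / steps
        color = tuple(round(s + (e - s) * u) for s, e in zip(start, end))
        frames.append((i * duration / steps, color))
    return frames
```

With this approach, a fade from black to red is one message rather than dozens, which matches the framework's goal of keeping message traffic low for patterns such as dimming.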
3.3 Lighting Simulator
To aid the development of applications utilizing the lighting service, we implemented a lighting simulator that
provides a 3D visualization of a physical space. At the architectural level, the simulator replaces the lighting hardware Controllers in the Low-level Light Control Service. A 3D model is provided to the simulator for each room specified in the profile file. The simulator displays the 3D scene and places each light fixture in this 3D world. As it receives requests, the visualization updates the virtual lights, thus providing a real-time simulation of the actual lighting. The simulator also supports visualization of people moving in the space, and 3D models can be moved around the simulation to visualize, e.g., the movement of an elevator and its doors. The simulator also supports a first-person view, which can be attached to simulated users, who can in turn be mapped to a real person by using a motion tracking camera system.
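Mapping a tracked person into the simulator's coordinate system can be sketched as a simple affine transform. The function, the uniform scale factor and the origin offset are illustrative assumptions; any real deployment would configure these per installation.

```python
# Hypothetical sketch: map a tracked real-world (x, y, z) position into
# simulator coordinates. Scale and origin are assumed configuration values.
def world_to_sim(position, scale=2.0, origin=(0.0, 0.0, 0.0)):
    """Scale a real-world position and offset it into the simulated space."""
    return tuple(o + p * scale for p, o in zip(position, origin))
```

A transform of this shape is also what a 1:2 mapping between a laboratory and a larger virtual space amounts to.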
3.4 Spatial Lighting Service
The Spatial Lighting Service controls the light fixtures using spatial concepts. The spatial lighting requests made to the service are not connected to individual light fixtures but rather use spatial concepts such as location and direction. This enables the specification of lighting without knowing the exact lights available in a room, thus making the lighting control more generic and enabling designers to think and work using these concepts instead of having to deal with individual light sources. The requests also have a richer temporal structure: each request can consist of a list of individual values for specific points in time and functions for interpolating the values in between, which enables complex, dynamic lighting scenes and patterns to be presented. The service compiles the results of multiple requests together into a set of fixture-specific requests for the Low-level Service. This provides a means for application developers to specify multiple, independent lighting needs without considering the exact details of the lighting hardware setup. The information from the spatial model configuration is also used on this level. In particular, the spatial information is key, as all the requests select light fixtures based on their spatial location and the rooms they are assigned to. The concept of rooms can be used to support more complex spaces. There can be any number of rooms, and each light fixture can be mapped to one or more rooms. When requests are made to the service, they can be directed to one or more rooms. A limitation of the current implementation is that it treats individual rooms as open spaces enclosed by walls rather than adjoined spaces with (possible) openings in the walls.
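The temporal structure of a request, a list of values at specific points in time plus an interpolation function, can be sketched as follows. The function names are illustrative assumptions; the framework's actual API differs.

```python
# Minimal sketch of keyframed request values with pluggable interpolation.
# Names are illustrative assumptions.
import bisect

def sample(keyframes, t, interp=lambda a, b, u: a + (b - a) * u):
    """keyframes: time-sorted list of (time, value) pairs; return value at t."""
    times = [k[0] for k in keyframes]
    if t <= times[0]:
        return keyframes[0][1]
    if t >= times[-1]:
        return keyframes[-1][1]
    i = bisect.bisect_right(times, t)
    (t0, v0), (t1, v1) = keyframes[i - 1], keyframes[i]
    return interp(v0, v1, (t - t0) / (t1 - t0))
```

Sampling such a request at render time lets the service produce complex, dynamic patterns from a compact description; swapping the `interp` function changes the easing between keyframes.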
3.4.1 Spatial Lighting Requests
The spatial lighting service accepts and maintains a set of lighting requests consisting of spatial directives. The requests are registered either over an XML-based socket protocol, similar to that of the Low-level Service, or via an API in the Python programming language.

Directional light request specifies the direction and size of a lit area and the color with which the area should be lit. For example, we can have a request: "show dim red light on the left-hand side in an area that is 45 degrees wide". The XML format for the request is shown in Example 2. The direction requests are relative to a perceiver, so the model must have information on a perceiver, i.e., a person present in the lit space. In most cases it is enough to place a static perceiver in the center of the room, but particularly in larger rooms a more precise result could be achieved by updating the perceiver location as the user moves in the room.

Example 2: Directional lighting request.

One of the design challenges with directional requests is that, since the number of light fixtures is usually rather small in real-life applications, simply selecting the lights that fall within the specified direction would be problematic. If there are no lights in the specified area but lights can be found just outside it, we would get darkness, while the optimal effect could be achieved by using the lights neighboring the area. More importantly, when the lit area is moved or resized, a naïve solution would switch lights on and off, resulting in unwanted abrupt changes in illumination. Our solution is to calculate scaling factors, which gradually include fixtures near the requested area in realizing the request. This way, when a request smoothly travels around the room or is resized, the light intensities change gradually. The algorithm first sorts the light fixtures by the angular difference between the direction from the perceiver to the light and the direction to the center of the requested area. Next, all the lights that are within the specified area are selected. Finally, one additional light on each side outside the area is added. For each light fixture in the set, the normal distribution probability density function is used to calculate a scaling factor, which is used to scale the brightness of the light (i.e., the color values of the request are multiplied by the scaling factor). In the example in Figure 2, light fixture L1 will not participate in realizing the direction request, since fixture L2 is also outside the request area and closer than L1 to it. For L2…L5, the scale factor is calculated from their location on the normal distribution.
Figure 2: Example of normal distribution based scaling.

The scaling factor is calculated with the following formula:

factor = pdf(z(direction, 0, area/2.0)) * (1.0 / pdf(0))

where direction is the relative direction of the light fixture from the center of the area as an angle, area is the width of the requested area as an angle, z is the standard score, and pdf is the probability density function of the normal distribution. In other words, half of the area width is used as the standard deviation of the distribution. The last part of the formula normalizes the factor so that at the very center of the area, the full intensity of the color is used. Thus, rather than summing up the amount of light produced, the aim is to produce the desired amount of lighting in the specified direction, i.e., we are considering how a user perceives the lighting.

Area light request lights the floor of a room. It consists of the coordinates of the center of the lit area, its diameter and the RGB value of the color to use, e.g., "show dim blue light on an area 2 meters wide that is 3 meters from the front wall". The XML format for the request is shown in Example 3.

Example 3: Area light request.

To realize the location requests, the service uses the light fixture locations and orientations to identify the fixtures whose light falls on the specified area. A model similar to the one used for direction requests is used to calculate scaling factors. The difference is that all fixed lights pointing at the floor are considered, and the factor is calculated separately for the two horizontal dimensions and then multiplied.

Spot light request controls a moving head light based on the 3D coordinates of the center point of the area to be illuminated. For example: "show yellow light on the head of the person at the center of the room". The service allocates a suitable moving head light to the request and generates a rotation request for that light to be executed by the Low-level Service.
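The scaling-factor formula used by the directional and area requests can be sketched directly in Python. Since pdf(z)/pdf(0) = exp(−z²/2) for the normal distribution, the normalization simplifies away the constant in front of the density.

```python
# Sketch of the normal-distribution scaling factor: half the requested
# area width acts as the standard deviation, and the factor is normalized
# so a fixture at the area's center receives full intensity.
import math

def scaling_factor(direction_deg, area_deg):
    """direction_deg: angular offset of the fixture from the area center.
    area_deg: total angular width of the requested area."""
    sigma = area_deg / 2.0          # half the area width = one standard deviation
    z = direction_deg / sigma       # standard score of the fixture's offset
    return math.exp(-z * z / 2.0)   # pdf(z) / pdf(0)
```

A fixture exactly at the area center gets factor 1.0, a fixture at the edge of the area (one standard deviation out) gets exp(−0.5) ≈ 0.61, and the factor keeps falling smoothly beyond the edge, which is what lets neighboring fixtures fade in and out gradually as a request moves.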
Ambient light request specifies the ambient lighting color of the room. The ambient light RGB value is added to the color value of each fixed light in the space.

3.5 Interaction and Application Layer
Different application scenarios require different approaches to lighting control. For this purpose, we have kept the framework flexible to use. It is possible to create sophisticated interactive lighting applications by directly using the Low-level Service if the abstraction provided by the Spatial Lighting Service is not needed. For example, if we want explicit control of individual lights in a user interface for lighting control (e.g., based on gesture or speech control, as in one of our cases), it is useful to communicate directly with the Low-level Light Control Service. On the other hand, if we want to communicate at a more abstract level with the same application, input such as speech commands can be mapped to spatial lighting requests. In the first case, a relevant example is speech input such as "Turn the kitchen lamp on", while in the second case the user might say something like "I would like to have dim reading lighting here". Further practical examples include mapping user positions, obtained from sensors and machine-vision-based techniques, to spatial light requests, making the lights of a room react to users' positions. One example of this style of approach has been presented by Gritti and Monaci [7].

4. CASE STUDIES
The framework is used in our daily research work and has been applied in several interactive systems. The following four case studies demonstrate how it has been utilized and exemplify how it matches the requirements we specified for such a reusable, interactive lighting framework.

4.1 Stage Lighting Automation
We implemented an automation solution for a science museum that controls lights, sounds, graphics and a video camera stream based on sensors added to a stage and a state machine based model of a show. The sensors react to the actions of performers during science theater shows, timing parts of the show events. The solution enabled lighting to be tightly synchronized to stage events without a human lighting operator or extra burden on the performers. The installation has been used regularly for several months. In this installation, the low-level light controller was used to control the lights. Many of the features of the low-level light control, namely the possibility to specify delays and duration-based requests such as chase and fluctuate, were specifically implemented to support this type of use. Using these requests, the number of messages sent to the Low-level Service is kept reasonably low, and basic patterns such as dimming or flickering lights can be specified with a single message, which simplifies the implementation of the lighting control scheme.

4.2 Light-Based Exercise Game
In a light- and audio-based game for schools [8], a set of lights and speakers is placed in a darkened school gymnasium, and together light and audio are used to tell a story in which children participate by doing various exercises, as seen in Figure 3. Activities in the game are centered on avoiding or following the lights, for example, avoiding the spot of a moving head light, which represents an evil character in the story.
Figure 3: Children playing the exercise game.

Lighting creates an engaging experience for the children and effectively gives meaning to the space, thus facilitating physical movement. The game uses the same state machine based model as the stage lighting automation system, except that it is the teacher who decides when and where the story moves next.
Particularly complex lighting patterns, where for instance a spot light moves and changes color many times over a long duration, are very common in the scenarios designed for the system. The model used by the Low-level Light Control Service was able to scale to this kind of use reasonably well.
4.3 Mapping Between Real and Virtual Worlds in Laboratory
We have studied speech-based control of lighting for apartments designed for the elderly in our laboratory. The Low-level and Spatial Lighting Services of the framework were used to build prototypes that mix the virtual and real worlds to help people get a better feeling of what it would be like to control lighting with speech, without having to build an actual system in a real environment. In the very first prototype, the speech recognition results were mapped to low-level light control requests for the light fixtures in the laboratory. However, to better reflect the real use case, we added a virtual representation of an actual apartment in the lighting simulator. User locations in the laboratory were tracked using a Kinect and displayed in the lighting simulator by mapping real life to the virtual world at a scale of 1:2 (Figure 4). Speech commands controlled the lights in the virtual apartment. Finally, the lights in the laboratory were mapped, using directional light requests, to the virtual lights in relation to the user's position in the virtual model. Thus, speech commands controlled not only the virtual lights in the visualization but also the real lights in the physical space, so that the real-life lighting around the user matched the conditions of the virtual space around the virtual user.

Figure 4: Simulation of speech controlled lighting in a home environment.

In this case, the fact that the engine can dynamically use any number of lights made it easy to use the large number of light fixtures in our laboratory to simulate the virtual space efficiently. This case also showed that the simulator can be more than just a developers' debugging tool and can be used to prototype interactive solutions at early design stages.

4.4 Indoor Guidance in Elevator Lobby
The framework has been used to design a lighting-based guidance concept for elevator lobbies. This includes lighting both the inside of the elevator car and the area around the elevator doors. The aim was to provide unobtrusive information for people using the elevator so that they would be aware of elevator movements and of whether there are people coming out of or going into the elevator. The prototype uses the concept of rooms to partition the world (elevator car, lobbies on different floors) so that spatial requests can be used independently in each area. The guidance has primarily been designed using the lighting simulator, where elevator users' movements were visualized together with the elevator movements, as seen in Figure 5. By replacing the simulator with a hardware solution, the same code can run an actual lighting setup. Since all the requests are specified using the spatial model, the implementation can easily adapt to changes in the details of the setup. In this case, rather complex lighting patterns were described using the spatial lighting service.
Figure 5: Prototyping a lighting-based guidance concept in the simulator.
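The room partitioning used in the elevator prototype can be illustrated with a minimal sketch. The identifiers and data layout below (`rooms`, fixture IDs, `set_room_level`) are purely illustrative assumptions, not the framework's actual API; the point is only that spatial requests are scoped to named areas and applied independently.

```python
# Illustrative sketch of the "rooms" concept: the world is partitioned
# into named areas (elevator car, lobbies per floor) so that a spatial
# request affects one area without touching the others.
# All names here are hypothetical, not the framework's real API.

rooms = {
    "elevator_car": ["car_1", "car_2"],
    "lobby_floor_1": ["lobby1_a", "lobby1_b"],
    "lobby_floor_2": ["lobby2_a"],
}

# Current intensity per fixture, normalized to 0..1.
state = {fid: 0.0 for fixture_ids in rooms.values() for fid in fixture_ids}

def set_room_level(room, level):
    """Apply a request to every fixture in one room only."""
    for fid in rooms[room]:
        state[fid] = level

# Signal "elevator arriving" on floor 2; other areas stay unchanged.
set_room_level("lobby_floor_2", 1.0)
```

A guidance sequence would then be a series of such per-room requests timed to the elevator's movements.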
5. DISCUSSION AND FUTURE WORK
One of the key design requirements for the interactive lighting framework was to enable a tiered approach to interaction design that supports different needs. Our experience from the cases described in the previous section validates this approach. It is easy to implement simple prototypes relying on a few lights using the Low-level Lighting Service, and also to realize complex use cases with the Spatial Lighting Service, such as dynamic lighting based on user movement. Of particular value has been the separation between hardware configuration and lighting control. Using the framework, we are able to specify lighting designs that can be reused in different spaces and with different lighting setups, without having to specify the capabilities of the fixtures as part of the interaction design. The automated reasoning built into the Spatial Lighting Service is another aspect that has made it easier to deploy our lighting solutions. The approach adopted in the framework represents a tradeoff between the accuracy of the resulting lighting effect (e.g., in terms of illumination) and the amount of work required to realize the overall design. Finally, a practical benefit of the separation between the spatial model configuration and the other layers of the framework is that the number of light fixtures that can be controlled is limited only by hardware constraints. It takes very little effort to add new fixtures, as they are automatically included in subsequent high-level requests without any additional configuration.
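The separation between spatial design and fixture configuration can be sketched as follows. This is a hedged illustration under assumed names (`resolve`, the request dictionary, fixture coordinate maps), not the framework's implementation: the same spatial request is resolved against whichever fixtures the current setup provides, so adding fixtures requires no change to the design itself.

```python
# Illustrative sketch: one spatial lighting design, resolved against
# different fixture setups. Names and data shapes are assumptions.
import math

def resolve(request, fixtures):
    """Assign an intensity to each configured fixture for one request."""
    cx, cy = request["center"]
    levels = {}
    for fid, (fx, fy) in fixtures.items():
        distance = math.hypot(fx - cx, fy - cy)
        levels[fid] = request["level"] if distance <= request["radius"] else 0.0
    return levels

# The design itself never mentions fixtures or their capabilities.
design = {"center": (1.0, 1.0), "radius": 2.0, "level": 0.7}

small_setup = {"a": (0.0, 0.0), "b": (5.0, 5.0)}
large_setup = {"a": (0.0, 0.0), "b": (5.0, 5.0), "c": (1.5, 1.0)}

# Both setups accept the same design; the new fixture "c" is picked up
# automatically, with no change to the design.
levels_small = resolve(design, small_setup)
levels_large = resolve(design, large_setup)
```

A real resolver would also account for fixture capabilities (beam direction, color), which is the reasoning the Spatial Lighting Service automates.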
5.1 Limitations
Currently the framework does not consider light sources it cannot control, such as natural light or surface reflections. One important future goal is to incorporate natural and other external light sources into the calculations to provide more accurate rendering and a consistent lighting experience when external light sources change. This work needs to include luminance sensors, and possibly information from various databases, to form a live loop that automatically adjusts the parameters in the process, similarly to the models applied by Park et al. [18] and Bhardwaj et al. [2]. The spatial model could also include information about the color and reflective properties of the surface materials in the space to enable more accurate lighting calculations.
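The core of such a live loop could look like the minimal sketch below, assuming a single luminance sensor and a hypothetical `adjust_level` helper (the function name, the target value, and the fixture capacity are all illustrative assumptions): the artificial contribution is recomputed so that the total illumination stays near a target as natural light changes.

```python
# Illustrative sketch of the feedback loop suggested above: compensate
# for measured natural light so total illumination stays near a target.
# Names, units, and the 0..1 control level are assumptions.

def adjust_level(target_lux, measured_natural_lux, max_artificial_lux=500.0):
    """Compute the artificial-light contribution needed on top of natural light."""
    needed_lux = max(0.0, target_lux - measured_natural_lux)
    # Normalize to the 0..1 intensity a low-level light request would carry,
    # clamped to what the fixtures can deliver.
    return min(1.0, needed_lux / max_artificial_lux)

# Bright daylight: only a small artificial contribution is needed.
daytime_level = adjust_level(400.0, 350.0)
# After dark: the fixtures carry the full target on their own.
night_level = adjust_level(400.0, 0.0)
```

In practice the loop would run continuously on sensor readings, re-issuing low-level requests whenever the computed level drifts from the current one.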
5.2 Future Work
So far, our applications have used the Low-level or the Spatial Lighting Service directly. Our grand view of the future is to define reusable lighting patterns and implement the Functional and User Experience Layer on top of the existing two services. An example of functional lighting control is turning people's attention to a certain location or direction, for example to give guidance in an indoor environment. An example of an experience-level function is setting the ambient light in a location to reflect different moods (e.g., calm or exciting feelings). Another purpose of the Functional and User Experience Layer is to abstract the lower-level layers away from interaction and application design. For human-computer interaction, the functional and user experience level would be the most crucial layer. A key element in creating sophisticated functional and user-experience-level lighting solutions is the notion of "lighting patterns". We define a lighting pattern as a sequence of spatial light requests forming a reusable pattern, which can be employed with the right parameters across applications. During the development of the light-based exercise game and the elevator lobby guidance, various pattern candidates emerged. In this work, both well-known techniques, e.g., McCandless' method [13], and novel solutions can be used to create a rich and expressive "lighting language". This approach has been found useful in other areas of multimodal interaction, such as in creating haptic and auditory feedback [27]. In the future we plan to add these reusable patterns to the framework to make it even easier to realize interactive lighting designs.
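The definition above, a lighting pattern as a reusable, parameterized sequence of spatial light requests, can be sketched in code. Everything here (the function name, the request fields, the attention-guiding example) is an illustrative assumption rather than the planned implementation.

```python
# Illustrative sketch of a "lighting pattern": a parameterized sequence
# of timed spatial light requests. The attention-guiding example sweeps
# light along a path, e.g. for indoor guidance. Names are assumptions.

def attention_pattern(path, level=1.0, step_duration=0.5):
    """Yield timed spatial requests leading attention along a path."""
    for step, point in enumerate(path):
        yield {
            "time": step * step_duration,  # seconds from pattern start
            "center": point,               # position in the spatial model
            "radius": 1.0,
            "level": level,
        }

# The same pattern reused with different parameters across applications:
guidance = list(attention_pattern([(0, 0), (2, 0), (4, 0)]))
```

Because each request is expressed in spatial terms, such a pattern remains independent of any particular fixture setup, which is what makes it reusable.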
6. CONCLUSIONS
We presented an interactive lighting control framework. Our implementation provides two levels of abstraction, which address different types of design criteria. Our experiences suggest that both levels can be useful, and we hope to add one more layer of abstraction, in the form of a lighting pattern language, to make the framework even more adaptable to different usage scenarios. The spatial level enables the design of lighting patterns independent of a specific lighting fixture setup. Both layers of the engine have proven able to generate smoothly changing lighting scenes. The spatial model, while simple, enables not only fixture-independent pattern specification but also the use of spatial concepts in lighting control. The implementation can also control, in theory, any number of light fixtures. However, we still lack support for adjusting to uncontrollable light sources. In particular, efficient use of natural light will be vital in future scenarios that take place in energy-efficient buildings and environments.
The layered architecture for interactive lighting control has made it possible to implement a variety of interactive prototypes and a production-level solution. It provides a model of how developers can control lighting in spatial terms, and it also includes explicit temporal aspects. The solution provides an efficient tool for developers, in particular for constructing interactive solutions, since the layers abstract light control and enable developers and designers to think at the level of interaction rather than of individual light fixtures. We believe this can help developers consider lighting more from the users' point of view and create more innovative solutions. Interactive lighting is a new and emerging field in which developers and designers are still learning how to fully express themselves. Formalizing the collected practical knowledge into interactive systems and control architectures is our future aim in this field. Considering the advances in the flexibility and controllability of lighting solutions, we believe such knowledge will soon be valuable.
7. ACKNOWLEDGMENTS
This work was funded by the Finnish Funding Agency for Technology and Innovation ("Space, Theatre & Experience – Evental Space" and "Novel Forms of Active Learning Spaces" projects) and the European Institute of Innovation & Technology (EIT ICT Labs).
8. REFERENCES
[1] Bartindale, T. and Olivier, P. 2013. ThorDMX: a prototyping toolkit for interactive stage lighting control. In CHI '13 Extended Abstracts on Human Factors in Computing Systems (CHI EA '13). ACM, New York, NY, USA, 3019-3022. DOI=http://doi.acm.org/10.1145/2468356.2479600
[2] Bhardwaj, S., Özçelebi, T., and Lukkien, J.J. 2010. Smart lighting using LED luminaries. In Proc. 2010 PerCom Workshops. IEEE, 654-659.
[3] Crepaldi, R., Harris, A.F. III, Kooper, R., Kravets, R., Maselli, G., Petrioli, C., and Zorzi, M. 2007. Managing heterogeneous sensors and actuators in ubiquitous computing environments. In Proceedings of the First ACM Workshop on Sensor and Actor Networks (SANET '07). ACM, New York, NY, USA, 35-42. DOI=http://doi.acm.org/10.1145/1287731.1287739
[4] Delamare, W., Coutrix, C. and Nigay, L. 2012. Pointing in the physical world for light source selection. In Proceedings of the Designing Interactive Lighting Workshop at DIS 2012, June 11th 2012, Newcastle, UK.
[5] van Essen, H., Offermans, S. and Eggen, B. 2012. Exploring the role of autonomous system behavior in lighting control. In Proceedings of the Designing Interactive Lighting Workshop at DIS 2012, June 11th 2012, Newcastle, UK.
[6] Fujii, Y., Yoshiura, N., Takita, A. and Ohta, N. 2013. Smart street light system with energy saving function based on the sensor network. In Proceedings of the Fourth International Conference on Future Energy Systems (e-Energy '13). ACM, New York, NY, USA, 271-272. DOI=http://doi.acm.org/10.1145/2487166.2487202
[7] Gritti, T. and Monaci, G. 2011. ImagiLight: a vision approach to lighting scene setting. In Proceedings of the 19th ACM International Conference on Multimedia (MM '11). ACM, New York, NY, USA, 1285-1288. DOI=http://doi.acm.org/10.1145/2072298.2071995
[8] Hakulinen, J., Turunen, M., Heimonen, T., Keskinen, T., Sand, A., Paavilainen, J., Parviainen, J., Yrjänäinen, S., Mäyrä, F., Okkonen, J. and Raisamo, R. 2013. Creating immersive audio and lighting based physical exercise games for schoolchildren. In Proceedings of the 10th International Conference on Advances in Computer Entertainment (ACE 2013). Springer.
[9] Horiuchi, Y., Inoue, T., and Okada, K. 2012. Virtual stage linked with a physical miniature stage to support multiple users in planning theatrical productions. In Proceedings of the 2012 ACM International Conference on Intelligent User Interfaces (IUI '12). ACM, New York, NY, USA, 109-118. DOI=http://doi.acm.org/10.1145/2166966.2166989
[10] Krioukov, A., Dawson-Haggerty, S., Lee, L., Rehmane, O. and Culler, D. 2011. A living laboratory study in personalized automated lighting controls. In Proceedings of the Third ACM Workshop on Embedded Sensing Systems for Energy-Efficiency in Buildings (BuildSys '11). ACM, New York, NY, USA, 1-6. DOI=http://doi.acm.org/10.1145/2434020.2434022
[11] Magielse, R. and Ross, P.R. 2011. A design approach to socially adaptive lighting environments. In Proc. CHItaly 2011. ACM Press, 171-176.
[12] Magielse, R., Ross, P., Rao, S., Özçelebi, T., Jaramillo, P., and Amft, O. 2011. An interdisciplinary approach to designing adaptive lighting environments. In Proc. Intelligent Environments 2011. IEEE, 17-24.
[13] McCandless, S. 1958. A Method of Lighting the Stage, Fourth Edition. Theatre Arts Books, New York, NY, USA.
[14] Müller, H., Fortmann, J., Pielot, M., Hesselmann, T., Poppinga, B., Heuten, W., Henze, N. and Boll, S. 2012. AmbiX: designing ambient light information displays. In Proceedings of the Designing Interactive Lighting Workshop at DIS 2012, June 11th 2012, Newcastle, UK.
[15] Nakada, T., Kanai, H. and Kunifuji, S. 2005. A support system for finding lost objects using spotlight. In Proceedings of the 7th International Conference on Human Computer Interaction with Mobile Devices & Services (MobileHCI '05). ACM, New York, NY, USA, 321-322. DOI=http://doi.acm.org/10.1145/1085777.1085846
[16] Occhialini, V., van Essen, H. and Eggen, B. 2011. Design and evaluation of an ambient display to support time management during meetings. In Proceedings of the 13th IFIP TC 13 International Conference on Human-Computer Interaction (INTERACT '11), Vol. Part II. Springer-Verlag, Berlin, Heidelberg, 263-280.
[17] Pan, G., Wu, J., Zhang, D., Wu, Z., Yang, Y., and Li, S. 2010. GeeAir: a universal multimodal remote control device for home appliances. Personal and Ubiquitous Computing 14, 8 (December 2010), 723-735. DOI=http://dx.doi.org/10.1007/s00779-010-0287-7
[18] Park, H., Burke, J., and Srivastava, M.B. 2007. Design and implementation of a wireless sensor network for intelligent light control. In Proceedings of the 6th International Conference on Information Processing in Sensor Networks (IPSN '07). ACM, New York, NY, USA, 370-379. DOI=http://doi.acm.org/10.1145/1236360.1236407
[19] Peters, S., Loftness, V., and Hartkopf, V. 2011. The intuitive control of smart home and office environments. In Proceedings of the 10th SIGPLAN Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (ONWARD '11). ACM, New York, NY, USA, 113-114. DOI=http://doi.acm.org/10.1145/2048237.2048255
[20] Petrushevski, F. 2012. Personalized lighting control based on a space model. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing (UbiComp '12). ACM, New York, NY, USA, 568-571. DOI=http://doi.acm.org/10.1145/2370216.2370311
[21] Pihlajaniemi, H., Luusua, A., Teirilä, M., Österlund, T. and Tanska, T. 2012. Experiencing participatory and communicative urban lighting through LightStories. In Proceedings of the 4th Media Architecture Biennale Conference: Participation (MAB '12). ACM, New York, NY, USA, 65-74. DOI=http://doi.acm.org/10.1145/2421076.2421087
[22] Poulsen, E.S., Andersen, H.J., Jensen, O.B., Gade, R., Thyrrestrup, T. and Moeslund, T.B. 2012. Controlling urban lighting by human motion patterns: results from a full scale experiment. In Proceedings of the 20th ACM International Conference on Multimedia (MM '12). ACM, New York, NY, USA, 339-348. DOI=http://doi.acm.org/10.1145/2393347.2393398
[23] Ramírez Chang, A. and Canny, J. 2009. Illuminac: simultaneous naming and configuration for workspace lighting control. In Proceedings of the 14th International Conference on Intelligent User Interfaces (IUI '09). ACM, New York, NY, USA, 413-418. DOI=http://doi.acm.org/10.1145/1502650.1502710
[24] Seitinger, S., Perry, D.S. and Mitchell, W.J. 2009. Urban pixels: painting the city with light. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '09). ACM, New York, NY, USA, 839-848. DOI=http://doi.acm.org/10.1145/1518701.1518829
[25] Singhvi, V., Krause, A., Guestrin, C., Garrett, J.H. Jr., and Matthews, H.S. 2005. Intelligent light control using sensor networks. In Proceedings of the 3rd International Conference on Embedded Networked Sensor Systems (SenSys '05). ACM, New York, NY, USA, 218-229. DOI=http://doi.acm.org/10.1145/1098918.1098942
[26] Sperber, M. 2001. Developing a stage lighting system from scratch. In Proceedings of the Sixth ACM SIGPLAN International Conference on Functional Programming (ICFP '01). ACM, New York, NY, USA, 122-133. DOI=http://doi.acm.org/10.1145/507635.507652
[27] Turunen, M., Melto, A., Hella, J., Heimonen, T., Hakulinen, J., Mäkinen, E., Laivo, T., and Soronen, H. 2009. User expectations and user experience with different modalities in a mobile phone controlled home entertainment system. In Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '09). ACM, New York, NY, USA, Article 31. DOI=http://doi.acm.org/10.1145/1613858.1613898
[28] Westerhoff, J., van de Sluis, R., Mason, J. and Aliakseyeu, D. 2012. M-Beam: a tangible atmosphere creation interface. In Proceedings of the Designing Interactive Lighting Workshop at DIS 2012, June 11th 2012, Newcastle, UK.
[29] Wisneski, C., Ishii, H., Dahley, A., Gorbet, M.G., Brave, S., Ullmer, B. and Yarin, P. 1998. Ambient displays: turning architectural space into an interface between people and digital information. In Proceedings of the First International Workshop on Cooperative Buildings (CoBuild '98). Springer-Verlag, London, UK, 22-32.