“Architectural Robotics”: An Interdisciplinary Course Rethinking the Machines We Live In

Apoorva Kapadia†∗, Ian Walker†, Keith Evan Green‡, Joe Manganelli‡, Henrique Houayek‡, Adam M. James‡, Venkata Kanuri†, Tarek Mokhtar‡, Ivan Siles†, and Paul Yanik†

Abstract— We discuss disciplinary barriers which have traditionally prevented robotics from significantly impacting the built (architectural) environment we inhabit. Specifically, we describe the implementation of, and lessons learned from, a multidisciplinary graduate-level course in Architectural Robotics. The results from class interactions and projects provide insight into novel ways in which robotics expertise can be effectively leveraged in architecture. Conversely, our outcomes suggest ways in which the knowledge and perspective of architects could stimulate significant innovations in robotics.
I. INTRODUCTION

While major progress has been made within the core subdisciplines of robotics over the past several decades, the transition of this progress into technologies affecting the world we live in has been relatively slow. Despite promising efforts in areas such as health care robots in the fields of surgery [1], [2], [3], rehabilitation [4], [5], [6], domestic robots [7], [8], [9], and robots for education [10], [11], robots are still largely restricted to industrial, remote, and hazardous environments [12], [13], out of view of most people. Robotics is still awaiting the “killer application” that will, as predicted in innumerable science fiction stories, make robotics widespread in people’s everyday existence.

One engineered product familiar to all, though often overlooked by technologists, is the (architectural) environment we each inhabit. While architects have been increasingly incorporating new technologies into their methods and products [14], there has been almost no incursion of robotics into architecture, in the sense of the inclusion of robotic elements as integral parts of built environments. Architecture as a field has a long and rich history of innovation [15], and its economic impact vastly overshadows that of robotics. Widespread adoption of robotics technologies within architecture would likely have a major impact on our field. Two highly desirable activities arise naturally from the above discussion: 1) identification of ways in which robotics, in its current state of the art, can transition usefully into architecture; and 2) identification and removal of the key barriers currently preventing such transitions.

This work is supported in part by NSF grant IIS-0534423. ∗ To whom all correspondence should be addressed. † are with the Department of Electrical and Computer Engineering, Clemson University, Clemson, SC 29634 (akapadi, iwalker, vkanuri, isiles, pyanik)@clemson.edu.
‡ are with the School of Architecture, Clemson University, Clemson, SC 29634 (kegreen, hde, amjames, jmangan, tmokhta)@clemson.edu.
This paper discusses each of the above issues, using as the main vehicle for discussion the experiences of the authors in a graduate-level course (“Architectural Robotics”) taught at Clemson University, aimed at the intersection between Architecture and Engineering.

There has been some prior research in combining robotics and Architecture. Mitchell [16] postulated that in the near future “our buildings will become...robots for living in”. Most subsequent efforts have concentrated on either adding sensory/computational elements to existing architecture (smart buildings) [17], [18], or introducing self-contained robots into existing spaces [10]. The second approach may appear to be the obvious way to introduce robotics expertise into Architecture. However, we argue here that a more interesting (and perhaps more practical) approach involves a tighter coupling of the fields, in the sense of including robotics via (re)programmably moving the mass that forms the core shape of the environment.

Several researchers have considered aspects of the above notion of “robotic environments” as outlined by Weller and Do [19]. Oosterhuis is developing real-time configurability in programmable pavilions [20]. Raffle et al. [21] developed a responsive textile called “Super Cilia Skin”, which could possibly be used within structures to harvest energy from multiple sources, while also abstracting and communicating information visually as well as haptically. Carlo Ratti’s SENSEable City Lab designed a pavilion for the 2008 World Expo with an exterior envelope composed of digitally solenoid-controlled water droplets [22]. These droplets formed a pixellated envelope which also served as an interactive display and a means to cool the galleries inside. However, these environments have been framed in the tradition of the architectural discipline of their creators, and have not specifically included the expertise of robotics researchers.
The recent “Animated Work Environment” (AWE) [23], [24], which creates an ensemble of continuous, morphing surfaces (e.g. walls, task surfaces), is founded at the intersection of robotics and architectural research, but remains focused on the interior of a space, rather than the space itself. With that in mind, the explicit goal of the multidisciplinary graduate-level course at Clemson University discussed in this paper is to explore the boundary between robotics and Architecture, and to promote creativity at the intersection. As such, all course activities were designed to be “open-ended,” with the creative process (rather than any specific end product) being the key object. As will be discussed in the following sections, this led to valuable insight into the
nature of inherent disciplinary biases, and the “surprises” that can result when the creative strengths of the two fields are suitably catalyzed. The course described herein is not the first effort aimed at combining engineers and architects in graduate classes [25]. A similar multidisciplinary course was also offered at the University of California at Davis [26], whose aim was to provide computer project experience to graduate students with very different educational backgrounds. However, the course at Clemson University is unique in concentrating on the robotic elements being an integral part of environmental design (as are plumbing, air-conditioning, and other aspects of architecture), as opposed to being introduced into a previously built space.

In the following sections, we highlight some of the findings and results of the Spring 2009 Clemson course. The course is offered to students across all sub-fields of Engineering and Architecture. In the offering of the course discussed herein, three Electrical/Computer Engineering students and one Mechanical Engineering student were enrolled, together with four Architecture students. Students were paired in engineer/architect teams, with each team engaging in three multi-week projects. The team combinations were changed for each project to maximize the diversity of interaction. The experience level of the students ranged from first-year graduate students (one engineer, one architect) to graduating Ph.D. candidates (one architect). These students, together with the faculty members who co-taught the course, are the authors of this paper; the highlights and a summary of the lessons learned are outlined in the following.

II. HARDWARE TECHNOLOGIES AND COMPONENTS
Since the class was meant to be an intensive collaboration of architects and engineers, using Arduino as a platform was a natural choice. Arduino [27] is an open-source electronics prototyping platform based on an 8-bit microcontroller and is popular in the field of physical computing. It is programmed in a simplified version of the C/C++ language based on the “Wiring” project, and was created for individuals interested in developing interactive prototypes who might not have the software skills or background to work on other, more sophisticated systems. Though simple, the Arduino platform is robust and extensible, with multiple vertical female pins for the connection of sensors and actuators, as shown in Figure 2. Additionally, specific pins allow for the output of PWM signals, essential for servo motor control. Application-specific shield boards are also available to simplify the connection of multiple motors or audio devices. Multiple scripts have been written to simplify certain coding tasks, and the code editor uses multiple colors to highlight different parts of the user program, simplifying the coding process for users unfamiliar with programming environments. Additionally, the Arduino IDE, written in Java, provides an initial template from which programmers can write their code.
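To make the servo-control pathway concrete, the standalone C++ sketch below mirrors the linear angle-to-pulse-width mapping that common Arduino servo libraries perform internally; the 544–2400 microsecond endpoints follow widely used library defaults, but treat the exact values as assumptions of this sketch rather than something documented in the course.

```cpp
// Hobby servos are commanded via a PWM pulse whose width encodes the
// target angle. This standalone function mirrors the linear mapping
// (equivalent to Arduino's map(angle, 0, 180, min, max)) that servo
// libraries perform; the endpoint values are assumptions here.
constexpr long kMinPulseUs = 544;   // pulse width commanding 0 degrees
constexpr long kMaxPulseUs = 2400;  // pulse width commanding 180 degrees

long angleToPulseUs(long angleDeg) {
    if (angleDeg < 0) angleDeg = 0;      // clamp to the servo's range
    if (angleDeg > 180) angleDeg = 180;
    // Linear interpolation between the two pulse-width endpoints.
    return kMinPulseUs + (kMaxPulseUs - kMinPulseUs) * angleDeg / 180;
}
```

On real hardware this value would be fed to the PWM output pin each servo period; here it simply makes the angle-to-signal relationship explicit.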
Fig. 1. Arduino

Fig. 2. Arduino Example
These characteristics, along with the relatively low cost and ease of interfacing with off-the-shelf hobby parts, made the Arduino an ideal platform for cross-disciplinary collaborative work, and thus for this course. In addition to the Arduino, the students had access to multiple servo and stepper motors, as well as a rich array of sensors, including pressure sensors, PIR and infrared sensors, switching systems, and relays. In addition to the Arduino platform, sensors, and motors, visualization software and rapid prototyping equipment were used extensively throughout the design and development processes. The architects made 3D and physical models using software such as Rhino [28] and AutoCAD [29] as well as tools such as CNC laser cutters and milling machines, the use of which contributed to rapid design adjustment and enhancement. This process enabled faster critical analyses by both the architects and engineers, allowing for quicker identification and mitigation of the various technical challenges posed during the design and development stages.

III. PROJECT 1: CHILDREN AND CREATIVITY
This project asked the groups to design an environment or a device that would foster creativity and learning in children. The groups were given two constraints: to interface a sensor and either a servo or stepper motor to the project board, allowing the students to explore the capabilities of the Arduino board, and to hack an existing toy that could pique the curiosity of a child.

A. The Interactive Flower

The idea of this project was to cultivate children’s creativity by providing a hands-on interactive experience of a flower’s natural diurnal cycle. In addition, it would help children learn geometric relationships by placing puzzle pieces in their respectively shaped holes. The puzzle pieces were shaped in the form of the three basic ingredients needed to make the flower bloom: the sun, seeds, and water, here in the form of a raindrop. The flower would initially be in a configuration with all the petals closed, as shown in Figure 3A. As each ingredient was added, an optical sensor would detect its placement; only the correct placement of a block would trigger the sensor. Each placement would cause the flower to open by a fraction of its fully-opened configuration, as shown in Figures 3B, C, D, and E. The flower would have fully bloomed when all three pieces were correctly placed. As each piece was removed, the flower would close by the same fraction, eventually resetting to the closed configuration at which it started, allowing for immediate reuse, as demonstrated in the video found at [30].
Fig. 3. Interactive Flower
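The fractional open/close behavior of the flower can be sketched as a small state holder: each correctly placed piece opens the petals by one third of the fully-open angle, and each removal closes them by the same fraction. The three-piece count follows the project; the 90-degree fully-open servo angle is an illustrative assumption of this sketch.

```cpp
#include <algorithm>

constexpr int kPieces = 3;           // sun, seed, and raindrop blocks
constexpr double kFullOpenDeg = 90.0; // assumed fully-bloomed servo angle

// Tracks how many puzzle pieces are correctly placed and maps that
// count to a petal servo angle: each piece opens one third of the way.
class FlowerPetals {
public:
    void place()  { placed_ = std::min(placed_ + 1, kPieces); }
    void remove() { placed_ = std::max(placed_ - 1, 0); }

    // Commanded petal angle in degrees (0 = closed, kFullOpenDeg = bloomed).
    double angle() const { return kFullOpenDeg * placed_ / kPieces; }

private:
    int placed_ = 0;
};
```

Removing pieces simply walks the same mapping backwards, which is what lets the toy reset itself for immediate reuse.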
B. The iTOI (interactive Toilet)

The inspiration for this device was a New York Times article highlighting the difficulties of trying to toilet-train young children [31]. The students reasoned that providing simple sensory interaction in the otherwise arduous process of relieving oneself could get toddlers to look forward to their next visit to the toilet. Thus the iToi was developed, as seen in Figure 4. Additionally, the device could provide adults with information about the success status of the toddler’s toilet visit. This interactive toilet was constructed using off-the-shelf parts: a standard child toilet seat found at most department stores, proximity sensors, two servo motors serving as toilet-paper and towel dispensers, and a sound box that would activate music when the child was done. One proximity sensor would detect the presence of the child when he/she
sits on the child toilet seat, and additional sensors would detect the success or failure of the child’s visit. In the event of a successful visit, the sound box would play music to entertain the child, and the two motors, initially at rest under the toilet seat, would spin upwards, providing the child with toilet paper and a towel, both necessary upon successful completion of the job. The associated video can be found at [32].

C. Project I Summary

The overarching inspiration for Section III-A was to develop a device that would be simple to construct, use, and maintain, and could be placed in multiple settings. Having such a device in a playground or the home could greatly assist in teaching children about the world around them outside the confines of a classroom. Additionally, the design and concept can be easily adapted into other tools that could explain various phenomena of the planet and the universe. In the case of the interactive toilet, the basis for the device was to provide sensory stimuli that could entice the child into performing what would be an otherwise cumbersome chore, a concept similar to that presented in [33]. While eliciting a few laughs, the interactive toilet concept produced in the classroom lab might, more seriously, be extended to new bathroom fixtures for aging-in-place and for those with disabilities. It should be noted that a video made to demonstrate this device was uploaded to YouTube, quickly accumulating over 200,000 hits and receiving numerous remarks that ranged from indifference to utter disdain, indicating that the design of such technologies, despite altruistic intentions, can be quite intimidating if proper thought is not applied to their form and function. However, these projects did demonstrate ways in which environmental robotics could support the physical environment being actively (as opposed to only passively) used for education.

IV. PROJECT 2: URBAN DISASTER MANAGEMENT

The premise of this theme was to explore how robotic systems could either augment existing architecture or provide the basis for a paradigm shift in architectural design focusing on disaster detection and management. Unlike the previous project, no constraints were placed on the groups for their design archetypes.
Fig. 4. The Interactive Toilet
A. Shelter in a Storm

This project aimed at designing prototype building skins that could morph from conventional shapes to more aerodynamic ones to dissipate high wind forces such as those experienced during hurricanes. According to [34], curvature in the shape of building parapets reduces the wind speed traveling up the exterior wall surface and over the roof, thereby reducing the vacuum effect that typically occurs in more orthogonal designs and compromises the integrity of the envelope. In this case, a wind-speed sensor would detect the presence of wind gusts and activate the morphing mechanism in the event of sustained winds beyond a designated safe threshold. It is important to note that the morphing of the external structure would not affect the internal design, shape, or integrity. When such high winds are detected, the conventionally-shaped building would self-adjust the vulnerable components of its external surface (the parapet walls, the roof, and broad, flat wall surfaces) over a reasonable and safe period of time; the related change in shape is shown clockwise in Figure 5. Each of these vulnerable external surfaces is flexed to present a convex surface toward the oncoming wind. The curved surfaces act to dissipate the wind and thereby reduce the force on the underlying structure, as has been shown to occur with rigid domed buildings. After the high winds have died down, the building would revert to its original shape. Additionally, lighting was used to outwardly project the current mode of the structure, in addition to providing aesthetic beauty. The related video can be found at [35].
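A minimal sketch of the trigger logic described above, reacting only to sustained winds rather than isolated gusts: average the recent wind-speed samples and morph when the average crosses a threshold, reverting only once it drops below a lower release threshold so that gusty readings do not toggle the skin repeatedly. The thresholds, window length, and hysteresis scheme are assumptions of this sketch, not documented project parameters.

```cpp
#include <cstddef>
#include <deque>
#include <numeric>

// Sustained-wind trigger with hysteresis: morph only when the average
// wind speed over the last `window` samples exceeds `trigger_mph`, and
// revert only once it falls back below `release_mph`.
class WindTrigger {
public:
    WindTrigger(double trigger_mph, double release_mph, std::size_t window)
        : trigger_(trigger_mph), release_(release_mph), window_(window) {}

    // Feed one wind-speed sample; returns true while the skin should morph.
    bool update(double mph) {
        samples_.push_back(mph);
        if (samples_.size() > window_) samples_.pop_front();
        double avg = std::accumulate(samples_.begin(), samples_.end(), 0.0) /
                     samples_.size();
        if (!morphed_ && samples_.size() == window_ && avg >= trigger_)
            morphed_ = true;               // sustained high winds detected
        else if (morphed_ && avg <= release_)
            morphed_ = false;              // winds have died down; revert
        return morphed_;
    }

private:
    double trigger_, release_;
    std::size_t window_;
    std::deque<double> samples_;
    bool morphed_ = false;
};
```

The two-threshold (trigger/release) design choice keeps the skin from oscillating when wind speeds hover near the safe limit.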
B. The Directing Leaf

One of the toughest weather phenomena to predict, spot, and track is the tornado, which can cause widespread damage and destruction, especially in urban areas. The directing leaf was designed to work with existing urban infrastructure to guide and assist populations towards shelters and safer areas in the event of a tornado passing around or within city limits. It was meant to be a leaf-like device able to blend into the environment, while not being completely inconspicuous, and capable of lighting up street blocks during festive seasons, as in Figure 6. The leaf itself was made up of a
Fig. 5. The morphing shape of the adaptable skin to reduce damage due to high winds.

Fig. 6. Schematic of the Leaf on Trees on a City Block.

Fig. 7. The Leaf attached to a tree.
translucent material and embedded with high-powered LEDs which would be visible from a distance, as seen in Figure 7. It was connected to the main branch (articulated out of a pliable copper plumbing pipe to somewhat match the color and texture of tree branches) by a stepper motor which would be able to turn the leaves from their resting state to point to the nearest tornado shelter or safehouse. Each tree with the “directing leaf” device attached would also be radio-linked to a receiver and speaker system able to play area-specific warning messages received from local weather centers, thus providing audio information to complement the visual aids. Upon relaxation of the “severe weather alert” status, the leaves would return to their resting state and the speaker system would alert the population that the situation is clear. In addition to tornado warning, this device could also be adapted as a wildfire and lightning-strike warning and evacuation measure. The associated video can be found at [36].

C. Particulate Gas Localization

In the aftermath of natural and man-made disasters such as Hurricane Katrina’s devastation of New Orleans and the 9/11 attacks on the World Trade Center and the Pentagon, much loss of life and chronic illness has resulted from exposure to air contaminated with hazardous particulate matter and noxious gases. This project, titled pCAP (Particulate Control and Air Purifier), was an urban response mechanism that sought to minimize the dispersion of airborne hazardous particulates and gases while at the same time providing shelter and purified air to those trapped within the noxious atmosphere. In the test scenario of a city block, it was proposed that air purification systems be added onto existing city infrastructure. Bus shelters could be augmented to function as glowing beacons in the dusty haze where people can trust they will be found by rescue crews and where purified air is available, as shown in Figure 8. To minimize particulate dispersion, pCAP used modified fire-suppression sprinkler heads mounted on the parapets of buildings and actuated by gas and vibration sensors to produce an atmospheric mist that could trap the particulates. The conceptual demonstration included a scale-model mockup of a city block with a bus shelter fitted with an expanding cowl, air purification bellows, and beacon lights. The mockup was actuated by two servos and responded to a puff of particulate released onto the model. The related video can be found at [37].
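The sensor-driven actuation of the misting heads might be sketched as a simple conjunctive trigger, firing only when both the gas and vibration readings exceed their thresholds; the AND-combination and the threshold values are assumptions of this sketch rather than documented project choices.

```cpp
// Illustrative pCAP actuation rule: fire the misting heads only when a
// gas sensor AND a vibration sensor both read above threshold, so that
// neither sensor alone produces a false trigger. All values are
// placeholder assumptions, not project parameters.
struct Readings {
    double gas_ppm;      // gas concentration sample
    double vibration_g;  // vibration magnitude sample
};

bool shouldMist(const Readings& r,
                double gas_threshold = 50.0,
                double vib_threshold = 0.5) {
    return r.gas_ppm >= gas_threshold && r.vibration_g >= vib_threshold;
}
```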
Fig. 8. Air Purifying Bus Shelter.

D. Project II Summary

Section IV-A provided a seed of thought as to how otherwise rigid structures could be reimagined to have more adaptable skins. To avoid the scale-up pitfalls that would inevitably arise in implementation, this project proposed modifications only to the ends of structures. What was surprising, though, was the difference a curved surface made in protecting the integrity of a structure against hurricane-force winds. Implementation was also found to be relatively easy, even on existing structures, especially if only the parapets are made bendable. Particularly compelling about this design proposal were the logistical advantages the system could provide. For example, in the event of an impending hurricane or tornado, it would be very difficult to transport the sick, elderly, and disabled from institutions and assisted-living facilities. This project offered a possible response, increasing the integrity of the building structure by leveraging robotic technologies to provide a more cost-effective and adaptive solution to the challenge. The material that would make up the pliable skin was considered beyond the scope of the project, as well as a topic not directly researched by the group. Sections IV-B and IV-C were examples of devices and systems that could be added onto existing structures to provide response and relief during their respective events. It was noticed that simple prosthetic devices and off-the-shelf components could be innovatively put together to provide effective technologies. Additionally, the presence of Directing Leaves in trees and the Bus Shelter Bellow systems could also lend a novel aesthetic to urban landscapes, which could be useful for special events as well.

V. PROJECT 3: AGING-IN-PLACE

One of the biggest social problems faced by the United States, as well as other nations, is the support of those with short- and long-term disabilities and those whose mental or physical health is in general decline. An increasing number of the aging populace prefer living independently and strongly resist moving into institutional care facilities [38]. The projects presented for the “aging-in-place” assignment were mandated to provide mechanisms that could either delay the first step away from the family home or ease the person’s transition into aged-care establishments, a goal also expressed by Dishman [39].

A. ReLiS - Responsive Lighting and Screen Mount
ReLiS was designed as part of an intelligent home environment, primarily comprising heavy-use appliances that anticipate the needs of occupants. The main component of ReLiS was a 3-link revolute planar redundant robot manipulator, shown in Figure 9, which could actively track the occupant’s position while avoiding obstacles and furniture in order to orient the media center display, mounted on the end-effector, toward the occupant. In addition, the lighting system adjusts lamp intensities based on the occupant’s location and time of day, alleviating the need to travel to fixed switches. The model presents a prototypical single-occupant one-bedroom unit with a combined living room/kitchenette, a schematic of which is shown in Figure 10. The tracking algorithm adjusts display orientation and lighting intensity based on pre-defined zones of the occupant’s location, allowing for minimal robot travel and intrusion, obstacle avoidance, and efficient electrical power consumption.
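The pose of a 3-link planar revolute manipulator like the one underlying ReLiS follows from standard forward kinematics: accumulate the joint angles and sum the link vectors. A minimal sketch, with placeholder link lengths rather than the project's actual dimensions:

```cpp
#include <cmath>

// End-effector pose of a planar arm: position (x, y) and heading theta.
struct Pose {
    double x, y, theta;
};

// Standard forward kinematics for a 3-link planar revolute arm:
// the heading is the running sum of joint angles, and each link
// contributes its length along that heading.
Pose forwardKinematics(const double q[3], const double len[3]) {
    Pose p{0.0, 0.0, 0.0};
    for (int i = 0; i < 3; ++i) {
        p.theta += q[i];                  // accumulate joint angles
        p.x += len[i] * std::cos(p.theta);
        p.y += len[i] * std::sin(p.theta);
    }
    return p;
}
```

The final heading `theta` is what a tracking controller would compare against the bearing to the occupant when centering the screen.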
Fig. 9. Redundant Robot Manipulator

Fig. 10. ReLiS Schematic

The system used two PIR sensors mounted on top of the screen to provide a comparative binocular output used to center the screen end-effector on the occupant. The home-environment lighting worked in conjunction with the redundant robot by increasing the intensity of the lighting in the region where the occupant was located. While not implemented, the design also allowed for a universal remote control with which the occupant could override ReLiS responses at any time.

B. ET - Emotionally Together

A common problem faced by many aged people is loneliness. ET sought to minimize the aspects of loneliness associated with aging by analyzing voice for emotional affect and providing an appropriate automated response. The robot could detect four distinct emotional states: happiness, sadness, anger, and fear. In each of these cases, the robot could provide a personalized stimulus in the sensory areas of lighting color, ambient music, touch (by bringing a favorite piece of reading material), or taste (by serving a favorite beverage). The system is based on an overhead 2-DOF robot capable of prismatic motion along the length and width axes of a single room, shown in Figure 11. A hanging armature equipped with infrared sensors can pick up and deposit objects between any desired points in the room while avoiding obstacles. The robot uses two motor-driven belts to pull the armature carriage along axis tracks. The system used IR sensors to detect the presence of obstacles in the robot’s trajectory. When an obstacle is detected, the robot recalculates its motion possibilities by assigning high cost to the obstructed directions. If no navigable trajectory is available, the robot holds its position.

Fig. 11. The ET arm

C. mKare - Intelligent Side Table
The mKare, shown in Figure 12, was designed as an interactive mobile unit built to aid physically challenged people in their daily lives. mKare was equipped with omnidirectional wheels, powered by servo motors and controlled by a Wii remote, ensuring that the table moved freely in all directions and allowing the user to remotely move the robot as required. The sides were also provided with sturdy flaps that could rise up to provide an extension whenever necessary, giving more workspace for activities such as eating, playing games, or writing, while keeping the overall size of the table compact. A smart lighting system, which could turn on in the event of insufficient ambient light, was also integrated into the design. The related video can be found at [40].

D. IIF - Interactive Inflatable Furniture

While society is making a rapid transition into an all-pervasive high-technology environment, domestic environments today remain comparatively low-tech and conventional, neglecting human conditions, especially for aging-in-place. Increases in life expectancy require special
care and different spatial conditions. For most senior citizens, mundane activities such as sitting, sleeping, and getting up can be hard tasks. The IIF was designed to increase the quality of life of both healthy elderly individuals and persons with impaired mobility. The IIF was constructed primarily out of balloons overlaying a rigid 3-link robot manipulator, forming a robotic chair that could change shape based on the seating or sleeping preference of the user. For the project prototype, the balloons were inflated using the small CO2 canisters used to inflate bicycle tires. The prototype shown in Figure 13 shows only the body of the IIF, since its shape would be lost under the balloons. The related video can be found at [41].

Fig. 12. The mKare Side Table

Fig. 13. Interactive Inflatable Furniture

E. Project III Summary

Three of the four projects associated with this theme involved applications or adaptations of robotic manipulator concepts, while all the projects required some sort of sensing and a general notion of the location of the user within the home setting. It was interesting to see that while the robotic technologies were not necessarily ground-breaking, it could be argued that the applications certainly were. The particular choice of strategies and materials resulted from the collaborative process and produced concepts of high potential. This was particularly noteworthy given the open-ended nature of these research problems. It was found that all these technologies could just as easily be applied to more conventional home or work settings. This seemed to hint that while high-tech devices and computers are now ubiquitous, robotic technology has not yet penetrated the home environment as it has almost every other aspect of our lives, a sentiment echoed by [42], [43], and [44].

VI. LESSONS LEARNED
With each of the projects and with every assignment, it was found that what occasionally manifested itself as a problem was also one of the greatest benefits of the interdisciplinary collaborations: the inherent lack of understanding of the capabilities and purviews of the respective team members. Often, teams struggled with the desire to develop a project idea without a full awareness of how each partner could bring their background and skills to bear toward the success of the project. It can be observed through the descriptions above, however, that the stress of doing so brought forth interesting results which would not have been achieved by either partner working alone. In fact, the resulting projects of this collaboration could, in the best of cases, be defined as neither architecture nor engineering, but as a productive, compelling hybrid of both. Practical problems were experienced due to the scale and “hobby” quality of the electronics used. Often, sensors and actuators were imprecise, poorly characterized, or underpowered, and required ad hoc and time-consuming workarounds to achieve the desired results. Teams invested time in post-process documentation to highlight such difficulties in order to streamline the efforts of future students in the course. While there was no obvious glass ceiling to shatter, the knowledge and understanding gained through the collaborative process is of a kind that will arguably be necessary for the advance of both disciplines, Architecture and Robotics. Robotic technology has not yet advanced enough to be manufactured and supplied to consumers for ubiquitous domestic use. Also, we have yet to understand the social and psychological implications of a robotic, domestic space. In addition, the architecture and building industries are often slow adopters of very new technologies.
For Architecture and Robotics, the collaborative research assignments of this novel course begin to cultivate new vocabularies of design and ways that complex engineered systems can be designed to respond to human needs and wants, developing reconfigurability, reliability, controllability, and usability theories. From Robotics specifically, Architectural Robotics will require sophisticated algorithms for sensing and inferring the occupancy, activities, and external conditions of a building that trigger reconfiguration, as well as planning and routing. For Architecture specifically, the course contributes to an understanding of how to apply new
technologies to improve traditional complex systems such as buildings. For the faculty members in both disciplines, the course poses questions of pedagogy such as: “How do we educate students from these different backgrounds to collaborate productively in teams?” and “What tools could further teaching and learning in the design and implementation of Architectural Robotics?”. The gradual embedding of robotics throughout the built environment will have a broad social impact as these technologies support and, in some cases, augment everyday work, school, entertainment, and leisure. While Architectural Robotics is in its formative stages, what is clear now is that the buildings of tomorrow will be actively responsive to a variety of forces, including weather, security, and human needs, requiring collaborations between the environmental design disciplines and engineering such as this one.

VII. CONCLUSION

Architectural Robotics promises to support and enhance human needs and desires. In the coming decades, robotics embedded in our built environment will support and augment everyday work, school, entertainment, and leisure activities. This course served as an early effort for rising Robotics Engineers and Architects to learn from one another in the process of reckoning with a hybrid of their traditional concerns. It might be said that the expansion of one field into the other is inevitable, and offers potential for Engineers and Architects working together to advance human needs and desires and safeguard the environment.

REFERENCES

[1] B. Siciliano and O. Khatib, Eds., Springer Handbook of Robotics, ch. 52 Medical Robotics and Computer Integrated Surgery, pp. 1199–1222.
[2] G. Ballantyne, “Robotic surgery, telerobotic surgery, telepresence, and telementoring,” Surgical Endoscopy, vol. 16, no. 10, pp. 1389–1402, Oct. 2002.
[3] A. Lanfranco, A. Castellanos, J. Desai, and W. Meyers, “Robotic surgery: A current perspective,” Ann. Surgery, vol. 239, no. 1, pp. 14–21, Jan. 2004.
[4] B.
Siciliano and O. Khatib, Eds., Springer Handbook of Robotics, ch. 53 Rehabilitation and Health Care Robotics, pp. 1223–1251. [5] M. Hillman, “Rehabilitation robotics from past to present a historical perspective,” Adv. Rehab. Robots, vol. 306, no. 1, pp. 25–44, Aug. 2004. [6] S. Hesse, H. Schmidt, C. Werner, and A. Bardeleben, “Upper and lower extremity robotic devices for rehabilitation and for studying motor control,” Current Opinion Neuro., vol. 16, no. 6, pp. 705–710, Dec. 2003. [7] B. Siciliano and O. Khatib, Eds., Springer Handbook of Robotics, ch. 54 Domestic Robotics, pp. 1253–1281. [8] B. Graf, H. Matthias, and R. Schraft, “Care-o-bot iidevelopment of a next generation robotic home assistant,” Autonomous Robots, vol. 16, no. 2, pp. 193–205, Mar. 2004. [9] J. Forlizzi and C. DiSalvo, “Service robots in the domestic environment: a study of the roomba vacuum in the home,” in Proc. IEEE/ACM Int. Conf. Human-Robot Interaction, Salt Lake City, Utah, 2006, pp. 258–265. [10] B. Siciliano and O. Khatib, Eds., Springer Handbook of Robotics, ch. 55 Robots for Education, pp. 1283–1301. [11] F. Martin, Robotic Explorations: A Hands-On Introduction to Engineering. New York, NY: Prentice Hall, 2000. [12] B. Siciliano and O. Khatib, Eds., Springer Handbook of Robotics, ch. 42 Industrial Robotics, pp. 963–986. [13] ——, Springer Handbook of Robotics, ch. 47 Robotics in Hazardous Environments, pp. 1101–1126.
[14] N. Negroponte, The Architecture Machine: Toward a More Human Environment. Cambridge, MA: MIT Press, 1970.
[15] Vitruvius, The Ten Books on Architecture, c. 25 B.C.
[16] W. Mitchell, E-Topia. Cambridge, MA: MIT Press, 2000.
[17] B. Johanson, A. Fox, and T. Winograd, “The Interactive Workspaces project: experiences with ubiquitous computing rooms,” IEEE Pervasive Computing, vol. 1, no. 2, pp. 67–74, Apr.-Jun. 2002.
[18] N. Streitz, J. Geissler, and T. Holmer, “Roomware for cooperative buildings: Integrated design of architectural spaces and information spaces,” in Cooperative Buildings: Integrating Information, Organization and Architecture, Proc. CoBuild ’98, Darmstadt, Germany, 1998, pp. 4–21.
[19] M. Weller and Y. Do, “Robotics: A new paradigm for the built environment,” in Proc. Int. Conf. Des. Sci. and Tech., 2007, pp. 4–21.
[20] K. Oosterhuis, Towards an E-Motive Architecture. Basel, Switzerland: Birkhauser Press, 2003.
[21] H. Raffle, H. Ishii, and J. Tichenor, “Super Cilia Skin: A textural interface,” Textile, vol. 2, no. 3, pp. 1–19, Mar. 2004.
[22] J. L. Brown, “Curtains of water form expo pavilion walls,” Civil Engineering, vol. 78, no. 2, pp. 14–16, Feb. 2008.
[23] K. Green, I. Walker, L. Gugerty, and J. Witte, “Three robot-rooms/the AWE project,” in Proc. CHI 2006, Montreal, Canada, 2006, pp. 809–814.
[24] K. Green, L. Gugerty, J. Witte, I. Walker, H. Houayek, I. Dunlap, R. Daniels, J. Turchi, M. Kwoka, and J. Johnson, “Configuring an animated work environment for digital lifestyles: A user-centered design approach,” in Proc. Int. Conf. Intell. Env., Seattle, WA, 2008, pp. 1–8.
[25] M. Gross and E. Do, “Environments for creativity: a lab for making things,” in Proc. ACM SIGCHI Conf. on Creativity and Cognition, Washington, DC, 2007, pp. 27–36.
[26] S. Joshi, “Development and implementation of a MATLAB simulation project for a multidisciplinary graduate course in autonomous robotics,” Journ. Comp. App. in Eng. Edu., vol. 12, no. 1, pp. 54–64, Apr. 2004.
[27] D. Mellis, M. Banzi, D. Cuartielles, and T. Igoe, “Arduino: An open electronic prototyping platform,” in Proc. Alt.CHI, San Jose, CA, 2007.
[28] McNeel. (2009) Rhino. [Online]. Available: http://www.rhino3d.com/
[29] Autodesk. (2009) AutoCAD. [Online]. Available: http://usa.autodesk.com/
[30] P. Yanik and H. Houayek. (2009) Interactive flower. [Online]. Available: http://www.youtube.com/watch?v=UejX2Db83cE
[31] E. Goode, “Two experts do battle over potty training,” New York Times, Jan. 12, 1999. [Online]. Available: http://www.nytimes.com/1999/01/12/us/two-experts-do-battle-over-potty-training.html?pagewanted=all
[32] I. Siles and T. Mokhtar. (2009) Interactive toilet. [Online]. Available: http://www.youtube.com/watch?v=IxaOsKu9oBE
[33] J. Roberts and B. Sutton-Smith, “Child training and game involvement,” Ethnology, vol. 1, no. 2, pp. 166–185, Apr. 1962.
[34] J. Lin, P. Montpellier, C. Tillman, and W. Riker, “Aerodynamic devices for mitigation of wind damage risk,” in Proc. Int. Conf. Adv. in Wind and Structures, Jeju, Korea, 2008, pp. 1533–1546.
[35] P. Yanik and A. James. (2009) Shelter in a storm. [Online]. Available: http://www.youtube.com/watch?v=-4-1nnE5lqE
[36] A. Kapadia and T. Mokhtar. (2009) The directing leaf. [Online]. Available: http://www.youtube.com/watch?v=oh8m83druO8
[37] I. Siles and J. Manganelli. (2009) Particulate gas localization. [Online]. Available: http://www.youtube.com/watch?v=YTphm MGDqc
[38] E. Mynatt, I. Essa, and W. Rogers, “Increasing the opportunities for aging in place,” in Proc. Conf. Universal Usability, Arlington, VA, 2000, pp. 65–71.
[39] E. Dishman, “Inventing wellness systems for aging in place,” IEEE Computer, vol. 37, no. 5, pp. 34–41, May 2004.
[40] V. Kanuri and A. James. (2009) mkare. [Online]. Available: http://www.youtube.com/watch?v=RqiJb1NtRV4
[41] I. Siles and H. Houayek. (2009) Interactive inflatable furniture. [Online]. Available: http://www.youtube.com/watch?v=kuf2Vn5imc0
[42] R. Kurzweil, The Age of Spiritual Machines. New York, NY: Viking Press, 1999.
[43] B. Gates, “A robot in every home,” Scientific American, Jan. 2007.
[44] R. Brooks, Flesh and Machines. New York, NY: Pantheon, 2003.