Multi-Platform Human-Computer Interaction in Converged Media Spaces

D. Robison¹, I. J. Palmer¹, P. S. Excell², R. A. Earnshaw¹ and O. Al Sheikh Salem¹
¹ School of Computing, Informatics and Media, University of Bradford, UK
² Centre for Applied Internet Research, Glyndŵr University, Wrexham, UK
[email protected], P.Excell@
[email protected]
Abstract

The boundaries between different kinds of media spaces are complex and challenging. The convergence of computing, media, and telecommunications produces environments that contain elements of their origins, but also new components that allow interaction in new ways, by new users, with new kinds of information. This poses problems for effective human-computer and human-media interaction because the paradigms are not well understood. Converged environments are driving these new uses, just as the first PCs supported keyboards and then WIMP interfaces. Traditional models of human-computer interaction are not adequate to deal with this complexity, or with the shifting of boundaries brought about by convergence.
1. Introduction and Background

1.1 Post-WIMP Interfaces

Van Dam [1] argues that WIMP interfaces have survived rather surprisingly for more than two decades, when one might have expected them to be overtaken by the next generation of user interface technology based on gesture and speech recognition, known as post-WIMP. The objective here is to minimise the use of menu-based tools for manipulation and to concentrate on the intent of the user. WIMP interfaces are typically operated by single desktop users who control the sequence of commands via a mouse. The real human environment is characterised by multiple users, multiple interactions, task sharing, and the incorporation of sound and gestures. It is clear that these environments are beyond the capability of WIMP interfaces. The overall objective, therefore, is to make human-computer interaction as powerful as human-human interaction. Such environments also need to be easy to use and easy to learn.

1.2 3D User Interfaces

3D user interfaces allow users to interact with virtual objects, environments or information using direct 3D input. Interest in this area has increased rapidly with the development of systems such as the Nintendo Wii. The take-up of this system in a wide variety of ordinary environments has demonstrated its user appeal and the natural ways in which young and old can use it for multi-player games. It employs 3D graphics for picture generation and spatial input devices that monitor both position and movement, so that trajectories traced out by users can be tracked in real time. The user can also point directly at the screen to select an object.

1.3 Interdisciplinary Aspects

Jeng [2] argues that the increasing emphasis on human behaviours and everyday life situations requires the framework for HCI to give much more attention to interdisciplinary aspects rather than purely technological ones. These include cognitive and psychological aspects, and human and environmental adaptation. Jacob et al [3] propose the concept of "reality-based interaction" to provide a framework for understanding and comparing the wide variety of post-WIMP interfaces now developing. These include virtual and augmented reality, tangible computing, ubiquitous computing, mobile computing, affective computing, and passive interaction.

1.4 Multi-touch Interfaces

Single-touch and multi-touch interfaces (e.g. the iPhone and Microsoft Surface) are increasing in popularity.

2. Characteristics of Converged Environments

In terms of technological evolution, the personal computer can be said to have reached an evolutionary plateau around the turn of the millennium. The standardised format of keyboard, mouse and screen is ubiquitous, and only audio input and output devices can be said to be common add-ons, if not quite ubiquitous. Those with experience of direct dictation (e.g. IBM ViaVoice or Dragon NaturallySpeaking) gain some insight into the problem of the very limited output channels of the human being: whereas the human has an extraordinarily high-resolution stereoscopic video input channel (equivalent to hundreds of megabits per second) and a stereophonic high-fidelity audio input channel (a few hundred kb/s), the basic output channels are the voice, a generally low-fidelity monaural audio channel (around 13 kb/s), and typing, which rarely exceeds 150 b/s. Mouse input is hard to quantify, but is unlikely to approach even typing bandwidth. These considerations illustrate the massive constriction on human creativity caused by the limited human output channels, and they strongly indicate that voice input is the most liberating conventional method, if used with care.

Returning to the personal computer paradigm, the most general traditional interaction modality involves the concept of a "desktop", on which are found "folders", and within these folders are "files". This jargon is precisely that of the desk-bound office worker and is inappropriate to a very large number of users, yet it has become, within relatively few years, "second nature" to a majority of users. Computer games have indeed broken through this rather stagnant paradigm, and this is much to be welcomed, although the game paradigms are also somewhat stagnant and need to be taken forward to wider applicability to all of human life, or at least to all human communication needs. It is ironic that, despite the massive investment in the PlayStation 3 and Xbox 360 platforms, the innovation in human interaction in them is quite modest; on the other hand, the relatively low-budget Nintendo Wii has had a far more significant liberating and enriching effect in facilitating wider modes of human interaction with the intelligent device.

Voice input, of course, has many problems. The accuracy of the software has improved substantially since its launch (in low-cost commoditised form) in the mid-1990s, but it is still equivalent to using a rather inexpert human audio typist. More significantly, it has to be used in an environment of social isolation, which is a significant inconvenience. This raises the question of whether other human output channels are possible, and it is intuitively the case that "body language" conveys a significant amount of information, although it is difficult to quantify and also difficult to store or translate into other formats. Setting aside the observation that even using a keyboard is a form of "body language", we may usefully observe that sign languages for the deaf, such as British Sign Language, are a form of body language that is highly codified and amenable to storage and translation: it is interesting to speculate whether a wider understanding of this language would be beneficial as an alternative means of communicating with computers, quite independently of its utility for the deaf.

Non-codified gestural body language is a subject for further research, but wider insights have been obtained from collaborative research (Baurley et al [4, 5]; Stead [6]; Goulev [7]), in which distinctive insights drove investigation into the significance of emotional communication, including remotely communicated "hugging" and gentle warming. These interactions were incorporated into special-purpose garments: there is clearly much more scope to investigate such ideas, and work so far has only scratched the surface.
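The channel-capacity figures quoted earlier in this section are order-of-magnitude estimates, not measurements, but even taken at face value they imply a striking input/output asymmetry. A small sketch (using the figures exactly as stated in the text, which are assumptions rather than measured values) makes the ratios explicit:

```python
# Rough human I/O channel capacities as quoted in the text, in bits per
# second. These are the paper's order-of-magnitude estimates, not data.
CHANNELS_IN = {
    "stereoscopic vision": 200e6,   # "hundreds of megabits per second"
    "stereo hearing": 200e3,        # "a few hundred kb per second"
}
CHANNELS_OUT = {
    "voice": 13e3,                  # "around 13 kb/s"
    "typing": 150.0,                # "rarely exceeds 150 b/s"
}

def asymmetry_ratio(in_bps: float, out_bps: float) -> float:
    """How many times wider the input channel is than the output channel."""
    return in_bps / out_bps

best_in = max(CHANNELS_IN.values())
best_out = max(CHANNELS_OUT.values())
print(f"vision vs voice:  ~{asymmetry_ratio(best_in, best_out):,.0f}x")
print(f"vision vs typing: ~{asymmetry_ratio(best_in, 150.0):,.0f}x")
```

Even the widest output channel (voice) is roughly four orders of magnitude narrower than vision, and typing is roughly six orders narrower, which is the "massive constriction" the text refers to.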
3. Convergence of Media Structures and Audiences

3.1 Convergence

Developments in HCI in recent years have collided with a range of technological developments and social changes going on in other fields and in "the world at large". Of particular significance (in addition to the developments already mentioned in gaming, mobile technology and associated interface innovations) are developments in audio-visual media production. Computers and peripheral technologies are becoming primary technological constituents of "traditional" media, particularly of broadcast media, in the sense that the technology of capture and editing, if not entirely computer-based, at least makes use of computing technology. Running parallel are developments in the habits of "consumers" of these media. Theorists such as MIT's Henry Jenkins [8] have offered commentaries on the "convergence" of media, computing and communications technologies. In explaining this process, they have emphasised the importance of the 'cultures' in which technology adoption takes place. Convergence is a useful umbrella term that can describe, among other things, how similar media or communication activities (e.g. watching movie trailers) can be carried out through a range of devices and technologies. However, the everyday sense of the word convergence – a 'merging together' or tending towards equilibrium – does not fully describe the variety of media devices and computing technology actually available. We are gaining an increasingly diverse, multi-functional set of technologies, and there is no obvious equilibrium in sight. Paradigms for understanding evolving aspects of new media distribution cannot therefore be static, although a merging of understandings from different fields is desirable. Convergence can exist at the level of technology, content and usage, and also at the level of ownership and control.

3.2 Challenges to traditional broadcast models

The ways that users interact with digital media technologies challenge traditional understandings of the relationship between producers and consumers of 'mass media'. This has been enabled by the increasing range of media forms and channels which saturate modern urban societies, with their multi-level possibilities for broadcast, narrowcast and communicative interaction. These challenges have already required significant financial and structural change in the television and film broadcast industries, which now find themselves in a world where a no-budget YouTube video can, on occasion, attract more viewers than a high-budget drama or factual production for terrestrial broadcast.

Commercial terrestrial broadcasters have seen their advertising revenue and audience shares shrink because of a dual process. The first element was de-regulation, which occurred in the UK during the 1980s and opened the doorway to a plethora of satellite and cable channels. Initially these often contained cheap material such as soap and drama repeats, supplemented by sporting pay-per-view events and the like. They may not have produced news and current affairs, or indeed original drama, to the same specifications of quality, but the sheer volume of material meant that viewers could often find something they would watch when they were not interested in the four primary UK channels. Over time the cable and satellite channels have formed stronger brand identities, and several are now involved in producing original programming. The terrestrial channels have also expanded their 'real estate' by investing in additional digital TV channels and Web media.

The second aspect of the dual process is, of course, the rise of the Internet and computers as an alternative, competing media platform through which moving-image content, text and audio could be consumed – in a 'sit forward' rather than 'sit back' way (Jones et al [9]).

3.3 Multi-platform media output

Established broadcasters are responding to the situation with a variety of strategies in terms of media output, but the broader processes of structural change can still be painful and are likely to produce both winners and losers. Although the initial response of some traditional broadcasters to the rise of the Internet may have been akin to that of a rabbit in car headlights, those that are thriving have – in addition to being 'lucky' in their market positioning – embraced the new media, whilst still attempting to deliver what their core audiences tune in for.

An example of this 'embracing' is the concept of 360-degree, cross-platform or multi-platform media. A number of phrases have been used to describe this process of producing a programme concept or series 'package' which exists across several platforms, with content re-purposed or added in a way that enables an extended and potentially more dynamic two-way interaction with the audience. These approaches have been tentatively engaged in by BBC commissioners, who have been in the fortunate position of not needing to worry, in quite the same way as purely commercial broadcasters, about which part of their output the revenue comes from. The mainly browser-based iPlayer software enables viewers to watch the BBC's premium outputs online at a time convenient to them (programmes usually remain available for a week or more) – but this is only scratching the surface of what multi-platform interaction could mean.
As an example, the immensely popular science fiction TV show "Doctor Who" can be watched in 'broadcaster time': it occupies a particular slot on the terrestrial BBC One channel. This is followed by the opportunity for fans to watch, on another BBC channel, "Doctor Who Confidential", a programme made entirely of 'behind the scenes' footage, interviews and teasers from the series. The BBC has enough digital channels available to justify dealing specifically with this smaller but still significant market. Both programmes are viewable on iPlayer. This alone does not constitute "360-degree" media, but add the official Doctor Who website – hosted as a mini-site within the BBC's overarching Web framework, providing access to trailers, teasers, links to further resources and episode summaries in text form – and you move closer. Most significantly in terms of what we might call 'interactive media', the site provides a means of interacting with and messaging the Doctor Who "experience". The programme output has an extended relationship with select fans and viewers and, if they so desired, users/viewers could spend several hours 'with' Doctor Who, with theme and content largely emanating from just one hour of traditional drama output. The content, however, is re-sliced, reformulated and presented across the platforms. Some of the output is entirely new, but the established broadcasters still clearly privilege the broadcast, linear episode as the premium piece of content.

Of course there is also a Doctor Who 'game' which can be played through the website. Casual, Web-based games built around the themes associated with a television programme, along with content for mobile devices, are another example of how broadcasters have had to embrace new media in order to deepen their relationships with fickle viewers. Doctor Who is not a revolutionary example, but it is indicative of the kind of change taking place.

3.4 Advertising revenues

Convergence is by no means complete, nor does it constitute a smooth and obvious path for broadcasters to go down. It is beset with financial challenges and threats to the dominance of advertising revenue sources. The BBC can justify its licence fee to its review committee on the basis of a relationship with its audience through any of its output channels, but a broadcaster such as ITV needs to place its audience within specific timetabled slots in order to sell television advertising. Given that television advertising with terrestrial broadcasters is the most expensive form of traditional advertising, and that there is an increasing body of evidence that it may not always be very effective (viewers press mute, switch over, or talk to others in the room during ad breaks), it is not surprising that advertisers are looking to the Web and other media as potentially more effective places to publicise their products and services.

The gate-keeping power of the terrestrial broadcasters has taken a severe battering. Google, Yahoo, Live Search, Facebook and MySpace are all able to offer targeted advertising based on key-word analysis, making it more likely that advertisers are communicating with a relevant market. A number of questions remain of concern to media theorists, including "What will happen to professional journalism?" and "What will happen to public service broadcasting?", but the digital generation perhaps has no time for such niceties, given the range of choice and methods of access to media now available.

3.5 Power, politics and convergence of ownership

Media and Web studies academics remain serious and critically analytic in the face of this plethora of "choice". It is easy for commentators to fall into the marketer's trap of proclaiming the benefits of new media forms without due reflection. As three prominent media scholars put it shortly before the millennium, an important debate in media studies is the question of "how might media systems gain the maximum space for independent information-gathering analysis and debate, with the consequent expectation of 'public' value, and also have viable and stable funding?" [10]. Even within the short period since that publication (1997), it is hard to imagine that funding sources for commercial broadcasters could be stabilised in the near future, but there does seem to be strength in size.

An important area where convergence in new media has been occurring is at the level of ownership. Just as we have seen a long-standing consolidation of ownership of newspapers, magazines and TV channels, with most media outputs resting in the hands of a small number of interests, we are starting to see a consolidation, in some areas at least, online. Google (which now owns YouTube), Microsoft, Yahoo, MySpace (owned by News International) and others have become hugely powerful influences on how we interact with the Web. This is not a transparent or straightforwardly organic process, but involves the changing hands of vast resources and informed strategising about how users should interact with – and indeed produce – content.
3.6 Citizen journalism

The idea of citizen journalism and user-as-producer has been the rallying cry of those who believe that new media offer an ultimate liberation to the consumer as a citizen producer: they become a complete citizen, ultimately capable of influencing political decision-making through their interactions and representations. The first elevation comes when we replace the word "viewer" or "listener" with the word "user": we have elevated a practice from one that is (very arguably) passive to one which is active and 'chosen'. We further elevate the user when we talk about "user-generated content" – users literally become producers.

9/11 and the 2004 Indian Ocean earthquake and tsunami were key examples of citizen journalism coming to the fore [11], when a high proportion of the visual content appearing through both broadcast and Web channels was media produced by "ordinary" people on, or just behind, the front line of the events unfolding. However, it would be foolish to suggest that citizens have so easily bypassed the means of power and control over media messages in all contexts. There are certainly examples of digital technology being implicated in such processes, but because the overall process of convergence is neither transparent nor smooth in societies containing antagonistic groups, it cannot be said that the means of power and control over media messages have become less relevant. The convergence of technology and ownership is not a singular process that points to a specific political and social outcome, and it should not be used as a term to mask the conflict and confusion that reigns as the constantly changing new media 'paradigms' take hold.

4. Interaction with Mobile TV

Interactive mobile TV enables viewers to interact with content while viewing it. The use of mobile devices such as Personal Digital Assistants (PDAs), PSPs, iPhones and mobile phones for displaying an interactive mobile TV service is a field attracting increasing interest. The new interactive mobile TV application is an end-to-end solution based on existing technology, which enables mobile phone users to watch streamed TV programmes live and, at the same time, interact with the show. This opens the way to new TV formats, widens target groups and builds customer loyalty, while giving end-users an enhanced TV experience.

Various authoring tools are available for creating different types of interactivity. Some of the most common platforms include Adobe Flash and the more recently released Microsoft Silverlight.

This work divides interactivity into three levels: low-level interactivity, such as "the big red button"; high-level interactivity, such as Second Life; and medium-level interactivity, covering the ground in between. In broad terms these correspond to "lean-forward" (high-interactivity) and "lean-back" (low-interactivity) modes. YouTube is an example of interaction at various levels, from generating content and uploading it, to viewers simply choosing a video and then watching it.

A survey was carried out in the UK involving equal numbers of males and females, using a questionnaire for each user; the results were analysed using ANOVA (analysis of variance) in a statistics toolbox package. Improvements have taken place in mobile device processor speeds and storage, and in the development of input and output devices [12]. The results of the survey indicated that males in the 15-20 age group generally agree that touch-screen mobiles are convenient, and females in the 21-30 age group matched these results. Multi-touch screens such as the iPhone screen are included in these kinds of touch-screen mobile devices.

As mobile devices continue to reduce in size and weight they become more portable, but usability – in our case mobile TV interactivity – begins to suffer. The survey shows general agreement among females that they care about mobile screen size, and none of the surveyed females would own a mobile device with a small screen. There is also general agreement that the 21-30 age group prefers a mobile device with a wide screen; in particular, males in this age group generally agreed that they preferred to own a wide-screen device. Respondents across all age groups cared about screen size. So far, then, users appear to prefer a larger screen, and a touch screen for interaction.
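The survey analysis described above used ANOVA. As a hedged illustration of what that computation involves – with entirely synthetic Likert-style ratings standing in for the paper's actual data – a one-way ANOVA comparing agreement scores across age groups can be computed directly:

```python
# One-way ANOVA sketch for Likert-style survey responses grouped by age band.
# The ratings below are synthetic placeholders, NOT the survey data from the
# paper; they only illustrate the calculation a statistics toolbox performs.

def one_way_anova(groups):
    """Return (F statistic, df_between, df_within) for a list of samples."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(sum(g) for g in groups) / n

    # Between-group sum of squares: how far group means sit from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: scatter of observations around their own mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical 1-5 agreement scores ("touch-screen mobiles are convenient").
age_15_20 = [5, 4, 5, 4, 4]
age_21_30 = [4, 5, 4, 4, 5]
age_over_40 = [2, 3, 2, 3, 2]

f, dfb, dfw = one_way_anova([age_15_20, age_21_30, age_over_40])
print(f"F({dfb},{dfw}) = {f:.2f}")
```

A real analysis would compare the F statistic against an F-distribution critical value (or report a p-value) to decide whether the between-group differences are significant.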
The results of the survey regarding interactivity are as follows. All males prefer to have interactive mobile TV, whereas females in the 21-30 age group disagree entirely. 1) High-level interactivity: males in the 15-20 and 21-30 age groups generally agree that they prefer high-level interactivity on their devices, while females in the over-40 age group generally find it unacceptable. 2) Medium-level interactivity: males in the over-40 age group generally disagree with having medium-level interactivity applications on their mobile devices; females in the 15-20 age group generally agree to having it, but those in the 31-40 age group think the opposite. 3) Low-level interactivity: females in the over-40 age group disagree with having low-level interactivity on their mobile devices.

5. Interaction Modalities in Converged Environments

Moore's Law continues to drive storage capacity, but its influence on processing ability is more questionable: processor manufacturers have had to resort to multi-core devices in the face of an inability to raise clock frequencies further, and those with decades of experience of parallel computing understand that efficient exploitation of multi-core processors is a very complex matter; there is evidence that, in general, they are not being exploited effectively. Meanwhile, mobile devices – principally the mobile phone – have been advancing rapidly and continue to develop on the rising part of the evolutionary S-curve. A simple calculation shows that the progression of data rates from second-generation mobile phones (i.e. the first generation of digital phones), through third generation, to the projected launch of fourth-generation systems (i.e. probably 3G-LTE) averages out to an effective doubling of data rates roughly every year. Although data rate does not map exactly onto Moore's Law, it is nonetheless pertinent to compare this progression with Moore's prediction of a doubling of the number of gates every 18 months. This vibrancy in the mobile device evolutionary process strongly suggests that the "baton has been passed" to mobile devices; unfortunately, their HCI bandwidths are currently significantly worse than those of personal computers, and alleviating this is a very high priority for innovative mobile device designers.

The input channel from a human being into the mobile device is currently extremely inefficient and, again, voice input would be the most desirable way forward. This is already possible with mobile devices capable of running full Windows, but these are a small minority. As with voice input to personal computers, the social acceptability of having to talk to a device in conditions of relative silence is problematic, and this begs the question of whether silent voice-like interaction might be possible.

The output channels from mobile devices into human beings are predominantly visual and audio, and the Bluetooth headset has done a great deal to make the latter more convenient. The visual output is currently very poor, in both size and resolution. Head-mounted displays have been available for some time, but have not gained widespread acceptance. Experiments with a single-eyepiece head-mounted display have illustrated possible reasons for the lack of uptake to date. The most significant reason is probably social, in that users of eyepiece displays can be perceived as looking "geeky", to the point of appearing similar to "The Borg" (a notorious villainous species in "Star Trek").

A more technological problem with eyepiece displays is that they are not particularly compelling if they simply display the Windows desktop or a screen from Word. They become much more relevant if the display is in some way correlated with the external environment, and there are two ways in particular in which this could be achieved. Firstly, the opaque-screen paradigm could be abandoned and imagery overlaid on the ambient environment instead, similar to the "head-up display" used in advanced military aircraft. Secondly, it is very desirable that the display should move as the wearer's head moves, such that the wearer appears to be scanning a fixed scene. This mode of operation is known from some computer game headsets, which use 3-axis rate gyroscopes to determine head movement. Such an enhancement is in fact essential with head-up displays anyway, but would also be very valuable with opaque displays. Again, there needs to be a complete break with the traditional Windows-Word-Web browser usage paradigms, and completely new applications and new forms of digital media for display need to be developed in parallel with the technological innovations.

6. Interaction with Computer Games

Before the advent of the dedicated games console, video games for home use ran on standard computer hardware, and so input devices were largely limited to keyboards and mice. This is in direct contrast to arcade games, which used specialised hardware for input and output. The distinction has since been blurred by the reduction in hardware costs of specialised devices and the decline of the arcade, and it is now the home console market that drives the technology and sets the standards, owing to the scale of its market.

Early video games used specialised hardware that was beyond the means of home users. Atari's first game, Pong [13], used a standard television display with rotary controllers to move the on-screen paddles. Whilst this input method was highly appropriate to the particulars of Pong's gameplay (it was replicated on the home console version), twin rotary controllers clearly do not provide a very generic approach to controlling other games. As the variety of computer games increased, so did the number of specialised input devices, including trackballs (e.g. Atari's Football [14] and Missile Command [15]), steering wheels (used in many racing games, including Namco's Pole Position [16] and Sega's Outrun [17]) and joysticks (e.g. Atari's Battlezone [18]), each being particularly suited to different game genres. This posed a problem for the home console market, since offering a number of different devices was not commercially viable, and so work on a generic device that would be effective with a wide range of game types was crucial to the success of the market.

In 1983, Nintendo developed the first effective 'gamepad' [19]. This consisted of directional control via a four-way rocker switch on the left and two input buttons ('A' and 'B') on the right; in the centre were two additional buttons, 'start' and 'select', originally intended for choosing options and beginning and pausing the game. Whilst this set the fundamental model for most gamepads that followed (including the latest PS3 controller [20]), as games became more sophisticated they soon 'outgrew' the input options this relatively simple device offered. Taking a game series such as the iconic Tomb Raider [21] as an example, the number of different actions the main character could perform increased with each new release, soon requiring button combinations (i.e. holding down more than one button at a time) for many functions. Besides being awkward for the user, this means that the player has to remember many different button combinations to complete the game, many of which must be recalled at moments of high tension. The industry has now reached the stage where the PS3 controller offers two joysticks, sixteen buttons (some of which are force-sensitive) and a 'six-axis' motion-sensing feature, and it is hard to envisage any further increase that would be usable by a diverse user base.

There have been several attempts to counter this increasing complexity of devices by using other input methods. One approach has been voice input, with early examples such as Sega's Seaman for the Dreamcast [22] and Nintendo's Hey You, Pikachu! for the N64 [23], both of which used specialised hardware included with the game and had only limited commercial success, with little interest outside their native Japan. Processing power and voice recognition algorithms have both improved greatly since these early examples, and this, together with the fact that many console owners already have microphones attached to their machines for use in online collaborative games, has meant that more recent attempts such as Ubisoft's Tom Clancy's EndWar [24] have received considerable critical acclaim and commercial success.

Other approaches have produced specialised input devices such as the Gametrak [25] for the PS2, but these typically prove to be of limited success due to their cost and limited applicability across game types, just as for the rotary controllers of the early Pong games. More successful has been the use of video input devices, the most common being Sony's EyeToy [26]. These are relatively cheap cameras which, when combined with image recognition software, allow the position and movement of players' images to be used as input for games. The player's mirror image is displayed on the screen to provide feedback and, although the system requires some calibration and is dependent on lighting and background contrast, it can be quite effective. The main problem with the system is the relative crudeness of the input that is possible in this way, so early applications were limited to simple 'party' games such as EyeToy: Play [27] and EyeToy: Groove [28]. With the increased sophistication of the PS3 came the PlayStation Eye [29], a higher-specification camera that allowed games such as Eye of Judgment [30] to create an augmented reality game environment, mapping 3D elements onto a live video feed from the camera. The game incorporated cards with symbols that were recognised by the system, with appropriate graphics created on the screen, effectively creating a simple augmented reality environment.
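The EyeToy-style camera input described above is, at heart, change detection: the software compares successive frames and treats a sufficiently large difference within a screen region as a 'touch' or gesture. A minimal, hypothetical sketch of that idea follows; the frames are synthetic grayscale grids and every name and threshold is an illustrative assumption, not any actual console API.

```python
# Frame-differencing sketch of EyeToy-style camera input: motion inside a
# screen region is detected by counting pixels that changed between frames.
# Frames are plain 2D lists of grayscale values (0-255).

def region_motion(prev, curr, top, left, height, width,
                  pixel_threshold=30, count_threshold=5):
    """Return True if enough pixels changed inside the given region."""
    changed = 0
    for y in range(top, top + height):
        for x in range(left, left + width):
            if abs(curr[y][x] - prev[y][x]) > pixel_threshold:
                changed += 1
    return changed >= count_threshold

# Two tiny synthetic 8x8 frames: a "hand" brightens a 3x3 patch at top-left.
prev_frame = [[0] * 8 for _ in range(8)]
curr_frame = [[0] * 8 for _ in range(8)]
for y in range(3):
    for x in range(3):
        curr_frame[y][x] = 200

print(region_motion(prev_frame, curr_frame, 0, 0, 4, 4))   # motion in region
print(region_motion(prev_frame, curr_frame, 4, 4, 4, 4))   # quiet region
```

Real systems add calibration, smoothing over several frames and background modelling, which is why the text notes the dependence on lighting and background contrast.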
The most significant development in game UI in recent years has been Nintendo’s Wii console [31] and its associated Wii Remote (or ‘Wiimote’) controller. This uses a variety of approaches, including accelerometers and infrared sensors, to detect position and movement, so that in many games traditional joysticks and buttons are completely replaced by pointing and gestures.

7. Conclusions

This paper has considered the effects on human-computer interaction of converged media environments and the changing nature of both media and technology. It is clear that the Internet and Web 2.0 have changed the way in which content is generated, mediated, and accessed by users. At the same time, interaction is still relatively primitive, consisting mainly of selecting items from a menu or using a pointing device, though both input and output may be in multimedia form. In general, augmented reality interaction devices have not gained wide acceptance due to the disconnect that they introduce between the virtual world and the real world. They only seem to work well when the user is completely immersed within the virtual world (e.g. in a training or entertainment simulator). For the user in the real world, connection with the content appears trivial and can be made with a mouse click (e.g. to display a page, or access a service), but this hides the underlying transformation taking place as computing, media, and telecommunications environments converge.

References

[1] A. van Dam, “Post-WIMP User Interfaces”, Communications of the ACM, Vol. 40, No. 2, 1997, pp. 63-67.
[2] T. Jeng, “Advanced Ubiquitous Media for Interactive Space: A Framework”, in B. Martens and A. Brown (Eds.), Computer Aided Architectural Design Futures 2005, Proceedings of the 11th International CAAD Futures Conference, Austria, 2005.
[3] R. J. K. Jacob, A. Girouard, L. M. Hirshfield, M. S. Horn, O. Shaer, E. T. Solovey and J. Zigelbaum, “Reality-Based Interaction: A Framework for Post-WIMP Interfaces”, Proceedings of CHI 2008, 2008.
[4] S. Baurley, “Interaction design in smart textiles clothing and applications”, in X. Tao (Ed.), Wearable Electronics and Photonics, Boca Raton, FL: CRC, 2005, pp. 223-244.
[5] S. Baurley, “Interactive and Experiential Design in Smart Textile Products and Applications”, Personal and Ubiquitous Computing, Vol. 8, Nos. 3-4, 2004, pp. 274-281.
[6] L. Stead, P. Goulev, C. Evans and E. Mamdani, “The Emotional Wardrobe”, Personal and Ubiquitous Computing, Vol. 8, Nos. 3-4, 2004, pp. 282-290.
[7] P. Goulev, L. Stead, E. Mamdani and C. Evans, “Computer Aided Emotional Fashion”, Computers & Graphics, Vol. 28, No. 5, 2004, pp. 657-666.
[8] H. Jenkins, Convergence Culture: Where Old and New Media Collide, New York: New York University Press, 2006.
[9] M. Jones, G. Buchanan, P. Jain and G. Marsden, “From Sit-Forward to Lean-Back: Using a Mobile Device to Vary Interactive Pace”, in Proceedings of Mobile HCI, Italy, Springer, 2003, pp. 390-394.
[10] J. Corner, P. Schlesinger and R. Silverstone, International Media Research, 1997, p. 6.
[11] D. Robison and W. Robinson, “Tsunami Mobilizations: Considering the Role of Mobile and Digital Communication Devices, Citizen Journalism, and the Mass Media”, in A. P. Kavoori and N. Arceneaux (Eds.), The Cell Phone Reader: Essays in Social Transformation, Digital Formations Vol. 34, New York: Peter Lang, 2006, p. 246.
[12] B. Shneiderman, Designing the User Interface: Strategies for Effective Human-Computer Interaction, 4th Ed., Addison-Wesley, 2004, ISBN 0321269780.
[13] Pong, arcade video game, Atari Inc., 1972.
[14] Football, arcade video game, Atari Inc., 1979.
[15] Missile Command, arcade video game, Atari Inc., 1980.
[16] Pole Position, arcade video game, Namco Ltd, 1982.
[17] Outrun, arcade video game, Sega Corp., 1986.
[18] Battlezone, arcade video game, Atari Inc., 1980.
[19] Nintendo Entertainment System, video games console, Nintendo Co. Ltd, 1985.
[20] Playstation 3, video games console, Sony Computer Entertainment, 2007.
[21] Tomb Raider, video game, Eidos Interactive, 1996.
[22] Seaman, video game, Sega Corp., 1999.
[23] Hey You, Pikachu!, video game, Nintendo Co. Ltd, 1998.
[24] Tom Clancy’s EndWar, video game, Ubisoft Entertainment, 2008.
[25] Gametrak, video games console peripheral, In2Games, 2004.
[26] EyeToy, video games console peripheral, Sony Computer Entertainment, 2003.
[27] EyeToy: Play, video game, Sony Computer Entertainment, 2003.
[28] EyeToy: Groove, video game, Sony Computer Entertainment, 2003.
[29] Playstation Eye, video games console peripheral, Sony Computer Entertainment, 2007.
[30] Eye of Judgement, video game, Sony Computer Entertainment, 2007.
[31] Wii, video games console, Nintendo Co. Ltd, 2006.