
LOCATION MODELING FOR UBIQUITOUS COMPUTING

Workshop Proceedings Ubicomp 2001, Atlanta - September 30, 2001

Michael Beigl - Phil Gray - Daniel Salber

Table of Contents

Preface
Participants and Organizers
Program

Mari Korkea-aho and Haitao Tang
Experiences of Expressing Location Information for Applications in the Internet

Nirupama Bulusu, Deborah Estrin, John Heidemann
Tradeoffs in Location Support Systems: The Case for Quality-Expressive Location Models for Applications

Svetlana Domnitcheva
Location Modeling: State of the Art and Challenges

Jeffrey Hightower, Gaetano Borriello
Real-Time Error in Location Modeling for Ubiquitous Computing

Harry Funk, Chris Miller
Location Modeling for Ubiquitous Computing: Is This Any Better?

Martin Bauer, Christian Becker, Kurt Rothermel
Location Models from the Perspective of Context-Aware Applications and Mobile Ad Hoc Networks

Thomas O'Connell, Peter Jensen, Anind Dey, Gregory Abowd
Location in the Aware Home

Craig H. Ganoe, Wendy A. Schafer, Umer Farooq, John M. Carroll
An Analysis of Location Models for MOOsburg

Stefan Gessler, Kay Jesse
Advanced Location Modeling to enable sophisticated LBS Provisioning in 3G networks

Barry Brumitt, Steven Shafer
Topological World Modeling Using Semantic Spaces

Christoph Schlieder, Thomas Vögele, Anke Werner
Location Modeling for Intentional Behaviour in Spatial Partonomies

Thomas Pederson
Object Location Modelling in Office Environments - First Steps

Mark Burnett, Paul Prekop, Chris P. Rainsford
Intimate Location Modeling for Context Aware Computing

Joachim Gossmann, Marcus Specht
Location Models for Augmented Environments

Murray Crease, Philip Gray, Julie Cargill
Using Location Information in an Undergraduate Computing Science Laboratory Support System

Domenico Porcino, Martin Wilcox
Empowering 'Ambient Intelligence' with a Direct Sequence Spread Spectrum CDMA Positioning System

Bernt Schiele, Stavros Antifakos
Beyond Position Awareness

Gerald Bieber
Non-deterministic location model on PDA's for fairs, exhibitions and congresses

Natalia Marmasse, Chris Schmandt
Location Modeling

Preface

Workshop on Location Modeling for Ubiquitous Computing
September 30, 2001, held as part of the Ubicomp 2001 Conference

Michael Beigl (TecO, University of Karlsruhe), Phil Gray (GIST, University of Glasgow), Daniel Salber (IBM T.J. Watson Research Center)

Many ubicomp applications make use of location information sensed using diverse sensors. To be able to relate locations, compute with them, or present location information to the user, applications use a location model, although it is often implicit. The aim of this workshop is to understand what location models are used, how they are related, and identify requirements for a standard location model for ubiquitous computing.

1. Topics and Introduction

Location information is crucial for many mobile and ubiquitous computing applications. Location is used for a variety of purposes, e.g., to track people [1], to guide visitors [2], to trigger events [3] or to route communication packets [4]. These systems use a location model to represent different locations. The model allows one to distinguish between different locations, to compute with them, e.g., to compare locations or calculate distances, or to present the information to the user. Often existing models, e.g., from geographic information systems, are used and technically realized. The choice of a location model has implications for the usability of the applications as well as for the ease of implementation. Furthermore, different application domains might need different types of location models. This workshop aims at providing a forum for designers, developers and users of location models to exchange experiences and inspire their own work. Questions from disciplines other than computer science that contribute to the theme of location modeling (e.g., cognition of place, urban planning) should also be discussed. Participants from these disciplines will be welcome at the workshop. The final goal of


the workshop is to develop an understanding of how to model location information. This includes the following topics:

· Enumerate and compare existing location sensing technologies and their underlying location models. Contributions based on experiences using such technologies are especially welcome.
· Consider the usefulness of existing location models (e.g., from geographical information systems (GIS), cartography, geography, urban planning, etc.).
· Present, assess and compare new models developed for ubicomp/mobile applications.
· Identify the special features of ubicomp/mobile applications and distinguish ubicomp location modeling from location modeling for other domains.
· Devise requirements for model(s) that would be suitable for ubicomp.
· Assess how models and data might be shared for reuse. This includes the consideration of access, intellectual property, pricing, privacy, etc.
· Explore the complementarities and potential synergies between different location models. Various location models exist or are planned. Are there approaches to relate different models to each other?
· Understand how technical parameters (e.g., deviation, accuracy, drift) should be represented in a location model. Which of the parameters are important/useful for applications? Can they be classified?

2. Desired Outcome

We intend that the workshop will have concrete outcomes that will advance the development of location modeling for the ubicomp community. In particular, outcomes should include:

· the evaluation and comparison of current location models;
· the identification of a set of requirements for a standard location model or modeling language, possibly the beginning of a proposal for an (XML-based) location description language for ubicomp;
· the creation and consolidation of links between researchers interested in location modeling for ubicomp, possibly in the form of an informal working group tackling the issue of a location description language for ubicomp.

References

1. Andy Harter, Andy Hopper. A Distributed Location System for the Active Office. IEEE Network, 8(1), January 1994.
2. K. Cheverst, N. Davies, K. Mitchell, A. Friday and C. Efstratiou. Developing a Context-aware Electronic Tourist Guide: Some Issues and Experiences. Proceedings of CHI 2000, The Hague, The Netherlands, April 2000, p. 17-24.


3. Anind K. Dey and Gregory D. Abowd. CybreMinder: A Context-Aware System for Supporting Reminders. Proceedings of the 2nd International Symposium on Handheld and Ubiquitous Computing (HUC2K), Bristol, UK, September 25-27, 2000, p. 172-186.
4. Hupfeld, F. and Beigl, M. Spatially aware local communication in the RAUM system. Proceedings of IDMS 2000, Enschede, The Netherlands, 2000.


Program 9:00-9:10

Welcome

9:10-9:50

Session on Location Models overview and requirements

9:10-9:30

Mari Korkea-aho [Helsinki University of Technology, Finland] and Haitao Tang [Nokia Research Center, FIN-00045 Nokia Group, Finland] Experiences of Expressing Location Information for Applications in the Internet

9:30-9:40

Nirupama Bulusu [Laboratory for Embedded Collaborative Systems, University of California, Los Angeles], Deborah Estrin [Laboratory for Embedded Collaborative Systems, University of California, Los Angeles], John Heidemann [USC/Information Sciences Institute] Tradeoffs in Location Support Systems: The Case for Quality-Expressive Location Models for Applications

9:40-9:50

Svetlana Domnitcheva [Distributed Systems Group, Department of Computer Science, ETH Zurich, Swiss Federal Institute of Technology, 8092 Zurich, Switzerland] Location Modeling: State of the Art and Challenges

9:50-10:20

Session on Quality of Service and Error Handling in Location Models

9:50-10:10

Jeffrey Hightower, Gaetano Borriello [University of Washington, Computer Science and Engineering] Real-Time Error in Location Modeling for Ubiquitous Computing

10:10-10:20

Harry Funk, Chris Miller [Smart Information Flow Technologies, 2119 Oliver Avenue South, Minneapolis, Minnesota, 55405-2440 U.S.A.] Location Modeling for Ubiquitous Computing: Is This Any Better?

10:20-10:40

Coffee Break


10:40-12:20

Session on Semantic Location Models

10:40-11:00

Martin Bauer, Christian Becker, Kurt Rothermel [Universität Stuttgart, Fakultät für Informatik, IPVR, Breitwiesenstr. 20-22, D-70565 Stuttgart, Germany] Location Models from the Perspective of Context-Aware Applications and Mobile Ad Hoc Networks

11:00-11:10

Thomas O'Connell, Peter Jensen, Anind Dey, Gregory Abowd [College of Computing, Georgia Institute of Technology, Atlanta, GA 30332-0280, USA] Location in the Aware Home

11:10-11:20

Craig H. Ganoe, Wendy A. Schafer, Umer Farooq, John M. Carroll [Center for Human-Computer Interaction and Department of Computer Science, Virginia Tech, Blacksburg, VA 24061-0106, USA] An Analysis of Location Models for MOOsburg

11:20-11:30

Stefan Gessler [NEC Network Laboratories Europe, Adenauerplatz 6, 69115 Heidelberg, Germany], Kay Jesse [TeraSystems GmbH, Beiertheimer Allee 58, 76137 Karlsruhe, Germany] Advanced Location Modeling to enable sophisticated LBS Provisioning in 3G networks

11:30-11:40

Barry Brumitt, Steven Shafer [Microsoft Corporation, One Microsoft Way, Redmond, WA, 98053 USA] Topological World Modeling Using Semantic Spaces

11:40-12:00

Christoph Schlieder, Thomas Vögele, Anke Werner [Technologie-Zentrum Informatik, Universität Bremen, Postfach 330440, 28334 Bremen, Germany] Location Modeling for Intentional Behaviour in Spatial Partonomies

12:00-12:10

Thomas Pederson [Department of Computing Science, Umeå University, SE-90187 Umeå, Sweden] Object Location Modelling in Office Environments - First Steps

12:10-12:20

Mark Burnett, Paul Prekop, Chris P. Rainsford [Information Technology Division, Defence Science and Technology Organisation, Department of Defence, Fern Hill Park, Canberra ACT 2600, AUSTRALIA] Intimate Location Modeling for Context Aware Computing

12:20-13:10

Lunch


13:10-13:50

Session on Geometric Location Models

13:10-13:30

Joachim Gossmann, Marcus Specht [Fraunhofer-IMK and Fraunhofer-FIT] Location Models for Augmented Environments

13:30-13:40

Murray Crease, Philip Gray, Julie Cargill [Department of Computing Science, University of Glasgow, Glasgow, G12 8QQ, UK] Using Location Information in an Undergraduate Computing Science Laboratory Support System

13:40-13:50

Domenico Porcino, Martin Wilcox [Philips Research Laboratories, Cross Oak Lane, Redhill, RH1 5HA, England] Empowering 'Ambient Intelligence' with a Direct Sequence Spread Spectrum CDMA Positioning System

13:50-14:30

Session on Probabilistic and Learning Location Models

13:50-14:10

Bernt Schiele, Stavros Antifakos [Perceptual Computing and Computer Vision Group, ETH Zurich, Switzerland] Beyond Position Awareness

14:10-14:20

Gerald Bieber [Fraunhofer-Institute for Computer Graphics (IGD) Rostock, J.-Jungius-Str.11, 18059 Rostock, Germany] Non-deterministic location model on PDA's for fairs, exhibitions and congresses

14:20-14:30

Natalia Marmasse, Chris Schmandt [MIT Media Laboratory, 20 Ames Street, Cambridge, MA 02139, USA] Location Modeling

14:30-16:30

Small group discussions

15:00-15:30

Afternoon coffee (between the discussions)

16:30-17:30

Final plenary (presenting results of small groups & concluding)


Experiences of Expressing Location Information for Applications in the Internet

Mari Korkea-aho¹ and Haitao Tang²

¹ Department of Computer Science and Engineering, Helsinki University of Technology, Finland, [email protected]
² Nokia Research Center, FIN-00045 Nokia Group, Finland, [email protected]

Abstract. As part of the Spatial Location Protocol activity in the Internet Engineering Task Force (IETF) we have been working on how to express location information in an interoperable way in the Internet. The objective of this paper is to share our ideas and experiences on concepts for enabling interoperability and reuse of location information. These concepts can also be used in the area of ubiquitous computing.

1. Introduction

Location information is quite challenging, since it can be expressed in so many different ways depending on the application domain and the requirements of the application using the location information. As part of the Spatial Location Protocol (SLoP) activity [1] started at the beginning of 2000 in the Internet Engineering Task Force (IETF), we have been working on how to express location information in an interoperable way in the Internet [2, 3, 4, 5]. The activity was initiated in order to create a common standard way for obtaining location information in the Internet. In this paper we want to share our ideas and experience on concepts for enabling interoperability and reuse of location information. We think that these same concepts can be used in the area of ubiquitous computing.

2. Expressing Location Information

Location information can be expressed in very many different ways. The way of expressing the location information reflects the needs of the application domain it was planned for. With location information we understand information expressing the physical location of an object, as well as additional information that can be necessary for using the location data, for improving the location measurement, or for bringing additional value to the location data. Such information is e.g. accuracy information, object identifiers (IDs), time stamps, etc. [2].


The location can be expressed using different reference frames, e.g. as absolute spatial location, descriptive location, or relative location [2]. Absolute spatial location is the physical location of an object in the world, expressed via a 2- or 3-dimensional coordinate system in a particular spatial reference system. The spatial reference system expresses a 2- or 3-dimensional model of the earth and determines how the used coordinate system is attached to the model. Descriptive location is a location described through other means than a coordinate system. Examples of descriptive locations are e.g. street address, building number, country, etc. Relative location is a specific type of descriptive location, where the location of an object is described relative to some other object, e.g. "100 meters from the store", "the building next to the tower", etc. Generally, a descriptive location can be mapped to an absolute spatial location.

2.1 Existing Location Information Expressions

There are many different ways of expressing location information defined by numerous application domains and organizations. They include [2]:

• Expression standardized for GSM and UMTS (called here "3GPP") to be used internally in the GSM and UMTS mobile networks, specified by the Third Generation Partnership Project (3GPP).
• An interface towards mobile networks (e.g. GSM) for providing access to location information of mobile terminals, in consideration by the Location Interoperability Forum (LIF).
• The Geography Markup Language (GML) for storing and transporting geographic information, specified by the Open GIS Consortium (OGC).
• NaVigation Markup Language (NVML) for describing navigation information, submitted by the Fujitsu Laboratories to the World Wide Web Consortium (W3C).
• Point Of Interest eXchange Language (POIX) for exchange of location-related information over the Internet, created by the MObile Information Standard TEchnical Committee (MOSTEC) and submitted to the W3C.
• Geotags for geographic registration and resource discovery of Hypertext Markup Language (HTML) documents.
• National Marine Electronics Association's (NMEA) interface and data protocol NMEA-0183, often used by GPS receivers.
• The electronic business card format VCard and ICalendar for exchanging electronic calendaring and scheduling information in the Internet, which include elements to specify position.
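As an illustration of the three reference frames described above, the sketch below models them as Python dataclasses. The class and field names are my own invention for illustration, not taken from the paper or from the SLoP drafts.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AbsoluteLocation:
    """Coordinates in a named spatial reference system (e.g. WGS 84)."""
    latitude: float                   # geodetic latitude in degrees
    longitude: float                  # geodetic longitude in degrees
    altitude: Optional[float] = None  # metres, if known
    reference_system: str = "WGS 84"

@dataclass
class DescriptiveLocation:
    """A location described through other means than a coordinate system."""
    description: str  # e.g. a street address, building number, or country
    # A descriptive location can generally be mapped to an absolute one:
    resolved: Optional[AbsoluteLocation] = None

@dataclass
class RelativeLocation(DescriptiveLocation):
    """A descriptive location expressed relative to some other object."""
    reference_object: str = ""          # e.g. "the store"
    distance_m: Optional[float] = None  # e.g. 100 metres from it

office = DescriptiveLocation("Nokia Research Center, Helsinki")
spot = RelativeLocation("100 meters from the store",
                        reference_object="the store", distance_m=100.0)
print(spot.reference_object, spot.distance_m)
```

Making relative location a subclass of descriptive location mirrors the paper's statement that it is "a specific type of descriptive location".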


• A Means for Expressing Location Information in the Domain Name System (DNS-LOC), specified in an Internet draft by Davis et al.
• Simple Text Format for the Spatial Location Protocol (SLoP) (here called "SLoP-simple"), proposing a simple text-based format to carry a minimal location data set, by Mahy.
• GMML, XML-based geographical information for navigation with a mobile, specified at the University of Jyväskylä.
• LandXML, an XML-based data format for exchange of data created during land planning, civil engineering and land survey processes.
• Geospatial-eXtensible Markup Language (G-XML) for encoding and exchanging geospatial data, specified by the G-XML Committee in Japan.
• Common Spatial Location Data Set [3], Spatial Location Payload [4], and Common Syntax and Coding for Descriptive Location [5], developed during the SLoP activity.

In addition to these, there are several other non-public specifications of location information expressions, including those from the WAP Forum Location Drafting Committee, the Bluetooth Special Interest Group, ISO/TC211, etc.

3. Tackling the Challenge of the Multitude of Location Expressions

For us it appears to be a waste of resources that each and every location information application or application domain would need to create its own, probably non-interoperable way of expressing location information. At least from an interoperability point of view in the Internet, it is good if there exist common ways of expressing and processing location information. This can be tackled in several ways.

3.1 A Common Location Data Set

The idea of a common location data set is to enable location information sources and applications to express location information in an interoperable way with the help of a common location data set. As part of the SLoP activity we have proposed such a set [3]. Our aim was to create a simple lowest-common-denominator data set that as many location information sources and applications in the Internet as possible could use.

In order to propose such a set, we analyzed different existing location information expressions, as well as the requirements on location information of different location information services [2, 3]. Based on the analysis we proposed a common location data set called the "Common Spatial Location Data Set", consisting of elements for


describing the absolute spatial location of an object in geodetic latitude, longitude and altitude, the accuracy of the location measurement, the time of the location measurement, speed, direction, course, and orientation.

3.1.1 Encoding in Extensible Markup Language

The elements of the data set also need to be expressed and encoded in a common way. We chose to encode the data set in Extensible Markup Language (XML), because XML enables the use of standard processing tools and is human readable. In addition, many of the existing location information expressions use XML (see Table 1). Further on, XML also enables extendibility and reuse of different location information data sets with the help of XML Schema [6].

With the help of XML and XML Schema, e.g. the Common Spatial Location Data Set can be used as a basis for other location information expressions. However, a word of reservation needs to be raised here. The flexibility that XML Schema brings for reusing and extending different data sets can lead to numerous different location information expressions, which again can be a challenge if transformations between these expressions are needed. This is because we need to define the transformation rules for each pair of location information expressions we want to transform between.

Table 1. Encoding of different location information expressions

Expression   | XML | Binary | Text
-------------|-----|--------|-----
3GPP         |     |   x    |
LIF          |  x  |        |
GML          |  x  |        |
NVML         |  x  |        |
POIX         |  x  |        |
Geotags      |     |        | x¹
NMEA         |     |        | x
VCard        |     |        | x²
ICalendar    |     |        | x²
DNS-LOC      |     |   x    | x
SLoP-simple  |     |        | x
GMML         |  x  |        |
LandXML      |  x  |        |
G-XML        |  x  |        |

¹ using HTML META tags; ² using the GEO element in VCard and ICalendar, or the LOCATION element in VCard.
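To make the XML encoding of such a data set concrete, the following sketch builds a hypothetical instance with Python's standard library. The element names are illustrative only; the actual names and structure are defined in the Internet draft [3].

```python
import xml.etree.ElementTree as ET

# Hypothetical XML encoding of the Common Spatial Location Data Set elements
# listed above (latitude, longitude, altitude, accuracy, time of measurement).
# Element names are illustrative, not the ones from the Internet draft.
def encode_location(lat, lon, alt, accuracy_m, timestamp):
    root = ET.Element("spatialLocation")
    ET.SubElement(root, "latitude").text = str(lat)    # geodetic, degrees
    ET.SubElement(root, "longitude").text = str(lon)   # geodetic, degrees
    ET.SubElement(root, "altitude").text = str(alt)    # metres
    ET.SubElement(root, "accuracy").text = str(accuracy_m)  # metres
    ET.SubElement(root, "timeOfMeasurement").text = timestamp
    return ET.tostring(root, encoding="unicode")

xml_doc = encode_location(60.17, 24.94, 12.0, 25.0, "2001-09-30T09:10:00Z")
print(xml_doc)
```

Because the result is plain XML, it can be processed with any standard XML toolchain, which is exactly the advantage the authors cite for choosing XML.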

3.2 A Common Way of Expressing Different Location Information Expressions

The proposed common location data set was designed to enable as many location information sources and applications as possible to use it. However, we do not believe they can all restrict themselves to one common location information expression. Different location applications may need different location information. It is thus necessary to have a solution that also enables the use of different location information expressions. There are several issues regarding this.


3.2.1 Common Naming and Registering

If each location information expression has a unique identifier, the different expressions can be identified. The identifier will simplify the identification, processing and possible transformation of location information expressions. The naming scheme should be a common one in order to guarantee uniqueness and enable common processing of identifiers. It could be based on URIs (Uniform Resource Identifiers). For some absolute spatial location expressions there already exist unique identifiers maintained by the European Petroleum Survey Group (EPSG) [2].

In order to enable transformations between location information expressions, we need to be able to identify them, as well as know the syntax and the transformation rules for the conversion between the expressions. For simplifying this, and also for enabling the reuse of existing location information expressions, there could be a registration authority for registering location information expressions and their transformation rules for public use [2]. There could also be public transformation services providing transformations between location expressions.

3.2.2 Common Structure and Encoding

A common structure and encoding of the different location information expressions will simplify the processing and enable the use of the same processing tools. Principally, a common structure and encoding can be seen as a common envelope for different location information expressions. The location data set in the common envelope could principally even be encoded in different ways. However, this would complicate the processing of the set, since specific processing tools are then required to process the contents. In [2], we have made initial considerations regarding such an envelope encoded in XML.

In order to be able to process the location information expression, it can be valuable to include parameters for describing the data. This includes information such as the location expression identifier/name, the owner of the expression, the version, the content type (e.g. XML, binary), and the encoding (e.g. UTF-8, base64, etc.). These parameters could be part of a header in the location information expression, or possibly external metadata identified by the identifier/name of the expression. A common envelope in XML with parameters in the root XML element (header) was partially implemented in the Common Syntax and Coding for Descriptive Location [5].

3.3 A Common Location Payload for Location Information Expressions

It might not be possible for all applications to restrict themselves to the use of only one location information expression. Sometimes the application might want to use elements from several location information expressions, or express the location in several ways with the help of different location information expressions. The Spatial Location Payload was designed to enable this [4]. It is principally a common container for several location information expressions, and it is encoded in XML.
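The common-envelope idea, a header of describing parameters wrapped around an arbitrary location expression, can be sketched as follows. Element names and the example identifier are hypothetical, not taken from the drafts; base64 is used for the body so that even binary expressions (e.g. NMEA sentences treated as raw bytes) can be carried inside XML.

```python
import base64
import xml.etree.ElementTree as ET

# Illustrative sketch of a common envelope: a header carrying the describing
# parameters (expression identifier/name, owner, version, content type,
# encoding) plus the location expression itself as the body.
def wrap_in_envelope(expr_id, owner, version, content_type, body_bytes):
    env = ET.Element("locationEnvelope")
    header = ET.SubElement(env, "header")
    ET.SubElement(header, "expressionId").text = expr_id
    ET.SubElement(header, "owner").text = owner
    ET.SubElement(header, "version").text = version
    ET.SubElement(header, "contentType").text = content_type
    ET.SubElement(header, "encoding").text = "base64"
    # base64 lets binary payloads travel safely inside an XML document
    body = ET.SubElement(env, "body")
    body.text = base64.b64encode(body_bytes).decode("ascii")
    return ET.tostring(env, encoding="unicode")

nmea_sentence = b"$GPGGA,091000,6010.000,N,02456.000,E,1,08,0.9,12.0,M,,,,"
envelope = wrap_in_envelope("urn:example:nmea-0183", "example.org", "1.0",
                            "text/plain", nmea_sentence)
print(envelope)
```

A receiver would first parse only the header to learn which expression and encoding it is dealing with, then hand the decoded body to the matching expression-specific processor, which is the processing simplification the section argues for.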


4. Conclusions and Future Work

We think that it is a waste of resources that each and every location information application or application domain needs to create its own way of expressing location information. We need ways of enabling reuse, and common ways of expressing and processing location information. For this we have proposed a common location data set that can be used between applications to enable interoperability, or be used as a basis for other location information expressions. We have also considered methods for expressing different location information expressions in a common way, as well as proposed a common payload for encapsulating several location information expressions. We have used XML for encoding the different proposals, because XML enables the use of standard processing tools, is human readable, and many existing location expressions use XML. Further on, XML also enables extendibility and reuse of different location information expressions.

We will continue the work on location information expressions and improve the published Internet drafts based on feedback from other location information activities. We think that the concepts and ideas that we have developed for expressing location information in an interoperable way in the Internet can also be used in the area of ubiquitous computing.

5. References

[1] SLoP activity (2000). Spatial Location BOF (spatial) of the IETF. http://www-nrc.nokia.com/ietf-spatial/
[2] Korkea-aho, M. (2001). Location Information in the Internet. Licentiate's thesis, Helsinki University of Technology, to be approved 22.10.2001.
[3] Korkea-aho, M., Tang, H., Racz, D., Polk, J., and Takahashi, K. (2001). A Common Spatial Location Data Set. Internet draft, Internet Engineering Task Force, work in progress, May 2001. http://www.ietf.org/internet-drafts/draft-korkea-aho-spatial-dataset-01.txt
[4] Korkea-aho, M., and Tang, H. (2001). Spatial Location Payload. Internet draft, Internet Engineering Task Force, work in progress, May 2001. http://www.ietf.org/internet-drafts/draft-korkea-aho-spatial-location-payload-00.txt
[5] Tang, H., and Korkea-aho, M. (2001). Common Syntax and Coding for Descriptive Location. Internet draft, Internet Engineering Task Force, work in progress, May 2001. http://www.ietf.org/internet-drafts/draft-tang-spatial-descriptive-location-00.txt
[6] Fallside, D.C. (ed.) (2001). XML Schema Part 0: Primer. W3C Recommendation, 2 May 2001. http://www.w3.org/TR/xmlschema-0/


Tradeoffs in Location Support Systems: The Case for Quality-Expressive Location Models for Applications

Nirupama Bulusu¹, Deborah Estrin¹, John Heidemann²

¹ Laboratory for Embedded Collaborative Systems, University of California, Los Angeles
² USC/Information Sciences Institute

Abstract. Location support systems typically trade off positional accuracy, certainty, and latency in providing location information for ease of configuration, lower hardware costs, energy-efficiency, scalability or preserving user privacy. In this paper, we explore these tradeoffs by discussing the design space of location support systems and their impact on applications. Applications informed of these tradeoffs can adapt to them and substantially improve their performance. Location models and abstractions should therefore be quality-expressive, i.e., provide an explicit representation of parameters such as accuracy, timeliness, and energy costs of location information.
[#XZcpÝbSliPLNcgkcgM2iPM:S‹åg`bX¼LˆieLZkR`bMeSgMPëiecSdÝÝ`b[e[#ÝbSgÝ:fR`†m mRS5ieSHcgM#iPc[P`bSgMeÝ:f àÍcgM¯LNkpàÍcgMeTUSlieLZcdkSga?cdOpiÂ[xQ]`bÝLˆì]Ý,Ûg`cdÛgM:S5QRfYLNÝbS5X¼S5Me`bSg[bâdíHOR`bMPLN`b[Sga?cdOpiÂS[x`bMPå¼LÁÝ`,áLˆiefRLNk SÛdLZåd`k-M:SgmRLZOY[cgà’SB[xQ]`bÝLˆìY`†m-XNcpÝS5iPLNcgkÇÝcgORXÁmÇa?`UmpLNMP`†Ý‚ie`bmiecSXNLZT=LZiP`†mÇk¼ORTDa?`bMc5à [x`bMPåd`M:[WLNk!iPfR`Me`ÛdLZcdk!áLZiPfRcdOpiîYc¼cpmpLNkRÛiPfR``bkdieLZMe`kR`ijá’cgMeÞ;â h¯XN`ålSlieLZkRÛÇXZcpÝbSliPLNcgkiPcÇiPfR`BMeSgkRÞïc5àHS-ìYM:[jiPðÝXNSd[P[D[xëp[xiP`bTñcgapòj`†Ý‚iD`bkYS5aYXZ`†[DTUSgkƒë kR`áóÞ¼LZkYmY[c5à'[P`MeåƒLÁÝ`d^‹àÍMPcdTäieMeSgkY[xQ?S5Me`kƒi#Q]cgLNkƒieÝbSg[xiPLNkRÛ&æÍkR`bá [xQ?S5Q]`M¯mp`bXZLNåg`bMPë iPccgkY`gê [ mpc¼cgM:[jie`Q'^†áfR`Me`åd`M'LZiTLNÛgfƒia?`‹ç·iec)`bT`bMPÛd`kYÝë[P`Meå¼LNÝ`b[Wæ©mpLÁ[jieMP`†[P[#ÝSgXZXÁ[iPfYS5i2LÁmp`kƒiPLZàÍë iPfR`ÝbS5XNXZ`bMbê [XNc¼ÝbSlieLZcdk?ç‚âV’ëÇMP`båg`bSgXZLNkRÛ0S0kRcpmp`&cdMOY[x`bMbê [`èpQ]`bÝiP`bmïXNcpÝS5iPLNcgkôSli SBàÍOpð iPORMe` ieLZT=`g^ƒLZiWá’cgORXÁm=a?`HQ?cƒ[P[PLNaRXZ` iPc cgß]`bMÂQRMe`b[xieSgÛg`†mmR`XNLZåd`Meë c5à[PQ]`bÝLZì?Ý)LZkpàÍcdMPTUS5iPLNcgk áLˆiefRcgOpiMP`†ÜƒORLZMeLNkRÛ0iPfR`kRcpmp`&iecõMP`†ÜƒOR`b[xi9`èpQRXNLNÝLˆieXZëiefR`&LNkpàÍcgMeTUSliPLNcgkôOYQ?cdkS5MeMPLNålS5X”â ö OYÝ:f0QRMe`àÍ`i:Ý:fRLNkRÛUÝS5kBÛgMe`bS5iPXNëUMP`†mpOYÝ`HiefR`OY[P`M†ê [,Q]`M:Ý`bQpiPLNcgk0cgà2XÁSliP`bkYÝëdâ ÷RORkYmYS5T=`kƒieSgXZXNëg^R[POYÝ:f0Ýcdkdie`è¼ixð{S‹á,S5Me`)S5kYmBXNcpÝSlieLZcdkpð{mp`Q]`kYmR`kƒi S5QRQRXNLÁÝSlieLZcdkY[’OY[P` S!XNcpÝS5iPLNcgk-T=cpmp`bXiPc0TUS5QÇiPfR`DøYù¼ú‹û‚ü©ý:þlÿÂÿ bý:þ ”ü c5à,S!O?[x`bMcdMkRcpmp`ULZkï[PcgT=`UÝc¼cgMPð mpLZk?SliP`[Pëp[jie`T QRMeclå¼LNmp`†mUa¼ë=iPfR`XNcpÝSlieLZcdk![PORQRQ]cgMPi,[Pëp[jie`T/iecS!ÿ 5ü©ý:þlÿÿ bý:þ Ÿü -ü 

 



Workshop Location Modeling 7



  

 

QRMPc‹èpLNT=Lˆijë=iecUcgkR`cgMT=cdMP`kR`†S5MeaƒëUcdapòj`bÝie[,cdM [x`bMPå¼LÁÝ`b[bâ ORMÂMe`b[P`bSgMeÝ:fà©S5XNXÁ[ÂORk?mp`MWiPfR`Me`bS5XNT c5à'áLNMP`bXZ`†[P[’[x`bkY[PcgM¯kR`ijáWcdMPÞp[ ”âÇ`Hå¼LZ`bá S XZcpÝS5iPLNcgk&[PORQRQ]cgMPi’[Pëp[jie`T Sg[WS9áLNMP`bXZ`†[P[¯[P`k?[xcdMÂkR`ijáWcdMPÞc5à'[P`åg`bMeSgXYkRcpmp`†[^d`bSdÝ:fUáLˆief mpLZåd`M:[x`[P`kY[PcgM:[^ƒiefYSli,iPM:SgmR`b[’c5ß-SdÝÝOYMeSdÝëg^ƒieLZT=`XNLNkR`b[e[,LZk0QRMeclå¼LNmRLZkRÛDXZcpÝbSliPLNcgkBáLZiPf0S QRf¼ë¼[PLÁÝS5X2T=`bSg[PORMe`T=`kƒi cgàiPfR`[xëp[xiP`T0â]VW`†ÝSgOY[x`DS5QRQRXNLÁÝSlieLZcdkMP`†ÜƒORLZMe`T=`bkdi:[,iP`bkYm0iec a?`Uà©SgLZMeXNëÇmpLNåg`bMe[P`g^;iefR`Me`&LÁ [ xkRccdkR`&[P L b`=ìRie[DS5XN X SgQRQRMecdSgÝ:f-iPcõQYMPclå¼LÁmpLZkYÛBXNcpÝS5iPLNcgk [xORQYQ?cdMxi†â?K `båg`bMxiefR`XN`b[e[^]S&[PLZkRÛdXZ` XZcpÝS5iPLNcgkõ[Pëp[jie`TILN[Hc5à ie`kõOY[P`bmBiec[PORQRQ]cgMPi[P`åg`bMeSgX S5QRQRXNLÁÝSlieLZcdkY[bâ ,fR`Me`àÍcdMP`d^5S5QRQRXNLÁÝSlieLZcdkY[[PfRcgORXÁma?`HS‹á’SgMP`Wc5à]iPfR`†[x`,ieMeSdmp`cgß;[¯SgkYm iPORkR` iPfR`bLZM Q]`MPàÍcgMeTUS5kYÝ`SdÝÝcgM:mpLZkYÛgXNëgâ {kiPfRLÁ[QYS5Q]`M†^]áW`mpLN[eÝO?[P[)iPfR`mp`b[PLNÛgk-[PQYSgÝ` cgà¯XZcpÝS5iPLNcgk-[PORQRQ]cgMPi[Pëp[jie`TU[b^]SgkYm QRMPclå¼LÁmp`![PcgT=`0ÝSd[x`0[jieOYmpLN`b[c5àHXZcpÝS5iPLNcgkó[xORQYQ?cdMxiU[xëp[xiP`bT=[DiecïSgMPÛdOR`&àÍcdM S Yþlÿ‡ü ”ú  ø  ‚û:û‚ü  XNc¼ÝbSlieLZcdk!T=cpmp`X”^RcdkR`iPfYS5i `kYÝbS5QY[PORXÁSliP`†[’kRc5iòjOY[xiiPfY`ÛgM:S5k¼ORXÁS5MeLˆijë=c5à#iPfY` XZcpÝS5iPLNcgk0LNkpàÍcgMeTUSliPLNcgk0aRORiHS5XÁ[xc=cgiPfR`bM QYS5M:S5T=`iP`M:[,LNkYÝXNOYmpLNkRÛ=iPLNT=`XNLZkY`b[e[)S5k?m!`bkR`MeÛgë Ýcd[xie[bâ ,fRLÁ[QYSgQ?`bMLN[,kRcgi LZkƒie`kYmp`†m!iPc=a?`9S=ÝcgT=QRMe`fR`bkY[xLNåg`[PORMeåg`ëUcgà2XNcpÝSlieLZcdkB[PORQpð Q?cdMxi,[xëp[xiP`TU[ÂcgMÂ`båg`bkUc5à·iPfY` iPM:SgmR`c5ß·[¯LNk¼ågcdXZåd`bm·âƒo clá’`åg`bMb^gáW` fYcgQ]` iPf?Sli’SgkUORkYmp`bMxð [ji:S5kYmpLNkRÛ!c5à¯iefR`b[P`DLÁ[e[xOR`†[)áLNXNX#`bkRMeLNÝ:fÇa?cgiPfõiefR`=mp`b[PLZÛdkõc5àWXZcpÝS5iPLNcgkõT=cpmp`bXN[S5k?miPfY` Ûg`kY`M:S5XN L †S5aRLNXZLZijëõS5k?mÇQ]`MPàÍcgMeTUS5kYÝ`=c5à’iPfR`SgQRQRXNLNÝbSlieLZcdkY[Lˆi 
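One way to picture the quality-expressive location model this paper argues for is a location reading that carries its quality parameters alongside the position, so an application can check a reading against its own requirements and adapt. The following is a hypothetical sketch; all class and field names are illustrative, not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical quality-expressive location reading: a position plus an
# explicit representation of accuracy, timeliness and energy cost, as the
# quality-expressive model calls for. Field names are illustrative.
@dataclass
class QualityExpressiveReading:
    x_m: float          # position in some coordinate system, metres
    y_m: float
    accuracy_m: float   # estimated positional error, metres
    age_s: float        # timeliness: seconds since the measurement was taken
    energy_mj: float    # energy spent obtaining the reading, millijoules

    def usable_for(self, required_accuracy_m, max_age_s):
        """Let an application decide whether this reading meets its needs."""
        return self.accuracy_m <= required_accuracy_m and self.age_s <= max_age_s

reading = QualityExpressiveReading(12.0, 4.5, accuracy_m=3.0,
                                   age_s=1.5, energy_mj=0.2)
print(reading.usable_for(required_accuracy_m=5.0, max_age_s=2.0))  # prints: True
```

An application with tighter requirements (say, sub-metre accuracy) would reject the same reading and could, for example, request a fresh, more expensive measurement, which is exactly the kind of informed adaptation the tradeoff discussion motivates.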
ÝcgORXÁmï[xOYQRQ?cdMxi†â {kô[xOYTDð T=SgMPëd^á’`a]`XNLZ`båg`UiefYSliUS5k¼ëÛg`bkR`M:S5X¯XNcpÝS5iPLNcgkT=cpmp`X,[PfRcgORXÁmô[eSliPLÁ[xàÍëÇiefR`&àÍcdXZXNcláLNkRÛ MP`†ÜdOYLZMe`T=`kƒi:[â  V’`9S5QRQRXNLÁÝS5aYXZ`iPcUSáLÁmp`MeSgkRÛg`c5àXNc¼ÝbSlieLZcdkpð[PORQRQ]cgMPi[xëp[xiP`TU[bâ  h¯kYÝbS5QY[PORXÁSlie`SÛg`bkR`M:S5XkRcgieSlieLZcdkAáfRLÁÝ:f à©SgÝLNXNLˆi:SliP`†[ÝcdTTDORkYÝLNS5iPLNcgkAa]`ijá’``bk OY[P`M:[,c5à#mRLˆß;`Me`kƒiijë¼Q?`†[c5à#XZcpÝbSliPLNcgkRð[PORQRQ]cgMPi [xëp[xiP`TU[bâ  Ú¯Meclå¼LNmp`’Smp`ìYkYLˆieLZcdkcgà?Q]`MPàÍcgMeTUS5kYÝ`’SgkYm QYMeSd݂iPLÁÝSgX¼T`iPfRcpmR[cgà?T=`bSd[xOYMPLNkRÛHiPfYS5i Q?`bMxàÍcdMPTUSgkYÝ`dâ  hÂ[xieSgaRXZLÁ[PfBSgk!ORQYQ?`bM XZLNT=Lˆiiec=iPfR`iefR`cdMP`iPLÁÝSgX·Q?`bMxàÍcdMPTUS5k?Ý`c5àS[Pëp[jie`T0â  Ú¯Meclå¼LNmp`=S0á,S‹ëc5à[xQ]`bÝLˆàÍë¼LNkRÛXZcpÝbSliPLNcgkRð[PORQRQ]cgMPi[Pëp[jie`TU[áfRLÁÝ:fïS5XNXZclá [ÛdMP`†SliP`bM îY`èpLNaRLNXZLZijë&iecDiefR`9mp`†[xLNÛgkR`bMa¼ë&kRc5i QYMP`ðZòjOYmRÛgLNkRÛ ie`bÝ:fRkYLNÝbS5X'LÁ[P[POR`†[â 







2 Design Tradeoffs

The design space of location support systems is inherently huge, as it involves several complex factors including the type of operational environment (indoor or outdoor), number of devices, nature of applications, device hardware and networking heterogeneity, and cost requirements and capabilities. In this section, we discuss some of these design tradeoffs.

2.1 Accuracy

From an application perspective, positional accuracy is often the most fundamental concern. Application support depends on whether its required positional accuracy can be provided by the location support system. For instance, even within the context of wireless sensor networks, geographic ad hoc routing requires location accuracy to be on a scale with range, whereas collaborative signal processing applications may require precise position information. Providing precise location information requires dedicated hardware, higher power, extensive pre-configuration or centralized approaches. Therefore, a location system may be engineered to support only a certain desired location granularity. Applications with lower granularity requirements can further improve their performance by leveraging this finer-grained location information. For instance, with fine-grained position information the energy efficiency of a geographic routing algorithm could be further improved by adjusting the transmission power to only reach the intended next hop of a message and no further.

The location model should therefore incorporate not just the position but positional accuracy. Accuracy itself has two metrics: (1) grain size, which gives the mean position error (the distance between measured and actual position), and (2) precision, how often the estimated position error lies within the resolution (mean position error).
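These two metrics can be made concrete with a short sketch; the function names and sample fixes below are illustrative, not from the paper:

```python
import math

def grain_size(estimates, truths):
    """Mean position error: average distance between estimated
    and actual positions."""
    errors = [math.dist(e, t) for e, t in zip(estimates, truths)]
    return sum(errors) / len(errors)

def precision(estimates, truths, resolution):
    """Fraction of estimates whose error falls within the given
    resolution."""
    errors = [math.dist(e, t) for e, t in zip(estimates, truths)]
    return sum(1 for err in errors if err <= resolution) / len(errors)

# Invented 2-D fixes: (estimated, actual) pairs in metres.
est = [(1.0, 1.0), (2.0, 2.5), (3.0, 3.0)]
act = [(1.0, 1.2), (2.0, 2.0), (3.4, 3.0)]
g = grain_size(est, act)    # mean error, here (0.2 + 0.5 + 0.4) / 3
p = precision(est, act, g)  # share of fixes within one grain size
```

A quality-expressive model would report both numbers, since two systems with the same mean error can differ sharply in how often a single fix is trustworthy.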





  

 

2.2 Sensors

Location information is usually obtained using diverse sensors: passive or active infra-red, acoustic, radio-frequency and image sensors. Each of these sensing modalities affects the uncertainty in location in a different way. For instance, range or amplitude sensors have high uncertainty along the axis perpendicular to the direction of motion of the observed object, whereas bearing sensors have lower uncertainty along this axis. Combined acoustic and radio ranging is affected by non-line-of-sight conditions, but provides highly accurate line-of-sight ranging. The location model could therefore quantify uncertainty along each axis, or provide a probability distribution function (PDF) for location, if a compact representation exists.

2.3 Energy

For several applications, nodes and beacons often need to be small and untethered, imposing substantial energy constraints. Energy constraints heavily influence system design. The consequent low-power design often comes at a price of functionality. At the hardware level, low-power design may include using fewer sensor modalities per node (for instance, purely RF-based localization systems). At the system level, these energy considerations could also influence beacons to turn themselves off or operate at a lower duty cycle. Or special sensors such as tilt sensors may be employed to sense movement, to restrict location computation to only when object movement is detected. The location model should therefore include information such as the frequency of location updates or the energy expended per location update as a function of the resolution required.
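The kind of energy bookkeeping a location model could expose, update frequency against energy per update, can be illustrated with a toy sketch; all figures and names here are invented for illustration:

```python
def battery_lifetime_hours(battery_j, updates_per_hour,
                           energy_per_update_j, sleep_power_w):
    """Hours until a battery of battery_j joules is exhausted, given a
    fixed energy cost per location update plus a constant sleep draw."""
    drain_per_hour_j = (updates_per_hour * energy_per_update_j
                        + sleep_power_w * 3600.0)
    return battery_j / drain_per_hour_j

# Invented figures: 10 kJ battery, 0.5 J per ranging update, 50 uW sleep.
fast = battery_lifetime_hours(10_000, 60, 0.5, 50e-6)  # one fix a minute
slow = battery_lifetime_hours(10_000, 4, 0.5, 50e-6)   # one fix per 15 min
# Lowering the update rate stretches lifetime from roughly two weeks
# to roughly six months in this toy model.
```

An application that knows these costs can ask for the coarsest update rate that still meets its accuracy and latency needs.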















2.4 Orientation

The ability to determine the orientation of a device can greatly enrich context-aware and location-dependent mobile computing. Knowledge of the orientation of a mobile device enhances various applications, including efficient way-finding and navigation, directional service discovery and "augmented-reality" displays. The location model should also expose the orientation and heading information when available. However, computation of orientation itself may require fine-grained position information.

    

2.5 Global Coordinate Systems

It is extremely challenging to engineer a location system that provides fine-grained position information in a global scope. Applications often need to integrate location information obtained from multiple coordinate systems. This requires a framework for representation of location information from various sources, such as satellite navigation systems, wireless positioning technologies, beacons, indoor navigation systems, human input etc. These sources all operate at various degrees of accuracy and often suffer from independent errors. Their output, in terms of the location, can be represented more generally as a probability density distribution (PDF) of the location over a two- or three-dimensional space, typically Cartesian or other coordinates. Combining two or more such PDFs yields a more accurate PDF of the location and improves navigation under difficult circumstances such as indoors or in fading environments. The location model should provide a way of representing the coordinate system or frame in which the location is expressed, and a transformation function to transform that location to another coordinate system, if necessary.
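In the special case where each source reports a roughly Gaussian estimate, combining two independent location PDFs reduces to inverse-variance weighting. A one-dimensional sketch, illustrative only:

```python
def fuse_gaussians(mu1, var1, mu2, var2):
    """Product of two independent 1-D Gaussian location estimates.
    The fused variance is never larger than either input's."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

# Invented example: a coarse RF fix and a finer acoustic fix on one axis.
mu, var = fuse_gaussians(10.0, 4.0, 12.0, 1.0)
# mu -> 11.6 (pulled toward the tighter estimate), var -> 0.8
```

The same precision-weighted form generalizes to full covariance matrices in two or three dimensions, which is why exposing per-source uncertainty, not just a point estimate, pays off when estimates are combined.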



2.6 Pre-configuration and Dynamics

Location systems often rely on extensive pre-configuration to provide accurate position information. Unfortunately, this can affect their reliability, robustness and performance in dynamic environments. Some applications may desire a location-support system that provides the desired location support in terms of the resolution and latency required, despite significant dynamics such as environmental obstructions, system or node failures or even object dynamics (for example, mobile users). For instance, many positioning systems assume that the positions of beacons or reference nodes are pre-configured. An important system capability would be for beacons to cooperatively determine ranges to each other and independently form a coordinate system; however, this would require fairly sophisticated capabilities in beacons. Because the quality of location information can significantly vary with dynamics, the location model should provide a way to extract both statistical distributions and raw location information. If the location model can expose the current quality of service, applications would also be able to give feedback (implicitly or explicitly) to the location-support system, so that it can provide better quality.

3 Case Studies

The Global Positioning System (GPS) has been around for over two decades. Because GPS does not work indoors, a number of prototype indoor positioning systems have been proposed in the past few years. In this section, our case studies focus on three different location systems that employ the same ranging technique but have significant architectural differences. The ranging technique exploits the vastly different speeds of sound and radio signals, using a radio signal for time synchronization and a concurrently emitted but slower acoustic signal for calculating time-of-flight, and consequently the distance between two points.

3.1 Active Bat

Active Bat is the earliest prototype indoor ultrasonic location system. It uses passive ceiling-mounted beacons that listen to concurrent radio and ultrasonic pulses from an Active Bat device. It relies on tight synchronization amongst beacons, centralized control and careful beacon positioning to achieve very high accuracy (a few cm). The fine-grained accuracy of Active Bat enables several exciting applications. The laboratory's telephone system can divert phone calls to the receiver nearest the badge wearer. It can also send e-mails to the nearest terminal.

3.2 Cricket and the Cricket Compass

Cricket is an ultrasonic location system for pervasive computing applications akin to Active Bat, albeit with different design goals. Cricket is decentralized. Therefore, rather than the system tracking the user's location, each portable device determines its own location. The emphasis was on ease of deployment and on preserving user privacy, with more modest accuracy goals (correctly identify regions within a room). These design decisions have several performance implications besides lower accuracy. For instance, decentralized coordination amongst beacons requires that Cricket beacons transmit position advertisements at a much lower frequency, in order to avoid interference amongst several beacons vying for the same transmission slot. This affects the perceived user latency. On the other hand, the Cricket Compass, an extension of the original Cricket location support system that provides position and orientation, requires careful positioning both of fixed beacons and of the passive ultrasonic sensors that compute location, therefore sacrificing its ease of deployment. This is because determining orientation requires sub-cm positional accuracy.

3.3 Multimodal Localization

The multimodal localization system is intended to support applications such as unobtrusive habitat monitoring using wireless sensor networks. A typical task here would be passive detection and tracking of birds by a number of distributed sensors, based on collaborative signal processing of acoustic signatures. Here the goal is to provide a location support system that is not only fine-grained like the Active Bat (sub-cm accuracy), but is also ad hoc deployable, like the original Cricket. Camera imaging is used to detect and eliminate non-line-of-sight readings in acoustic ranging. This additional sensing modality (the camera) makes it possible to achieve a location system that is simultaneously ad hoc deployable, decentralized, and fine-grained.

4 Conclusions

Location support systems typically trade off positional accuracy and timeliness in providing location information against other capabilities: scalability, ease of configuration, energy efficiency, small form factor and low-cost hardware. Moreover, location support systems often support multiple applications and therefore cannot be customized to the needs of a particular application. Applications that are informed of these tradeoffs can perform better. However, such information is often provided implicitly. Location models and abstractions should be quality-expressive, i.e. provide an explicit representation for these parameters, to allow applications to be effective and portable without being tied to a particular location-support system.

Acknowledgements

The authors wish to acknowledge Lewis Girod for his insightful feedback. This research was supported by an NSF grant.

References

1. N. Bulusu, J. Heidemann and D. Estrin. GPS-less low cost outdoor localization for very small devices. IEEE Personal Communications Magazine, 7(5):28-34, October 2000.
2. D. Estrin, R. Govindan, J. Heidemann and S. Kumar. Next century challenges: Scalable coordination in sensor networks. In Proceedings of the Fifth Annual International Conference on Mobile Computing and Networks (MobiCom '99), Seattle, Washington, August 1999.
[The remaining reference entries are garbled beyond recovery in the source scan.]
[The workshop papers between this point and "An Analysis of Location Models for MOOsburg", including "Location in the Aware Home" by Thomas O'Connell, Peter Jensen, Anind Dey and Gregory Abowd, survive only as garbled fragments in this extraction and are omitted.]


An Analysis of Location Models for MOOsburg

Craig H. Ganoe, Wendy A. Schafer, Umer Farooq, John M. Carroll
Center for Human-Computer Interaction and Department of Computer Science
Virginia Tech, Blacksburg, VA 24061-0106, USA
{ganoe, wschafer, ufarooq, carroll}@cs.vt.edu

Abstract. Based on an analysis of our online virtual community, MOOsburg, we identify two challenging requirements for location models. The need to support parallel physical and virtual worlds and the need to allow for complex definitions of proximity are explored. Two scenarios involving these requirements are presented as well as a discussion of the inherent issues.

1 Introduction

Applications use a location model to provide access to information about people, objects and data. We are interested in the use of location information to support collaboration. Virtual worlds are a popular way for people to communicate. They are online environments such as groupware tools, collaborative websites, MUDs, and MOOs. Analyzing our virtual community software uncovers two interesting requirements for creating general location models. First, location models need to support an integration of physical and virtual worlds. As ubiquitous computing devices become easier to network, it becomes practical to integrate those "devices" into virtual network communities. For example, electronic whiteboards and other input devices can exist in parallel virtual and physical spaces. Similarly, information present in a virtual world can be made available in the physical world. For example, peripheral displays in the real world can show activity in a parallel virtual space. Second, location models need to support complex notions of proximity. Simple definitions of proximity only take physical distance into account. Yet, physical proximity is not the only way to determine the information available. For example, another approach is to use the context of the user’s activity to define proximity. Additionally, items such as geographical features (roads, streams, etc.) and their properties can also influence proximity. Currently, we are analyzing location models that could support these requirements for our online network community MOOsburg.

2 Background

MOOsburg is a place-based collaborative virtual community designed to closely parallel the town of Blacksburg, Virginia [1, 2]. Originally developed to be accessed


through a desktop PC, a 2-D digital map provides random-access-based navigation of the virtual community. Collaborative tools placed at locations within the virtual community provide access to location-related web pages and many other forms of shared content such as whiteboards, message boards, etc. The underlying software architecture supports both synchronous and asynchronous collaborative activities [3]. The current location model of MOOsburg is hierarchical, starting at the town level and working its way down through buildings and rooms, but allowing for other types of places/landmarks at each level of the hierarchy. We are working with handheld devices, such as wireless PDAs, which can provide a convenient, personal user interface into the virtual space. The goal of user interfaces on these devices has generally been to simplify access to information even at the expense of not providing a large feature set. By taking advantage of location and other context models, we can simplify the access to information related to the user’s needs and surroundings. Other systems have integrated physical and virtual worlds. The design of the Jupiter system [4] at Xerox PARC allowed for the convolution of real and virtual (MOO) worlds. In the Jupiter system, it would be possible to dial from real phones into phones in the virtual world and virtual bulletin boards could be seen on public displays in the real world. The system would also allow for sensors in users’ offices to show, in the virtual world, the state of things in the physical world, such as whether an office door is open or closed and whether a telephone is on or off the hook. The Internet Foyer [5] provides equivalent information between a physical foyer, a collaborative virtual environment (CVE) foyer, and a web-based foyer. People in the physical foyer see graphical representations of people in the CVE and web foyers projected on the wall.
People in the CVE and web foyers see video from the physical foyer integrated with graphical representations from the other virtual foyer. An open audio connection also exists between the CVE and physical foyers. The following section provides two scenarios that are examples of the interactions that we would like to provide in MOOsburg.
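A minimal sketch of a hierarchical place model of the kind described above (town, buildings, rooms), with a flag marking places that have a parallel virtual twin; the class and field names are ours, not MOOsburg's:

```python
class Place:
    """Node in a hierarchical location model (town -> building -> room)."""
    def __init__(self, name, parent=None, has_virtual_twin=False):
        self.name = name
        self.parent = parent
        self.children = []
        # True when a parallel virtual space mirrors this physical place
        self.has_virtual_twin = has_virtual_twin
        if parent is not None:
            parent.children.append(self)

    def path(self):
        """Containment path from the town level down to this place."""
        node, parts = self, []
        while node is not None:
            parts.append(node.name)
            node = node.parent
        return "/".join(reversed(parts))

town = Place("Blacksburg")
building = Place("McBryde Hall", parent=town)
room = Place("Conference Room", parent=building, has_virtual_twin=True)
# room.path() -> "Blacksburg/McBryde Hall/Conference Room"
```

The "separate but equal" policy discussed later would hang off the twin flag: a physical room and its virtual counterpart share a position in the hierarchy but may hold coordinated or disjoint state.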

3 Example Scenarios

Within the Center for HCI at Virginia Tech, we plan to integrate wireless collaborative meeting place technology into our conference rooms. Each electronic whiteboard will be tied into a parallel virtual whiteboard in MOOsburg with collaborative applications supporting generic whiteboard use as well as typical meeting place activities such as presentations and brainstorming. Sensors and other wireless access information will provide information to remote participants in MOOsburg about the people and activities in the physical conference room. Peripheral display(s) in the conference room will show information about the remote participants. Laptop/desktop as well as handheld applications can then provide personal access into the virtual space. While on a trip, Joe wants to attend a regularly scheduled design meeting that takes place in the center’s conference room. He connects into the conference room with his handheld through MOOsburg. His friend Sue, noticing his arrival on a peripheral


display, sends a chat message welcoming him and asking about his trip. As the meeting begins, Joe’s handheld coordinates with the applications being run on the whiteboard so that he can see the content and contribute. First the whiteboard is used for a status presentation by one of the team members. Then, the team uses the whiteboard to bring up their design from last week and proceeds to make improvements. This scenario demonstrates the need to provide access to information in multiple worlds. Information was needed in the physical meeting room as well as in the virtual place. This shows how location models need to support a relationship between parallel places. In MOOsburg, the model should use a “separate but equal” policy, as places can require coordinated information or disjoint information. For example, two disjoint meetings can occur at the same place, one in physical world and one in virtual world. The location model also needs to support finding a previous state of shared data. This information needs to be available in both the physical and virtual worlds. For example, we may want to look up information on the state of a whiteboard at the conclusion of last week’s meeting, despite multiple uses of that whiteboard since the meeting. This information might be used at the start of a physical meeting next week or be used for desktop reference by individuals throughout the week virtually via MOOsburg. MOOsburg is interested in supporting collaboration among community groups. The streams project is a cooperation between three groups: SEEDs (Seek Education Explore Discover) and the Virginia Tech Museum of Natural History, both with offices in downtown Blacksburg, and the Virginia Water Resources Research Center, located on the Virginia Tech campus. The cooperative goal of these groups is to monitor and preserve Stroubles Creek. The creek runs through downtown Blacksburg and the Virginia Tech campus, underground in many places. For monitoring activities, 200 ft. 
by 200 ft. areas are measured off along the creek. SEEDs typically works with water analysis, the museum (through the Save Our Streams project) counts invertebrates, and the water center performs visual assessment of what is going on in the environment around the stream. Bob takes a small group of middle school students from the museum to a measured downtown section of Stroubles Creek for the Save Our Streams (SOS) project. As they begin to wrap up their invertebrate count at the creek, Bob is entering the numbers on his handheld and notices that the number of Mayflies for this area is considerably lower than it was at this place last year. Determined to show the students how changes to the creek can affect these numbers, he connects into the SEEDs office via MOOsburg and checks for recent water analyses near their location. He finds data just upstream that shows chemical change over the past year and discusses with the students how these changes affect the insect population. Then Bob moves on to information from the Water Center where he steps through visual assessments, up the stream, discussing at each assessment site what environmental changes may have affected the water quality. The scenario demonstrates the need to support different notions of proximity within a location model. Bob needs information based on the SOS activity. A location model needs to use this context as well as his particular location to provide appropriate information. The offices he acquires data from are located on the other side of town and providing information from unrelated locations in between would be


excessive. Better understandings of context can simplify the information that needs to be provided to the user. The stream provides an example of a more complex geographic entity as a directed path. Paths curve and do not preserve straight-line distances. Also, a person’s location along the stream determines the upstream and downstream sections, where events upstream can influence the current location.
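The directed-path notion of proximity can be sketched by measuring distance along the stream polyline instead of straight-line distance; a site is upstream of another if it lies earlier on the path. Illustrative code only, not part of MOOsburg:

```python
import math

def along_path_positions(path):
    """Cumulative distance of each vertex along a polyline,
    ordered from the stream's source to its mouth."""
    pos = [0.0]
    for (x1, y1), (x2, y2) in zip(path, path[1:]):
        pos.append(pos[-1] + math.hypot(x2 - x1, y2 - y1))
    return pos

# A stream that bends: path distance exceeds straight-line distance.
stream = [(0.0, 0.0), (100.0, 0.0), (100.0, 100.0)]
pos = along_path_positions(stream)

def stream_distance(i, j):
    """Proximity measured along the stream, not as the crow flies."""
    return abs(pos[i] - pos[j])

def is_upstream(i, j):
    """Site i is upstream of site j if it lies earlier on the path."""
    return pos[i] < pos[j]

# stream_distance(0, 2) -> 200.0, although the straight-line
# distance between the endpoints is only about 141.4
```

The upstream predicate is what lets a query such as "recent water analyses that could affect this site" filter out nearby but downstream, and therefore irrelevant, measurements.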

4 Conclusion

Our work with MOOsburg reveals the need for location models to support two complex requirements. The integration of virtual and real spaces involves numerous issues when relating parallel spaces. Different notions of proximity can be very complex, requiring unique descriptors. Analyzing MOOsburg with respect to location models revealed these requirements. Further analysis of MOOsburg or other applications could identify additional challenging requirements.

References 1. Carroll, J.M., Rosson, M.B., Isenhour, P.L., Ganoe, C.H., Dunlap, D.R., Fogarty, J., Schafer, W.A. and Van Metre, C.A.: Designing Our Town: MOOsburg. International Journal of Human-Computer Studies. Vol 54, No. 5. (2001) 725-751 2. Carroll, J.M., Rosson, M.B., Isenhour, P.L., Van Metre, C.A., Schafer, W.A. and Ganoe, C.H.: MOOsburg: Supplementing a real community with a virtual community. In Proceedings of the Second International Network Conference: INC 2000. (2000) 307-316 3. Isenhour, P.L., Rosson, M.B. and Carroll, J.M.: Supporting Interactive Collaboration on the Web with CORK. Interacting with Computers. Issue 2 (2001) 4. Curtis, P. and Nichols, D. A.: MUDs Grow Up: Social Virtual Reality in the Real World. Proceedings of the 1994 IEEE Computer Conference (1994) 193-200 5. Benford, S., Brown, C., Reynard, G. and Greenhalgh, C.: Shared Spaces: Transportation, Artificiality, and Spaciality. Proceedings of CSCW ’96. (1996) 77-86


Advanced Location Modeling to enable sophisticated LBS Provisioning in 3G networks

Stefan Gessler1, Kai Jesse2

1 NEC Network Laboratories Europe, Adenauerplatz 6, 69115 Heidelberg, Germany, [email protected]
2 TeraSystems GmbH, Beiertheimer Allee 58, 76137 Karlsruhe, Germany, [email protected]

Abstract. Network operators have recently spotted Location Based Services (LBS) as a promising way to establish new sources of profit, besides voice and video communication, for the upcoming 3G networks (which are expected to be the main driver for real ubiquity) [1]. For the time being, LBS are based on a simple location model that maps basic geographic position information into a human-friendly description of position, which is then used as a simple filter parameter in a database lookup. As location awareness for services generally requires more extensive conditioning of the location information, the existing models restrict the range of possible LBS. To allow for a comprehensive range of sophisticated Location Based Services, an augmented form of location modeling is required. In this paper we introduce an advanced location modeling system that comprises a multi-dimensional view of the term 'location', far beyond the representation of simple geographic positions.

1 Introduction

Today the provisioning of Location Based Services follows a simple chain: (1) determining a position, (2) mapping this information onto a natural-language description of that position, and (3) performing the service itself. The mapping process in (2) is based on an underlying position interpretation model. This model is usually called a location model, although it only describes simple geographic positions. The drawbacks of such location models are:

− dedication to a specific service
− restrictions on the interpretation of geographic positions
− descriptions of only discrete, point-shaped locations
− dependence on the type of position-finding mechanism

Because of these characteristics, the set of LBS available today comprises only simple services such as personal or acquaintance positioning, contextual information on the locations of restaurants/ATMs/taxis, or direction-finding services. We believe that comprehensive support for ubiquity requires the operator to provide an LBS-enabling platform, whereas the position-finding mechanisms, and especially the services themselves, will be developed and deployed by 3rd party service suppliers [3]. The prerequisite for such an LBS-enabling platform is, therefore, openness towards different position-finding techniques and openness towards any conceivable type of service. For location modeling, this means that the restrictions of service-specific position interpretation must be overcome. To achieve the desired openness of the location model it is necessary to provide (1) extensions to the handling of geographic positions, (2) the integration of non-geographic position information and (3) the consideration of non-position-dependent information [5]. In the following sections we outline the requirements for designing an open, universal location model comprising the introduced steps of a widened interpretation of the term 'location'.

2 Considerations on advanced Location Modeling

As noted in the introduction, a location model is a prerequisite for the interpretation of a geographic position as a location. Today the identified location still describes a geographic position, and the description itself only allows for the reference to a discrete, uniform location identifier. Next we introduce a more convenient view on location, which first allows for more sophisticated descriptions of geographic positions and also covers non-geographic and non-position-related location information. We denote this enriched interpretation of location as location+.

2.1 'MapIt' - Location as Expressive Position Description

The transformation of position data into a location description is part of every LBS system. Position information derived by position-finding mechanisms is based on either geometric models (e.g., UTM, MGRS, GEOREF, Longitude/Latitude; see [4]) or symbolic models (e.g., Cell-Id). This position information is mapped onto locations according to a location model. We can take the word 'mapped' literally in this case, as the location model typically has a 'real map' underlying it. This map could be a street map, a tourist attraction map, a zip-code table, a floor plan, or anything else, depending on the service itself. The above-mentioned Longitude/Latitude data can be mapped onto a city descriptor ('New York'), a place ('Times Square'), a ZIP code ('10036'), a bus stop, or something similar. Simple mapping is state of the art today. The new requirements on a location+ model comprise the ability to combine different mapping rounds to get a bundle of different location descriptions out of one position. Furthermore, an option to seamlessly add new 'real maps', which enables the platform to perform any-to-any transformations, must be kept in mind.
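The idea of combining several mapping rounds over pluggable 'real maps' can be sketched as follows. This is a minimal illustration, not part of the paper's system: the bounding boxes and layer names are invented for the example.

```python
# Sketch of a location+ "MapIt" step: one geometric position is pushed
# through several pluggable "real maps", yielding a bundle of symbolic
# location descriptions.  All map data below is invented for illustration.

def make_box_layer(name, boxes):
    """A 'real map' as a set of named lat/long bounding boxes."""
    def lookup(lat, lon):
        for label, (s, w, n, e) in boxes.items():
            if s <= lat <= n and w <= lon <= e:
                return label
        return None
    return name, lookup

LAYERS = [
    make_box_layer("city",  {"New York": (40.5, -74.3, 40.9, -73.7)}),
    make_box_layer("place", {"Times Square": (40.755, -73.988, 40.760, -73.984)}),
    make_box_layer("zip",   {"10036": (40.752, -73.995, 40.763, -73.980)}),
]

def map_position(lat, lon):
    """Return the bundle of descriptions produced by every registered map."""
    return {name: fn(lat, lon) for name, fn in LAYERS if fn(lat, lon)}

bundle = map_position(40.7580, -73.9855)  # a point in Times Square
```

Adding a new 'real map' is then just a matter of appending another layer to the list, which is the kind of seamless extensibility the any-to-any requirement asks for.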


2.2 'SpreadIt' - Domain of Validity

At present the position information processed by Location Based Service systems is typically discrete and point-shaped. This is obvious in the case of real geographic coordinates like GPS, but it is also true for systems that compute expressive position data: the location 'Times Square, NY' can be regarded as just as point-shaped as the corresponding GPS coordinates. Very often this simple, point-shaped location does not meet the requirements of location-aware services, and many sophisticated services demand additional position information. In our analysis we identified extension, shape and orientation as indispensable for a significant number of services. This information describes what we call the 'Domain of Validity'. The relevance of a domain of validity is described in detail in [5].

1. Extension of Validity: For the Times Square example, a theatre program service may not only list the musicals played in the Times Square area but also present the off-Broadway theatres playing nearby. The relationship 'NEARBY' would here be estimated as an area of 5 blocks (500 meters) around Times Square. In contrast, late in the evening, when people leave the theatres, a bus stop service should not present every bus stop within a range of 500 meters around Times Square; here a maximum of 1 block (100 m) would be more appropriate. This simple example reveals that although the location is the same and the same LBS relation 'nearby' is used, the services need different interpretations of the location in terms of an extension of the discrete, point-shaped description. Note: the measure of distance used to determine the extension is not necessarily based on units of length; units of time can be much more appropriate. The location model has to be open enough to support this option as well.

2. Shape of Validity Domain: To this point we have only dealt with point-shaped locations; by integrating information on the extension of validity we obtain 'circle-shaped' location areas. As several LBS demand a domain of validity that is different from a circle, we must add the shape of the validity domain to our location model as well. One example for the usage of the shape information is that of a traffic jam service: drivers on highway No. 1 do not want to get information about traffic jams in the adjacent towns. The validity domain here would have approximately the shape of the highway.

3. Orientation: The traffic jam service example shows us the third type of location data to be added to the model: the orientation. No driver is interested in jams behind him, but of course in jams for at least 100 miles ahead. The domain of validity of an oncoming driver would have the same geographic position, the same expressive position, the same extension and the same shape, but will obtain a different list of jams due to the different orientation of validity.
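The extension and orientation aspects of the domain of validity can be made concrete with a small sketch. Everything here is invented for illustration: the coordinates, the crude planar distance approximation, and the field-of-view filter are not taken from the paper.

```python
import math

# Sketch of a "domain of validity": the same discrete position, widened
# by a per-service extension and filtered by orientation.  Distances use
# a crude planar approximation, fine for a toy example only.

def distance_m(a, b):
    """Very rough metres between two (lat, lon) pairs near the same spot."""
    return math.hypot(a[0] - b[0], a[1] - b[1]) * 111_000

def nearby(position, candidates, extension_m):
    """Extension of validity: which candidates fall inside the radius?"""
    return [name for name, pos in candidates
            if distance_m(position, pos) <= extension_m]

def ahead(heading_deg, position, candidates, extension_m, fov_deg=90):
    """Orientation: keep only candidates roughly in the direction of travel."""
    result = []
    for name, pos in candidates:
        if distance_m(position, pos) > extension_m:
            continue
        bearing = math.degrees(
            math.atan2(pos[1] - position[1], pos[0] - position[0])) % 360
        diff = abs((bearing - heading_deg + 180) % 360 - 180)
        if diff <= fov_deg / 2:
            result.append(name)
    return result

pos = (40.7580, -73.9855)                     # Times Square, roughly
theatres = [("Majestic", (40.7578, -73.9860)),
            ("Off-Broadway", (40.7600, -73.9880))]
# theatre service: 500 m extension; bus-stop service: 100 m extension
wide = nearby(pos, theatres, 500)
tight = nearby(pos, theatres, 100)
```

The same location and the same 'nearby' relation yield different result sets purely because each service supplies its own extension of validity.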


2.3 Non-Geographic Location

A second area of consideration for a universal location model is information on non-geographic positions. As non-geographic positions are heavily application driven, it is not possible to provide a comprehensive and complete specification for a location+ model. With a non-geographic location available, even e-mail applications could become location aware. In this scenario the method of presenting/recording e-mails changes according to the location of the user: when located in a car, the system would invoke speech synthesis/recognition tools instead of the classic text-based front-ends. Another example is that of a user located within a Virtual Private Network: she/he gains access to all company services, whereas her/his neighbor (with exactly the same geographic position) would be denied access to such information.

[Figure 1. From Position to Location+ — geographic position and non-geogr. position pass through position interpretation, supplemented by the VAI supplement, to yield Location+]

3 Location Add-Ons

In the previous chapter we presented a number of advanced location-related information items beyond the simple position, which enable an LBS-enabling system to support a broad range of location-based services. To complement the available information and to allow LBS to be performed in a comprehensive way, the location+ model is supplied with location-independent information. These add-ons comprise dynamic behavior and context information.

These location add-ons are, of course, not naturally part of a location model, nor of a location+ model. But to meet the claimed openness of the location-based service-enabling system, we may not disregard this additional information in the underlying model. The location+ model enriched by location add-ons is therefore called the location++ model.

3.1 Dynamic Behavior

Dynamic behavior covers the influence of changing environments. Here we distinguish:

1. General Time Dependencies: this class comprises information regarding, e.g., opening hours of tourist attractions (which then have to be matched with the time of day of service access, see context information).
2. Rate of Location Change: having this information available enables the platform to provide expectations on how conditions could change within a certain range of time. An example is the expected decrease of a traffic jam after the rush hour, so that it is foreseeable that a jam may have disappeared once a driver reaches the location.

3.2 Context Information

The second group of non-location-related add-ons is context information. Here we distinguish between object context, user context and general context:

1. Object context: describes the context of objects inside the location's domain of validity (examples are types of buildings, off-limit areas etc.).
2. User context: describes the user's characteristics (like being handicapped), the user's profiles (like having activated the tourist profile), and general preferences (e.g. the maximum cost of services, the preferred food etc.).
3. General context: LBS often have to consider information like the time of day, the weather and others in order to perform their service.

[Figure 2. Advanced Location Model Location++ — geographic and non-geogr. position pass through position interpretation to L+, supplemented by the VAI supplement, Dynamics and Context]
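Putting the pieces of Sections 2 and 3 together, a location++ record can be sketched as a plain data structure. Every field name and sample value below is our own invention for illustration, not part of any standard or of the paper's system.

```python
# Sketch of a location++ record: location+ (position, expressive
# descriptions, domain of validity, non-geographic position) enriched by
# the add-ons (dynamics, context).  All names and values are invented.

location_pp = {
    "position": {"lat": 40.7580, "lon": -73.9855},           # geographic
    "descriptions": {"place": "Times Square", "zip": "10036"},
    "validity": {"extension_m": 500, "shape": "circle",
                 "orientation_deg": None},                    # SpreadIt
    "non_geographic": {"network": "corporate-vpn"},           # 2.3
    "dynamics": {"opening_hours": "10:00-22:00"},             # 3.1 add-on
    "context": {                                              # 3.2 add-on
        "object": {"building_type": "theatre"},
        "user": {"profile": "tourist", "max_cost_eur": 50},
        "general": {"daytime": "evening", "weather": "rain"},
    },
}

def relevant(service_profiles, loc):
    """Toy filter: a service is relevant if the active user profile matches."""
    return loc["context"]["user"]["profile"] in service_profiles

tourist_hit = relevant({"tourist", "business"}, location_pp)
commuter_hit = relevant({"commuter"}, location_pp)
```

The point of the sketch is only that the add-ons live alongside, not inside, the positional data, so a service can consume exactly the dimensions it needs.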



4 Modeling

'The great thing about standards is that there are so many to choose from.' This saying [6] has never been truer than in the area of LBS today. For this reason, it is advisable to extend an existing approach instead of adding yet another model to the existing set of real or planned standards. A very promising basis for extending our location model is the 'Common Spatial Location Data Set' [7]. This Internet Draft in the framework of SLoP (1) [9] aims at providing a data set for spatial location information, bridging various existing data representation formats and meeting the requirements of location-aware services. Besides classic position information, the currently defined data set already comprises advanced information like time, speed, orientation and direction of the located object. The data set is specified in XML Schema, providing easy extension and enhancement. Another option [8] enables the attachment of a newly defined additional data set for a location++ model. We are currently preparing a proposal for how to integrate the concepts of our location++ modeling approach into the SLoP framework.

5 References

[1] E. McCabe, "Location-Based Services offer a Global Opportunity for New Revenue", SignalSoft Corp., http://webhome.idirect.com/~dental/3glocator/locate.htm
[2] 3rd Generation Partnership Project, "Technical Specification Group Core Network; Universal Geographical Area Description (GAD)", Release 1999, Technical Specification 3G TS 23.032 V3.1.0 (2000-03)
[3] S. Gessler, "Advanced Requirements for Location-based Services Support Platforms", VON Europe 2001, Stockholm, June 2001
[4] P. Dana, "Coordinate Systems Overview", The Geographer's Craft Project, University of Texas at Austin, 1997
[5] S. Gessler, "Location beyond position: Requirements for Location-based Services Portals", Location Based Services Summit, pulver.com, Boston (USA), May 2001
[6] A. Tanenbaum, "Computer Networks", 3rd edition, Prentice Hall, 1996
[7] M. Korkea-aho, et al., "A Common Spatial Location Dataset", IETF Draft (draft-korkea-aho-spatial-dataset-01.txt), Work in progress, May 2001
[8] M. Korkea-aho, et al., "Spatial Location Payload", IETF Draft (draft-korkea-aho-spatial-location-payload-00.txt), Work in progress, May 2001
[9] R. Mahy, "A Simple Text Format for the Spatial Location Protocol (SLoP)", Internet Draft, July 2000, Work in progress, http://search.ietf.org/internet-drafts/draft-mahy-spatial-simple-coord-00.txt

(1) The group around SLoP is about to be accepted as an IETF Working Group.


Topological World Modeling Using Semantic Spaces

Barry Brumitt & Steven Shafer
Microsoft Corporation, One Microsoft Way, Redmond, WA, 98053 USA
{barry, stevensh}@microsoft.com

Location models of the physical world play several roles in ubiquitous computing systems, including the provision of physically-scoped lookup, an abstraction layer between sensors and applications, and a shared metaphor with the user. Location models can be divided into two classes: metric and topological. Metric systems rely on distance measurements, while topological systems use relationships between abstract spaces, such as "near", "contains", etc. This paper describes a design for a topological space representation, discusses three ways in which such a model can be used, and poses five open problems in the ongoing development of semantic space representations.

Introduction

In ubiquitous computing [4], the system aims to deliver services to people in the locations of their work, commerce, home life, and other activities. One of the key requirements for ubiquitous computing is the ability to determine where people are, what physical objects and software services are available at those locations, and how people can move from place to place. Some example questions that arise might be:
• Where is Joe?
• What display screens can Joe see?
• Which printer is closest for Joe to walk to?
• Is any sensor currently tracking Joe's location?
One approach to answering such questions is to postulate the existence of a detailed floor plan of the space (building etc.), along with the location of all interesting devices in that space, and some method of dynamically tracking the locations of people in the space to a resolution of, say, one foot. With such a metric model of space, ordinary geometric calculations of distance, intersection, and path planning can be used to answer these questions. Unfortunately, in real life, such detailed and exhaustive models are generally not available, and person-tracking at such a resolution requires a forbidding amount of hardware (e.g. a visual surveillance system [1] or a high-resolution active badge system [3]). In the absence of such a metric model, it might seem that the above questions cannot be answered. However, a significantly simpler space model and tracking system actually does suffice for answering these questions. Such a system might have a list of the rooms in the building, along with the key corridors and other spaces; a list of the doorways (etc.) that connect these places; and a list of the key objects and devices located in each named place. People's locations could be determined by asking them to take a specific action to assert where they are (such as touching a thumb-scanner when they enter a meeting room), or by using a low-resolution active badge system. Such a model might be called a topological rather than a metric model, because it represents containment and connectedness of spaces but not their size and shape; and it represents the fact of object containment in a space but not any specific geometric position of the object within that space. A similar distinction between metric and semantic spaces has been suggested by Pradhan [2]. This paper presents Semantic Spaces, a characterization and database representation of spaces that is suitable to embody such a topological model of space.

Semantic Spaces Database

A semantic spaces database needs to be generally accessible to a large variety of applications. To simplify the development of such applications, minimizing the complexity of accessing the space information is a high priority. While ubiquitous computing systems are inherently distributed in nature, having a central database for obtaining location information keeps an application from needing to contact a huge set of smaller data stores, each of which might only hold information about a small portion of the model. While centralized approaches can suffer from performance and reliability problems, modern relational database systems such as Microsoft SQL Server 2000 and Oracle9i are designed for high throughput and reliability. Beyond robustness and scalability, commercial databases also provide standard methods for data access and the ability to load custom code onto the server to perform more complex operations on the data, both of which further simplify application development. For these reasons, the Semantic Spaces database is currently implemented on SQL Server, and therefore the following description of the logical schema will be given as standard relational database tables.

Spaces

A Semantic Spaces system needs some representation for people, electronic devices, software services, and other things of interest, and should provide a single, uniform naming system for all of these. Semantic Spaces are not primarily concerned with these things, but rather with the spaces that contain them. So, they will be referred to as atoms, and the name of an atom is its ID. It is sufficient to assume that the surrounding system can determine all useful properties of an atom given the ID. For example, there might be a database that lists the IDs of all atoms and tells what type of atom they are and so forth. In Semantic Spaces, the key element is a set of spaces, which have unique IDs, in a similar fashion to atoms as described above. They can be represented using a database table as follows:

  Space ID | Friendly Name | Type


This denotes a table of entries, each of which has the above fields. The Space ID will be used as the system's "name" for the space. The Friendly Name allows the system to print messages about the space that will hopefully be meaningful to a person. A "Type" field allows description of the type of space. It is valuable to indicate whether a space represents a room, a building, a work area inside a room, etc., to aid in limiting queries about spaces and atoms, and to otherwise differentiate types of spaces for query and UI purposes. However, the definition of these types remains a complex issue which will not be discussed here.

Containment

With Semantic Spaces, the goal is to reply to queries about world state, i.e. to search the model for the IDs of atoms and spaces that satisfy the conditions of a given query. One possibility would be to assume that the physical world is partitioned into unambiguous, non-overlapping pieces that exhaustively cover the area in question; then, each atom could be assigned to its piece. However, the real world is not so neat. For example, an office may be part of a floor, a wing, and a particular organizational group. Partitioning is, in effect, a way of centralizing all semantic boundaries into a single level of representation without any representation of aggregation or multi-level abstraction. Partitioning seems to be too heavy-handed and inflexible a tool to create a satisfying space representation. What is needed, though, is the concept of one space being contained wholly within another, i.e. being a subset or subspace of another. Mathematically, this can be expressed as S ⊆ T; the equivalent proposition in terms of set membership is ∀p (p ∈ S) ⇒ (p ∈ T). In other words, the assertion S ⊆ T means that, if we are given the assertion p ∈ S, we can then assume p ∈ T. Assertions of the form S ⊆ T are represented in the database with the following table:

  Space ID of superset (T) | Space ID of subset (S)

Note that this relationship defines a partial ordering on spaces, and thus implies a lattice structure among them. This is similar to a tree, except that each space can have multiple parents. Thus, one can still speak of the children and descendants of a space in the same way that one would speak of these concepts in a tree, except that the descendants (as well as the ancestors!) also form a lattice rather than a tree.

Presence

In space representation, a primary concern is what is inside a space, i.e. what devices, objects and people are located within it. The assertion that an atom a is within a space S is denoted simply as a ∈ S. A subtlety here is that S is not actually a collection of atoms, but a collection of points in the physical world; thus a actually represents the location of the atom. One consequence is that in this system, it is not possible to represent an object which is partly inside a space and partly outside of it. Assertions about the presence of atoms in spaces can be represented by a table as follows:

  Space ID | Atom ID

Note that there is no assumption that an atom is a member of only a single space. Therefore, the assertion a ∈ S by itself does not imply anything about the relationship between a and any other space. If combined with the assertion S ⊆ T, of course, it is possible to infer a ∈ T, which is very important for these semantics. This is the justification for propagating queries throughout the space lattice. For example, if the query posed is "List the atoms in T", then the list must include the contents of T as well as the contents of all spaces in T's subspace lattice. Notice that this may not in fact produce the list of all atoms that are inside space T in the physical world; it merely produces the list of those that are asserted to be within T in the Semantic Space model of the world. Thus, all atoms produced are guaranteed to satisfy the query; however, there might in fact be additional interesting atoms in the world that satisfy the query but are not included in the list. Conversely, if we ask "List the spaces that contain a", then we would include every space that contains a directly, and all the superspace lattices of those spaces. One additional issue that arises is the relationship between presence and containment. Formally, atoms a and spaces S, T are different types of things, so we say a ∈ T but S ⊆ T. This means there are two separate tables: the Space Containment Table and the Atom Containment Table. However, if the Space IDs are drawn from the same set as Atom IDs, then presumably every space might also be represented as an atom of type "Space", and a single table could be used to represent containment of both atoms and subspaces.
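The three tables and the query-propagation rule just described can be sketched with SQLite standing in for SQL Server. The space and atom names below are invented for illustration, not taken from the paper.

```python
import sqlite3

# Sketch of the Space table, Space Containment Table and Atom Containment
# Table, plus the "List the atoms in T" query propagated through the
# subspace lattice with a recursive query.  All names are invented.

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE space(space_id TEXT PRIMARY KEY, friendly_name TEXT, type TEXT);
CREATE TABLE space_containment(superset TEXT, subset TEXT);  -- rows assert S in T
CREATE TABLE atom_containment(space_id TEXT, atom_id TEXT);  -- rows assert a in S
""")
db.executemany("INSERT INTO space VALUES (?,?,?)", [
    ("b112",  "Building 112",  "building"),
    ("f1",    "Floor 1",       "floor"),
    ("r3912", "Room 112/3912", "room"),
])
db.executemany("INSERT INTO space_containment VALUES (?,?)",
               [("b112", "f1"), ("f1", "r3912")])
db.executemany("INSERT INTO atom_containment VALUES (?,?)",
               [("r3912", "joe"), ("f1", "printer7")])

def atoms_in(space_id):
    """'List the atoms in T': T's contents plus its whole subspace lattice."""
    rows = db.execute("""
        WITH RECURSIVE sub(space_id) AS (
            SELECT :t
            UNION
            SELECT sc.subset
            FROM space_containment sc JOIN sub ON sc.superset = sub.space_id
        )
        SELECT DISTINCT atom_id FROM atom_containment
        WHERE space_id IN (SELECT space_id FROM sub)
        """, {"t": space_id})
    return sorted(r[0] for r in rows)
```

Because the recursive step follows containment edges downward and the `UNION` deduplicates, the query terminates even when the lattice gives a space multiple parents, which is exactly the case a plain tree traversal would mishandle.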

Using Semantic Spaces

Presence of People

Because the locations of people are so important in ubiquitous computing, it is worth considering where the assertion might come from that a person is located within a particular space. Here are some possible origins for the assertion that Joe is in a particular room:

• Joe logs onto a computer that is known to be in this room.
• Joe uses an application on his Pocket PC to assert he is in this room.
• Joe speaks a recognition phrase to microphones known to be in this room.
• Joe touches a thumb scanner known to be in this room.
• Joe is wearing a badge which is tracked and interpreted to be in this room.
• Joe presents his face to cameras known to be in this room.
• Joe waves his Pocket PC to contact an IR beacon known to be in this room.


Through any of these technologies, the system might generate the assertion that Joe is in this room. In some cases, the system might automatically know when Joe leaves the room; in other cases, the system would know this only if Joe takes some action to inform it. If such an action is required, Joe might well leave the room without doing this, in which case the assertion in the model would be incorrect. One could imagine augmenting the entry in the Atom Containment Table with a time stamp to indicate when the assertion was made, and possibly even adding a notation that indicates the source of the assertion, to help "expire" old assertions. Also, if Joe enters a location that is known not to intersect with this room, then the assertion that Joe is in this room can be expunged. Presumably, similar technologies might be used to create assertions in the model about the presence of objects and computing devices in various spaces.

Geographic/Semantic Lookup

Many operations in ubiquitous computing scenarios require the ability to know the location of devices. For example, if the goal is to deliver an urgent message to a particular person, it is helpful to know if there are nearby displays, speakers, a PC, a cell phone, and so forth. Alternatively, if the task is to locate an appropriate printer, knowledge of the locations of the user and the printers can be used to produce a list of printers limited to those nearby, rather than listing all printers available on the network, a potentially huge set! Semantic spaces can be used for this sort of geometric lookup task. In the first example, the application responsible for messaging could look in the spaces which contain the person, and also in the subspaces of those spaces, to find potentially useful devices.
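The time-stamped, source-annotated assertions suggested above can be sketched as follows. The ten-minute time-to-live, the class name and the source labels are our own invented choices, not part of the paper's design.

```python
import time

# Sketch of the Atom Containment Table augmented with a time stamp and a
# source, so stale presence assertions can be expired.  The 10-minute
# TTL and all identifiers below are invented for illustration.

TTL_SECONDS = 600

class PresenceModel:
    def __init__(self):
        self.assertions = {}  # atom_id -> (space_id, source, timestamp)

    def assert_presence(self, atom_id, space_id, source):
        """Record 'atom is in space', noting who said so and when."""
        self.assertions[atom_id] = (space_id, source, time.time())

    def location_of(self, atom_id, now=None):
        """Return the asserted space, expiring assertions older than the TTL."""
        entry = self.assertions.get(atom_id)
        if entry is None:
            return None
        space_id, source, ts = entry
        if (now or time.time()) - ts > TTL_SECONDS:
            del self.assertions[atom_id]  # expire the stale assertion
            return None
        return space_id

model = PresenceModel()
model.assert_presence("joe", "room-3912", source="thumb-scanner")
```

A fresh assertion answers queries normally; once the TTL has elapsed the model simply reports that it no longer knows where Joe is, which is the honest answer when Joe may have left without informing the system.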
If the application is aware of the ontology which describes spaces, it could use this knowledge to further limit the device set by not searching outside of the current room, hallway, or other immediate container.

Space Browsing Interface

For many applications, the goal of the semantic space representation is to aid the user in the selection of a device or service based on location information interpretable by the user. This is somewhat analogous to the selection of files by navigation of a typical folder-based file explorer. The primary difficulty in designing such a space browser is that the space structure (a lattice) is much more difficult to visualize than the file structure (a tree). The following diagram shows one possible design for a space browser. [The folder icons are placeholders; in a real UI, they could be used to reflect type information.]


The left pane of the window allows browsing of the topological structure, with arrows used to expand or collapse a node's parents or children. Note, for example, that since this structure is a lattice, "Building 112" appears twice in this view. The right-hand panes show the contents and containers of the selected space. The type filter window would allow filtering of that view by the selected type(s). The "Show All" option causes the pane to show all contained entities, including those anywhere in the subspace lattice. This would make it easier to list all printers (for example) on a given floor, just by selecting the floor and filtering on the type. This type of interface could also be used to edit the space model, such as by adding new devices, spaces, relationships between spaces, and so forth. It is expected that a space model will be modified by many users, rather than being the product of a single person. One of the advantages of using containment as the primary relationship between spaces is that additions of new assertions about the world can never invalidate older ones.

Some open problems

The given semantic spaces description implies several interesting open problems. An incomplete list of such problems follows:

1. Ontology: What is the set of types that are used to describe spaces and atoms? Who controls this type definition, and how can it be extended?
2. Efficient browsing: UIs for lattice navigation are complex and hard to use. Is there a simpler interface for quick completion of typical tasks?
3. Navigation: By adding portals between spaces, and perhaps cost metrics, paths could be planned between spaces, so that the database could be used to answer questions like "How do I get to conference room 112/3912?"
4. Access: Who modifies the database? Who ensures it is correct?
5. Person Tracking: Different sensors have subtly different semantics; how can this information be successfully represented so that applications have the appropriate interpretation of the data?
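The navigation problem could, once portals between spaces are added to the model, be approached with an ordinary breadth-first search over the portal graph. This is only a sketch under that assumption; the floor plan and space names are invented.

```python
from collections import deque

# Sketch for the navigation open problem: if portals between spaces are
# recorded, shortest portal paths fall out of a breadth-first search.
# The floor plan below is invented for illustration.

PORTALS = {  # space -> spaces reachable through a doorway or portal
    "office-3901":   ["hallway-39"],
    "hallway-39":    ["office-3901", "lobby", "conf-112/3912"],
    "lobby":         ["hallway-39"],
    "conf-112/3912": ["hallway-39"],
}

def route(start, goal):
    """Fewest-portals path, e.g. 'How do I get to conference room 112/3912?'"""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in PORTALS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no portal path exists
```

Cost metrics (walking distance, stairs vs. elevator) would turn this into a weighted shortest-path problem, but the unweighted BFS already shows how the database could answer route queries.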

Conclusion

This paper introduces Semantic Spaces, a topological representation of the physical world for ubiquitous computing. The semantic spaces database has been implemented in a relational database system, using three logical tables to represent the spaces themselves, along with presence and containment in those spaces. The database can be used for storing person (and device) locations and for providing physically-scoped lookup. A sketch of a UI to help the user construct and navigate such spaces was also presented, though providing simpler interfaces remains an open challenge. Finally, a set of open problems related to semantic spaces was given.

References

1. Brumitt, B.L., Meyers, B., Krumm, J., Kern, A., Shafer, S.: "EasyLiving: Technologies for Intelligent Environments", Handheld and Ubiquitous Computing, 2nd Intl. Symposium, pp. 12-27, September 2000.
2. Pradhan, S.: "Semantic Location", Personal Technologies, Springer-Verlag, Vol. 4, No. 4, pp. 213-217, 2000.
3. Ward, A. et al.: "A New Location Technique for the Active Office", IEEE Personal Communications, Vol. 4, No. 5, pp. 42-47, Oct 1997.
4. Weiser, M.: "Some computer science issues in ubiquitous computing", Communications of the ACM, 36(7):75-85, July 1993.



Location Modeling for Intentional Behavior in Spatial Partonomies

Christoph Schlieder, Thomas Vögele, Anke Werner
Technologie-Zentrum Informatik, Universität Bremen, Postfach 330440, 28334 Bremen, Germany
{cs, vogele, anke}@tzi.de
http://www.tzi.de/

Abstract. Due to technical and design-based constraints, mobile devices need special capabilities for proactive information presentation. In the simplest case, this can be the intelligent arrangement of menu items based on their relevance for a specific situation. Typically, the decision about what is relevant to the user is taken on the basis of information about the user's spatial location. We show that if the regions of the geographic space in which the user moves are structured hierarchically by partonomies, a disambiguation problem arises. To resolve the problem, not only the user's location but also his motion must be taken into account. We propose a location model that supports inferring intentional behavior in spatial partonomies from motion patterns.

1 Intentional behavior in geographic space

Location-aware services, pioneered by researchers at Xerox PARC under the vision of ubiquitous computing (see e.g. Schilit & al., 1993), exploit the idea that the intentions of a human agent can be inferred from information about his current location. This is a valid assumption in certain cases. However, intentional behavior often correlates with complex motion patterns rather than with location. A further challenge for location modeling comes into play when the user's mental representation of space is considered. From psychological research it is known that region-based representations of geographic space tend to be organized hierarchically by part-of relations (Hirtle, 1995). Thus, intentional spatial behavior seems intrinsically bound to what AI research has called spatial partonomies (e.g. Davis, 1990). For ubiquitous computing, this raises the problem of finding a suitable location model for identifying the intentions of a user who is moving in an environment structured by partonomies. This paper reports on a particular instance of this problem which we encountered in the TourServ project¹. Within the scope of this project, a service platform is being built that provides regional information and navigation support to tourists. A pilot system is under development for the Italian ski resort of Scopello near Milan. The tourists use mobile devices, such as PDAs or smart phones, to get optimal support during their

¹ Funded by the European Union, IST-1999-20414


skiing, mountaineering, or hiking activities. The tourist's actual position is obtained via GPS, and this information is used to guide proactive information presentation and navigation support. In the simplest case, this amounts to finding an arrangement of menu items based on their relevance for a given situation. Positioning technologies like GPS are able to provide sufficiently exact information for navigation purposes, but they do not resolve the problem of identifying which spatial context is relevant for the user. A position on a digital map typically corresponds not to a single region but to a hierarchy of regions. The tourist located at the ski lift in the resort of Scopello is also located in the commune of Scopello, in the valley of Varese, and in Italy. Depending on the tourist's intentions, any of these regions can become the focus of relevance for services such as information presentation or navigation.
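The one-to-many mapping from a position to a hierarchy of regions can be made concrete with a small sketch. The parent relation below hard-codes the Scopello example from the text; the dictionary and function names are our own illustration, not part of the TourServ system.

```python
# A single GPS fix corresponds not to one region but to a chain of nested
# regions in the partonomy. PART_OF maps each region to its parent region.
PART_OF = {
    "ski lift": "commune of Scopello",
    "commune of Scopello": "valley of Varese",
    "valley of Varese": "Italy",
    "Italy": None,  # root of the partonomy
}

def enclosing_regions(smallest_region):
    """All candidate focus regions for a position, most specific first."""
    chain, region = [], smallest_region
    while region is not None:
        chain.append(region)
        region = PART_OF[region]
    return chain

print(enclosing_regions("ski lift"))
```

Every region in the returned chain is a legitimate candidate for the focus of relevance; position alone cannot pick one, which is exactly the disambiguation problem discussed next.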

2

The problem of ambiguous location

We would like to illustrate the problem of ambiguous location, and to present our solutions for that problem, using a simplified application scenario. In this scenario, a tourist explores an art museum. He is assisted by a mobile device connected to the museum’s tourist information system. The basic problem lies in the fact that the tourist’s location is part of multiple, hierarchically stacked “spatio-thematic regions”. A typical museum consists of a number of buildings or wings, each of which is subdivided into several exhibition rooms (Fig. 1). Each room holds a number of exhibits. The museum itself is located in a specific district of a city, and the city is part of a country.

Fig. 1. Ambiguous location in the museum partonomy (levels: Museum, Wing, Room, Exhibit)

Depending on the spatio-thematic region we look at, different types of services are relevant: On the museum level, we are interested in global navigation services which guide the user from one wing to another whereas on the exhibit level specific information services are needed which inform about a painting and its painter (Tab. 1). A proactive mobile information system has to decide which of the spatio-thematic regions is the most relevant.


Tab. 1. From motion patterns to service layers

Motion pattern                        | Intentional behavior | Level in partonomy | Service layer
any type of motion                    | touring_museum       | museum             | global navigation
moving fast                           | traversing_wing      | wing               | local navigation
moving slowly or looking_at_exhibit   | visiting_room        | room               | general information
resting and oriented towards exhibit  | looking_at_exhibit   | exhibit            | specific information

The naïve approach of simply using the closest region would yield erroneous results in the case of a user who moves through the exhibition rooms to get from one wing of the museum to another. He would be prompted with (unwanted) information about all the exhibits on his way while actually needing help on how to navigate through the building. To solve the disambiguation problem, i.e. to decide which one of the spatio-thematic regions is relevant, we propose to analyze the tourist's motions. The detected motion patterns can be used to make assumptions about the tourist's intentional behavior, which in turn can define the relevant focus region and the services to be offered by the tourist guidance system. To solve this task we need (a) to encode typical motion patterns, (b) to represent a partonomy of the regions, and (c) to encode the two-dimensional spatial relations of the spatio-thematic regions. In this paper, we focus on (a) and (b). For (c) we adopt the solution described in Schlieder et al. (2001).

3 Location modeling with partitioned motion patterns

The encoding scheme for motion patterns has to meet several requirements. Firstly, there is the need for an adequate representation of the temporal dimension. Secondly, the encoding should be domain-independent, which implies that it should abstract from specific sensors. Thirdly, it should be sufficiently expressive to deal with spatial partonomies. In the following, we propose such an encoding scheme.

Encoding motion patterns

Formally, a motion pattern is defined as a non-empty sequence of elementary motions, each of which is a 5-tuple of spatio-temporal parameters: (position, heading, direction, distance, duration). Position and heading describe the outcome of the motion, that is, the current location of the agent and the direction it heads towards. The other parameters give information about the motion itself: direction, distance, and duration are measured with respect to the previous elementary motion (see Fig. 2). Note that the


parameters convey redundant information only if they are computed from complete and correct sensor data – an assumption that rarely holds in practice.

Fig. 2. Motion pattern and parameters of an elementary motion (position, heading, direction, distance, duration)
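The 5-tuple of an elementary motion can be sketched as a small data type. This is an illustrative encoding, not the paper's implementation; the field types are deliberately loose so that both quantitative and qualitative magnitudes can be stored.

```python
from dataclasses import dataclass
from typing import Any, Tuple

@dataclass
class ElementaryMotion:
    """One elementary motion: a 5-tuple of spatio-temporal parameters."""
    position: Any   # e.g. (x, y) in a Gauss-Krüger grid, or "inside"/"outside"
    heading: Any    # e.g. radians, or the qualitative value "any"
    direction: Any  # measured relative to the previous elementary motion
    distance: Any   # likewise relative to the previous elementary motion
    duration: Any   # e.g. seconds

# A motion pattern is a non-empty sequence of elementary motions.
MotionPattern = Tuple[ElementaryMotion, ...]

# A quantitative description (GPS-style) and a qualitative one
# (region-based sensor that only detects entering/leaving a region).
quantitative = ElementaryMotion((3550123.0, 5912345.0), 1.57, 0.2, 15.3, 3.0)
qualitative = ElementaryMotion("inside", "any", "any", "any", 3.0)
print(quantitative.distance, qualitative.position)
```

The same container holds both descriptions; only the measuring systems (the signature, introduced below) differ.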

Each parameter is specified by a magnitude and a measuring system, e.g. 15.3 m or 3 s. This way, not just the results of quantitative measurements but also those of qualitative measurements can be stated. Different systems of qualitative spatial and temporal measures have been studied in the field of Qualitative Spatial Reasoning (see Cohn, 1997 for an overview). Typically, magnitudes of qualitative measurements are elements of relational algebras that axiomatize simple computational operations such as relational composition (Ladkin & Maddux, 1994). For instance, a relational algebra defined over {north, west, south, east} allows one to express qualitative directions, whereas a relational algebra defined over {near, medium, far} describes qualitative distances. The 5-tuple of measurement systems of an elementary motion is called the motion's signature. Below are the signatures of a quantitative and a qualitative description of a motion.

Quantitative: position: [Gauss-Krüger], heading: [radian], direction: [radian], distance: [meter], duration: [second]

Qualitative: position: {inside, outside}, heading: {any}, direction: {any}, distance: {any}, duration: [second]

The quantitative description is typical for GPS-based localization as it is used, for instance, in the TourServ project. The qualitative description abstracts from all distance, direction, and heading information. It only indicates whether the agent's position after the motion falls inside or outside the region considered. This matches region-based sensors that can only detect that the agent enters or leaves a region. Obviously, the encoding scheme is flexible enough to handle both quantitative and qualitative descriptions; it therefore fulfils the requirement of sensor independence.

Encoding spatial partonomies

Motion patterns combine easily with hierarchical data structures that describe spatial partonomies. Such partonomies are the result of recursively applying the spatial part-of relation to describe the decomposition of wholes into parts, i.e. regions into subregions. In our approach we make use of the representation for spatial partonomies described by Schlieder & al. (2001) in a GIS context. Different types of spatial part-of


relations can be distinguished. To define these types, we assume that the regions are encoded as polygons, and that each polygon is a closed set of points, i.e. edges and vertices belong to the polygon. If we consider polygons P1, …, Pn that are contained in a part of the plane bounded by a polygon P, three types of arrangements of the polygons within the containing polygon P can be distinguished:
(1) polygonal covering, where P1 ∪ … ∪ Pn = P. The polygons cover the containing polygon; in general, they will overlap.
(2) polygonal patchwork, where for all i ≠ j from {1, …, n}, interior(Pi ∩ Pj) = ∅. The polygons are either disjoint or intersect only in edges and/or vertices.
(3) polygonal tessellation, which is a polygonal covering that also forms a polygonal patchwork.
We introduce the decomposition tree, a recursively defined hierarchical data structure for encoding the spatial part-of relation together with the type of arrangement of the parts. The nodes of the decomposition tree denote regions and are labelled with one of the following arrangement types: patchwork, covering, tessellation, undecomposed, other. By abstracting from the type of spatial arrangement one obtains the partonomy that underlies a decomposition; it is obtained by omitting the labels from the decomposition tree. The type of spatial arrangement determines which spatial relations may be modeled. For polygonal tessellations, it is possible to formalize metric (distance), ordinal (direction), and topological (neighborhood) spatial relations (Schlieder, 1996). These can be represented using graph-theoretical constructs such as neighborhood and connection graphs (Schlieder et al., 2001). Given valid quantitative GIS or CAD data, the automatic or semi-automatic creation of such qualitative models is straightforward.
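A decomposition tree of this kind, and the abstraction step that yields the bare partonomy, can be sketched as follows. The class and the museum example are our own illustration; the arrangement labels are the five named in the text.

```python
# Nodes are regions labelled with how their parts are arranged.
# Omitting the labels yields the underlying partonomy.
ARRANGEMENTS = {"tessellation", "covering", "patchwork", "undecomposed", "other"}

class DecompositionNode:
    def __init__(self, region, arrangement="undecomposed", parts=()):
        assert arrangement in ARRANGEMENTS
        self.region, self.arrangement, self.parts = region, arrangement, list(parts)

    def partonomy(self):
        """Abstract from the arrangement labels: (region, [part partonomies])."""
        return (self.region, [p.partonomy() for p in self.parts])

museum = DecompositionNode("museum", "tessellation", [
    DecompositionNode("west wing", "tessellation", [
        DecompositionNode("room 1"), DecompositionNode("room 2")]),
    DecompositionNode("east wing"),
])
print(museum.partonomy())
```

Labelled nodes keep the information needed to decide which spatial relations can be formalized, while `partonomy()` gives the plain part-of hierarchy that the motion-pattern machinery below operates on.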
Where tessellations are not available, it is possible to map other polygonal arrangements onto standard tessellations. For the purpose of describing motion patterns, we need to represent the way in which a partonomy (or a decomposition) divides the motion pattern into subpatterns. Each spatial region determines a subpattern: the sequence of elementary motions that occur within the region. The organization of the regions in the partonomy is inherited by the subsequences. We call this hierarchical structure a partitioned motion pattern (see Fig. 3).

Analyzing partitioned motion patterns

The primary value of partitioned motion patterns is that they provide information about the agent's intentional behavior with respect to the spatial partonomy. To extract this information, an analysis is run on the fly. Each new elementary motion triggers an evaluation cycle. The evaluation tries to associate an intentional behavior with each open region, that is, with each region in the partonomy which contains the agent's current location. First, the open region with the lowest position in the partonomy is considered. Then, the analysis proceeds to the superregions in the order they appear in the partonomy (see Fig. 3). Finding an intentional behavior for some part of the motion pattern amounts to solving a classification problem. Different algorithmic solutions for classification are available, such as neural networks or decision rules. We chose a rule-based approach because it enables the software developer to explicitly state which motion patterns are


associated with a specific intentional behavior in the application domain being modeled. During an evaluation cycle, the decision rules are applied to the parts of the motion pattern, proceeding from the most specific to the most general open region. As soon as a rule fires, a region has been found for which the motion pattern can be associated with an intentional behavior. This most specific region with an intentional behavior is then identified as the focus region, and the menu for the service associated with the focus region is presented to the user on his mobile device.

Fig. 3. Identifying intentional behavior from partitioned motion patterns
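One evaluation cycle of this rule-based approach can be sketched as follows. The rules paraphrase Tab. 1; the feature names and speed thresholds are invented for the illustration and are not taken from the TourServ system.

```python
# Decision rules: (level in partonomy, predicate on the region's subpattern
# features, intentional behavior). Thresholds are illustrative assumptions.
RULES = [
    ("exhibit", lambda f: f["speed"] == 0 and f["facing_exhibit"], "looking_at_exhibit"),
    ("room",    lambda f: f["speed"] <= 0.5, "visiting_room"),
    ("wing",    lambda f: f["speed"] > 1.5, "traversing_wing"),
    ("museum",  lambda f: True, "touring_museum"),  # any type of motion
]

def focus_region(open_regions):
    """open_regions: feature dicts for each open region, most specific first.
    Returns the first (level, behavior) for which a rule fires."""
    for region in open_regions:
        for level, fires, behavior in RULES:
            if region["level"] == level and fires(region):
                return level, behavior
    return None

# A visitor hurrying through a room: neither the exhibit nor the room rule
# fires, so the wing-level rule determines the focus region.
walker = [
    {"level": "exhibit", "speed": 2.0, "facing_exhibit": False},
    {"level": "room",    "speed": 2.0, "facing_exhibit": False},
    {"level": "wing",    "speed": 2.0, "facing_exhibit": False},
    {"level": "museum",  "speed": 2.0, "facing_exhibit": False},
]
print(focus_region(walker))
```

Because evaluation proceeds from the most specific open region upward, the hurrying visitor gets local navigation for the wing instead of unwanted exhibit information, which is the behavior the paper argues for.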

4 Related approaches and discussion

In a wide range of published work, location is the dominant context parameter used to tailor information presentation. Several guide systems have been built (e.g. Abowd & al., 1997; Davies & al., 1998; Oppermann & al., 2000) which use the user's current location and his travel history to predict objects of interest to visit. But most of these simply use the closest spatial region, or the smallest region containing a specific location, and do not consider that this location belongs to several spatial regions of a partonomy. If the user is located within the region of a specific object, as defined for example by an Active Badge (Want & al., 1992) sensor, the system would decide that this region is the most relevant one and prompt the user with detailed information about this object. We are not aware of any work dealing with the disambiguation problem connected with intentional behavior in spatial partonomies. Mental representations of motions have been studied by researchers on spatial cognition, especially Musto & al. (2000). Based on data from psychological experiments, they propose a qualitative motion representation which uses sequences of qualitative motion vectors. These can easily be expressed in our more general framework, as they encode only the direction and distance parameters of the elementary motions we defined. The central concern of Musto & al. (2000) is a cognitively plausible segmentation of a motion pattern into subpatterns. In our case, however, segmentation is not internal but external, that is, induced by the regions of the partonomy. In this paper, we have shown that a disambiguation problem arises in connection with the interpretation of the user's intentional behavior in a spatial partonomy. We have argued that observation of the user's motion can provide valuable information for inferring his intentions, and we have proposed the idea of partitioned motion patterns.


References
1. Abowd, G.D., Atkeson, C.G., Hong, J., Long, S., Kooper, R., and Pinkerton, M. (1997). Cyberguide: A Mobile Context-Aware Tour Guide. ACM Wireless Networks, Vol. 3, pp. 421-433.
2. Cohn, A. (1997). Qualitative spatial representation and reasoning techniques. In Proc. KI-97: Advances in Artificial Intelligence (pp. 1-30). Springer: Berlin.
3. Davis, E. (1990). Representations of commonsense knowledge. Morgan Kaufmann: San Mateo, CA.
4. Davies, N., Mitchell, K., Cheverst, K., and Blair, G. (1998). Developing a Context Sensitive Tourist Guide. Proceedings of the First Workshop on Human Computer Interaction with Mobile Devices, pp. 64-68, University of Glasgow, UK.
5. Hirtle, S. (1995). Representational structures for cognitive space: Trees, ordered trees and semi-lattices. In: Spatial Information Theory (COSIT-95) (pp. 327-340). Springer: Berlin.
6. Ladkin, P. & Maddux, R. (1994). On binary constraint problems. Journal of the ACM, 41, pp. 435-469.
7. Musto, A., Stein, K., Eisenkolb, A., Röfer, T., Brauer, W., & Schill, K. (2000). From motion observation to qualitative motion representation. In: C. Freksa & al. (eds.) Spatial Cognition II (pp. 115-126). Springer: Berlin.
8. Oppermann, R., and Specht, M. (2000). A Context-sensitive Nomadic Information System as an Exhibition Guide. Proceedings of the Second International Symposium on Handheld and Ubiquitous Computing, Bristol.
9. Schilit, B.N., Theimer, M.M., Welch, B.B. (1993). Proceedings of the USENIX Mobile and Location-Independent Computing Symposium, pp. 129-138, Cambridge, MA.
10. Schlieder, C., Vögele, T., & Visser, U. (2001). Qualitative spatial representation for information retrieval by spatial gazetteers. In: Spatial Information Theory (COSIT-01). Springer: Berlin.
11. Schlieder, C. (1996). Qualitative shape representation. In A. Frank (ed.), Spatial conceptual models for geographic objects with undetermined boundaries. London: Taylor & Francis.
12. Want, R., Hopper, A., Falcão, V., and Gibbons, J. (1992). The Active Badge Location System. ACM Transactions on Information Systems, Vol. 10, No. 1, pp. 91-102.


Object Location Modeling in Office Environments — First Steps Thomas Pederson Department of Computing Science, Umeå University, SE-90187 Umeå, Sweden [email protected]

Abstract. In this position paper we briefly present our application of location modeling to office environments, in the context of our general goal of designing physical-virtual knowledge work environments.

1 Introduction

In a world where humans increasingly find themselves in a state of information overload, location data can serve as valuable input to computer systems for filtering out irrelevant information based on the current physical context of the user. This is based on the assumption that there is a relationship between the user's interest and her/his physical location. Similarly, the location of objects can say a great deal about what the objects mean to the person who placed them in their particular locations, as well as how they are perceived by others. In this position paper we describe our research efforts in designing integrated physical-virtual environments where object location tracking is an important part of the underlying architecture. However, we are only beginning to understand how this location data can best be used to facilitate and support the physical activities performed by the user. We believe that object location tracking has the potential to enable services beyond information filtering. The remainder of the paper discusses the underlying location model that the Magic Touch system uses to represent physical activities in a virtual environment, and points to some open questions related to location modeling to be addressed in future work.

2 Designing Physical-Virtual Knowledge Work Environments

Definition: A knowledge worker is a person principally concerned with data, information, and knowledge as working objects, often working with these in both the physical world and the virtual (digital) world, and sometimes in the borderland between them. Common work tasks are to create, search, refine, and mediate data, information, and knowledge [2], based on Drucker [3] and Kidd [4]. Since aspects of knowledge work are present in almost all human activity, as designers of knowledge work environments we currently focus on supporting certain kinds of knowledge work, namely activities in office environments, in order to reduce the research scope.

In offices, people tend to organize their physical environment based on general parameters such as how objects relate to other objects, how often they are used, the urgency of dealing with issues connected to them, and personal interests as well as personal preferences for how to organize their workspace [6]. With Wellner's DigitalDesk [10] as a starting point, there has been continuous interest in merging the physical and virtual worlds in office environments and in more specialized settings such as [1, 5]. Although knowledge work activities often involve extensive use of the virtual environments that modern information technology provides, significant working time is spent on activities in the physical environment as well. However, knowledge work environments equipped with personal computers tend to create a significant gap between the virtual environment offered by the computer system(s) on the one hand and the surrounding physical environment on the other [7, 9].

2.1 A Physical-Virtual Design Perspective

In order to overcome this gap, a perspective for the design and analysis of integrated physical-virtual environments is currently under development, based on an analysis of differences and similarities between physical and virtual environments, such as [1, 7]. This physical-virtual design perspective emphasizes a holistic view of the design of knowledge work environments and the objects within them, in order to break loose from traditional distinctions made by designers of software, electronics hardware and architecture [7]. A core concept within this design perspective is the Physical-Virtual Artefact (PVA): a thing that consists of both a physical and a virtual representation, tightly linked to each other. Changes made to either the physical or the virtual representation of a specific PVA are assumed to immediately change the state of the other as well. Notation: PVA refers to the artefact as a whole, i.e. both instantiations; where necessary, we distinguish the physical instantiation from the virtual instantiation of a specific PVA.

2.2 Magic Touch

Physical-virtual homomorphism is ensured by a computer system, Magic Touch [8], which recognizes any alterations to PVA instantiations and consequently performs the appropriate update to the corresponding other instantiation (see Fig. 1). Fully developed (it is still under development), this system will make use of a large amount of


physical and virtual activity data to be used as input to user modeling and object (PVA) modeling algorithms. Technically, the object location tracking is performed by a combination of RF/ID, infrared and ultrasound technology, based on a small wearable unit placed on the user's hands, as described in [8].

3 Location Modeling

Almost all physical activities performed by humans involve moving things from one place to another. Sometimes it is a matter of millimeters; sometimes it is a matter of thousands of kilometers. Sometimes parts of an object are moved, while at other times collections of objects are moved all at once.

Fig. 1. Magic Touch basic architecture [8]

We have found it useful to differentiate between two kinds of object manipulation: inter- and intra-manipulation.
• Inter-manipulation stands for activities that change the relationship between a specific object and other objects.
• Intra-manipulation is manipulation of a specific object that changes that object's internal state, not necessarily affecting the relationship between the manipulated object and the others.
Currently focused on inter-manipulation, the Magic Touch system registers PVAs' new locations in a database as soon as they are moved from one location to another within the tracked environment. Thus, the system can be said to maintain a low-level location model of all PVAs. In addition, the user is given the possibility of defining three-dimensional spaces in the physical environment, "active volumes", and of giving these spaces names. The user can also assign virtual functions to the active volumes, so that activities within a specific volume trigger an application to start or a certain operation to be applied, based on the activity. At the time of writing, the only activity that can be assigned functions is that of putting a PVA into an active volume. As an example, the user could define an active volume called "inspection" that automatically displays the virtual instantiation of any PVA that is put into the volume. Fig. 2 shows a simple virtual representation of a physical office environment. The


user has defined active volumes for some furniture in the office, including bookshelves and desks. Each active volume is represented as a folder in the hierarchical tree structure. PVAs and active volumes placed within active volumes become children of the folder that represents the enclosing active volume.

Fig. 2. A physical office represented as a tree structure [9]

This model, defined by the user and maintained by the system, allows the system and the user to communicate about PVA locations and relationships between PVAs, based on the names that the user has given them. Thus, this tree structure represents a higher-level location model of the physical space compared to the coordinate-based model mentioned earlier. For example, as the result of a search operation, it is more suitable for the user to learn that the phone book is on the second bookshelf rather than at coordinates (23, 289, 119). We have also implemented a Virtual Reality-based visualization of the physical space which, however, from a modeling point of view is identical to the one represented in Fig. 2.
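The higher-level, name-based location model can be sketched as a small tree of active volumes. This is an illustrative reconstruction; the class and names below are not from the Magic Touch implementation.

```python
# User-named active volumes form a tree; a PVA's location is reported as a
# path of names rather than as raw coordinates.
class ActiveVolume:
    def __init__(self, name):
        self.name, self.children, self.pvas = name, [], []

    def add(self, child):
        self.children.append(child)
        return child

def find_pva(volume, pva, path=()):
    """Return the named path to a PVA, e.g. ('office', 'bookshelf 2'), or None."""
    path = path + (volume.name,)
    if pva in volume.pvas:
        return path
    for child in volume.children:
        hit = find_pva(child, pva, path)
        if hit:
            return hit
    return None

office = ActiveVolume("office")
shelf2 = office.add(ActiveVolume("bookshelf 2"))
shelf2.pvas.append("phone book")
print(find_pva(office, "phone book"))
```

Answering a search with the path of user-given names, rather than coordinates, is what lets the system and the user talk about locations in the user's own vocabulary.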

3.1 The Physical World as a Tree

Is it reasonable to model the physical world as a hierarchically organized collection of invisible volumes and the artefacts contained by them? For our purposes, having the goal of modeling users' ways of organizing their knowledge work environments, we believe that it is a powerful and yet simple modeling approach. A strength is that the level of model granularity, or "volumisation", (1) is controlled and configurable by the user, and (2) is allowed to be different in different parts of the tracked physical environment. Thus, physical places where high-precision/short-movement activities are of interest (e.g. a wall-hung geographical map with markers attached to it representing company offices throughout the world) can be mixed with spaces where detailed modeling is of little use (e.g. in open air, where physical bodies are not supported by a force perpendicular to gravity). This model is not a simple three-dimensional grid, because the active volumes can be of different sizes and can be nested.

3.2 Problems and Limitations

• Clashing volumes. Spaces that only partially enclose each other are hard to handle. We have found it necessary to constrain active volumes to either spatially fully enclose each other or to be completely disjoint. If this restriction is not followed, a PVA can get several representations in the same tree structure, which for most applications would probably confuse the user.
• Context dependency. The naming of active volumes is task- and perspective-dependent; certain physical locations mean different things in different contexts. For environments used by more than one person and/or for more than one purpose, this can be limiting. For our purpose we do not see any big problems, since offices, at least as regards the physical organization of objects, are mainly used by one person only.
• Model construction overhead. Defining active volumes introduces overhead, since three corners of each volume have to be pointed out and the volume should be given a name. We have tried to decrease the possible distraction from work by giving active volumes default names at the time of definition. Nevertheless, under normal working conditions we expect office workers to spend most of this time defining volumes in an initial stage, so the overhead in a longer perspective is assumed to be relatively low.
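The clashing-volumes constraint can be checked mechanically. The sketch below assumes axis-aligned boxes given as min/max corner pairs; the representation and function names are our own illustration, not part of Magic Touch.

```python
# Active volumes as axis-aligned boxes ((x1, y1, z1), (x2, y2, z2)).
# Two volumes are valid together only if one fully encloses the other
# or they are completely disjoint; partial overlap is rejected.
def encloses(a, b):
    """True if box a fully encloses box b."""
    return all(a[0][i] <= b[0][i] and b[1][i] <= a[1][i] for i in range(3))

def disjoint(a, b):
    """True if boxes a and b are separated along at least one axis."""
    return any(a[1][i] <= b[0][i] or b[1][i] <= a[0][i] for i in range(3))

def valid_pair(a, b):
    return encloses(a, b) or encloses(b, a) or disjoint(a, b)

desk = ((0, 0, 0), (2, 1, 1))
tray = ((0.2, 0.2, 0.0), (0.8, 0.8, 0.3))  # nested in desk: allowed
shelf = ((5, 0, 0), (6, 1, 2))             # disjoint from desk: allowed
clash = ((1, 0, 0), (3, 1, 1))             # partially overlaps desk: rejected
print(valid_pair(desk, tray), valid_pair(desk, shelf), valid_pair(desk, clash))
```

Enforcing `valid_pair` on every new volume definition guarantees that each PVA has exactly one position in the tree structure.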

3.3 Possibilities for Improving Interaction using Location Modeling

Gathering and interpreting information about user activities in knowledge work environments has the potential to improve those environments in many ways. A few potential location-modeling-based contributions could be:
• Information/functionality filtering, allowing for minimalistic interaction styles using small interaction devices (small screens, few buttons), possibly wearable
• Re-design of the working environment with respect to Euclidean, topological and temporal aspects to better suit the most frequent or most time/space/cognition-intense tasks
• Incitement for the creation of knowledge work tools that rationalize (compress, compile) recurring object-use sequences by providing tool functionality applicable to all objects at the same time instant
• On-demand organization suggestions, where the system proposes suitable placement of new/altered objects based on their similarity to objects already existing in the environment. This presupposes that a semantic analysis of the existing objects in the environment has been performed (relatively cheap if the objects are PVAs, since it is then enough to analyze the already digitized virtual instantiation). The user's spatial organization of PVAs can be analyzed from a similarity perspective, revealing connections between objects that would otherwise be impossible to infer, since they are based on implicit user knowledge not perceivable by the system.

4 Conclusions and Future Work

We have presented our initial attempts to model office environments based on location changes of physical objects. Extensive refinements and additions to our model are left for future work.

References
1. Arias, E., Eden, H., Fischer, G.: Enhancing Communication, Facilitating Shared Understanding, and Creating Better Artifacts by Integrating Physical and Computational Media. Designing Interactive Systems (DIS 97): Processes, Practices, Methods and Techniques Conference Proceedings. ACM Press (1997)
2. Broberg, A.: Tools for Learners as Knowledge Workers. PhD Thesis, UMINF-00.01, ISSN 0348-0542, Umeå University, Sweden (2000)
3. Drucker, P. F.: Management: Tasks, Responsibilities, Practices. New York: Harper & Row (1973)
4. Kidd, A.: The Marks are on the Knowledge Worker. In: Proceedings of Human Factors in Computing Systems (CHI'94), Boston. ACM Press (1994)
5. Mackay, W. E., Fayard, A.-L., Frobert, L., Médini, L.: Reinventing the Familiar: Exploring an Augmented Reality Design Space for Air Traffic Control. In: Proceedings of CHI'98. ACM Press (1998) 558-565
6. Malone, T. W.: How Do People Organize Their Desks? Implications for the Design of Office Information Systems. In: ACM Transactions on Office Information Systems, Vol. 1, No. 1 (1983) 99-112
7. Pederson, T.: Physical-Virtual instead of Physical or Virtual - Designing Artefacts for Future Knowledge Work Environments. In: Proceedings of the 8th Int. Conf. on Human-Computer Interaction. Lawrence Erlbaum Associates (1999) ISBN 0-8058-3392-7
8. Pederson, T.: Magic Touch: A Simple Object Location Tracking System Enabling the Development of Physical-Virtual Artefacts in Office Environments. Short paper for the Workshop on Situated Interaction in Ubiquitous Computing, ACM CHI 2000. In: Journal of Personal Technologies, issue 5/1, Feb 2001
9. Pederson, T.: Physical-Virtual Knowledge Work Environments - First Steps. In: Proceedings of the 9th Int. Conf. on Human-Computer Interaction. Lawrence Erlbaum Associates (2001)
10. Wellner, P.: Interacting with Paper on the DigitalDesk. In: Communications of the ACM 36, 7 (1993)


Intimate Location Modeling for Context Aware Computing Mark Burnett, Paul Prekop and Chris Rainsford Information Technology Division Defence Science and Technology Organisation Department of Defence, Fern Hill Park Canberra ACT 2600 AUSTRALIA {mark.burnett, paul.prekop, chris.rainsford}@defence.gov.au

Abstract. This paper describes some of the complexity that needs to be captured in any location model of an intimate environment. We take the position that a location model for such an environment must be part of a wider framework for handling context that includes knowledge of people, devices and their communication abilities. We present an ontology based on a general object model of people and devices to deal with the complexity of location modeling within an intimate environment, and show how temporal events may be represented and reasoned with using standard modeling techniques based on time intervals.

1  Introduction

Smart rooms and workspaces assist in the seamless integration of people with computers within a physical environment. Typically furnished with networked pervasive computing devices and sensors, these rooms are designed to assist people in the pursuit of everyday work goals such as finding information, collaborating with colleagues, and so on. Smart rooms are the subject of much research around the world, and a key enabler of smart rooms is the use of context awareness to link pervasive computing devices with users within their physical environment. Context can be simply defined as that which surrounds, and gives meaning to, something else [1]. Within a smart room, context is what gives the room its understanding of the user, the user's intention, the user's task, and the physical environment the user is situated within.

In most mobile, wearable and ubiquitous computing applications, knowing the user's location, either absolutely or relative to other parts of the environment, is important for the seamless integration of people with pervasive computing devices within the physical environment [2]. Our interest in location modeling for context-aware computing is within the intimate space of a smart room or smart building. Previous work has attempted to devise location models for intimate environments. The Active Badge system developed at Olivetti Research Laboratory [3] used small, active badges, worn by individuals, to locate them within a building. The tags periodically emit a unique
signal, which is picked up by a network of sensors around the building. This information is then fed into a location model and used to locate individuals within the space. Xerox PARC's ParcTab system [4] uses a similar approach based on a handheld device called a ParcTab. The ParcTab provides location information as well as access to networked devices. MIT's Hive project provides similar features for wearable devices, and combines location modeling with automatic integration of wearable devices with the computing devices present within the location [5].

While these applications go some way toward location modeling within an intimate environment, most do not capture the full complexity of the intimate environment. Also, for most of these applications, the location model is implicit and unique to a particular application, with limited utility outside the application for which it was developed. In this paper, we describe some of the complexity that must be captured in any model of a smart environment. We take the position that a location model within such an environment must be part of a wider framework for handling context that also includes information about people, devices and their communication abilities. We present an ontology based on a general object model of people and devices to deal with the complexity of location modeling in an intimate environment, and discuss how temporal events may be represented and reasoned with using modeling techniques based on time intervals.

2  Issues in Modeling a Smart Environment

Intimate locations are characterized by a very complex and densely populated terrain, which includes people, embedded computing devices and traditional computing devices, as well as furniture, whiteboards, room partitions and so on. In some cases, the intimate location may span several different rooms or, in extreme cases, several floors of a building. The goal of using a location model for an intimate environment is to determine the optimal way of making information available to a user, or of receiving input from a user. For example, given the user's current location and orientation in a room, which display device (fixed or mobile, visual or audio) with a particular capability would be suitable for outputting information? This seemingly simple question must take into account a wide variety of other factors. Can the user see the output device? Is there an office partition, or another person, in the way? Given the proximity of other people, would audio output be too disruptive?

The resolution of the location information needed within an intimate environment will need to be very fine, perhaps to the centimeter. Not only would the location need to be captured, but also the user's orientation within the room. Questions such as: What display is the user currently looking at? Which input device would offer the user the most privacy? Which devices can communicate with each other? Who are the occupants of a room? all need to be answered by the intimate environment's location model.

Within the Defence domain, as well as some commercial domains, security of information is important. If information is being displayed on a screen that can be seen by someone not authorized to see it, the screen should be intelligent enough to turn off, or somehow hide the information. This requires the location model to be able
to answer questions not only about location and orientation, but also about the relative location of classes of objects; in this case, a person with a defined viewing range and a screen within that visual range. In addition, person attributes related to security clearances may need to be stored within the model.

As well as being populated by people, the intimate environment will also include different kinds of mobile devices, wearable devices, fixed and semi-fixed interaction devices, displays, input devices, and so on. The characteristics and movement of these devices also need to be captured by the location model. Over time, intimate environments change: furniture is moved; interaction devices are added, moved or removed. The location model should be able to track these movements and keep its representations current. In the next section, we describe a location model of an intimate environment that is able to address many of these issues.
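The screen-blanking rule just described can be sketched as a simple geometric test over relative location and orientation. This is only an illustration: all names (Person, Screen, can_see, must_blank) and the cone-of-vision simplification are our own assumptions, not part of any system discussed here.

```python
import math
from dataclasses import dataclass

@dataclass
class Person:
    x: float
    y: float
    heading: float      # viewing direction, radians
    fov: float          # field of view, radians
    view_range: float   # metres
    clearance: str

@dataclass
class Screen:
    x: float
    y: float

def can_see(p: Person, s: Screen) -> bool:
    """True if the screen lies inside the person's viewing cone."""
    dx, dy = s.x - p.x, s.y - p.y
    if math.hypot(dx, dy) > p.view_range:
        return False
    # signed angular difference between the person's heading and the screen
    diff = (math.atan2(dy, dx) - p.heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= p.fov / 2

def must_blank(screen: Screen, people: list, cleared: set) -> bool:
    """Blank the screen if any person without clearance can see it."""
    return any(can_see(p, screen) and p.clearance not in cleared
               for p in people)
```

The same test, run in reverse from the screen's position, answers the related query "which occupants of the room can see this display?".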

3  Location Model for an Intimate Environment

We argue that a simple position-based location model is not enough to capture the wide range of other location-relevant information needed to provide location awareness within an intimate environment. Any location model for an intimate environment will need to include a considerable amount of other information directly related to the person's or the object's location. For example, this might include the visible range of a display device, the distance over which audio output can be heard, the range of a Bluetooth connection needed for a mobile display, and so on. Rather than a location model, what is needed is a comprehensive object model that captures properties describing the object within the context of the space in which it exists. For a smart room, these properties may include:

• Unique Identifier;
• Type Class;
• Current Location -- position within the space;
• Means of Communication -- visual, audio, Bluetooth, touch, and so on; and
• Range of Communication -- maximum effective range of the device's input or output.

3.1  An Object Model for Location and Context Awareness

The information needed for the location model of an intimate environment could be captured in a class hierarchy, with each object in the environment -- people, fixed devices, mobile devices and so on -- simply being a specialization of some generic object. This is shown in Fig 1. The attraction of this kind of structure is that additional levels of contextual information can be added as required (say, to the Person entity). Composite objects that contain other objects -- such as rooms -- are accommodated within the model as a distinct object type.
Fig 1. A simple object model for location awareness. The generic OBJECT class carries the attributes ID, TYPE, LOCATION, COMMS and RANGE; COMPOSITE TYPE, DEVICE and PERSON are its specializations.
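A minimal sketch of the hierarchy in Fig 1, using hypothetical attribute names. The proximity and connectivity queries degrade gracefully under partial knowledge, as discussed below: an unknown location yields None rather than an answer.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Object:
    """The generic object with the five properties listed above."""
    id: str
    type_class: str
    location: Optional[Tuple[float, float, float]] = None  # may be unknown
    comms: List[str] = field(default_factory=list)  # e.g. "visual", "bluetooth"
    comm_range: float = 0.0  # metres

@dataclass
class Person(Object):
    clearance: str = "none"

@dataclass
class Device(Object):
    pass

@dataclass
class Composite(Object):
    """A container such as a room, holding other objects."""
    contents: List[Object] = field(default_factory=list)

def proximity(a: Object, b: Object) -> Optional[float]:
    """Euclidean distance, or None when either location is unknown."""
    if a.location is None or b.location is None:
        return None
    return sum((p - q) ** 2 for p, q in zip(a.location, b.location)) ** 0.5

def can_connect(a: Object, b: Object) -> bool:
    """Objects can communicate if they share a medium and are in range."""
    d = proximity(a, b)
    if d is None or not set(a.comms) & set(b.comms):
        return False
    return d <= min(a.comm_range, b.comm_range)
```

The same two functions cover person-person, person-computer and device-device proximity, since all are specializations of the one generic class.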

Given the structure and additional information captured by the object model, it becomes possible to address many of the complexities of the intimate space described in the previous section. These include:

• proximity of objects within the space;
• person-person proximity;
• person-computer proximity; and
• the connective and communicative ability of objects -- which objects within the space can be connected to which others, given their current proximity.

It is quite likely that the environment may permit only partial knowledge of any particular object, so that the object model will at times not be fully populated or will contain out of date information. Hence it is important to choose a knowledge representation implementation technology that allows useful inferencing despite the uncertain and fluctuating nature of the environment.

3.2  Applications, Knowledge Representation and the Object Model

The approach taken in this paper concentrates on the development of an ontology for location and context awareness within an intimate environment. This contrasts with other approaches, such as that of Dey et al. [6], which adopt a software-infrastructure approach to context awareness. As discussed by Davis et al. [7], commitment to an ontology forms the first step toward implementing this model within a knowledge representation technology. Our goal is to implement the ontology in a knowledge representation technology that allows for intelligent reasoning and provides a medium for efficient computation of the location model. A semantic web representation, based on Resource Description Framework (RDF)-encoded Uniform Resource Identifiers (URIs), would provide a very general framework for
making assertions and statements about data, and potentially allows re-use of the data by other programs and agents [8].
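To illustrate what such assertions look like, object facts can be written as subject-predicate-object triples and retrieved by pattern matching. Plain Python tuples stand in for a real RDF store here, and all resource names are invented for illustration.

```python
# Each assertion is a (subject, predicate, object) triple over URI-like ids.
triples = set()

def assert_fact(s, p, o):
    triples.add((s, p, o))

def query(s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# Hypothetical assertions about a room and its contents
assert_fact("room:101", "rdf:type", "loc:Room")
assert_fact("dev:printer1", "loc:containedIn", "room:101")
assert_fact("person:alice", "loc:containedIn", "room:101")
```

A query such as "who or what is in room 101?" is then a single pattern match, and the triples could equally be consumed by other programs and agents, as the semantic web vision suggests.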

3.3  Temporal Modeling

In handling a dynamic environment, we need the model to support inferences about how objects have changed over time and predictions about how they may change in the future. In our object model many attributes will not change with time, while others may change rapidly. For example, a printer is unlikely to change location very often, and its other attributes would be expected to be similarly static for long periods. In contrast, a person walking through a building will have a dynamic location, and his or her proximity to, and connectivity with, other objects will change rapidly with time. Our approach to modeling time is based on two ideas:

(i) Adopt a model of time that supports both intervals and instantaneous points, where points are modeled as a special case of a zero-duration interval. This approach allows complex reasoning about temporal interval relationships and is consistent with the TSQL2 standard [9].

(ii) Use an implementation of the object model in which time is stored as the key external contextualization of the underlying model, and which allows tractable computation over the model with respect to time.

Object models are stored with their associated valid temporal intervals in a database of the form shown in Table 1. These temporal periods define the time for which the object description was observed to be true. When the state of an object changes, say at time T3, a new entry is made in the table showing the completion of the time interval bounded by T2, where T2 < T3 and the difference between the two time points is one chronon (the smallest measurable unit of time within the system). T3 then becomes the start time for a new interval. Each object is tracked over time in this way, permitting the system to be interrogated for proximity and connectivity information.

Table 1. A database description of an object as it changes over time

Valid Time Interval    Object Model
T1 – T2                O (T1, T2)
T2 – T3                O (T2, T3)
T4 – ∞                 O (T4, ∞)

Allen describes thirteen basic temporal relationships between two intervals [10]. Freksa [11] extended these relationships to describe semi-interval relationships, in which some endpoints are unknown. Semi-interval relationships allow reasoning with incomplete knowledge and generalize across a number of Allen's basic relationships. By recording the temporal intervals associated with each object's properties, we can use the taxonomies provided by Allen and Freksa as a basis for describing and reasoning about temporal behavior.
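The valid-time table of Table 1 and the chronon-based update rule can be sketched as follows. The class and function names are illustrative, the chronon granularity is an assumption, and only a small subset of Allen's thirteen relations is shown.

```python
INF = float("inf")
CHRONON = 1  # smallest measurable unit of time (hypothetical granularity)

class History:
    """Valid-time table as in Table 1: each row is [start, end, state]."""
    def __init__(self, t0, state):
        self.rows = [[t0, INF, state]]

    def update(self, t, state):
        """Close the open interval one chronon before t; open a new one at t."""
        self.rows[-1][1] = t - CHRONON
        self.rows.append([t, INF, state])

    def state_at(self, t):
        """Interrogate the history for the state valid at time t."""
        for start, end, state in self.rows:
            if start <= t <= end:
                return state
        return None

def allen(a, b):
    """A few of Allen's thirteen interval relations (illustrative subset)."""
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:
        return "before"
    if a2 == b1:
        return "meets"
    if a1 < b1 < a2 < b2:
        return "overlaps"
    if a1 == b1 and a2 == b2:
        return "equal"
    return "other"
```

Recording each object's properties this way lets proximity and connectivity queries be posed over any past interval, not just the current instant.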


4  Conclusions

In this paper we have begun to outline some of the problems and requirements for modeling location in closed, densely populated, intimate environments. We argue that, to capture many of the complexities of the intimate environment, a location model for objects must capture more than just location. It must deal with properties and temporal events that describe the context of the object and its surroundings.

References

1. Schmidt, A., Beigl, M., Gellersen, H.: "There is More to Context than Location". Computers & Graphics, Vol. 23 (1999) 893-901
2. Selker, T., Burleson, W.: "Context-aware Design and Interaction in Computer Systems". IBM Systems Journal, Vol. 39 (2000) 880-891
3. Want, R., Hopper, A., Falcao, V., Gibbons, J.: "The Active Badge Location System". ACM Transactions on Information Systems, Vol. 10, No. 1 (1992) 91-102
4. Want, R., Schilit, B., Adams, N., Gold, R., Goldberg, D., Petersen, K., Ellis, J., Weiser, M.: "An Overview of the PARCTAB Ubiquitous Computing Experiment". IEEE Personal Communications, Vol. 2, No. 6 (1995) 28-43
5. Rhodes, B. J., Minar, N., Weaver, J.: "Wearable Computing Meets Ubiquitous Computing". In: Proceedings of the Third International Symposium on Wearable Computers, San Francisco, CA (1999)
6. Dey, A. K., Salber, D., Abowd, G. D.: "A Context-based Infrastructure for Smart Environments". In: Proceedings of the First International Workshop on Managing Interactions in Smart Environments (MANSE '99), Dublin, Ireland (1999) 114-128
7. Davis, R., Shrobe, H., Szolovits, P.: "What is a Knowledge Representation?". AI Magazine (1993) 17-33
8. Berners-Lee, T., Hendler, J., Lassila, O.: "The Semantic Web". Scientific American, May 2001
9. Dyreson, C. E., Soo, M. D., Snodgrass, R. T.: "The TSQL2 Data Model for Time". In: Snodgrass, R. T. (ed.): TSQL2: A Design Approach (1994)
10. Allen, J. F.: "Maintaining Knowledge about Temporal Intervals". Communications of the ACM, Vol. 26, No. 11 (1983)
11. Freksa, C.: "Temporal Reasoning Based on Semi-Intervals". Artificial Intelligence, Vol. 54 (1992)


Location Models for Augmented Environments

Joachim Goßmann and Marcus Specht
Fraunhofer-IMK and Fraunhofer-FIT
[email protected], [email protected]

Introduction

In the following paper we describe two projects on contextualized computer systems and audio-augmented environments that we are currently working on at the Fraunhofer Institutes FIT and IMK. During the recently completed EU project HIPS¹, the prototype hippie was developed at Fraunhofer. Hippie is a nomadic information system that supports mobile information access from multiple platforms and allows exhibition information to be browsed in a personalized way. Detailed features of the prototype system and its evaluations are described in (Oppermann and Specht 2000). The second project is the EU project LISTEN² (Eckel 2001) (LISTEN 2001), which started in January 2001. LISTEN will provide users with intuitive access to personalized and situated audio information spaces while they naturally explore everyday environments. A new form of multi-sensory content is proposed to enhance the sensual impact of a broad spectrum of applications ranging from art installations to entertainment events. This is achieved by augmenting the physical environment with a dynamic soundscape, which users experience over motion-tracked wireless headphones.

Both systems have several models in common for presenting individualized media and creating augmented environments:

• World Model (= space model, location model), describing the space the user moves through and thereby interacts with the system.

• Augmentation Layer on the World Model, describing the areas in the location model that contain active information or sound objects (zones, segments, triggers, agents) which interact with the user of the system.

• Domain Model, describing (with metadata) the information of sound objects and other hypermedia objects which are connected to the physical space via the augmentation layer.

• User Model, holding profile information about the user. While the user moves in physical space, events are sent to the user model, and by these events the model is refined.

In this paper we focus only on the world models and the augmentation layer. The two projects are based on completely different technologies and use different representation methods and interaction facilities. While in hippie users move with small laptop computers or wearables with a small visual display, in LISTEN users have only a wireless headphone displaying 3D audio. In the following sections we describe both projects mainly from the perspective of which representation approach was chosen for writing the location models and the augmentation layer, and what the shortcomings and advantages of the different approaches are. Furthermore, the requirements of the different types of information presentation (hypermedia pages vs. auditory display) in the two projects are quite different. From our point of view, the requirement for more fine-grained location sensitivity in LISTEN has an important impact on the selection of the representation approach and also on the selection of tracking technology.

1 The prototype Hippie was developed by GMD within the project Hyperinteraction within Physical Spaces (HIPS), an EU-supported LTR project in ESPRIT I3.
2 LISTEN – Augmenting everyday environments through interactive soundscapes, Fifth Framework Programme, Creating a user-friendly information society (IST), Contract no.: IST-1999-20646.


Hippie: A Mobile Exhibition Guide

The Space Model

In hippie all objects are described based on an ontology of objects and their relations and roles. This ontology describes the objects of the space model as well as the object types of the augmentation layer. Two examples are given in Figure 1:

(defClass hipsposition () ())

;;; absolute and relative positions used for describing the room layout
(defClass absolute-position (hipsposition)
  ((latitude :type number)
   (longitude :type number)
   (altitude :type number)))

;;; the relative position is mainly for indoors, where no absolute GPS coordinates are available
(defClass relative-position (hipsposition)
  ((xCoord :type number :accessor xCoord :initarg :xCoord :initform nil)
   (yCoord :type number :accessor yCoord :initarg :yCoord :initform nil)
   (zCoord :type number :accessor zCoord :initarg :zCoord :initform nil)
   (related-absolute-position :type absolute-position)))

;;; areas are polygons described by a list of positions
(defClass hipsarea ()
  ((position-list :type list)))

(defClass physical-container (container)
  ((hipsarea :type hipsarea)))

(defClass room (physical-container) ())

;;; describing artworks and their content
(defClass artwork (unit)
  ((related-objects :type list :accessor related-objects)
   (container :type container :accessor container :initarg :container :initform nil)
   (home :type string :accessor home :initarg :home :initform nil)
   (artist :type artist :accessor artist :initarg :artist :initform nil)
   (title :type string :accessor title :initarg :title :initform nil)
   (dateline :type list :accessor dateline :initarg :dateline :initform nil)
   (occasion :type string :accessor occasion :initarg :occasion :initform nil)
   (history-of-object :type history-of-object :accessor history-of-object)
   (epoche :type epoche :accessor epoche :initarg :epoche :initform nil)
   (style :type style :accessor style :initarg :style :initform nil)
   (motive :type motive :accessor motive :initarg :motive :initform nil)
   (additional_sources :type list :accessor additional_sources)))

Figure 1: Basic ontology classes describing the hips space model and the contained art objects

The basic entities for positions allow areas and points to be described, and build up a very simple model of connected entities in a physical space layout, like a floor plan of connected rooms and their connections (doors, steps, elevators). In this floor plan the information objects also have a position, in most cases described as a point.

The Augmentation Layer

In hippie, infrared emitters of different granularity and an electronic compass were used to track the user's current position and direction and to relate them to objects in the physical world. The IR system mainly consists of:

• The IR emitters placed in physical space. Different types of IR emitters are available, depending on the objects they are attached to. The IR emitters are configurable for range and angle and can vary between 1 and 10 m range. Nevertheless, they could only be adapted by hand, and we did not have an exact measure of the current range to model the IR cone precisely with space model coordinates.

• The electronic compass, integrated with IR receiver badges attached to the user's client machine, which could detect the user's current direction.

• The IR & compass scanner software. This software runs on the user's client machine and detects IR signals in the user's current environment. The software can detect several signals in parallel and filter out the strongest of the incoming IR identities. Furthermore, it integrates IR signals and compass information into one protocol and sends the location information to the server. The software is configurable in several ways: a threshold for filtering IR signals, and the threshold for triggering the message sent about the user's location.

In hippie, a curator or administrator of the system can load different exhibition objects on top of a space model and describe the art objects placed in the space model. After loading the exhibition objects and the contained artworks, the curator can load different set-ups for the infrared locators to describe the positions of infrared emitters in the space model layout. The representation of the augmentation layer was mainly influenced by the type of tracking possible with the IR emitters: users can be detected inside an IR cone, but no continuous tracking is possible.

;;; the painting Geigender Orpheus placed in a room, containing the relative hipsposition
(setf (gethash "IV-1" *entity-table*)
      (make-instance 'painting
        :name "Geigender Orpheus"
        :hipsposition (make-instance 'relative-position :xcoord 27.6 :ycoord 13.2 :zcoord 3)
        :abstract "... "
        :artist "Das Bild stammt entweder aus der Sammlung Centurione in Genua oder es handelt sich um ein Werk von Nicole Regnier."
        :dateline "Das Gemälde ist im 16. oder 17. Jh. entstanden."
        :epoche nil
        :style "Barock"
        :motive '("Orpheus")
        :size '("Hochformat")
        :material "Öl auf Leinwand"
        :genre "mythologie"))

;;; an IR locator placed close to the artwork Geigender Orpheus
(setf (gethash 68 *IrLocator-table*)
      (make-instance 'IrLoc
        :id 68
        :hipsposition (make-instance 'relative-position :xCoord 27.5 :yCoord 15 :zCoord 0)
        :object-list '("IV-1" "I-4" "Marcus Specht")))

Figure 2: A painting and an IR locator connected to it, in the exhibition database for Castle Birlinghoven

Basically, the positions of the infrared locators are just hips-positions and not hips-areas, because the IR emitters could only be adapted by hand for the range of the emitted IR cone. This does not allow high flexibility in setting up new IR-locator configurations or fine-grained position tracking of users. Another major shortcoming is that the positions of the IR emitters, the art objects, and all the areas of the rooms in the space model must be known or measured. As an alternative approach, the system can work with IR identities and use no position information at all. As shown in Figure 2, IR locators can be directly connected to objects, which allows the system to connect a received ID even with a moving object, such as another visitor who has an IR emitter attached (see the object-list of IR locator 68, which is connected to the entity "Marcus Specht").
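The scanner's filtering step described above can be sketched as follows. The function names, the strength values and the message format are our own illustrative assumptions, since the protocol details are not specified here.

```python
def strongest_signal(readings, threshold):
    """Filter IR readings (locator id -> signal strength) and return the id
    of the strongest signal at or above the threshold, or None."""
    above = {i: s for i, s in readings.items() if s >= threshold}
    if not above:
        return None
    return max(above, key=above.get)

def location_message(ir_id, compass_deg):
    """Combine the IR identity and compass heading into one message,
    as the scanner software does before sending to the server."""
    return {"locator": ir_id, "heading": compass_deg}
```

With several emitters visible at once, only the strongest identity above the threshold triggers a message about the user's location.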

The LISTEN World Model

The LISTEN World Model is a detailed VR-based geometric model. The model is created for the AVANGO application (Tramberend 1999) and is described as a geometric scene graph. Therefore, a LISTEN environment can be tested and prototyped in a CAVE system (Eckel 2001), or explored in real space with virtual audio content displayed through wireless motion-tracked headphones. In a LISTEN environment, the space the user moves through is addressed in three interdependent, but not necessarily coherent, layers:

• The space model, containing geometric information about the real exhibition space and the objects within it, which is needed to allow exploration of the space in a CAVE system.

• The location model, filtering the position and motion of the user by dividing the dimensions the user moves through (location and orientation) into meaningful constraints and deriving continuous parameters from them.

• The virtual acoustic space, in which the locations of virtual sound sources and spaces are defined.


The LISTEN Location Model

In a LISTEN environment, content is displayed to the user in the form of a spatially rendered, continuous, multi-layered audio stream. In addition to the automatic adaptation of sound-scene rendering to the position and orientation of the user's head, the audio stream is controlled in two ways: through events (mediated interaction), which are used to start and stop the playback of information items in the form of audio recordings, and through continuous control (immediate interaction), which changes parameters in the audio generation of the presentation (e.g. a sound that gets continuously louder as you approach a certain position within the space). The location model needs to provide the data sources from which this interaction can be created:

• to create an acoustic spatio-temporal dramaturgy based on the spatio-temporal behaviour of the user;
• to guide the user and provide detailed acoustic feedback about her or his interaction with the system;
• to escort the encounters a user has with the real and virtual objects of a LISTEN set and environment, with a detail corresponding to that of a natural sound environment; and
• to provide all tools necessary to create virtual sound sculptures, parametric collages and other artistic applications.

We therefore created structures dividing the dimensions a user's head moves through (location and orientation) into meaningful Segments and Zones:

• A Zone is a region, e.g. a cube, in space. It is "on" when the user's head is inside the region and "off" when the head is outside. It can be any arbitrary 3D body model that provides the necessary functionality.

• A Segment can be imagined as a portion of a 360° panorama. If a measured angle is within the portion, the Segment is "on"; otherwise it is "off". A Segment can be imagined with the user either as point of reference or as target.

To achieve immediate interaction, it is necessary to derive continuous parameters from the motion of the user in space by immersing the user in parametric fields. This is done by attaching an Evaluator to a Zone or Segment. Evaluators provide a spatial envelope function, thereby allowing Zones and Segments to have a detailed, continuous parametric "profile". These parameters are later scaled for use in any part of the sound and presentation generation. Two examples, a Segment with a panoramic evaluator and a 2D Zone with a centroid evaluator, are shown in Figures 3 and 4. A more complex Evaluator could provide a value profile that is constrained by a Zone. The concept of the Evaluator is very extensible: a landscape that changes its values with the temporal behaviour of the user (e.g. setting places the user has already visited to 0) would fall within the range of the concept.

Figure 3: Segment with a panoramic evaluator

These elements can be combined into activity graphs, creating a hierarchical location model: the children of a Zone are only checked and evaluated if their parent is active ("on"). The Zones and Segments are represented internally as a graph of nested activatable objects; an example graph is shown in Figure 6.

Relating to Exhibition Objects (Proxies)

Proxies provide us with information about their position and size relative to the user's position:

Figure 4: 2D Zone, and 2D Zone with a centroid evaluator
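The Zone, Segment, Evaluator and activity-graph concepts can be sketched as follows, under simplifying assumptions of our own (axis-aligned box Zones, a 2D centroid evaluator); the names mirror the concepts above, not any actual LISTEN interface.

```python
class Zone:
    """Axis-aligned box Zone: "on" when the head position is inside it."""
    def __init__(self, lo, hi, children=()):
        self.lo, self.hi, self.children = lo, hi, list(children)

    def on(self, pos):
        return all(l <= p <= h for l, p, h in zip(self.lo, pos, self.hi))

    def active(self, pos):
        """Hierarchical evaluation: children are checked only if the parent is on."""
        if not self.on(pos):
            return []
        out = [self]
        for child in self.children:
            out += child.active(pos)
        return out

class Segment:
    """Angular portion of the panorama: "on" when the heading falls inside it."""
    def __init__(self, lo_deg, hi_deg):
        self.lo, self.hi = lo_deg, hi_deg

    def on(self, heading_deg):
        return self.lo <= heading_deg % 360 <= self.hi

def centroid_evaluator(zone, pos):
    """Continuous parameter: 1 at the Zone centre, falling to 0 at the boundary."""
    centre = [(l + h) / 2 for l, h in zip(zone.lo, zone.hi)]
    half = [(h - l) / 2 for l, h in zip(zone.lo, zone.hi)]
    d = max(abs(p - c) / hw for p, c, hw in zip(pos, centre, half))
    return max(0.0, 1.0 - d)
```

Note how the activity graph prunes work: when a parent Zone is off, none of its children (or their Evaluators) are evaluated at all.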


1. The angular boundaries of the object with respect to the user position, and therefore the relative size of the object from the user's perspective (aspect dimension).

2. The angle between the user's nose and the vector to the centre of the object (user angle).

3. The angle between the main direction of the object and the vector to the user position (object angle).

4. The distance between the user and the centre of the object (object distance).

Figure 5: The geometric relationship between visitor and object

This information, derived from the geometric space model, is used to control the size and properties of Zones and Segments in the location model.

Example: multiple aspects. As an example, we assume that a statue has three different interesting perspectives, each requiring a comment that is only audible at the respective position. Each position can be defined by object angle and distance, and is of course only valid if the visitor is facing the statue (if the user angle is within the aspect). The Evaluators in the following dependency graph can be used to guide the visitor to the desired position in space:

Figure 6: Example Activity Graph

Figure 7: Example Location Model Illustration
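The four proxy quantities can be computed from the space model geometry roughly as follows. This is a 2D sketch; the function name and the circular-object approximation used for the aspect dimension are our own assumptions.

```python
import math

def proxy_relations(user_pos, user_heading, obj_pos, obj_heading, obj_radius):
    """Compute the four quantities listed above from 2D positions and
    headings in radians (illustrative names, not LISTEN's actual interface)."""
    dx, dy = obj_pos[0] - user_pos[0], obj_pos[1] - user_pos[1]
    distance = math.hypot(dx, dy)
    to_obj = math.atan2(dy, dx)
    # aspect dimension: angular size of the object from the user's viewpoint,
    # approximating the object as a circle of radius obj_radius
    aspect = 2 * math.asin(min(1.0, obj_radius / max(distance, obj_radius)))
    # user angle: between the user's nose and the vector to the object centre
    user_angle = abs((to_obj - user_heading + math.pi) % (2 * math.pi) - math.pi)
    # object angle: between the object's main direction and the vector to the user
    to_user = math.atan2(-dy, -dx)
    object_angle = abs((to_user - obj_heading + math.pi) % (2 * math.pi) - math.pi)
    return {"aspect": aspect, "user_angle": user_angle,
            "object_angle": object_angle, "distance": distance}
```

In the statue example, each of the three commentary positions would test that object_angle and distance match the desired perspective and that user_angle lies within the aspect, i.e. that the visitor is actually facing the statue.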

The LISTEN Tracking System

In contrast to the HIPS project, very high demands are placed on the tracking technology by the necessity to immerse the user in a convincing virtual acoustic scene (Bregman 1990): continuous, low-latency tracking of the position and orientation of the user's head, covering the entire area to be augmented, is necessary. Tests concerned with acoustic VR have shown that the total system latency (from the head motion to the reaction in the sound presented through the headphone) must be below 59 ms (Wenzel 1998), and the spatial resolution has to be in the centimeter range. A tracking system of the mentioned resolution will easily allow us to detect the spatial structures we propose. The electromagnetic tracking module for LISTEN is currently being developed at IEMW, Vienna University of Technology, URL: http://www.iemw.tuwien.ac.at (Goiser 1998).


Comparison of Approaches

Both projects aim at producing augmented environments for museum-based applications. Nevertheless, each of the targeted presentation media has specific requirements for the granularity of the tracking system, the space modelling and the augmentation layer. HIPS deals with visually displayed factual information items such as texts and images, while the information display in LISTEN is entirely time-based and tries to explore the potential of the acoustic medium to the fullest. In HIPS, hypermedia pages need to be updated as the user accesses information about an art object, but in LISTEN, constant updating is necessary to adapt the sound scene continuously in a way comparable to a natural sound environment. The information content of the spoken word is only a rather small aspect of acoustic information (Bregman 1990). The creation of a LISTEN presentation requires a large amount of additional scripting to interpret and use the events and parameters produced by the location model, controlling the flow of the presentation.

Conclusion and Future Work

Some of the restrictions of hippie will be overcome by the approach taken in LISTEN. Using the CAVE system to simulate augmented environments will allow for new ways of evaluating ubiquitous computing systems and modelling approaches. In addition to the described location model, a detailed analysis of user motion is also under investigation. The highly detailed augmentation layer of LISTEN will be evaluated in terms of authoring practicability. Current test scenarios include two virtual LISTEN prototypes to be completed by summer 2002, and a physical prototype that will be installed in the Kunstmuseum Bonn, Germany, in January 2002. From our point of view, the success of the auditory medium evolving in LISTEN relies strongly on the user navigation technique. With it, we hope to trigger new notions of using virtual acoustic spaces within physical visual environments.

References
Bregman, A. S. (1990). Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, Massachusetts, MIT Press.
Eckel, G. (2001). Immersive Audio-Augmented Environments. 8th Biennial Symposium on Arts and Technology, Connecticut College, New London, CT, USA.
Eckel, G., Beckhaus, S. (2001). ExViz: A Virtual Exhibition Design Environment. International Symposium on Virtual and Augmented Architecture (VAA'01), Dublin, Ireland, Springer-Verlag Wien New York.
Goiser, A. M. J. (1998). Handbuch der Spread-Spectrum Technik, Springer-Verlag Wien New York: 152-158.
LISTEN (2001). The LISTEN Website.
Oppermann, R. and M. Specht (2000). A Context-Sensitive Nomadic Exhibition Guide. HUC2K, Second Symposium on Handheld and Ubiquitous Computing, Bristol, UK, Springer.
Tramberend, H. (1999). Avango: A Distributed Virtual Reality Framework. IEEE Virtual Reality '99 Conference, Houston, Texas, USA.
Wenzel, E. (1998). The impact of system latency on dynamic performance in virtual acoustic environments. 5th International Congress on Acoustics and 135th Meeting of the Acoustical Society of America, Seattle, WA.



Using Location Information in an Undergraduate Computing Science Laboratory Support System
Murray Crease, Philip Gray and Julie Cargill
Department of Computing Science, University of Glasgow, Glasgow, G12 8QQ, UK
@dcs.gla.ac.uk

Location is important to both students and tutors in teaching laboratories – in particular, it influences how they interact during help-giving. We have developed the Lab Support System (LSS), which uses mobile wireless-enabled computers to support tutor-student interaction in teaching laboratories. In this paper we discuss the ways we represented and used locational information in this system and our observations of the role of location during a field trial of the LSS. We also consider how additional location information might be exploited in future system developments.

Introduction
Location is important to both students and tutors in university teaching laboratories – in particular, it influences how they interact during help-giving. We have developed a system to support tutor-student interaction in laboratories and, in the course of developing and using the system, we have had to confront a variety of issues related to the role of location and location information in the activity and in our system design. This paper reports on our experience and some of what we have learned from it.
The Lab Support System (LSS) is a web-based application, deployed on static workstations and wireless-enabled palmtop computers, that supports student-tutor interaction in computer laboratories, particularly the process of students asking for help and tutors delivering it. In developing the LSS our primary focus has been the tutorial process, especially as it exists in teaching labs in Computing Science at our university; issues related to location and mobility were very much secondary and emergent, not an initial central concern. Exploiting location information was not a prime factor in our design, nor did we set out to change the locational aspects of the activity (e.g., reduce tutor movement). Nevertheless, our system did take location into account and we found some interesting interactions between location and system use, as will be reported below.
In this paper we describe the LSS, discuss the way it handles locational information both in the current system and in possible future versions, and report our observations of the role of location and location information during a field trial of the system. Section 2 provides an overview of the way tutors serviced student help requests prior to the introduction of the LSS. Section 3 gives an overview of the LSS and section 4 discusses the features of the system that relate to location information. Section 5 presents some observations on how location and location information influenced system use during a four-week field trial. Finally, in section 6 we offer some conclusions and consider possible location-oriented enhancements of the system.

Locational Information in Teaching Labs
The LSS has been developed in the context of the GRUMPS project, investigating support for large-scale distributed experiments based on computer usage data [Atkinson et al, 2000]. The LSS was one part of a larger initial testbed for capturing data generated by student lab performance, both at the keystroke level and at an application level. Our development was originally focussed on student-tutor interaction in a 1st year teaching lab in the Computing Science Department at the University of Glasgow (GUCSD) [Draper 2001]. Although our longer-term aims are more general, the work reported here refers to the system we developed for this 1st year lab.
The GUCSD 1st year laboratory consists of a single room with 60 student workstations deployed on benches. First year students carry out practical work in tutorial groups of around 20 students and a staff tutor. Groups remain fixed for a semester’s work. Each tutorial group attends one two-hour laboratory each week during the year, working on practical exercises. Each tutorial group has assigned to it a cluster of workstations (a set of contiguous workstations in one area of the lab) intended to be used by the group during the lab session. Machines in the lab are divided into four clusters, each with a different colour, represented by a coloured sticker on the machine’s monitor.
In GUCSD 1st year lab sessions, physical location influences help requests and help-giving in several ways. Students are static in the sense that, for most of a lab session, they remain at one workstation, although they may use different workstations during different sessions. Tutors have no fixed location but move freely around their tutorial group as well as occasionally visiting students from other groups and consulting with other tutors and demonstrators. There is a relationship, albeit fuzzy, between physical location and group membership.
Although a student may use different workstations on different occasions, they are supposed to choose from the machines belonging to their group’s cluster, although this is not strictly enforced. Also, currently unoccupied machines in a cluster may be used by students not belonging to the group scheduled to use that cluster. Tutor help is largely demand-driven. Students catch the attention of a tutor (usually the one belonging to their group) and the tutor must work out a strategy for handling the current set of requests. The help itself, of course, is given when the tutor is near the student (standing behind, looking at the display, or sitting next to them). Occasionally, the tutor might make an announcement to the entire group or call a small group away for a mini-tutorial.



We can consider location and location information from both the students’ and tutors’ points of view. For a tutor, servicing help requests is largely driven by estimation of need. However, physical location plays a part. For example, some tutors take requests from a nearby student first, if they think it will not take long. Other tutors work their way systematically along a lab bench, using the physical layout to structure the help servicing. In general, tutors find it difficult to walk past a student in need of help to service the request of another student further away. Finding a student who has requested help in a busy lab is not always easy. As mentioned, presence of a student in the assigned cluster is not a good indicator of group membership. Also, the shape of clusters makes a difference. For example, one cluster in the lab studied is T-shaped. Students at the end and head of the T were rarely seen.
From a student’s point of view, getting the attention of a tutor is the most important part of requesting help. This clearly depends on where the tutor is relative to the student and what they are doing. The most popular methods of obtaining help before the introduction of the LSS were hand-raising (36% of students questioned stated that they used this method every lab; 52.8% used it in some labs) and attracting attention when the tutor is passing by (12.4% of students questioned stated that they used this method every lab; 53.9% used it in some labs). On rare occasions, a student would get out of their seat and approach the tutor directly (thus maximising their locational advantage). Some students appear to have consciously chosen to sit out of their cluster, as far away as possible, so that they could work unimpeded by unsolicited tutor attention.
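The tutor behaviour described above – balancing who asked first against who is physically close – can be sketched as a simple scoring rule. This is an illustration we constructed, not anything in the LSS; the function name and weighting are assumptions for demonstration only.

```python
# Hypothetical sketch of a tutor's servicing strategy: combine arrival order
# with physical distance. The 0.5 weighting is an arbitrary assumption.

def next_request(requests, tutor_pos, distance_weight=0.5):
    """requests: list of (arrival_index, (x, y)); returns the request to serve next."""
    def cost(req):
        order, (x, y) = req
        dist = ((x - tutor_pos[0]) ** 2 + (y - tutor_pos[1]) ** 2) ** 0.5
        return order + distance_weight * dist

    return min(requests, key=cost)

# A far-away early request can lose to a nearby later one:
requests = [(0, (9.0, 0.0)), (1, (1.0, 0.0)), (2, (8.0, 8.0))]
print(next_request(requests, tutor_pos=(0.0, 0.0)))   # -> (1, (1.0, 0.0))
```

Setting `distance_weight` to zero recovers pure first-come, first-served, the policy the LSS's numbering later nudged tutors towards.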

System Description
The LSS was implemented in Cold Fusion™ using SQL Server™ as the persistent data store. Cold Fusion™ was chosen because it required a minimal specification on the client machines and allowed the system to be run across different platforms. It also allowed the rapid prototyping and development of the system. The system proved to be very reliable; it failed only once, due to a failure of the web server (unrelated to the use of the LSS). Students ran the system on low-specification Windows NT4™ workstations (the standard lab machines they used for their practical work) and the tutors used Compaq iPAQ™ handhelds running WinCE 3.0™. In both cases, the system was run using Internet Explorer™ browsers. In this section the functionality of the two versions of the system – for students and tutors – is described.
The Student System
The primary aim of the student system was to allow students to request help from their tutor with a minimum of disruption. It was also hoped to prompt the students to think about their problem by requiring them to type in some keywords summarising it. The LSS also enabled a student to see who else in their group was requiring help and, if the student requiring help wished it, the keywords describing the problem. This would allow students who perhaps had been stuck with a similar problem to help each other. A secondary aim of the LSS was to allow students to record personal memos. These memos, which only the student can view, can be used to record anything but were primarily aimed at allowing students to record problems outside supported lab times, acting as a reminder to raise the problem at the next scheduled lab. The interface to the student system can be seen in figure 1.

Fig. 1. Student interface to the LSS (on the right) showing the three main areas of functionality: view group (in this instance as a map); request help (in the lower window); and create/delete memo. Correspondence between the map and the actual workstation cluster in the lab is shown to the left. Note that the map orientation is rotated 180° from the student interface since the student using this interface would be facing the camera. The student is represented by ‘X’ on the interface and is marked with ‘*’ on the map. The direction a student is facing in the map is given by the point of the arrow. A photo of the lab is inset at bottom right.

The top left hand area of the interface presents the current state of a lab group. In this case, the group is represented as a map. The student is represented on the map as a black ‘X’. Other members of the group who are using the system are represented as a black ‘■’. Computers not in the student’s colour cluster are represented as ‘-’ and unoccupied¹ computers in the student’s cluster are represented as ‘□’. If a student requests help then their representation on the map turns red. It can be seen, therefore, that in Figure 1 the computer cluster the student is sitting at is approximately rectangular (there are two computers from an adjacent cluster at the left represented by ‘-’) and there are eight students logged in, including the student whose interface is shown. Three students are asking for help. They are shown in red on the interface and are located two computers to the left of the student, at the end of the row facing the student, and to the left and behind the student.
¹ In this context unoccupied means not being used by a student who is in the lab group and is using the LSS. Future versions of the LSS will more accurately represent the presence of a student at a machine regardless of lab group and use of the LSS.
It is also possible for the student to view the state of their group as a list. In this case all members of the group are shown regardless of their location in the lab or login status. If they are logged into the system, the name of the machine they are using is given. If they have asked for help this is indicated and, if they have permitted it, the topic keywords for the help request are given as well. The area at the bottom of the screen allows the students to request help. The students must type in some keywords describing their problem before they are allowed to submit a request. If they check the ‘Broadcast topic keywords to group members’ box, the keywords they enter will be shown to other group members. The area at the top right of the screen allows students to record and delete memos. These memos are only visible to the student. The student is also able to view their record. This allows the student to see the information that is provided to the tutor, such as previous help requests and when they have been seen by a tutor. There are also some links to web pages providing help in operating the LSS.
The Tutor System
As well as allowing tutors to service student requests for assistance, the tutor system also provided tutors with background information on students, thus enabling them to provide more meaningful help to individual students. Furthermore, by providing a high-level view of the entire group, it was hoped that tutors might be able to recognise problems common across their group and, for example, convene a mini-tutorial. It was not our intention, however, to impose any particular strategy for the handling of help requests. We wanted to give tutors more information during labs, but leave it to them to decide how to use that information. The tutor interface is shown in figure 2. The tutor has three separate windows, only one of which can be viewed at a time.
In the options window, the tutor can select the lab group they are going to work with, how that group will be represented, either as a map or a list, and, if represented as a map, how the map is rendered: the orientation of the map and whether the whole lab is viewed or just the area with the group’s students. Figure 2 shows the group view rendered as a map. The tutor view is similar to the student view, but students requesting help are allocated a number, based on the order they asked for help. In this case, there are three students asking for help. A tutor can see a student’s record by clicking on their representation on the map or their name in the list view. This brings up the information window which is shown on the right in figure 2. In this window, the tutor can see a photograph of the student, the student’s help request, if any, as well as details of previous help requests and when the student has been seen by a member of the teaching team.



Fig. 2. Tutor interface to the LSS. The map view on the left gives the tutor a high-level view of the state of his/her group. The right-hand image displays the detail associated with the student logged in at the workstation represented by ‘1’.
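The map views described in this section are essentially character-grid renderings of per-machine state. The following sketch of such a rendering is our own illustration, not the actual LSS code; the function name, the dictionary-based lab layout, and the example cluster are invented, while the symbols are the ones the paper describes.

```python
# Hypothetical sketch of rendering a lab cluster as a character grid, using
# the symbols from the student map view. Layout and names are assumptions.

SYMBOLS = {
    "self": "X",          # the student viewing the map
    "group": "\u25a0",    # another logged-in group member ('■')
    "other": "-",         # machine outside the student's cluster
    "empty": "\u25a1",    # unoccupied machine in the student's cluster ('□')
}

def render_map(width, height, machines, self_pos):
    """machines: dict mapping (x, y) grid position -> state key in SYMBOLS."""
    rows = []
    for y in range(height):
        row = []
        for x in range(width):
            if (x, y) == self_pos:
                row.append(SYMBOLS["self"])
            else:
                row.append(SYMBOLS.get(machines.get((x, y), ""), " "))
        rows.append("".join(row))
    return "\n".join(rows)

machines = {(0, 0): "other", (1, 0): "group", (2, 0): "empty", (1, 1): "group"}
print(render_map(3, 2, machines, self_pos=(2, 1)))
```

In the real system the grid cells would additionally carry colour (red for help requests), which a character rendering cannot show.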

Locational Information in the LSS
Several forms of locational information are used in the LSS. The entire system is based on data held in an SQL Server™ database. This database stores information describing the layout of a computing science teaching laboratory. As well as containing the x,y co-ordinate location of individual machines, the database stores higher-level locational information such as the orientation of each machine and the cluster the machine belongs to (the machines in the lab are divided into four clusters, each with a different colour represented by a coloured sticker on the machine’s monitor). This information is used to generate maps for both the student and tutor interfaces.
In the student interface, the orientation of the map depends upon the computer the student is using. The map is always oriented such that what is in front of the student in the lab appears above the student’s location on the map. This is achieved by using the orientation information associated with the workstation as stored in the database. Fixed orientation is not possible with the tutor’s interface, since it is used on a handheld device. Therefore, map orientation is configurable via a menu, including options presented in terms of four landmarks in the room (windows, printers, tutorial room, fire exit).
The students are not able to specify the area of the lab visible to them. If the student is sitting in the correct cluster for their lab group then they can only see that cluster. If they are sitting at the incorrect cluster for their lab group they can see the cluster they are sitting at, the cluster they should be sitting at, and any clusters located between the two. Furthermore, to reinforce the fact that they are sitting at the wrong cluster, the student’s representation on the map changes from ‘X’ to ‘*’. By minimising the amount of the map shown to students it is possible to minimise the screen space taken up by the LSS interface on the student machines.
Tutors are able to choose either a view of the entire lab or just the area where their students are sitting. In the latter case, if one or more students are sitting outside the appropriate cluster for the tutorial group, the map view is resized accordingly to include their machines in the view. Although the tutors are using a handheld device with a limited screen size, because the LSS is the only application running and the full screen is being used (as opposed to the student situation, where the LSS is run as a background application), the full map of the lab can be displayed on the screen without the need for scrolling.
Different lab groups have different cluster(s) associated with them. This can be a single cluster or a list of clusters, perhaps covering the whole lab. Thus, the system is able to cover different perceptions of the importance of location. If students are to sit in an appropriate cluster for a lab, their group has a specific cluster associated with it. If students may sit anywhere (i.e. location is unimportant), the lab group has all the clusters associated with it.
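The student-map orientation rule above – whatever is physically in front of the student appears above their position on screen – amounts to rotating the stored x,y grid by the workstation's orientation. The sketch below is our own illustration of that transform, not LSS code; the encoding of orientation as a multiple of 90 degrees is an assumption.

```python
# Minimal sketch of rotating a machine's grid coordinate so that the map's
# 'up' matches the direction the student faces. Orientation encoding (degrees,
# multiples of 90) is an assumption for illustration.

def rotate_point(x, y, orientation_deg, width, height):
    """Map a lab-grid coordinate into a map whose top edge is what the
    student at a workstation with this orientation sees in front of them."""
    if orientation_deg == 0:
        return x, y
    if orientation_deg == 90:
        return y, width - 1 - x
    if orientation_deg == 180:
        return width - 1 - x, height - 1 - y
    if orientation_deg == 270:
        return height - 1 - y, x
    raise ValueError("orientation must be a multiple of 90 degrees")

# A student facing the opposite way (180 deg) sees the lab rotated half a turn:
print(rotate_point(0, 0, 180, width=4, height=3))   # -> (3, 2)
```

Applying this per machine before rendering gives each student their own egocentric map from the single layout stored in the database.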

Observations on the LSS in Use
Both the LSS and its related keystroke-level system were deployed in a 1st year computing lab from 24 April to 18 May 2001. A total of 27 tutors and 283 students used the LSS during 87 two-hour lab sessions. During this trial, we collected data via the Grumps tools themselves, as well as via direct observation during lab sessions, discussion groups (i.e., focus groups), questionnaires, diaries and interviews. The non-computer-based data collection was intended to identify critical incidents and to explore behaviours and attitudes that couldn’t be captured automatically. The comments that follow are based on this data.
Introducing our system tends to reduce the power of location to control the order in which help requests are serviced. We noticed that tutors sometimes felt compelled to use the numbering system even when they may have wanted to use some other servicing strategy. Direct observations of LSS and non-LSS lab sessions suggest that tutors find it uncomfortable to walk past students who were further down the queue of requests. 81% of tutors who commented on their LSS servicing strategy employed a combination of help request order and systematic circulation of the lab group.
During the trial, the use of the LSS changed the role of tutor location in student requests and in student perception of the response. When making a request for help, students located the tutor in the room and were more likely to use the LSS to summon help if the tutor was at a distance or out of sight; when asked to explain how they used the LSS, 26.3% of students who responded reported using this strategy. Similarly, students reported that tutors’ strategies for responding to help requests were noticeably different when the LSS was in use. Students reported that tutors would respond to help requests as they appeared in the queue, rather than servicing help requests based on the students’ proximity.
That is, students were aware of the change in tutor strategy that we also recorded. A number of students commented on the fairness of the first-come, first-served strategy.
Tutors tended to keep map orientation fixed, setting it up at the start of a session. Occasionally, they were observed turning the machine to orient it to the room rather than changing the map orientation.
For students, we anticipated that the map would give them information about the state of their group – who was there and where. We expected that some might use this to find nearby students with similar problems. We have little evidence that this information was actually used explicitly in this way. With respect to orientation, students would often get a different orientation each time they logged in. We found no evidence that this caused any problems. Observation of students in the lab suggested that they could locate other members of their group in the map.
As stated above, we anticipated that the map, if used in combination with publicly broadcast topic keywords, might encourage peer-to-peer help amongst the students, the pedagogical benefits of which have been described in other studies [Greer et al, 1998]. There is no firm evidence that the LSS stimulated this kind of behaviour. The post-study student questionnaire suggests that users experienced difficulty expressing their problem in keywords, and the recorded archive of keywords used also supports this. Probably because of the relationship between tutors and students, we observed no explicit negotiation to establish the ways that the new technology would be used, unlike the process reported in other studies on the introduction of mobile location-aware technology [Weilenmann 2001].

Conclusions
Perhaps the main finding resulting from our dealings with location in developing and using the LSS is the richness of issues in this application that are related to location. Even without introducing sophisticated location awareness, introducing mobility to some of the technology had a significant impact on how people performed their tasks. We benefitted from what we found out during early investigations of the domains and we were also fortunate that unexpected and unanticipated effects of introducing a mobile information source didn’t cause the LSS to fail. It appears that the details of location matter and can have significant influence on the success and appropriateness of information presentation and interaction techniques.
We intend to develop the LSS further, although the means of doing so is currently under discussion. There are a number of ways in which such an LSS enhancement might benefit from location, including:
· Proximity awareness of a tutor. In the current system, a student must explicitly complete the help session by pressing one of a set of help completion buttons on their interface (the buttons correspond to different types of help completion, e.g., tutor provided help, I solved the problem myself, etc.) and they cannot make another help request until this is done. Students often didn’t bother to complete this until they needed help again. Proximity sensing could be used to provide automatic closure of a help request. That is, if a tutor is detected physically close to the student, their subsequent departure (after a suitable interval) could be interpreted as completion of the help session or could trigger a modal dialogue (like using the taking of a card in an ATM to trigger dispensing of money).
· Location awareness of a student. We currently determine student location by login to the LSS. This is problematic because a student might be sitting in the lab, not yet logged in, or logged into the workstation, but not the LSS.
· Detecting locational features of help requests. Another location-oriented issue is identifying sets of help requests that are spatially nearby and presenting this information to the tutor, allowing the tutor to create ad hoc tutorial groups. The map provides some support for this, but still requires cognitive effort to identify potential groups.
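The proximity-triggered closure idea from the first point above can be sketched as a small state machine: a tutor is observed near the student, then stays away for a grace interval, and the open request is treated as complete. Everything here is an illustrative assumption (class and method names, the 60-second interval, timestamped samples); no real sensor is modelled.

```python
# Sketch of proximity-based automatic closure of a help session. The grace
# interval and the sampled proximity feed are assumptions for illustration.

GRACE_SECONDS = 60

class HelpRequest:
    def __init__(self):
        self.tutor_was_near = False
        self.departed_at = None   # timestamp when the tutor moved away
        self.closed = False

    def on_proximity_sample(self, tutor_is_near, now):
        if self.closed:
            return
        if tutor_is_near:
            self.tutor_was_near = True
            self.departed_at = None          # tutor present (or returned)
        elif self.tutor_was_near and self.departed_at is None:
            self.departed_at = now           # tutor has just left
        elif self.departed_at is not None and now - self.departed_at >= GRACE_SECONDS:
            self.closed = True               # sustained departure = completion

req = HelpRequest()
for t, near in [(0, True), (10, True), (20, False), (85, False)]:
    req.on_proximity_sample(near, t)
print(req.closed)   # -> True
```

The alternative the text mentions – triggering a modal dialogue instead of silent closure – would simply replace the assignment in the final branch with a prompt to the student.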

Acknowledgements This work was supported by EPSRC under the Distributed Information Management Initiative (Grant ). We wish to thank all our colleagues on the Grumps Project (http://grumps.dcs.gla.ac.uk) for their feedback on the ideas expressed in this paper and for their contribution to the design of the LSS. Section 4 of the paper is based on observational studies carried out largely by Margaret Brown.

References
Atkinson, M.; Draper, S.; Gray, P. (1999). The Grumps Project Proposal. http://grumps.dcs.gla.ac.uk/documents/DIMbid-v8-29feb.htm
Draper, S. (2001). Lab Support Software. Grumps Technical Report. http://staff.psy.gla.ac.uk/~steve/grumps/lss.html
Greer, Jim E.; McCalla, Gordon I.; Cooke, John; Collins, Jason A.; Kumar, Vive; Bishop, Andrew; Vassileva, Julita. (1998). The Intelligent Helpdesk: Supporting Peer-Help in a University Course. Intelligent Tutoring Systems (1998): 494-503.
Weilenmann, Alexandra. (2001). Negotiating Use: Making Sense of Mobile Technology. Personal and Ubiquitous Computing (2001) 5: 137-145.


Empowering 'Ambient Intelligence' with a Direct Sequence Spread Spectrum CDMA Positioning System Domenico Porcino, Martin Wilcox Philips Research Laboratories, Cross Oak Lane, Redhill, RH1 5HA England Email: [email protected] [email protected]

Abstract: Distributed intelligence is set to revolutionise the interface between humans and the surrounding environment. Smart objects will become more and more commonplace in the home of the next decade, in a dynamic network of distributed intelligent elements. One of the most important steps yet to be addressed in this vision is a positioning system able to locate people and objects and allow them to interact in an efficient way. This paper presents an experimental 2.4 GHz Direct Sequence Spread Spectrum system for accurate indoor positioning. The theoretical limits of this technology are presented along with the challenges ahead in delivering the location results with clear and user-friendly logical descriptors.

1. Introduction
The interaction between man and machine is set to change dramatically in the near future. With computational power becoming more accessible and easier to embed in almost any shape and material, the presence of intelligent devices will grow exponentially, making daily life easier and humanising our contacts with objects and machines. This appearance of distributed intelligence in and around our lives is known as 'Ambient Intelligence' [1]. A growing number of products are already beginning to incorporate electronics to help their users: from intelligent 'white goods' (fridges, washing machines, microwave ovens) to wearable devices (mp3 music players, speakers, health sensors). But we are only at the start of this gradual revolution in our habits. Many challenges still lie in front of us, and numerous barriers slow down the powerful interactive experience envisioned for futuristic life scenarios. Among them: the absence of an appropriate auto-recognition and automatic initiation mechanism in the home network (to sense when we arrive home), the necessity of using predetermined and unattractive man-machine interfaces (keyboards or touch screens), and the general 'dumbness' of current devices, which know nothing about where they are or what their role is in the surrounding area.


The future digital environment will have to overcome these problems and barriers, producing spaces that are sensitive and responsive to our needs, connecting and organising the exchange of information within the network of thinking devices. Intelligent objects will be aware of where we are and what we require. Our presence, our gestures and our (voice) commands will prompt appropriate reactions.

2. The Importance of Indoor Positioning for 'Ambient Intelligence'
One of the key technical elements allowing the realisation of this vision will be knowledge of the accurate position of people and objects. Already today, accurate location mechanisms such as the Global Positioning System ([2]) are commercially available. Many more will become part of our daily life with the introduction of third generation mobile phones ([3]). But none of these techniques will allow the meter or sub-meter level accuracy within buildings that is necessary for smart houses. The 'pervasive computing' that will be part of our future in fact requires accurate knowledge of indoor position to make any intelligent electronic appliance aware of its surroundings and able to react to them ([4]). At the time of writing, dynamic positioning of people, objects or equipment in indoor environments such as offices, shopping malls or hospitals is commercially an almost unexplored area, both in the professional and in the consumer market. The location sensing technology that will drive the ambient intelligence revolution needs to offer characteristics that are challenging when considered all together: meter (or sub-meter) level accuracy, large area of coverage (ideally 100-200 m²), multiuser access, privacy observance, automatic set-up, low cost, and a user-friendly interface. Philips Research has been working with these goals in mind to develop the fundamental technology for accurate indoor positioning. The rest of the paper will present the technical background of this location sensing technique. It will also show the theoretical limits of the system, and present some of our ideas on how the information will be delivered to the final user.

3. The Optimal Receiver for Ranging Applications
Classical positioning systems based on RF transmissions estimate the location of a mobile device by calculating, at the receiver, the propagation time (also known as time of flight) of predetermined signals broadcast by the network infrastructure. By correlating one of the signals transmitted by the infrastructure with a local replica of the same signal generated inside the mobile, it is possible to derive a function whose maximum is theoretically linked to the time which radio waves take to propagate between source and destination. Considering rays travelling at the speed of light, it is relatively easy to derive a Time Of Flight (TOF) delay estimate and, from this, the so-called pseudorange, i.e. the estimated distance from transmitter to receiver. Using three separate pseudorange measures to three transmitters and knowing their coordinates, it is possible to derive a complete 2-D position fix of the receiving unit via simple trigonometry, in a process often called triangulation or trilateration.
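The trilateration step described above can be made concrete. Each pseudorange defines a circle around its transmitter; subtracting the circle equations pairwise eliminates the quadratic terms and leaves a 2x2 linear system in the receiver's (x, y). The following worked sketch (our illustration, assuming idealised noise-free ranges) solves that system directly:

```python
# Worked sketch of 2-D trilateration from three pseudoranges. Subtracting
# (x-xi)^2 + (y-yi)^2 = ri^2 pairwise yields a linear system A @ [x, y] = b.
# Idealised, noise-free ranges are assumed for illustration.

def trilaterate(anchors, ranges):
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21   # zero if the transmitters are collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# A receiver at (2, 1) is recovered exactly from three anchor ranges:
anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 4.0)]
ranges = [((x - 2.0) ** 2 + (y - 1.0) ** 2) ** 0.5 for x, y in anchors]
print(trilaterate(anchors, ranges))
```

With real, noisy pseudoranges one would instead solve the over-determined system from four or more transmitters in a least-squares sense; the noise-free three-anchor case shown here is the geometric core of the method.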

Workshop Location Modeling 100

The fundamental technological building block for positioning is therefore the range estimator, whose optimal structure has been known for several decades ([5],[6],[7]). The block scheme of the optimal transmitter and receiver structures is shown in Figure 1.

Figure 1: The optimum ranging system (block scheme: the transmitter mixes s(t) onto the carrier e^(jωc·t); the receiver mixes the incoming signal s(t-T0)·e^(j(ωc·t+θ)) + n(t) down to baseband, giving s(t-T0) + nc(t), and applies the gating function g(t-TR))

Given a generic signal s(t) with a bandwidth BS, the transmitted signal can be obtained by mixing it onto a carrier frequency ωc to produce ℛ{s(t)·e^(jωc·t)}, with ℛ{} indicating the real part. The signal arriving at the receiver, r(t), will then be:

r(t) = r1(t) + n(t) = s(t - T0)·e^(j(ωc·t + θ)) + n(t)    (1)

where T0 is the propagation time between transmitter and receiver (T0 = R/c, with R the distance between transmitter and receiver, and c the speed of propagation in free space), θ is the amount of phase shifting, and n(t) is a noise component. The optimum receiver mixes the received signal down to baseband coherently with a local oscillator signal e^(j(ωc·t+θ)). After the mix down to baseband, the received signal will still be corrupted by the noise component nc(t) = n(t)·cos(ωc·t+θ). The receiver then applies a 'gating' function g(t-TR) to the received signal, with TR being a guess of the time of arrival. The purpose of this gating function is to put a 'gate' around a certain range RR = c·TR and to test whether the transmitter is, in fact, at this range. If the transmitter is at this range, the gate will let the signal through; if not, it will produce an error signal. The receiver shown in Figure 1 therefore acts as a proximity detector, signalling when the transmitter is at a particular range from the receiver. By building a bank of parallel receivers, or a tracking receiver which adjusts TR iteratively to find the correct time, we can test several ranges and obtain a continuous measurement of range. An optimum gating function that minimises the time measurement error TR-T0 has been proposed by Mallinckrodt and Sollenberger [5]. From a simple analysis of the Fourier transform of Mallinckrodt's gating function, it is possible to distinguish two parts, related respectively to a matched filter for the transmitted signal s(t) and to a differentiation. The optimum gating is therefore a matched filter followed by a differentiator, evaluated at time TR. The (minimum) time measurement error resulting from this optimum structure is:


δTR = 1 / ( β · √(2E/N0) )    (2)

where

E = ∫ |R1(f)|² df   and   β² = (1/E) · ∫ (2πf)² |R1(f)|² df ,

with both integrals taken from -∞ to +∞. E is the energy in the received signal r1(t) and β is the effective signal bandwidth. This last parameter is determined not only by the bandwidth BR of the received signal but also by the shape of the signal spectrum. The value β² is often referred to as the mean-square bandwidth or Gabor bandwidth of the signal. The root-mean-square range error will clearly be δRR = c·δTR. Thus, for any transmitted signal s(t), the range error of an optimal receiver will be completely determined by the energy received, the noise floor, and the effective bandwidth β. Taking several measurements from the matched filter in which the noise contributions n0(TR) are independent, and combining them before applying the differentiation, the estimate of the range will improve. Helstrom [6] shows that in the case where P independent measurements are made, the variance of the time measurement error can be reduced to:

δTR² = 1 / ( β² · Σ_{k=1..P} (2E/N0)k ) = 1 / ( β² · (2ET/N0) )    (3)

where (E/N0)k is the energy-to-noise density during the k-th measurement and ET is the total received energy over all P measurement periods.
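As a quick numerical illustration of equation (3), the sketch below evaluates the rms timing error for one and for ten combined measurements. The bandwidth and energy-to-noise values are arbitrary example numbers, not figures from this paper; the point is only the 1/√P scaling.

```python
import math

def delta_t_rms(beta, e_over_n0, p):
    # Equation (3): combining p independent measurements, each with
    # energy-to-noise ratio e_over_n0, gives total energy p * e_over_n0,
    # so the rms timing error is 1 / (beta * sqrt(2 * p * e_over_n0)).
    return 1.0 / (beta * math.sqrt(2.0 * p * e_over_n0))

# Example values (assumed, for illustration): beta = 2e7 rad/s, E/N0 = 100.
single = delta_t_rms(beta=2.0e7, e_over_n0=100.0, p=1)
ten = delta_t_rms(beta=2.0e7, e_over_n0=100.0, p=10)
print(ten / single)  # error shrinks by 1/sqrt(10) ≈ 0.316
```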

4. Physical Limits on the Accuracy of a Direct Sequence Spread Spectrum Indoor Positioning Receiver
In the case of the Direct Sequence Spread Spectrum (DSSS) receiver used for our experiments, s(t) is a pseudonoise (PN) code and the receiver structure comprises an antenna, a mixer and a matched filter stage followed by a differentiator (which is approximated with an Early-Late gating block), making this proximity detector close to the optimum theoretical ranging receiver. The range error generated by the receiver will be determined by its ability to locate the time-of-arrival of the line-of-sight component in the presence of thermal noise. As seen in the classical radar theory presented above, the accuracy with which the receiver can make this estimate is determined by the received line-of-sight energy, the thermal noise floor, and the mean-square bandwidth of the signal. The mean-square bandwidth of our DSSS signal is equal to the second derivative of the correlation function at its peak, i.e. the 'sharpness' of the peak. As long as the bandwidth of the pulse-shaping filter is much greater than the chipping rate, the mean-square bandwidth of the DSSS signal can be approximated by β² ≈ 2·B·fc, where B is the bandwidth of the pulse-shaping filter and fc is the chipping rate. The root-mean-square (rms) range error in our ideal DSSS receiver will then be:


δRR = c / ( 2 · √( B · fc · ET/N0 ) )    (4)

where c is the speed of propagation in free space, ET is the total received signal energy (which might be collected from several separate measurements), and N0 is the thermal noise floor. In the indoor environment, the measurement of pseudorange is made considerably more difficult by the fact that the receiver not only receives the signal directly from the transmitter, but also via reflections off the walls, ceiling and floor between the transmitter and receiver. This phenomenon is known as 'multipath' and is particularly destructive for range measurements, since the correct distance is given by the line-of-sight path, and measuring any of the reflected components (which travel further) will give a range error. To verify whether the application of Ambient Intelligence is physically feasible with this positioning technique, we consider an 'ideal' receiver, whose requirements are less stringent than those of a real receiver. The ideal receiver will perform some averaging to extract as much radio energy as possible and remove the effect of fast fading, and should be able to identify the line-of-sight component from amongst the multipath components. In [8] the performance of this ideal receiver is calculated -starting from the theoretical rms error of equation (4)- in terms of the accuracy with which a given range can be measured, and the largest range that can be measured to a specified accuracy.

Figure 2: The maximum range measurable for a given power and bandwidth. Left panel: maximum range measurable to 1m rms accuracy versus transmission power (-4 to 20 dBm), for bandwidth = 20 MHz, chipping rate = 8 Mchips/s and measurement time = 100 ms. Right panel: maximum range measurable to 1m rms accuracy versus bandwidth (1-100 MHz), for transmission power = 0 dBm, measurement time = 100 ms and a constant bandwidth-to-chipping-rate ratio of 2.5. Each panel shows curves for cluttered and for clear line of sight (LoS) in the Engineering building or Retail store; the ranges shown span roughly 10 m to 1 km.

Results related to the maximum range which can be measured to 1m (rms) accuracy are shown in Figure 2 versus transmitted power and signal bandwidth. These plots indicate the physical limits on Direct-Sequence ranging in the 2.4GHz band and in the propagation environments measured in [9]. Figure 2 demonstrates that the ideal receiver can easily achieve 1m accuracy for ranges of several hundred meters within the power and bandwidth restrictions placed on the 2.4GHz band (transmission power limited to 100mW, useful bandwidth lower than 80MHz, processing gain higher than 10dB [10]). These results are useful both for assessing the feasibility of particular applications and benchmarking the performance of real receivers.
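Plugging numbers into equation (4) gives a feel for these limits. The sketch below uses the bandwidth and chipping rate quoted for Figure 2; the total energy-to-noise ratio ET/N0 is an assumed example value (60 dB), not a figure measured in the paper.

```python
import math

C = 3.0e8  # speed of propagation in free space (m/s)

def range_error_rms(bandwidth_hz, chip_rate_hz, et_over_n0):
    # Equation (4): delta_R = c / (2 * sqrt(B * fc * ET/N0)),
    # using the approximation beta^2 ~ 2 * B * fc for a DSSS signal.
    return C / (2.0 * math.sqrt(bandwidth_hz * chip_rate_hz * et_over_n0))

# B = 20 MHz, fc = 8 Mchips/s (as in Figure 2); ET/N0 = 1e6 is an assumption.
err = range_error_rms(20e6, 8e6, 1e6)
print(f"{err * 100:.1f} cm rms")  # → 1.2 cm rms
```

A real receiver will not reach this figure in the presence of multipath, but the calculation shows how far the thermal-noise limit sits below the 1 m target.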


5. The DSSS testbed
The calculations performed in Sections 3 and 4 have shown the theoretical limits of an indoor ranging system, and have also confirmed that the positioning requirements for Ambient Intelligence are achievable with an "ideal" receiver. Philips Research has developed a real prototype of an indoor location system, initially concentrating on a DSSS transceiver. There are several reasons for this choice: the possibility of using the license-free 2.4 GHz band; easy access to off-the-shelf components for this frequency band; the maturity of DSSS technology; the possibility of recreating a system based on the well-established principles used in GPS; and the automatic and efficient multiple access scheme that is implicit when using different Pseudo Noise (PN) codes. An experimental hardware system for indoor positioning has been set up at Philips Research Laboratories. The system is composed of a transmitter radio board, a set of antennas and a receiver radio board. The transmitter, whose directional transmitting antenna was placed on a doorway pointing towards a long corridor, is able to transmit Gold code sequences (PN sequences) spread over a large frequency band. The chipping rate used was 8 Mchip/s, with a sampling rate of 40 Msample/s. The power effectively transmitted over the air was -4 dBm, i.e. 0.39 mW.
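The Gold codes used by the transmitter are built from pairs of maximal-length (m-)sequences. As a small self-contained sketch (the register length and generator polynomial here are illustrative choices, not the ones used in the testbed), the code below generates a 31-chip m-sequence with a linear-feedback shift register and checks its sharp correlation peak, which is what makes PN codes suitable for ranging:

```python
def lfsr_sequence(taps, seed, length):
    """Generate a PN sequence (mapped to +/-1 chips) from a Fibonacci LFSR."""
    state = list(seed)
    chips = []
    for _ in range(length):
        chips.append(1 if state[-1] else -1)
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]  # XOR the tapped register bits
        state = [feedback] + state[:-1]
    return chips

def circular_correlation(seq, lag):
    n = len(seq)
    return sum(seq[i] * seq[(i + lag) % n] for i in range(n))

# Degree-5 register with primitive polynomial x^5 + x^3 + 1 -> period 31.
pn = lfsr_sequence(taps=(5, 3), seed=[1, 0, 0, 0, 0], length=31)
print(circular_correlation(pn, 0))   # autocorrelation peak: 31
print(circular_correlation(pn, 7))   # any non-zero lag: -1
```

The N-versus-minus-1 autocorrelation profile is what allows the receiver's correlator to resolve the time of arrival to within a fraction of a chip.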

Figure 3: The experimental hardware of a 2.4 GHz DSSS indoor positioning system

A set of 6 antennas was placed 4-5 m apart along a narrow corridor (about 2 m wide) -as shown in Figure 3- starting 8 m from the transmitter and extending to a distance of 30 m. The receiver board was connected to the different antennas via a switch activated from a "virtual" control panel in the GUI program controlling the main operations. The receiving antennas were all connected to the receiving board with the same length of cable (40 m each). The received signal was digitised, acquired and analysed on a PC with signal processing algorithms written in Matlab. Single-shot measurements were used and no averaging over them was done.
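The per-antenna processing (correlating the sampled signal against a replica of the transmitted code and locating the peak) can be sketched as follows. This is an illustrative reconstruction, not the actual Matlab code used in the testbed; the code length, delay and noise level are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1024
code = rng.choice([-1.0, 1.0], size=64)   # stand-in for the transmitted PN code
true_delay = 200                          # time of flight, in samples

# Received signal: delayed replica of the code plus thermal noise.
received = np.zeros(n_samples)
received[true_delay:true_delay + len(code)] = code
received += 0.3 * rng.standard_normal(n_samples)

# Matched filter: correlate against a local replica and locate the peak,
# whose lag is the time-of-flight estimate (here in samples).
correlation = np.correlate(received, code, mode="valid")
estimated_delay = int(np.argmax(correlation))
print(estimated_delay)  # → 200
```

Multiplying the estimated delay by the sample period and by the speed of light yields the pseudorange; an Early-Late gate refines the peak location to sub-sample resolution.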


Table 1. Summary of parameters used for the experimental work

Parameter         | Value
------------------|--------------------------------------------
Frequency         | 2.4 GHz
Chip Rate         | 8 Mchip/s
Sampling Rate     | 40 Msample/s
Power             | -4 dBm (0.39 mW)
TX-RX Separation  | 8 m, 13.5 m, 17.5 m, 21.2 m, 27.2 m, 30.8 m

This first set of results clearly shows that a system accurate to within a few meters is achievable indoors with DSSS technology.

6. Logical Positioning
As described earlier in this paper, the problem of positioning an object can be reduced to the solution and combination of multiple ranging equations. The final result offered by the positioning technology of choice will be expressed either as a simple range from the transmitter (probably in meters of distance) or as absolute coordinates (or latitude, longitude, altitude) of the target object. A clear limit of this approach is that the description of the position of an object in terms of coordinates or longitude and latitude is only very rarely useful to the user. Most outdoor commercial positioning systems (such as GPS receivers) already today present information on a geographical map and not only in terms of coordinates. In the large majority of cases, the user will in fact not be interested in the exact coordinates of the object he is looking for, but in a position relative to something he is familiar with. In the context of an intelligent environment, the position information must be translated into human-understandable terms. As a simple example, when a user asks the intelligent home "Where are my car keys?", he will expect a response of the form "Your car keys are on the table in front of the TV", and not "Your keys are at longitude x and latitude y". This approach of 'relative' positioning is fundamental for the success of indoor location and context-aware intelligent systems. While the problem of delivering the appropriate information could be handled at the application layer, a more radical approach, which makes use of signal characteristics measured at the physical layer, should be followed. The complications of defining accurate 'logical descriptors' should be hidden from the application designer and standardised as much as possible in order to stimulate a market of real-life applications.
A 'plug and play' positioning technology block that translates the raw coordinates into logical descriptors is therefore necessary. Philips Research has started investigating the background for the definition of a set of Application Programming Interfaces (APIs) for indoor location. Preliminary conclusions show the necessity of developing a common interface formalising a process of location request/response, the appropriate descriptors, and a full hierarchy of objects within the specific location environment context. The Geography Markup Language (GML), which has been developed to help programmers write applications using location-response information, could be an appropriate means for defining the semantic descriptors and passing them to the ranging device. GML is based on the eXtensible Markup Language (XML), which is a well-known form of describing information, standardised by the W3C. Much work still lies ahead of us, and many challenges remain to be overcome. An effective translation of position information into a human-understandable, logical representation will need at least: the dynamic formation of a map or pseudo-map of the indoor intelligent environment, efficient (and hierarchical) storage and retrieval of map information, and effective parsing of the hierarchical maps in GML, given the context from which the request comes. Work will be continued in each of these areas in the near future.
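As a sketch of what such a logical descriptor might look like when passed between the ranging device and an application, the snippet below builds and reads a toy XML location response. The element and attribute names are purely hypothetical illustrations, not taken from the GML specification or from any Philips API:

```python
import xml.etree.ElementTree as ET

# Hypothetical location response; all element and attribute names are invented.
doc = ET.fromstring("""
<LocationResponse>
  <Object id="car-keys"/>
  <Descriptor relation="on" reference="table"/>
  <Hierarchy>
    <Place>home</Place>
    <Place>living-room</Place>
    <Place>table</Place>
  </Hierarchy>
</LocationResponse>
""")

# Turn the hierarchical descriptors into a human-understandable answer.
rel = doc.find("Descriptor")
path = "/".join(p.text for p in doc.iter("Place"))
answer = (f"Your {doc.find('Object').get('id')} are "
          f"{rel.get('relation')} the {rel.get('reference')} ({path})")
print(answer)  # → Your car-keys are on the table (home/living-room/table)
```

The point of the sketch is the separation of concerns: the application only parses logical descriptors and a place hierarchy, never raw coordinates.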

7. Conclusions
This paper has discussed the importance of location information to the Philips vision of 'Ambient Intelligence' environments, in which context-aware devices will help with our daily tasks and will dramatically simplify and humanise the interaction between man and machine. The single most challenging technology block necessary to enable this vision is an accurate positioning system with a set of logical descriptors representing meaningful information to the end user. The paper described a Philips testbed architecture based on a Direct Sequence Spread Spectrum CDMA system for accurate indoor positioning and presented calculations of the theoretical limits of such technology in ideal conditions. Attention has also been dedicated to describing the challenges and future work items necessary to guarantee that a formal set of logical descriptors is defined, disconnecting this task from the application layer.

References
1. E.H.L. Aarts, 'Ambient Intelligence: calming, enriching and empowering our lives', Password, Issue 8, July 2001, Royal Philips Electronics
2. E. Kaplan, 'Understanding GPS: principles and applications', Artech House, 1996
3. D. Porcino, 'Location of Third Generation Mobile Devices: A Comparison between Terrestrial and Satellite Positioning Systems', IEEE Vehicular Technology Conference 2001 (VTC01), May 2001
4. N.C. Bird, 'The importance of place', Password, Issue 8, July 2001, Royal Philips Electronics
5. A.J. Mallinckrodt and T.E. Sollenberger, 'Optimum Pulse-Time Determination', IRE Transactions, No. PGIT-3, pp. 151-159, March 1954
6. C.W. Helstrom, 'Statistical Theory of Signal Detection', Pergamon Press, 1960
7. M.I. Skolnik, 'Introduction to Radar Systems', Second Edition, McGraw-Hill, 1981
8. M.S. Wilcox, 'Derivation of an upper limit on the performance of indoor Direct-Sequence ranging systems', Proceedings of the 2001 London Communications Symposium, University College London, Sept. 2001
9. S. Kim, H.L. Bertoni, M. Stern, 'Pulse Propagation Characteristics at 2.4 GHz inside buildings', IEEE Trans. on Vehicular Technology, Vol. 45, No. 3, Aug. 1996, pp. 579-592
10. Federal Communications Commission, 'Code of Federal Regulations', Title 47, Part 15.247, available at http://www.access.gpo.gov/nara/cfr/index.html

