Editor in Chief Guisseppi A. Forgionne University of Maryland, Baltimore County, USA

International Advisory Panel Frederic Adam, University College Cork, Ireland Hojjat Adeli, The Ohio State University, USA David Paradice, Florida State University, USA Daniel Power, University of Northern Iowa, USA Andrew B. Whinston, University of Texas - Austin, USA

Associate Editors James Courtney, University of Central Florida, USA Jeet Gupta, University of Alabama, Huntsville, USA James R. Marsden, University of Connecticut, USA Manuel Mora, Autonomous University of Aguascalientes, México Vicki L. Sauter, University of Missouri, St. Louis, USA Victoria Yoon, University of Maryland Baltimore County, USA Pascale Zaraté, Toulouse University, France

Managing Editor
Elizabeth Duke, IGI Global

Assistant Managing Editor
Jeffrey Ash, IGI Global

The International Journal of Decision Support System Technology (ISSN 1941-6296; eISSN 1941-630X) is published quarterly by IGI Global, 701 E. Chocolate Avenue, Hershey, PA 17033-1240, USA, www.igi-global.com. For pricing, please visit www.igi-global.com/journals. Sample copies may be requested (based on availability). Back issues are available for US$66.25 each. Advertising information: IGI Global, 701 E. Chocolate Avenue, Suite 200, Hershey, PA 17033-1240, USA, 717/533-8845, www.igi-global.com. Postmaster: Send all address changes to the above address. Copyright © 2009 IGI Global. All rights, including translation into other languages, are reserved by the publisher. No part of this journal may be reproduced or used in any form or by any means—graphics, electronic, or mechanical, including photocopying, recording, taping, or information and retrieval systems—without written permission from the publisher, except for noncommercial, educational use, including classroom teaching purposes. The views expressed in this journal are those of the authors but not necessarily of IGI Global.

IGI Publishing Hershey • New York Order online at www.igi-global.com or call 717-533-8845 x100, Mon-Fri 8:30 am - 5:00 pm (EST), or fax 24 hours a day 717-533-8661

International Editorial Review Board Patrick Brézillon, Pierre & Marie Curie University, France Fátima C.C. Dargam, ILTC, Research Institute “Doris Aragon”, Rio de Janeiro, Brazil Leonardo Garrido, Monterrey Tech, Mexico Carina Gonzales, University of the Laguna, Spain Zhiling Guo, University of Maryland, Baltimore County, USA Payam Hanafizadeh, Allameh Tabatabae’i University, Iran Luca Iandoli, Università degli Studi di Napoli Federico II, Italy Miroljub Kljajic, University of Maribor, Slovenia Carlos Legna, Universidad de la Laguna, Spain Katty Murty, Michigan State University, USA Daniel E. O’Leary, University of Southern California, USA Ricardo Colomo-Palacios, Spain Zita Iren Zoltayne Paprika, Budapest University, Hungary Doncho Petkov, Eastern Connecticut State University, USA Roger Pick, University of Missouri, Kansas City, USA R. Venkata Rao, Sardar Vallabhbhai National Institute of Technology (SVNIT), India Rita A. Ribeiro, Coordinator of research group on Computational Intelligence at UNINOVA, Portugal Clive Roberts, University of Birmingham, UK J P Shim, Mississippi State University, USA Ralph Sprague, University of Hawaii at Manoa, USA Xuan F. Zha, NIST, USA

JDSST is currently listed or indexed in: Cabell's; GetCited; Media Finder; Standard Periodical's Directory; Ulrich's Periodical Directory


International Journal of Decision Support System Technology January-March 2009, Vol. 1, No. 1

Table of Contents

Editorial Preface

i	IJDSST 1(1)
	Guisseppi A. Forgionne, Editor-in-Chief
	James Marsden, Associate Editor

Research Articles

1	Effective DMSS Guidance for Financial Investing
	Guisseppi A. Forgionne, University of Maryland ‒ Baltimore County, USA
	Roy Rada, University of Maryland ‒ Baltimore County, USA

15	Decision Support-Related Resource Presence and Availability Awareness for DSS in Pervasive Computing Environments
	Stephen Russell, University of Maryland ‒ Baltimore County, USA
	Guisseppi Forgionne, University of Maryland ‒ Baltimore County, USA
	Victoria Yoon, University of Maryland ‒ Baltimore County, USA

35	Collaborative Decision Making: Complementary Developments of a Model and an Architecture as a Tool Support
	Marija Jankovic, Ecole Centrale Paris, France
	Pascale Zaraté, Toulouse University, France
	Jean-Claude Bocquet, Ecole Centrale Paris, France
	Julie Le Cardinal, Ecole Centrale Paris, France

46	Developing a DSS for Allocating Gates to Flights at an International Airport
	Vincent F. Yu, National Taiwan University of Science and Technology, Taiwan
	Katta G. Murty, IOE, University of Michigan, Ann Arbor, USA
	Yat-wah Wan, GSLM, National Dong Hwa University, Taiwan
	Jerry Dann, Taiwan Taoyuan International Airport, Taiwan
	Robin Lee, Taiwan Taoyuan International Airport, Taiwan

69	Enabling On-Line Deliberation and Collective Decision-Making through Large-Scale Argumentation: A New Approach to the Design of an Internet-Based Mass Collaboration Platform
	Luca Iandoli, University of Naples Federico II, Italy
	Mark Klein, Massachusetts Institute of Technology, USA
	Giuseppe Zollo, University of Naples Federico II, Italy

International Journal of Decision Support System Technology Guidelines for Manuscript Submissions

Mission: There are a number of excellent theoretical and applied scholarly journals that offer articles in many DMSS areas. None, however, have a primary focus on DMSS technology and its role in DMSS support for the decision making process. The primary objective of the International Journal of Decision Support System Technology is to provide comprehensive coverage of DMSS technology issues. The issues can involve, among other things, new hardware and software for DMSS, new models to deliver decision making support, dialog management between the user and system, data and model base management within the system, output display and presentation, DMSS operations, and DMSS technology management. Since the technology’s purpose is to improve decision making, the articles are expected to link DMSS technology to improvements in the process and outcomes of the decision making process. This link can be established theoretically, mathematically, or empirically in a systematic and scientific manner.

Coverage: The journal covers a range of topics within the area of specialization. Topics of interest include, but are not limited to:

• DMSS computer hardware
• DMSS computer systems and application software
• DMSS system design, development, testing, and implementation
• DMSS data capture, storage, and retrieval
• DMSS model capture, storage, and retrieval
• DMSS system and user dialog methods
• DMSS output presentation and capture
• DMSS feedback control mechanisms
• DMSS function integration strategies and mechanisms
• DMSS technology evaluation
• DMSS network strategies and mechanisms
• Web-based and mobile DMSS technologies
• Public and private DMSS applications
• DMSS technology organization and management
• Context awareness, modeling, and management for DMSS
• All other related technology issues that impact the overall utilization and management of DMSS in modern life and organizations

Originality: Prospective authors should note that only original and previously unpublished manuscripts will be considered for review. Furthermore, simultaneous submissions are not acceptable. Submission of a manuscript is interpreted as a statement of certification that no part of the manuscript is copyrighted by any other publication nor under review by any other formal publication. It is the primary responsibility of the author to obtain proper permission for the use of any copyrighted materials in the manuscript prior to its submission.

Style: Submitted manuscripts must be written in the APA (American Psychological Association) editorial style. References should relate only to material cited within the manuscript and be listed in alphabetical order, including the author’s name, complete title of the cited work, title of the source, volume, issue, year of publication, and pages cited. Please do not include any abbreviations. See the following examples:

Example 1: Single-author periodical publication. Duke, E. (2008). Information technology standards. IT Standards Review, 16(2), 1-15.

Example 2: Multiple-author periodical publication. Wilson, I. J., & Smith, A. J. (2008). Organizations and IT standards. Management Source, 10(4), 77-88.

Example 3: Books. Smith, A. J. (2008). Information standards. New York: J.J. Press.

State the author’s name and year of publication where you use the source in the text. See the following examples:

Example 1: In most organizations, information resources are considered to be a major resource (Smith, 2002; Smith, 2008).

Example 2: Duke (2008) states that the value of information is recognized by most organizations.

Direct quotations of another author’s work should be followed by the author’s name, date of publication, and the page(s) on which the quotation appears in the original text.

Example 1: Wilson (2008) states that “the value of information is realized by most organizations” (p. 45).

Example 2: In most organizations, “information resources are considered to be a major organization asset” (Duke, 2008, pp. 35-36) and must be carefully monitored by senior management.

For more information, please consult the APA manual.

Review Process: To ensure the high quality of published material, JDSST utilizes a group of experts to review submitted manuscripts. Each submission is reviewed on a double-blind, peer review basis by at least two members of the International Editorial Review Board of the journal. Return of a manuscript to the author(s) for revision does not guarantee acceptance of the manuscript for publication. The final decision will be based upon the comments of the reviewers and the Associate Editor upon their final review of the revised manuscript.

Copyright: Authors are asked to sign a warranty and copyright agreement upon acceptance of their manuscript, before the manuscript can be published. All copyrights, including translation of the published material into other languages, are reserved by the publisher, IGI Global. Upon transfer of the copyright to the publisher, no part of the manuscript may be reproduced in any form without written permission of the publisher.

Submission: Authors are asked to send their manuscripts for possible publication by e-mail as a file attachment in Microsoft Word to [email protected]. The main body of the e-mail message should contain the title of the paper and the names and addresses of all authors. Manuscripts must be in English. The author’s name should not be included anywhere in the manuscript, except on the cover page. Manuscripts must also be accompanied by an abstract of 100-150 words, precisely summarizing the mission and objective of the manuscript.

Length: The submitted manuscript should be approximately 30 double-spaced, word-processed pages. The specific length should be reasonable in light of the chosen topic. Discussion and analysis should be complete, but not unnecessarily long or repetitive.

Correspondence: An acknowledgment e-mail confirming receipt of the manuscript will be sent promptly. The review process will take approximately 12-16 weeks, and the author will be notified concerning the possibility of publication of the manuscript as soon as the review process is completed. All correspondence will be directed to the first author of multi-authored manuscripts. It is the responsibility of the first author to communicate with the other author(s). Authors of accepted manuscripts will be required to provide a final copy of their manuscript sent electronically as an attachment in Microsoft Word, along with the original copyright form signed in ink by all authors of the paper. The accepted submission will be edited by the JDSST copyeditor for format and style. Upon completion of typesetting, the edited, typeset article will be sent to the author for proofreading. The author will be required to return the proofread article to the publisher within 48 hours.

All submissions and questions should be forwarded to: Guisseppi Forgionne, PhD. Editor-in-Chief International Journal of Decision Support System Technology E-mail: [email protected]


Editorial Preface

IJDSST 1(1) Guisseppi A. Forgionne, Editor-in-Chief, IJDSST James Marsden, Associate Editor, IJDSST

Decision Making Support Systems (DMSS) are information systems that interactively support the decision making process of individuals and groups in life, public and private organizations, and other entities. These systems include Decision Support Systems (DSS), Executive Information Systems (EIS), expert systems (ES), knowledge based systems (KBS), and creativity enhancing systems (CES). Other DMSS, such as executive support systems (ESS), management support systems (MSS), artificially intelligent decision support systems (IDSS), and decision technology systems (DTS), integrate the functions of DSS, EIS, ES, KBS, or CES to provide more comprehensive support than the individual separate systems.

Each DMSS is a vehicle that delivers computer and information technology and decision technology to the system user. Computer and information technology typically involves hardware, systems software, and applications software, while decision technology usually involves accounting, cognitive science, economic, management science, and statistical models that describe the decision problem explicitly and, directly or indirectly, provide solution alternatives, forecasts, or recommendations.

There are a number of excellent theoretical and applied scholarly journals that offer articles in many DMSS areas. Some, in the areas of management science/operations research, economics, accounting, and statistics, tend to focus on the modeling technology being delivered through a DMSS. Others, in the areas of cognitive science and human centered computing, focus heavily on management issues or the interaction between the user and the physical DMSS. A third group, in the area of information and database technology, centers on data and knowledge management issues within DMSS. Finally, there are outlets in artificial intelligence that offer tools and methods to assist users in performing DMSS tasks. None of these scholarly journals, however, has a primary focus on DMSS technology and its role in DMSS support for the decision making process.

The primary objective of the International Journal of Decision Support System Technology is to provide comprehensive coverage of DMSS technology issues. The issues can involve, among other things, new hardware and software for DMSS, new models to deliver decision making support, dialog management between the user and system, data and model base management within the system, output display and presentation, DMSS operations, and DMSS technology management. Since the technology’s purpose is to improve decision making, the articles are expected to link DMSS technology to improvements in the process and outcomes of the decision making process. This link can be established theoretically, mathematically, or empirically in a systematic and scientific manner.

Decision making is a fundamental management task, and innovative management is necessary for the continued successful operation and long term survival and growth of any public or private enterprise. Consequently, articles within IJDSST are expected to provide as much management and organizational focus as possible. This focus may involve the management of the DMSS, the organizational structure required to support such management, the impact of the DMSS on organizational performance, or a host of other related issues.

To achieve the IJDSST mission, IGI has assembled an outstanding editorial group, including an International Editorial Advisory Panel (IEAP), Associate Editors (AE), and an Editorial Review Board (ERB). The ERB members, or their delegates, review submitted articles and make editorial recommendations. Reviews are reported in as much detail as possible, with the objective of mentoring the authors in developing high quality manuscripts for publication in IJDSST. In cases of conflicting opinions among the ERB reviewers, an AE will be assigned to the manuscript with the purpose of resolving the conflict and reaching a final publication decision. The IEAP serves in a consulting role to the Editor-in-Chief, offering advice on journal planning and operations.

There are five articles in this inaugural issue. Each deals with various DMSS technological issues and applications.

The first article deals with the role of DMSS in financial management. Investment decisions have a significant impact on individuals, groups, organizations, the economy, and society. As a result, many formal methodologies and information systems have been designed, developed, and implemented to assist with financial investing. While these tools can improve decision making, none offer complete and integrated support for financial investing. This paper seeks to close the support gap by offering a theoretical decision making support system for financial investing and illustrating the system’s use in practice. The demonstration indicates that the theoretical system can improve the process of, and outcome from, investing.

The second article discusses issues related to data-driven decision support systems (DSS) in pervasive computing environments (PCE) and demonstrates that knowledge of resources’ online status and availability in these systems can improve decision outcomes. The state of a decision-related resource (users or data) may be intermittent, since users and data may be mobile or distributed in PCEs. A decision maker’s knowledge of a resource’s state can affect the decision making process that a DSS seeks to augment. A proposed theoretical model for incorporating resource availability and presence awareness in DSS is evaluated using a management problem simulation.

In recent years, there has been considerable interest in cooperative, group, or collaborative decision making. Geographical dispersion, team effort, and concurrent working have contributed to this interest. The third article presents research that involves face-to-face and synchronous distributed decision making, showing how these activities can offer complementary decision making support.

International airports typically have features quite distinct from those of regional airports. The fourth article discusses the process of developing a DSS (Decision Support System), and appropriate mathematical models and algorithms to use, for making gate allocation decisions at an international airport. As an example, the authors describe the application of this process at TPE (Taiwan Taoyuan International Airport) to make the gate allocation decisions for its passenger flights.

The successful emergence of on-line collaborative communities, such as open source software and Wikipedia, seems to be due to an effective combination of intelligent collective behavior and Internet capabilities. However, current Internet technologies, such as forums, wikis, and blogs, while enabling participation, information sharing, and accumulation at an unprecedented rate, appear to be less supportive of knowledge organization, exploitation, and consensus formation. In particular, there has been little support for large, diverse, and geographically dispersed groups to systematically explore possibilities and make decisions concerning complex and controversial systemic challenges (on-line collective deliberation). The final article presents a new large-scale collaborative platform, called the Collaboratorium, based on argumentation theory and mapping. Several research hypotheses are examined concerning the manner in which on-line large-scale argumentation may improve collective deliberation when compared with other technologies.

We are grateful for this opportunity to report and disseminate new knowledge and wisdom about DMSS technological issues, and we look forward to publishing high quality articles on these issues.

Call for Articles
International Journal of Decision Support System Technology
An official publication of the Information Resources Management Association!

MISSION: There are a number of excellent theoretical and applied scholarly journals that offer articles in many DMSS areas. None, however, have a primary focus on DMSS technology and its role in DMSS support for the decision making process. The primary objective of the International Journal of Decision Support System Technology is to provide comprehensive coverage for DMSS technology issues. The issues can involve, among other things, new hardware and software for DMSS, new models to deliver decision making support, dialog management between the user and system, data and model base management within the system, output display and presentation, DMSS operations, and DMSS technology management. Since the technology’s purpose is to improve decision making, the articles are expected to link DMSS technology to improvements in the process and outcomes of the decision making process. This link can be established theoretically, mathematically, or empirically in a systematic and scientific manner.

COVERAGE/MAJOR TOPICS:

• DMSS data capture, storage, and retrieval
• DMSS model capture, storage, and retrieval
• DMSS system and user dialog methods
• DMSS output presentation and capture
• DMSS feedback control mechanisms
• DMSS function integration strategies and mechanisms
• DMSS technology evaluation
• DMSS network strategies and mechanisms
• Web-based and mobile DMSS technologies
• Public and private DMSS applications

ISSN 1941-6296 eISSN 1941-630X Published quarterly

Full submission guidelines available at: http://www.igi-global.com

All submissions should be emailed to: [email protected]

Now when your institution’s library subscribes to any IGI Global journal, it receives the print version as well as the electronic version for one inclusive price. For more information contact a customer service representative at [email protected] or 717/533-8845, ext. 100

Receive a FREE JOURNAL SUBSCRIPTION when you join the Information Resources Management Association! Choose any of our journals to receive free with your paid membership to IRMA and receive additional discounts for further journal subscriptions. For more information please visit www.irma-international.org. All institutional subscriptions include free online access! Please contact [email protected] for more information.

IGI Publishing 701 E. Chocolate Ave., Suite 200 Hershey, PA 17033-1240, USA Tel: 717/533-8845 Fax 717/533-8661 PLEASE RECOMMEND THIS JOURNAL TO YOUR LIBRARY!

New for 2009!

The goal of the International Journal of Agent Technologies and Systems is to increase awareness and interest in agent research, encourage collaboration, and give a representative overview of the current state of research in this area. It aims to bring together not only scientists from different areas of computer science, but also researchers from different fields studying similar concepts. The journal will serve as an inclusive forum for discussion of ongoing or completed work on both theoretical and practical issues of intelligent agent technologies and multi-agent systems. The International Journal of Agent Technologies and Systems focuses on all aspects of agents and multi-agent systems, with a particular emphasis on how to modify established learning techniques and/or create new learning paradigms to address the many challenges presented by complex real-world problems.

The objective of the International Journal of E-Services and Mobile Applications is to be a truly interdisciplinary journal, providing comprehensive coverage and understanding of all aspects of e-services, self-services, and mobile communication from different perspectives, including marketing, management, and MIS. The journal invites contributions that are both empirical and conceptual, and is open to all types of research methodologies from both academia and industry.

The International Journal of Sociotechnology and Knowledge Development wishes to publish papers that offer a detailed analysis and discussion of sociotechnical philosophy and the practices which underpin successful organizational change, thus building a more promising future for today’s societies and organizations. It will encourage interdisciplinary texts that discuss current practices as well as demonstrating how the advances of - and changes within - technology affect the growth of society (and vice versa). The aim of the journal is to bring together the expertise of people who have worked practically in a changing society across the world for people in the field of organizational development and technology studies, including information systems development and implementation.

In an ambient intelligence world, devices work in concert to support people in carrying out everyday life activities and tasks in a natural way, using information and intelligence that is hidden in the network connecting these devices. The International Journal of Ambient Computing and Intelligence will specifically focus on the convergence of several computing areas. The first is ubiquitous computing, which focuses on self-testing and self-repairing software, privacy-ensuring technology, and the development of various ad hoc networking capabilities that exploit numerous low-cost computing devices. The second key area is intelligent systems research, which provides learning algorithms and pattern matchers, speech recognition and language translators, and gesture classification and situation assessment. Another area is context awareness, which attempts to track and position objects of all types and represent objects’ interactions with their environments. Finally, an appreciation of human-centric computer interfaces, intelligent agents, multimodal interaction, and the social interactions of objects in environments is essential.


International Journal of Decision Support System Technology, 1(1), 1-14, January-March 2009 

Effective DMSS Guidance for Financial Investing Guisseppi A. Forgionne, University of Maryland ‒ Baltimore County, USA Roy Rada, University of Maryland ‒ Baltimore County, USA

Abstract

Investment decisions have a significant impact on individuals, groups, organizations, the economy, and society. As a result, many formal methodologies and information systems have been designed, developed, and implemented to assist with financial investing. While these tools can improve decision making, none offer complete and integrated support for financial investing. This article seeks to close the support gap by offering a theoretical decision making support system for financial investing and illustrating the system’s use in practice. The demonstration indicates that the theoretical system can improve the process of, and outcome from, investing.

Keywords: data marting; data mining; decision making support systems; financial engineering

Introduction

Financial decisions are some of the most challenging and important decisions made daily by individuals, groups, organizations, and other entities. These decisions are complex and, at best, semi-structured, and the selected actions will have a substantial impact on the well-being of these entities, in particular, and the economy, in general. A large amount of available quantitative data support financial decision making, and many rigorous models capture financial phenomena (R C Merton, 1995). Various information systems are available to deliver the models’ embedded expertise to the investor (C. Zopounidis & M. Doumpos, 2000). Vellido et al. (1999) reviewed neural network applications in finance, concluded that the literature is rich with neural network applications, and suggested that future work should explore combining knowledge-based techniques with neural network techniques. A review of articles on the subject of finance in the journal Expert Systems with Applications highlighted a pattern of knowledge-based work in the 1980s and early 1990s and machine learning work from the mid-1990s until now (Roy Rada, 2008). Yet most of the models, and thereby the information system delivery, focus on one aspect of the financial situation and typically are tailored to specific categories of investors. Such a fragmented and incomplete approach to financial analysis may not provide the investor with the specific and precise guidance needed for effective decision making in practice (Bob Berry, 2004).

This article examines the issue of effective guidance for financial investing. First, the relevant literature is reviewed. Next, the financial investing approaches inspired by this literature and the pertinent decision support gaps in these approaches are identified. Then, the article offers a decision making support system that can close the support gaps and provide specific and precise guidance for financial investing in a complete and integrated manner. The article concludes with an examination of the implications for financial decision making, in particular, and for financial engineering, in general.

Copyright © 2009, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

Financial Engineering

Financial investing may be seen as a three-step process of collecting data about assets, evaluating assets, and buying and selling assets into and from a portfolio. Usually, investors will seek professional advice to assist in the investment process. Typically, the professional’s organization employs technical experts, or financial engineers, to develop tools that help financial professionals in the advisement process. Financial engineers, broadly speaking, design new financial instruments and create solutions to financial problems. In particular, financial engineering includes these four areas (J M Mulvey et al., 1997):

1. Corporate Finance (new instruments to secure funds, engineering takeovers and buyouts)
2. Trading (developing dynamic trading strategies)
3. Investment Management (repackaging and collateralization)
4. Risk Management (insurance, hedging, and asset management)

Risk management may in turn be decomposed into (J M Mulvey et al., 1997):

• strategic asset management (via multi-stage stochastic optimization), and
• operational asset management (via immunization models).
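The three-step view of investing described above (collect data about assets, evaluate them, then trade into and out of a portfolio) can be sketched as a toy loop. Everything below is a hypothetical illustration: the asset data, the mean-variance-style scoring rule, and all names are assumptions, not a method taken from this article.

```python
# Toy sketch of the three-step investing process: collect, evaluate, rebalance.
# All data and the scoring rule are hypothetical placeholders.

def collect_data():
    # Step 1: in practice this would query market data feeds;
    # here we return static sample figures (expected return, risk).
    return {"AAA": {"ret": 0.08, "risk": 0.12},
            "BBB": {"ret": 0.05, "risk": 0.04},
            "CCC": {"ret": 0.11, "risk": 0.25}}

def evaluate(assets, risk_aversion=2.0):
    # Step 2: score each asset; a simple mean-variance style utility is assumed.
    return {name: d["ret"] - risk_aversion * d["risk"] ** 2
            for name, d in assets.items()}

def rebalance(scores, top_n=2):
    # Step 3: hold the top-scoring assets with equal weights.
    held = sorted(scores, key=scores.get, reverse=True)[:top_n]
    return {name: 1.0 / top_n for name in held}

portfolio = rebalance(evaluate(collect_data()))
```

A real DMSS would, of course, replace each step with far richer data management, models, and dialog components; the sketch only fixes the three-step structure.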

Often, financial engineering is accomplished with the assistance of optimization, statistical, and econometric approaches. Sometimes, these approaches assume the problem is well-posed and has a single objective. Financial decisions, however, could involve:

• the existence of multiple criteria,
• conflicts among criteria,
• ill-structured evaluation, and
• political and social factors involved with human decision-making.

Management scientists, economists, and others have developed additional models to deal with these financial complexities. Investors and investment professionals, however, typically lack the knowledge of, or interest in, these financial models that is needed to utilize them effectively in practice. Consequently, financial engineers and information systems professionals have developed information systems designed to deliver the available financial models to investors and investment professionals (Zopounidis & Doumpos, 2000). The most efficient and effective delivery vehicle for such a purpose is the decision making support system (DMSS).

Financial Decision Making Support

Decision making support systems (DMSS) are computer-based information systems that support individual, group, or organizational decision-making processes in an interactive manner. Depending on the decision-making phases or steps supported, a DMSS may take the form of a decision support system (DSS), executive information system (EIS), expert system (ES), or some integrated combination of the functions delivered by a DSS, EIS, and/or ES (Forgionne et al., 2005). For example, recent advances in information technology and artificial intelligence can be used to enhance DSS or EIS processing, giving rise to the intelligent DMSS (IDMSS) (Gupta et al., 2006). The support rendered in an IDMSS can occur at four levels (Roy, 1996):

1. Determine the object of the decision
2. Analyze the criteria
3. Model the preferences
4. Develop recommendations

Copyright © 2009, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

International Journal of Decision Support System Technology, 1(1), 1-14, January-March 2009

This support often is achieved through the delivery of operations research methods such as:

1. multi-objective mathematical programming,
2. multi-attribute utility theory,
3. outranking relations, and
4. preference disaggregation analysis.

Each of these methods has been applied to a rich variety of financial problems. For instance, multi-objective mathematical programming was applied to the investment problem of fund allocation among shopping malls (Khorramshahgol & Okoruwa, 1994), and multi-attribute utility theory was used to assess sovereign risk (Tang & Espinal, 1989).
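For illustration, a minimal additive multi-attribute utility model might score and rank candidate stocks as follows. The assets, criteria, and weights below are hypothetical, not from the article; this is a sketch of the general idea, not a full utility-theoretic treatment.

```python
# Minimal additive multi-attribute utility sketch (hypothetical data).
# Each asset is scored on several criteria; raw values are normalized to
# [0, 1] and combined with investor-supplied weights.

def normalize(values, higher_is_better=True):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.5] * len(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if higher_is_better else [1 - s for s in scaled]

def additive_utility(assets, criteria, weights):
    """assets: {name: {criterion: raw value}}; weights sum to 1."""
    names = list(assets)
    # Normalize each criterion across the assets.
    norm = {}
    for crit, better_high in criteria.items():
        col = [assets[n][crit] for n in names]
        for n, s in zip(names, normalize(col, better_high)):
            norm.setdefault(n, {})[crit] = s
    # A weighted sum gives each asset's overall utility.
    return {n: sum(weights[c] * norm[n][c] for c in criteria) for n in names}

assets = {
    "StockA": {"return": 0.12, "risk": 0.30, "liquidity": 0.9},
    "StockB": {"return": 0.08, "risk": 0.15, "liquidity": 0.7},
    "StockC": {"return": 0.10, "risk": 0.22, "liquidity": 0.8},
}
criteria = {"return": True, "risk": False, "liquidity": True}  # risk: lower is better
weights = {"return": 0.5, "risk": 0.3, "liquidity": 0.2}

scores = additive_utility(assets, criteria, weights)
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # ['StockA', 'StockC', 'StockB']
```

Here the multiple criteria are reconciled through explicit weights, which is precisely where the conflicts among criteria noted earlier become visible to the investor.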

Decision Making Support System Architecture

Decision making support systems (DMSS), regardless of their specific form, generally have the architecture shown in Figure 1. As the figure shows, DMSS inputs include a database of pertinent decision data, a knowledge base of decision knowledge, and a model base that contains a model of the problem and an appropriate solution method. Knowledge may be represented as production rules, semantic networks, frames, or in some other way.

The decision maker utilizes computer technology to access the various input bases and execute the processing tasks of organizing problem elements, structuring the problem, simulating policies and events, and finding the best problem solution. This decision-maker-controlled processing usually is called the dialog management system and is the mechanism that makes a DMSS interactive. Processing generates status reports on the problem elements, forecasts of inputs and outputs, recommended decision actions and strategies, and explanations for the recommendations.

Processing may be assisted through artificial intelligence methodologies. For example, expert system and case-based reasoning functionality can help decision makers access data and models, infer relationships, and interpret outputs (Doukidis, 1988). Machine learning can be used to generate forecasts of problem elements, and natural language and vision processing can facilitate dialog management. Feedback loops from outputs to the decision maker and from processing to inputs indicate that DMSS processing is dynamic and continuous in nature. Outputs may suggest further decision maker processing, and processing may create new or additional data, knowledge, models, or solution methods relevant for future processing.

A specific DMSS may have all or parts of this general architecture. For example, an IDMSS would have a knowledge base, but a DSS would not. Moreover, the content of the data, knowledge, and model bases and the processing tasks may differ from one DMSS to another. For instance, an EIS model base may contain data mining and statistical models, while a DSS model base may require an economic, accounting, or management science model. Similarly, an EIS may not find a best problem solution, while optimization may be the dominant task in a DSS.
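The general architecture just described can be sketched in skeletal form. The component and method names below are illustrative assumptions, not from the article; the sketch only shows how the input bases, dialog-managed processing, and output feedback loop fit together.

```python
# Skeletal sketch of the general DMSS architecture described above
# (component names are illustrative, not from the article).

from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class DMSS:
    data_base: Dict[str, Any] = field(default_factory=dict)        # pertinent decision data
    knowledge_base: List[str] = field(default_factory=list)        # e.g., production rules
    model_base: Dict[str, Callable] = field(default_factory=dict)  # models + solution methods
    outputs: List[str] = field(default_factory=list)

    def process(self, task: str, model_name: str) -> Any:
        """Dialog management: the decision maker picks a task and a model."""
        model = self.model_base[model_name]
        result = model(self.data_base)
        # Output feedback: results become inputs for further processing.
        self.outputs.append(f"{task}: {result}")
        self.data_base[f"last_{task}"] = result
        return result

# Usage: a toy 'trend' model that averages observed prices.
dmss = DMSS(data_base={"prices": [10.0, 11.0, 12.0]})
dmss.model_base["trend"] = lambda db: sum(db["prices"]) / len(db["prices"])
print(dmss.process("forecast", "trend"))  # 11.0
```

The feedback loop is the key design point: each processing result is written back into the data base, so subsequent tasks can build on earlier ones, as in Figure 1.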

Figure 1. General decision making support system architecture
[Figure: inputs (a data base of pertinent decision data, a knowledge base of decision knowledge, and a model base holding a decision model and solution method) flow through processing tasks (organize problem elements, structure the decision problem, simulate policies and events, determine the best solution) to outputs (status reports, input and outcome forecasts, a recommended decision, and outcome explanations and advice), under decision maker control via computer technology, with input and output feedback loops.]

DMSS in Finance

A number of decision making support systems have been applied to finance. An example of an expert system for finance is the FINEVA system (Matsatsinis et al., 1997). FINEVA has a knowledge base for assessing the financial viability of companies. It has hundreds of rules, which operate on financial ratios taken from financial statements as input. The system also interacts with the user to collect qualitative data about the management of the companies being assessed. The financial ratios considered are those that experts have said they typically use (and correspond generally to the ratios the textbooks say are relevant), such as the quick ratio. The rules, like the ratios, are grouped into categories of profitability, solvency, managerial performance, and qualitative criteria. The rules connect with each other in a hierarchy. For instance, some rules look at debt ratios and other rules look at liquidity ratios; the rules for debt and liquidity ratios are combined at a higher level into rules about solvency.

Recent applications of machine learning to finance reveal a host of integrated techniques, modeled in various ways. The following studies illustrate the mixture of machine learning techniques with representational issues and diverse approaches to financial problem solving:

• Lam and Ho (2001) used knowledge about stock valuation to support natural language processing of financial news articles. Their system learned through experience and was intended to identify data for an asset valuation system.
• Chen and Chen (2006) semi-manually took rules from experts and other sources, and then their genetic algorithm integrated those rules to predict values of a stock market index.
• Dhar and Chou (2001) prepared rule templates for their genetic algorithm that were specific to knowledge of the domain. For instance, their system might have been initiated with a rule such as "if the ER index is greater than the Pth percentile, and the N day industry trend exceeds S standard deviations, then the company will deliver an earnings surprise of type T." The genetic algorithm was expected to find useful values for P, N, S, and T.
• Lajbcygier (2004) incorporated the Black-Scholes options pricing model into the representation of his hybrid neural network.
• In the buying and selling of currencies, a natural symmetry exists that Bhattacharyya et al. (2002) exploited to constrain the combinations of logical and numerical operators that their genetic program generated.
• Tsakonas et al. (2006) used neural logic networks that they argued were intrinsically suited for finance, and they constrained their genetic programming method to produce only syntactically correct neural logic nets.
• Dempster and Leemans (2006) added a knowledge-based financial portfolio management layer to their neural network price predictor.
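The rule-template approach of Dhar and Chou can be imitated with a toy genetic algorithm. The data set, the template, and the hidden "true" thresholds below are entirely synthetic, invented for illustration; only the numeric slots of the template (here P and S) are evolved, as in their design.

```python
# Toy genetic algorithm filling in the numeric slots of a domain rule
# template, in the spirit of Dhar and Chou's approach (data and template
# are synthetic, purely for illustration).

import random

random.seed(42)

# Synthetic companies: (er_percentile, industry_trend_in_sd, had_surprise).
# The hidden "true" rule: surprise iff er_percentile > 0.7 and trend > 1.5.
data = [(random.random(), random.uniform(-3, 3)) for _ in range(200)]
labeled = [(p, t, p > 0.7 and t > 1.5) for p, t in data]

def fitness(chrom):
    """Fraction of companies the instantiated rule classifies correctly."""
    P, S = chrom
    return sum((p > P and t > S) == y for p, t, y in labeled) / len(labeled)

def mutate(chrom, scale=0.1):
    P, S = chrom
    return (min(1, max(0, P + random.gauss(0, scale))), S + random.gauss(0, scale))

# Simple elitist evolution over the template parameters.
population = [(random.random(), random.uniform(-3, 3)) for _ in range(30)]
for _ in range(60):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]

best = max(population, key=fitness)
print(f"best rule: er > {best[0]:.2f} and trend > {best[1]:.2f}, "
      f"accuracy = {fitness(best):.2f}")
```

The domain knowledge lives in the template's structure; the learning algorithm only searches the numeric slots, which is exactly the division of labor the studies above exploit.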

Substantial work has been done on computer systems that evaluate assets, and machine learning methods have been employed to fine-tune the numerical parameters of asset valuation programs. However, little work has addressed representing financial knowledge in such a way that a machine learning program can make meaningful changes in an asset valuation program beyond the level of numeric parameters. One of the few such papers (Bhattacharyya, Pictet, & Zumbach, 2002) showed how symmetry in financial markets could be exploited by a learning program.

Incomplete and Fragmented Support

Each of the reported decision making support systems, and thereby the delivered decision technology, focuses on a specific aspect of financial investing. System inputs, processing, and outputs are limited to this aspect. For example, a DMSS designed to support asset valuation will have a database of the pertinent asset information, an appropriate asset evaluation model and solution methodology, and perhaps some asset valuation knowledge. Processing will structure the evaluation model, simulate asset values, and generate forecasts of the asset values.

To support the entire financial decision making process, then, a more complete DMSS is needed than has been offered in the literature or known practice. One possibility is to deploy several decision making support systems for financial investing. Such a fragmented approach, however, would require linkages between disparate systems that may have different data formats, various modeling approaches, and different computer software and hardware platforms. A fragmented approach also would negate the potential synergistic effects achievable from integrated decision making support systems (Forgionne, 2000), would involve more time and cost for design, development, and implementation than the integrated approach, and would be less user friendly than one integrated and complete system, like the architecture offered in Figure 1.

There is also a practical difficulty with incomplete and fragmented DMSS for finance. Most investors and practicing financial advisors will be unaware of the available decision making support or will lack proficiency in DMSS usage. When multiple support tools are available, the practitioner's lack of awareness and knowledge will necessitate education and training across several support tools and require personnel to provide such education. These impediments could deter practitioners from seeking the desired support and/or lead to inappropriate financial decisions.

The potential technical, economic, and practical problems can be alleviated, or even eliminated, by instantiating Figure 1's architecture for financial investing. Such an instantiation can provide an effective and efficient decision making support system for financial investing.

DMSS for Financial Investing

Much data are potentially useful for financial investment analysis. Such data include:

• qualitative information, such as the quality of management and a firm's reputation,
• stock market data, such as stock price and dividend yield,
• macroeconomic data (interest rates, exchange rates, inflation rates), and
• financial statements (balance sheets, income statements).

In addition, various models could assist a financial decision maker. Existing financial models include:

• financial accounting statement frameworks,
• economically based financial and stock market ratios,
• portfolio theory models, and
• capital asset management models.

Existing statistical methods include descriptive methodologies, such as correlation, and inferential methods, such as principal components analysis. In addition, decision analysis can provide multiple criteria methods for implementing the financial theories, both for evaluating stocks and for composing a portfolio. For instance, while the financial statement models speak to the features of attractive stocks, decision analysis can be used to identify a desirable set of stocks.

In any given circumstance, however, only a small portion of the available data will be pertinent to a specific investor's needs. Moreover, some statistical, economic, or accounting analysis may be needed to synthesize the available data and focus the results on the investor's needs. Similarly, the investor's situation may not require all available financial models and supporting decision technologies. As with the data, then, model filtering and focusing may be necessary.

Warehousing and Marting

To support the range of needs likely to be encountered in financial investing, it would be useful to create warehouses of potentially relevant financial data and models. The financial data warehouse would draw from available external financial data sets, such as those available from Standard & Poor's (www.standardandpoors.com) and Bloomberg (www.bloomberg.com), and from internal sources, such as an organization's accounting ledgers, to form a repository of potentially relevant data for financial analysis. Similarly, the financial model warehouse would serve as a repository for reported theoretical financial analysis methodologies and methods and for reported practice-based investment heuristics. Such data and model warehousing would become a primary activity in financial engineering, much as data warehousing has become common in the information systems operations of modern organizations (Mannino et al., 2008). Internal accounting transactions captured by information systems data warehouses also can, and often do, provide pertinent data for financial analyses (Khodadadi et al., 2006).

To meet the investment needs of a specific investor, pertinent data could be viewed in the financial data warehouse, and relevant models could be viewed in the financial model warehouse. Relevance could be defined interactively by the investor, predefined from the investor's financial profile and implemented through artificial intelligence, or determined dynamically by artificial intelligence adjusted interactively by the investor. The views would form virtual data and model marts for the investor. Once the views are accepted by the investor, the accepted data could be extracted from the financial data warehouse to form the investing DMSS database. Similarly, accepted model views could be extracted from the financial model warehouse to form the investing DMSS model base. Captured views and extractions form the pertinent problem knowledge for the investing DMSS knowledge base.

Figure 2 illustrates the financial warehousing and marting process for specific financial investment support. As this figure demonstrates, the investor-controlled data and model marting process ensures that the investing DMSS data, model, and knowledge bases are populated only with filtered and focused information directly relevant to the specific investor's financial interests and needs. Such filtering and focusing also means that the DMSS bases are likely to be small in volume but dynamic in nature and reflective of the investor's changing preferences as new knowledge is gained. Figure 2 also makes clear that the investor, either directly or through a professional financial advisor, controls the financial marting process, either unassisted or with the aid of artificial intelligence.
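A minimal sketch of investor-controlled marting follows, assuming a toy warehouse of rows and a simple investor profile. All field names, tickers, and thresholds are hypothetical, invented only to show the filter-and-extract step.

```python
# Sketch of investor-controlled "marting": filtering a toy warehouse
# into a data mart using an investor profile (all names are illustrative).

warehouse = [
    {"ticker": "AAA", "asset_class": "stock", "sector": "energy", "pe": 12.0},
    {"ticker": "BBB", "asset_class": "stock", "sector": "tech",   "pe": 35.0},
    {"ticker": "CCC", "asset_class": "bond",  "sector": "gov",    "pe": None},
    {"ticker": "DDD", "asset_class": "stock", "sector": "tech",   "pe": 18.0},
]

profile = {"asset_classes": {"stock"}, "sectors": {"tech"}, "max_pe": 30.0}

def form_mart(warehouse, profile):
    """Materialize the investor's view of the warehouse as a data mart."""
    return [
        row for row in warehouse
        if row["asset_class"] in profile["asset_classes"]
        and row["sector"] in profile["sectors"]
        and row["pe"] is not None and row["pe"] <= profile["max_pe"]
    ]

mart = form_mart(warehouse, profile)
print([row["ticker"] for row in mart])  # ['DDD']
```

In the proposed architecture the profile itself could come from the dialog session or be refined by artificial intelligence; the essential point is that the mart holds only investor-relevant rows.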

DMSS Architecture and Operations

In a typical scenario, the investor engages in a financial decision making process. Initially, the investor analyzes securities and identifies the subset of securities deemed worthy of investment. This analysis and identification corresponds to Simon's intelligence phase of decision making. Next, the investor develops a formal or informal evaluation model, consisting of financial performance measures, alternative investments, economic and other events, and the relationships among these factors. Such modeling constitutes Simon's design phase of decision making. Then, the investor utilizes the designed investment model to determine the amount invested in the identified securities. This selection corresponds to Simon's choice phase of decision making. While financial decision making can be very dynamic in nature, the process is iterative; still, design cannot occur without intelligence, and choice cannot precede design.

Figure 2. Financial warehousing and marting relationship to DMSS for investing
[Figure: the financial data warehouse feeds a financial data mart, and the financial model warehouse feeds a financial model mart; both marts feed the DMSS for financial investing, under investor control.]


Figure 3. DMSS for investing architecture
[Figure: inputs (a data base of pertinent investment data, a knowledge base of investor profile knowledge, and a model base of investment models and solution methods) flow through processing tasks (organize investment elements, structure the investment problem, simulate policies and events, determine the best investment) to outputs (financial status reports, input and outcome forecasts, a recommended investment, and outcome explanations and advice), under investor control via computer technology, with input and output feedback loops.]

The decision making support system for investing, shown architecturally in Figure 3, can support the financial decision making process in a complete and integrated manner. As this figure shows, the DMSS for investing has a database that consists of the financial information gleaned from the investor-controlled data marting illustrated in Figure 2. This marting also creates the investor profile that, along with any other previously captured investor-relevant facts and factual relationships, populates the DMSS knowledge base. Similarly, Figure 2's financial model marting generates the investor-relevant models and model solution methods captured in the DMSS model base.

The investor uses workstation-based computer technology and custom software to control the DMSS processing. Initially, the system offers a list of securities currently available for investment, obtained from the financial data warehouse as part of the financial data marting illustrated in Figure 2. This listing supports the investor's intelligence phase of decision making. As the top feedback loop in Figure 3 demonstrates, the listing will be dynamic and recursive in nature, with the new data and knowledge captured in the system's databases and knowledge bases for further processing.

After obtaining the investor's desired list, the DMSS prompts the investor for the desired performance measures. These measures, the pertinent uncontrollable inputs (such as economic indicators and organizational qualitative information) captured from the financial data marting, and the investment models captured during financial model marting will be used by the DMSS to structure the investment problem. This structure will explicitly and precisely establish the relationships between the investor-specified performance measures, the listed securities, and the uncontrollable factors. During the structuring, pertinent parameters from the system's database, perhaps further estimated through statistical and other analyses, will be attached to the investor's specified model by the DMSS. The structuring, which supports the design phase of decision making, is likely to be assisted by artificial intelligence methods stored in the system's model base. As with the organizing of financial information, the structuring will be dynamic and recursive in nature, with the new knowledge and models captured in the system's knowledge and model bases for further processing.

The DMSS will select from the model base, perhaps with artificial intelligence assistance, the solution methodology best suited to the investor's specified model. Depending on the model and the selected methodology, the system will either simulate the outcomes from selected securities under specified financial conditions or recommend the portfolio that best meets the investor's stated performance measures. Since the results are generated from an explicit and precise model of the investment problem, the DMSS can provide a detailed explanation for its recommendations and advice by tracing the logic in the model's equations or other relationships. As indicated by the bottom feedback loop in Figure 3, the forecasts from the simulations and the recommendations from the model optimization will be generated in a dynamic and recursive manner conducive to confidence building on the part of the investor. The simulations and/or optimizations support the choice phase of decision making.
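The parameter-attachment step during structuring might look like the following least-squares sketch. The data and the alpha/beta model are invented for illustration; the point is only that statistically estimated parameters get attached to the investor's model.

```python
# Sketch of the structuring step: estimating a model parameter from the
# DMSS data base by ordinary least squares and attaching it to the
# investor's model (synthetic data; names are illustrative).

# Toy data: a security's returns vs. a market index (an "uncontrollable
# input"), generated with a true slope of about 1.2 plus small noise.
market = [0.01, -0.02, 0.015, 0.03, -0.01, 0.02]
noise = [0.001, -0.002, 0.0, 0.002, -0.001, 0.001]
security = [1.2 * m + e for m, e in zip(market, noise)]

def ols_slope_intercept(x, y):
    """Least-squares fit of y = a + b * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

alpha, beta = ols_slope_intercept(market, security)

# Attach the estimated parameters to the investor's return model.
investor_model = {"expected_return": lambda m: alpha + beta * m}
print(round(beta, 2))
```

In the proposed DMSS this fit would run automatically over marted data, with the resulting model stored back into the model base for the choice-phase analyses.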

Theoretical Methodology

The proposed decision and information technology and the decision making support architecture have been developed and applied successfully to a closely related problem (Forgionne, 1997). In the approach, investor needs will be identified from a dialog session between the investor and the prototype system. Predefined production rules will map the selections to appropriate captured financial model components. The research also will explore the use of neural networks to refine the predefined production rules and the potential use of fuzzy logic as a knowledge representation scheme alternative to production rules for the mapping process.

Intelligent agents, with encapsulated statistical, management science, accounting, and economic knowledge, will join the relevant captured model components to form a model specific to the investor's needs. Other intelligent agents will search for the data pertinent to the investor's model in the captured data warehouses and data marts, use appropriate statistical methods on the gleaned data to estimate the model's parameters, attach the estimated parameters to the investor's specific model, perform an appropriate model analysis, and report recommendations to the investor. The investor then can either accept the recommendations or request additional analysis through pertinent dialog with the prototype system. Further requests dynamically trigger model modifications, analyses, and new recommendations.

The successful operation of the theoretical DMSS for investing clearly depends on strong supporting financial data and model warehouses. This link suggests that the methodology for developing the theoretical DMSS should follow the process outlined in Figure 4. Professional organizations already gather much financial data, and the academic and practitioner literature reports many financial models. Moreover, there are several financial data warehouses in professional investment firms and other organizations, some of which may be available to investors (Hwang et al., 2004). Still, the available warehouses may be incomplete and fragmented. Model warehouses are rare and usually not available to investors (Belov et al., 2006).
Financial data and model marting are underdeveloped areas and will be specific to the investor. The DMSS for investing, of course, is just a theoretical concept at this point. There are many information and decision technologies available and in use that could be adapted to these tasks (Chen, 2006; Chen & Wang, 2004).

Figure 4. DMSS for investing development process
[Figure: gather financial data, build the financial data warehouse, and form the data mart; in parallel, gather financial models, build the financial model warehouse, and form the model mart; both marts feed the building of the DMSS for investing.]
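The predefined production rules of the methodology, mapping investor needs from the dialog session to captured financial model components, could be sketched as follows. The rule conditions and component names are invented for illustration, not taken from the prototype.

```python
# Hypothetical sketch of the predefined production rules that map
# investor needs (from the dialog session) to captured financial model
# components; all rule contents are invented for illustration.

def map_needs_to_models(needs):
    """Apply simple production rules: condition -> model component."""
    rules = [
        (lambda n: n["goal"] == "income",    "dividend discount model"),
        (lambda n: n["goal"] == "growth",    "financial statement forecasting"),
        (lambda n: n["diversify"],           "mean-variance portfolio optimization"),
        (lambda n: n["horizon_years"] >= 10, "multi-stage stochastic program"),
    ]
    return [component for condition, component in rules if condition(needs)]

needs = {"goal": "income", "diversify": True, "horizon_years": 5}
print(map_needs_to_models(needs))
# ['dividend discount model', 'mean-variance portfolio optimization']
```

As the methodology notes, such rules could later be refined by neural networks or replaced by a fuzzy-logic representation without changing the overall mapping role they play.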

Illustrative Example

The implementation of the DMSS for investing concept will be specific to an investor with a particular financing problem. The concept, however, can be illustrated with the aid of an ongoing research project. In this project, data, models, and knowledge (as suggested in Figure 3) have been identified, and preliminary experiments are being conducted with a prototype IDMSS called the Intelligent Investing System (IIS).

IIS ultimately would include data from various asset classes, such as stocks, bonds, commodities, and real estate, as well as macroeconomic information. The first prototype considers only stocks and relies on the licensed Compustat data set from Standard & Poor's. Macroeconomic data sets have been identified on the US Department of the Treasury site.

For the model base, the program attempts to collect, formalize, and categorize relevant financial models that look across entities and across time at one entity. For across-entity analysis, the thresholds and rules applied to financial ratios by Matsatsinis et al. (1997) have been implemented. Those thresholds and rules are used to identify company financial viability based on the latest financial ratios. For modeling financial statements across time, IIS includes an implementation of the forecasting model presented by Sengupta (2004). The forecasts cover not only financial statement items, such as sales and long-term debt, but also stock data, such as stock price and dividends. With the dividend forecasts, the IIS can reach into the model base, retrieve the dividend discount model, and further value a stock based on the forecast dividends.

For portfolio management, IIS includes various implementations of the optimally efficient portfolio. Particularly useful has been the method delineated initially by Merton (1972) as implemented in a spreadsheet by Holden (2005). IIS also includes an implementation of the optimization technique presented by Sharpe (2007).
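A dividend discount valuation of the kind the IIS model base would supply can be sketched as a two-stage model. The dividends, terminal growth rate, and discount rate below are hypothetical, chosen only to make the arithmetic concrete.

```python
# Sketch of a dividend discount valuation of the kind the IIS model base
# could supply (two-stage DDM; numbers and growth assumptions invented).

def ddm_value(forecast_dividends, terminal_growth, discount_rate):
    """Present value of the forecast dividends plus a Gordon-growth
    terminal value at the end of the explicit forecast horizon."""
    assert discount_rate > terminal_growth
    pv = sum(d / (1 + discount_rate) ** t
             for t, d in enumerate(forecast_dividends, start=1))
    # Terminal value one period after the last explicit forecast.
    last = forecast_dividends[-1]
    terminal = last * (1 + terminal_growth) / (discount_rate - terminal_growth)
    pv += terminal / (1 + discount_rate) ** len(forecast_dividends)
    return pv

# Three years of forecast dividends, then 2% growth, 8% required return.
value = ddm_value([2.00, 2.10, 2.20], terminal_growth=0.02, discount_rate=0.08)
print(round(value, 2))  # 35.09
```

In the IIS workflow the explicit dividend forecasts would come from the financial statement forecasting model, and the resulting value would feed the portfolio-management step.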


The single index approach to computation of the covariance matrix is also available to IIS. The knowledge base provides guides for problem solving. For instance, when thousands of stocks are being considered, full computation of the covariance matrix is not practical because the covariance must be laboriously computed for every pair of companies. Under the simplifying assumption that stocks co-move only through a common index, the amount of computation can be reduced from being proportional to the square of the number of companies to being linear in the number of companies. Accordingly, when the portfolio weights for more than one hundred stocks are to be computed, the portfolio optimization routine might use the index model.

More innovative examples of the knowledge base at work concern modifications to the forecasting based on financial statements. The knowledge base knows which aspects of the model of financial statements might be refined in which ways. The financial statement models have different types of components; for instance:

• Some relationships in the model are true by definition and entirely enclosed within the financial statement, such as "gross operating income = sales - cost of goods sold."
• Other relationships in the model may be elaborated. For instance, interest expense approximately equals (short-term interest rate * short-term debt) + (long-term interest rate * long-term debt). Independent forecasts of interest rates can be obtained and connected to the financial statement forecasts. Furthermore, the long-term component is not strictly a single long-term interest rate times a single long-term debt quantity but rather a complex function of the portfolio of company borrowings, which also can be further investigated by IIS.
• Other relationships in the financial model may or may not apply to a particular company. For instance, some companies manipulate debt and equity to achieve certain target proportions. If a company has such a goal, what its target proportions are is not fixed across companies.

IIS formalizes these relationships so that the system can manipulate the model and test the impact on forecasting accuracy. The knowledge base includes both financial-type data, such as the industry classification of a company, and computational knowledge. The preceding description of the role of the index model in supporting fast computation of portfolio weights illustrates computational knowledge. A deeper example concerns knowledge about the various functions that can be used to fit data. For instance, the financial statement forecasting will experiment with linear, polynomial, exponential, and sinusoidal functions to see which best captures certain time series in the financial statements.

The reasoning may be seen as occurring in the processing phase of the IIS (recall that Figure 3 shows an input phase, a processing phase, and an output phase). The user presents to IIS the user's investment constraints, including the amount of money to invest, the amount of risk to take, and the types of asset classes preferred. The IIS then operates with this information and the data, models, and knowledge. In the processing phase, the IIS engages in extensive back-testing of its tentative recommendations. Compustat provides several decades of data, which can be exploited to test hypotheses about assets relative to past performance. Given that the appropriate models to apply to a company's financial statements will depend, among other things, on the industry classification of the company, the IIS classifies a financial statement by the industry classification of the company and looks for similarities in models based on industry sector. Compustat classifies companies into the Global Industry Classification Standard (GICS), which consists of 10 sectors, 24 industry groups, 62 industries, and 132 sub-industries. Reasoning is performed on the GICS structure to determine the appropriate level of generalization, depending on the problem at hand.
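The single-index shortcut described above can be sketched as follows. The betas and variances are toy numbers; the construction cov(i, j) = beta_i * beta_j * var(index), plus idiosyncratic variance on the diagonal, is the standard single-index result, and only the n betas need estimating rather than all n*(n-1)/2 pairwise covariances.

```python
# Sketch of the single-index simplification: covariances are derived
# from per-stock betas against one common index, so the estimation work
# grows linearly in the number of stocks instead of quadratically.

# Per-stock beta against the index and idiosyncratic variance (toy numbers).
betas = [0.8, 1.0, 1.2]
idio_var = [0.002, 0.003, 0.0025]
index_var = 0.0004  # variance of the index itself

def single_index_cov(betas, idio_var, index_var):
    """cov(i, j) = beta_i * beta_j * var(index), with idiosyncratic
    variance added on the diagonal."""
    n = len(betas)
    return [
        [
            betas[i] * betas[j] * index_var + (idio_var[i] if i == j else 0.0)
            for j in range(n)
        ]
        for i in range(n)
    ]

cov = single_index_cov(betas, idio_var, index_var)
print(cov[0][1])  # beta_0 * beta_1 * index_var
```

This is exactly the computational knowledge the IIS knowledge base encodes: when the stock universe is large, the optimizer is directed to this construction instead of the full pairwise covariance computation.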


The companies likely to produce the greatest surprise to investors are those which least fit any of the standard models. IIS would continually refine its modeling and backtest, and a company whose data resisted being modeled would be further investigated.

The IIS also uses machine learning, particularly evolutionary computation. In IIS, the genetic algorithm has been used to experiment with the role of knowledge in financial asset evaluation, as suggested by Bhattacharyya et al. (2002), and neural networks have been used to refine criteria, as suggested by Tsakonas et al. (2006).

The IIS prototype has been developed in Excel 2007 with Visual Basic for Applications (VBA). The VBA code inserts commands into the worksheets, which in turn invoke Compustat and download financial data. The VBA modules also enter formulas into the worksheets that indicate the relations among the data and ultimately are responsible for the asset valuation and portfolio management. Excel was chosen for the prototype for several reasons:

• many financial investors use Excel,
• existing financial models have often been implemented in Excel and thus can be imported into IIS,
• Excel has a large library of financial functions embedded within it, and
• output in Excel is easily made visually appealing, and the output can simultaneously include the numbers and the formulas that computed them (users can elect to immediately see the computation behind a number).

Future prototypes might connect other tools to Excel.

Conclusion

Financial investing is a very important problem that has a significant impact on individuals, groups, organizations, the economy, and society. While much wisdom and knowledge has been offered to guide investments, the support has been incomplete and fragmentary. This article proposes the DMSS for investing as a tool to improve support and suggests a methodology to develop the tool. At this point the proposed system is conceptual and requires further system development. There is a need to: (a) establish a development plan and methodology, (b) identify software to implement the concept, (c) use the software to develop the system and implement a prototype, (d) use the prototype to test the efficiency and efficacy of the plan, and (e) establish an implementation plan.

The concept also establishes a theory that the DMSS for investing can improve financial decision making by offering complete and integrated support and guidance for the process. This theory suggests the following research question and hypotheses:

• Research Question: Can the DMSS for investing improve the process of, and outcomes from, financial investing?
• Null Hypothesis: The DMSS cannot improve the process of, and outcomes from, financial investing.
• Alternative Hypothesis: The DMSS can improve the process of, and outcomes from, financial investing.

The question can be answered by designing studies that compare the process results and outcomes from DMSS for investing usage with the corresponding results and outcomes from existing forms of human and system guidance. There are a number of process and outcome measures from the literature that can be used in these studies (Phillips-Wren et al., 2006). The studies can be executed through experimental approaches or by other means.
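One way such a comparative study could be analyzed is sketched below. The Python sketch is illustrative only: the outcome scores and group sizes are invented, and Welch's t statistic is one standard choice for comparing independent groups, not a method prescribed by the article.

```python
import math
from statistics import mean, stdev

def welch_t(sample_a, sample_b):
    """Welch's t statistic for comparing two independent sample means
    (does not assume equal variances)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = stdev(sample_a) ** 2, stdev(sample_b) ** 2
    return (mean(sample_a) - mean(sample_b)) / math.sqrt(va / na + vb / nb)

# Hypothetical outcome measures (e.g., portfolio return over a test period)
# for decision makers using the DMSS versus an unaided control group.
dmss_group = [8.2, 7.9, 8.5, 8.1, 8.4, 7.8, 8.3, 8.0]
control_group = [7.1, 7.4, 6.9, 7.3, 7.0, 7.2, 7.5, 6.8]
t = welch_t(dmss_group, control_group)   # a large |t| favors rejecting the null
```

A full study would compare the statistic against the t distribution with the appropriate degrees of freedom, and would apply the same test to each process and outcome measure under evaluation.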

References

Belov, I., Kabasinskas, A., & Sakalauskas, L. (2006). A study of stable models of stock markets. Information Technology and Control, 35(1), 34-56.



Berry, B. (2004). Editorial. Intelligent Systems in Accounting, Finance, and Management, 12(1), 1-4.

Bhattacharyya, S., Pictet, O. V., & Zumbach, G. (2002). Knowledge-intensive genetic discovery in foreign exchange markets. IEEE Transactions on Evolutionary Computation, 6(2), 169-181.

Chen, A. P., & Chen, M. Y. (2006). Integrating extended classifier system and knowledge extraction model for financial investment prediction: An empirical study. Expert Systems with Applications, 31(1), 174-183.

Chen, S.-H. (Ed.). (2006). Evolutionary computation in economics and finance. Heidelberg: Springer Verlag.

Hwang, H.-G., Ku, C.-Y., Yen, D. C., & Cheng, C.-C. (2004). Critical factors influencing the adoption of data warehouse technology: A study of the banking industry in Taiwan. Decision Support Systems, 37(1), 1-21.

Khodadadi, A., Tütüncü, R. H., & Zangari, P. J. (2006). Optimisation and quantitative investment management. Journal of Asset Management, 7(2), 83-92.

Khorramshahgol, R., & Okoruwa, A. A. (1994). A goal programming approach to investment decisions: A case study of fund allocation among different shopping malls. European Journal of Operational Research, 73, 17-22.

Chen, S.-H., & Wang, P. P. (Eds.). (2004). Computational intelligence in economics and finance. Berlin: Springer.

Lajbcygier, P. (2004). Improving option pricing with the product constrained hybrid neural network. IEEE Transactions on Neural Networks, 15(2), 465-476.

Dempster, M. A. H., & Leemans, V. (2006). An automated FX trading system using adaptive reinforcement learning. Expert Systems with Applications, 30(3), 543-552.

Lam, W., & Ho, K. S. (2001). FIDS: An intelligent financial web news articles digest system. IEEE Transactions on Systems, Man, and Cybernetics, Part A, 31(6), 753-762.

Dhar, V., & Chou, D. (2001). A comparison of nonlinear methods for predicting earnings surprises and returns. IEEE Transactions on Neural Networks, 12(4), 907-921.

Mannino, M., Hong, S. N., & Choi, I. J. (2008). Efficiency evaluation of data warehouse operations. Decision Support Systems, 44(4), 883-898.

Doukidis, G. I. (1988). Decision support system concepts in expert systems: An empirical study. Decision Support Systems, 4, 345-354.

Matsatsinis, N. F., Doumpos, M., & Zopounidis, C. (1997). Knowledge acquisition and representation for expert systems in the field of financial analysis. Expert Systems with Applications, 12(2), 247-262.

Forgionne, G. A. (1997). HADTS: A decision technology system to support Army housing management. European Journal of Operational Research, 97(2), 363-379.

Merton, R. (1972). An analytic derivation of the efficient portfolio frontier. Journal of Financial and Quantitative Analysis, 1851-1872.

Forgionne, G. (2000). Decision-making support systems effectiveness: The process to outcome link. Information Knowledge-Systems Management, 2, 169-188.

Merton, R. C. (1995). Influence of mathematical models in finance on practice: Past, present, and future. In S. D. Howison, F. P. Kelly, & P. Wilmott (Eds.), Mathematical models in finance (pp. 1-13). London: Chapman & Hall.

Forgionne, G., Mora, M., Gupta, J., & Gelman, O. (2005). Decision-making support systems. In Encyclopedia of information science and technology (pp. 759-765). Hershey, PA: Idea Group.

Mulvey, J. M., Rosenbaum, D. P., & Shetty, B. (1997). Strategic financial risk management and operations research. European Journal of Operational Research, 97, 1-16.

Gupta, J., Forgionne, G., & Mora, M. (Eds.). (2006). Intelligent decision-making support systems: Foundations, applications and challenges. Springer.

Phillips-Wren, G., Mora, M., Forgionne, G., Garrido, L., & Gupta, J. N. D. (2006). Multi-criteria evaluation of intelligent decision making support systems. In J. N. D. Gupta, G. Forgionne & M. Mora (Eds.), Intelligent decision-making support systems

Holden, C. W. (2005). Excel modeling in investments. Upper Saddle River, NJ: Pearson Prentice Hall.

Copyright © 2009, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

(i-dmss): Foundations, applications and challenges (pp. 3-24).

Tang, J. C. S., & Espinal, C. G. (1989). A model to assess country risk. Omega, 17(4), 363-367.

Rada, R. (2008). Expert systems and evolutionary computing for financial investing: A review. Expert Systems with Applications, 34(4), 2232-2240.

Tsakonas, A., Dounias, G., Doumpos, M., & Zopounidis, C. (2006). Bankruptcy prediction with neural logic networks by means of grammar-guided genetic programming. Expert Systems with Applications, 30(3), 449-461.

Roy, B. (1996). Multicriteria methodology for decision aiding. Dordrecht: Kluwer Academic Publishers.

Sengupta, C. (2004). Financial modeling using Excel and VBA. Hoboken, NJ: Wiley.

Sharpe, W. (2007). Macro investment analysis. Retrieved February 14, 2008, from http://www.stanford.edu/~wfsharpe/mia/mia.htm

Vellido, A., Lisboa, P. J. G., & Vaughan, J. (1999). Neural networks in business: A survey of applications (1992-1998). Expert Systems with Applications, 17(1), 51-70.

Zopounidis, C., & Doumpos, M. (2000). Intelligent decision aiding systems based on multiple criteria for financial engineering. Boston: Kluwer Academic Publishers.

Roy Rada earned a BA from Yale University in 1973, an MD from Baylor College of Medicine in 1977, and a PhD in computer science from the University of Illinois in 1980. In the mid-1980s he was editor of Index Medicus, and in the mid-1990s he was professor of computer science at the University of Liverpool. Throughout the first decade of the 21st century he has been a professor of information systems at UMBC. His work is now focused on artificial intelligence for financial investing.

Guisseppi A. Forgionne is professor of information systems at the University of Maryland Baltimore County (UMBC). He has published 26 books and approximately 200 research articles, consulted for a variety of public and private organizations, and received several national and international awards for his work.


International Journal of Decision Support System Technology, 1(1), 15-34, January-March 2009 15

Decision Support-Related Resource Presence and Availability Awareness for DSS in Pervasive Computing Environments

Stephen Russell, George Washington University, USA
Guisseppi Forgionne, University of Maryland ‒ Baltimore County, USA
Victoria Yoon, University of Maryland ‒ Baltimore County, USA

Abstract

Over the last 10 years, pervasive computing environments and mobile networks have become extremely popular. Despite the many end-user benefits of pervasive computing, the intrinsic instability and context ambiguities of these environments pose impediments to data-oriented decision support systems. In pervasive computing environments, where users, systems, and computing resources are distributed or mobile, the online or “available” state of decision support-related resources may be intermittent or delayed. Awareness or knowledge of these resources’ online presence and availability can affect the decision making process. This article discusses issues related to data-driven decision support systems (DSS) in pervasive computing environments (PCE) and demonstrates that a decision maker’s awareness of online status and availability can improve decision outcomes. A model for extending DSS resource presence and availability awareness to decision makers is presented, and the impact of this knowledge on decision outcomes is evaluated using a management problem simulation.

Keywords: decision making; decision support systems; pervasive computing environments

Introduction

Technologies such as data mining and business analytics have seen explosive growth over the last 5 years in both research and industry. At the same time, advances in 3rd-generation networks and communication have made the promise of pervasive computing environments a reality. With the convergence of pervasive computing environments and business analytics, now more



than ever, greater volumes of data are available to decision support systems and, subsequently, decision makers. One effect is the increase in data usage for both analysis and justification in decision making. In this context, the data are frequently assumed to be completely accurate and available. Moreover, it is also assumed that a decision support system will be available to assist with filtering, processing, analysis, and other decision making tasks.

Despite the advances in technology, the fundamental characteristics of business decision making have not changed. Many business decision opportunities are time limited, and the phases of Simon’s (1960) decision-making model still apply to business decision-makers. What has changed is industry’s dependence on, and related faith in, data. There is an implicit belief that data is generally correct and that decision support systems can provide correct deterministic answers to decision problems. The data are used not only as the basis to derive decision advice but also to provide supporting justification.

As evidence of this change, consider the North American business analytics market. Frost and Sullivan estimate that the enterprise analytics market generated $2.22B in revenue for 2005 and should increase at a Compound Annual Growth Rate (CAGR) of 10.8% from 2006 to 2012, reaching approximately $4.54B in 2012. Business intelligence (BI) is the largest single segment in this market, with $961.4M in revenues, which should reach $1.92B in 2012 at a CAGR of 10.4% (Frost & Sullivan, 2006).

The availability of decision support systems is integrally tied to industry’s dependence on data. Gartner estimates that wireless voice and data will continue to displace wireline services, with voice and data services growing at a 6% CAGR, making wireless the fastest-growing segment of the communications services market (Flewelling, 2007).
This demand for communication-enabling and data analysis products is clear evidence of businesses’ interest and commitment to data-driven decision support and pervasive computing.
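The projections quoted above are consistent with the standard compound-growth formula. The check below (Python) is illustrative arithmetic on the figures restated from the text, not additional market data:

```python
def compound(value, rate, years):
    """Project a value forward at a constant compound annual growth rate."""
    return value * (1.0 + rate) ** years

# Frost & Sullivan figures: $2.22B in 2005 growing at a 10.8% CAGR, and the
# $961.4M BI segment growing at a 10.4% CAGR, both projected through 2012.
analytics_2012 = compound(2.22, 0.108, 2012 - 2005)   # ~4.55 ($B)
bi_2012 = compound(0.9614, 0.104, 2012 - 2005)        # ~1.92 ($B)
```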

While data-oriented decision support and ubiquitous computing capabilities deliver many benefits to decision makers, in real-world situations the availability of decision-related resources and support systems is not guaranteed. Moreover, data can generally be considered just one decision-related resource. Other types of computing resources, such as storage or processors, must also be considered. What good is data without storage to hold it or a processor to process it?

A great deal of research has been conducted to make systems highly available in pervasive computing environments (PCE) through redundant power and communications, data distribution and caching, and dynamic storage solutions. Much of the prior research has sought to address hardware limitations and ensure that computing resources are always available. In practice, however, it is difficult, if not impossible, to ensure 100% availability. The telecommunications industry has a concept known as five nines (99.999% up time), which refers to the system reliability of a copper-based telephone network. Five nines historically characterized the level of service availability that users could expect from a provider’s communications network. With the advance of technologies such as digital subscriber lines (DSL), wireless, and packet-based voice communications, users are willing to accept increasingly lower standards of network quality (Lynch, 2002).

This acceptance of ambiguity in network and system availability conflicts with the increasing dependency on systems and data for decision making. Connectivity in pervasive computing environments is classically known to be intermittent and unstable. Intermittent and unstable communications and their underlying networks can have a significant impact on decision support systems, decision making, and decision outcomes. Moreover, time-sensitive decisions, which often characterize business decision opportunities, can be even more susceptible to issues of availability.
Extending knowledge of system and data availability to decision makers, who expect computer-aided support in these environments, may reduce



uncertainty about the level of support available. This, in turn, may improve the overall utility of decision support systems. The relevance of research on resource and system availability becomes increasingly important as the popularity and availability of pervasive computing environments intensifies. However, little attention has been paid to the effects of data and system availability on decision making and decision outcomes. This article examines how a decision maker’s knowledge of decision support-related resources’ or systems’ online status (presence awareness) and readiness (availability awareness) can improve decision outcomes.

Following this introduction, the next section provides a discussion of prior work and background on presence and availability awareness. The third section illustrates presence and availability awareness in a decision making context and presents a model that incorporates these concepts in a traditional decision making support system. The third section is followed by the details of an experiment to evaluate the effects of availability awareness on decision making in a pervasive computing environment. The fifth section presents the results of this experiment, followed by a discussion of the implications for decision support systems. The article concludes with a summary and discussion of future work.

Background and Previous Work

There is a significant amount of work on data and system availability. Much of this work focuses on hardware-related issues. For example, Imielinski and Badrinath (1994) examined the challenges in data management for mobile computing. Their research identified several issues, including bandwidth, location management, and power management. Imielinski and Badrinath’s research has been supported by more recent efforts that provide solutions in each of these areas. Recent research has seen improvements in mobile power management

that claim upwards of 35% improvement in battery lifetimes (Chakraborty, Yau, Lui, & Dong, 2006; Rahmati & Zhong, 2007), and redundant power has become a standard for data centers and servers (Bartlett & Spainhower, 2004). The concept of redundancy has also been applied to data storage (Muthian, Vijayan, Andrea, & Remzi, 2005). Storage redundancy has implications for location management from the perspective of locating data closer to its users, thereby making it more available (Andreas, Alan, & Peter, 2005; Ruay-Shiung, Jih-Sheng, & Shin-Yi, 2007).

Redundant storage can be very effective for local access, but the introduction of a network introduces other points of failure or interruption. Network latency is often a direct indicator of network congestion and problems (Shahram, Shyam, & Bhaskar, 2006). Roughan et al. (2004) propose combining routing and traffic data to detect network anomalies. Russello et al. (2007) provide a comprehensive, policy-based hardware solution for managing network-distributed data. Their approach allows application developers to specify availability requirements for data tuples, while an underlying middleware evaluates various distribution and replication policies to select the one that best meets the developer’s requirements. Research focusing on system and network hardware solutions provides a good practical foundation for addressing data availability in decision support systems.

From the perspective of pervasive computing environments, extensive research has been done utilizing location as a context for how, when, and what device is interacting with computing resources. For example, Milliard et al. (2005) used wireless sensors, infrared, and accelerometers coupled with a semantic model to provide information on where a user physically was, in order to adjust resources and preferences accordingly.
Perich et al.’s (2004) work is similar in that it combines semantic expressions to describe users’ context and location. However, Perich’s work combines the sensors and semantics with a caching algorithm to aid data management as users move about the computing environment. The vision of true pervasive computing requires pervasive data availability (Satyanarayanan, 2001). While this prior research goes a long way towards realizing the vision and promise of pervasive computing, these hardware approaches are not perfect, and none of these solutions provide 100% data availability. Additionally, the focus of this research tends to concentrate on the system, network, or system administrators rather than on the end users.

There have also been efforts to apply these concepts specifically to decision support. This research is most frequently found in applications of distributed decision support systems and is divided into two different categories: decentralized systems and systems that support decentralized users. Both of these areas emphasize hardware approaches and collaborative components. For example, Du and Jia (2003) developed an enterprise network environment for cooperative problem solving. Compared to traditional client/server approaches, Du and Jia sought to provide a more efficient and secure solution by reducing network traffic and protecting private information. Recent work by Adla (2007) proposed a framework for a distributed cooperative intelligent decision support system. Adla’s framework extends Soubie’s (1998) work on cooperative knowledge-based systems. The framework describes components that effectively decompose decision tasks and enable collaboration amongst distributed decision-makers. Adla introduced innovation in his framework by including the concept of decision roles, such as facilitator, and considered the paradigm of distributed decision-support systems, in which several decision-makers who deal with partial, uncertain, and possibly exclusive information must reach a common decision.
In contrast to this other research, much of the work addressing data availability has focused on its psychological impact on decision-makers. Most significantly, data availability has been shown to be related to ambiguity, uncertainty, and risk (Kentel & Aral, 2007). The concepts of ambiguity, uncertainty, and risk should be considered related, yet discrete, concepts that are not necessarily causal. For the purposes of this article, ambiguity is considered in the context of available data that has the capacity to reduce doubt regarding decision variables.

Most of this work has studied user-centric perspectives of ambiguity in decision making, such as ambiguity tolerance. There have been several studies examining the effects of ambiguity (Boiney, 1993; Hogarth, 1989; Owen & Sweeney, 2000; Shaffer, Hendrick, Regula, & Freconna, 1973) and ways to reduce ambiguity (Geweke, 1992; Hu & Antony, 2007; Pezzulo & Couyoumdjian, 2006) in decision making. This research has shown that a decision-maker’s reactions to ambiguity are likely to be situation-dependent. Potentially relevant factors include not only the degree of ambiguity but also the size of the payoffs and the perceived role of individuals other than the decision-maker (e.g., the decision-maker’s boss or spouse) (Winkler, 1997). Moreover, ambiguity may often be interpreted as risk and reduce a decision-maker’s confidence (Ellsberg, 2001; Ghosh & Ray, 1997). In managerial contexts, the presence of risk and ambiguity may lead to suboptimal decision outcomes.

Some work has been done specifically examining the effects on decision making with and without a DSS available for use. These efforts have evaluated the benefits of using decision support systems through simulations (Bricconi, Nitto, Fuggetta, & Tracanella, 2000; Forgionne & Russell, 2007), case studies (Alter, 2004; Keen & Scott-Morton, 1978), and empirical field studies (Holsapple & Sena, 2005; Sharda, Barr, & McDonnell, 1988). Unlike the work described in this article, these prior efforts examined the efficacy of decision support systems themselves, concentrating on the benefits resulting from the use of the system.
Generally, these studies have examined availability in absolute terms (either with or without a DSS), not the case where the system is intermittently available.



Unlike these previous efforts, the emphasis in this article is not on improving system/resource reliability or minimizing the psychological effects of ambiguity resulting from data or system unavailability. In a pervasive computing environment, it is expected that reliability will be less than 100% and that there will be occasions when data or systems are unavailable. The motivation behind the work in this article is to determine the effects of providing contextual details regarding when, or if, decision support-related resources will be available.

Context Aware Computing

Because the concepts behind presence and availability awareness are not new and have been applied in other domains, some attention must be given to the research conducted on these topics. It is important to begin with a definition, as it is difficult to define presence awareness or characterize pervasive computing environment systems without some discussion of context. To be effective, PCE-based systems require some context-aware components (Want et al., 1995). From a user’s perspective, concepts that fall under context awareness include changes in a user’s physical state and location, workflow, preferences, or resource interests. Context awareness also includes environmental conditions.

In recent years, there has been a significant amount of work on context aware computing. In this domain, research has examined aspects of users’ physical environment (Yau & Joy, 2006), usability (Ho & Intille, 2005), system adaptation based on context (Baldauf, Dustdar, & Rosenberg, 2007), location modeling (Becker & Dürr, 2005; Satoh, 2005), personalization (Krause, Smailagic, & Siewiorek, 2006; Yang, Mahon, Williams, & Pfeifer, 2006), and presence (Brok, Kumar, Meeuwissen, & Batteram, 2006; Khan, Zia, Daudpota, Hussain, & Taimoor, 2006; Raento, Oulasvirta, Petit, & Toivonen, 2005; Sur, Arsanjani, & Ramanthan, 2007; Wegdam, 2005).

Within the broad domain of context aware computing, the concepts of presence and availability awareness can be grounded in location

awareness research. Much of the research in location awareness concentrates on collaboration (Griswold, Boyer, Brown, & Truong, 2003; Holmquist, Wigstrom, & Falk, 1998) and commerce/service provisioning (Crow, Pan, Kam, & Davenport, 2003; Zeidler & Fiege, 2003). Knowing where a user or system is located and whether they are available for interaction can be particularly useful for these activities. While proximity is not a direct issue for availability awareness, it can allude to the quality of a resource’s connectivity. Similarly, knowing the location of a user or resource can suggest when a resource will be available for interaction, and it also implies its online state or presence. Previous research in context and location computing provides a sound foundation for hardware and software approaches that capture and utilize information about where and how users use computing devices.

However, the primary distinction between previous research in context awareness and that described in this article is that most of the previous solutions capture context information and utilize it within the system, rather than provide it directly to the end user. The solutions that do provide context information to the end user provide only presence (binary online/offline) status, not details about the likelihood of when resources will be available. Moreover, little of this research examined how availability contexts impact decision outcomes, either directly or indirectly.

Presence and Availability Awareness

This article does not address all of the issues with context awareness; instead, the focus is on presence and availability awareness, in terms of both data and system resources. A working definition of presence awareness in this article is: knowledge of whether data/system resources are online or connected for communication. Presence awareness provides knowledge of a resource’s state in terms of connectivity. Presence awareness is the basis for availability awareness, since knowing if a resource is online



is the first step in determining availability for use. Succinctly, availability awareness is knowledge of a resource’s readiness to be used.

Presence awareness has gained a great deal of popularity in real-time communication applications such as instant messaging (IM). All IM clients incorporate a form of presence awareness so that users can be made aware of when a potential communication partner, or “buddy,” is online. Many of these clients also include availability awareness using techniques such as keyboard and screen saver monitoring. From a research standpoint, most of the studies on IM have examined its utility as a communications medium (Griss, Letsinger, Cowan, VanHilst, & Kessler, 2002; Kwon, Shin, & Kim, 2006; Wang et al., 2004).

Jesus Favela’s research differs from the main body of instant messaging communication research. Favela’s group embraces the concept of pervasive computing and has investigated using instant messaging as a presence and availability awareness tool in agent-based systems. Favela (2002) created AIDA, an instant messaging and presence awareness client for handheld devices running PalmOS. Using this client, Favela created a pervasive computing communication application called DoMo. What is unique about Favela’s efforts is his treatment of network resources, such as scanners and printers, as presence-aware items. By extending the notion of presence to documents and other resources, AIDA offers new opportunities for casual encounters in a community of co-authors. For instance, when a user notices that a document has been locked, he or she might send a relevant message or even join a colleague in a synchronous collaborative authoring session.

Favela’s work, while innovative and unique, did not focus on decision making. His application of presence awareness was much more practical in nature and concentrated on location contexts (i.e., proximity to resources and the basic online/offline status of resources).
His work did not examine how knowledge of presence and availability would affect users; rather, his group studied the implications of a hybrid software-hardware architecture supporting document management and collaboration in a PCE. Particularly in a PCE, including presence and availability awareness of data in a decision support system can lead to improved outcomes for users and organizations. In contrast to Favela’s work, the research described in this article differentiates itself by examining how a decision maker’s knowledge of data availability can affect decision outcomes.
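The presence/availability distinction can be made concrete. The sketch below (Python; the class, its fields, and the uptime-ratio estimate are illustrative assumptions, not a design from the cited work) models one DSS resource whose presence is a binary online state and whose availability is estimated from recent connectivity history:

```python
from dataclasses import dataclass, field

@dataclass
class ResourceAwareness:
    """Presence (online/offline) plus an availability estimate for one DSS resource."""
    name: str
    online: bool                                        # presence: connected right now?
    uptime_history: list = field(default_factory=list)  # recent 1/0 connectivity samples

    def availability(self) -> float:
        """Estimated readiness of the resource, taken here as its recent
        uptime ratio (0.0 when no history has been observed)."""
        if not self.uptime_history:
            return 0.0
        return sum(self.uptime_history) / len(self.uptime_history)

# A price-lookup service that is currently offline but usually up: presence
# says "not now," while availability awareness says "probably back soon."
price_feed = ResourceAwareness("price_feed", online=False,
                               uptime_history=[1, 1, 0, 1, 1, 1, 0, 1, 1, 1])
```

A DSS surfacing this information to the decision maker would report both the binary online state and the estimate, rather than the online/offline status alone.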

Presence and Availability Awareness for Decision Support Systems

As data-driven decision support technologies become increasingly popular, the availability of data, or knowledge of data’s availability, will become increasingly important. Even in situations where a model exists, the data for the model may come from several systems. Pervasive computing environments will aggravate problems with data availability. Knowledge of when, or if, data will be available can impact the efficacy of a decision making support system that relies on data, particularly when the data come from multiple sources. If a decision support system requires data from external systems and the data from these systems were delayed or completely unavailable, the decision-maker would have to work around the lack of data, and the assistance provided by the system may be limited.

Decision timing further complicates the effects of data availability. Depending on the time constraints for the decision, the data may not become available in a timely manner. Figure 1 shows the phases and activities of the decision making process, adapted from Simon’s (1960) model. If the decision opportunity has a time limit, then the intelligence, design, choice, and implementation phases are also bounded by time constraints that must cumulatively be less than or equal to the opportunity time limit. Interruptions or delays occurring at any phase can affect

Copyright © 2009, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

International Journal of Decision Support System Technology, 1(1), 15-34, January-March 2009 21

Figure 1. Decision making process steps

any subsequent phases, constricting the time available for those phases. This constriction of time can have significant implications for the decision-maker, particularly if the cause of the delay is unknown. If the decision-maker has no understanding of the delay's duration, then he or she is left to either wait (possibly too long), skip that phase's tasks (no-wait), or improvise to accommodate the activities in the delayed phase. Decisions made under high-pressure time constraints encourage a decision-maker to rely on heuristics and historical approaches that may not be appropriate for the current decision problem (Gonzalez, 2005; Ordonez & Benson, 1997). Moreover, these cascading effects can lead to reduced DSS effectiveness and sub-optimal decision outcomes.

In a pervasive computing environment, data may come from a variety of sources and are likely to reside outside the functional DSS itself. Environmental factors and the number of data sources required for the decision opportunity may have a direct impact on inputs, which in turn can affect processing and subsequent advice output. Moreover, in complex computing environments, the processing and output components may be distributed or physically separate from the other components. This consideration is particularly significant in a PCE because the DSS itself, its users, or decision support-related resources may be mobile, have sessions interrupted, or have inconsistent network bandwidth.

Uncertainty about the availability of decision support-related data, or of the DSS itself, introduces an additional decision within the core decision making process. Consider a simple example where someone wants to purchase a digital video disc (DVD) player and their only decision criterion is price. However, they want to use it to play a movie for guests who will be arriving at their home in the next 90 minutes. The decision maker might use a DSS with a resource on the Internet that identifies the lowest-priced DVD player available for local purchase. Now, what if the decision maker's Internet connection is down? The decision maker has an additional decision to make that stems from the original decision process. The decision maker could wait for the connection to come back up and hope it returns with sufficient time to identify a DVD player and travel to make the purchase; could try to get connectivity through another medium, such as an Internet-capable cell phone; or could forgo support altogether, drive to a local brick-and-mortar store, and choose from the on-hand selection of DVD players.

Figure 2 shows what happens to the traditional decision making process when availability is unknown at any step of the decision process. At each step in the primary decision process (shown as shaded boxes with dark arrows), there exists the possibility of spawning a secondary decision process (shown as lighter boxes and dashed arrows) due to uncertainty regarding the availability of the resources necessary at that step. Moreover, these secondary decision processes may be terminal or may create their own additional processes.

Figure 2. Decision making process with no presence and availability awareness

To illustrate the effect of availability awareness and expand the previous example: what if the decision maker knew, or was informed, that the Internet connection would be available 10 minutes from the time the decision maker was made aware of the outage? The decision maker could make an informed decision about how to proceed and could weigh the previous alternatives with additional certainty. Awareness information need not be limited to whether a resource is online or offline, or to when it will be available; it could also include probabilistic information. For example, the decision maker purchasing a DVD player could be provided a probability that the Internet connection would be available within a certain amount of time

and the likelihood that the connection will stay up once it returns.

To support the dynamic nature of pervasive computing environments, a DSS should incorporate interfaces capable of providing insight into the online status and availability of its components and resources. Additionally, the DSS itself should provide status and availability information directly and interactively to the decision-maker. To meet these requirements in PCEs, a typical DSS architecture is adapted by considering resource characteristics that can cause delays, such as network bandwidth capacity and quality, storage failures, or extended processing loads. Figure 3 shows an architecture extending the typical DSS with an added layer of presence/availability awareness. The awareness layer delivers information and feedback from the functional DSS elements, through the user interface and the dialog management, directly to the decision-maker.

Figure 3. Typical DSS with awareness layer added

Examining this architecture from the inputs' perspective, the DSS would have information about the databases' or data warehouses' online status. Should the data or inputs not be online, the availability awareness layer would deliver details about when the data may become available. Similarly, from the processing perspective, the decision-maker would have information about processor loading and subsequent task-processing delays. Unlike architectures where the decision-maker has no knowledge of the availability of the DSS and its resources, the decision-maker is provided the information necessary to evaluate the wait/no-wait, accommodate, or abandon secondary decisions resulting from system or resource delays and interruptions. While the awareness layer is shown as monolithic in Figure 3, and centralized management of presence and awareness information is necessary, portions of the presence and availability awareness layer could exist within each functional DSS resource, e.g., as part of the database, as a process monitor, or as a function of a printer driver.

If the DSS resources, including the DSS itself, are generalized, they can be considered a hierarchical set of functions that are independent bottom-up and dependent top-down. Figure 4 illustrates a generalized hierarchy. While every level may not be necessary for every resource, the level dependencies are evident. Consider an example where a DSS provides guidance by presenting

Figure 4. Decision resource hierarchy


information on a graphical map, e.g., a spatial DSS. This DSS may require data from a remote website that converts street addresses to latitude and longitude. This remote website and its data would be considered a resource, and this resource depends on a hierarchy of dependent functions to serve its purpose. Examining Figure 4: without power, there is no connectivity or anything else above power; without connectivity, the data in storage cannot be delivered; without storage, processing cannot occur, as there is nothing to process; without processing, the software cannot operate; and if the software (web server, address-to-lat/long converter, or host operating system) fails, the resource cannot respond to the DSS's request. There is an implicit dependency from top to bottom, but not the other way: it is possible to lack connectivity yet have power, and so on. Each level has a sensor to monitor its own functions, and the combined availability of these individual levels constitutes the resource's availability. The monitoring sensor at each level may take different forms as appropriate for that level. For example, the power monitoring sensor may be a hardware device that feeds data back to the DSS directly; in contrast, the software monitoring sensor may itself be software that monitors how well a service responds to requests, i.e., that it does not crash or stop running.

The obvious question is: how might details about decision support-related resources' availability be obtained and quantified? The hardware and software technology already exists to enable both new and existing DSS with presence and availability awareness. Hardware that monitors loading, power, outages, and interruptions has become standard in high availability environments. Research in high availability computing has already identified methods to evaluate, quantify, and monitor hardware-related statuses such as power (Rahmati & Zhong, 2007), network characteristics (Shahram et al., 2006), computer components (Weatherspoon, Chun, So, & Kubiatowicz, 2005), processing/computing load (Zhoujun, Zhigang, & Zhenhua, 2007), and storage (Blake & Rodrigues, 2003).
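The bottom-up/top-down dependency just described can be sketched directly. The article specifies no implementation, so the Python below is illustrative only; the level names are taken from the Figure 4 discussion, and the sensor callables are assumptions.

```python
from typing import Callable, Dict, List, Optional

# Dependency levels from bottom to top, per the Figure 4 discussion.
LEVELS: List[str] = ["power", "connectivity", "storage", "processing", "software"]

def resource_availability(sensors: Dict[str, Callable[[], bool]]) -> Optional[str]:
    """Return the lowest failing level, or None if the resource is available.

    Levels are dependent top-down: a failure at one level makes every level
    above it unavailable, so the bottom-up check stops at the first failure.
    """
    for level in LEVELS:
        sensor = sensors.get(level)
        if sensor is not None and not sensor():
            return level  # everything above this level is implicitly down
    return None

# Example: the remote geocoding resource has power, connectivity, storage,
# and processing, but its web service (software level) is not responding.
sensors = {
    "power": lambda: True,
    "connectivity": lambda: True,
    "storage": lambda: True,
    "processing": lambda: True,
    "software": lambda: False,  # e.g., the address-to-lat/long converter is down
}
print(resource_availability(sensors))  # -> software
```

An awareness layer built this way can report not just "resource down" but which level failed, which is exactly the detail a decision-maker needs to estimate a delay.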

Software presence and awareness solutions are broadly available and are commonly found in server monitoring, instant messaging, and web service software. Moreover, research in the area of web service composition and quality of service (QoS) can provide solutions delivering awareness knowledge for general software-based resources. There is a significant body of research studying applications that use QoS monitoring for service level agreements, adaptation to changing system conditions, and web service composition (Ali, Rana, & Walker, 2004; Menasce, 2004; Thio & Karunasekera, 2005). Web service composition is a particularly active research area, rich with solutions for service availability, because of the critical nature of this information for process scheduling and execution planning (Peer, 2005; Pistore, Barbon, Bertoli, Shaparau, & Traverso, 2004). These theoretical and accessible approaches provide a wide range of alternatives for implementing the collection, quantification, and monitoring of decision support-related resource presence and availability information. With regard to integrating resource presence and awareness in existing and future DSS, these technologies are well researched and readily available.

The model discussed above maps presence and availability awareness information to decision support resources. This information can then be provided to decision makers. In theory, if presence and availability awareness is extended through a DSS to the decision maker, structured decision problems with deterministic outcomes should realize improved support. This theory suggests the following research question and hypotheses.

Research Question: Will decision support-related resource availability awareness improve decision outcomes?



• Null Hypothesis: Decision support-related resource availability awareness knowledge provided to decision makers will not improve decision outcomes.
• Alternative Hypothesis: Decision support-related resource availability awareness knowledge provided to decision makers will improve decision outcomes.

Simulation Experiment

To evaluate the above hypotheses, a time-constrained, deterministic, structured or semi-structured decision problem is desired. The experimental methodology consisted of two scenarios: one where the decision maker is not provided awareness information (control) and a second where the decision maker is provided this information (treatment). To isolate the effects of the resource awareness information, the DSS support should provide specific and correct advice, and the entire population of alternatives should be available to the decision maker whether the decision support-related resource is available or not. Figure 5 shows the control scenario and Figure 6 shows the treatment scenario; the decision maker is denoted as DM. The primary difference between Figure 5 and Figure 6 is the introduction of awareness information in the center of the flow diagram, where the DM has a choice to wait or not. To implement these scenarios, a simulation experiment was created. The simulation experiment utilized a complex semi-structured problem, frequently used in management training and evaluation and typically requiring decision making support (McLeod, 1986). This problem involves

Figure 5. Control experimental process

Figure 6. Treatment experimental process


a market in which top management uses price, marketing, research and development, and plant investment to compete for a product's total market potential for one quarter. Demand for the organization's products is influenced by: (1) management actions, (2) competitors' behavior, and (3) the economic environment. The decision objective is to plan a "next quarter" strategy that will generate as much total profit as possible. This profit is the deterministic quantitative result of the decision maker's choice. Strategy making requires: (1) setting the product price, marketing budget, research and development expenditures, and plant investment, and (2) forecasting the competitors' price, the competitors' marketing budget, a seasonal sales index, and an index of general economic conditions. Twelve additional variables, including plant capacity, raw materials inventory, and administrative expenses, are calculated from the strategy. Initial values for these twelve variables form the alternatives for decision making. These twenty (controllable, uncontrollable, and calculated) variables jointly influence the profitability of the organization, and this profit value is used as the key measure of the decision outcome.

This management problem formed the core decision opportunity for the simulation, as it provides an explicit, quantifiable measure of the decision outcome: profit. While the actual management problem used is considerably more complex (as described above), to illustrate the simulation and experimental methodology simply, let profit (P) equal forecasted sales (S) multiplied by forecasted price (PR) minus forecasted production cost (PC):

P = S * PR - PC

The data for forecasted sales, production cost, and price consist of several historical values for each variable, supplied by an external system. The data comprising the historical values are considered the decision support-related resource, as is the DSS itself. However, these resources may or may not be available within the time allotted for the decision. Within the DSS, there exists a model that suggests the values, from all available historical data, that would provide the maximum profit value. This is the provided advice. In strategy making, it is the decision-maker's responsibility to select which forecasted values to use, within a 5-minute time limit. The simulated decision-maker takes three forms: (1) a decision-maker who always takes the advice provided by the DSS, (2) a decision-maker who takes the advice a random number of times, and (3) a decision-maker who never takes the provided advice. Using the simplified management problem, here are three example simulation scenarios:

1. The forecasted data is provided within the time allotted for the decision. The DSS identifies the price, sales, and production cost that result in a maximum profit. Decision-maker (DM) 1 accepts the advice. DM-2 may or may not take the advice (randomly); if DM-2 does not take the advice, DM-2 chooses his or her own values. DM-3 does not take the advice and chooses his or her own values. For all three DMs, the values for S, PR, and PC are applied to the management problem, resulting in a profit value for each DM.
2. The forecasted data is not provided within the time allotted for the decision. All three DMs choose their own values, which are applied to the management problem, resulting in a profit value for each.
3. The forecasted data is delayed and may or may not be provided within the time allotted for the decision. If the data will be available within the allotted time, the DSS identifies the values that will result in maximum profit. If the data becomes available and DM-1 decides to wait, then DM-1 takes the advice; if the data are not available or DM-1 does not decide to wait, DM-1 chooses his or her own values. If the data are available, DM-2 decides to wait, and DM-2 chooses to take the advice


then the DSS-supplied values are used. If the data are not available, DM-2 decides not to wait, or DM-2 declines the advice, DM-2 chooses his or her own values. DM-3 chooses his or her own values. The values for each DM are applied to the management problem, resulting in a profit value for each.

The simulation had five scenario variables that determined the conditions of each scenario, as shown in Table 1. All of these values were randomly generated for each run of the simulation. It was assumed that each input value would follow a standard normal distribution with a mean of 0 and a standard deviation of 1. To evaluate the effects of data availability on decision making, the simulation was run with and without awareness of the data's availability, according to Figures 5 and 6. In one case, the decision-maker is provided knowledge of the availability of the data (treatment); in the second, the decision-maker has no knowledge of when, or if, the data will be available (control). For experimental purposes, the two simulations were run with two additional conditions that affected how values were handled when the decision-maker did not use the DSS advice. The first condition simulated the decision-maker choosing values from the forecasted dataset, which consisted of the DSS advice values and 50,000 additional, sub-optimal input values. The second condition simulated the decision-maker selecting values without the forecast dataset. Table 2 summarizes the simulations and conditions. The first condition (A) represents the situation where the decision-maker has access to the data but may not have access to the DSS advice. Under condition A, the simulated decision-maker's values were randomly selected from the forecasted data. Condition B represents the situation where the DSS is available, but the data source may not be. Under condition B, the decision-maker's values were selected randomly, based on formulas developed to ensure

Table 1. Simulation scenario variables

Take_Advice: A 1 or 0 value determining whether the DM who randomly takes the DSS advice takes the advice on this run. If 1, the DM takes the advice; if 0, the DM does not.

Data_Avail: A 1 or 0 value determining whether the data is immediately available. If 1, the data is available; if 0, it is not. If 0, the DM_Wait and Delay_Amt values are used.

DM_Wait_YN: A 1 or 0 value determining whether the DM waits for the data to become available or chooses a value before it becomes available. Applies only to the simulations where availability is unknown.

DM_Wait: A decimal value between 0 and 5 minutes determining how long the DM waits. Applies only to the simulations where availability is unknown.

Delay_Amt: A decimal value between 0 and 10 minutes determining how long the data will be unavailable.

Table 2. Simulation decision-maker input value conditions

SIMULATION     CONDITION   DECISION MAKER INPUT VALUES...           CONTROL / TREATMENT
Simulation 1   A           Are randomly selected from dataset       Availability is unknown
Simulation 1   A           Are randomly selected from dataset       Availability is known
Simulation 2   B           Are randomly selected                    Availability is unknown
Simulation 2   B           Are randomly selected                    Availability is known


that input values would fall within the management problem's permissible ranges, according to a normal probability distribution. For all randomly generated variables, each variable was assumed to follow a standard normal distribution with a mean of 0 and a standard deviation of 1, to incorporate the diversity of inputs from a population of users and scenario conditions. The choice of a normal distribution was based on the Central Limit Theorem, which states, roughly, that the distribution of a sum (or mean) of many independent observations approaches normality as the number of observations grows large (Barron, 1986; Davidson, 2002). A precise and explicit model of the decision problem and simulation was programmed in Matlab, which provided a robust programming environment where the simulation could be created and evaluated. Once programmed, 50,000 runs were conducted for each simulation and condition (four sets, as defined in Table 2, of 50,000 runs each). For every run, the run's scenario variables and the resulting profit (decision outcome) for each decision-maker were recorded.
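The experimental loop can be sketched in a few dozen lines. The original study used Matlab and the full twenty-variable management model, so the Python below is an illustrative reconstruction under stated assumptions: the simplified profit equation P = S * PR - PC, standard normal historical data, the Table 1 scenario variables, the 5-minute decision limit, and a guessed wait rule for the treatment (the DM waits only when the known delay fits the limit) versus the control (the DM waits blindly for DM_Wait minutes). Run counts and dataset size are reduced to keep the sketch fast.

```python
import numpy as np

rng = np.random.default_rng(0)
DATASET_SIZE = 1_000   # historical (S, PR, PC) values per run (study used 50,000)
DECISION_LIMIT = 5.0   # minutes: the strategy-making time limit

def run_simulation(aware: bool, n_runs: int):
    """Profit per run for the always/random/never DMs under condition A."""
    profits = {"always": [], "random": [], "never": []}
    for _ in range(n_runs):
        # Historical data: standard normal draws, as assumed in the article.
        s, pr, pc = (rng.standard_normal(DATASET_SIZE) for _ in range(3))
        profit = s * pr - pc          # P = S * PR - PC for every historical triple
        best = profit.max()           # the DSS advice: the maximum-profit choice
        own = rng.choice(profit)      # a self-chosen, generally sub-optimal value

        # Table 1 scenario variables.
        data_avail = rng.integers(0, 2)
        dm_wait_yn = rng.integers(0, 2)
        dm_wait = rng.uniform(0, DECISION_LIMIT)
        delay_amt = rng.uniform(0, 10)

        if data_avail:
            advice_in_time = True
        elif aware:   # treatment: the delay is known, so wait only if it fits
            advice_in_time = delay_amt <= DECISION_LIMIT
        else:         # control: wait blindly; advice arrives only if delay ends first
            advice_in_time = bool(dm_wait_yn) and dm_wait >= delay_amt

        take = rng.integers(0, 2)     # Take_Advice draw for the random DM
        profits["always"].append(best if advice_in_time else own)
        profits["random"].append(best if (advice_in_time and take) else own)
        profits["never"].append(own)
    return {k: np.asarray(v) for k, v in profits.items()}

treatment = run_simulation(aware=True, n_runs=2_000)
control = run_simulation(aware=False, n_runs=2_000)
for dm in ("always", "random", "never"):
    print(dm, round(float(treatment[dm].mean() - control[dm].mean()), 3))
```

As in the article's results, the aware runs show a clear mean-profit advantage for the always and random decision makers and essentially none for the never decision maker, whose outcome ignores the advice entirely.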

Simulation Results

The profit results generated by the simulations were analyzed using SPSS, and the hypothesis was tested for each type of decision maker and each condition. In summary, the results show that the runs where availability information was known performed significantly better for the simulated decision makers who took the DSS advice (either always or randomly). The decision maker who never took the advice had the lowest profits of all of the simulations. As might be expected, availability awareness had no effect on the decision maker who never took the DSS advice, and the simulation condition where the DSS (but not the data) was intermittently unavailable yielded higher profits. Table 3 shows a summary of the mean profit for all of the simulations. The runs where availability was known (had awareness) had the best decision outcomes, or highest profits, except in the case where the decision maker never took the advice. Availability awareness had the greatest impact on the simulated decision maker who always took the advice. To evaluate the hypothesis, each decision maker is compared between the control and treatment groups using a paired t-test. Tables 4 and 5 show the results of the t-tests. The awareness-enabled runs outperformed the "unaware" runs. However, in the case of the decision maker never taking the advice, the difference was not significant (p = .558 for data intermittently available and p = .170 for DSS intermittently available), and for this case the null hypothesis cannot be rejected. The other two cases, where the decision maker either always or randomly took the DSS advice, were significant with p-values below the .025 alpha level, rejecting the null hypothesis.
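The paired comparison summarized in Tables 4 and 5 can be reproduced in outline without SPSS; the sketch below computes the same columns (mean difference, standard deviation, standard error, confidence interval, t, df) with NumPy. The arrays are synthetic placeholders, not the study's data; only the ALWAYS row's reported mean difference and standard deviation are reused, so that n = 50,000 yields a t statistic near the published 68.788.

```python
import numpy as np

def paired_t(aware, unaware):
    """Paired t-test summary matching the columns of Tables 4 and 5."""
    d = np.asarray(aware) - np.asarray(unaware)   # paired differences
    n = d.size
    mean = d.mean()
    sd = d.std(ddof=1)                            # sample standard deviation
    se = sd / np.sqrt(n)                          # standard error of the mean
    return {"mean": mean, "sd": sd, "se": se,
            "ci95": (mean - 1.96 * se, mean + 1.96 * se),  # large-n approximation
            "t": mean / se, "df": n - 1}

# Placeholder data: the per-run profits are synthetic, but the difference is
# drawn with the mean and spread reported for the ALWAYS row of Table 4.
rng = np.random.default_rng(42)
unaware = rng.normal(0, 1_000_000, 50_000)
aware = unaware + rng.normal(1_397_868, 4_543_971, 50_000)

result = paired_t(aware, unaware)
print(result["df"])  # -> 49999
```

Because the runs are paired (same scenario variables, aware vs. unaware), the paired test removes between-run variance and is the appropriate comparison here.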

Implications and Limitations

As this simulation study illustrates, awareness of decision support-related resources' presence and availability can lead to different decision outcomes. In addition to presenting

Table 3. Decision maker's average profit values (dollars)

DSS Advice:                      Always Accept             Randomly Accept           Never Accept
Availability:                    Aware       Unaware       Aware       Unaware       Aware        Unaware
Data Intermittently Available    3,165,699   1,767,831     421,594     -275,659      -2,361,590   -2,376,501
DSS Intermittently Available     4,090,788   3,383,296     2,690,010   2,338,166     1,341,965    1,299,331


Table 4. T-test results for data intermittently available simulation runs (paired differences, Aware vs. Unaware)

                       Mean           Std. Deviation   Std. Error Mean   95% CI Lower    95% CI Upper    t        df      Sig. (2-tailed)
ALWAYS Aware/Unaware   1,397,867.823  4,543,970.936    20,321.256        1,358,037.942   1,437,697.704   68.788   49,999  0.000
RANDOM Aware/Unaware   697,252.604    5,189,774.544    23,209.377        651,761.974     742,743.235     30.042   49,999  0.000
NEVER Aware/Unaware    14,910.336     5,691,405.378    25,452.739        -34,977.307     64,797.979      0.586    49,999  0.558

Table 5. T-test results for DSS intermittently available simulation runs (paired differences, Aware vs. Unaware)

                       Mean         Std. Deviation   Std. Error Mean   95% CI Lower   95% CI Upper   t        df      Sig. (2-tailed)
ALWAYS Aware/Unaware   707,491.906  4,441,803.454    19,864.349        668,557.568    746,426.245    35.616   49,999  0.000
RANDOM Aware/Unaware   351,843.327  5,835,473.647    26,097.032        300,692.863    402,993.791    13.482   49,999  0.000
NEVER Aware/Unaware    42,633.342   6,942,609.965    31,048.296        -18,221.654    103,488.337    1.373    49,999  0.170

a model that maps presence and availability awareness to decision support resources, this study examined the effect of this awareness on DSS-augmented decision outcomes. In the cases where the decision maker always took the advice, having availability information delivered an average increase in profit of 17% to 56%. While more research may be necessary, this is significant. The research findings indicate that, given time-constrained, structured or semi-structured, deterministic decision problems supported by a DSS, resource availability information can have a dramatic impact on decision outcomes. Because the necessary resources need to be known before presence and availability information can be provided, the benefit of this knowledge may be limited to structured and semi-structured decision problems. However, if the decision maker is able to provide structure during the intelligence or design phases of the decision making process, the benefits demonstrated in this study should translate to unstructured problems as well.

The benefits of presence and availability awareness are integrally tied to the architecture of the DSS and the support it provides. Other types of decision making support systems may offer different forms of support and guidance, changing the advice as circumstances in the decision situation evolve. This research suggests that even in these other cases, benefits can be gained by providing resource availability information to decision makers. Moreover, the technology to obtain availability


information is mature, reducing the barriers to adoption and making practical implementations of this research immediately feasible. By applying this mature technology to existing and future DSS, the challenges of limited bandwidth, overloaded systems, and general delays in settings such as PCEs can be minimized.

Resource availability in a PCE is seldom 100% guaranteed, and delays or interruptions are likely. This study examined awareness benefits given a time-constrained decision opportunity under conditions where resource availability was intermittent or delayed. The benefits of extending awareness information to the decision maker may be limited if the decision opportunity has a broad time window or if the delays resulting from unavailability are minimal. The issues of delay are significant, but pervasive computing environments may also suffer from complete outages and interruptions, where the disruption extends beyond the time boundaries of the decision opportunity. In such cases, awareness of resource availability would have a limited effect, essentially mirroring the simulated decision-maker who never accepts the DSS advice. Further study is necessary to evaluate the implications of resource availability awareness in extreme cases of delay.

The awareness-enhanced model presented here sought to address the issues evaluated in the simulation experiments. Results from any simulation are only as good as the assumptions used in the analyses. This study assumed that scenario conditions, acceptance rates, and input values would follow normal distributions. A variety of other distributions are possible, including the uniform, binomial, and gamma distributions. Further studies, then, should examine the sensitivity of the results to changes in these distributions. This study also assumed that the management problem was a reasonable representation of many organizations' strategic decision making problems. Different organizations, however, may utilize different management philosophies, accounting principles, and decision objectives. If so, the decision model should be changed to reflect the organization's practices, philosophies, objectives, and decision environments. In particular, the profit equation may be replaced with an alternative measure or measures of performance, some tangible and others intangible. Variables and relationships may be defined and measured differently. Additional environmental factors may be added to the equations. While such alterations would change the specific form of the simulation model, the general model and experimental methodology would still be applicable.
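The distribution-sensitivity study suggested above is straightforward to set up if the sampling distribution is a parameter of the scenario generator. The sketch below is an illustrative assumption, not the original Matlab code; it reuses the simplified profit equation and compares a decision maker's mean profit under the four distribution families mentioned.

```python
import numpy as np

rng = np.random.default_rng(7)

# Candidate sampling distributions for the sensitivity study; each maps a
# sample size to an array of draws.
DISTRIBUTIONS = {
    "normal": lambda n: rng.standard_normal(n),
    "uniform": lambda n: rng.uniform(-1.0, 1.0, n),
    "binomial": lambda n: rng.binomial(1, 0.5, n).astype(float),
    "gamma": lambda n: rng.gamma(2.0, 1.0, n),
}

def mean_profit(draw, n_runs: int = 1_000, dataset: int = 500) -> float:
    """Mean profit of a DM who picks one random historical triple per run."""
    profits = []
    for _ in range(n_runs):
        s, pr, pc = draw(dataset), draw(dataset), draw(dataset)
        i = rng.integers(dataset)             # a self-chosen (random) alternative
        profits.append(s[i] * pr[i] - pc[i])  # P = S * PR - PC
    return float(np.mean(profits))

for name, draw in DISTRIBUTIONS.items():
    print(f"{name:8s} mean profit: {mean_profit(draw):+.3f}")
```

Skewed families such as the gamma shift the profit baseline away from zero, which is exactly the kind of sensitivity the proposed follow-up studies would quantify.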

Conclusion

This study presented a model that maps presence and availability awareness to decision support-related resources for use by decision support systems. A simulation was conducted to evaluate the effects of awareness on a data-driven DSS supporting management decision making. The results of this study confirm that extending awareness knowledge to decision makers through a DSS improves decision making compared to a DSS without this capability. In addition, these results suggest that availability awareness is important whether it supports data inputs or DSS advice. Since the results are expressed in dollars of net profit, the findings provide an objective measure for determining the relative decision value of presence and availability awareness. Moreover, as a large-scale simulation that approaches a census study, the evaluation results should be generalizable to other DSS applications that support decision problems in distributed or dynamic environments. The results of this study suggest that the benefits of presence and availability awareness are most significant in environments where the reliability and accessibility of decision support resources are not 100% guaranteed, extending the findings of this study to most DSS in pervasive computing environments. With the advances in wireless and mobile networks, most modern computing settings have characteristics of pervasive computing environments. As a result, the benefits of generally extending resource presence awareness information to


decision makers will be of increasing importance. Additionally, distributed computing resources, another characteristic of PCEs, are already common. The research in this study implies relevance and applicability for these applications as well. However, further study is necessary to operationalize the model proposed in this work. Future work is planned to implement decision support-related resource presence and availability awareness in an operational DSS. This future study will incorporate existing awareness quantification and monitoring solutions, such as those discussed in the third section, and will expand this study by examining the limitations noted in the implications section.


Stephen Russell is currently a visiting assistant professor in the Department of Information Systems & Technology Management at the George Washington University. He has over 20 years of industry experience in the telecommunications, healthcare, and manufacturing industries. He has also been a serial entrepreneur, owning companies that have specialized in software engineering, information resource management services, and telecommunications equipment manufacturing. Russell received a BSc in computer science and MS and PhD degrees in information systems from the University of Maryland, Baltimore County. His primary research interests are in the areas of decision support systems, information systems development, and intelligent systems. His published research articles appear in Expert Systems with Applications, Decision Support Systems Journal, the Encyclopedia of Decision Making and Decision Support Technologies, and Frontiers in Bioscience.


International Journal of Decision Support System Technology, 1(1), 35-45, January-March 2009 35

Collaborative Decision Making:

Complementary Developments of a Model and an Architecture as a Tool Support Marija Jankovic, Ecole Centrale Paris, France Pascale Zaraté, Toulouse University, France Jean-Claude Bocquet, Ecole Centrale Paris, France Julie Le Cardinal, Ecole Centrale Paris, France

ABSTRACT

In recent years, much has been said about cooperative, group, and collaborative decision-making. These types of decisions are the consequence of modern working conditions: geographical dispersion, team working, and concurrent working. In this article we present two research works concerning two different collective decision situations: face-to-face decision-making and synchronous distributed decision-making. These two research studies adopt different approaches to supporting the decision-making process, in line with their different research objectives. Nevertheless, the conclusions show the complementary aspects of these two studies.

Keywords:

collaborative decision making; cooperative DSS; DSS

Introduction

As underlined by Sankaran and Bui (2008), organizations routinely make decisions that require consultations with multiple participants. Combining all points of view into a consensus acceptable to all parties is always a challenge. Negotiation and collaborative processes thus become a strength for organisations. Modern negotiation theory, which finds its roots in decision theory and game theory, focuses on interactive processes among antagonists attempting to reach compromises. To achieve this objective, they propose an organisational model for transitional negotiations. According to Wagner, Wynne and Mennecke (1993), much more effort is needed to bring in researchers from diverse perspectives such as Computer Supported Cooperative Work (CSCW), Group Support Systems, computer conferencing, telecommunications, and computer science and engineering, both to broaden the perspectives from which research is conducted and to expand the number of applications to which GSS technologies may be



applied. From another point of view, cooperative or collaborative decision-making is an increasingly complex process that is predominant in organisations. A displacement from individual decision-making to collective decision-making has already been noted in the research literature (Shim, 2002). These types of decisions are the consequence of modern working conditions: geographical dispersion, team working, concurrent working, etc. Pascale Zaraté and Jean-Luc Soubie (2004) developed a matrix of collective decisions taking into account two principal criteria: time and place. In their work, they also give an overview of several supports and their correspondence with different types of collective decision-making. We define each kind of collective decision-making situation as follows:

1. Face-to-face decision making: different decision makers are implied in the decisional process and meet around a table. This is a very classical situation;
2. Distributed synchronous decision making: different decision makers are implied in the decisional process and are not located in the same room, but work together at the same time. This kind of situation is well known and common in organizations;
3. Asynchronous decision making: different decision makers are implied in the decisional process and come to a specific room to make decisions, but not at the same time. The specific room can play the role of a memory for the whole process, and also of a virtual meeting point. This kind of situation is well known in the Computer Supported Collaborative Work (CSCW) field and some real cases correspond to it, but for decision making it has no intrinsic meaning from a physical point of view: we cannot imagine decisions being made in organisations in this way, which is why this case has a grey background in Table 1. For us, this case can be assimilated to the next situation. Nevertheless, from a mediated-communication point of view, we have to check what impacts are induced by this particular situation, and this case can be seen as a virtual room, well known in the GDSS field;
4. Distributed asynchronous decision making: different decision makers are implied in the decisional process and do not necessarily work together at the same time or in the same place; each of them contributes to the whole decisional process.

Table 1. Collective decision making situations

                    Same time                                    Different times
Same place          Face to face decision making                 Asynchronous decision making
Different places    Distributed synchronous decision making      Distributed asynchronous decision making

In this article, we present the complementary aspects of two research studies concerning two different situations of collective decision-making: the development of a conceptual model, and the architecture of a tool support system for cooperative decision-making processes. The model corresponds to the first decision situation explained above, that is, face-to-face decision-making, and is presented in §2 of this article. The second is a proposition of an architecture, or platform, for cooperative decisions in general. This research concerns the distributed synchronous decision-making situation. This tool architecture is presented in §3 of this article. The fourth part contains a comparative study of these two research efforts and points out their complementary aspects.
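The two criteria of the time/place matrix in Table 1 determine the decision situation mechanically, which can be expressed as a small lookup. The function and its labels below are our own illustration, not part of the original work.

```python
def decision_situation(same_time: bool, same_place: bool) -> str:
    """Classify a collective decision-making situation by the two Table 1 criteria."""
    matrix = {
        (True,  True):  "face-to-face decision making",
        (True,  False): "distributed synchronous decision making",
        (False, True):  "asynchronous decision making",
        (False, False): "distributed asynchronous decision making",
    }
    return matrix[(same_time, same_place)]

# Example: a team working simultaneously from different sites.
print(decision_situation(same_time=True, same_place=False))
```

The dictionary mirrors the matrix directly: the row is the place criterion and the column the time criterion, so each of the four cells maps to exactly one situation.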

Conceptual Model of Collaborative Decision Making

If we consider the matrix presented in the previous section, synchronous collaborative decision-making is a very rich form of decision making, in terms of information and opinion exchange, for large and complex projects such as vehicle development projects. Nevertheless, this type of decision-making faces many difficulties, such as conflict management, different preferences of decision makers, information retrieval, and different objectives in the process (Jankovic, Bocquet, Bavoux, Stal Le Cardinal, & Louismet, 2006). The recent evolution of the e-negotiation domain shows that the previously described situations are more and more common. This leads to a new kind of system supporting negotiations through electronic communication: Group Decision Negotiation Support Systems (see for example Sankaran & Bui, 2008). Our field research showed that this is the most frequent decision-making type in development projects. The first phase of a development project, i.e. New Product and Process Development (NPPD), is a special phase because it is a collaborative decision-making phase. One of our research objectives was to develop a support tool to help the project team in this process. For this purpose, we have developed a conceptual model of collaborative decision-making. This model was used to model the collaborative decisions identified in the first phase at PSA Peugeot Citroën: 73 identified decisions. Further in this part of the article, in §2.1, we give a description of the first phase of the NPPD process. In §2.2 we explain the specificities of decision-making in development projects, as well as the difficulties created by project complexity. Afterwards, in §2.3, we present the developed conceptual model and its application.

Industrial Context: Project Definition Phase

New Product and Process Development (NPPD) is one of the key processes contributing to enterprise success and future development (Marxt & Hacklin, 2004). Identification of client needs during the market research phase represents the starting point for the Project Definition phase. In PSA Peugeot Citroen, the Project Definition phase is the first phase of the NPPD cycle. This phase is characterised by numerous relationships between the different actors contributing to the NPPD process, and by considerable uncertainty issues to be dealt with. Project success emanates from this phase (Morris, 1988), which is also a collaborative decision-making phase: most strategic decisions are made within it. Whelton and Ballard (2002) conducted research on the importance of this phase and found that almost 80% of the product and process are specified in it. Decision-making is also an engagement of enterprise resources, which further underlines the global importance of this phase for a project. The Project Definition phase is very complex because:

• It is a phase where all aspects of one project are to be defined,
• Project organisation and management are set up throughout the fulfilment of the functions assigned to every project team member,
• It is a phase of convergence of project objectives through a collaborative decision-making process,
• Management bases, as well as the motivation of the project team, are built up progressively throughout this phase.



Figure 1. Project objectives definition context (enterprise objectives, competition, client needs, stakeholders' needs, and enterprise know-how feeding the project team's project definition)

At the very beginning of this phase, different enterprise departments give the project team the global guidelines for the definition of project objectives. Some of these departments are marketing, production, innovation, and strategy. The given guidelines represent a transcription of the strategic orientations of the enterprise, provided by the different fields. The project team also has to take into account the results of market segmentation and targeting, as well as to integrate client needs (Jankovic, Bocquet, Stal Le Cardinal, & Bavoux, 2006). The mission of the project team is to decompose the global orientation given by the enterprise into objectives for the sub-systems, based upon systems engineering methodology, to discern any global incoherence, and to propose adequate solutions. This feasibility study is also done with the help of different knowledge poles, i.e. experts in different domains. The project objectives definition is obtained at the balance point between enterprise knowledge and enterprise ambitions, i.e. strategic orientations. This is a negotiation and collaboration process, in which the project team progressively converges on the project objectives definition. One of the difficulties of this phase is that there are over 150 objectives to monitor at the global level. Correlations between these objectives are often not determined, so there is no certainty about how changes to one objective will influence the others. Furthermore, the Project Definition phase is crucial for innovation introduction. In this phase, the project team

has to decide which innovations are to be incorporated in the vehicle development. This innovation introduction further increases the difficulty of identifying possible correlations between project objectives.

Collaborative Decision Making in the First Phase: Complexity and Problems

Vehicle development projects are complex projects (Baccarini, 1996). The difficulties induced in a project by its complexity can be several: defining project objectives and goals (Morris & Hough, 1987), and project planning, coordination, and control requirements (Baccarini, 1996; Bubshait & Selen, 1992; Melles, Robers, & Wamelink, 1990). The starting point of project management methodology is clearly defined project objectives, which represent a base for the further development of the different approaches used in project management (quality management, economic optimisation, risk management). This is clearly at odds with the needs of the Project Definition phase. During this phase, the project team defines the project objectives, and therefore the existing approaches and methodologies can hardly be implemented. Louafa (2004) also confirms that project characteristics like the ones we described for the first phase accentuate the limits of existing project management methodology. The above-stated problems of project planning, organisation and coordination within



this phase aggravate project control. The only existing control within this phase was possible at its very end, and at the upper management level, i.e. the company's Senior Directors. During this phase, the project manager does not have any insight into the global project progress related to the convergence and coherence of project objectives, and thus no possibility of introducing corrective activities (Jankovic, Bocquet, Bavoux, Stal Le Cardinal, & Louismet, 2006). The control point was at the end of this phase, where the project team obtains a "go or no go" decision from top management. In the case of a "no go" decision, the deadline for the vehicle development is automatically extended; this extension can be up to several months. Such additional delay is not acceptable in current conditions, where a global drive for time reduction is ongoing. There is another danger concerning the control problems. The Project Definition phase influences and determines project success: if there is no control of the validity of the project objectives, the whole project is at stake. The whole first phase at PSA Peugeot Citroen is a collaborative decision-making phase. Collaborative decision-making is collective decision-making in which different actors have different, and often conflicting, objectives in the decision-making process. The decision-making actors in the Project Definition phase are experts in different domains, with different information and knowledge concerning the problem. Therefore, their vision of the problem is "coloured" by their knowledge and by the aspects that they are concerned with. The fact that every decision maker has different objectives also implies that they have different priorities concerning the decision values and alternatives. Hence, collaborative decision-making represents a rich way to generate decision alternatives and helps the project team identify decision impacts.
The problems related to collaborative decision-making and project management concern several levels (Jankovic, Bocquet, Bavoux, Stal Le Cardinal, & Louismet, 2006):







• Collaborative decision level: the problem of identifying appropriate information about important decision elements. For example: who are the actors in the collaborative decision, what information do the decision makers need at the moment of decision making, what is the criticality level of the needed information, what causes the conflicts in collaborative decisions, etc.
• Collaborative decision-making process level: the difficulty of determining the influence of collaborative decisions on other activities or decisions later in the Project Definition phase. For example: what decisions are to be made before and after, what decisions will be influenced by the present collaborative decision (i.e. what project objectives will be influenced), what activities are influenced by this collaborative decision, etc.
• Project level: the difficulty of implementing the existing project management methods and tools in the management of this phase. For example: the base of project management is to identify the activities constituting a phase in new product development, determined by the project team in accordance with the project goals. The problem is that the project objectives are not yet defined in this phase, and the project complexity does not facilitate the identification of activities.
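The elements listed at the collaborative decision level (actors, needed information, criticality, conflict causes, downstream influences) could be captured per decision in a simple record. The schema below is a hypothetical sketch of such a capture, not a structure from the original study.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CollaborativeDecisionRecord:
    """Elements to identify for one collaborative decision (illustrative schema)."""
    actors: List[str]
    needed_information: List[str]
    criticality: str                                      # e.g. "low" / "medium" / "high"
    conflict_causes: List[str] = field(default_factory=list)
    influences: List[str] = field(default_factory=list)   # later decisions/activities affected

# Hypothetical example from a vehicle development project.
rec = CollaborativeDecisionRecord(
    actors=["project manager", "design expert", "production expert"],
    needed_information=["weight objective", "cost objective"],
    criticality="high",
    conflict_causes=["cost vs. weight trade-off"],
    influences=["powertrain choice decision"],
)
print(rec.criticality)
```

Filling such a record for each of the identified decisions would make the process-level question — which later decisions are influenced — answerable by following the `influences` field.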

Collaborative Decision Making Model and Its Application

Our research on collaborative decision-making is supported by the field research done in collaboration with PSA Peugeot Citroen. In view of the specific context of collaborative decision-making in development projects, the objectives of our research were twofold:

• To help the decision-makers, in this case the project team, in the collaborative decision-making process,





• To help the project team in managing the project.

Therefore, we have developed a conceptual model of collaborative decision-making. The aim of this model is to identify and define the intrinsic information and elements necessary for good quality decision-making. This model represents a result corresponding to the decision support objective. It is also the base of a project management tool we have developed; this tool will not be detailed in this article. The conceptual collaborative decision-making model is built upon the systems theory of Le Moigne (1990). If we consider the systems theory, in collaborative decision-making the decisional system is common to two or more operational systems. This can be represented as in Figure 2. The decision that is taken concerns a joint field of these two processes. Le Moigne (1990) defines the concept of the General System as a representation of an active phenomenon, comprehended as identifiable by its project in an active environment, in which it functions and transforms teleologically. Therefore, we have developed four different views in the collaborative decision-making model: Objectives View, Process View, Transformation View and Environment View (see Figure 3). These views are interdependent and are not to be taken separately.

Figure 2. Collaborative decision-making (diagram: two systems, each with its own objectives at level N-1, decision system, information system and operational system, are linked by a collaborative decision whose collaborative decision objectives sit at level N, supported by a global information system)

Figure 3. Four different views of the model (diagram: the General System at the centre, surrounded by the Process, Environment, Objectives and Transformations views)


Objectives View concerns the objectives of collaborative decision-making. This view takes into account the different objectives influencing the process, as well as their relationships. The collaborative decision-making objectives are also influenced by the actors' preferences.

Environment View covers the complex surrounding system, living and non-living, that has multiple relationships with the observed object and thus influences its behaviour. Three different environments influence collaborative decisions in New Product and Process Development: the Decision environment, the Project environment and the Enterprise environment. Each of these environments is identified by its context, which determines the factors influencing collaborative decision-making, and by the actors relevant to it. Thus the Decision Environment is identified by the decision-making context and the actors participating in the collaborative decision-making process. This environment is influenced by the Project Environment, equally defined by the Project Context and the Project Influence Groups. Both the Project and Decision Environments are influenced by the Enterprise Environment, identified by its context and actors.

Process View represents the process of collaborative decision-making. Collaborative decision-making is a complex human-interaction and human-cognition process. We have therefore identified three general phases of the collaborative decision-making process: identification of the need for decision-making, the decision-making phase, and implementation and evaluation. In the model we underline that every process implies the use of resources, human or material. The collaborative decision-making process is mostly a human process; nevertheless, making a decision sometimes requires a physical or digital mock-up, and these resources have to be planned as well.

Transformation View describes changes performed on information, which can be spatial (transfer of information) or of form (transformation of information into new information). These transformations fall into two groups: preparatory transformations, which are required in order to have available the elements necessary to decide upon, and implementing transformations, which are performed in order to implement the decided solution.
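To make the structure of the model concrete, the four interdependent views can be represented as a simple data structure. The following Python sketch is purely illustrative; all class and field names are our own and do not appear in the original model.

```python
from dataclasses import dataclass, field

# Hypothetical representation of the four views of the collaborative
# decision-making model (Objectives, Environment, Process, Transformation).

@dataclass
class ObjectivesView:
    objectives: list = field(default_factory=list)            # decision objectives
    relationships: list = field(default_factory=list)         # influence links between objectives
    actor_preferences: dict = field(default_factory=dict)     # actor -> preferred objectives

@dataclass
class EnvironmentView:
    # Each environment is identified by its context and its actors.
    decision_env: dict = field(default_factory=dict)
    project_env: dict = field(default_factory=dict)
    enterprise_env: dict = field(default_factory=dict)

@dataclass
class ProcessView:
    # The three general phases identified in the model.
    phases: tuple = ("identification of need", "decision-making",
                     "implementation and evaluation")
    resources: list = field(default_factory=list)   # human or material (e.g. mock-ups)

@dataclass
class TransformationView:
    preparatory: list = field(default_factory=list)    # gather the decision elements
    implementing: list = field(default_factory=list)   # implement the decided solution

@dataclass
class CollaborativeDecision:
    objectives: ObjectivesView
    environment: EnvironmentView
    process: ProcessView
    transformations: TransformationView
```

The interdependence of the views is reflected in the top-level `CollaborativeDecision` record, which is only complete when all four views are populated.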

A Decision Support Framework for Cooperative Decisions

In her work, Zaraté (2005) proposes a Cooperative Decision Support framework. It is composed of several packages:

• An interpersonal communication management system,
• A task management system,
• A knowledge management tool,
• A dynamical man/machine interactions management tool.

This framework is described in Figure 4.

The interpersonal communication management tool must, as in every kind of CSCW tool, enable users and decision-makers to interact with one another very easily. The dynamical man/machine interactions management tool must guide users through their problem-solving processes in order to resolve misunderstandings. The knowledge management tool must store the previous decisions made by the group or by other groups. In very similar situations the system must propose solutions, or parts of solutions, to the group; in a different situation it must be able to propose the most appropriate solution, which the users may accept or reject. This tool is based on a knowledge management tool and case-based reasoning tools.

The most deeply developed part of the system is the task management system. The objective of this tool is to propose solutions, or parts of solutions, to users. It calculates the scheduling of tasks and sub-tasks and the role assigned to each task, and it proposes an assignment of tasks to users or to the system itself. The tool is based on a Cooperative Knowledge Based System developed at the IRIT laboratory, whose architecture rests on libraries of models: users' models, domain models (or problem models) and contextual models. The calculation of the proposed solutions draws on several techniques: planning tools (for more details see Camilleri, 2000) and linear programming (see Dargam, Gachet, Zaraté, & Barnhart, 2004). The main usage principle of this kind of tool is the interaction between the system and the users: the system proposes a solution to the group, the group takes charge of some tasks, the system then recalculates a new solution and proposes it to the group, and so on. The problem, or the decision to be made, is thus solved step by step, with each actor (system and users) solving parts of the problem.

Figure 4. Cooperative decision support framework architecture (diagram: the user and other users interact through a dynamical HCI with the CDSF, which comprises task management, knowledge management and interpersonal communication modules, a model base management system (MBMS), a database management system (DBMS), and a knowledge base)

This tool was developed in part independently of the proposed conceptual model of the collaborative decision-making process. Our objective is to show how it could be useful for that model.
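The mixed-initiative principle of the task management system (the system proposes a plan, actors take charge of some tasks, the system replans over what remains) can be sketched as a simple loop. This is our own illustrative sketch, not the IRIT implementation; the planner is a stand-in stub.

```python
def cooperative_solve(tasks, plan_fn):
    """Mixed-initiative loop: the system proposes an assignment for
    some of the remaining tasks, actors take them over, and the
    system replans until every task has an actor."""
    remaining = set(tasks)
    assignment = {}
    while remaining:
        proposal = plan_fn(remaining)   # system proposes {task: actor}
        assignment.update(proposal)     # users accept (or could reject) it
        remaining -= set(proposal)      # replan over what is left
    return assignment

# Toy planner standing in for the knowledge-based planner: it plans one
# task per round, alternating between the user and the system.
def toy_planner(remaining):
    task = sorted(remaining)[0]
    return {task: "user" if len(remaining) % 2 else "system"}

result = cooperative_solve(["t1", "t2", "t3"], toy_planner)
# result == {"t1": "user", "t2": "system", "t3": "user"}
```

In the real framework the planner would be backed by the model libraries (user, domain and contextual models) rather than a fixed alternation rule.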

Complementary Study of Model and Decision Support Framework

Our idea is to compare the developed model of the collaborative decision-making process with the proposed framework in order to identify missing representations.

Communication tools are very important for cooperative decisions, but also for collaborative ones. Even though these tools do not participate in the actual process of collaborative decision-making, they are indispensable before and after it. That is, we consider that these tools participate in an expanded collaborative decision-making process.

Task management tools support task management and control through task definition and the assignment of tasks to different actors in the process. These tools fully support collaborative decision-making and are very important for companies. They can be implemented through Operational Research tools, such as optimisation, or through Artificial Intelligence tools, such as planning. The main idea of these tools is to propose a plan of actions or tasks to decision makers. As we see it, they are very well adapted to the Transformations View of the collaborative decision-making model, which contains information about tasks before and after decision-making, the deliverables necessary for good decision-making and important for implementation, responsibility assignments, etc. Nowadays it is not enough to optimise decision-making; the realisation of what was decided must also be managed and controlled.

Collaborative decisions are very complex because of the multitude of objectives, the influence of different environments, the participation of different actors, etc. Knowledge management tools have real utility in this process and can support the Objectives and Environment Views. In the Objectives View, it is very important to know the different objectives in collaborative decision-making processes and their relationships. In this kind of decision-making, decision makers do not have the same objectives (Jankovic, Bocquet, Bavoux, Stal Le Cardinal, & Louismet, 2006), and it is important to have all this information in order to manage the process and its inevitable conflicts. Information in the Environment View relates to the different contexts influencing decision-making, as well as the different actors and their roles in the decision-making process. All this information is essential for good-quality decision-making.
Some projects and their contexts are very similar, so examination of previous experiences can be very helpful.

The Human/Machine Interaction tool constitutes the last module of the architecture proposed by Zaraté (2005) and is very specific to computer science. As shown in this section, almost every part of our conceptual model could be supported by the proposed framework. Some parts do not correlate directly, because of the very nature of collaborative decisions. For example, the collaborative decision-making process mostly takes place in face-to-face relationships, so there is no need to develop a communication support for this part of the model. Nevertheless, given the complexity of collaborative decisions, we could imagine support for decision structuring (for example, structuring the different elements of one decision, or decomposing the problem in order to gain better insight). Besides these elements, there are model parts, such as decision makers' preferences or Groups of Influence in the company, that belong to the domain of human relations; these are easy to implement in existing tools such as Team Expert Choice, but difficult to maintain throughout the whole collaborative decision process.

Conclusion

In this article we have presented a comparative study of two research works addressing two different decision-making situations: collaborative decision-making, i.e. face-to-face, and cooperative decision-making, i.e. distributed synchronous decision-making. Even though the two studies do not have the same results (because they have different objectives), the comparative study has pointed out several complementary elements. The main conclusions of this work are the following:

• Collaborative decision making is no longer seen as a single fact but as a process,
• Even though the two decision types concern different decision situations, collective decisions in general show the same needs in terms of operational support,




• In order to support these processes in a better manner, Task Management tools and Knowledge Management tools are necessary.

The next step of this work will consist in defining a global conceptual model for collaborative decision-making that includes the functionalities proposed by the cooperative decision support framework. This global conceptual model will be the basis of the Data Base Management tool defined in the CDSF. The main objective of our work is to define a tool for collaborative decision-making processes generic enough to support any kind of situation (see Table 1).

References

Baccarini, D. (1996). The concept of project complexity - a review. International Journal of Project Management, 14(4), 201-204.

Bubshait, K. A., & Selen, W. J. (1992). Project characteristics that influence the implementation of project management techniques: a survey. Project Management Journal, 23(2), 43-46.

Camilleri, G. (2000). Une approche, basée sur les plans, de la communication dans les systèmes à base de connaissances coopératifs. Toulouse: Université Paul Sabatier.

Dargam, F., Gachet, A., Zaraté, P., & Barnhart, T. (2004). DSSs for planning distance education: a case study. In Decision Support in an Uncertain and Complex World: The IFIP TC8/WG8.3 International Conference 2004.

Jankovic, M., Bocquet, J.-C., Bavoux, J.-M., Stal Le Cardinal, J., & Louismet, J. (2006). Management of the vehicle design process throughout the collaborative decision making modeling. Integrated Design and Manufacture in Mechanical Engineering - IDMME06, Grenoble, France.

Jankovic, M., Bocquet, J.-C., Stal Le Cardinal, J., & Bavoux, J.-M. (2006). Integral collaborative decision model in order to support Project Definition phase management. International Design Conference - Design 2006, Dubrovnik, Croatia.

Le Moigne, J.-L. (1990). La modélisation des systèmes complexes. Paris: Dunod.

Louafa, T. (2004). Processus de décision en management de projet intégré. Retrieved from http://infoscience.epfl.ch/record/63797/files/Louafa_Synth%C3%A8se%20SMP_ASO%2012.05.04.pdf

Marxt, C., & Hacklin, F. (2004). Design, product development, innovation: all the same in the end? A short discussion on terminology. International Design Conference - Design 2004, Dubrovnik.

Melles, B., Robers, J. C. B., & Wamelink, J. W. F. (1990). A typology for the selection of management techniques in the construction industry. CIB 90 Conference Building Economics and Construction Management, Sydney.

Morris, P. W. G. (1988). Initiating major projects - the unperceived role of project management. 9th INTERNET World Congress on Project Management, Glasgow.

Morris, P. W. G., & Hough, G. (1987). The Anatomy of Major Projects: A Study of the Reality of Project Management. John Wiley & Sons.

Sankaran, S., & Bui, T. (2008). An organizational model for transitional negotiations: concepts, design and applications. Group Decision and Negotiation, 17, 157-173.

Shim, J. P., Warkentin, M., Courtney, J. F., Power, D. J., Sharda, R., & Carlsson, C. (2002). Past, present, and future of decision support technology. Decision Support Systems, 33(2), 111-126.

Wagner, G., Wynne, B., & Mennecke, B. (1993). Group support systems facilities and software. In L. Jessup & J. Valacich (Eds.), Group Support Systems: New Perspectives. MacMillan Publishing Company.

Whelton, M., Ballard, G., & Tommelein, I. D. (2002). A knowledge management framework for project definition. Electronic Journal of Information Technology in Construction - ITcon, 7, 197-212.

Zaraté, P. (2005). Des Systèmes Interactifs d'Aide à la Décision aux Systèmes Coopératifs d'Aide à la Décision. Toulouse: Institut National Polytechnique de Toulouse.

Zaraté, P., & Soubie, J.-L. (2004). An overview of supports for collective decision making. Journal of Decision Systems, 13(2), 211-221.

Marija Jankovic is an assistant professor in the Department of Industrial Engineering at Ecole Centrale Paris. She completed her PhD, in collaboration with PSA Peugeot Citroen, on collaborative decision making in complex industrial engineering projects. Her current research focuses on project engineering and methodology, collaborative working and decision making, and project optimisation using concepts such as robustness and reliability.

Pascale Zaraté is an assistant professor at Toulouse University (INPT). She conducts her research at the IRIT laboratory. She holds a PhD in computer science / decision support from the LAMSADE laboratory at Paris Dauphine University (1991). She received her (French) accreditation to (independently) supervise research in 2005 on the subject "Cooperative Decision Support Systems." Her current research interests include distributed and asynchronous decision-making processes, knowledge modeling, cooperative knowledge-based systems, and cooperative decision making. Since 2000 she has been head of the Euro Working Group on DSS. Since obtaining her PhD, she has edited a book, 9 special issues of international journals and 4 workshop proceedings, and published 29 papers, 10 of them in international scientific journals such as the Journal of Decision Systems, Group Decision and Negotiation, and EJOR.

Jean-Claude Bocquet is the head of the Department of Industrial Engineering at Ecole Centrale Paris. He received an MSc (1978) in mechanical engineering and a PhD (1981) in artificial intelligence in CAD/CAM systems from Ecole Normale Superieure de Cachan. He received his (French) accreditation to (independently) supervise research in 1990 on the subject "Computer Aided Design Engineering."

Julie Le Cardinal is an assistant professor in the Department of Industrial Engineering at Ecole Centrale Paris. Her research interest is in discovering new methodologies of design and management in an industrial context. She completed her PhD in 2000 with a study of dysfunctions within the decision-making process, with a particular focus on the choice of actor. She now conducts research on decision-making in project and multi-project contexts, the capitalisation of dysfunctions within projects, and the optimisation of project processes. She holds an MSc from Ecole Centrale Paris (1997) and a master of engineering in mechanical engineering-industrial design (University of Technology of Compiègne, France).


46 International Journal of Decision Support System Technology, 1(1), 46-68, January-March 2009

Developing a DSS for Allocating Gates to Flights at an International Airport Vincent F. Yu, National Taiwan University of Science and Technology, Taiwan Katta G. Murty, IOE, University of Michigan, Ann Arbor, USA Yat-wah Wan, GSLM, National Dong Hwa University, Taiwan Jerry Dann, Taiwan Taoyuan International Airport, Taiwan Robin Lee, Taiwan Taoyuan International Airport, Taiwan

Abstract

Typically, international airports have features quite distinct from those of regional airports. We discuss the process of developing a Decision Support System, and appropriate mathematical models and algorithms to use, for making gate allocation decisions at an international airport. As an example, we describe the application of this process at Taiwan Taoyuan International Airport to develop a DSS for making gate allocation decisions for its passenger flights.

Keywords: airport operations; DSS (decision support system); gate allocations to flights; international airport

Introduction

The problem of assigning gates to flights of various types (arrival, departure, connection, and intermediate parking flights) is an important decision problem in daily operations at major airports all over the world. Strong competition between airlines and increasing passenger demand for comfort have made the quality of these decisions an important performance index of airport management. That is why mathematical modeling of this problem, and the application of OR (Operations Research) methods to solve those models, have been studied widely in the OR literature. The dynamic operational environment in modern busy airports, increasing numbers of flights and volumes of traffic, uncertainty (random deviations of data elements such as arrival and departure times from flight timetables and schedules), its multi-objective nature, and its combinatorial complexity make flight-gate allocation a very complicated decision problem from both a theoretical and a practical point of view.

Responsibility for gate allocations to flights rests with different agencies at different airports. At some airports gate allocation decisions are made by the airport management themselves for all their customer airlines. At others, some airlines lease gates from the airport on long-term contracts, and those airlines then make gate allocation decisions for their flights themselves.

Typically, international airports have features quite distinct from those of regional airports. The common characteristics of busy international airports all over the world are: they usually serve a large number of different airlines; they normally serve a large number of flights spread over most of the 24-hour day; they have to accommodate planes of various types and sizes; and a considerable percentage of their flights are long-haul flights coming from long distances. These features, and the fact that international airports are much bigger and have much higher volumes of traffic than regional or domestic airports, make the problem of assigning gates to flights at an international airport somewhat harder in practice than at a regional airport.

In this article we discuss the process of developing a DSS (Decision Support System), and appropriate mathematical models and algorithms to use, for making gate allocation decisions at an international airport. Normally international airports have both cargo and passenger flights, but in this article we consider gate allocation decisions only for passenger flights. As an example, we describe the on-going work to develop a DSS at TPE (Taiwan Taoyuan International Airport), the busiest international airport serving all of Taiwan, to help the team of Gate Allocation Officers make their decisions optimally and efficiently.
Being the busiest airport in Taiwan, TPE has all the characteristics mentioned above. Here is a quick summary of the characteristics most important for the gate allocation decision at TPE. TPE serves about 40 international airlines, and gate allocation decisions for all flights at this airport are handled by the airport itself. TPE has a team of 18 flight operations officers (working three shifts) responsible for these decisions. TPE handles on average about 420 regularly scheduled flights per normal working day (varying from 390 to 450 per day) and 20 irregular (i.e., unscheduled) flights per day (varying from 10 to 40 per day), depending on the day of the week. Friday is usually the busiest day of the week, and holidays (Saturdays, Sundays, and other national holidays) are also busy in comparison to other working days. The Chinese (lunar calendar) New Year vacation days, which usually fall in January or February, are the busiest of the year at TPE; the number of flights on days just before and after the Chinese New Year may be well over 500.

A more complete description of the gate allocation problem at TPE is given in the next section. We describe the procedures currently used at TPE, the mathematical models being developed, the procedures that will be used to solve these models when the DSS is fully implemented, and the expected benefits. We also discuss important design features of the DSS and how it will be incorporated into daily operations.

The Nature of the Gate Allocation Problem at TPE

TPE is located approximately 40 kilometers south of Taipei City, the capital and largest city of Taiwan. Currently there are two terminals at TPE. Terminal 1 started operations in 1979, during a period of the most dramatic economic growth in Taiwan's history. As a result, traffic volume at TPE grew rapidly and soon exceeded the original designed capacity. The situation was temporarily relieved after Terminal 2 was put into operation in 2000. As the volume of airline traffic continues to rise steadily, TPE will face capacity problems in the near future; even now, during national holidays and the typhoon season, the current capacity is tight. During national holidays such as the Lunar New Year, many non-regular chartered flights are scheduled so that business people can return to Taiwan from Mainland China to spend the holidays with their families. During the typhoon season (normally from June to September) regular flight schedules are often disrupted, as many flights are delayed or cannot land or depart because of the weather. Even on normal days the airport is often covered by severe fog, and may then require extra space for parking planes to avoid serious flight delays. In view of this, the government is planning a third terminal. However, the new terminal may not be ready in the near future due to legislative issues, long construction time, budget constraints, etc. Improving the operational efficiency of existing facilities seems to be the only way to mitigate the capacity issues faced by TPE. One of the most critical operating issues is the allocation of gates to flights: a good gate allocation plan may greatly improve the airport's operations and increase customer satisfaction.

Current Gate Allocation Process at TPE

At present, the gate allocation plan at TPE is generated semi-manually by the 18 flight operations officers (FOOs) of the Flight Operations Section (FOS), who work three shifts around the clock. Normally the FOS receives the arrival/departure flight schedule for the next day from the airlines around 2 PM. The flight schedule information may arrive on a floppy disk or on paper; in the latter case, the FOOs on duty must manually enter the data into their computer system, flight by flight. The computer then makes initial gate allocations for each flight, subject to manual adjustments by the FOOs on duty. The initial gate allocation plan for the next day's flights is completed by 10 PM and distributed to the respective airlines.

The computer system employed by TPE for making gate allocations is an on-line heuristic rule-based system incorporating rules drawn from the FOOs' experience. Basically, gate allocation is done in a fashion similar to a "first come first served" strategy. The airport authority and the airlines have an agreement on the preference of gates for flights: each airline has its own set of first-preference gates and second-preference gates. If all these gates are occupied at the arrival/departure time of a flight, the flight may be allocated a gate outside the two aforementioned categories. Thus, for each flight, the system first tries to find a gate in the first-preference category that is free at the flight's arrival/departure time. If none is available, it tries the second-preference category, and the process continues down to the third and even the fourth preference category (emergency gates). At present, this process almost never fails to find a gate for a flight. The FOO also has the authority to manually shorten a flight's gate occupancy time so that all flights have a gate to use at their arrival/departure times; most such adjustments occur during peak periods. The system also incorporates other rules dealing with flight-gate compatibility, overnight transit flights, the gate maintenance schedule, private jets, and emergency gates.

Due to uncertainty in flight departure/arrival times, the initial plan needs to be adjusted throughout the planning day. The FOOs try not to make too many changes to the initial plan, to reduce additional communication work and disturbance to the airlines and passengers. This part of the work is done manually, and there are no well-defined rules for the adjustments. According to airport personnel, about 90% of the original gate allocations remain unchanged on normal days; during unusual periods, such as national holidays or severe weather, this number may go down significantly.

In gate allocation, the airport's main concern is making sure that all flights are gated upon their arrival/departure and, if possible, allocated their most preferred gates; at present most airports do not pay much attention to other aspects of gate allocation. At TPE, airport charges, the navigation aids service charge, and the noise charge are set by Taiwan's aviation authority (CAA, Civil Aeronautics Administration); they include the overfly noise charge, landing charge and bridge fees, charges for use of the ground handling area and facilities, and other charges [CAA 2007]. These charges do not vary with gate location. For example, the noise charge is calculated according to each aircraft's maximum take-off weight and take-off noise level, where the aviation noise control area of an airport is announced by the city or municipal government, and the boarding bridge or bus charge is calculated according to the aircraft's number of seats and frequency of use. Therefore, these charges are not relevant to making gate allocation decisions.
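The preference cascade used by TPE's current rule-based system (first-preference gates, then second, then third, then emergency gates) can be sketched as follows. This is our own illustrative sketch: the gate names, tier contents and availability check are hypothetical, and the real system also applies flight-gate compatibility, maintenance-schedule and other rules.

```python
def allocate_gate(flight, preference_tiers, occupied):
    """Scan the preference tiers in order (1st preference, 2nd
    preference, 3rd, emergency) and return the first free gate.
    Returns None if every gate in every tier is occupied, in which
    case an FOO must shorten some flight's gate occupancy time."""
    for tier in preference_tiers:
        for gate in tier:
            if gate not in occupied:
                occupied.add(gate)   # reserve the gate for this flight
                return gate
    return None

# Hypothetical airline preference tiers and current occupancy.
tiers = [["A1", "A2"],        # first-preference gates
         ["B1", "B2"],        # second-preference gates
         ["501", "502"],      # third preference (apron gates)
         ["701", "702"]]      # emergency gates
occupied = {"A1", "A2", "B1"}
gate = allocate_gate("CI100", tiers, occupied)
```

With the occupancy above, both first-preference gates and one second-preference gate are taken, so the cascade falls through to the first free second-preference gate.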

TPE's Concern about the Current Practice

The current gate allocation planning process at TPE seems to work reasonably well, as most flights are allocated gates in sufficient time before their arrival/departure. However, it is labor intensive, keeping at least three FOOs occupied during each shift (a total of 9 man-days of FOO time daily). With all this work, the FOOs are hard pressed for time, particularly during peak periods of the day. Also, as traffic volume grows continuously, the current practice may not remain viable for long. In addition, the airport authority wonders whether the current system gives the best gate allocation plan, and what the quality of the plan is in terms of the many objectives employed by other airports and proposed by researchers.

Soon after one of the authors completed an experimental study of the gate assignment problem for Kaohsiung International Airport in 2006 (Yu and Chen, 2007), a small team of authors from several universities in Taiwan and the U.S. started this project with TPE's FOS. Since then, the authors from academia have met with the authors from TPE several times to clarify the operational constraints, considerations, and objectives, before proposing the mathematical model and heuristics and getting them approved by airport officials. We now summarize the current operating conditions relevant to gate allocation at TPE, before giving a detailed description, in subsequent sections, of our mathematical models and heuristics for solving the gate allocation problem at international airports in general, and at TPE in particular.

Gates and Aircraft Types

Currently TPE regularly uses 78 gates (A1-A9, B1-B9, C1-C10, D1-D10, 501-525, and 601-615) to serve 27 different aircraft types, ranging from the small CL-604 to the large Boeing 777 and Airbus A380. Among the 78 gates, 30 are apron gates (501-525 and 601-615), i.e., open-air gates without a bridge; passengers must be taken by coach between these gate positions and the terminal building, as shown in Figure 1. In addition, there are three emergency gates, 701-703, which are also apron gates.

Various Flight Types

TPE serves 40 international airlines and, on an average day, 420 regular flights and 20 irregular flights (chartered flights, private planes, etc., that do not operate on a regular schedule). During weekends (Fridays and Saturdays) and holidays, traffic increases to about 450 regular flights and 40 irregular flights. Generally speaking, the number of regular flights ranges from 390 to 450, and the number of irregular flights from 10 to 40.

Copyright © 2009, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

50 International Journal of Decision Support System Technology, 1(1), 46-68, January-March 2009

Figure 1. Gate Layout of TPE. Gates in the A, B, C, D sections are those with passenger bridges (also referred to as "regular gates"). All other gates are apron gates, i.e., open-air locations without passenger bridges. Emergency gates 701-703 are much farther from the terminals than the other gates, and hence are used only when no other gates are available. "Central gates" refers to gates considered to be in the central portion of TPE; these are the gates in the A, B, C, D sections. TPE operates a Skytrain line between Terminal 1 and Terminal 2, with a train running back and forth every 2 to 5 minutes. Using it if necessary, passengers can get from any central gate to any other within 10 to 15 minutes.

There are three types of flights in terms of their origin and final destination: arrival flights, whose final destination is TPE; departure flights, whose origin is TPE; and transit flights, which make a brief stop at TPE and then depart for their final destination. On average, there are about 91 arrival flights, 93 departure flights, and 132 transit flights per day. Recall that each transit flight needs both an arrival gate and a departure gate; so each transit flight involves two gate allocation decisions, though as far as possible a transit flight stays at the same gate for arrival and departure.

Peak and Off-Peak Periods

We divide the day into 48 thirty-minute intervals and number them serially 1 to 48, with 1 representing the 12 midnight to 12:30 AM interval. In Figure 2, we plot the average number of flights arriving/departing in each half-hour interval of the day, based on June 2007 data at TPE. From this we see that there are two peak periods of the day for the number of flights arriving/departing: one between 7 and 10 AM, and another between 3 and 5 PM. Corresponding charts for the data in various months of 2007 confirm the same observation. However, this pattern is somewhat different for different days of the week (Sunday to Saturday). The peak periods offer the most challenge for solving the gate allocation problem; in off-peak periods the problem is relatively easy to solve.

Figure 2. This figure shows the typical averages of the total number of flights arriving or departing in each 30-minute interval of a day. Time intervals, numbered serially 1 to 48 (1 is the time slot from midnight to 12:30 AM), are shown on the horizontal axis. On the vertical axis we plot the number of flights arriving or departing in the interval.

Gate Allocation Policy

The airport does not give any preference to any airline in making the gate allocations (however, see Section 9, which explains that TPE gives preference to flights with larger passenger volumes, and to regular flights, independent of the airline). Basically they employ the "first arrival/departure first assigned" policy for all flights, regardless of airline. TPE normally allocates 60 minutes of gate time to arrival flights and 90 minutes to departure flights, for flight preparation, passenger disembarking and embarking, flight clean up, ground service to the plane, etc. For transit flights, the allocated gate time is 150 minutes, the sum of the gate times of an arrival flight and a departure flight. However, these times may vary depending on the size of the aircraft and the time of day. During peak times, the FOOs may shorten the gate time to accommodate more flights. On average, a 30-minute buffer time is scheduled between two flights using the same gate, to account for the uncertainty in flights' actual arrival/departure times. This buffer may also be cut short during peak hours, as the FOOs try to persuade flights occupying gates to finish up their work as soon as possible. The FOS has the full cooperation of all the airlines in this effort during peak periods.
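As an illustration, the nominal gate times and the 30-minute buffer described above determine a gate occupancy window for each flight, and two flights can share a gate only if their windows plus the buffer do not overlap. A minimal sketch (function and variable names are ours; the minute values are the nominal ones quoted above, before any peak-time shortening by the FOOs):

```python
from datetime import datetime, timedelta

# Nominal gate times at TPE, in minutes (values from the text above).
GATE_TIME = {"arrival": 60, "departure": 90, "transit": 150}
BUFFER = timedelta(minutes=30)  # scheduled between two flights at the same gate


def occupancy_window(flight_type: str, ref_time: datetime):
    """Return the (start, end) interval a flight is expected to occupy its gate.

    For arrivals and transits, ref_time is the landing time, so the window
    starts there; for departures, ref_time is the scheduled departure time,
    so the window ends there.
    """
    span = timedelta(minutes=GATE_TIME[flight_type])
    if flight_type == "departure":
        return ref_time - span, ref_time
    return ref_time, ref_time + span


def conflicts(window_a, window_b) -> bool:
    """Two flights may share a gate only if their occupancy windows,
    padded by the 30-minute buffer, do not overlap."""
    start_a, end_a = window_a
    start_b, end_b = window_b
    return not (end_a + BUFFER <= start_b or end_b + BUFFER <= start_a)
```

For example, an arrival landing at 8:00 AM occupies its gate until 9:00 AM, so with the buffer the same gate is free for a transit flight landing at 9:45 AM, but not for one landing at 9:15 AM.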

Frequency of Gate Shortages

As stated earlier, currently most flights are gated well before their arrival/departure. Besides, a few minutes of waiting on the taxiway is acceptable to the airlines. The few occasions when a flight needs to wait for an available gate usually occur during peak times, when many national carriers' flights are returning to their base. The case in which the airport is so congested that an arriving flight has to circle the airport has never happened at TPE so far.

Organization of the Article

The article is organized as follows. In Section 3, we discuss the data elements and the uncertainties associated with them. In Section 4, the various objectives to be optimized are explained in detail. In Section 5, we briefly review the mathematical models for the gate allocation problem, and algorithms to solve these models, discussed in previous work. In


Section 6, we describe the outputs desired by the airport authorities and the scheme used in our solution approach. The strategy used for making gate allocation decisions for a Planning Day in our approach is given in Section 7. Section 8 describes the mathematical models and procedures for gate assignments to flights in a planning interval in our approach. In Section 9, we summarize some of the important design features of the new DSS under development at TPE. In Section 10 we summarize the expected benefits and some intermediate results of this ongoing project, and in Section 11 we draw conclusions.

Data Elements, and the Uncertainties in Them

Flights use different types of planes (large-wingspan planes, short-wingspan planes, planes for long-haul flights, planes for short-haul flights, etc.) depending on the expected passenger volume on the flight, the length of the flight, and several other considerations. As mentioned above, airport gates are also classified into different types depending on their size, their location in the airport (those in the central portion of the airport, remote gates, etc.), and so on; these gate characteristics determine their desirability to airlines for their flights. So, for each flight for which a gate is to be assigned, we need the set of all the gates to which it can be assigned. This set of gates eligible to be assigned to a flight is classified into 1st (most preferred by the airline), 2nd (second most preferred), and 3rd (least preferred) categories in order of the airline's preference. At some airports there may also be a 4th category of gates, continuing this decreasing order of preference. This data is available, and it will be used in constructing the objective function to optimize for determining the allocation of gates to flights (discussed later in Section 4). At TPE, even though the classification of gates into the 1st, 2nd, and 3rd categories mentioned above differs from airline to airline, it tends to be very similar. Airlines like

to have their flights use gates in the terminal in which most of their activities take place. So, in general, for most airlines the first-category gates are those among the A, B, C, D sections in the terminal in which they have their operations. Remote gates 601 to 615 are in the 2nd category (most preferred when no 1st category gates are available) for their flights. Most airlines place the cargo gates (numbers 501 to 525) in the 3rd category (least preferred gates, used only when no 1st or 2nd category gates are available). Emergency gates 701 to 703 may be considered a 4th category (least preferred among all the gates) for almost all the airlines; they are for emergency use and will only be used if no other gates are available. The other input data for gate assignment decisions are flight arrival and departure times. This data is subject to considerable uncertainty. Since the difference between the actual landing/departure time of a flight and its scheduled landing/departure time is a random variable, gate allocations for flights on a day cannot be finalized based on the information available the previous day about their landing/departure times. That is why, on day t – 1, the airport makes a tentative gate allocation plan for all the flights on the next day t, based on the information about their landing/departure times available on day t – 1; on day t, they revise these tentative gate allocations as needed by changes in flight landing/departure times. Normally, by 2 PM on day t – 1, all the tentative landing/departure time information for all the flights on day t becomes available. Using this information, the airport prepares the tentative gate allocation plan for day t by 10 PM on day t – 1.
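The gate-preference categories described above can be encoded as simple data. The sketch below is illustrative only: it assumes a hypothetical airline whose operations are concentrated in the A and B sections, and the exact category lists differ from airline to airline:

```python
# Illustrative encoding of the gate-preference categories described above,
# for a hypothetical airline operating out of the A and B sections at TPE.
GATE_CATEGORIES = {
    1: [f"A{i}" for i in range(1, 10)] + [f"B{i}" for i in range(1, 10)],
    2: [str(g) for g in range(601, 616)],   # remote apron gates 601-615
    3: [str(g) for g in range(501, 526)],   # cargo-area apron gates 501-525
    4: [str(g) for g in range(701, 704)],   # emergency gates 701-703
}


def category_of(gate: str, categories=GATE_CATEGORIES) -> int:
    """Return the preference category (1 = most preferred) of a gate
    for this airline's flights."""
    for cat, gates in categories.items():
        if gate in gates:
            return cat
    raise ValueError(f"gate {gate} is not eligible for this airline")
```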

The Various Objectives to be Optimized

There are various costs associated with gate assignments, which need to be optimized simultaneously. We discuss them in decreasing order of importance. Since the gate allocation problem is a multiobjective problem, we solve it by combining the


various objectives into a single penalty function, and then determining the gate allocations that minimize the combined penalty function. In this section we discuss the various objective functions considered, and the corresponding penalty coefficients being used in the DSS under development at TPE.

An Objective Commonly Considered in the Literature

In the published literature on the gate allocation problem in OR journals, the most frequently used objective is minimization of the total walking distance of all passengers inside the airport. This is also used as the most important goal in the design of airport terminals. This appealing objective is easily motivated and clearly understood, but it leads to very difficult models that can hardly be solved. A disturbing fact that has surfaced in the last few years is the large number of airline passengers feeling various degrees of uneasiness and actual sickness (some even suffering severe health consequences) from sitting without any physical activity for considerable periods of time on medium to long airline flights. In view of this, we feel that it is inappropriate to place great emphasis on minimizing the total walking distance of all airline passengers inside airports. On the contrary, passengers should perhaps be encouraged to walk around when they have the time. Moreover, this objective does not measure any real cost, and it is not high on either the airport's or any airline's list of important objective functions to be optimized. Also, the availability at many international airports nowadays of rapid transit service (also called Skytrain and other names) between the various terminals, or between clusters of gates separated by some distance, and of moving walkways inside each terminal to cover long distances, makes this objective function even less important. Airline managers that we talked to tell us that even though this objective function is emphasized heavily in the OR literature, their current practice of assigning gates to flights automatically takes care of this

objective function, because in it gates that are far away from the central part of the airport are assigned only when no gate in the central part is available at the time a flight lands. Compared to this objective function, the other objective functions like OBJ 1 and OBJ 2 discussed below are real costs that are considered high-priority objectives by both the airport and all the airlines. So, we will not consider this objective function in our model. However, there is a class of passengers, transfer passengers, who have only a limited time (an hour or so) and who are greatly inconvenienced by having to walk a long distance between their arrival and departure gates. One objective that we will consider is minimizing OBJ = the total walking distance of transfer passengers who have only a limited time to walk between their arrival and departure gates. The number of passengers on each flight, and the destination of each of them in the airport (either the exit, or the gate assigned to some other flight), fluctuate randomly and widely from day to day. So, to evaluate this objective function reasonably accurately, we need data on the number of passengers transferring between every pair of gates; modeling the problem of minimizing this objective function then needs binary variables with 4 subscripts, and consequently a large 0-1 integer programming model that is hard to solve. So, we will handle this OBJ indirectly. One way of achieving this objective is to make sure that arriving flights carrying more than a certain number of transfer passengers are assigned to gates in the central part of the terminal, i.e., gates more or less equidistant from gates in all corners of the terminal (this guarantees that wherever the departure gate may be, transfer passengers on those flights have to travel only a small distance to reach it). At TPE, the central part of the airport consists of the A, B, C, D sections.
We can identify the set of these centrally located gates in the terminal. For an arriving flight j with more than the prescribed number of transfer passengers, and an eligible gate i outside this central set, we include a penalty term


corresponding to this objective function in the penalty cost coefficient cij corresponding to this assignment. At TPE we incorporate this OBJ into OBJ 1, considered below. The 1st category gates mentioned above are all in the central part of TPE; the 2nd category gates are at some distance from the central part; and the 3rd category gates are much farther away. So, the penalty terms in OBJ 1 corresponding to the allocation of 2nd and 3rd category gates to such flights will reflect the impact of this objective OBJ in the penalty function.

OBJ 1: This objective measures the costs or penalties associated with the gate assigned to a flight. Emergency and cargo gates may be used only during periods of heavy flight traffic, when regular passenger gates (i.e., those with jetties or passenger bridges) are not available for assignment, and only for the types of flights for which such gates are suitable. If such a gate i is eligible to be assigned to a flight j, then OBJ 1 includes the penalty term corresponding to this assignment. Also, such an assignment usually results in charges to airlines for coaches, and sometimes towing charges, etc. These are real costs that the airlines and the airport want to minimize. So, whenever towing or coach charges are incurred in the assignment of an eligible gate i to a flight j, we include these costs (scaled appropriately) in the penalty coefficient cij1 corresponding to this assignment. Even when a regular gate i is assigned to a flight j, there may be a penalty corresponding to this assignment, depending on whether gate

i is in the central part of the airport or not, on whether flight j is carrying a lot of transfer passengers, and on the preference category (1st, 2nd, or 3rd, as mentioned above) to which gate i belongs for flight j. All these penalty terms corresponding to various gate assignments to flights are to be determined by the decision makers. However, the team of flight operations officers at TPE has not reached a consensus on what the numerical values of the penalty terms cij1 for allocating an eligible gate i to a flight j should be for this objective function. So, for the moment, we are experimenting with the following values (see Box 1).

OBJ 2: This objective measures the cost associated with the time the plane spends circling the airport, and waiting on the ground after landing before beginning to taxi to the gate. The plane is asked to circle the airport when there is no gate to receive it if it lands right away, and even the taxiway is too full for it to wait on after landing. In the past this kind of necessity for a plane to circle the airport occurred occasionally, but nowadays at most airports around the world it has become an extremely rare event. So, we ignore it in our mathematical model. After landing, the plane will be asked to wait on the taxiway if the gate i to which it is assigned is momentarily not free, or if the path from the landing point on the runway to gate i is momentarily blocked by some obstruction. So in OBJ 2, we will consider only the costs and penalties for planes having to wait

Box 1.

cij1 = 0, if gate i is in the 1st category for flight j;
       1, if it is in the 2nd category;
       3, if it is in the 3rd category;
       5, if it is an emergency gate (4th category).
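As a sketch, the Box 1 values amount to a simple lookup from the preference category of gate i (for flight j's airline) to the OBJ 1 penalty coefficient; the function name is ours, and the values are the experimental ones above:

```python
# Box 1 as a lookup: the OBJ 1 penalty coefficient c_ij^1 for assigning a
# gate of the given preference category. These values are still experimental.
C1_BY_CATEGORY = {
    1: 0,  # 1st category: most preferred gates
    2: 1,  # 2nd category
    3: 3,  # 3rd category
    4: 5,  # 4th category: emergency gates
}


def obj1_penalty(category: int) -> int:
    """Penalty term of OBJ 1 for allocating a gate of the given category."""
    return C1_BY_CATEGORY[category]
```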


on taxiways before beginning to taxi to the gate. If a flight lands at 8 PM, say, but the gate allocated to it is not going to be free until 8:15 PM, and taxiing needs only 5 minutes, then the plane will be asked to wait on the taxiway for 10 minutes before beginning to taxi to the gate. Such events occur during peak times of the day at most busy airports. They cause inconvenience to the passengers and the flight crew on the plane. We can measure the penalty for such an event by cj2 tij, where:

tij = time in minutes that the plane of flight j has to wait on the taxiway after landing, before beginning to taxi to gate i, if it is allotted gate i;

cj2 = the penalty per minute of waiting time on the taxiway for flight j.

The penalty coefficient cj2 may depend on the plane size, the average passenger load of flight j, etc. Suitable values for cj2 have to be determined by airport management, or the concerned decision makers. At TPE, the consensus is that up to 10 minutes of waiting time on the taxiway is acceptable for any flight, but occurrences of waiting times beyond 10 minutes are a cause for concern. Instead of minimizing a linear function of actual waiting times, the goal at TPE has been to minimize the number of such occurrences. So, if the allocation of gate i to flight j implies that the waiting time for flight j will be tij minutes, then we include a penalty term cij2 in the objective function, where:

cij2 = 0, if tij ≤ 10;
       1, if tij > 10.

OBJ 3: On each planning day this objective plays a role only in updating the tentative gate allocations made the day before for this day. At TPE they want to make as few changes as possible to the tentative gate allocations already made. At present, for 90% of their flights on average, the final gate allocation is the same as the tentative gate allocation made for that flight the day before, and TPE would like to keep as a goal that this should happen for 90% or more of their flights. Other airports may not consider this objective as important as TPE does. We will handle this objective by trying to minimize the number of changes made in the tentative gate allocations. While updating the gate allocations on the planning day, we will include a penalty term cij3 for choosing gate i for flight j in the final allocations, where:

cij3 = 0, if gate i is the tentative gate allotted to flight j;
       1, otherwise.

Brief Review of Previous Work on the Gate Allocation Problem, and How our Model for the Problem Differs From Those in Previous Work

Various decision support systems have been developed for the design and the operations of airports. Some of them provide comprehensive decision support for planning and operations of an airport (e.g., Foster, Ashford, and Ndoh (1995); Zografos and Madas (2007); Wijnen, Walker, and Kwakkel (2008)). Other decision support systems are more specific: for example, the disruption management of aircraft turnaround in Kuster and Jannach (2006), the movement of planes between gateways and runways in Herrero, Berlanga, Molina, and Casar (2005), the safety of runways under heavy rainstorms in Benedetto (2002), and the decision support for airport expansion in Vreeker, Nijkamp, and Ter Welle (2002). This article focuses on decision support for gate allocation, which will be the subject of our subsequent discussion. The gate allocation problem is the type of job shop scheduling problem in which, generally, a job (i.e., a flight) is served once by an available machine (i.e., an idle gate), with various constraints and objectives in


matching the jobs to machines. The details of the problem change with its constraints (including the sizes of flights, ready times of flights, closeness of gates to landside facilities, etc.), objectives (including walking distances of passengers, to carousels, during transit, or both; waiting times of flights on taxiways for gates; etc.), division of the time horizon (the whole horizon as a single time slot, or divided into multiple time slots), solution methods (i.e., optimization, rule-based techniques, meta-heuristics, simulation, etc.), and purpose (i.e., planning or real-time dispatching). For a single-slot problem that matches flights to gates without any additional constraint, the problem is a standard assignment problem if the components of the objective function depend on allocating a gate to a flight. The single-slot problem becomes a quadratic assignment problem if the components of the objective function depend on allocating a pair of gates to a pair of flights; e.g., minimizing the walking distance of transfer passengers requires simultaneously considering the gate allocations of two or more flights. In a multiple-slot gate allocation problem, flights are ready at different time slots. Such problems are generally computationally hard integer programs. Optimization-, simulation-, and rule-based heuristics have been applied to solve them. See Dorndorf et al. (2007) for a systematic overview of the gate allocation problem. The review by Qi et al. (2004) covers general scheduling problems in the airline industry, with gate assignment considered in some of the problems. As a handy objective, most papers include the walking distance of passengers as a component of the objective function; see, e.g., the pure distance-based objective in Haghani and Chen (1998); the passenger distance and passenger waiting time in Yan and Huo (2001) and Yan et al. (2002); and the number of assigned gates and passenger walking distance in Ding et al. (2004a) and Ding et al. (2004b).
Bolat (1999, 2000a, 2000b) does not consider walking distance in the objective functions. To handle the uncontrollable nature of flight arrivals, and to find the best tradeoff between the utilization of gates and the waiting of planes for them, Bolat (1999, 2000a, 2000b) proposes to minimize functionals of the slack times between successive usages of gates: the maximum slack time in Bolat (1999), and the sum of variances of slack times in Bolat (2000a, 2000b). All the above papers formulate computationally hard models, either as variants of quadratic assignment problems or as non-network-type linear integer programs. The problems are solved with a combination of optimization and approximation procedures (e.g., Yan and Huo (2001)), heuristics (e.g., Bolat (1999, 2000a), Haghani and Chen (1998)), meta-heuristics (e.g., the genetic algorithm in Bolat (2000b), simulated annealing and tabu search in Ding et al. (2004a)), and simulation (e.g., Yan et al. (2002)). Other airport management functions can be by-products of gate assignment; for example, the simulation framework in Yan and Huo (2001) can evaluate the effect of flexible buffer times for the stochastic arrival of flights, and can be used for real-time gate assignment. There are many rule-based systems that solve the gate allocation problem. Here we mention only Cheng (1997), which combines rules with optimization techniques. Readers interested in rule-based systems can check the references of Dorndorf et al. (2007).

Features of the Mathematical Model that will be Used in our DSS

The mathematical model that we will use for making gate allocation decisions in our DSS is described in the following sections. As discussed earlier, some of the important aspects in which our model differs from those in the previous literature are the following.

1. Minimizing the total walking distance of all the passengers inside the airport is the main focus of most of the publications in the previous literature. Our model does not consider this objective function at all; the many reasons for this are explained in Section 4. We do consider the total walking distance of the transfer passengers (who have limited time to get from their arrival gate to their departure gate), but handle it indirectly.

2. We do not rely on large-scale integer programming models for this problem, which require long solution times and complex software and are therefore impractical for routine daily use. The model that we use is a simple transportation model that takes only seconds to solve, and it is in fact more appropriate for the real problem.

3. In our gate allocation decisions, we take into account the "first arrived, first assigned" policy that all airports claim to practice. That is why our approach is close to on-line decision making. This also simplifies the model significantly. The previous literature seems to ignore this policy.

4. Our approach takes into account the uncertainty in flight arrival/departure times, and avoids the need to forecast data elements characterized by high uncertainty. Models in the previous literature assume that the data elements are given; presumably they depend on forecasts, which tend to be unreliable.

Outputs Needed and the Planning Scheme that we will Use

Outputs Needed

Experience at TPE indicates that for approximately 90% of the flights on each planning day, the landing time information available the previous day remains correct. Also, for most flights, the updated information about their landing times available about 3 hours before their actual landing is accurate. Keeping these facts in mind, TPE has developed the following goals for the gate allocation effort. Each day by 2 PM the airport has all the information on the scheduled landing times for

all the flights the next day. Using this data, it prepares by 10 PM a tentative gate allocation plan for all the next day's flights. For each arriving or departing flight on the planning day, the gate allocation should be finalized about 2 hours before its actual arrival or departure, based on the latest information available about it.

The Planning Scheme to be Used

Flights arrive, land, and depart continuously over time. So, arriving and departing flights form a continuous stream, and before a flight arrives or departs, we need to make a decision about its gate allocation. Suppose a flight A arrives at 8 PM and a gate, say gate 1, is allotted to it. Then gate 1 will be occupied by this flight in the period 8 to 10 PM, say, and is unavailable for allocation to other flights arriving in this interval. Thus the allocation of a particular gate to a flight limits the choice of gates for some flights arriving after it. Consequently, in the above example, the allocation of gate 1 to flight A at 8 PM may lead to undesirable allocations for other flights arriving between 8 PM and 10 PM. For this reason almost all the publications in the literature on the gate allocation problem insist on making the gate allocation decisions for all the flights in a day simultaneously, using a large mathematical model covering the whole day. Because of this, they claim that their models output the globally optimal gate allocation plan for the whole day, without getting trapped in sub-optimal plans over shorter intervals of time. At TPE the airport is committed to treating all its customer airlines equally, and to not giving special privilege or preference to any particular airline. This implies that the "first arrived, first assigned" policy should be adhered to strictly; a similar policy also holds for departing flights. This means that each flight should be allotted the best gate available for allocation at the time of its arrival or departure, irrespective of how this affects the availability of gates to flights arriving or departing after it.


For example, suppose two gates, 1 and 2, are the only ones available between 5 PM and 5:30 PM; of these, 1 is a 1st category gate and 2 is a 3rd category gate. Suppose flight A lands at 5 PM, and flight B lands at 5:15 PM. By this policy, we must assign gate 1 to flight A. It would violate the "first arrived, first assigned" policy to assign gate 2 to flight A for the sake of assigning the 1st category gate 1 to flight B, which arrives later than flight A. The "first arrived, first assigned" policy is one that all the airports in the world claim to adhere to. In our conversations with airport officials, we were told that they have to follow this policy in order to maintain good business relations with all their customer airlines. This policy implies that each flight should be allotted the best gate for it available at the time of its arrival or departure; hence, ideally, it is best to determine gate allocations by an on-line algorithm that makes real-time decisions for each flight based on the availability of gates at the time of its arrival or departure. However, with many flights arriving and departing within short durations, the necessity of announcing gate allocations 2 hours before the beginning of the planning interval based on the best information available at that time, and the desire to keep a large percentage of these allocation decisions unchanged as far as possible, it is very difficult to make gate allocations totally on-line for each flight just in time for its arrival or departure. So, we adopt the following practical strategy, which is close in spirit to on-line decision making and yet is easy to implement in practice. We select a short planning interval (like a 15-minute or 30-minute interval), and determine the best gate assignments for all flights arriving or departing in this interval, at gates that will be available for assignment at some point of time in this interval, minimizing the penalty function discussed in Section 4, using a simple static mathematical model.
If the optimum solution obtained violates the “first arrived, first assigned” policy for some flights, then it is easy to modify that solution (using swapping and other manual moves) into one which satisfies that property,

since the planning interval is short and at the time of decision making all the necessary data for this interval is known accurately. For this reason we have developed the following planning scheme for making gate allocation decisions. In contrast, models in the literature for gate allocation totally ignore the “first arrived, first assigned” policy.

Selection of the Planning Interval

We divide the day into short planning intervals for gate allocation decisions. Decisions are made for the intervals in chronological order, and decisions made for an interval are taken as fixed when making decisions for future intervals. In the spirit of keeping close to on-line decision making, we find that a planning interval length of 30 minutes is convenient and works well. So, we describe the mathematical model in terms of 30-minute planning intervals (the interval length can be changed from 30 minutes to a similar short duration as appropriate). When interval k is the planning interval, gate allocation solutions for flights arriving in time intervals ≤ k – 1 are fully known, and that information can be used to simplify many of the gate assignment constraints in the model for planning interval k. For example, the assignment of a large aircraft to a particular gate may imply that adjacent gates can only accept aircraft of a certain size, or are even completely blocked. So, if Gate 1 is going to be used by a large-aircraft flight in time interval k – 1, and that plane will continue to stay at that gate for some time during interval k, then the gates adjacent to Gate 1 can simply be made ineligible for allocation to flights using planes of non-acceptable size during planning interval k. Thus, our choice of a short planning interval both limits the effects of uncertainty in the data elements and makes it possible to solve the problem using a simpler mathematical model that is easier to solve. It is also easier to make simple manual modifications to the output allocation for implementation.
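The eligibility pruning described above can be sketched in code. This is an illustrative sketch, not the authors' implementation: the gate names, size classes, adjacency map, and the specific rule that a gate next to a large plane only accepts small planes are all hypothetical.

```python
# Hypothetical sketch: pruning a flight's eligible-gate set using gate
# occupancy decisions already fixed for interval k-1.
ADJACENT = {"G1": ["G2"], "G2": ["G1", "G3"], "G3": ["G2"]}

def eligible_gates(all_gates, occupied_into_k, flight_size):
    """Return gates assignable to a flight of the given size class in interval k.

    occupied_into_k maps a gate to the size class of the plane assigned to it
    in interval k-1 that stays into interval k (absent if the gate is free).
    Assumed rule: a gate adjacent to a gate holding a 'large' plane can only
    accept 'small' planes; a gate that is itself still occupied is ineligible.
    """
    eligible = []
    for g in all_gates:
        if occupied_into_k.get(g) is not None:
            continue                      # gate itself is still occupied
        blocked = any(occupied_into_k.get(a) == "large"
                      for a in ADJACENT.get(g, []))
        if blocked and flight_size != "small":
            continue                      # adjacency restriction from interval k-1
        eligible.append(g)
    return eligible

gates = ["G1", "G2", "G3"]
occupied = {"G2": "large"}                # G2 holds a large plane into interval k
print(eligible_gates(gates, occupied, "large"))   # prints []
print(eligible_gates(gates, occupied, "small"))   # prints ['G1', 'G3']
```

Because the ineligible gate-flight pairs are dropped before the model is built, the corresponding decision variables never enter the optimization at all.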

Copyright © 2009, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


Strategy Used for Making Gate Allocation Decisions for a Planning Day

The next section gives the mathematical model for making gate allocation decisions in a single 30-minute planning interval, assuming that the arrival and departure times of all flights arriving or departing in that interval are known exactly, and discusses how to solve it. Here we discuss how to use that procedure to generate the outputs needed for the planning day.

To make the tentative gate allocation plan for all the flights on the planning day: This plan has to be prepared by 10 PM of the day before the planning day. The data used for making this tentative plan are the scheduled arrival and departure times of all the flights on the planning day, which become available by 2 PM of the day before the planning day. The planning day consists of k = 1 to 48 thirty-minute planning intervals. The allocation decisions in these intervals are made in chronological order, one after the other, starting with the first planning interval (00:00 hours to 00:30 hours), using the procedure described in the next section.

To update and make final gate allocation decisions for a planning interval on the planning day: Consider the kth planning interval. Gate allocation decisions for flights arriving or departing in this interval are finalized 2 hours before the beginning of the interval. Nowadays flight arrival and departure information is continuously updated, and this real-time information is delivered continuously to all airport organizations that use it. About 2.5 hours before the beginning of the kth planning interval, the arrival and departure times of flights in the interval are known reasonably precisely. Gate allocation decisions for the planning interval are finalized using that data with the procedure discussed in the next section. It is possible that some last-minute changes occur in the arrival or departure times of flights in the planning interval after the gate allocation plan for the interval is finalized. Nowadays such changes are rare and few in number. Any changes in gate allocations necessary to accommodate these last-minute changes in arrival or departure times are carried out by the gate allocation officers manually.

Procedure for Gate Assignments to Flights in a Planning Interval

Consider the kth planning interval on the planning day. In the tentative plan, gate allocation decisions in this interval are made using the scheduled arrival and departure times of flights available at the time of making the tentative plan, with the procedure described below. Work on finalizing the gate assignments for flights in this interval begins 2.5 hours before the beginning of the interval. By this time gate allocations for flights in the (k – 1)th interval have been finalized and are known, and the updated arrival and departure times of flights in this interval are quite precise. This is the data with which the procedure described below is used. There may be some flights expected to arrive towards the end of the (k – 1)th interval for which gates have not been assigned in the planning work for that interval. These flights are also considered for gate assignment in this kth planning interval. Let n be the number of flights that need to have a gate assigned in this planning interval k. This includes flights which depart or land at some point of time in this planning interval,


and flights that are expected to land earlier but have not been assigned a gate in the previous interval. J denotes the set of these flights, and the index j denotes a general flight in J.

i: the index used to denote a gate at the airport (including remote and emergency gates, if they can be used by some flights during heavy peak times) that is expected to be available for assignment to flights in this interval. If a gate is going to be occupied for the entire kth planning interval by a flight assignment made in earlier periods, it is not even considered in this model.

xij: the decision variable defined for gate i and flight j ∈ J; this variable takes the value 1 if flight j is assigned to gate i in this planning interval, and 0 otherwise. If a gate i is not suitable for assignment to flight j for whatever reason (for example, if flight j uses a large plane and gate i is not of an appropriate size for it), then it is made ineligible for assignment to flight j, and the corresponding variable xij is not even included in the model for planning interval k. Several of the gate assignment constraints can be taken care of by this ineligibility classification. For example, as mentioned in the previous section, if gate i is adjacent to a gate occupied by a large plane, and that plane will be there for some time during interval k, then gate i is made ineligible for all flights j ∈ J with planes of unacceptable size during planning interval k, and the corresponding variables xij do not appear in the gate assignment model for this interval.

Let

Gj = the set of gates i which are eligible to be assigned to flight j ∈ J in planning interval k;
I = ∪j∈J Gj = the set of gates i which are eligible to be assigned to at least one flight j ∈ J in this planning interval;
Fi = the set of flights j ∈ J for which gate i ∈ I is eligible to be assigned in this planning interval.

As discussed in Section 4, we combine the various objectives into a single penalty function to be minimized, to determine an appropriate compromise between the various objectives while assuring some of the hard constraints in gate assignment. Let cij be the combined positive penalty coefficient associated with the decision variable xij; it is the sum of the positive penalty coefficients associated with xij corresponding to the various objectives, determined based on trade-offs between those objectives. When considering gate allocations in planning interval k, flights that were expected to arrive in interval k – 1 but were not assigned gates then should be given preference. The airport may also consider giving preference to certain flights that arrive in planning interval k itself. So, partition the set J of flights that need gate assignments in this planning interval into J1 ∪ J2, where

J1 = the subset of flights in J that must be given first preference for gate assignments;
J2 = the remaining flights in J.

We first determine the maximum number r of flights to which gates can be assigned in this planning interval k, subject to the constraints mentioned in the previous sections. This leads to the following transportation model. There is a constraint corresponding to each flight that needs a gate assigned in this planning interval k: the one corresponding to each flight j ∈ J1 is an equality constraint specifying that the flight must be assigned one gate for itself (because we are required to give these flights first preference for gate allocation); the one corresponding to each flight j ∈ J2 is an inequality constraint specifying that the flight gets one gate for itself if an eligible one can be found.


There is one constraint corresponding to each gate that becomes free at some point of time in this planning interval k and can receive a flight from that time onwards. The one corresponding to gate i is an inequality constraint specifying that this gate can accommodate at most one flight for which it is eligible to be assigned. The objective in this model is to maximize the total number of eligible flight-gate assignments that can be made in this planning interval. The model is

r = Maximum value of ∑j∈J ∑i∈Gj xij

Subject to

∑i∈Gj xij = 1, for each j ∈ J1
∑i∈Gj xij ≤ 1, for each j ∈ J2
∑j∈Fi xij ≤ 1, for each gate i ∈ I
xij ≥ 0, for each j ∈ J, i ∈ Gj

If this model turns out to be infeasible, it is an indication that there are not enough eligible gates available in this planning interval even to assign to all the flights j ∈ J1. Then the airport authorities can modify and relax some of the eligibility requirements for gate assignments if possible, or modify the set J1 as appropriate, and solve the model with the revised information. When the model is feasible, its maximum objective value r gives the maximum number of flights, among those needing gates in this planning interval, for which eligible gates can be assigned in this interval. If r < n = |J|, the remaining n – r flights in J for which eligible gates cannot be assigned in this planning interval will have to be transferred to the next interval for gate assignments.

The above model only provides the maximum number of gate assignments that can be made in the planning interval k; it does not try to find optimal gate assignments. When the above model is feasible, the optimum gate allocations are found by solving another mathematical model, which is also a transportation (or network flow) model. It minimizes the composite penalty function constructed above, subject to the same constraints as in the above model, plus the additional constraint that the total number of flight-gate assignments should equal the maximum possible number r found above. It is

Minimize ∑j∈J ∑i∈Gj cij xij

Subject to

∑i∈Gj xij = 1, for each j ∈ J1
∑i∈Gj xij ≤ 1, for each j ∈ J2
∑j∈Fi xij ≤ 1, for each gate i ∈ I
∑j∈J ∑i∈Gj xij = r
xij ≥ 0, for each j ∈ J, i ∈ Gj

Transportation (network flow) models are easy to solve. Typically, we can expect at most 100 to 200 flights to deal with in any planning interval at busy international airports. For problems of this size, either of the above models will require at most 0.5 seconds on a common PC to solve, using software programs available today. An optimum solution x = (xij) obtained for the 2nd model provides an optimum gate allocation through the interpretation that flight j ∈ J is assigned to gate i in the optimum gate allocation x if xij = 1. There may be other constraints that have not been included in this model. If so, they can be added to the model. Or the gate assignment


team may use their expert judgment to modify an optimum solution of this model into another that can be implemented.
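The two-stage computation described above can be sketched on a tiny hypothetical instance. The article envisions transportation (network flow) solvers; in this sketch, brute-force enumeration stands in for the solver (viable only at toy scale), and the flight names, eligible-gate sets, and penalty coefficients are all made up for illustration.

```python
# Two-stage gate assignment on a hypothetical 3-flight, 3-gate instance:
# stage 1 finds r, the maximum number of assignable flights (J1 mandatory);
# stage 2 minimizes total penalty among assignments achieving r.
from itertools import product

flights = ["A", "B", "C"]
J1 = {"A"}                                               # must receive a gate
G = {"A": ["g1", "g2"], "B": ["g2"], "C": ["g2", "g3"]}  # eligible gates Gj
c = {("A", "g1"): 2, ("A", "g2"): 1, ("B", "g2"): 3,
     ("C", "g2"): 1, ("C", "g3"): 4}                     # penalty coefficients cij

def assignments():
    """Yield every feasible assignment: dict flight -> gate (or None)."""
    choices = [G[f] + [None] for f in flights]
    for combo in product(*choices):
        a = dict(zip(flights, combo))
        used = [g for g in a.values() if g is not None]
        if len(used) != len(set(used)):
            continue                  # a gate takes at most one flight
        if any(a[f] is None for f in J1):
            continue                  # first-preference flights need a gate
        yield a

# Stage 1: maximum number r of flight-gate assignments.
r = max(sum(g is not None for g in a.values()) for a in assignments())

# Stage 2: among assignments achieving r, minimize the total penalty.
best = min((a for a in assignments()
            if sum(g is not None for g in a.values()) == r),
           key=lambda a: sum(c[f, g] for f, g in a.items() if g is not None))
print(r, best)                        # prints 3 {'A': 'g1', 'B': 'g2', 'C': 'g3'}
```

In a real implementation the two stages would each be one transportation/network-flow solve, exploiting the integrality of optimal basic solutions of such models rather than enumeration.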

How is this Model Used at TPE Currently

TPE has the policy of giving higher priority, in the allocation of passenger and better-category gates, to flights with larger numbers of passengers over those with smaller numbers, and to regular flights over irregular ones. So, even though they have preferences between these classes of flights, for flights within each class they adhere to the "first arrived, first assigned" policy. These preferences between classes are used in the process of making manual adjustments to the gate assignment solutions obtained by the mathematical model in each planning interval, to satisfy the "first arrived, first assigned" policy. At present, TPE's main focus in gate allocation is to have gates allocated to the maximum number of flights (preferably all) while minimizing OBJ 1. Instead of solving the transportation models given in the previous section exactly, they have developed several simple rules to obtain a good solution of them heuristically. The combination of these rules actually constitutes a heuristic on-line algorithm for gate allocation, which makes gate allocations to flights close to their arrival or departure times based on the availability of gates at that time. The few computational experiments that we carried out on past data indicate that this heuristic method always gives either an optimum solution, or one very close to it, under existing volumes of traffic for minimizing OBJ 1. But as the volume of traffic goes up, the approach using the mathematical model developed here gives much better results and takes much less time to obtain them. At the moment gate allocation decisions at TPE are made using this heuristic method, but in the near future TPE plans to begin using the exact solutions of these models and to train all gate allocation officers to make them familiar with how to use those solutions in daily operations. This should relieve some of the work pressures on the gate allocation officers.

Revised On-Line Heuristic Method

Before the DSS can be put into use, the heuristic used currently can be improved by the revised heuristic method given below. In each planning interval, we allocate gates to flights in chronological order of starting gate time, i.e., the latest time a gate needs to be allocated to the arriving/departing flight to avoid delay. Consider a flight i that needs a gate at time ti.

1. Find all gates which are available for assignment (and not assigned to another flight earlier) at time ti or soon after ti. Suppose this set of gates is J.
2. For each j ∈ J, calculate the objective value of assigning gate j to flight i; call it cij. Find c = min {cij : j ∈ J}. Among all j ∈ J with cij ∈ [c, c + ε], find a gate which is in the 1st category for the smallest number of flights, and allocate that gate to flight i. In case of a tie, find a gate among those tied which is in the 2nd category for the smallest number of flights. Break ties arbitrarily if a tie still remains.

Here ε is a tolerance to be chosen appropriately, depending on the range of values that the objective function takes. We could take ε = 0, or some small value. A preliminary test shows that this approach can reduce OBJ 1 for the planning day while keeping all flights gated when they need a gate.
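The gate-selection step of the revised heuristic can be sketched as follows. This is a sketch under stated assumptions: the gate names, penalty values, and category counts are hypothetical, and EPS stands for the tolerance ε that the text leaves to be tuned.

```python
# Hypothetical sketch of step 2 of the revised on-line heuristic: among the
# near-cheapest free gates, keep "versatile" gates (1st category for many
# flights) in reserve by preferring the gate with the fewest such flights.
EPS = 0.5   # tolerance epsilon; the text suggests 0 or some small value

def pick_gate(free_gates, penalty, cat1_count, cat2_count):
    """free_gates: gates available when the flight needs one.
    penalty[g]: objective value of giving gate g to this flight.
    cat1_count[g] / cat2_count[g]: number of flights for which g is a
    1st / 2nd category gate."""
    c_min = min(penalty[g] for g in free_gates)
    near_best = [g for g in free_gates if penalty[g] <= c_min + EPS]
    # Prefer the gate that is 1st category for the fewest flights; break ties
    # by the 2nd category count, then arbitrarily (here, by gate name).
    return min(near_best, key=lambda g: (cat1_count[g], cat2_count[g], g))

free = ["g1", "g2", "g3"]
penalty = {"g1": 1.0, "g2": 1.2, "g3": 4.0}   # g3 falls outside the tolerance
cat1 = {"g1": 5, "g2": 2, "g3": 1}
cat2 = {"g1": 0, "g2": 3, "g3": 0}
print(pick_gate(free, penalty, cat1, cat2))   # prints g2
```

Gate g2 wins here: it is within ε of the cheapest gate but is a 1st category gate for fewer flights than g1, so choosing it keeps g1 free for flights that value it most.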

Design Features of the New DSS Under Development at TPE

At present in TPE, the gate allocation decisions are made semi-manually, based on a set of heuristic rules that try to minimize OBJ 1


discussed in Section 4. Some of the data needed for the model is entered manually, while the heuristic rules are applied through a computer program, with manual adjustment of the output from the computer. The whole process is labor intensive. Among the 6 FOOs on duty in each shift of the day, the time of at least 3 is used up in making and updating the gate allocation decisions. As a result, the FOOs are hard pressed for time, particularly during peak periods of the day. The situation gets worse as the volume of traffic increases. This is one of the motivations for the development of this DSS. The goal of the DSS is to take all the objective functions into account, and to automate the process at least to some extent to free the FOOs to look after the many other important things in their daily work. The development of the DSS has just started, and work on it will be ongoing over the next several years. So, we will just discuss some of the important features that will be incorporated initially. The most important component is assembling the data needed for the models, as far as possible automatically.

An Overview of the Proposed DSS

Figure 3 is a schematic diagram of the proposed DSS. The system and the working databases provide information for the system modules to plan, manage, and control tasks for gate allocation.

Figure 3. A schematic diagram of the DSS (the DSS comprises the System Modules, Working Databases, and System Databases listed below)

Content of the System Databases

• Gate Types Database: locations of gates; characteristics of gates such as their sizes, facilities (e.g., fixed bridge, if available), acceptable plane types, etc.;
• Plane Types Database: types, capacities, wing spans, etc., of planes;
• Airline Database: locations of facilities and contacts of airlines;
• Operations Rules Database: the preferences and constraints of planes on gates, the set of rules to assign gates to planes, and the system parameters such as the values of the coefficients of the objective functions in gate allocation.

Content of the Working Databases

• Daily Log Sheets: a record of flight and gate information; the information for flights includes the latest revised arrival or departure times, with information from airlines, airport authorities interacting with TPE, on-board flight equipment of planes, the airport controller, etc.; the information for gates includes any change of status of gates, with information from airlines and FOOs;
• Flight Information: the status of all scheduled and chartered flights, with different versions generated based on the most up-to-date information in the Daily Log Sheets Database;
• Gate Information: the status of the gates and their assignments, as from the Daily Log Sheets Database;
• Gate Allocation: the record of allocation of gates to planes based on the current flight information, with different versions generated at scheduled times of the day, or at special epochs after disruption of planned schedules.

Content of the System Modules

• FOO Control Module: the main panel for the FOOs to control the DSS, including allocating gates, making manual changes to allocations suggested by the DSS, and maintaining the system;
• Gate Allocation Engine: based on the information in the Flight Information, Gate Information, Gate Allocation, and Operations Rules databases, the engine generates an optimization model with appropriate objective functions and constraints to make gate allocations (or allocates gates based on rules);
• Report Generation: this module generates all sorts of reports for gate allocation, including the allocation of gates, utilization of gates and of bridges, statistics with respect to airlines and to flights, and the objective function values of the various gate allocations;
• System Maintenance: this module changes the content of the system databases, including the rules for the Gate Allocation Engine to generate its optimization models.

Data Capture with Minimal Manual Intervention

When fully implemented, the DSS requires minimal effort to capture data. By design, the System Modules are applications that automatically take data inputs from the modules and databases of the system. The content of the System Databases need not be changed unless there is a change in infrastructure of gates, planes, or airlines, or in the rules of practice. Among the working databases, the Daily Log Sheets Database is the only interface for collecting data and converting it into useful information for the other three working databases. The most important data elements needed for the Daily Log Sheets Database are:

a. The most recent updates of arrival/departure times of flights expected to arrive/depart in each half-hour planning interval of the planning day.
b. The occupancy status of each gate in each half-hour planning interval (will it be occupied during the entire interval, and if so by which flight and plane; if not, at what point of time in the interval will it become available for reassignment).

Ideally, the FOS and all airlines can access the DSS on the web, with different priorities, security controls, and functionalities for different bodies. For the data elements in (a), each airline continually updates the arrival/departure times of its flights through (a graphical user interface of) the Daily Log Sheets Database. Such information immediately becomes public to all bodies, though only the FOS has the authority to revise the gate allocation, if necessary. Similarly, for the data elements in (b), whenever an FOO makes a gate allocation to a flight, or alters the gate allocation made earlier for a flight, that information immediately becomes public. Also, whenever an airline's work team vacates a gate after work for the flight that arrived/departed from that gate is finished, the airline enters this gate-vacating information into the Daily Log Sheets Database to inform the FOS. Basically, airlines provide inputs of


the latest flight information and gate status into the DSS, and in return, they get the gate allocations and the status of gates through the system. With the information from the Daily Log Sheets Database, the FOS serves as the central controller to allocate gates to flights.

The Automatic Data Processing and Decision Support by the Gate Allocation Engine

The Gate Allocation Engine serves dual purposes: to prepare the data relevant to the gate allocation decisions, and actually to make such decisions for the FOOs to endorse or revise. Using the latest flight and gate information, the Gate Allocation Engine generates the set of gates that will be available for assignment to flights in the planning interval, and the times at which they will become available. Then it generates all the constraints and the objective function for the model that determines gate allocations for flights during the planning interval. Finally, it solves the model and generates the gate allocations. The FOOs can then look these over and manually make any changes needed. Once the system is in operation, the whole process of generating and updating gate allocations should take no more than one or two man-days of FOO time per day.

Summary of Preliminary Results Obtained at TPE and Expected Benefits From the DSS

At TPE, at the present volume of traffic, most flights are allocated a gate in sufficient time before their landing or takeoff even during peak periods; the waiting time on the taxiways after landing for most flights is within 10-15 minutes, and the need for a plane to circle the airport on arrival is extremely rare. During peak traffic periods some planes wait after landing on the taxiway for up to 30 minutes, but the percentage of planes that have to wait more than 20 minutes on the taxiway is small. Most of the flights (over 95%) are assigned gates within their 1st and 2nd preference categories. For over 90% of flights on a normal planning day, the gate allocation made during the previous day remains unchanged. As the traffic volume is growing, this research is motivated by a desire to plan for the future, to get the best utilization of existing facilities, and to provide decision support to help the FOS make better quality decisions. Besides reducing the manual effort to generate and update gate allocations, the results obtained from this DSS are expected to be of much better quality in terms of OBJ 1 than those obtained by the current heuristic approach, while maintaining the current level of OBJ 2, as can be seen in Table 1. A preliminary computational study, using data retrieved from TPE's computer system in February 2007, indicates that the revised on-line heuristic approach is able to reduce OBJ 1 if the traffic level remains about the same as today. However, as the number of flights increases, the proposed assignment models will give better results. Thus, TPE will benefit from this DSS when the airport reaches its designed capacity in the near future. Our computational experiments indicate that the number of planes waiting on the taxiway after landing, and the number waiting for more than 10 minutes during peak times, are zero for all approaches.

Table 1. OBJ 1 for the three approaches. OBJ 2 is zero for each of these approaches.

                       Revised On-line   Heuristic used in   Proposed
                       Heuristic         Current Practice    Assignment Models
OBJ 1 for the day      127               130                 131
OBJ 1 for peak time    77                70                  67


Conclusion

In this article we described an ongoing project at TPE to develop a DSS for allocating gates to flights; we discussed the approach and mathematical model to use for gate allocation decisions, and the implementation of the proposed mathematical models in the DSS. As at other major international airports, the gate allocations to flights at TPE are planned in an uncertain environment. The ability to deal with uncertainty in the data elements is critical to the quality of the gate allocation plan in the face of unforeseen events. The quality of gate allocation plans has a variety of measures in the aviation industry, among airport authorities, and in research; care should be taken to prioritize the different measures, as is done in this research. While airport authorities may not fully realize the importance of dealing with uncertainty for operational efficiency, they generally agree that uncertainty needs to be taken into account in making gate allocations. The large-scale combinatorial nature of the gate allocation problem, the stochastic nature of the flight arrival/departure times, and the need for quick, quality decisions necessitate the development of a robust gate allocation model that is simple to use and generates solutions with the flexibility to changes of input data that airports and airlines demand. The rescheduling of gate allocations after flight schedule disruptions is a hard problem to solve by conventional mathematical models because of the limited time available for it. Since the number of flights involved is typically not high, FOOs are able to handle these cases using the on-line heuristic approaches discussed earlier. In this article, we proposed a rolling horizon framework for dealing with the uncertainty continuously during the course of the gate allocation process at TPE, which also includes manual adjustment by FOOs of the solutions produced by the model.

Although this work is still an ongoing project, the strategy proposed in this article for developing the DSS is generally favored and approved by the TPE officials. Soon after the computer program is fully developed, a full-scale side-by-side comparison will be conducted, and statistics on various objectives will be collected and compared to validate this model. For security and safety reasons, full implementation and integration with TPE's current information system will be done by TPE's own IT team in the near future.

Acknowledgment The research of Yat-wah Wan was partially supported by grant NSC 95-2416-H-259-017 of the National Science Council of Taiwan. The work of Vincent F. Yu was partially supported by the National Science Council of Taiwan under grant NSC95-2221-E-011-223.

References Benedetto, A. (2002) A decision support system for the safety of airport runways: the case of heavy rainstorms, Transportation research Part A-Policy and Practice, 36(8), 665-682. Bolat, A. (1999) Assigning arriving aircraft flights at an airport to available gates, Journal of the Operational Research Society, 50, 23-34. Bolat, A. (2000a) Models and a genetic algorithm for static aircraft gate assignment problem, Journal of the Operational Research Society, 52, 1107-20. Bolat, A. (2000b) Procedures for providing robust gate assignments for arriving aircraft, European Journal of Operations Research, 120, 63-80. CAA (2007) Regulations of charges for the use of state-operated airport, navigation aids and related facilities, 28 Dec. 2007, Laws & Regulations 12-03A, http://www.caa.gov.tw/. Cheng, Y. (1997) A knowledge-based airport gate assignment system integrated with mathematical programming, Computers & Industrial Engineering, 32, 837-852. Ding, H., Lim, A., Rodrigues, B., and Zhu, Y. (2004a) New heuristics for the over-constrained airport gate assignment problem, Journal of the Operational Research Society, 55, 760-768.

Copyright © 2009, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

International Journal of Decision Support System Technology, 1(1), 46-68, January-March 2009 67

Ding, H., Lim, A., Rodrigues, B., and Zhu Y. (2004b) The over-constrained airport gate assignment problem, Computers and Operations Research, 32, 1867-1880. Dorndorf, U., Drexl, A., Nikulin, Y., and Pesch, E. (2007) Flight gate scheduling: State-of-the-art and recent developments, Omega - International Journal of Management Science, 35(3), 326-334. Foster, T.J., Ashford, N.J., and Ndoh, N.N. (1995) Knowledge based decision support in airport terminal design, Transportation Planning and Technology, 19(2), 165-185. Haghani, A., and Chen, M. (1998) Optimizing gate assignments at airport terminals, Transportation Research - Part A, 32, 437-454. Herrero, J.G., Berlanga, A, Molina, J.M., and Casar, J.R. (2005) Methods for operations planning in airport decision support systems, Applied Intelligence, 22(3), 183-206 . Kuster, J., and Jannach, D. (2006) Handling airport ground processes based on resource-constrained project scheduling, Lecture Notes in Computer Science, 4031, 166-176. Qi, X., Yang, J., Yu, G. (2004) Scheduling problems in the airline industry. In: Leung J.Y.-T., editor, Handbook of Scheduling - Algorithms, Models and Performance Analysis, 50.1-50.15.


Vincent F. Yu is an assistant professor of industrial management at the National Taiwan University of Science and Technology. He received his PhD in industrial and operations engineering from the University of Michigan, Ann Arbor. His current research interests include operations research, logistics/supply chain management, and information management. He has published articles in Computers and Operations Research and the Journal of Industrial and Systems Engineering.

Katta G. Murty developed the first algorithm for ranking the solutions of a combinatorial optimization problem in the context of the assignment problem, while visiting Case Institute in 1961-62 as a Fulbright scholar from the Indian Statistical Institute, where he was then an assistant professor. He then extended this into the branch and bound (B&B) approach for combinatorial optimization and integer programming, in the context of the traveling salesman problem. His 1968 PhD thesis in OR at the University of California, Berkeley, initiated the geometric study of the linear complementarity problem using complementary cones. Currently he is a professor in industrial and operations engineering at the University of Michigan, Ann Arbor, where he has been teaching since 1968. He has held visiting professor appointments at Bell Labs, NASA, the University of Texas at Dallas, and at universities in Hong Kong, Singapore, Taiwan, India, Saudi Arabia, Armenia, and Germany. He has made many research contributions in both theoretical and applied OR and optimization.


His most recent contribution is a new method (the Sphere Method) for linear programming that shows promise of solving large-scale linear programs while using matrix inversion operations very sparingly. He is currently working on computational testing of this method. He is the author of several popular textbooks in optimization at both undergraduate and graduate levels. Some of these textbooks are also available for downloading on his webpage.

Yat-wah Wan received the BS degree in mechanical engineering from the University of Hong Kong, the MS degree in industrial engineering from Texas A&M University, and the PhD degree in operations research from the University of California, Berkeley. From August 1991 to December 1993 he served in the Department of Manufacturing Engineering, City Polytechnic of Hong Kong, and from December 1993 to July 2004 in the Department of Industrial Engineering and Engineering Management, Hong Kong University of Science and Technology. He has been in the Graduate Institute of Global Operations Strategy and Logistics Management, National Dong Hwa University, since August 2004. His research interests include transportation and logistics and the control and optimization of stochastic systems.

Jerry Dann is the chief of the flight operations section at the Taiwan Taoyuan International Airport, Taiwan. He obtained his bachelor's degree from Chung Cheng Institute of Technology, Taiwan. Dann has held several management positions at the Civil Aeronautics Administration (CAA) of Taiwan, including chief of the Electricity System Section, chief of the Aeronautics Communication System Section, and chief of the Aerodrome Certification Section.

Robin Lee (also known as Chi-Shih Lee) has been a flight operations officer in the flight operations section of the Taiwan Taoyuan International Airport since 2000.
He holds a master's degree in international affairs and strategic studies and a bachelor's degree in international trade, both from Tamkang University, Taiwan. He has received intensive training in flight operations from the Civil Aeronautics Administration (CAA) of Taiwan and in airport ramp operation and management from the Singapore Aviation Academy, Singapore.


International Journal of Decision Support System Technology, 1(1), 69-92, January-March 2009 69

Enabling On-Line Deliberation and Collective Decision-Making through Large-Scale Argumentation:

A New Approach to the Design of an Internet-Based Mass Collaboration Platform Luca Iandoli, University of Naples Federico II, Italy Mark Klein, Massachusetts Institute of Technology, USA Giuseppe Zollo, University of Naples Federico II, Italy

Abstract The successful emergence of on-line communities, such as open source software and Wikipedia, seems due to an effective combination of intelligent collective behavior and Internet capabilities. However, current Internet technologies, such as forums, wikis, and blogs, appear to be less supportive of knowledge organization and consensus formation. In particular, very few attempts have been made to support large, diverse, and geographically dispersed groups in systematically exploring and coming to decisions concerning complex and controversial systemic challenges. To overcome the limitations of current collaborative technologies, in this article we present a new large-scale collaborative platform based on argumentation mapping. To date, argumentation mapping has been used effectively only for small-scale, co-located groups. The main research questions this work addresses are: Can argumentation scale? Will large-scale argumentation outperform current collaborative technologies in collective problem solving and deliberation? We present some preliminary results obtained from a first field test of an argumentation platform with a moderate-sized community of a few hundred users. Keywords:

collaborative technologies; large scale argumentation; on-line deliberation

The Challenge: towards internet-enabled collective intelligence

The spectacular emergence of the Internet has enabled unprecedented opportunities for large-scale interactions, via email, instant messaging, news groups, chat rooms, forums, blogs, wikis, podcasts, and the like. Using such technologies, it is now feasible to draw together knowledgeable and interested individuals and huge information sources on a scale that was impossible a


few short years ago. We believe that it is possible to harness these new potentialities to enable “collective intelligence”, i.e. the synergistic and cumulative channeling of the vast human and technical resources now available over the internet (Klein, Cioffi and Malone, 2007) – to address what we call “systemic” problems, i.e. highly complex and widely impactful problems such as climate change, where the nature of the solution depends on the problem setting and the level of analysis (Rosenhead and Mingers, 2001). Reframing the issue in computational terms, we can say that such problems have a very large, unexplored and partially unknown solution space. Through the contributions of large numbers (up to many thousands) of knowledgeable users, a virtual community can enable unprecedented breadth of exploration of the solution space and, if adequately motivated and supported, convergence on high-quality and widely-supported solutions through collective deliberation. The successful emergence of on-line peer production communities, e.g. for Linux and Wikipedia, seems due to an effective combination of intelligent collective behavior and Internet capabilities (Surowiecki, 2004). In a nutshell, openness, large scale, self-organization and the support offered by adequate, low-cost technologies have allowed large groups of users to achieve outstanding results in knowledge creation, sharing and accumulation, to the point that such virtual communities have become a source of inspiration for both organizational scholars and companies (Gloor, 2006; Raymond, 2001; Tapscott and Williams, 2006; von Hippel, 2001; von Krogh and von Hippel, 2006). However, current technologies, such as forums, wikis and blogs, while enabling effective information sharing and accumulation, appear to be less supportive of knowledge organization, use and consensus formation. 
In particular, little progress has been made to date in providing virtual communities with suitable tools and mechanisms for collective decision-making around complex and controversial problems.

In this article we argue that a new kind of web‑mediated platform is needed in order to overcome the limitations of current technologies in this regard and to properly exploit the potential of collective intelligence on the Internet. We present the design for such a platform, which we call the Deliberatorium, which applies a knowledge organization and visualization approach based on argument mapping to help large, diverse, and geographically‑dispersed groups systematically explore, evaluate, and come to decisions concerning systemic challenges. We will argue that the argumentation approach, by providing a logical rather than a time-based debate representation, and by encouraging evidence-based reasoning and critical thinking, should significantly reduce the prevalence of some critical pitfalls (such as low signal-to-noise ratios, digression, hidden assumptions, low information disclosure, and so on) often faced by traditional technologies such as forums and wikis, and avoid many of the pitfalls that lead to deliberation failures in small-scale groups as well. The article is structured as follows. In the next section we outline the factors that have a major influence on group deliberation failures and discuss the limits faced by current technologies from the perspective of supporting collective deliberation around complex systemic problems. In the second part of the article we outline the design of a large-scale deliberation platform that we believe can transcend these limitations. In the third part we report some preliminary results obtained from a first field test of the Deliberatorium with a community of more than 200 users. In the conclusions we introduce and discuss several research hypotheses, which we intend to test in future experiments, about how on-line large-scale argumentation may improve collective deliberation compared to other technologies, like wikis and forums.


Supporting on-line deliberation: pros and cons of current internet technologies

In the design of a platform for large-scale collective deliberation, it is critical to design effective countermeasures to limit the risk of group deliberation failures. In his book Infotopia, Sunstein (2006) outlines several causes that can induce deliberating groups to fail in making accurate, truthful and reliable decisions, as well as some conditions under which group deliberation can work. He points out that deliberating groups typically suffer from three major problems:

• they do not elicit all the relevant information that their members have, because of social pressure (low information disclosure);
• they are subject to cascade effects: sequential information propagation in the group may produce error amplification and premature convergence (contributions that happen to have been made early in the group deliberation process can have a disproportionate impact on the final outcome, eclipsing more accurate or useful contributions that came later in the process);
• they show a tendency toward group polarization: deliberating groups may assume a position on an issue which is even more extreme than the average opinion, in particular when they are very homogeneous and when the issue is related to values and social identity.

The following conditions, in contrast, seem to help deliberating groups outperform even their best members:

• people believe that the issue has a correct, demonstrable solution (e.g. for so-called "eureka problems", i.e. where a self-evident superior solution exists);
• the correct solution enjoys a certain degree of support by the group members before deliberation starts (in the extreme case, at least one of the group members knows the right solution and is able to persuade the other members).

The deliberation failure causes outlined above have been detected in experiments in which groups were required to deliberate about an issue and reach a collective decision. It is important to remark that this huge literature, developed mostly in the '80s and '90s, is largely concerned with small-scale, closed, physically co-located groups of individuals involved in direct interaction in typical social situations, such as political and management committees, juries, assemblies, focus groups, and meetings. While there are reasons to believe that the above problems could also appear in groups collaborating through the Internet, to our knowledge no systematic evidence is available to assess the extent to which those effects can be found in large-scale on-line deliberation communities. However, some evidence exists for on-line prediction markets. Participants in a prediction market bet on the supposed best candidate and receive a money prize if their bet is correct, or otherwise lose their money. Prediction markets have proven to be a reliable approach for harnessing collective intelligence for such uses as predicting the winners of political elections (the University of Iowa prediction market) or forecasting the success of new products (Google), but they cannot be used to deliberate collectively about complex problems for which no obvious limited set of solutions can be pre-defined. The reason for the good performance of prediction markets lies, first, in the simplicity of the problems they can deal with (there is a known and limited set of possible alternatives) and, second, in the presence of market incentives that motivate individuals to search for more information and prefer rational choices.
The more informed the decision makers, the higher the probability that their majority guess is correct, by virtue of the well-known Condorcet’s Jury theorem. This appears to


represent a major difference from other on-line collaborative communities, like Wikipedia or the open-source movement, in which many different kinds of both extrinsic (e.g. being paid) and intrinsic incentives (reputation, reciprocity, entertainment, voluntary contributions, etc.) are at work (Shah, 2006). Following de Moor and Aakhus (2006), Klein, Cioffi and Malone (2007) classify online deliberation support technologies into three groups: sharing, funneling, and issue networking technologies. By far the most commonly used technologies, including wikis, blogs, and discussion forums, are what we can call sharing tools (Jøsang, Ismail and Boyd, 2007). While such tools have been remarkably successful at enabling a global explosion of idea and knowledge sharing, they face serious shortcomings. One is the signal-to-noise ratio. The content captured by such tools, especially forums, is notorious for often being unsystematic, repetitive, and of highly variable quality. Sharing systems do not inherently encourage or enforce any standard concerning what constitutes valid argumentation, so postings are often bias- rather than evidence- or logic-based. A second issue involves the weakness of sharing-type systems when applied to controversial topics with many diverging perspectives, often leading to such phenomena as forum "flame wars" and wiki "edit wars". Sharing tools are thus ill-suited to identifying a group's consensus on a given issue. Funneling technologies, which include group decision support systems, prediction markets, and e-voting, have proven effective at aggregating individual opinions to determine the most widely/strongly held view, but provide little or no support for identifying what the alternatives selected among should be, or what their pros and cons are. Issue networking tools (also known as argumentation or rationale capture technologies; Kirschner, Buckingham Shum and Carr, 2005) fill this gap by helping groups define networks of issues (questions to be answered), options (alternative answers for a question), and arguments (statements that support or detract from some other statement; see Figure 1). Such tools help make deliberations, even complex ones, more systematic and complete. The central role of argument entities encourages careful critical thinking, by implicitly requiring that users express the evidence and logic in favor of the

Figure 1. An example of an argument map


options they prefer. The results are captured in a compact form that makes it easy to understand what has been discussed to date and, if desired, add to it without needless duplication, enabling increased synergy across group members as well as cumulativeness across time. Current issue networking systems do face some important shortcomings, however. A central problem is ensuring that people enter their thinking as well-formed argument structures (a time- and skill-intensive activity) when the benefits thereof often accrue mainly to other people at some time in the future. Most issue networking systems have addressed this challenge by being applied in physically co-located meetings where a single facilitator captures the free-form deliberations of the team members in the form of a commonly-viewable argument map. Issue networking systems have also been used, to a lesser extent, to enable non-facilitated deliberations, over the Internet, with physically distributed participants (Buckingham Shum, 2006; Verheij, 2003). With only one exception that we know of1, however, the scale of use has been small, with on the order of 10 participants or so working together on any given task, far less than what is implied by the vision of large-scale collective intelligence introduced in this article. Since the set-up and large-scale testing of a collaborative deliberation platform requires considerable effort and higher costs than a traditional experiment with a small group, the risk of ex-post deliberation failure should be reduced by suitable pre-emptive countermeasures. In the next section we present an approach to the design of an on-line argumentation collaborative platform and propose several implementation solutions aimed at ensuring effective deliberation. The proposed approach is centered around argumentation: we claim that an argument-based format for knowledge representation, and a deliberation process developing through a dynamic exchange of arguments, can improve deliberation performance compared to that obtained by small, physically co-located groups, and can enhance deliberation performance in the solution of complex problems compared to more traditional Internet collaborative technologies such as forums, wikis and blogs.

The design of a large scale argumentation community

The Proposed Framework

One possible way to cope with the complexity arising in large-scale on-line deliberation is to move from the issue of designing a platform to the more general problem of organizational design. In other words, we need to figure out how

Figure 2. Components for the design of an on-line virtual community (mission, incentives, voluntary members, deliberation, exploration, convergence, decisions)


the virtual community will (and should) work. We can model the virtual community as a kind of organization characterized by (Figure 2):

• a mission;
• one or more goals to realize, coherent with the mission;
• a large group of participants offering portions of their time and attention to the achievement of the goal through information search, knowledge sharing and creation, and consensus achievement, acting under suitable incentives;
• a set of processes through which members explore the solution space and converge to a decision;
• rules governing access to the community and proper interaction, and roles charged with certain responsibilities.

There are many differences between an on-line virtual community and a traditional organization. First, in the virtual community interaction happens mainly or solely through the Internet medium; second, individual contribution is mainly voluntary and limited to three forms: knowledge provision, knowledge rating, and knowledge organization (e.g. classification). Third, a virtual community is a self-organized system in which top-down management and centralization are present to only a very limited extent, which people join on a voluntary basis to propose and share ideas, form spontaneous teams, and proceed to achieve goals while pursuing recognition from the outside world (Gloor, 2007). In order for this kind of organization to work properly, three major governance problems have to be dealt with:

• participation governance involves attracting, retaining, and motivating a critical mass of users with the right skills;
• attention governance involves mediating how that community explores the design and decision space when at work;
• community governance involves the definition of the organizational structure and processes in terms of hierarchy, rules and incentives.

In the following we address these three problems in the context of designing an argument-based platform.

The On‑Line Argumentation Process

The platform is aimed at supporting an on-line collective argumentative debate: ideas submitted by users are supported and attacked by arguments through a dynamic debate whose aim is to uncover the chains of pros and cons behind each idea. Knowledge is structured and organized through argument mapping and visualization (Figure 3). Ideas and arguments are rated by users through voting. It is important to distinguish between the voting process and the way scores are expressed, computed and aggregated. The idea is to have three types of scores: argument scores (where users vote on how well grounded and convincing an argument is), idea scores (where users rate how promising and high-impact an idea is), and author scores (where users vote on the reputation of an author). Voting can take place either in a "direct democracy" scenario, i.e. where all users are allowed to participate in the deliberation process without intermediation/delegation, or through proxy democracy. A pure direct democracy approach has several shortcomings: first, not all people who express their preferences will be adequately knowledgeable about the specific topic; second, a high number of participants could post too many arguments and ideas of poor quality, producing a fragmented, low-quality debate. A proxy democracy solution with some degree of moderation has, indeed, proved successful in some large-scale implementations (see the Slashdot.org meta-moderation system described in Jøsang et al., 2007, or the proxy voting in smartocracy.org). In both cases it will be important to ensure the presence of a positive feedback between author reputation and idea and argument quality.
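To illustrate the proxy democracy option, here is a minimal sketch (our own illustration, not the platform's actual mechanism) of how delegated votes might be resolved: each user either votes directly or names a proxy, and delegation chains are followed until they reach a direct voter, with cycles discarded.

```python
def resolve_vote_weights(direct_voters, delegations):
    """Compute how many votes each direct voter casts: their own vote
    plus any votes delegated to them, directly or through a chain.
    `delegations` maps a non-voting user to the user they delegate to."""
    weights = {v: 1 for v in direct_voters}
    for user, proxy in delegations.items():
        seen = {user}
        while proxy in delegations:      # follow the chain of proxies
            if proxy in seen:            # delegation cycle: the vote is lost
                proxy = None
                break
            seen.add(proxy)
            proxy = delegations[proxy]
        if proxy in weights:
            weights[proxy] += 1
    return weights

# Two users vote directly; two others delegate along the chain d -> c -> a:
weights = resolve_vote_weights(["a", "b"], {"c": "a", "d": "c"})
# weights == {"a": 3, "b": 1}
```

A real system would also need rules for revoking delegations and for weighting proxies by reputation, but the chain-following logic stays the same.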


Figure 3. An example of an on-line argument map

For instance, suppose one voter likes the idea "support the implementation of a hydrogen economy" and wants to vote for it to increase the idea score. A possible option is to have a rigid scoring method ensuring that, if the idea is not adequately supported by good arguments, this and other votes do not significantly affect the idea score. Alternatively, a softer rule could be for the system to simply make it clear, through analysis tools, when people have voted for ideas that don't have strong logical support: this will hopefully encourage them to look at the argument structure, but without imposing a kind of automatic censoring process that many users may rebel against. Either way, voters and authors will look at the available arguments. The following cases are possible:

1. voters can endorse the existing arguments to support the idea;
2. authors can propose new supporting arguments;
3. voters can read the attacking arguments and become convinced that the hydrogen economy is not such a good idea, so they can change their mind and reconsider their support for the idea.
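The "rigid scoring method" could take many forms; the sketch below is one hypothetical rule of our own, not the platform's specification: the average voter rating of an idea is damped by a logistic function of the net strength of its rated pro and con arguments, so votes for a poorly grounded idea carry little weight.

```python
from math import exp

def idea_score(votes, pro_scores, con_scores):
    """Average voter rating (each in [0, 1]) damped by argument support.
    pro_scores / con_scores hold the ratings of supporting and attacking
    arguments; an idea with no rated arguments keeps only half its raw score."""
    if not votes:
        return 0.0
    avg_vote = sum(votes) / len(votes)
    net_support = sum(pro_scores) - sum(con_scores)
    grounding = 1.0 / (1.0 + exp(-net_support))  # logistic damping in (0, 1)
    return avg_vote * grounding

unsupported = idea_score([1.0, 1.0], [], [])        # popular but ungrounded
supported = idea_score([1.0, 1.0], [0.9, 0.8], [])  # same votes, strong pros
# supported > unsupported
```

The logistic shape is one design choice among many; any monotone damping function would implement the same policy of tying an idea's visibility to the quality of its argumentation.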

Argument Representation

There is a huge body of research about argument analysis and structure with implications for mapping and representation. We can classify this research into two major branches:

• philosophical inquiries (e.g. the New Rhetoric of Perelman, the Informal Logic of Toulmin, Habermas' Theory of Communicative Action);
• artificial intelligence and computer science, starting from the early efforts of the so-called Yale School (Galambos, Abelson and Black, 1986; Schank, 1986) on case-based reasoning to more recent works on argumentative agents and the use of arguments in hypertext, knowledge management and the semantic web (see the special issue on argumentation published by the journal Artificial Intelligence in 2007).


The applications related to the first area are mainly in the field of education and legal argumentation, while the second stream has elicited a remarkable degree of attention in different areas of artificial intelligence. In the last decade, considerable effort has been invested by several researchers in the attempt to find a synthesis of these two schools of thought, and to consider several Internet-related emerging phenomena as fields of application. We can call this attempt a socio-technical perspective on argumentation (Carter, 2000; Chesnevar et al., 2006; de Moor and Aakhus, 2006; Kirschner et al., 2003; Mancini and Buckingham Shum, 2006; Rahwan, Zablith and Reed, 2007). The majority of works in this area share a focus on the use of Internet-related technologies to implement argumentation frameworks and environments aimed at improving the quality of collective debates and decisions and, more generally, knowledge representation, sharing and transfer. The idea is to exploit the intrinsic structure of argumentation:

• to represent knowledge in a compact and structured way compared to traditional textual representation (knowledge summarization);
• to retrieve knowledge and connections among pieces of information (creating knowledge networks);
• to foster debate through argumentative dialogues on the net between users, in which ongoing "mass conversations" made of arguments, endorsements and attacks should favor the emergence of more plausible, convincing and shared conclusions about a given topic (convergence), while allowing at the same time a certain amount of conflict.

Several attempts have been made to propose suitable argument representations to achieve these aims. We can classify these attempts using a continuum defined by the degree of formalization, and by the standardization of the proposed argument format. For instance, Rahwan et al. (2007) propose a highly structured formalization

called the Argument Interchange Format (AIF), based on an RDF Schema Semantic Web-based ontology language. Such formats have the advantage of being understood by the machines that are supposed to process such information, such as argumentative agents, but are hard for humans to use. In cases where human involvement is high, we need to look for less formal argument representations that let people easily perform typical argumentative routines and tasks (posting, editing, elaborating, reading, attacking, etc.). Our proposal in this context involves integrating three approaches: the IBIS approach (Issue-Based Information System; Conklin, 2006), the Toulmin argument analysis structure (Toulmin, 1959), and the concept of argument schemes proposed by Walton (1989, 2006). The proposed representation aims to be very close to what people usually mean by argumentation, keeping formalization to a minimum without losing the structuring power of arguments. The IBIS approach represents arguments using three basic elements: Questions, which pose a problem or issue; Ideas, which offer possible solutions or explanations; and pro/con Arguments, which support or reject an idea or other argument. In the IBIS framework arguments develop as trees (as in Figure 3). Several software tools for argument mapping have been developed, but their application has largely been limited to small-scale, physically co-located groups, usually requiring the presence of a facilitator. According to Toulmin (1959) and Toulmin, Rieke and Janik (1979), an argument is a sequence of interconnected affirmations (claims) that establish the content and the strength of the position of the orator (Hitchcock and Verheij, 2005). As a consequence, argumentative speech can be broken down into a series of claims. Claims can be classified into the following categories, with respect to the functions that they have in the speech:

• the key claim, or conclusion of an argumentation;


• the grounds, such as the facts, common sense, and opinions of influential people offered to support a key claim;
• warrants, meaning the rules that demonstrate how the grounds support the claims;
• qualifiers, which are expressions or terms that limit the validity of the claims, such as "usually", "rarely", "according to what we know", etc.
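Toulmin's claim categories map naturally onto a simple record structure. The sketch below is our own illustration (the field names are not a standard schema), showing how a platform might store a claim together with its grounds, warrant and qualifiers:

```python
from dataclasses import dataclass, field

@dataclass
class ToulminArgument:
    claim: str                                      # the key claim (conclusion)
    grounds: list = field(default_factory=list)     # facts/opinions offered as support
    warrant: str = ""                               # rule linking grounds to the claim
    qualifiers: list = field(default_factory=list)  # e.g. "usually", "rarely"

    def is_well_formed(self) -> bool:
        """Minimal check: a conclusion needs at least grounds and a warrant."""
        return bool(self.grounds and self.warrant)

arg = ToulminArgument(
    claim="Wet weather will make you sick",
    grounds=["people often catch colds after getting soaked"],
    warrant="causal inference from common experience",
    qualifiers=["usually"],
)
# arg.is_well_formed() is True
```

Such a well-formedness check is one way a platform could nudge authors toward complete arguments without imposing a heavyweight formal ontology.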

Following Toulmin's framework, we can define an argument as an inference mechanism (warrant) capable of transferring the degree of truth of a set of premises (grounds) to a conclusion. Walton (2006) classifies arguments into three categories: deductive, inductive, and plausible. Deductive arguments (modus ponens, modus tollens, syllogism) are such that if the premises are true, the conclusion must be true (e.g. Portland is in Maine, Maine is in the US, therefore Portland must be in the US). Inductive arguments are such that if the premises are true, the conclusion is probably true (most American cats are domestic, Bill is an American cat, therefore Bill is, with high probability, a domestic cat). Plausible arguments are such that if the premises are true, then a weight of plausibility is shifted to the conclusion. Plausible arguments are used when not all needed information is available (or is made explicit): "To say that a statement is plausible we mean that it seems to be true based on the data known and observed so far in a kind of situation we are familiar with" (Walton, 2006: 83). Plausible arguments can be classified into "schemes". Schemes represent stereotypical, commonly-used ways of drawing inferences (Rahwan et al., 2007) that can be considered acceptable in the absence of complete information. Structures and taxonomies of schemes have been proposed by many theorists, such as Perelman and Olbrechts-Tyteca (1969). Walton's exposition is very appealing since his classification is drawn from the everyday use of arguments. A second desirable characteristic of Walton's schemes is that they are each assigned a set of

critical questions enabling contenders to identify the weaknesses of an argument and potentially attack it (see some examples in table 1). Compared with deductive and inductive arguments, we can say that the critical questions can be viewed as "premises that need to be verified", as they are usually implicit or taken for granted in everyday reasoning. Walton's scheme theory could be used by readers to recognize and classify the arguments proposed by users and check whether the critical questions are adequately answered, and could help authors check whether their arguments are defensible with respect to the critical questions and, if not, revise them. By merging the IBIS, Toulmin and Walton approaches, we represent arguments through argument nets. We define an argument net as a directed graph made up of nodes (claims) and arcs (relationships between claims). A claim can be the premise or conclusion of an argument and can be considered true to a certain degree (e.g. based on the level of consensus assigned to it by an audience). An arc links two claims, specifically a premise to a conclusion, and transfers the degree of truth of the premise to the conclusion. Arcs have semantics related to the specific argument scheme through which they transfer truth from the premise to the conclusion (e.g. a "causal" semantics according to which a premise A causes a conclusion B, as in "wet weather will make you sick"). The arc semantics, assigned by users, describe the way the conclusion is "inferred" from the premise. Even if argumentative reasoning is not logical reasoning, one can assume the two are similar in that they aim to convince viewers of the truth of a proposition by "proving" it on the basis of given premises. An example of an argument net is shown in figure 4. The proposed representation is aimed at helping people distinguish between the input (grounds) of an argument (i.e. facts, evidence, shared opinions, values, etc.) and the reasoning scheme through which an acceptable conclusion is obtained from the available inputs. This critical distinction is made for two reasons: i) to encourage evidence-based reasoning; and ii)

Copyright © 2009, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


Table 1. Examples of argument schemes (our adaptation from Walton, 2006)

Expert opinion
  Structure: Ground: E is an expert in the domain that A is in. Warrant: trust what the expert says.
  Critical questions: How credible is E (reliable, free of conflicts of interest, authoritative, etc.)? Is E an expert in the field A is in? Is E's assertion based on evidence?

Popular opinion
  Structure: G: A is generally accepted as true. W: believe what is generally accepted as true.
  Critical questions: What evidence (e.g. polls) supports that A is generally accepted? Even if A is generally accepted, is there any reason for doubting that it is true?

Analogy
  Structure: G: case C1 is similar to case C2; A is true in C1. W: repeat things that have proven to work well in the past.
  Critical questions: Are there differences between C1 and C2? Was A correct (true) in C1? Is there any other case C3, similar to C1, in which A was not correct/true?

Causal (contains as variants the argument from consequences and the slippery slope argument)
  Structure: G: there is a positive correlation between A and B. W: find out causal relationships between things happening together.
  Critical questions: Is the correlation supported by credible evidence? Is the correlation due to coincidence? Could there be some factor C causing both A and B? Are there any other consequences of A that should be taken into account? What evidence supports that, given A, B will really occur? What factors could prevent the causal chain from happening, and how probable are they? What is the weakest link of the chain? How probable is it that the chain will actually start?
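The scheme/critical-question pairing above lends itself to a simple data structure. The following Python sketch is our own illustration, not part of the Deliberatorium's (Common Lisp) implementation; all class and field names are assumptions. It shows how an argument can carry its scheme's critical questions as an explicit checklist of potential attack points:

```python
from dataclasses import dataclass, field

@dataclass
class ArgumentScheme:
    """A Walton-style scheme: a named inference pattern plus the
    critical questions usable to attack arguments built on it."""
    name: str
    critical_questions: list = field(default_factory=list)

@dataclass
class Argument:
    """An argument instance: grounds, conclusion, and the scheme used.
    `answered` records which critical questions the author addressed."""
    grounds: str
    conclusion: str
    scheme: ArgumentScheme
    answered: set = field(default_factory=set)

    def open_questions(self):
        """Critical questions not yet answered: potential attack points."""
        return [q for q in self.scheme.critical_questions
                if q not in self.answered]

expert_opinion = ArgumentScheme(
    "Expert opinion",
    ["How credible is E?",
     "Is E an expert in the field A is in?",
     "Is E's assertion based on evidence?"],
)

arg = Argument(
    grounds="Hans Blix said no WMD were found",
    conclusion="Saddam does not have WMD",
    scheme=expert_opinion,
)
arg.answered.add("Is E an expert in the field A is in?")
open_qs = arg.open_questions()  # the questions a challenger could still raise
```

A moderator (or the posting form) could then prompt authors for the remaining open questions before certifying a post.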

to induce users to consider the validity of an argument by assessing the credibility of both the grounds and the reasoning scheme. In other words, an argument's pitfalls can be found in the supporting evidence, or in the scheme, or in both.

Figure 4. A proposal for the representation of argument nets (the figure shows a small argument net: the claim "Saddam will use WMD against neighbors including Israel" is supported by a cause/effect arc from the premise "Saddam has WMD" and by an analogy arc from "Hitler built a military power to attack his neighbors"; the premise "Saddam has WMD" is in turn supported by appeal to authority, "CIA and Ministry of Defense have evidence (link to report)", and attacked (CON) by appeal to authority, "Hans Blix said none were found (link to interview)")
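The argument-net representation described above can be sketched as a directed graph whose nodes are claims with a degree of truth and whose arcs carry a scheme label and a pro/con polarity. The propagation rule below is a deliberately simplistic stand-in of our own devising; the article does not specify any particular inference semantics:

```python
class ArgumentNet:
    """Minimal illustrative sketch of an argument net (not the
    Deliberatorium's actual implementation)."""

    def __init__(self):
        self.truth = {}    # claim -> degree of truth in [0, 1]
        self.arcs = []     # (premise, conclusion, scheme, polarity)

    def add_claim(self, claim, degree=0.5):
        self.truth[claim] = degree

    def link(self, premise, conclusion, scheme, polarity=+1):
        """polarity +1 for a PRO arc, -1 for a CON arc."""
        self.arcs.append((premise, conclusion, scheme, polarity))

    def propagate(self, conclusion):
        """Shift the conclusion's degree of truth according to the signed
        average support of its incoming arcs -- one simplistic rule among
        many possible ones."""
        incoming = [(p, pol) for (p, c, s, pol) in self.arcs
                    if c == conclusion]
        if not incoming:
            return self.truth[conclusion]
        support = sum(pol * self.truth[p] for p, pol in incoming) / len(incoming)
        self.truth[conclusion] = max(0.0, min(1.0, 0.5 + 0.5 * support))
        return self.truth[conclusion]

net = ArgumentNet()
net.add_claim("Saddam will use WMD against neighbors", 0.5)
net.add_claim("Saddam has WMD", 0.8)
net.add_claim("Hans Blix said none were found", 0.9)
net.link("Saddam has WMD", "Saddam will use WMD against neighbors",
         scheme="causal", polarity=+1)
net.link("Hans Blix said none were found", "Saddam has WMD",
         scheme="appeal to authority", polarity=-1)
net.propagate("Saddam has WMD")  # the CON arc pulls the premise's truth down
```

The point of the sketch is the data model (claims, scheme-labelled arcs, truth transfer), not the particular arithmetic, which a real system would replace with a principled argumentation semantics.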


The Empirical Test of the Platform: Preliminary Results

Software Implementation Overview

The Deliberatorium is a Common Lisp application developed on top of cl-http, an open source web server developed at MIT (http://www.cl-http.org:8001/). It provides a simple and consistent web-based user interface that allows users to navigate and edit the argument map as well as communicate with each other. The system's capabilities are made accessible via a set of tool icons arrayed across the toolbar at the top of the page (figure 5). The tools include:

• the argument map: this allows users to browse and edit the argument map. The argument map, as much as possible, attempts to provide "social translucence" (Erikson et al., 2002), allowing members of the user community to get a sense of what other community members are doing, thereby fostering emergent self-organization. This is achieved by providing visual cues concerning which branches of the argument map are most active, which posts are the most highly rated, and so on. The system preserves the edit history for all articles in the argument map, which allows one to quickly "roll back" an article to a previous version if desired.
• search, bookmarks and history: these allow users to find the posts that have given keywords or edit histories, were bookmarked for future reference, or were looked at recently by that user.
• people and home: every user has a customizable home page which lists the articles and comments they have contributed. These allow users to develop, if they wish, an on-line presence, facilitating reputation-building, networking, and community-building. The people tool provides links to the home pages of all the users registered for the current topic.

Figure 5. A snapshot of the Collaboratorium user interface.










• mail, chatroom and forum: these tools allow users to communicate one-to-one (mail), in a public synchronous context (chatroom), or via a public asynchronous threaded discussion (forum).
• watchlist: this allows users to specify which articles or comments they are interested in, so they can be automatically notified (by email) when any changes are made to them. When coupled with easy rollbacks, this helps make the knowledge base "self-healing": if an article is compromised in some way, this can be detected and repaired rapidly.
• survey and thinktest: these allow users to provide feedback on the system (survey) as well as take an analytic reasoning test derived from the widely-used Graduate Management Admission Test (GMAT). The latter allows us to better understand individual differences in successful use of the argument map, as well as assess whether argument map use improves critical reasoning skills.
• the help tool: this provides a set of textual user guidelines (describing how to participate in an argument-mapping community) as well as help videos (describing how to use the user interface, including all the tools listed here).

See Klein (2007) for a more detailed discussion of the design of the Deliberatorium.

The Set Up of the Experiment

A first test of the deliberatorium was performed in December of 2007 at the University of Naples Federico II (Italy) with a community of 220 graduate students, who were asked to deliberate on the topic "the future of biofuels". The students were all part of the same class of a graduate program in Industrial Engineering, aged 23-25, 55% male. Students selected from that class helped to coordinate and manage the experiment, and as a result had to deal with social pressures from their fellow students and with the fact that most students inevitably felt the experiment was a course task for which they would be evaluated by their professor. All these circumstances made the context different from a fully open online community and represent a significant limitation of this study. On the other hand, going large scale in these early steps within an uncontrolled experimental setting might not have attracted a critical mass of users, and would have prevented us from having direct contact with the users, which proved very useful for debugging, improving and upgrading the software based on their feedback. The test developed in four phases, starting from early November 2007:

1. Phase 1: preparatory work
2. Phase 2: a three-week period, in which students were requested to populate the deliberatorium with contents
3. Phase 3: one week for consolidating the knowledge map produced by the community
4. Phase 4: data analysis

In the preparatory phase, the students attended four two-hour seminars from external experts about:

1. collective intelligence and its current internet applications
2. argumentation, with a focus on the IBIS approach
3. major issues in energy governance, with a country focus on Italy and EU policies
4. an instructional demo of the deliberatorium beta version

The students were also given a few reading materials: two newspaper and magazine articles about the topic and the IBIS manual available at http://touchstone.com/wp/IBIS.html (Conklin, 2003). We decided to keep to a minimum the knowledge of both the topic and the platform that the students were required to have before starting the experiment, since two main objectives of the experiment were to evaluate i) how easy it is for


new users to approach a collaborative platform based on argumentation, and ii) how much the platform helps users improve their knowledge and understanding of the topic. As a discussion topic we chose "the future of biofuels". The criteria we used to select the topic were: 1) it had to be relevant to the current debate about a systemic complex challenge, like for instance how to reduce global warming; 2) it had to be focused enough to keep students from getting lost in too wide a debate, considering their limited time, attention and expertise; 3) it had to be controversial and multifaceted, so that the community could explore different possible solutions and perspectives. Instead of giving students an empty argument map, we set up two framing, first-level questions and options: 1) what percentage of transportation energy needs in Italy will come from biofuel consumption twenty years from now? (options: limited (less than 20%), moderate (between 20 and 30%), substantial (more than 30%)); 2) how can Italy get the biofuels it needs? (no options). The first was a kind of prediction-market question, while the second was an open design question. We did not prevent users from adding further first-level questions. Before starting phase 2 we prepared two tests to be given to the students before and after the experiment. The first test was aimed at evaluating their knowledge of the topic, and the second was a critical thinking test. Our aim was to see if, and to what extent, the deliberatorium helps students improve their knowledge of the topic and their critical thinking skills.
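The pre/post test design described above amounts to a paired comparison of each student's scores before and after the experiment. A hedged sketch of that analysis, with entirely made-up scores (the article does not report the test data):

```python
import statistics

def paired_improvement(pre, post):
    """Mean improvement and a paired t statistic for pre/post scores.
    t = mean(d) / (sd(d) / sqrt(n)); significance would then be read
    from a t table with n-1 degrees of freedom."""
    diffs = [b - a for a, b in zip(pre, post)]
    mean_d = statistics.mean(diffs)
    t = mean_d / (statistics.stdev(diffs) / len(diffs) ** 0.5)
    return mean_d, t

# hypothetical topic-knowledge scores for six students (not real data)
pre  = [12, 15, 9, 14, 11, 13]
post = [15, 16, 12, 15, 14, 15]
mean_d, t = paired_improvement(pre, post)
```

The same computation applies to the critical thinking test; with the real cohort of ~220 students the degrees of freedom would of course be much larger.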

Designing the Argumentation Community: Roles, Rules & Incentives

In the design of the deliberatorium virtual community we adopted the framework described in figure 2. In particular, in the deliberatorium case there are three roles: moderators, authors and readers/voters. Moderators are charged with the usual tasks of filtering out noise and rejecting off-topic posts. In addition, they were charged with ensuring that the argument

map was well-structured, i.e. that all posts were properly divided into individual, non-redundant issues, ideas, and arguments, and were located in the relevant branch of the argument map. This involved classifying and sometimes editing posts, offering suggestions to authors, aggregating similar arguments, and occasionally re-organizing the overall argument map so that related topics were grouped into the same branch. A team of 4 student moderators was selected and trained in argument mapping before the test. One of the authors also joined the moderator team. The on-line argumentation process developed as follows:

1. Authors posted and edited questions, ideas, and pro/con arguments, producing an argument map similar to that in figure 3. While questions and ideas could be posted only as single short sentences, arguments were posted using an on-line form that helped authors structure their post in argument form (conclusion, argument scheme and critical questions, argument content, and the possibility to attach links, references and documents); the form was designed because otherwise people tended to bundle a mishmash of issues, ideas and arguments within individual issues and ideas;
2. All users (including moderators, authors and readers) rated arguments and ideas and could send comments to authors through threaded discussion forums associated, like wiki talk pages, with each post. Rating was anonymous;
3. Posts were initially given a status of "pending", and could be certified only by moderators. Until a post was certified, it could not be rated and nobody, except its author, could link any other posts to it. We also explained that only certified posts would appear in the final, publicly available, version of the argument map.

Moderators also left comments, edited, moved, trashed and classified posts. Usually moderators would leave a comment to explain changes. Authors would receive an alert email when


their post was modified or trashed (but the trash was never emptied). In the experiment we established a single authorship rule: nobody, except moderators, was allowed to edit a post authored by someone else. Several countermeasures and incentives were set up to limit the negative effects of the limited scale and of the social and informational pressures that are usually absent or limited in Internet communities. In particular, with the aim of improving post quality, we used several extrinsic incentives, such as minor awards and five scholarships for the best participants, thanks to the support of the Naples City Science Museum. To limit the negative influence of social pressure on the rating process, a kind of prediction-market incentive for voters was set up, according to which votes would be converted at the end of the experiment into awards financed by the sponsor organization, in the following way: at the end of phase 2, a team of independent external experts would identify and rank the best posts. Voters would then be assigned a score based on how closely their votes correlated with the expert ratings. The voters with the highest correlation scores would be selected and awarded with educational gadgets.
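The voter-scoring rule just described can be sketched as a correlation between each voter's ratings and the expert panel's ratings of the same posts. The code below is an illustrative reconstruction, not the actual scoring script; names and data are assumptions:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def voter_scores(votes, expert):
    """Score each voter by how closely their ratings track the expert
    panel's ratings on the posts both have rated."""
    scores = {}
    for voter, ratings in votes.items():
        posts = [p for p in ratings if p in expert]
        if len(posts) < 2:
            continue  # no meaningful correlation from a single shared post
        scores[voter] = pearson([ratings[p] for p in posts],
                                [expert[p] for p in posts])
    return scores

expert = {"post1": 5, "post2": 2, "post3": 4}
votes = {
    "alice": {"post1": 4, "post2": 1, "post3": 4},   # tracks the experts well
    "bob":   {"post1": 1, "post2": 5, "post3": 2},   # mostly disagrees
}
scores = voter_scores(votes, expert)
best = max(scores, key=scores.get)   # the voter to reward
```

Rewarding agreement with an independent panel, rather than popularity, is what gives the scheme its prediction-market flavor: voters do best by rating honestly.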

Preliminary Results

Since phase 2 terminated at the end of December 2007, at the time of writing this article the data analysis had just begun. We are currently collecting and analyzing three types of data:

1. statistics about tool usage and information accumulation (number of ideas, number of arguments, total volume of inputs, etc.),
2. effects on users, in terms of user satisfaction as well as impact on topic knowledge and critical thinking skills,
3. quantity and quality of contents.

Consequently we can report only some preliminary results. Nevertheless, the experiment was very useful for observing the behavior of users in the field and for improving the software based on users' feedback. These changes will help to plan and design future experiments. Among the more relevant modifications to the software were: the introduction of a chat room for users; changes to the argument map visualization to facilitate content searching and display; the introduction of a search function whose algorithm helped to find "similar" posts (and thus helped avoid redundant posts and assure new posts were properly located); the ability to upload files; and the addition of new features for moderators, such as a merge tool to aggregate overlapping posts and a queue tool to show the queue of uncertified posts.

We observed a very high level of user participation, achieving thousands of posts in just a few days (Figure 6). Remarkably, the deliberatorium was active almost 24 hours per day, except for a hiatus between roughly 3 and 6 am. About 180 out of 220 users participated with at least a few postings. In two weeks they posted nearly 3000 issues, ideas and arguments (of which roughly 1900 were eventually certified), in addition to over 2000 comments (table 2). There were, however, relatively few ratings, notwithstanding the presence of extrinsic incentives: each post received an average of only 2.2 ratings. The intensity of participation was very heterogeneous among users, as shown in Figure 7. The distribution of posts per user shows a thick middle of users and only vaguely recalls the power-law distribution that has been found to be typical of many on-line communities (Madey et al., 2002; Healy and Schussman, 2003).
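One quick way to compare a "thick middle" against a power-law-like head is the share of posts contributed by the top fraction of users. The numbers below are hypothetical, not the experiment's actual data:

```python
def top_share(posts_per_user, fraction=0.1):
    """Share of all posts contributed by the top `fraction` of users.
    A power-law community concentrates most posts in a small head;
    a 'thick middle' spreads them out."""
    ranked = sorted(posts_per_user, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return sum(ranked[:k]) / sum(ranked)

# hypothetical participation profiles for 180 active users
flat   = [15] * 180               # everyone contributes equally
skewed = [200] * 18 + [5] * 162   # power-law-like head of heavy posters

flat_share = top_share(flat)      # top 10% hold exactly 10% of posts
skew_share = top_share(skewed)    # top 10% hold the large majority
```

Plotting posts against user rank on log-log axes (as in figure 7's data) is the usual complementary check: a power law appears as a straight line, a thick middle as a pronounced curve.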
The breadth of coverage, as well as the efficiency of the platform in terms of knowledge accumulation, was quite good: a non-expert community of students was able to create a remarkably comprehensive map of the current debate on biofuels in just a couple of weeks, exploring topics ranging from technological issues to the environmental, economic and sociopolitical impacts of the widespread diffusion of biofuels. Moreover, the proportion of out-of-


Figure 6. Number & kinds of posts after two weeks (two charts: cumulative posts since midnight 12/4/07, rising toward 5000 over roughly 430 hours, and post counts by type: issue, idea, pro, con, comments)

Table 2. Number and kind of posts

Type of Post    Number of Posts    Number of Certified Posts    % (of all certified posts)
Issue           242                89                           5%
Idea            962                452                          24%
Pro             1488               1045                         55%
Con             402                325                          17%
Comments        2009               n/a                          n/a
Grand total     5003               1911                         100%

topic posts was negligible: about 0.1%. The dominant argument scheme was "by authority", followed by analogy, deductive, and inductive schemes (table 3). It also appears that users were generally not able to associate the right scheme with their arguments, which increased the moderators' workload.

Though the students' participation may have been influenced by their perception that the experiment was a course task for which they


Figure 7. Distribution of number of posts per user (number of contributions per user, from 0 to about 60, plotted against user rank from 1 to 208)

Table 3. Most popular argument schemes

Scheme Type                   # times used
Appeal to authority           760
Analogy                       280
Deductive argument            204
By induction                  171
From consequences             102
Causal                        61
Appeal to popular opinion     49
Ad hominem                    35

could be evaluated by their professor, their informal face-to-face and on-line comments, posted on the deliberatorium as well as on a threaded discussion forum run independently on a students' association web site, showed that they found the experiment interesting and appreciated the innovative characteristics of the deliberatorium. As expected, at the beginning of the experiment most users did not really grasp the IBIS logic. Rather, many users adopted a kind of forum frame, in which they tended to publish posts as news articles (e.g. "France creates incentives for biofuel") rather than as IBIS entities. Common mistakes were: difficulty in distinguishing between ideas and arguments; the tendency to put multiple arguments into a single argument post; linking arguments to a logically irrelevant location in the argument map; questions and ideas proliferating without any associated pro/con arguments; and difficulty in selecting the right kind of scheme for arguments. After a while we observed an improvement in the use of the platform, as users developed confidence, profited from moderator feedback, and learned to use the tool.


The level of direct debate was moderate. Users did attach many arguments to each other's posts (72% of all certified posts were arguments, and 70% of these arguments were attached to posts authored by someone else), but the great majority of all arguments (again, 70%) were pros rather than cons. The depth of the argument trees was relatively small (table 4). Most arguments (85%) were attached directly to ideas, with the remainder attached to other arguments. This relative dearth of debate may have been an outcome of the students' reluctance to criticize the contributions of their peers, and thus may be an artifact of the co-located nature of the user population. Other possible explanations include: i) inertia deriving from the predominant use of forums and wikis; ii) the short time window compared to the learning curve of users with the new tools; iii) the students' lack of specific expertise and motivation on the topic, leading to fast content saturation and an inability to explore specific subtopics in depth; iv) the use of individual awards and prizes, together with the single authorship rule, may have fostered competition over collaboration; this could explain a certain level of redundancy in contents, an emphasis on authoring rather than on debating in order to maximize individual exposure, and a fear of being drawn into potentially highly conflictual sub-discussions.

Other important lessons were learned concerning the moderators and, more generally, community governance. Moderators played a crucial role: in an important sense they led the community. They supported users with comments and suggestions and, by ensuring a logically-organized argument map, helped users rapidly locate the contexts where their piece of knowledge could best be linked. For these reasons it is crucial to have enough moderators working to ensure fast certification and timely reorganization of the argument map. With the existing data we can roughly estimate the requisite number of moderators per user. A cadre of 2 to 5 moderators (the number varied from day to day according to their other commitments during the course of the experiment) was able to more or less keep up with 180 active authors, but only by dint of an unsustainably heavy investment of their time. We estimate that a more realistic time commitment would require that between 5 and 10% of the active users be moderators.

Conclusion

Limitations of this Study

In this article we have presented a new mass collaboration platform we call the deliberatorium. The aim of the deliberatorium is to support large, geographically dispersed communities of users in collective deliberation about complex and controversial issues. The key difference between the deliberatorium and other large-scale collaboration tools, like forums, blogs, chat rooms, and wikis, is that it supports a logic- rather than time-based knowledge organization structure, based on argument maps. In this article we have argued that this structure makes the deliberatorium a superior tool for supporting large-scale collective deliberation. We have also reported

Table 4. Depth of the argument tree

Depth of argument tree    % of all arguments
1                         85%
2                         12%
3                         2%
4                         1%
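The depth statistics in table 4 follow directly from the parent link of each argument: depth 1 for arguments attached directly to an idea, one more than the parent's depth otherwise. A sketch of that computation on a toy map (the post names and links are made up):

```python
def depth_distribution(parent):
    """Fraction of arguments at each depth, given a map from each
    argument to its parent argument (None = attached directly to an idea)."""
    memo = {}
    def depth(a):
        if a not in memo:
            p = parent[a]
            memo[a] = 1 if p is None else depth(p) + 1
        return memo[a]
    depths = [depth(a) for a in parent]
    return {d: depths.count(d) / len(depths) for d in sorted(set(depths))}

# toy map: a1..a3 attached to ideas, a4 attacks a1, a5 attacks a4
parent = {"a1": None, "a2": None, "a3": None, "a4": "a1", "a5": "a4"}
dist = depth_distribution(parent)
```

On the experiment's real data this yields the heavily depth-1 distribution of table 4, i.e. arguments rarely provoked counter-arguments.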


some preliminary results of a first field experiment with a community of about 220 users. To our knowledge the deliberatorium represents the first argumentation platform to be applied successfully at this scale. The experiment permitted us to build what is, to our knowledge, one of the largest argument maps ever built, on a complex topic, over the course of two weeks, starting with novice users. Many more lessons almost certainly remain to be gleaned from the test dataset. The deliberatorium software recorded essentially every user interaction with the knowledge base, including every view, modification or rating of any post, so we have a complete time-stamped record of the evolution of the argument map and of what the users did while creating it: a database of over 110,000 distinct events. The results of a thorough analysis of this data will be presented in future publications.

The field evaluation was limited in several important ways. One major limitation was the disproportionate use of extrinsic incentives (awards for the best participants). It is highly probable that this, in combination with the single authorship rule, fostered competition over collaboration among users, leading them to focus on authoring rather than on reading, rating and improving what others had authored. This probably encouraged needless redundancy and low information disclosure, produced a relatively moderate level of debate, and did not fully exploit the power of the community to improve the quality of the posts. One of the major changes we will introduce in the next evaluation, therefore, is to enact an open authorship rule, such as that used in Wikipedia, and to rely mostly on intrinsic incentives (like voluntarism and reputation).

Another open issue is the rating procedure. The rating procedure can play a critical role in promoting high-quality contributions and convergence in collective deliberation. In the experiment presented in this article the rating tool was extremely simple: users' votes expressed how much they liked a post on a five-point scale ranging from 1 (poor) to 5 (excellent). Further research developments will concern the design of a more articulated rating procedure aimed at evaluating idea and argument quality, author reputation, and community consensus. The experiment involved a relatively small number of users, by Internet standards, and the way students approached the experiment was distorted, no doubt, by the fact that they were co-located peers. Social pressure, coupled with the students' low expertise in the topic, may for instance have had a role in limiting the number of cons compared to pros and in reducing the number of poor ratings. The experiment also ran, perforce, over a limited time window. Further evaluations will aim to remove these artificial constraints by assessing the platform with much larger, truly open, and more intrinsically motivated user communities. Increased scale will probably require qualitative changes in design choices and user incentives. Among the most critical improvements we underline: designing mechanisms and rules able to generate a self-organized hierarchy of user roles (readers, authors, and moderators); improving the design of the platform in terms of browsing, information visualization and retrieval; providing on-line support to users (such as on-line help and training tools); and building tools to increase moderators' productivity. We are currently identifying other possible contexts for assessing and applying the deliberatorium, ranging from problem solving within companies and professional communities of practice, to learning and education with communities of young students.

Research Implications and Next Steps

The experiment presented in this article was the very first test of the Deliberatorium. The main aim of the test was to observe users' first-hand reactions to large-scale, internet-based argumentative debate, so as to improve the design of the platform for the next experiments, in which our aim will be to compare the deliberatorium's performance with that of other current sharing tools based on different technologies. A first attempt was made with a second test, run in the late Spring of 2008


with a group of 300 students at the University of Zurich. The aim of this test was to compare the new release of the deliberatorium with more traditional technologies, in particular forums and wikis, given the same structure of solely intrinsic incentives. For this purpose, we created three groups of users debating the same topic but using, respectively, the deliberatorium, a forum and a wiki. The analysis of the data from this test was still in progress during the writing and reviewing of this article, and its results will be the object of a future publication. Starting from the empirical results and lessons learned in the Naples test, in this section we present several research hypotheses for the next test, clustered into three groups: effects on users' skills and participation, effects on the quality and quantity of knowledge contents, and effects on group deliberation.

Effects on Users' Skills and Participation

H1: A large-scale collaborative argumentation platform improves users' critical thinking skills compared to forums and wikis.

While in forums and wikis people express themselves freely, the deliberatorium requires that users post their contributions in a specific argumentative format. By its very nature, argumentation should encourage critical thinking and evidence-based reasoning through the use of claims, rebuttals, pros and cons, explicit opinions, facts and figures. We expect that, after the learning process needed to become confident with the IBIS logic and with intense use of the platform, users will improve their critical thinking skills. In order to test this hypothesis, users can be given, before and after the experiment, a critical thinking test aimed at measuring whether their critical thinking skills have improved. One possibility is to use existing standard critical thinking tests, like those employed for graduate program admission or recruitment purposes, consisting of a set of multiple-choice questions that evaluate whether users are able to recognize valid

arguments and reasoning fallacies and produce correct deductions.

H2: The deliberatorium supports users in gaining and developing greater knowledge of the discussion topic compared to forums and wikis.

On-line argumentation should foster debate through internet-enabled "mass conversations" made up of arguments, endorsements and attacks, and should favor the emergence of the most plausible, convincing and widely-shared conclusions about a given topic. To correctly post their contribution, users are required to understand the structure of the discussion with the help of the argument map and other knowledge visualization facilities, to read other users' contributions, which are properly located in the current debate, and to look for additional information to improve, attack or endorse existing arguments and ideas, and/or create new ones. Even for passive users, the mere browsing of a well-organized argument map should help them develop at least a basic, but critical, understanding of the main issues related to the topic. In order to test this hypothesis we will develop, with the help of experts in the field, a structured multiple-choice test to evaluate topical knowledge before and after the experiment.

H3: The level of participation of users decreases compared to forums and wikis, due to the difficulty of using argumentation rules to post contributions.

Previous studies with the IBIS logic show that people encounter difficulties in using the argumentation format, especially if they lack previous experience and specific skills (Conklin, 2003). At small scale this problem is solved by human facilitators, who are charged with the task of identifying arguments in the discussion and "coding" them into an argument map. Forums and wikis, on the other hand, pose no constraints on the way people express their ideas. Consequently, we expect that, on

Copyright © 2009, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

88 International Journal of Decision Support System Technology, 1(1), 69-92, January-March 2009

average, less experienced and occasional users may be discouraged from participating. User participation can be measured by several quantitative indicators such as number of logins, number and kind of posts, number of post revisions, number of feedback comments given to other users, etc.
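The participation indicators above are straightforward to derive from a platform's activity log. The sketch below is a minimal illustration, not part of the deliberatorium itself: the event names and log format are our own assumptions, standing in for whatever instrumentation the platform actually provides.

```python
from collections import Counter

# Hypothetical activity-log events as (user, event_type) pairs.
# Event types mirror the indicators listed in the text: logins,
# posts, post revisions, and feedback comments to other users.
events = [
    ("alice", "login"), ("alice", "post"), ("alice", "revision"),
    ("bob", "login"), ("bob", "feedback"), ("alice", "login"),
]

def participation_profile(events):
    """Tally each participation indicator per user from the event stream."""
    profile = {}
    for user, kind in events:
        profile.setdefault(user, Counter())[kind] += 1
    return profile

profile = participation_profile(events)
print(profile["alice"]["login"])   # 2
print(profile["bob"]["feedback"])  # 1
```

Per-user tallies like these make it easy to compare participation distributions across the three platforms, or to contrast experienced and occasional users as H3 requires.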

Quality and Quantity of Knowledge Contents

H4: The quality of the contents posted by users of the deliberatorium will be higher than in forums and wikis.

While on the one hand the argumentation formalism can represent an obstacle to participation, on the other hand committed users, together with a handful of moderators endowed with above-average argument mapping skills, should increase the quality and organization of the contents. Content quality and proper knowledge organization would, in turn, increase the pay-off for new users in terms of the knowledge they gain from using the system, so, in the long term, the level of participation could recover. Measuring contribution quality is a critical task. Content quality indicators can be intrinsic or extrinsic. Intrinsic indicators can be related to redundancy, signal-to-noise ratio, the presence of fallacious arguments, and coherence (Lih, 2004; Storrer, 2002; Stvilia et al., 2005). Extrinsic indicators can be defined in terms of user satisfaction or of judgments by panels of topic experts through ad hoc surveys.

H5: The deliberatorium will underperform forums and wikis in the sheer volume of information.

H6: The deliberatorium will outperform forums and wikis in the volume of non-redundant information and in the breadth of problem space exploration, and will limit off-topic posts and digression.

When assessing the quantity of posted information, one should consider several variables: the sheer volume of posted information, the volume of non-redundant information, the extent to which users are able to explore the problem space, and the level of digression in the discussion. The rationale for H5 is that posting in forums and wikis is considerably simpler, so in the short term a larger amount of contributions can be expected there. In the longer term, however, as the discussion develops in depth and becomes more specialized, common users may find it increasingly difficult to contribute novel information. The rationale for H6 is the following. First, in an argument map duplicate posting should be considerably limited by the fact that a post about a specific issue is published just once and possibly improved by successive revisions. Second, the IBIS logic should encourage users to explore the problem space both vertically (in terms of variety) and horizontally (in terms of depth of discussion about a given issue or idea), since arguments and ideas are likely to generate new issues as the discussion develops in depth. Third, since the argument map provides a more rational organization of contents and more focus, it will be easier to identify and marginalize digressions from the main discussion. We expect the IBIS logic to increase diversity and/or depth because people will find it much easier to know whether or not an idea or argument has already been proposed. By contrast, when there is a huge corpus of text, as in more traditional text-based media, people may simply assume that their point has already been made, because it is too time-consuming for them to check.
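Of these variables, the volume of non-redundant information is the hardest to pin down. As one rough operationalization (our own illustration, not a measure proposed in the text), a post could be counted as redundant when its word overlap with some earlier post exceeds a threshold, for example using Jaccard similarity over word sets:

```python
def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two posts, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def non_redundant_count(posts, threshold=0.6):
    """Count posts whose similarity to every earlier kept post stays
    below the threshold -- a crude proxy for informational novelty."""
    kept = []
    for post in posts:
        if all(jaccard(post, prev) < threshold for prev in kept):
            kept.append(post)
    return len(kept)

posts = [
    "carbon tax is the best policy option",
    "the best policy option is a carbon tax",   # near-duplicate
    "cap and trade creates a market for emissions",
]
print(non_redundant_count(posts))  # 2
```

A real study would likely use a more robust similarity measure (stemming, TF-IDF weighting, or semantic similarity), but the same kept-versus-redundant logic applies: the metric rewards a forum for many posts only insofar as they say different things.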

Effects on Group Deliberation

H7: Compared to wikis and forums, the deliberatorium is expected to produce less polarization, more information disclosure, and less error propagation.

We expect that the deliberatorium will help improve collective deliberation by reducing the negative effects of the social and informational pressures that are typical of small-scale, co-located group decision-making. Large-scale on-line communities have a set of desirable characteristics from this perspective:

1. Impersonality: anonymity, or the fact that members do not know each other, can ensure some protection from social pressure and higher decisional independence of participants.

2. Asynchronicity: people can enter the debate when they want and are not forced to provide an answer immediately, as in face-to-face situations. As a result, users may have more time for information search, reflection, and exploration.

3. Greater information access: the internet is a formidable low-cost tool for accessing a large body of information and evidence, which can then immediately be referenced in the deliberatorium discussion.

4. Greater diversity and turnover: large-scale communities have a higher chance of attracting diverse, independent, and heterogeneous perspectives, thus preserving diversity. Large on-line groups are not closed but are characterized by variable degrees of participation and a high rate of participant turnover. One further advantage of turnover is that it helps keep the discussion vital and less likely to get locked into a rut; for example, a newcomer may open up a new line of inquiry.

5. Parallelism: with large-scale social software, people can make contributions in parallel, so there is little opportunity for a single individual or ideology to dominate the debate, unlike contexts with serial, limited-bandwidth interactions, like forums.

To our knowledge, however, there is no empirical evidence to prove that large-scale, internet-mediated interaction and greater information availability will lower social pressure and improve collective deliberation. On the contrary, some kinds of on-line communities and platforms (e.g., blogs) appear to suffer from polarization, and others (such as Wikipedia and forums) often flounder with controversial issues (Sunstein, 2006). We expect that on-line, large-scale argumentation can at least partly avoid these shortcomings: i) by inducing critical thinking and evidence-based reasoning; ii) by encouraging users to look for additional information to support their claims and to become more informed about a topic, as well as aware of possibly different and even contrasting perspectives; iii) by contributing to greater information disclosure, since, in order to be convincing, an argument has to be supported by convincing explicit premises; and iv) by improving the quality of arguments, since the weaker, more fallacious schemes will be uncovered and easily defeated by a large audience. Finally, critical evaluation of alternative and conflicting solutions should help the community be less inclined to balkanization. In a logically organized argument map, contrasting perspectives may coexist better than in other collaborative tools like wikis and forums, in which the presence of conflict usually brings about editorial wars. In an argument map, moreover, a given claim and its adversarial claims can be closely co-located and are much more difficult to overlook than in traditional blogs and wikis, which typically are focused, by self-selection, on just a subset of the possible perspectives.
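Testing H7 requires an explicit measure of polarization. One simple sketch (our own operationalization, not one specified in the paper) assumes participants report opinion scores on a -1..1 scale before and after deliberation; group polarization then shows up as the mean drifting toward an extreme and/or the distribution spreading into opposing camps:

```python
from statistics import mean, pstdev

def polarization_shift(before, after):
    """Compare opinion scores before and after deliberation.
    A more extreme mean and a larger spread after deliberation
    are both classic signatures of group polarization."""
    return {
        "mean_shift": abs(mean(after)) - abs(mean(before)),
        "spread_shift": pstdev(after) - pstdev(before),
    }

before = [0.1, -0.2, 0.3, 0.0]
after = [0.6, -0.7, 0.8, -0.5]   # opinions split into two camps
shift = polarization_shift(before, after)
print(shift["spread_shift"] > 0)  # True: the group polarized
```

Applied to matched cohorts on the deliberatorium and on a control forum or wiki, H7 predicts a smaller (or negative) shift for the deliberatorium cohort.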

Acknowledgment

The authors gratefully acknowledge the sponsorship of the Naples City Science Museum (Città della Scienza) for the test implementation and its support in disseminating the first research results. The authors also wish to thank Livio Ferraro, Fabiana Ippolito, Samantha Lamberti, and Vincenzo Leone for their valuable help as community moderators and for their research assistance during the field test.

References

Buckingham Shum, S. 2006. Hypermedia Support for Argumentation-Based Rationale: 15 Years on from gIBIS and QOC. In A. Dutoit (ed.), Rationale Management in Software Engineering, 111-132. Berlin: Springer-Verlag.

Carter, L.M. 2000. Arguments in Hypertext: A Rhetorical Approach. Proceedings of the Hypertext 2000 Conference, San Antonio (TX), 85-91.

Chesnevar, C., McGinnis, J., Modgil, S., Rahwan, I., Reed, C., Simari, G., South, M., Vreeswijk, G., Willmott, S. 2006. Towards an argument interchange format. The Knowledge Engineering Review, 21(4): 293-316.

Conklin, J. 2003. The IBIS Manual: A Short Course in IBIS Methodology, http://cognexus.org/id26.htm#the_ibis_manual.

Conklin, J. 2006. Dialogue Mapping: Building Shared Understanding of Wicked Problems. Chichester (UK): Wiley.

de Moor, A., Aakhus, M. 2006. Argumentation support: from technologies to tools. Communications of the ACM, 49(3): 93-98.

Erickson, T., Halverson, C., Kellogg, W.A., Laff, M., Wolf, T. 2002. Social Translucence: Designing Social Infrastructures that Make Collective Activity Visible. Communications of the ACM, 45(4): 40-44.

Galambos, J.A., Abelson, R.P., Black, J.B. (eds.) 1986. Knowledge Structures. Hillsdale (NJ): Lawrence Erlbaum Associates.

Gloor, P. 2006. Swarm Creativity: Competitive Advantage through Collaborative Networks. New York: Oxford University Press.

Gloor, P. 2007. Coolfarming, http://scripts.mit.edu/~pgloor/coolfarming/index.php?title=Main_Page.

Healy, K., Schussman, A. 2003. The Ecology of Open-Source Software Development. Working paper, http://kb.cospa-project.org/retrieve/3150/healyschussman.pdf.

Hitchcock, D., Verheij, B. 2005. The Toulmin model today: Introduction to special issue of Argumentation on contemporary work using Stephen Edelston Toulmin's layout of arguments. Argumentation, 19(3): 255-258.

Jøsang, A., Ismail, R., Boyd, C. 2007. A survey of trust and reputation systems for online service provision. Decision Support Systems, 43: 618-644.

Kirschner, P.A., Buckingham Shum, S.J., Carr, C.S. (eds.) 2003. Visualizing Argumentation. Springer-Verlag.

Klein, M. 2007. The MIT Deliberatorium: Enabling Effective Large-Scale Deliberation for Complex Problems. MIT Sloan School of Management, Research Paper 4679-08, http://ssrn.com/abstract=1085295.

Klein, M., Cioffi, M., Malone, T. 2007. Achieving Collective Intelligence via Large-Scale On-line Argumentation. Working paper, MIT Center for Collective Intelligence, Cambridge (MA).

Lih, A. 2004. Wikipedia as Participatory Journalism: Reliable Sources? Metrics for Evaluating Collaborative Media as a News Resource. 5th International Symposium on Online Journalism (April 16-17, 2004), University of Texas at Austin.

Madey, G., Freeh, V., Tynan, R. 2002. The open source software development phenomenon: An analysis based on social network theory. Americas Conference on Information Systems (AMCIS-2002).

Mancini, C., Buckingham Shum, S. 2006. Modelling Discourse in Contested Domains: A Semiotic and Cognitive Framework. International Journal of Human-Computer Studies (in press).

Perelman, C., Olbrechts-Tyteca, L. 1969. The New Rhetoric: A Treatise on Argumentation. Notre Dame (IN): University of Notre Dame Press.

Rahwan, I., Zablith, F., Reed, C. 2007. Laying the foundations for a World Wide Argument Web. Artificial Intelligence (in press).

Raymond, E.S. 2001. The Cathedral and the Bazaar. Sebastopol (CA): O'Reilly.

Rosenhead, J., Mingers, J. 2001. Rational Analysis for a Problematic World Revisited: Problem Structuring Methods for Uncertainty and Conflict. Chichester (UK): John Wiley & Sons.

Schank, R.C. 1986. Explanation Patterns: Understanding Mechanically and Creatively. Hillsdale (NJ): Lawrence Erlbaum.

Shah, S.K. 2006. Motivation, governance, and the viability of hybrid forms in open source software development. Management Science, 52(7): 1000-1014.

Storrer, A. 2002. Coherence in text and hypertext. Document Design, 3: 156-168.

Stvilia, B., Twidale, M.B., Smith, L.C., Gasser, L. 2005. Assessing information quality of a community-based encyclopedia. In Proceedings of ICIQ, 442-454.

Sunstein, C.R. 2006. Infotopia. New York: Oxford University Press.

Surowiecki, J. 2004. The Wisdom of Crowds. New York: Doubleday.

Tapscott, D., Williams, A.D. 2006. Wikinomics. New York: Penguin Books.

Toulmin, S. 1958. The Uses of Argument. Cambridge (UK): Cambridge University Press.

Toulmin, S., Rieke, R., Janik, A. 1979. An Introduction to Reasoning. New York: Macmillan.

Verheij, B. 2003. Dialectical Argumentation with Argumentation Schemes: An Approach to Legal Logic. Artificial Intelligence and Law, 11(2): 167-195.

von Hippel, E. 2001. Open Source Shows the Way: Innovation by and for Users – No Manufacturer Required! Sloan Management Review, Summer.

Von Krogh, G., von Hippel, E. 2006. The Promise of Research on Open Source Software. Management Science, 52(7): 975-983.

Walton, D.N. 1989. Informal Logic: A Handbook for Critical Argumentation. Cambridge (UK): Cambridge University Press.

Walton, D.N. 2006. Fundamentals of Critical Argumentation. Cambridge (UK): Cambridge University Press.

ENDNOTE

1. This exception (the Open Meeting Project's mediation of the 1994 National Policy Review (Hurwitz, 1996)) was effectively a comment-collection system rather than a deliberation system, since the participants were predominantly engaged in offering reactions to a large set of pre-existing policy documents, rather than interacting with each other to create new policy options.

Luca Iandoli received his master's degree in electronics engineering in 1998 from the University of Naples Federico II and a PhD in business and management from the University of Rome Tor Vergata in 2002. Since 2006 he has been a professor in the Department of Business and Managerial Engineering, University of Naples Federico II. He recently received a Fulbright Scholarship in the research scholar category. His current research interests include the application of soft computing techniques, such as fuzzy logic and agent-based systems, to model organizational learning and cognition, as in evaluation and decision-making processes, and the use of collaborative internet technologies to support collective and organizational sense-making. He is a member of the editorial boards of the Fuzzy Economic Review and the Journal of Information Technology: Cases and Applications, and serves as associate editor for the Journal of Global Information Technology Management.

Mark Klein is a principal research scientist at the MIT Center for Collective Intelligence, and an affiliate at the MIT Computer Science and AI Lab (CSAIL) as well as the New England Complex Systems Institute (NECSI). His research focuses on understanding the cross-cutting fundamentals of coordination and applying these insights to help create better human organizations and software systems. He has made contributions in the areas of computer-supported conflict management for collaborative design, design rationale capture, business process re-design, exception handling in workflow and multi-agent systems, service discovery, negotiation algorithms, understanding and resolving 'emergent' dysfunctions in distributed systems and, more recently, 'collective intelligence' systems to help people collaboratively solve complex problems like global warming.

Giuseppe Zollo is a full professor of business economics and organization in the Department of Business and Managerial Engineering, University of Naples Federico II. In 1985-1986 he was a visiting research associate in the Department of Economics of Northeastern University, Boston. He has published in several journals and has presented papers at international conferences on innovation management, organization, small innovative firms, and managerial applications of fuzzy logic. He is a member of several editorial boards of international and Italian journals. He is vice president of the International Association for Fuzzy-Set Management and Economy (SIGEF) and director of the University of Naples Center for Organizational Innovation and Communication (COINOR).
