Collaborative Architecture Design and Evaluation

Haynes, S. R., Skattebo, A. L., Singel, J. A., Cohen, M. A., & Himelright, J. L. (2006). Collaborative Architecture Design and Evaluation. In proceedings of the ACM DIS 2006: The Conference on Designing Interactive Systems.

Steven R. Haynes¹, Amie L. Skattebo², Jonathan A. Singel¹, Mark A. Cohen³, Jodi L. Himelright²

¹ College of Information Sciences and Technology, Penn State University (srh10, [email protected])

² Industrial/Organizational Psychology, Penn State University (als383, [email protected])

³ Business Administration, Computer Science, and Information Technology, Lock Haven University ([email protected])

ABSTRACT
In this paper we describe a collaborative environment created to support distributed evaluation of a complex system architecture. The approach couples an interactive architecture browser with collaborative walkthroughs of an evolving architectural representation. The collaborative architecture browser was created to facilitate involvement of project stakeholders from geographically dispersed, heterogeneous organizations. The paper provides a rationale for the approach, describes the system created to facilitate distributed-collaborative architecture evaluation, and reports on evaluation results from an ongoing, very-large scale application integration project with the United States Marine Corps. The paper contributes to research on early architecture requirements engineering, architecture evaluation, and software tools to support distributed-collaborative design.

Categories and Subject Descriptors
H.5.3 [Information Interfaces and Presentation]: Group and Organization Interfaces - Computer-supported cooperative work; D.2.2 [Software Engineering]: Design Tools and Techniques - User Interfaces.

General Terms
Design, Collaborative Evaluation, Human Factors, Verification.

Keywords
Architecture, Evaluation, Integration, Design.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. DIS 2006, June 26–28, 2006, University Park, Pennsylvania, USA. Copyright 2006 ACM 1-59593-341-7/06/0006...$5.00.

1. INTRODUCTION
Architecture evaluation remains an important area of research as well as an ongoing problem for software development practitioners. Today's interactive system designers frequently carry out their work as part of large-scale integration projects. System architectures are becoming increasingly complex, distributed across different organizations, people, and application domains. User and stakeholder-centric architecture approaches are enjoying increased popularity as a result of this growing complexity. However, these methods require techniques and tools to support geographically dispersed organizations, developers, prospective users, and other stakeholders as they work on and reflect on an evolving design.

Architecture evaluation is expensive and requires the combined skills of highly trained software engineers, system engineers, and human factors experts to be employed effectively. This is especially true in the case of complex, distributed/collaborative systems that are impacted by organizational and social, as well as technical and psychological, concerns. These concerns motivated and guided development of the method and tool described here.

Architecture analysis and evaluation methods differ based on their explicit goals, the quality attributes they consider, the techniques and activities they incorporate, and the level of stakeholder involvement they prescribe [5]. One criticism of evaluation methods is their lack of corresponding tools. A recent survey article reports that only one of eight major evaluation methods is implemented as an architecture evaluation support system [19]. Tool support may be particularly important in today's systems development environment, which often finds development team members and other project stakeholders dispersed geographically across a large number of organizations. The work described here seeks to expand research in the area of evaluation support tools and distributed-collaborative evaluation methods.

Though complex architecture evaluation is still an evolving area of research, scenario-based methods are among the most well developed [3, 19]. A scenario corresponds to a single temporal sequence of interactions between components of a system [11]. The construction and analysis of scenarios at all stages of the architecture design and development process helps to identify gaps, inconsistencies, and errors within an architecture [5]. Most architecture evaluation techniques now incorporate some form of scenario analysis [3]; however, more work is needed to provide realistic methods and supporting tools for large-scale, scenario-based assessments. The approach described here seeks to expand research into scenario-based evaluation.

When an architecture evaluation is carried out early in the system's development lifecycle it involves not only assessment, but also requirements engineering and design. This significantly complicates evaluation because stakeholders are not only reflecting on past design decisions, but also looking forward to determine how the results of evaluation can inform future development. It also means that the architecture being evaluated is continually evolving in response to input and feedback from assessment activities. This iterative cycle is characteristic of modern systems development [7, 9] and also representative of a perspective on design theory known as reflective practice [44]. The view of reflective practice treats design as a "conversation with materials" [43], an inherently iterative process in which designers and other stakeholders with different perspectives assess the consequences of design decisions and actions using their own contextual criteria. Our objective here is to describe an architectural evaluation process that embodies this approach by treating architectural representations (for example, UML diagrams) as design materials and incorporating communication


with a large group of distributed stakeholders with a wide variety of views concerning what constitutes valuable information technology support. One approach to supporting this kind of reflective practice early in the development cycle is to provide project participants with a shared representation of the evolving design. This shared representation provides a focal point for reflection and acts as the topic upon which a conversation with design materials can be directed. Referential communication such as this relies on the existence of a commonly assessable object and opportunities for participants to arrive at a shared understanding through discussion around it [33]. An objective of the work reported here was to investigate the utility of a particular type of shared representation, an interactive, collaborative architecture browser, in the context of a complex, real-world system design and evaluation project.

We are using distributed-collaborative and scenario-based techniques to perform longitudinal evaluations of a complex system architecture that will provide Sense and Respond capabilities to the United States Marine Corps Program Manager, Light Armored Vehicles (PM-LAV). Sense and Respond (S&R) capabilities are achieved through the large-scale integration of information technology ranging from complex sensor equipment installed onboard vehicles in the field to enterprise systems that leverage information gathered from vehicle sensors to respond appropriately to changes in vehicle health. This paper begins with a review of prior research in architecture evaluation, describes the domain and setting for the USMC PM-LAV Sense & Respond project, outlines the method and tools developed to aid evaluation efforts, and reports results on their efficacy.

2. ARCHITECTURE EVALUATION
Early architecture evaluation involves as much requirements engineering and design as assessment. As in the case we describe later, often very little 'bootstrapping' knowledge exists to help designers and other project stakeholders understand what an early but evolving system design means to future work practices, where the architecture is deficient, and how it can be improved to provide better technology support for the context of use. Every design decision necessarily constrains the domain of subsequent decisions. The earlier in the design process a decision is made, the more profound its impact will be on the final form of the design. This means that early requirements engineering involves significant risk; errors committed early in the systems development lifecycle are the most expensive to fix because they can result in a chain of subsequent errors [8]. Therefore, providing cost-effective support for early architectural assessment to the broadest community of stakeholders is an important objective for software engineering research.

Evaluating software-intensive system architectures involves understanding and measuring how well both functional and non-functional requirements are met by a proposed design. Evaluation can be formative, carried out before or in parallel with design and development activities, or summative, examining an existing system architecture to assess its performance [46]. A number of difficulties arise in attempting to collect, quantify, and substantiate evidence that shows the viability of an architecture in achieving system requirements, especially in formative evaluations where a working system may not yet exist. Rather than attempting to quantify or prove the utility and feasibility of a designed architecture, a number of modern methods focus on "learning where an attribute of interest is affected by architectural design decisions, so that we can reason carefully about those decisions, model them more completely in subsequent analyses, and devote more of our design, analysis, and prototyping energies to such decisions" [16]. These methods attempt to elicit information from stakeholders to show how components of an architecture work together to achieve a set of specified design goals.

To obtain this broad understanding of how an architecture might perform, many modern methods include a form of scenario generation and analysis [19]. A scenario is an elaborated narrative of a specific, concrete situation where an architecture is used, modified, or extended [16]. Among the most mature and commonly used scenario-based architectural evaluation methods are the Software Architecture Analysis Method (SAAM), the Architecture Tradeoff Analysis Method (ATAM), and Active Reviews for Intermediate Designs (ARID).

The Software Architecture Analysis Method (SAAM) is among the first formalized scenario-based architectural assessment methods [30]. The SAAM approach involves gathering a group of stakeholders to collectively identify and prioritize the quality attributes associated with a system architecture. These quality attributes can include non-functional requirements such as reliability, security, usability, and performance. After defining the quality attributes with which to analyze the system architecture, stakeholders construct a complete list of all scenarios that the system must accommodate. These scenarios are ranked in accordance with the prioritized list of quality attributes, and the system architecture is evaluated with respect to how well it can execute the most important of the identified scenarios.

The application of the SAAM has been shown to increase stakeholder awareness of the system architecture and identify specific areas of the architecture that exhibit high levels of complexity [31]. Because the method cannot show how design decisions relating to multiple quality attributes simultaneously affect individual components, the results of a SAAM analysis provide only a limited view of a system architecture's effectiveness and a relatively broad description of needed changes. For situations where the system architecture is under continual refinement or is relatively undocumented, the processes of Active Design Reviews and Active Reviews for Intermediate Designs (ARID) are suggested as more applicable alternatives [16]. Active Design Reviews are a form of architectural assessment requiring stakeholders to study an architectural diagram and its accompanying documentation before taking an exam consisting of open-ended questions. This approach is intended to gauge stakeholders' understanding of how the architecture is designed to function. Through the exam, it is proposed, architectural evaluators gain a better understanding of where gaps exist both in the architecture and in stakeholder comprehension [16]. The general idea is that the examination process forces stakeholders to become familiar with the architecture in order to form coherent responses to the open-ended questions.

Succeeding with any of these formalized, scenario-based architectural assessment methods requires that the evaluation be administered during a physical meeting of stakeholders who have allocated up to a week's worth of time for the evaluation. In addition, it is generally required that the evaluation have access to a wide variety of stakeholders, including the system architect with the authority to change the architectural diagram, as well as key decision makers, system developers, and prospective system users. Also, ARID's use of an exam to motivate focused attention on the architectural representation and other specification materials is a 'heavyweight' approach in that it prescribes an activity that may seem burdensome or contrived to people already busy with their normal day-to-day activities.

3. DISTRIBUTED-COLLABORATIVE EVALUATION
Systems development is an increasingly global activity [18, 28, 39]. Designers, developers, and project stakeholders are distributed both geographically and across organizations that are often heterogeneous in their structure, culture, and core objectives. Software architecture validation is particularly problematic in the case of distributed development projects [21]. One of the most significant problems involves supporting the coordination required for a distributed team to achieve a shared understanding of the purposes, structures, and behaviors of a complex system architecture as it emerges from collaboration between team members [22].

As complex systems development becomes an increasingly global activity, it becomes important to understand how distributed and remote architectural assessment methods can be used to support quality assurance in requirements gathering and architectural design. The primary driver behind efforts to support remote evaluation is to reduce the costs associated with these assessments [38]. Distributed evaluations can provide a lower-cost evaluation method than an ATAM or SAAM assessment, both of which require substantial time and resources to bring necessary stakeholders together. Also, depending on how the distributed architectural assessment is applied, it may be possible to encourage more constructive and collaborative participation in the architectural evaluation process, because stakeholders are able to contribute to assessments whenever and wherever it suits their schedules. In turn, an increased sense of collaboration may promote better stakeholder understanding of the architecture and improve stakeholder confidence in their influence over architectural decisions.

A critical success factor in any team-based architectural assessment project is the degree and quality of communication between those involved in evaluation activities. One of the problems with early architecture evaluation, design, and requirements engineering is providing stakeholders with an adequate representation of what it is they are being called upon to assess. Reflective practice requires an object, a preliminary visual representation, to help developers and stakeholders envision how a technology will function internally, and how this functionality will fit into the activities it is being designed to support. In software architecture assessment, providing this representative object presents significant problems because software systems are becoming increasingly complex, and their structure and behavior is relatively invisible compared to a prototype of a physical artifact, such as a hand tool [10]. A number of devices have been proposed to help address the problem of visually representing an early, evolving system design. These solutions include the Unified Modeling Language (UML) architectural diagramming method [42], "rich pictures" to help understand the social and organizational consequences of a proposed design [15], and paper or software prototypes to help envision the form that a given system architecture will take [2, 35].

Studies in computer-supported cooperative work (CSCW) have suggested that objects and artifacts, such as UML diagrams, play a central role in facilitating communication and coordination by providing a focal point for questions and grounding discussions with a shared frame of reference [29, 36]. Our approach involves the use of support tools for aiding the collaborative/cooperative effort in architecture design and evaluation. The use of objects to make explicit a software architecture that is otherwise intangible may be especially important for helping to create a shared understanding or mental model of system entities, attributes, behaviors, and relations. Some reported architecture evaluation cases suggest that these shared representations are the only way to establish a common model and vocabulary for thinking about complex system architectures [34].

Other research provides some evidence for the efficacy of using groupware and other collaboration support tools to enhance inclusive requirements engineering and design. In one study, an experiment was carried out to compare the quality of architecture assessment scenarios generated from face-to-face versus groupware-supported iterations between designers (graduate student subjects) [4]. Though the potential to generalize from their results is limited, their experiment did suggest that the quality of the architectural assessment products constructed through groupware was as good as, if not better than, that produced through face-to-face interaction.

In [37], researchers applied a range of techniques to help evoke creative behaviors from designers, users, and stakeholders in a large-scale, socio-technical air traffic control system. Results from the application of their scenario-centric method highlight the importance of visual representations and other props as aids to more insightful thinking about the possible forms, opportunities, and obstacles to achieving effective system designs. They also found that a lack of time was among the most important barriers to innovative thinking in early requirements analysis and system design. These findings highlight the need for architecture requirements support tools that can leverage the visualization power of modern technology to deliver more effective architectural representations while simultaneously giving project stakeholders a persistent forum to express and collaborate on their ideas.

In addition to the use of common representations, different kinds of facilitated group techniques have been proposed to enhance collaborative systems architecture and requirements gathering [20]. The Joint Application Development (JAD) method is the most widely adopted of the techniques for integrating different stakeholder perspectives into requirements, design, and architecture deliberations [47]. Other techniques include the ETHICS method [40] and varying forms of participatory design (PD) from the Scandinavian school [45]. What these techniques have in common is that they focus on inclusiveness in design. They differ, however, in their underlying ethos. While JAD seeks inclusiveness as the most effective means for building better systems, ETHICS and PD are primarily concerned with the democratic ideal, which holds that the people whose work is impacted by the introduction of new technology have a right to affect the form that these technological interventions will take.

220

or software prototypes to help envision the form that a given system architecture will take [2, 35].

a wide variety of stakeholders, including the system architect with the authority to change the architectural diagram, as well as key decision makers, system developers, and prospective system users. Also, ARID’s use of an exam to motivate focused attention on the architectural representation and other specification materials is a ‘heavyweight’ approach in that it prescribes an activity that may seem burdensome or contrived to people already busy with their normal day-to-day activities.

Studies in computer-supported cooperative work (CSCW) have suggested that objects and artifacts, such as UML diagrams, play a central role in facilitating communication and coordination by providing a focal point for questions and grounding discussions with a shared frame of reference [29, 36]. Our approach involves the use of support tools for aiding the collaborative/ cooperative effort in architecture design and evaluation. The use of objects to make explicit a software architecture that is otherwise intangible may be especially important for helping to create a shared understanding or mental model of system entities, attributes, behaviors, and relations. Some reported architecture evaluation cases suggest that these shared representations are the only way to establish a common model and vocabulary for thinking about complex system architectures [34].

3. DISTRIBUTED-COLLABORATIVE EVALUATION Systems development is an increasingly global activity [18, 28, 39]. Designers, developers, and project stakeholders are distributed both geographically and throughout organizations that are often heterogeneous in their structure, culture, and core objectives. Software architecture validation is particularly problematic in the case of distributed development projects [21]. One of the most significant problems involves supporting the coordination required for a distributed team to achieve a shared understanding of the purposes, structures, and behaviors of a complex system architecture as it emerges from collaboration between team members [22].

Other research provides some evidence for the efficacy of using groupware and other collaboration support tools to enhance inclusive requirements engineering and design. In one study, an experiment was carried out to compare the quality of architecture assessment scenarios generated from face-to-face versus groupware supported iterations between designers (graduate student subjects) [4]. Though the potential to generalize from their results is limited, their experiment did suggest that the quality of the architectural assessment products constructed through groupware was as good, if not better, than that produced through face-to-face interaction.

As complex systems development becomes an increasingly global activity, it becomes important to understand how distributed and remote architectural assessment methods can be used to support quality assurance in requirements gathering and architectural design. The primary driver behind efforts to support remote evaluation is to reduce the costs associated with these assessments [38]. Distributed evaluations can provide a lower-cost evaluation method than an ATAM or SAAM assessment, both of which require substantial time and resources to bring necessary stakeholders together. Also, depending on how the distributed architectural assessment is applied, it may be possible to encourage more constructive and collaborative participation in the architectural evaluation process, due to stakeholders being able to contribute to assessments whenever and wherever it suits their schedule. In turn, an increased sense of collaboration may promote better stakeholder understanding of the architecture and improve stakeholder confidence of their influence over architectural decisions.

In [37], researchers applied a range of techniques to help evoke creative behaviors from designers, users, and stakeholders in a large-scale, socio-technical air traffic control system. Results from the application of their scenario-centric method highlight the importance of visual representations and other props as aids to more insightful thinking about the possible forms, opportunities, and obstacles to achieving effective system designs. They also found that a lack of time was among the most important barriers to innovative thinking in early requirements analysis and system design. These findings highlight the need for architecture requirements support tools that can leverage the visualization power of modern technology to deliver more effective architectural representations while simultaneously giving project stakeholders a persistent forum to express and collaborate on their ideas.

A critical success factor in any team-based architectural assessment project is the degree and quality of communication between those involved in evaluation activities. One of the problems with early architecture evaluation, design, and requirements engineering, is providing stakeholders with an adequate representation of what it is they are being called upon to assess. Reflective practice requires an object, a preliminary visual representation, to help developers and stakeholders envision how a technology will function internally, and how this functionality will fit into the activities it is being designed to support. In software architecture assessment, providing this representative object presents significant problems because software systems are becoming increasingly complex, and their structure and behavior is relatively invisible compared to a prototype of a physical artifact, such as a hand tool [10]. A number of devices have been proposed to help address the problem visually representing an early evolving system design. These solutions include the Unified Modeling Language (UML) architectural diagramming method [42], “rich pictures” to help understand the social and organizational consequences of a proposed design [15], and paper

In addition to the use of common representations, different kinds of facilitated group techniques have been proposed to enhance collaborative systems architecture and requirements gathering [20]. The Joint Application Development (JAD) method is the most widely adopted of the techniques for integrating different stakeholder perspectives into requirements, design, and architecture deliberations [47]. Other techniques include the ETHICS method [40] and varying forms of participatory design (PD) from the Scandinavian school [45]. What these techniques have in common is that they focus on inclusiveness in design. They differ however, in their underlying ethos. While JAD seeks inclusiveness as the most effective means for building better systems, ETHICS and PD are primarily concerned with the democratic ideal which suggests that the people whose work is impacted by the introduction of new technology have a right to affect the form that these technological interventions will take.
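The scenario-prioritization step shared by the scenario-based methods discussed above can be sketched in a few lines. The quality-attribute weights, scenario names, and attribute assignments below are illustrative assumptions, not data from the paper; the point is simply that stakeholders' attribute priorities induce a ranking over candidate scenarios.

```python
# Sketch of SAAM-style scenario ranking: stakeholders weight quality
# attributes, and each scenario is scored by the summed weight of the
# attributes it exercises. All names and weights are hypothetical.

def rank_scenarios(scenarios, attribute_weights):
    """Return scenarios sorted from highest to lowest priority score."""
    def score(scenario):
        return sum(attribute_weights.get(attr, 0.0)
                   for attr in scenario["attributes"])
    return sorted(scenarios, key=score, reverse=True)

# Hypothetical stakeholder-agreed attribute priorities (sum to 1.0).
weights = {"reliability": 0.4, "security": 0.3,
           "usability": 0.2, "performance": 0.1}

# Hypothetical scenarios elicited during a walkthrough.
scenarios = [
    {"name": "Sensor data lost during transmission",
     "attributes": ["reliability"]},
    {"name": "Mechanic queries vehicle history in the field",
     "attributes": ["usability", "performance"]},
    {"name": "Unauthorized access to fleet status",
     "attributes": ["security", "reliability"]},
]

ranked = rank_scenarios(scenarios, weights)
for s in ranked:
    print(s["name"])
```

The architecture would then be evaluated against the top-ranked scenarios first, concentrating scarce review effort where stakeholder priorities are highest.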


Though the use of scenarios is increasingly common in architecture evaluation, there are currently few guidelines to help with applying scenario-based assessment to a distributed environment where stakeholders are geographically dispersed and cannot simultaneously commit time to participate in elaborate virtual meetings. A number of issues have been identified with the use of collaborative systems to support social processes such as design reviews. These issues include acknowledged problems such as the need for a critical mass of content and users to motivate others to use these systems [24] and the difficulty encountered when trying to support subtle, underlying social processes with technologies that are still relatively immature in relation to the human requirements they attempt to meet [1]. One of our proposals for addressing these challenges is to couple relatively 'lightweight', familiar tools, such as a discussion forum, with an advanced, online architecture browser.

4. ASSESSING ARCHITECTURE VALUE
The merits of scenario-based evaluation are well documented [11, 12, 14]. Unfortunately, it can be difficult in the early stages of design to obtain a consensus on effective and relevant scenarios of use. While scenario-based tools exist that can be used for the validation and verification of architecture value, these tools typically focus on the later stages of system design, after a more formal specification of the design exists. One notable exception is the System Requirements Analyzer (SRA) [23]. Given a scenario, the SRA makes it possible to validate the non-functional requirements of a system early in its design. However, the key to the accuracy of the SRA lies in its inputs: the SRA relies on the acquisition of scenarios and a probability table that embodies factors determining the uncertainty of the design. Without this information, the SRA loses all of its value as an evaluation tool. The best way to gather this information early remains a challenge for future research, but it is clear that considerable input must come from domain experts.

Identifying benefits, costs, and obstacles is another useful technique for measuring architecture value. Moreover, being able to demonstrate benefit can play an important role when justifying the adoption of new technology [32]. However, the collection and analysis of such measures can also be difficult. One case study demonstrated how cost-benefit analysis is often based more on intuition and faith than science [25]. Scenario-based evaluation has been shown to help when it comes to identifying costs, benefits, and obstacles in the field [27]. However, this work also suggests that a substantial financial cost is associated with performing this type of evaluation. As a result, the question remains: what is the best way to gather the information needed to evaluate an architecture early in the process, before the requirements have been formalized or the system has been deployed?

The approach described here includes an informal process of walkthroughs, using the collaborative architecture browser, as a means of gathering these crucial inputs, which include scenarios, costs, benefits, and obstacles. Using the outputs of these walkthroughs, the overall utility of an architecture can be determined and linked to specific scenarios, components, and actors. Maximizing this utility at several different levels of granularity can lead to discoveries about how to better utilize resources and enhance the overall design. In addition, this type of analysis can be used to build a justification for the adoption of the designed system.

Ultimately, the outputs gathered from these informal walkthroughs can be used as inputs to a more formal evaluation process such as the SRA, and can lead to a formal cost-benefit analysis that is based on more than just intuition. In addition, as the walkthroughs progress and the outputs become more detailed, the nature of the formal evaluation can adapt, creating an iterative and evolving evaluation process.

5. STUDY SETTING
The United States Marine Corps Program Manager, Light Armored Vehicles (PM LAV) Sense & Respond (S&R) initiative is a process and systems project concerned with enabling end-to-end information integration throughout the organization. The S&R architecture reaches from sensors onboard the fleet of LAVs to enterprise decision support systems in the continental United States, with many devices, applications, repositories of information, and human-computer interactions in between. The objective of S&R is to maximize the operational readiness and capability of the LAV fleet through enhanced visibility into vehicle and vehicle component health and performance.

Total asset visibility, the ability of the organization to know the location, health, and activity of every vehicle in the fleet, is central to the objective of S&R. Light armored vehicles (LAVs) in the Marine Corps fleet are complex machines that, though durable, operate in extreme conditions and are subject to environmental stresses far exceeding those of civilian vehicles and machinery. Because their mission puts them in harm's way, it is critical that these vehicles be in top condition when operating in the field, as breakdowns can result in catastrophic losses to people, equipment, and mission objectives. With S&R, vehicles operate within a network of information technology, people, and processes all acting to ensure that potential problems are detected before they become manifest. S&R is also concerned with ensuring that chronic maintenance problems become the focus of component re-design projects, which typically involve engineers at the PM LAV working in concert with component suppliers (OEMs).

S&R architectures are representative of very large scale application integration. Many different components and subsystems work together, including sensors, transaction systems, data repositories, communications technology, decision support systems, and intelligent systems. These applications run on a range of devices including telematics control units (TCUs) onboard the vehicle, personal maintenance devices used by mechanics in the field, support servers set up in the theatre of operations, and enterprise systems residing in the United States. Evaluating the S&R architecture presents significant challenges to existing architectural assessment methods. The number and range of different components, applications, and human-computer interactions means that evaluation methods need to be flexible, cost efficient, and usable. Ideally, a single method could be employed to reduce the training costs associated with applying more formal evaluation methods. The method should be able to account for metrics that differ across different components and subsystems but provide comparable results to facilitate aggregate analysis of system weak spots and potential points of failure. The scenario walkthrough and inspection method (SWIM) we are using to evaluate the PM LAV's S&R architecture was designed specifically for the evaluation of broad, complex, and heterogeneous system architectures.
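The kind of input that walkthroughs feed into a tool like the SRA can be illustrated with a toy expected-utility calculation. The function below is a sketch of the underlying idea only, not the SRA's actual interface, and the scenario names, utility values, and probability table are hypothetical.

```python
# Illustrative sketch: combine walkthrough-elicited scenario utilities
# with a probability table capturing design uncertainty to produce an
# expected utility figure for the architecture. All values are made up.

def expected_utility(scenario_utilities, success_probabilities):
    """Weight each scenario's estimated utility by the probability
    that the design realizes that scenario as specified."""
    return sum(scenario_utilities[name] * success_probabilities[name]
               for name in scenario_utilities)

# Hypothetical walkthrough outputs: utility on an arbitrary 0-10 scale.
utilities = {"detect_fault_onboard": 9.0,
             "forward_health_data": 7.0,
             "schedule_depot_repair": 5.0}

# Hypothetical probability table expressing uncertainty about whether
# the current design supports each scenario.
probabilities = {"detect_fault_onboard": 0.8,
                 "forward_health_data": 0.6,
                 "schedule_depot_repair": 0.9}

# 9.0*0.8 + 7.0*0.6 + 5.0*0.9, roughly 15.9 on this arbitrary scale
print(expected_utility(utilities, probabilities))
```

Re-running such a calculation as walkthroughs refine the utilities and shrink the uncertainties is one way the informal process could feed an iterative, increasingly formal evaluation.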

6. THE CAB We designed and built a Collaborative Architecture Browser (CAB) to support early requirements engineering and architecture evaluation on the Sense & Respond (S&R) project. This architecture browser serves as a site for tracking decisions made throughout the design process concerning rationale for further inquiry and research. In addition, the CAB has allowed for the progressive formalization of the design process. The CAB also serves as a focal point for architecture evaluation walkthroughs as described later in this document. The CAB was built using an open source content management system (Drupal, www.drupal.org) as the shell. Drupal offers an open-source, highly user-configurable environment, which is compatible with standard infrastructure such as the Apache web server. This enabled us to quickly create an environment to support distributed architectural design and evaluation. Drupal also provides a number of features to aid our objectives, such as web access logs generated by Apache, user registration and account tracking, and integrated support for the creation of discussion forums and online surveys.

Figure 1 – Initial Sense & Respond Architecture Diagram Though it is not standard practice to use swimlanes on component diagrams in UML, they allowed us to segment the architecture into the four sub-domains of the Sense & Respond project: The Platform represents elements of the architecture on-board the LAV; Near Platform reflects elements of the architecture in the physical vicinity of an LAV; Theater includes elements of the architecture deployed in the same geographic area as the LAV; and Enterprise incorporates elements of the architecture’s ‘back office’ functionality.

The CAB is a ‘lightweight’ environment in that it uses relatively simple, commercial-of-the-shelf (COTS) and open source/freeware components adapted and configured in a relatively short period of time (three weeks) with limited development resources. We customized the shell to focus collaboration and discussions on an interactive architecture diagram created using COTS tools including Microsoft Visio, Adobe Photoshop, and Macromedia Flash. The creation of this UML component diagram representing the different swimlanes, nodes, components, actors, and relationships was the first step in the development of the CAB.

The UML component diagram was created using basic Flash programming to provide interactive, drill-down capabilities to the architecture representation. Enabling participants to drill down into the individual components of the architecture was critical, as it certainly facilitates decomposition and reduces the high level of complexity that such a large-scale integrated system presents. Conversely, the surface-level diagram supports abstraction and allows for effectively reducing the overwhelming contextual details of the components by depicting a holistic view of the overall system.

The initial diagram was constructed in a three-hour facilitated session during a two-day project stakeholder meeting early in the architecting process. This approach to using an initial, rough representation of the evolving architecture was inspired by the Architecture Reconstruction Method (ARM) [26], a phased approach to architectural reverse engineering. The ARM uses a rough diagram created from various organizational resources when system documentation and other formal specifications are as-yet unavailable. We extended this method by employing three iterative phases: (1) construction of an initial, rough diagram, (2) gathering feedback through walkthroughs, and (3) diagram revision. The initial diagram is shown in the figure below.

The Visio UML diagram was converted to a set of JPEG files, which were then overlaid with Flash to detect mouse events. When users mouse over a navigable swimlane, actor, node or component, the object changed color to suggest navigability. Clicking on the diagram object allows the user to navigate to the next level of the representation where they can either continue to browse, or start to engage in a discussion of the diagram object. The figure above mirrors the top-layer swimlane navigation panel. The second layer of the diagram shows a more detailed view of a selected swimlane (see the figure below). Individual nodes can be highlighted and selected to drill down further.

The purpose of the diagram was to enable engineers, subject matter experts, and other stakeholders with domain and system development expertise to assess and refine the evolving rough architectural diagram without weekly meetings or collocation. In addressing the role of complexity and how to reduce it, we support both decomposition and abstraction: stakeholders provide insight on individual components of the diagram, while the architecture itself remains an explicit representation of the system in its entirety. Thus, the browser allows for decomposition while making explicit a holistic treatment of the system.
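One way to picture this combination of decomposition and a holistic view is stakeholder feedback keyed to individual components of an always-visible whole. This is a minimal sketch under assumed names; the CAB's actual commenting lived in its web environment, not in code like this:

```python
from collections import defaultdict

# Holistic view: the full set of components stays visible at all times.
architecture = ["Sensors", "Message Bus", "Decision Support"]

# Decomposition: stakeholder feedback is gathered per component.
comments = defaultdict(list)

def add_comment(component, author, text):
    """Attach a stakeholder comment to one component of the architecture."""
    if component not in architecture:
        raise ValueError(f"unknown component: {component}")
    comments[component].append((author, text))

add_comment("Message Bus", "SME-1", "Throughput requirement is unspecified.")
print(len(comments["Message Bus"]))  # prints 1
```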

Figure 2 – Second Layer, Node Selection

The third, terminal diagram layer (see figure below) provides a tabular interface showing basic information about the diagram object. This allows stakeholders to view a rough approximation of the functionality of a specific part of the architecture. Stakeholders who possess subject matter expertise on a given diagram object can add comments using the discussion forum to suggest changes to the information presented.

Figure 3 – Third Layer, Information Display

In addition to these core components, the CAB includes other features to support collaboration. These include a chat interface, an integrated glossary of commonly used acronyms (critical in government and Department of Defense projects), an area for file sharing, and discussion forums for topics not directly related to the architecture representation, including scenarios, metrics, important meetings and events, and a simulation being developed as part of the evaluation project. The chat interface and the discussion forums are particularly critical to our progressive formalization approach. Specifically, these two communication tools allow stakeholders to document their progress, discuss modifications, and discuss the rationale behind decisions in real time. Researchers are often not privy to this type of unstructured communication, which is essential to solution development. For researchers, these tools are especially useful in formalizing what is otherwise unstructured information, as well as in facilitating movement through the increasingly detailed steps of the design process that might otherwise go undocumented for fear of slowing down design activity.

7. CAB WALKTHROUGH EVALUATIONS

The collaborative architecture browser (CAB) was used as the basis for distributed evaluation walkthroughs with S&R project stakeholders. As stated earlier, walkthroughs are an especially useful technique for disentangling and resolving the uncertainty associated with intangible system specifications. To make the most of each walkthrough, we highlighted different features of the CAB, reviewed the S&R Logistics architecture, and elicited scenarios of architecture use for different architectural components, as well as possible metrics that might be used to assess the architecture. The walkthroughs were telephone mediated, incorporating semi-structured interview questions. Most walkthroughs lasted between 30 minutes and one hour. Both the interviewer and the walkthrough participant were online using the CAB during the dialogue, which allowed each to understand the context of what the other was saying at any given point. Study participants were recruited via e-mail. Informed consent was administered at the beginning of each walkthrough. Forty participants were contacted for interviews; data were collected from 15 participants representing a variety of different stakeholders involved with the project.

Questions for the semi-structured interview were adapted from a previous study using scenario-based evaluation (SBE) [27]. This method involves eliciting information not only about the task performed, but also about the actors (roles of the people involved), the task goal, the system components and features used in the scenario, claims about how well or poorly the system supports the scenario, and potential obstacles to completing the scenario. In addition to the focus on the architecture itself, open-ended questions were included at the end of the walkthrough to gather information about participants' perceptions of the CAB and their intentions to adopt the CAB as a project tool. We were especially interested in identifying benefits of, and obstacles to, adoption and use. In this part of the walkthrough, participants were also asked to describe scenarios where they would use the browser for project activities. As part of the CAB scenario discussion, participants were asked to suggest additional features or tools they would like to see added to the browser.

8. RESULTS AND LESSONS LEARNED

The evaluation of a system architecture requires understanding "…the structure or structures of the system, which comprise software components, the externally visible properties of those components, and the relationships among them" [6]. This understanding is typically expressed and discussed using design documentation, often a detailed schema represented in a Unified Modeling Language (UML) or Architecture Description Language (ADL) diagram. In the case reported here, we attempted to support this process by embedding UML diagrams in a collaborative environment. We identified a number of potential benefits of the approach, and some barriers to be addressed in future research and practical work.

8.1 Browser and Walkthrough Benefits

This section describes positive aspects of the CAB and the walkthrough method as identified by walkthrough participants.

8.1.1 Low Overhead

We were able to quickly create the collaborative architecture browser and make the Sense & Respond architecture representation available for distributed-collaborative review. Only three weeks elapsed between creation of the first architectural representation (the UML component diagram) and the first telephone walkthrough. The low-cost, lightweight environment reported here used open source software tools and infrastructure running on a relatively low-end web server to provide an online collaborative environment. This affordable alternative stands in direct contrast to sophisticated, three-dimensional, immersive design review environments such as those described in [17]. Such tools are powerful, but expensive to acquire, difficult to configure, and demanding of significant physical computing and communications infrastructure.

8.1.2 Support for Dynamic Development

A central goal motivating the creation of the PM LAV S&R architecture browser was to capture design rationale as it evolved through stakeholder collaboration. Design rationale records not only the design decisions that were made but also the alternatives considered and why a particular alternative was selected for implementation. Early prototype architectures might make compromises because of cost or timeliness, but we wanted to be able to revisit these compromises in subsequent versions of the prototype, and especially when the results of early versions and trials were used as the basis for specification of a production version of the architecture. The CAB enabled changes and design rationale to be captured. One of the advantages of the CAB, when coupled with structured walkthroughs, was its support for iterative assessment as the architecture evolved in response both to walkthroughs and to other project events such as on-site meetings of the different system stakeholders. For example, if all-hands project meetings were held only once a month, designers would have to refer to the static architecture agreed upon at the last meeting for the duration of that month. One walkthrough participant explained, "We have many participants that go to lots of meetings and make only incremental progress, because they need to rehash everything. If they have a tool to cover previous decisions, they can concentrate on the gaps." With the architecture browser we were able to begin assessing architectural changes as soon as they were suggested and to include them immediately in an updated online architectural model.

Most of the changes made to the architectural representation were suggested directly through walkthroughs with project stakeholders. For example, walkthrough participants frequently identified when changes had made specific components in the diagram obsolete; that is, some of the capabilities initially expected to be performed by one component were now expected to be supported by other components in the architecture. Such decisions were often made locally, in meetings among a few stakeholders. Using the CAB to capture these changes allowed all stakeholders to be privy to them immediately. In one walkthrough, a stakeholder explained, "When collaborating via e-mail only for these projects, only certain people are privy to that; whereas with [CAB], more people are privy to that information." As early architecture research and development progresses, the representation is adapted to reflect the technical, social, and economic forces driving its design.

8.1.3 Supporting a Common Mental Model

Many stakeholders in the project, especially government contractors, have a specific development task for a particular component in the architecture. It is easier for them to focus on the specific requirements of their task than to consider the greater goal of the entire architecture. One walkthrough participant remarked, "It's nice to see how you have been thinking about these things. My view has been limited." Another participant said, "It's nice to know if things change, so I know I need to change my thinking. It's nice if there's a common place to get specs and up-to-date information." Maintaining a shared mental model is important even for those developing very specific components within the architecture, because of the interdependence of the architecture's various elements. One walkthrough participant remarked that his current component design would not support direct communication between his component and components being developed by other stakeholder groups. He suggested that such direct communication would likely be necessary in the future, and that current work needs to facilitate future modifications as quickly and as easily as possible. Viewing and discussing the broad view, as represented in the CAB, allows stakeholders such as this study participant to consider system requirements beyond their own narrow development task, as well as future requirements across different groups.

Supporting a shared mental model of the evolving architecture and capturing the discussions from walkthroughs provides certain efficiencies for the project. During a walkthrough, one project stakeholder remarked, "Being able to drill down into and define the items in the diagram and be able to easily identify and explain them to outsiders is great; we move so fast, we don't write those things down." By documenting change, the architecture browser allows collaborators to stay up to date and focus on emerging requirements, rather than on decisions made previously, and to recover the rationale needed to support later decisions. Another walkthrough participant mentioned, "[CAB] is important for keeping the big picture in place… It's nice to know if things change, so I know I need to change my thinking."

8.1.4 Supporting Communication

One important benefit gained from the walkthroughs was help coordinating details of the evolving architecture, including project components and terminology such as acronyms. One stakeholder said, "It'll be nice to direct people who call me with specific questions to a site that defines this stuff rather than answering each phone call and e-mail." This issue relates, to some extent, to the usability heuristic that system designers should strive to "speak the users' language" [41] by mapping recognized domain concepts to components of the system being designed. We found this to be a particularly important and problematic issue when projects involve a variety of specialized stakeholder groups, each with its own domain 'dialect'. For example, some members of the project team alerted us that the titles and corresponding acronyms for certain project components were being changed. Other walkthrough participants were unaware of this development and used the new acronym to refer to a completely different element of the architecture. Integration of a project glossary into the CAB helped to establish a common language for stakeholders from the military, other government agencies, industry, and academia. In this way, the architecture browser supports development of a shared mental model and vocabulary of the evolving system across different stakeholder groups. The diagram presents a shared perspective of the architecture that all project members can use to ground their understanding, instead of each assuming their own possibly limited or even incorrect understanding of the different architecture components and how they fit together.

The browser also provided an efficient method for disseminating project information. For example, one walkthrough participant stated, "When collaborating via e-mail, only certain people are privy to that information; whereas with a central site more people are privy to it."

8.1.5 Increasing Architecture Clarity

An unanticipated advantage of the walkthroughs was the extent to which the walkthrough process itself helped both the research administrators and project stakeholders identify inconsistencies and omissions in the architecture. For example, while talking through an architecture use scenario we noticed an inconsistency between the architecture and one of the key scenarios the architecture was designed to support. Specifically, the scenario suggested that communication between two components was direct, whereas the architectural representation suggested this communication was mediated by another component. Conducting walkthroughs forced us to be specific about our understanding of the architecture and to express this understanding to walkthrough participants with clarity. This imposed discipline has helped us become contributors to the project's overall coherence.

8.2 Barriers to Browser and Walkthrough Success

In this section we describe some of the most significant barriers to collaborative architecture evaluation encountered in the walkthroughs.

8.2.1 Eliciting Measurable Contributions

The major objective of architecture evaluation is to identify errors of both omission and commission, and to address problems arising from a lack of shared understanding between project stakeholders. In the Sense & Respond project we also sought to identify and measure the contributions or benefits to be obtained from architecture component support for organizational goals and priorities. Walkthrough participants were asked to identify measurable contributions from different components in the architecture, as well as from the CAB itself as a tool to support architecture evaluation and requirements gathering. Consistent with some previous findings [27], participants found specific contributions difficult to identify and articulate. Formal architecture metrics were especially difficult for participants to describe. Most stakeholders indicated that they found it difficult to comment on measurable contributions given the envisioned nature of the architecture. One walkthrough participant exclaimed, "This is new; something like this has not been done yet in a military environment, so there are a lot of unknowns." Contributions from different components, and the metrics used to assess them, were mostly 'fuzzy' tangible or intangible benefits that were difficult to quantify and compare across different architecture alternatives. For example, when asked how a decision support system would contribute to an operator's decisions regarding vehicle repairs, the participant suggested that the metric to assess the tool would be "whether operators made better decisions." While better decisions clearly contribute to the organization, in this form the value gained from the contribution is nearly impossible to assess. Other commonly mentioned contributions included decreasing the military footprint and the time to respond to logistics needs.

8.2.2 Cultural Norms

Another potential disadvantage of the approach reported here is that many of the social and psychological issues identified as barriers to CSCW system success, such as disparities in the benefits obtained by different user roles [24] and relatively awkward support for underlying social processes [1], are introduced into architecture evaluation activities that are already complex. For example, walkthrough participant responses suggested that certain cultural norms prevented them from treating the CAB as a substitute for e-mail and other, more familiar tools they already use for collaboration. One walkthrough participant commented that there are problems communicating and making decisions in projects of this nature even during face-to-face meetings; he was therefore doubtful that this tool would improve the already difficult job laid before them. Another participant discussed the all-one-team ethos on the project, which is at odds with the fact that many of the project participants, especially industry partners, normally consider themselves to be competing for the same business.

8.2.3 Critical Mass

Another socio-cultural barrier to success found through the walkthroughs was the difficulty of achieving a critical mass of both end users and technical content in the CAB. This issue is widely acknowledged as one of the most important determinants of collaborative system success [24]. Walkthrough participants were often concerned with who else on the project was actively using the CAB, whether CAB use was mandatory for project participants, and whether certain key documents would be available only at the architecture browser site. One participant suggested, "Some will just say they don't have time for it."

8.2.4 Production Paradox

Another barrier we identified was a potential production paradox [13] related to the effort, however minimal, required to learn and use the architecture browser. One participant, for example, argued that it would be easier to communicate about the project and the evolving architecture via telephone or e-mail. Another participant mentioned that the browser would be more useful if all stakeholders could support an administrative position responsible for uploading meeting minutes and other material to the site. This participant suggested people are simply too busy to put in the work needed to make the tool more useful, even for themselves. He commented, "Like every other information system, everyone has enough to do; if it does not add value immediately, there's resistance to using it."

9. CONCLUSION

System and software architecture evaluation techniques have advanced significantly within the past decade, but little progress has been made in the development of formalized methods and tools for distributed-collaborative architectural evaluation. We have developed a tool to support such collaboration and have constructed an environment to help capture the full range of architecture metrics that more formalized evaluation methods will require. These include tools for both direct (surveys, forums) and indirect (web server access logs, account tracking) evaluation. With the CAB and the walkthrough method, we have found that it is possible to evolve an early, draft architectural diagram created from a single design stakeholder meeting into a more accurate representation by gathering comments and suggestions made by subject matter experts over time. In addition, we have found that it is possible to promote this evolutionary process by administering hour-long telephone walkthroughs of the collaborative environment. These walkthroughs are successful in advancing the state of the architecture representation because the CAB provides a holistic view of a complex, very-large scale application integration project while at the same time allowing users to drill down into specific areas of subject matter expertise. Through the interactive, visual architecture representation, project stakeholders are more likely to identify problems and suggest changes to specific diagram nodes and components during telephone walkthroughs. The CAB has become a reference point for stakeholders to stay up to date on the latest status of the architecture, its terminology, and important project developments.

Results of this study suggest the value of providing a distributed-collaborative environment for evaluation and requirements gathering early in the process of architecture design. Including stakeholders in an ongoing dialogue about a naturally evolving architecture, before more formal architectural documentation has been developed, engages stakeholders in an important phase of the architecture development lifecycle and highlights areas of architecture project risk before specific details of architecture nodes and components are fully specified.

10. ACKNOWLEDGMENTS

We wish to thank the United States Marine Corps, the Marine Corps Research University, and especially the Program Manager, Light-Armored Vehicles, for supporting this research.

11. REFERENCES

1. Ackerman, M. The intellectual challenge of CSCW: The gap between social requirements and technical flexibility. Human-Computer Interaction, 15, 2000, 179-203.
2. Andriole, S.J. Fast, cheap requirements prototype, or else! IEEE Software, 11 (2), 1994, 85.
3. Babar, M.A. and Gorton, I. Comparison of scenario-based software architecture evaluation methods. In First Asia-Pacific Workshop on Software Architectures and Component Technologies, 2004, 600-607.
4. Babar, M.A., Kitchenham, B., Zhu, L. and Jeffery, R. An exploratory study of groupware support for distributed software architecture evaluation process. In 11th Asia-Pacific Software Engineering Conference, 2004, 222-229.
5. Babar, M.A., Zhu, L. and Jeffery, R. A framework for classifying and comparing software architecture evaluation methods. In Australian Software Engineering Conference, Melbourne, Australia, 2004, 309.
6. Bass, L., Clements, P. and Kazman, R. Software Architecture in Practice. Addison-Wesley, Reading, MA, 1998.
7. Beck, K. and Andres, C. Extreme Programming Explained: Embrace Change. Addison-Wesley, Boston, MA, 2005.
8. Boehm, B.W. Software Engineering Economics. Prentice-Hall, Englewood Cliffs, NJ, 1981.
9. Boehm, B.W. A spiral model of software development and enhancement. Computer, 21 (5), 1988, 61.
10. Brooks, F.P. No silver bullet: Essence and accidents of software engineering. IEEE Computer, 20 (4), 1987, 10-19.
11. Carroll, J.M. Making Use: Scenario-Based Design of Human-Computer Interactions. MIT Press, Cambridge, MA, 2000.
12. Carroll, J.M. Scenario-Based Design: Envisioning Work and Technology in System Development. Wiley, New York, 1995.
13. Carroll, J.M. and Rosson, M.B. Paradox of the active user. In Carroll, J.M. ed. Interfacing Thought: Cognitive Aspects of Human-Computer Interaction, MIT Press, Cambridge, MA, 1987, 80-111.
14. Carroll, J.M., Rosson, M.B., Chin, G. and Koenemann, J. Requirements development in scenario-based design. IEEE Transactions on Software Engineering, 24, 1998, 1156-1170.
15. Checkland, P. Systems Thinking, Systems Practice. J. Wiley, Chichester, UK, 1981.
16. Clements, P., Kazman, R. and Klein, M. Evaluating Software Architectures: Methods and Case Studies. Addison-Wesley, Boston, MA, 2001.
17. Daily, M., Howard, M., Jerald, J., Lee, C., Martin, K., McInnes, D. and Tinker, P. Distributed design review in virtual environments. In Proceedings of the Third International Conference on Collaborative Virtual Environments, San Francisco, 2000, 57-63.
18. Damian, D., Lanubile, F. and Oppenheimer, H.L. Addressing the challenges of software industry globalization: the workshop on global software development. In The Workshop on Global Software Development, ICSE 2003, Portland, OR, 2003, 793-794.
19. Dobrica, L. and Niemela, E. A survey on software architecture analysis methods. IEEE Transactions on Software Engineering, 28 (7), 2002, 638.
20. Duggan, E.W. Generating systems requirements with facilitated group techniques. Human-Computer Interaction, 18 (4), 2003, 373-394.
21. Ebert, C., Parro, C.H., Suttels, R. and Kolarczyk, H. Improving validation activities in a global software development. In Proceedings of the 23rd International Conference on Software Engineering, Toronto, 2001, 545-554.
22. Espinosa, J.A. and Carmel, E. The effect of time separation on coordination costs in global software teams: A dyad model. In Proceedings of the 37th Annual Hawaii International Conference on System Sciences (HICSS'04), Waikoloa, HI, 2004, IEEE Computer Society, Washington, DC.
23. Gregoriades, A. and Sutcliffe, A. Scenario-based assessment of nonfunctional requirements. IEEE Transactions on Software Engineering, 31 (5), 2005, 392-408.
24. Grudin, J. Groupware and social dynamics: Eight challenges for developers. Communications of the ACM, 37 (1), 1994, 92-105.
25. Grudin, J. Return on investment and organizational adoption. In Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work, Chicago, IL, 2004, 324-327.
26. Guo, G., Atlee, J. and Kazman, R. A software architecture reconstruction method. In First Working IFIP Conference on Software Architecture (WICSA1), San Antonio, TX, 1999.
27. Haynes, S.R., Purao, S. and Skattebo, A.L. Situating evaluation in scenarios of use. In Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work, Chicago, IL, 2004, ACM Press, 92-101.
28. Herbsleb, J.D. and Moitra, D. Global software development. IEEE Software, 18 (2), 2001, 16.
29. Hindmarsh, J., Fraser, M., Heath, C., Benford, S. and Greenhalgh, C. Object-focused interaction in collaborative virtual environments. ACM Transactions on Computer-Human Interaction, 7 (4), 2000, 477-509.
30. Kazman, R., Bass, L., Abowd, G. and Webb, M. SAAM: A method for analyzing the properties of software architectures. 1994, 81-90.
31. Land, R. Improving quality attributes of a complex system through architectural analysis: a case study. 2002, 167-174.
32. Landauer, T.K. The Trouble with Computers: Usefulness, Usability, and Productivity. MIT Press, Cambridge, MA, 1995.
33. Lau, I.Y.-M., Chiu, C.-Y. and Lee, S.-L. Communication and shared reality: Implications for the psychological foundations of culture. Social Cognition, 19 (3), 2001, 350-371.
34. Lehto, J.A. and Marttiin, P. Experiences in system architecture evaluation: A communication view for architectural design. In Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS'05), 2005, 312c.
35. Lichter, H., Schneider-Hufschmidt, M. and Zullighoven, H. Prototyping in industrial software projects: bridging the gap between theory and practice. IEEE Transactions on Software Engineering, 20 (11), 1994, 825.
36. Luff, P., Heath, C., Kuzuoka, H., Hindmarsh, J., Yamazaki, K. and Oyama, S. Fractured ecologies: Creating environments for collaboration. Human-Computer Interaction, 18 (1), 2003, 51-84.
37. Maiden, N., Gizikis, A. and Robertson, S. Provoking creativity: imagine what your requirements could be like. IEEE Software, 21 (5), 2004, 68.
38. McFadden, E., Hager, D., Elie, C. and Blackwell, J. Remote usability evaluation: Overview and case studies. International Journal of Human-Computer Interaction, 15 (3), 2002, 498-502.
39. Mockus, A. and Herbsleb, J. Challenges of global software development. In IEEE METRICS, London, England, 2001, 182-184.
40. Mumford, E. Systems Design: Ethical Tools for Ethical Change. Macmillan, Basingstoke, UK, 1996.
41. Nielsen, J. Usability Engineering. Morgan Kaufmann, 1994.
42. Rumbaugh, J., Jacobson, I. and Booch, G. The Unified Modeling Language Reference Manual. Addison-Wesley, Reading, MA, 1999.
43. Schön, D. and Bennett, J. Reflective conversation with materials. In Winograd, T. ed. Bringing Design to Software, Addison-Wesley, 1996, 171-189.
44. Schön, D.A. The Reflective Practitioner: How Professionals Think in Action. Basic Books, New York, 1983.
45. Schuler, D. and Namioka, A. Participatory Design: Perspectives on Systems Design. Lawrence Erlbaum Associates, Hillsdale, NJ, 1993.
46. Scriven, M. The methodology of evaluation. In Tyler, R., Gagne, R. and Scriven, M. eds. Perspectives of Curriculum Evaluation, Rand-McNally, Chicago, 1967, 39-83.
47. Wood, J. and Silver, D. Joint Application Development, 2nd ed. Wiley, New York, 1995.