Regular Paper

Integrating Humans with Software and Systems: Technical Challenges and a Research Agenda

Azad M. Madni*
Intelligent Systems Technology, Inc., 12122 Victoria Avenue, Los Angeles, CA 90066

*E-mail: [email protected]

Received 24 February 2009; accepted 17 April 2009, after one or more revisions. Published online 21 July 2009 in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/sys.20145

ABSTRACT

As systems continue to grow in size and complexity, the integration of humans with software and systems poses an ever-growing challenge. The discipline of human-system integration (HSI) is concerned with addressing this challenge from both a managerial and technical perspective. The latter is the focus of this paper. This paper examines this integration challenge from the perspective of capitalizing on the strengths of humans, software, and systems while, at the same time, being mindful of their respective limitations. It presents four key examples of HSI challenges that go beyond the usual human factors requirements. It presents cognitive engineering as a key enabler of HSI and discusses the suitability of the Incremental Commitment Model for introducing human considerations within the complex systems engineering lifecycle. It concludes with a recommendation of specific research thrusts that can accelerate the maturation and adoption of HSI methods, processes, and tools by the software and systems engineering communities. © 2009 Wiley Periodicals, Inc. Syst Eng 13: 232–245, 2010

Key words: human-system integration; software engineering; systems engineering; cognitive engineering; human performance; Incremental Commitment Model

1. INTRODUCTION

The challenges that humans encounter in trying to operate or work with complex systems are well documented in the literature, dating back to the April 1991 Business Week cover story entitled, "I Can't Work This ?#!!@ Thing." That story focused on the difficulties people typically encounter with consumer products [Pew and Mavor, 2007]. However, the disconnect between people and technology is equally well documented [Parasuraman and Riley, 1997; Chiles, 2001; Hymon, 2009] and is just as apparent in major large-scale system disasters such as Three Mile Island and Chernobyl.

An equally compelling example is that of the Patriot missiles deployed in 2003 in the Iraq war. With this highly automated system, operators were trained to trust the system's software, a necessary design requirement for a heavy missile attack environment [Defense Science Board, 2005]. As it turned out, the missile batteries were operating in an environment with few missiles but many friendly aircraft in the vicinity. Compounding the problem was the fact that the operators were not adequately trained to know that the Patriot radar system was susceptible to recording spurious hits and occasionally issuing false alarms (i.e., identifying friendly aircraft as enemy missiles), without displaying the inherent uncertainty in target identification. Understandably, the operators, being unaware of these limitations, were inclined to trust the system's assessments and missile launch decisions against potentially hostile targets. In fact, these factors were in play in the unfortunate shootdown of a British Tornado and a U.S. Navy F/A-18, for which a Defense Science Board report concluded that "more operator involvement and control in the functioning of a Patriot battery" would be necessary to overcome the system's limitations [Defense Science Board, 2005, p. 2].

Even today, system operators (e.g., pilots, air traffic controllers, power plant operators and maintainers, and military personnel) continue to receive more than their fair share of blame for systemic failures, as noted by Chiles (cited in Defense Science Board, 2005, p. 2):

Too often operators and crews take the blame after a major failure, when in fact the most serious errors took place long before and were the fault of designers or managers whose system would need superhuman performance from mere mortals when things went wrong.

The primary design flaws that Chiles refers to were largely failures in properly coordinating the interactions between people and technology in system development and deployment [Woods and Sarter, 2000]. Simply put, for systems that are designed to work with humans, it is important that systems engineers give adequate consideration to operators and end users.

2. INFUSING HUMAN FACTORS INTO SYSTEMS ENGINEERING

In recent years, the DoD's push to incorporate the human factors discipline into systems engineering [Department of Defense Handbook, 1996] led to the creation of the new multidisciplinary field of Human Systems Integration (HSI). Since the advent of HSI, there has been a variety of views on how human factors and HSI differ. The short answer, in accord with the definition of HSI, is that HSI subsumes human factors along with several other human-related fields such as safety and survivability. This clarification, however, raises the question of how to get human factors professionals to view human-related problems from a systems perspective. To this end, the following definitions are offered. Human factors is the study of the various ways in which humans relate to their external environment, with the aim of improving operational performance while ensuring the safety of job performers throughout a system's lifecycle. Human System Integration is the study and design of the interactions among humans (operators, maintainers, customers, designers, support personnel) and the systems they work with, in a manner that ensures safe, consistent, and efficient relationships between them while avoiding error.

It is important to realize that HSI is more than the sum of the contributions from the disciplines it draws upon. HSI is intended to optimize the joint performance of humans and systems in both normal operations and contingency situations. It advocates a full lifecycle view of the integrated human-machine system during system definition, development, and deployment. As importantly, HSI considers all human and system roles during the system lifecycle when applying human-system performance assessment criteria and methods. The HSI applicability regime varies with the size and complexity of the sociotechnical system (e.g., a fighter aircraft versus an air traffic control system).


HSI is methodology-neutral, architecture-agnostic, and concerned with satisfying stakeholder needs, especially those of operational stakeholders. From an HSI perspective, satisfying operational stakeholder needs means that:

• The right tradeoffs have been made between HSI considerations and other system considerations such as system security, dependability, survivability, and mission assurance.
• The joint human-machine system exhibits desired and predictable behavior.
• The software and system part of the human-machine system is perceived as having high usability and utility by the humans operating (or working with) the system.
• The design of the integrated human-machine system reduces the likelihood of human error, especially when adapting to new conditions or responding to contingencies.
• The human-system integration strategy takes into account technology maturity levels and the attendant uncertainties.
• The integrated human-machine system operates within an acceptable risk envelope by adapting its functionality and operational configuration, as and when needed, to stay within that envelope.
• The integrated human-machine system satisfies mission effectiveness requirements while achieving an acceptable ROI for the various stakeholders.

Today, most human factors and ergonomics courses do not address the full range of requirements that HSI entails. When they do, they tend to address the human side of the equation, not the performance, effectiveness, and dependability goals of the integrated human-machine system. Similarly, most systems engineering courses do not go into the depth required to address HSI challenges and the integration of humans into software and systems. Not surprisingly, there is a shortage of qualified HSI practitioners today. Remedying this situation requires attacking the problem on three fronts. The challenge for HSI practitioners is to mature and consolidate HSI practices for "prime-time" use [Madni, 1988b, 1988c]. Systems engineering educators need to incorporate HSI concepts, methods, and tools into the systems engineering curriculum. And, finally, the software and systems engineering communities need to foster a culture that embraces HSI practices and guidelines as an integral part of the systems engineering lifecycle.

3. ROAD TO THE PRESENT

Human-systems integration (HSI) is a multidisciplinary field comprising human factors engineering, system safety, health hazards, personnel survivability, manpower, personnel, training, and habitability. HSI practice holds that human considerations need to be a high priority in system design and acquisition in order to reduce lifecycle costs and optimize human-system performance [Booher, 2003; Booher, Beaton, and Greene, 2009; Landsburg et al., 2008]. HSI encompasses interdisciplinary technical and management processes for integrating human considerations within and across all software and system elements as part of sound software and systems engineering practices.


This paper is primarily focused on the technical challenges. Heretofore, the pressure to deliver systems within budget and on schedule has resulted in inadequate attention being given to HSI considerations in systems acquisition and systems engineering. Exacerbating the problem is the fact that the human factors community has not done an effective job of communicating the value proposition of HSI to acquisition managers, program managers, and systems engineers. In fact, in 1970, Admiral Hyman Rickover characterized the promulgation of human factors considerations into R&D, engineering, and production in shipbuilding as being "about as useful as teaching your grandmother how to suck an egg."

Since then, three compelling developments have led to a resurgence of interest in human considerations in systems acquisition and engineering. The first was a 1981 General Accounting Office (GAO) report that cited human error as a key contributor to system failure. The second was the highly publicized series of military, industrial, and commercial accidents involving human error in the 1970s and 1980s. The third was the recognition within DoD that manpower costs are a significant driver of total system lifecycle costs. Most recently, an NRC study [National Research Council, 2007] suggested that the Incremental Commitment Model (ICM) is one potentially suitable framework (among others) for addressing HSI concerns during the complex system development lifecycle. The ICM, an outgrowth of spiral development, starts with an approximate solution and then iterates in a risk-mitigated fashion, with each iteration incrementally adding capabilities until the final iteration, which produces the full capability set.

Despite this resurgence of interest in HSI and human error, misconceptions about human performance continue to linger. To begin with, experts in a particular domain (e.g., training) tend to view the solution to a human performance problem as one that can be fully addressed from within their own domain. Thus, a training specialist views a human performance deficiency as a training problem with a training solution; a human factors engineer views the same problem as a man-machine interface design problem; and a manpower analyst views it as a human resource/task allocation problem. In reality, the problem could be some combination of the above, requiring a composite solution. The second common misconception is that integrating HSI with non-HSI acquisition/systems engineering domains is the answer to ensuring that HSI is taken seriously. In reality, the HSI domain is itself quite fragmented and needs to be integrated first. Only after the HSI domain is internally integrated can the impact of human performance on system cost and schedule be assessed and the business case made for HSI. The third common misconception is that the science that informs and guides human performance is mature enough to be operationalized into principles and guidelines for HSI. In reality, this science is relatively young and, compared to the physical sciences, subject to considerable individual variation. The fourth common misconception is that human performance metrics have unique definitions. In reality, some of these metrics can have more than one interpretation.


For example, workload can be defined as the ratio of the total number of tasks to be completed to the time available to complete them, as the number of concurrent cognitive tasks, or as the size of the task stack [Madni, 1988b, 1988c]. In reality, the most effective HSI metrics are those that are tied to context. For military and aerospace missions, context can be characterized by the skill set of the humans involved, mission requirements and constraints, cost and schedule constraints, and the performance parameters of interest. Furthermore, the metrics need to be valid, reliable, and relevant to the mission's unique requirements.
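To make the point that a single metric label can hide several distinct definitions, the sketch below computes three alternative workload measures over a hypothetical task log. The data structure, function names, and numbers are illustrative assumptions, not part of any standard HSI toolset.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    start: float      # minutes into the mission
    duration: float   # minutes of attention required
    cognitive: bool   # True if the task demands cognition (vs. psychomotor)

def workload_as_time_ratio(tasks, time_available):
    """Workload = total task time demanded / time available."""
    return sum(t.duration for t in tasks) / time_available

def workload_as_concurrency(tasks, at_time):
    """Workload = number of concurrent cognitive tasks at a given instant."""
    return sum(1 for t in tasks
               if t.cognitive and t.start <= at_time < t.start + t.duration)

def workload_as_stack_size(tasks, at_time):
    """Workload = size of the pending task stack (started but not finished)."""
    return sum(1 for t in tasks if t.start <= at_time < t.start + t.duration)

# Hypothetical 10-minute task log for a single operator
log = [Task("monitor radar", 0, 10, True),
       Task("radio check", 2, 1, True),
       Task("log entry", 2, 3, False)]

print(workload_as_time_ratio(log, time_available=10))  # 1.4 -> overloaded by this definition
print(workload_as_concurrency(log, at_time=2.5))       # 2 concurrent cognitive tasks
print(workload_as_stack_size(log, at_time=2.5))        # 3 tasks pending
```

The same scenario yields three different workload numbers, which is precisely why metrics must be tied to context before they can inform HSI judgments.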

4. UNDERSTANDING HUMANS FROM AN HSI PERSPECTIVE

Before discussing the integration of humans with software and systems, or the integration of HSI into software and systems engineering, it is necessary to examine human strengths and limitations relative to software and systems and to use that knowledge to determine how best to integrate humans, software, and systems [Madni, 1988a, 1988b, 1988c; Meister, 1989]. With this perspective in mind, we present key findings from the literature that bear on human-system integration and, ultimately, on user acceptance (Table I). In what follows, four exemplar problems that illuminate HSI challenges are presented.

Who Has the Final Say. In several fields (e.g., medical diagnosis, fighter aircraft automation, and power plant control), determining who is in charge and who has the final say is a key HSI problem. For example, the perennial question in medical diagnosis is when the medical community should trust the medical professional and when it should trust the diagnostic aid. Establishing an appropriate bond of trust and confidence between the two is essential. What this means is that there are circumstances when the human should not be allowed to override the system, and there are circumstances when the system should not be allowed to override the human. With respect to medical diagnosis, the key is to apply the combination of the medical professional and the diagnostic aid so that, depending on the circumstances, either the human, the aid, or both are in a position to solve the problem. The literature on human-machine systems offers several examples of the complexities involved in designing decision aiding/performance support systems for cognitively demanding environments. There is ample evidence that a suboptimal solution can produce performance degradation of the overall human-machine system. Some of the key findings in this regard are presented in Table II. An important aspect of such performance degradation is the lack of "fit" between the cognitive demands of the work environment, the designed interventions, and the mental models of humans.

Risk Homeostasis. Wilde [2001] put forth the hypothesis that humans have their own fixed level of acceptable risk. While initially subject to criticism, this hypothesis was borne out in studies in Munich, Germany and in British Columbia, Canada. In the Munich study, half of a fleet of taxicabs was equipped with anti-lock brakes (ABS), while the other half retained conventional brake systems. Testing revealed that the crash rate was the same for both groups. Wilde concluded that this was because drivers of ABS-equipped cabs took more risks, assuming that the ABS would take care of them.


Table I. What We Know about Humans [Madni, 2008]

By the same token, the non-ABS drivers drove more carefully, recognizing that ABS would not be there to help them were they to encounter hazardous situations. Likewise, it has been found that drivers are less careful around bicyclists wearing helmets than around unhelmeted riders. Wilde states that the massive increase in car safety features has had little effect on the overall rate or cost of crashes, and argues that safety campaigns tend to shift rather than reduce risk [Wilde, 2001]. A related study in British Columbia examined a government anti-drunk-driving campaign in 1970. The findings once again seem to support the risk homeostasis hypothesis. In this case, the accident rate associated with driving under the influence was reduced by approximately 18% over a 4-month period; however, accidents attributed to other factors increased 19% during the same period. The explanation offered was that people started engaging in more hazardous actions on the road (i.e., accepting higher risks), leaving their "target risk levels" unchanged.

Assignment of Blame. In 2008, a Metrolink commuter train crashed headlong into a Union Pacific freight locomotive after going through four warning lights. The engineer (i.e., the driver) did not hit the brakes before the L.A. train crash. A teenage train enthusiast later claimed to have received a cell phone text message from the driver a minute before the collision. The possible reasons initially given for the crash included: the engineer was distracted while texting; the engineer was in the midst of an 11½-hour split shift when he ran the first light; radio communication between the engineer and the conductor, who normally call each other to confirm signals the engineer sees, may have broken down; the engineer may have suddenly taken ill; and sun glare may have obscured the engineer's view of the signal. The official Metrolink report stated that Metrolink needs to improve monitoring of its employees, enhance its safety technology, and do more to inform its board members about how the railroad works. The major findings of this report, written by a panel of industry experts, were presented to the Metrolink Board of Directors in December 2008.

Table II. Examples of Human Performance Degradation


The panel noted that Metrolink staff needed to improve their oversight of the businesses that the rail carrier hires to run its trains. The panel also said that Metrolink board members—who have the final say over the agency—need to be better informed about the railroad's operations. In addition, the panel recommended that video cameras be placed in locomotives to monitor train engineers—something employee groups have protested—and called for the agency to move quickly to adopt a GPS-based computer system to track train locations [Hymon, 2009]. So, was the Metrolink train accident a human error, a systemic problem that manifested itself as a human error, or both? The answer is BOTH. The driver was working a split shift; he was tired; he was also multitasking. Humans do not multitask well and will be error-prone in such circumstances. However, the system was also not designed for integration with the human. The system assumed an optimal human, i.e., one who could multitask, one who would not fatigue, and one who was goal-driven or a utility maximizer. Humans are none of these! This was an accident waiting to happen.

Indiscriminate Automation. A decade ago, a blind-side indicator was developed for automobiles. It was intended to show the driver an object in his or her blind spot. However, this device was never approved, because behavioral research suggested that drivers would overuse the indicator and not bother to look back over their shoulder when changing lanes. This would clearly have been an undesirable change in driver behavior. The lesson is that indiscriminate introduction of technology, without regard to desired behavior patterns, can change human behavior, and not necessarily for the better. This kind of analysis is key to avoiding unintended consequences [Madni, 2008; Madni and Jackson, 2009].

The foregoing examples illuminate several key factors. First, in a tightly coupled system, any change to the machine will cause humans to change as well. Such a change could be undesirable in the sense that it could lead to unintended consequences. Second, unwarranted assumptions about the human can lead to tragic accidents [Madni, 2008]. For example, assuming that humans are optimal information processors can lead to dire results, because humans do fatigue and do not multitask well. Third, the role of the human, and the importance of that role in the overall system, are key to architectural paradigm and algorithm selection. Specifically, it is important to determine whether humans are central to system operation or merely an adjunct or enabler that is expected to be replaced by automation in the future.

Fourth, system architects need to focus on combined human-system operation, not on the characteristics of each in isolation; this also means that the focus should be on combined metrics, not individual metrics. And, finally, a change in the operational environment can change how people perceive and compensate for risks. These considerations need to be taken into account especially for contingency situations, which alter risks, the perception of risks, and the compensatory measures that people employ.

5. ROLE OF COGNITIVE ENGINEERING IN HSI AND SYSTEMS ENGINEERING

Operationalizing HSI from a technical perspective requires methods and techniques from the field of cognitive engineering. Cognitive engineering draws on multiple disciplines including human factors engineering, cognitive psychology, human-computer interaction, decision sciences, computer science, and other related fields. The primary motivation for introducing cognitive engineering, as an HSI approach, into the systems engineering lifecycle is to prevent technology-induced design failures by explicitly taking into account human processing characteristics within the context of task performance and the operational environment. This characterization covers all human roles (e.g., operator, tester, system administrator, and maintainer) that come into play in the systems engineering lifecycle.

Technology-induced design failures have, in part, resulted from a lack of understanding of the implications of rapid advances in technology. These advances have resulted in the automation of certain system functions, which has changed the operator's role from that of a controller of relatively simple systems to that of a supervisory controller of highly complex, automated systems. This shift in the role of human operators has placed greater emphasis on their ability to: understand the operation of the complex systems that accomplish tasks under their supervision; access and integrate relevant information rapidly; and monitor and compensate for system failures. Some of the major problems that can arise in systems that are engineered without regard to cognitive considerations are presented in Table III.

The flipside of this equation is just as revealing. Cognitive engineering has certain limitations that prevent its widespread adoption within systems engineering. First, cognitive engineering has historically focused solely on the front-end analysis portion of systems engineering. Second, cognitive engineering is mostly focused on the single operator at a single workstation; cognitive work analysis of a team of operators collaborating on a shared task is still in its nascent stages.

Table III. Problems Arising from Failure To Incorporate Cognitive Engineering Principles


Third, cognitive engineers have yet to make a compelling return-on-investment (ROI) argument to acquisition program managers, despite ample supporting evidence. Nevertheless, there are several applications today, especially within the military, that require the infusion of cognitive engineering into systems engineering. For example, Air and Space Operations Center (AOC) combat operations scenarios provide an excellent platform for highlighting specific gaps in systems engineering that can be filled by introducing cognitive engineering considerations (e.g., criteria, principles, and guidelines) into the systems engineering lifecycle. In light of the foregoing, there is a need for a comprehensive model that spans the end-to-end systems engineering lifecycle process and that embeds cognitive engineering practices, processes, and tools at appropriate points in that lifecycle. Successfully implementing this approach requires the "buy-in" of three different classes of stakeholders: engineering practitioners, end users/operators, and government acquisition managers/industry representatives (Table IV). Today, after continual DoD prodding, the software and systems engineering communities are beginning to focus on how best to incorporate the human component in system design and development.

Misperceptions Linger. Despite the renewed interest in HSI in the software/systems engineering communities, misconceptions linger. The single biggest misconception, still held by many software and systems engineers, is that humans are "suboptimal job performers." This mindset naturally leads system engineers and designers to build systems that shore up or compensate for human shortcomings. With this mindset, it is not surprising that humans are forced to operate systems that are inherently incompatible with the human conceptualization of the system. For example, when systems employ computational methods (e.g., Bayesian belief networks) that tend to be incompatible with the human's conceptualization of the problem domain, the mismatch itself often manifests as apparent human shortcomings. Humans, in general, are not necessarily Bayesian [Kahneman and Tversky, 1982], in that human cognitive processes do not seem to naturally follow the Bayesian philosophy. The fact that humans are not Bayesian has been demonstrated in both laboratory settings and actual applications [Meyer and Baker, 2001].


The studies of Kahneman and Tversky [1982] have shown that experts fail to change or update their estimates in view of new information. Mathematically, the failure to update estimates means that P(A|B) = P(A|C); that is, the probability of A is not altered when the conditioning information about A changes from B to C. This equality would hold in general only if A were independent of any conditioning, i.e., P(A|C) = P(A|B) = P(A). In estimating probabilities, however, it is unlikely that any event would be totally independent of conditions. There are other characteristics of humans that prevent experts from being Bayesian. Some of these characteristics include the inability to grasp the effects of sample size, the frequencies of truly rare events, the meaning of randomness, and the effects of variability. These same failings also contribute to human difficulty in estimating probabilities in general [Kahneman and Tversky, 1982]. The human brain does not follow the axioms (rules) of probability, such as the requirement that all probabilities lie in the [0, 1] interval and that the probabilities of mutually exclusive, collectively exhaustive events sum to 1. Consequently, the probabilities elicited from a human do not represent a true, mathematical probability measure. (A small numerical sketch of Bayesian updating is given after the discussion of methods and tools below.) In short, a design mindset that starts from this erroneous premise not only creates conflict in the way that humans and the system "conceptualize" work and "update their beliefs," but also fails to capitalize on human adaptability, ingenuity, and creativity.

Methods and Tools. A variety of methods and tools are employed today by cognitive engineering and human factors professionals [Madni, Sage, and Madni, 2005]. By far the most widely used tool is Cognitive Task Analysis (CTA). CTA is a collection of methods and techniques for describing, modeling, and measuring the mental activities (i.e., cognition) associated with task performance [Chipman, Schraagen, and Shalin, 2000]. CTA has been used to assess throughput, quality, and the potential for human error in information processing tasks. CTA, as originally developed, focused on individual cognition in task performance [Klinger and Hahn, 2003]. CTA takes a variety of forms. Klein [1993] identified four broad classes of Cognitive Task Analysis methods: questionnaires and interviews, controlled observation, critical incidents, and analytical methods (Table V). These methods vary with respect to how they elicit and represent expert knowledge and how that knowledge contributes to expert performance.
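As a concrete illustration of the Bayesian updating discussed above, the following sketch contrasts a normative Bayesian revision of a fault probability with an "anchored" expert who leaves the estimate unchanged. The numbers and the diagnostic-aid setting are hypothetical.

```python
def bayes_update(prior, likelihood_given_fault, likelihood_given_no_fault):
    """Posterior P(fault | evidence) via Bayes' rule."""
    numerator = prior * likelihood_given_fault
    denominator = numerator + (1 - prior) * likelihood_given_no_fault
    return numerator / denominator

# Hypothetical diagnostic-aid scenario: prior belief that a component is
# faulty, and an alarm that fires with probability 0.9 if the fault is
# present and 0.2 if it is not.
prior = 0.10
posterior = bayes_update(prior, likelihood_given_fault=0.9,
                         likelihood_given_no_fault=0.2)

print(f"Normative posterior after the alarm: {posterior:.2f}")   # ~0.33
print(f"'Anchored' expert estimate (no update): {prior:.2f}")    # stays at 0.10
```

The gap between 0.33 and 0.10 is exactly the P(A|B) = P(A) behavior described above: the expert acts as if the alarm carried no information.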

Table IV. Challenges


Table V. Cognitive Task Analysis Approaches

Central to CTA are the concepts of taskwork and teamwork, which were developed to differentiate between individual and team tasks [Morgan et al., 1986]. Taskwork consists of individuals performing individual tasks, whereas teamwork consists of individuals interacting or coordinating tasks that are important to the goals of a team [Baker, Salas, and Cannon-Bowers, 1998]. With respect to the latter, Bowers, Baker, and Salas [1994] compiled a team task inventory by identifying teamwork behaviors that were subsequently reviewed and refined by Subject-Matter Experts (SMEs). Thereafter, respondents were asked to rate each task (behavior) on multiple factors (e.g., importance to train, task criticality, task frequency, task difficulty, difficulty to train, overall task importance). The results showed that distinguishing teamwork from taskwork is important and that further research was required to study team behaviors such as interaction, coordination, relationships, cooperation, and communication [McIntyre and Salas, 1995]. In the same vein, Dieterly [1988] identified eight dimensions along which tasks can be decomposed. These dimensions were grouped into two categories: task characteristics that are independent of the team concept, and task characteristics that specifically apply to a team context. The focus of the research then became methods for identifying team tasks, measuring team-level concepts, and integrating teamwork behaviors into traditional task analysis methods. It was soon discovered that until these issues were explicitly addressed, there would continue to be a void when analyzing team tasks. This void was confirmed by Bowers, Baker, and Salas [1994], who found a large proportion of unexplained variance when applying task analysis to a team.

Over the last decade, however, cognitive engineering researchers have begun to focus specifically on team CTA, which differs from traditional CTA in that it explicitly addresses teamwork requirements [Baker, Salas, and Cannon-Bowers, 1998].


The starting point for team CTA is capturing the cognitive processes of a team by focusing on the various ways that the team coordinates, acquires an understanding of the different team members, and then synthesizes task elements [Klinger and Hahn, 2003; Bordini, Fisher, and Sierhuis, 2009]. Team CTA today is conducted to assess gaps in team-specific and task-specific competencies. Team-specific competencies typically apply to a particular team and encompass tasks that are performed by the team. Task-specific competencies, on the other hand, apply only to certain tasks. The findings of Cannon-Bowers et al. [1995] suggest 11 knowledge requirements, 8 specific skill dimensions, and 9 attitude requirements for a team. Stevens and Campion's [1994] research appears to corroborate these findings. Today, a variety of methods are used to perform team CTA [Bonaceto and Burns, 2007], including the use of simulators to examine team processes and performance [Weaver et al., 1995; Madni, Sage, and Madni, 2005]. These simulator/simulation-based approaches allow for observable outcomes as well as subjective assessments of team performance.

Advancing the State of Readiness of Cognitive Engineering. One of the main challenges in infusing cognitive engineering principles and practices into systems engineering is making the value proposition of cognitive engineering clear to acquisition managers, program managers, and industry practitioners, who typically face stringent schedules, development risks, and tight budgets. So, the question that needs to be answered, first and foremost, for acquisition managers is this: Is cognitive engineering ready for prime-time use? To answer this question, we need to examine the state of readiness of cognitive engineering. Fortunately, there are several telltale indicators of whether or not a technology is ready for adoption (Fig. 1). The technology adoption lifecycle shown in Figure 1 (starting with skepticism and concluding with adoption) places the state of readiness of cognitive engineering at the enthusiasm stage, which corresponds to a few positive experiences having been recorded with pilot experiments.


Figure 1. Technology adoption lifecycle.

Table VI further elaborates on the state of readiness of cognitive engineering. In light of the foregoing, a multipronged strategy is required to garner the attention of acquisition/program managers and industry practitioners. The first element of the strategy is to make the business case for cognitive engineering in terms of return on investment (ROI), so that acquisition/program managers and industry practitioners move up the ladder of "predisposition to act" in the technology adoption lifecycle (Fig. 1). The second is to overcome the barriers to adoption of cognitive engineering. Barriers to implementing cognitive engineering in systems integration programs stem from three sources: (a) the prevailing culture of systems integration program personnel; (b) the near-term challenges that perennially confront program managers; and (c) the limited maturity of available cognitive engineering tools. Table VII characterizes these barriers.

The strategies to overcome these barriers need to focus squarely on program management, culture, and the maturity of the tools. From a cultural perspective, it is important to cultivate program managers who have an open, receptive mind and who take responsibility for introducing human factors/cognitive engineering into the system's lifecycle. It is equally important to convey the value proposition of cognitive engineering to program managers in both qualitative and quantitative terms (e.g., ROI, elimination of rework, reduction in human error rates). And, when it comes to the maturity of the available tools, stretch goals need to be defined for tool vendors, and the systems engineering community needs to be exposed to the artifacts that can be created using cognitive engineering tools and to the impact of those artifacts on systems engineering. The third element of the strategy is to leverage the network of professional societies (e.g., INCOSE, HFES, IEEE SMC), industry associations (e.g., NDIA), and technology forums at universities to spread the word and grow a following; publishing in systems engineering journals and presenting at major conferences is also key to cultivating a following.

Table VI. State of Readiness of Cognitive Engineering


Table VII. Barriers to Infusion of Cognitive Engineering into Systems Engineering

The fourth is to make cognitive systems engineering part of graduate studies in systems engineering and of distance learning curricula at major universities. The fifth is to document case studies that can be shared with the systems engineering community; the case studies framework proposed by Friedman and Sage [2004] holds promise in this regard. Collectively, these strategies can advance the state of readiness of cognitive engineering and make it attractive for adoption by acquisition/program managers and industry advocates [Madni, Sage, and Madni, 2005].

6. NATIONAL RESEARCH COUNCIL STUDY RECOMMENDATIONS

The recently completed DoD-sponsored NRC study, "HSI in the System Development Process," identified five core principles that are critical to the success of human-intensive system development and evolution: satisfying the requirements of the system stakeholders (i.e., buyers, developers including engineers and human factors specialists, and users); incremental growth of system definition and stakeholder commitment; iterative system definition and development; concurrent system definition and development; and management of project risks.

Table VIII. NRC Committee Recommendations


After analyzing several candidate system development models in terms of these five principles, the NRC committee selected the Incremental Commitment Model as the systems engineering framework for examining the various categories of methodologies and tools that provide information about the environment, the organization, the work, and the human operator at each stage of the design process [Pew and Mavor, 2007]. Although the ICM is not the only model that could be used for this purpose, it provides a convenient and robust framework for investigating HSI concepts. A central focus of the model is the progressive reduction of risk through the system development lifecycle, to produce a cost-effective system that satisfies the needs of the different stakeholders. The committee concluded that the use of the ICM can achieve a significant improvement in the design of major systems, particularly with regard to human-system integration. Table VIII presents a summary of the recommendations offered by the committee. The committee's research recommendations include: (a) the development of shared representations to enable meaningful communication among hardware, software, and HSI designers as well as within the human-system design group and within the stakeholder community; (b) the extension and expansion of existing HSI methods and tools, including modeling and simulation methods, risk analysis, and usability evaluation; and (c) the full integration of human systems and systems engineering. Table IX summarizes these research recommendations.


The Incremental Commitment Model. The Incremental Commitment Model (ICM) is a risk-driven extension of the spiral model [Boehm, 1988]. It achieves progressive risk reduction in system development to produce a cost-effective, stakeholder-responsive system. With the ICM, cost-effectiveness is achieved by focusing resources on high-risk aspects of the development and deemphasizing aspects that are viewed as posing limited risk. All forms of potential risk, including hardware, software, and HSI risks, are assessed to identify risk-reduction strategies at each stage in the system development process. The model recognizes that, in very large, complex systems, requirements change and evolve throughout the design process. As such, the approach to acquisition is incremental and evolutionary: acquiring the most important and well-understood capabilities first; working concurrently on engineering requirements and solutions; using prototypes, models, and simulations to explore design implications and thereby reduce the risk of specifying inappropriate requirements; and basing requirements on stakeholder involvement and assessments. When tradeoffs among cost, schedule, performance, and capabilities are not well understood, the model provides a framework for specifying priorities for the capabilities and ranges of satisfactory performance, rather than insisting on precise and unambiguous requirements. The ICM consists of five phases: exploration, valuation, architecting, development, and operation. Each phase revisits the major activities of system scoping, goals and objectives, requirements and evaluation, and operations and retirement. The specific level of effort on each activity is risk-driven and thus varies across lifecycle phases and from project to project.


The ICM provides a suitable framework for concurrently investigating and integrating the hardware, software, and human factors elements of systems engineering. Specifically, it supports the concurrent exploration of needs and opportunities, and the concurrent engineering of hardware and software with the necessary emphasis on human considerations. It employs anchor point milestones to synchronize and stabilize concurrently engineered artifacts. These characteristics make the ICM well suited as an "integration platform" for HSI.
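As a rough illustration of the risk-driven allocation of effort that the ICM prescribes, the sketch below distributes a notional effort budget across activities in proportion to assessed risk. The phase names come from the description above; the activities, risk scores, and proportional-allocation rule are invented for illustration only.

```python
# Illustrative only: phase names are from the ICM description in the text;
# the activities, risk scores, and proportional-allocation rule are assumptions.
ICM_PHASES = ["exploration", "valuation", "architecting", "development", "operation"]

def allocate_effort(risk_by_activity, total_effort_hours):
    """Allocate effort to activities in proportion to their assessed risk."""
    total_risk = sum(risk_by_activity.values())
    return {activity: total_effort_hours * risk / total_risk
            for activity, risk in risk_by_activity.items()}

# Notional risk assessment for one phase (e.g., architecting), scored 1-10.
architecting_risks = {
    "system scoping": 2,
    "requirements and evaluation": 5,
    "HSI / human performance": 8,   # high assessed risk draws more effort here
    "operations and retirement planning": 1,
}

for activity, hours in allocate_effort(architecting_risks, total_effort_hours=320).items():
    print(f"{activity:38s} {hours:6.1f} h")
```

The point is not the particular numbers but the mechanism: as risks are reassessed in each phase, the effort profile shifts with them, which is what distinguishes the ICM from a fixed, phase-by-phase allocation.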

7. DEVELOPING A RESEARCH AGENDA

In light of the foregoing, we can define the research issues that need to be addressed when infusing HSI principles and criteria into software and systems engineering (Table X). Building on the NRC findings, the following paragraphs address each of these research issues. These are difficult questions to answer, and their answers will emerge gradually as greater emphasis is placed on HSI research. It is in this spirit that the following research thrusts are presented.

HSI Problem Identification. At the outset, it is important to identify the underlying concerns that motivate the introduction of HSI into the software/systems engineering process. The underlying problems could be one or more of the following: the system is too difficult to operate; human error rates are unacceptably high; the system is not being used at all, or not as intended; the system is too hard to maintain; the system is too expensive; or the system does not scale. To this end, research is needed to advance the state of the art in virtual prototyping and human-machine interaction simulation.

Table IX. Research Recommendations


Table X. Research Questions

Development of a Shared Representation. Per the NRC's recommendation, the development of a shared representation is key to enabling meaningful communication among hardware, software, and HSI personnel, as well as between HSI personnel and the stakeholder community. To this end, the development of a common ontology and a lexical database that eliminates polysemy and synonymy problems can serve as a starting point for such discourse.

Expansion of Existing Methods and Tools. According to the NRC study, modeling and simulation tools, as well as risk analysis and usability evaluation methods, have thus far focused on front-end analysis and taken a narrow view of human-system integration. Research is needed to extend these methods and tools to span the full system lifecycle while also enriching the scope of the modeling, simulation, and analytic tools.

Full Integration of Human Systems and Systems Engineering. The integration of HSI within systems engineering is not just a technical issue; it is also an organizational and cultural issue. As such, the principles of HSI need to be applied to design and development organizations, not just to the systems being procured. The NRC study recommends the full integration of human systems with systems engineering across the seven key areas presented in the report. To this end, research is needed to extend HSI methods to complex systems and systems of systems (SoS), while recognizing that SoS and complex systems are gradually coming together in terms of how they are viewed.

Human Performance Modeling. Human performance varies nonlinearly with a variety of factors such as stress, anxiety level, workload, fatigue, and motivation level. For example, the Yerkes-Dodson law shows that as stress increases, so does performance, up to a point after which performance levels off and then tapers off (the inverted U-curve).

Cognitive workload becomes a key concern in functions/jobs that are mentally taxing [Madni, 1988b, 1988c; Wickens and Hollands, 1999]. Examples of such functions/jobs are air traffic control, fighter aircraft operations, military command and control, nuclear power plant operation, and anesthesiology. The key characteristics of high cognitive load tasks are that they are stimulus-driven (i.e., not self-paced), they produce large fluctuations in demand, they involve multitasking, they generate high stress, and they are highly consequential. The basic approaches to measuring cognitive load are: analytic (e.g., task difficulty, number of simultaneous tasks), task performance (i.e., primary versus secondary task performance), physiological (i.e., arousal/effort), and subjective assessment (e.g., Cooper-Harper ratings, the Subjective Workload Assessment Technique or SWAT). Research is needed in developing human performance models and simulations that exhibit the requisite human performance/behavioral profile as a function of stress and work demands. Such models can then be used to "test-drive" the goodness of HSI in a particular system design and to compare two or more candidate designs.

Architecture Design. The design of the human-system architecture is highly dependent on the roles that humans play in the overall system. In particular, human roles have a significant impact on the architecture depending on whether the human is central to the system or merely an enabling agent [Madni, 2008]. Table XI presents the different human roles that bear on architectural design. Research is needed on architecture design with various levels of human involvement. In particular, a human performance testbed needs to be developed that can support sensitivity analysis of candidate architectures to changes in critical human parameters.
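One way to see what such a testbed might exercise is to give the inverted U a concrete, if simplified, form. The sketch below models performance as a Gaussian-shaped function of arousal and sweeps it over a range of arousal levels; the functional form, parameter values, and combined metric are illustrative assumptions, not an established human performance model.

```python
import math

def performance(arousal, optimum=0.5, width=0.2):
    """Inverted-U (Yerkes-Dodson-like) performance curve on a 0-1 scale.
    Peaks at 'optimum' arousal; 'width' controls how quickly it falls off.
    Illustrative only -- not a validated human performance model."""
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

def mission_effectiveness(arousal, automation_reliability=0.95):
    """Toy combined metric: joint human-machine effectiveness, not either alone."""
    return performance(arousal) * automation_reliability

# Sensitivity sweep a testbed might run: how does effectiveness change
# as operator arousal (driven by workload/stress) moves off its optimum?
for arousal in [0.2, 0.4, 0.5, 0.6, 0.8]:
    print(f"arousal={arousal:.1f}  effectiveness={mission_effectiveness(arousal):.2f}")
```

Even this toy model makes the earlier point about combined metrics: the number that matters is the joint human-machine effectiveness, and an architecture choice that pushes operator arousal away from its optimum shows up directly in that number.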

Table XI. Human Roles and Architectural Implications


System Inspectability. System inspectability is key to ensuring that the human operator does not make erroneous assumptions about the system state and then act on those assumptions, compromising the safe operation of the human-machine system. Cognitive compatibility is concerned with assuring that system "processes" are patterned after, or compatible with, the human's conceptualization of work tasks. In particular, a cognitive system is one that employs psychologically plausible computational representations of human cognitive processes as a basis for system designs that seek to engage the underlying mechanisms of human cognition and augment the cognitive capacities of humans, not unlike a cognitive prosthesis. In this definition, emphasis is placed on psychologically plausible machine-based representations, the key distinguishing characteristic of systems that are patterned in a manner compatible with the human's conceptualization of the task (i.e., the task schema). Research in the development of inspectable cognitive systems with explanatory facilities is especially relevant to optimizing human-system compatibility and, ultimately, to achieving robust human-system integration.

Consolidating the Human Performance Body of Knowledge. At the present time, the human performance body of knowledge is highly fragmented. Some of the main categories are: human workload (cognitive and psychomotor); human decision making (under normal and time-stressed conditions, and in the face of uncertainty and risk); human perception of risk and risk homeostasis; sociocultural factors in human decision making and consensus building; human vigilance and arousal; and physiological/mental stress and fatigue. There are several others as well. Research is needed to determine which of these considerations interact, in what situations, and how. In this regard, organizational simulations can provide new insights [Rouse and Bodner, 2009]. There are other measures that can be taken as well. For example, an updated, online version of Boff and Lincoln's Engineering Data Compendium [1988] could provide a useful starting point for some of these considerations. Another possibility is to provide access to such information in a Wikipedia-style format.

Integrated Aiding-Training Continuum. Historically, operator (and team) training and aiding have been viewed as distinct capabilities. However, recent research has shown that aiding and training lie along a continuum of human performance enhancement [Madni and Madni, 2008]. Research is needed to define the architecture of an integrated aiding-training framework capable of dynamically repurposing Shareable Content Objects (SCOs) for aiding, e-learning, just-in-time training, or on-demand performance support based on user needs and the operational context.

HSI Patterns. Humans interact with systems differently based on their role(s) relative to the system (i.e., supervisory controller, monitor, enabler, or supporting agent). The architecture for each of these contexts can be expected to be different and potentially amenable to characterization through patterns. In certain complex systems, the human role can be expected to change, for example, from supervisor/controller to enabler based on changes in context. As such, the architecture needs to be adaptive, and the pattern needs to reflect this characteristic. Research is needed to define the various types of architectures and their adaptation requirements, with a view to capturing these findings in the form of architectural patterns.
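To suggest what such a pattern might record, here is a minimal sketch of an HSI architectural pattern as a data structure. The fields, role names, and adaptation triggers are hypothetical illustrations rather than an established pattern language.

```python
from dataclasses import dataclass, field

@dataclass
class HSIPattern:
    """Minimal, hypothetical representation of an HSI architectural pattern."""
    name: str
    human_role: str                  # e.g., "supervisory controller", "enabler"
    automation_level: str            # e.g., "automation acts, human can veto"
    adaptation_triggers: list = field(default_factory=list)
    fallback_role: str = ""          # role the human assumes when a trigger fires

supervisory_pattern = HSIPattern(
    name="Supervisory Control with Graceful Handover",
    human_role="supervisory controller",
    automation_level="automation acts, human monitors and can veto",
    adaptation_triggers=["automation confidence below threshold",
                         "contingency outside design envelope"],
    fallback_role="direct manual controller",
)

def applicable_patterns(patterns, current_role):
    """Pick the patterns whose nominal human role matches the current context."""
    return [p for p in patterns if p.human_role == current_role]

print(applicable_patterns([supervisory_pattern], "supervisory controller")[0].name)
```

Capturing the adaptation triggers and the fallback role alongside the nominal role is what would let an architecture shift the human from supervisor to direct controller without that shift being an afterthought.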

8. CONCLUSIONS

The ultimate goal of system development is to deliver a system that satisfies the needs of its stakeholders (e.g., users, operators, maintainers, owners), given adequate resources to achieve that goal. From an HSI perspective, satisfying stakeholder needs means developing a system that delivers the requisite value to the various stakeholders, is predictable and dependable, and capitalizes on the respective strengths of humans and software/systems technologies while circumventing their limitations. Realizing such a system requires: understanding various aspects of human behavior (e.g., cognitive strategies, risk perception, social and cultural considerations); defining HSI metrics that allow one to judge which of multiple candidate systems exhibits better HSI; and managing a variety of risks associated with, for example, user characteristics, deployment environment constraints, and technological maturity. Cognitive engineering and the Incremental Commitment Model are two key enablers that contribute to the achievement of these objectives. Specifically, cognitive engineering provides insights into human cognitive processes and behaviors, and into the impact of various factors on human performance, including the likelihood of human error. The ICM provides a risk-driven, incremental process for incorporating HSI principles and for investigating and resolving HSI issues during each stage of development.

At the present time, human capabilities and limitations, and their implications for the design, deployment, operation, and maintenance of systems, are not explicitly addressed in systems engineering and acquisition lifecycles. The discipline of HSI is specifically intended to remedy this problem. However, for HSI to take hold within the systems acquisition and engineering communities, several advances need to occur. First, the fragmented body of knowledge in human performance needs to be consolidated, expanded, and transformed into a form that lends itself to being incorporated into software and systems engineering practices. Second, the HSI community needs to make the business case for HSI and communicate its value proposition. Third, systems acquisition and systems engineering policies and incentives need to change. Each of these recommendations provides the basis for the specific research thrusts recommended in this paper.

ACKNOWLEDGMENTS

The author gratefully acknowledges discussions with Dr. Donald MacGregor, Professor Andy Sage, Professor Barry Boehm, Carla Madni, and the research staff at Intelligent Systems Technology, Inc. during the writing of this paper. The author also gratefully acknowledges the review of the references for completeness by Dr. Shaun Jackson.


REFERENCES

D.P. Baker, E. Salas, and J.A. Cannon-Bowers, Team task analysis: Lost but hopefully not forgotten, Indust Org Psychol 35 (1998), 79–83.
B. Boehm, A spiral model of software development and enhancement, IEEE Comput 21 (1988), 61–72.
K.R. Boff and J.E. Lincoln, Engineering data compendium: Human perception and performance, Harry G. Armstrong Aerospace Medical Research Laboratory, Wright-Patterson AFB, OH, June 1988.
C. Bonaceto and K. Burns, A survey of the methods and uses of cognitive engineering, Expertise out of context: Proc Sixth Int Conf Naturalistic Decision Making, 2007, pp. 29–75.
H.R. Booher, Handbook of human-systems integration, Wiley Series in Systems Engineering and Management, Andrew Sage, Series Editor, Wiley, Hoboken, NJ, 2003.
H.R. Booher, R. Beaton, and F. Greene, "Human systems integration," Handbook of systems engineering and modeling, A. Sage and W.B. Rouse (Editors), Wiley, Hoboken, NJ, 2009, pp. 1319–1356.
R. Bordini, M. Fisher, and M. Sierhuis, Formal verification of human-robot teamwork, 4th ACM/IEEE Int Conf Human-Robot Interaction (HRI 2009), 2009, pp. 267–268.
C. Bowers, D. Baker, and E. Salas, Measuring the importance of teamwork: The reliability and validity of job/task analysis indices for team training design, Mil Psychol 6 (1994), 205–214.
J.A. Cannon-Bowers, S.I. Tannenbaum, E. Salas, and C.E. Volpe, "Defining competencies and establishing team training requirements," Team effectiveness and decision making in organizations, R.A. Guzzo and E. Salas (Editors), Jossey-Bass, San Francisco, CA, 1995, pp. 333–381.
J.R. Chiles, Inviting disaster: Lessons from the edge of technology, Collins, New York, 2001.
S.F. Chipman, J.M. Schraagen, and V.L. Shalin, "Introduction to cognitive task analysis," Cognitive task analysis, J.M. Schraagen, S.F. Chipman, and V.L. Shalin (Editors), Erlbaum, Mahwah, NJ, 2000, pp. 41–56.
Defense Science Board, Defense Science Board Task Force on Patriot System Performance, Report Summary, DTIC No. ADA435837, Department of Defense, Washington, DC, January 2005.
Department of Defense Handbook, Human Engineering Program Process and Procedures, MIL-HDBK-46855, Department of Defense, Washington, DC, January 31, 1996.
D.L. Dieterly, "Team performance requirements," The job analysis handbook for business, industry, and government, S. Gael (Editor), Wiley, New York, NY, 1988, pp. 766–777.
G. Friedman and A.P. Sage, Case studies of systems engineering and management in systems acquisition, Syst Eng 7 (2004), 84–97.
S.A.E. Guerlain, Factors influencing the cooperative problem-solving of people and computers, Proc Hum Factors Ergonom Soc 37th Annual Meeting, Human Factors and Ergonomics Society, Santa Monica, CA, 1993, pp. 387–391.
K.R. Hammond, Human judgment and social policy: Irreducible uncertainty, inevitable error, unavoidable justice, Oxford University Press, New York, 1996.
K.R. Hammond, R.M. Hamm, J. Grassia, and T. Pearson, Direct comparison of the efficacy of intuitive and analytic cognition in expert judgment, IEEE Trans Syst Man Cybernet 17 (1987), 753–770.
S. Hymon, Metrolink report urges more oversight, safety equipment, Los Angeles Times, January 8, 2009, p. 85.
D. Kahneman and A. Tversky, "The simulation heuristic," Judgment under uncertainty: Heuristics and biases, D. Kahneman, P. Slovic, and A. Tversky (Editors), Cambridge University Press, New York, 1982, pp. 201–208.
A. Kirlik, Modeling strategic behavior in human-automation interaction: Why an "aid" can (and should) go unused, Hum Factors 35 (1993), 221–242.
G.A. Klein, Naturalistic decision-making: Implications for design (SOAR 93-1), Crew Systems Ergonomics Information Analysis Centre, Wright-Patterson AFB, OH, 1993.
G. Klein, Implications of the naturalistic decision making framework for information dominance, Report No. AL/CF-TR-1997-0155, Armstrong Laboratory, Human Engineering Division, Wright-Patterson AFB, OH, 1997.
D. Klinger and B. Hahn, Handbook of team CTA, Contract F41624-97-C-6025, Human Systems Center, Brooks AFB, Klein Associates, Salem, NH, 2003.
A.C. Landsburg, L. Avery, R. Beaton, J.R. Bost, C. Comperatore, R. Khandpur, T.B. Malone, C. Parker, S. Popkin, and T.B. Sheridan, The art of successfully applying human systems integration, Nav Eng J 120 (2008), 77–107.
A.M. Madni, The role of human factors in expert systems design and acceptance, Hum Factors J 30 (1988a), 395–414.
A.M. Madni, HUMANE: A designer's assistant for modeling and evaluating function allocation options, Proc Ergon Adv Manuf Automat Syst Conf, Louisville, KY, August 16–18, 1988b, pp. 291–302.
A.M. Madni, HUMANE: A knowledge-based simulation environment for human-machine function allocation, Proc IEEE Natl Aerospace Electron Conf, Dayton, OH, May 1988c, pp. 860–86.
A.M. Madni, Integrating human factors, software and systems engineering: Challenges and opportunities, Invited presentation, Proc 23rd Int Forum COCOMO Syst/Software Cost Model ICM Workshop 3, Davidson Conference Center, University of Southern California, Los Angeles, October 27–30, 2008.
A.M. Madni and S. Jackson, Towards a conceptual framework for resilience engineering, IEEE Syst J 3 (2009).
A.M. Madni and C.C. Madni, GATS: A Generalizable Aiding-Training System for human performance and productivity enhancement, Phase I Final Report, ISTI-FR-594-01/08, Contract # FA8650-07-M-6790, Intelligent Systems Technology, Los Angeles, CA, January 3, 2008.
A.M. Madni, A. Sage, and C.C. Madni, Infusion of cognitive engineering into systems engineering processes and practices, Proc 2005 IEEE Int Conf Syst Man Cybernet, Hawaii, October 10–12, 2005, pp. 960–965.
R.M. McIntyre and E. Salas, "Measuring and managing for team performance: Emerging principles from complex environments," Team effectiveness and decision making in organizations, R.A. Guzzo and E. Salas (Editors), Jossey-Bass, San Francisco, CA, 1995, pp. 9–45.
D. Meister, Conceptual aspects of human factors, The Johns Hopkins University Press, Baltimore, MD, 1989.
A.M. Meyer and J.W. Baker, Eliciting and analyzing expert judgment: A practical guide, SIAM, Los Alamos National Laboratory, Los Alamos, NM, 2001.
B.B. Morgan, A.S. Glickman, E.A. Woodard, A.S. Blaiwes, and E. Salas, Measurement of team behavior in a Navy training environment, Technical Report TR-86-014, Naval Training Systems Center, Human Factors Division, Orlando, FL, 1986.
K.L. Mosier and L.J. Skitka, "Human decision makers and automated aids: Made for each other?" Automation and human performance: Theory and applications, R. Parasuraman and M. Mouloua (Editors), Erlbaum, Mahwah, NJ, 1996, pp. 201–220.
National Research Council, Committee on Human-System Design Support for Changing Technology, "Human-system integration in the system development process: A new look," Committee on Human Factors, Division of Behavioral and Social Sciences and Education, R.W. Pew and A.S. Mavor (Editors), National Academies Press, Washington, DC, 2007, Chap. 3, pp. 55–74.
R.E. Nisbett and T.D. Wilson, Telling more than we know: Verbal reports on mental processes, Psychol Rev 84 (1977), 231–259.
R. Parasuraman and V. Riley, Humans and automation: Use, misuse, disuse, abuse, Hum Factors 39 (1997), 230–253.
R. Parasuraman, R. Molloy, and I.L. Singh, Performance consequences of automation-induced complacency, Int J Aviation Psychol 3 (1993), 1–23.
R.W. Pew and A.S. Mavor, Human-system integration in the system development process: A new look, National Academies Press, Washington, DC, 2007.
R.E. Redding, Perspectives on cognitive task-analysis: The state of the state of the art, Proc 33rd Annu Meet Hum Factors Soc, Santa Monica, CA, 1989, pp. 1348–1352.
W.B. Rouse and D.A. Bodner, "Organizational simulation," Handbook of systems engineering and modeling, A.P. Sage and W.B. Rouse (Editors), Wiley, Hoboken, NJ, 2009, pp. 763–790.
T.B. Sheridan, Man-machine systems, MIT Press, Cambridge, MA, 1974.
T.B. Sheridan, Telerobotics, automation, and human supervisory control, MIT Press, Cambridge, MA, 1992.
T.B. Sheridan, Humans and automation: System design and research issues, Wiley, Hoboken, NJ, 2002.
P. Slovic and A. Tversky, Who accepts Savage's axiom? Behav Sci 19 (1974), 368–373.
M.J. Stevens and M.A. Campion, The knowledge, skills, and ability requirements for teamwork: Implications for human resource management, J Management 20 (1994), 503–530.
J.A.F. Stoner, Risky and cautious shifts in group decisions: The influence of widely held values, J Exper Soc Psychol 4 (1968), 442–459.
M.A. Wallach, N. Kogan, and D.G. Bem, Diffusion of responsibility and level of risk-taking in groups, J Abnorm Soc Psychol 68 (1964), 263–274.
M.A. Wallach, N. Kogan, and D.G. Bem, Group influence on individual risk taking, J Abnorm Soc Psychol 65 (1962), 75–86.
J. Weaver, C. Bowers, E. Salas, and J. Cannon-Bowers, Networked simulations: New paradigms for team performance research, Behav Res Meth Instrum Comput 21 (1995), 12–24.
C.D. Wickens and J.G. Hollands, Engineering psychology and human performance, Pearson, Toronto, ON, Canada, 1999.
G.J.S. Wilde, Target risk 2: A new psychology of safety and health: What works? What doesn't? And why? PDE Publications, Toronto, ON, Canada, 2001.
D.D. Woods and N.B. Sarter, "Learning from automation surprises and going sour accidents," Cognitive engineering in the aviation domain, N. Sarter and R. Amalberti (Editors), Erlbaum, Hillsdale, NJ, 2000, pp. 327–353.
R.M. Yerkes and J.D. Dodson, The relation of strength of stimulus to rapidity of habit formation, J Compar Neurol Psychol 18 (1908), 459–482.

Azad Madni (Fellow) received his B.S., M.S., and Ph.D. degrees in engineering from UCLA with specialization in man-machine environment systems. He is the CEO and Chief Scientist of Intelligent Systems Technology, Inc. He has received several awards and commendations from DoD and the commercial sector for his pioneering research in modeling and simulation in support of concurrent engineering, agile manufacturing, and human-systems integration. He received the 2008 President’s Award and the 2006 C.V. Ramamoorthy Distinguished Scholar Award from the Society for Design and Process Science (SDPS). In 2000 and 2004, he received the Developer of the Year Award from the Technology Council of Southern California. In 1999, he received the SBA’s National Tibbetts Award for California for excellence in research and technology innovation. He has been a Principal Investigator on seventy-three R&D projects sponsored by DoD, NIST, DHS S&T, DoE, and NASA. His research interests are game-based simulations, enterprise systems architecting and transformation, adaptive architectures, shared human-machine decision making systems, and human-systems integration. Dr. Madni is a past president of SDPS and is the Editor-in-Chief of the society’s journal. He is also a fellow of INCOSE, SDPS, and IETE. He is listed in Marquis’ Who’s Who in Science and Engineering, Who’s Who in Industry and Finance, and Who’s Who in America.
