Preprint. For details, see: https://arxiv.org/abs/1707.03869
Cognitive Biases in Software Engineering: A Systematic Mapping and Quasi-Literature Review

Rahul Mohanani, Iflaah Salman, Burak Turhan, Pilar Rodriguez, Paul Ralph

• R. Mohanani, I. Salman and P. Rodriguez are with the M3S Group, University of Oulu, Oulu, Finland. E-mail: [email protected], [email protected], [email protected]
• B. Turhan is with the Department of Computer Science, Brunel University London, London, UK. E-mail: [email protected]
• P. Ralph is with the Department of Computer Science, University of Auckland, Auckland, New Zealand. E-mail: [email protected]

Abstract—One source of software project challenges and failures is the systematic errors introduced by human cognitive biases. Although extensively explored in cognitive psychology, investigations concerning cognitive biases have only recently gained popularity in software engineering (SE) research. This paper therefore systematically maps, aggregates and synthesizes the literature on cognitive biases in software engineering to generate a comprehensive body of knowledge, understand the state of the art, and provide guidelines for future research and practice. Focusing on bias antecedents, effects and mitigation techniques, we identified 67 articles published between 1990 and 2016, which investigated 47 cognitive biases. Despite strong and increasing interest, the results reveal a scarcity of research on mitigation techniques and poor theoretical foundations for understanding and interpreting cognitive biases. Although bias-related research has generated many new insights in the software engineering community, specific bias mitigation techniques are still needed for software professionals to overcome the deleterious effects of cognitive biases on their work.

Index Terms—Antecedents of cognitive bias, cognitive bias, debiasing, effects of cognitive bias, quasi-systematic literature review, systematic mapping, systematic literature review
1 INTRODUCTION
Most research on software engineering (SE) practices and methods focuses on better ways of developing software. That is, most studies present an ostensibly good way of doing something (e.g., a design pattern). Another way to improve software development is to identify common errors (e.g., antipatterns), their causes and how to avoid them. This is where the concept of cognitive biases is exceptionally useful. A cognitive bias is a common, systematic error in human reasoning [1]. For example, confirmation bias is the tendency to pay undue attention to sources that confirm our existing beliefs while ignoring sources that challenge our beliefs [2]. Within the software development context, cognitive biases may affect:
• Behavior, e.g., deciding to use a relational database because it seems like a default option.
• Belief, e.g., strongly believing that well-documented requirements are crucial for success without any good supporting evidence.
• Estimation, e.g., (over/under)-estimating the time and effort needed to develop a system.
• Memory, e.g., remembering previous projects as having gone better than they did.
• Perception, e.g., perceiving a unit test as designed to break the system when it is actually designed to confirm that the system works.
Cognitive biases help to explain many common software engineering problems in diverse activities including design [3], [4], [5], testing [6], [7], requirements engineering [8], [9] and software project management [10]. While there is no definitive index, over 200 cognitive biases have been identified, and few studies attempt to classify cognitive biases [11]. Some research in the information systems community (e.g. [2], [12], [13]) notwithstanding, no previous comprehensive literature review of research on cognitive biases in SE exists. This hinders development of an actionable, cumulative body of knowledge. This literature review therefore systematically identifies the cognitive biases investigated in SE research, without restriction to any specific SE knowledge area, focusing explicitly on the antecedents and effects of biases and on the debiasing techniques used to mitigate those effects. The aim of this study is to document the state of the art of cognitive bias related research in SE, addressing the following research question.
Research question: What is the current state of research on cognitive biases in software engineering?
To address this question, we adopt a quasi-systematic literature review and systematic mapping approach. This combination facilitates classifying and aggregating findings into a comprehensive and structured body of knowledge. By presenting gaps in available research and discussing implications for research and practice, this study provides grounded avenues for future bias-informed investigations, with a view to establishing foundational insights into the application of cognitive psychology in SE.
This paper is organized as follows: the next section provides a brief primer on cognitive biases, focusing on their causes, effects and consequences. Section 3 describes our research approach. We then present the findings of our systematic mapping (Section 4) and literature review (Section 5). Section 6 discusses the implications and limitations of our findings and offers avenues for future research. Section 7 summarizes the study's contributions.
2 COGNITIVE BIASES: A BRIEF PRIMER
Human behavior and cognition constitute a critical research area in software engineering (SE) [14], [15], [16], [17]. Although agile methodologies and empirical research highlight the importance of the people and teams involved in software development, SE research and practice continue to focus more on technologies, techniques and processes than on common patterns of human-related errors and their manifestation in project failures. Moreover, many studies on human decision-making in software engineering adopt theories and concepts from psychology to address issues in, for example, decision support systems [2], [18], software project management [19] and software engineering and development paradigms [20], [12], [21]. Numerous systematic literature reviews have explored psychological and sociological aspects of SE including motivation [22], personality [23], organizational culture [24] and behavioral software engineering [25]. These studies collectively demonstrate the importance of human cognition and behavior for software engineering success.
Since Tversky and Kahneman [26] introduced the term in the early 1970s, a vast body of research has investigated cognitive biases in diverse disciplines including psychology [27], medicine [28] and management [13]. Kahneman [29] went on to differentiate fast, "system one" thinking from slow, "system two" thinking. System one thinking is unconscious, effortless, intuitive, and leads to systematic errors (i.e. biases). System two thinking, contrastingly, is conscious, effortful, more insightful, more rational, more statistical and less prone to systematic errors [29]. However, system-one thinking is not the only source of biased reasoning. More generally, cognitive biases can be caused by:
1. Heuristics, that is, simple and efficient, but not necessarily the most accurate, rules for making decisions and other judgments [29].
2. Illusions [30]—both perceptual and social—for example, the illusion of control [31].
3. Group processes, including groupthink: a psychological phenomenon in which the desire for harmony or conformity within a group inhibits critical thinking (to avoid conflict) and produces irrational or dysfunctional decisions, for example through excessive optimism [32].
4. Psychological phenomena, for example, the Pollyanna Principle, an individual's "tendency to be overly positive in perception, memory and judgment", which may reinforce optimism bias [12].
5. Human cognitive limitations, for example, difficulty in quickly processing complex information [P24].
6. Motivational factors such as rewards and incentives, responsibility and empowerment, and stress, risk or a lack of communication channels [22].
7. Adaptation to natural environments [33].
While the causes of cognitive biases are of great interest in psychology, SE researchers focus more on their effects [21]. While Section 5.2 provides a more comprehensive list, some examples include:
• Confirmation bias (defined earlier) is implicated in the common antipattern where unit tests attempt to confirm that the code works rather than to break it [34].
• Optimism bias, the tendency to produce unrealistically optimistic estimates [35], contributes to the tendency for all kinds of projects (not only software projects) to exceed their schedules and budgets [36].
• The framing effect can lower design performance. The framing effect refers to the tendency to give different responses to problems that have surface dissimilarities but are formally identical [37]. Framing desiderata more formally and definitively reduces design quality by impeding creativity [3], [38].
SE researchers are also interested in inhibiting cognitive biases or mitigating their deleterious effects [21], that is, debiasing [39], [40]. As Fischhoff [39] explains, debiasing interventions can be divided into four levels:
To eliminate an unwanted behavior, one might use an escalation design, with steps reflecting increasing pessimism about the ease of perfecting human performance: (A) warning about the possibility of bias without specifying its nature… (B) describing the direction (and perhaps extent) of the bias that is typically observed; (C) providing a dose of feedback, personalizing the implications of the warning; (D) offering an extended program of training with feedback, coaching, and whatever else it takes to afford the respondent cognitive mastery of the task. (p. 426)
The problem with this approach is that options A, B and C are ineffective [41], [42], and option D can be very expensive. Fischhoff therefore offered a fifth option: redesign the person-task system to inhibit the bias. Planning Poker, the effort estimation technique used in many agile methods, illustrates option five—it redesigns the person-task system to prevent developers from anchoring on each other's effort estimates [43].
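As a concrete illustration of the unit-testing antipattern mentioned above, the following minimal Python sketch contrasts a purely confirmatory test with tests deliberately designed to disconfirm the belief that the code works. The parse_version function and the test inputs are hypothetical and not drawn from any primary study.

    import unittest

    def parse_version(text):
        # Hypothetical function under test: parses "major.minor" into a tuple of ints.
        major, minor = text.split(".")
        return int(major), int(minor)

    class ConfirmatoryTest(unittest.TestCase):
        # Confirmation bias: only the input the developer already expects to work.
        def test_happy_path(self):
            self.assertEqual(parse_version("2.7"), (2, 7))

    class DisconfirmingTests(unittest.TestCase):
        # Debiased: inputs deliberately chosen to try to break the code.
        def test_missing_minor(self):
            with self.assertRaises(ValueError):
                parse_version("2")          # no ".": unpacking fails
        def test_non_numeric(self):
            with self.assertRaises(ValueError):
                parse_version("two.seven")  # int() fails

    if __name__ == "__main__":
        unittest.main()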
3 RESEARCH METHODOLOGY
This study combines systematic mapping with a quasi-systematic literature review. A systematic mapping study (SMS) provides an overview of a particular research area: it categorizes the available research on a broad or poorly explored topic and structures it by the type of research reported, rather than extracting specific details [44], [45]. Meanwhile, a systematic literature review (SLR) aggregates and evaluates an area of literature vis-à-vis specific research questions [46]. A quasi-SLR uses the same methodology but with less extensive meta-analysis [47], i.e., less statistical combination of quantitative results from different studies of the same context. We employ a quasi-SLR because there are insufficient empirical studies with overlapping variables to facilitate statistically sound meta-analysis. Combining an SLR and SMS is common—the SLR synthesizes the qualitative findings while the SMS provides bibliometric analysis and appropriate categorization of primary studies based on a priori elements [48], [49], [50], [51]. Wherever possible, we followed the guidelines provided by Kitchenham and Charters [46] and Petersen et al. [45].
3.1 Objectives
Following Goal-Question-Metric [52], the objective of this paper is as follows.
Purpose: To analyze SE literature concerning cognitive biases for the purpose of identifying, aggregating, categorizing and synthesizing the evidence available in the primary studies, with respect to investigating the antecedents to cognitive biases, effects of cognitive biases and debiasing techniques; from the point of view of researchers in the context of empirical and non-empirical studies conducted in academic and industrial contexts.
3.2 Research questions
For clarity, we distinguish between mapping study research questions (MQs) and systematic review research questions (SQs). The research questions are defined using the population, intervention, comparison and context criteria proposed by Kitchenham and Charters [46]. The research questions for the mapping study are as follows:
MQ1. What cognitive biases are investigated in SE studies?
MQ2. What are the publication trends concerning cognitive biases in SE?
MQ3. Which SE researchers are most active in investigating cognitive biases?
MQ4. Which SE knowledge areas are most often addressed by research on cognitive biases?
MQ5. In which SE outlets is most cognitive bias research published?
MQ6. Which research methodologies are most frequently used to investigate cognitive biases in SE?
Meanwhile, the research questions for the systematic review are:
SQ1. What antecedents of cognitive biases are investigated in SE?
SQ2. What effects of cognitive biases are investigated in SE?
SQ3. What debiasing approaches are investigated in SE?
3.3 Systematic mapping / review protocol
To improve the rigor and reproducibility of our review process, we developed and mutually agreed upon an initial review protocol [53]. The review protocol was improved through iterative assessment and periodic review. We summarize the major stages of our review protocol as follows.
1. Execute the search string "software AND cognitive bias" for all years up to and including 2016 in online databases (see Section 3.4.1).
2. Remove any duplicate entries using automated and then manual de-duplication (see Section 3.4.2).
3. Apply the inclusion and exclusion criteria described in Section 3.5.1.
4. Expand the sample through retrospective snowballing on reference lists to find additional studies; repeat steps 2 and 3 for additional studies identified in this stage.
5. Expand the sample by reviewing the publication lists of the most active authors for additional studies; repeat steps 2-4.
6. Assess the quality of the selected primary studies based on a priori guidelines (see Section 3.6).
7. Extract the required data from the primary studies based on the research questions.
The remainder of this section provides further details on each of these steps.
3.4 Literature search
3.4.1 Database selection and search string formation
We searched five online databases recommended by Kitchenham and Charters [46]: IEEE Xplore, Scopus, Web of Science, the ACM Digital Library and Science Direct. These databases are frequently used for SLRs on computer science and SE topics and, in our experience, provide good coverage of the literature as well as good filtering and export functions. We defined the search string as "software AND cognitive bias". Where available, we used filters to exclude articles from domains other than SE. This produced an initial sample of 826 studies. Table 1 summarizes the search results per online database.
3.4.2 Citation management and de-duplication
We used RefWorks (www.refworks.com) to store and organize the retrieved articles and to automatically remove duplicate articles. We then exported the bibliographic entries to a spreadsheet, where we tagged each entry with a unique three-digit identification number. We manually reviewed the articles for additional duplicates. This produced a sample of 518 articles.
3.5 Primary study selection
3.5.1 Primary study screening process
The purpose of screening is to remove articles that do not provide direct evidence concerning the study's objectives.
Inclusion and exclusion criteria were developed, then reviewed and agreed by the whole team. This resulted in three inclusion criteria and four exclusion criteria.
Articles were included if:
1. The context of the study is software engineering, where we define software engineering as "a systematic approach to the analysis, design, assessment, implementation, test, maintenance and reengineering of software, that is, the application of engineering to software" [54].
2. The article explicitly involves at least one cognitive bias.
3. The article is published in a journal or in a conference, workshop or symposium proceedings [55].
Articles were excluded if:
1. The article was not peer reviewed or was in a language other than English.
2. The article is a doctoral / master's dissertation or research plan, short paper or report thereof.
3. The article is a secondary or tertiary study.
4. The article's references to cognitive biases were tangential to its purpose or contributions.
Due to the large number of articles (518), we screened them first based on their titles, consulting the abstract or entire article only when necessary to reach a confident judgment. To develop a common understanding of the topic of study and inclusion / exclusion criteria, two of us piloted the screening process on 20 randomly selected papers. This produced medium to high agreement (Cohen’s Kappa = 0.77 [55]). We then refined the protocol, and screened another 20 randomly selected papers which yielded a higher agreement (Cohen’s Kappa = 0.9). Then, we completed the remainder of the screening. This produced 42 primary studies.
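For illustration, the inter-rater agreement statistic used here can be computed as in the short Python sketch below; the include/exclude decisions are hypothetical and do not reproduce the actual pilot data.

    # Hypothetical include/exclude decisions by two screeners on a 20-paper pilot.
    rater_a = ["in", "in", "out", "out", "in", "out", "out", "in", "out", "out",
               "in", "out", "out", "out", "in", "out", "in", "out", "out", "out"]
    rater_b = ["in", "in", "out", "out", "in", "out", "out", "out", "out", "out",
               "in", "out", "out", "out", "in", "out", "in", "out", "out", "in"]

    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n          # observed agreement
    p_e = (rater_a.count("in") / n) * (rater_b.count("in") / n) \
        + (rater_a.count("out") / n) * (rater_b.count("out") / n)    # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)
    print(round(kappa, 2))   # Cohen's Kappa for these hypothetical decisions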
3.5.2 Retrospective snowballing
Next, we expanded the sample through retrospective snowballing. That is, we checked the references of all included studies (subject to the inclusion and exclusion criteria) for additional relevant studies. Each time we found a new study, we also checked its references, leading to four iterations (Table 2). This resulted in including 16 additional primary studies.
TABLE 1
DATABASE SEARCH RESULTS

Database            | Search string                    | Date               | Filters applied                                                          | Results
IEEE Xplore         | (software AND "cognitive bias")  | 20th December 2016 | Only 'conference publications' and 'journal & magazines' were included. | 153
ACM Digital Library | "software" AND "cognitive bias"  | 20th December 2016 | 'Newsletters' category was excluded.                                     | 69
Scopus              | software AND "cognitive bias"    | 20th December 2016 | Only 'computer science' category was included.                           | 296
Web of Science      | software AND "cognitive bias"    | 20th December 2016 | No filters were relevant.                                                | 15
Science Direct      | software AND "cognitive bias"    | December 2016      | Only 'journals' and 'computer science' category were included.           | 293
Total               |                                  |                    |                                                                          | 826
TABLE 2
RETROSPECTIVE SNOWBALLING

Snowballing round | # Articles included
Round 1           | 10
Round 2           | 4
Round 3           | 1
Round 4           | 1
Total             | 16
3.5.3 Authors' analysis
Next, we reviewed the publication records of the most prolific authors. For every author of four or more primary studies (this covered 95% of the study pool), we reviewed their Google Scholar profile (scholar.google.com) or, if not available, their personal website. This identified 9 additional primary studies. Retrospective snowballing on these new articles did not produce any additional primary studies. This brought the total to 67 primary studies (see Appendix B for a complete list of primary studies).
3.6 Quality assessment
To assess the quality of the primary studies, we synthesized multiple recommendations ([56], [57], [58]) into a single set of yes-or-no quality assessment criteria, to minimize subjectivity. We then divided the primary studies into two mutually exclusive groups—empirical (48) and non-empirical (19)—since some criteria only apply to empirical studies (Tables 3 and 4). To improve rigor and reliability, we ran a pilot in which two authors independently assessed the quality of 10 randomly selected primary studies, discussed disagreements, refined their mutual understanding of the categories, and revised the quality assessment instrument. Then, each assessed half of the remaining studies.
We used the quality scores, based on the quality assessment checklist, to gauge the relevance of the included articles and to support more objective data extraction for the subsequent mapping and review. We did not reject any article based on the quality assessment rubric, as each primary study satisfies at least 50% of the quality criteria. The mean quality score (the percentage of quality criteria satisfied out of the applicable checklist) for empirical studies is 83.74%, with a maximum of 92% and a minimum of 67%. The mean quality score for non-empirical studies is 84.90%, with a maximum of 100% and a minimum of 50%. Moreover, we conceptualized the quality assessment process as an assessment of an included article's reporting practices, rather than a judgment of the actual quality of the research involved.
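As a small illustration of how these scores are derived (the checklist outcomes below are hypothetical, not the actual assessments), a study's quality score is simply the percentage of applicable criteria it satisfies:

    # Hypothetical yes/no checklist outcomes (True = criterion satisfied).
    checklists = {
        "study_a": [True] * 11 + [False],      # 11 of 12 applicable criteria satisfied
        "study_b": [True] * 10 + [False] * 2,  # 10 of 12
        "study_c": [True] * 8 + [False] * 4,   # 8 of 12
    }
    scores = {sid: 100 * sum(c) / len(c) for sid, c in checklists.items()}
    mean_score = sum(scores.values()) / len(scores)
    print(scores, round(mean_score, 2))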
3.7 Data extraction
Next, we compiled a list of data elements to extract from the primary studies. The data extraction form was designed to answer the research questions; Table 5 summarizes the specific data elements extracted from each primary study. We extracted the data using NVivo (www.qsrinternational.com), which facilitates organizing and analyzing unstructured qualitative data. We recorded only explicit information as presented in the primary studies to avoid interpretation bias, and recorded a particular cognitive bias if and only if it formed a core topic of investigation or discussion in a paper. Some articles discussed more than one cognitive bias.
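To make the extraction form concrete, the sketch below shows the kind of record captured for each primary study; the field names follow Table 5, while the study identifier and values are purely hypothetical.

    # One hypothetical extraction record per primary study; fields follow Table 5.
    record = {
        "id": "P999",                          # hypothetical study identifier
        "year": 2014,                          # MQ2
        "source_avenue": "Example Journal",    # MQ5
        "publication_type": "journal",         # MQ5: journal or conference
        "authors": ["A. Author"],              # MQ3
        "se_knowledge_area": "SE Management",  # MQ4 (SWEBOK v3)
        "research_methodology": "experiment",  # MQ6
        "cognitive_biases": ["anchoring bias"],                 # MQ1
        "antecedents": ["uncertainty of future actions"],       # SQ1
        "effects": ["inaccurate effort estimates"],             # SQ2
        "debiasing_approaches": ["planning poker"],             # SQ3
    }
    print(record["cognitive_biases"])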
4 SYSTEMATIC MAPPING RESULTS
This section addresses each mapping question (MQ). We refer to primary studies with a 'P' (e.g. [P42]) to distinguish them from citations to other references (e.g. [42]).
4.1 Cognitive biases investigated in SE
Regarding MQ1 (What cognitive biases are investigated in SE studies?), the primary studies investigated 47 different cognitive biases (Fig. 1). Each bias is defined in Appendix A. The most investigated cognitive biases are anchoring bias (27), confirmation bias (25), and availability bias (17). Some articles use different names for the same bias (e.g. "optimism bias", "over-optimism bias"). We combined only obviously synonymous biases, as indicated in Appendix A. We return to the issue of synonymous biases, which is more complex than it first appears, in Section 6.2.
4.2 Number of studies
Fig. 2 addresses MQ2 (What are the publication trends concerning cognitive biases in SE?). The earliest paper was published in 1990. There were one or two papers per year until 2001, after which we see a noticeable but inconsistent increase in publications, peaking in 2010. The 2010 spike can be partly attributed to an active collaboration (4 publications) co-authored by G. Calikli and A. Bener, who also feature in the list of most active authors (see Section 4.3). 2016 records only 2 publications; this might be because the digital libraries had not yet indexed all relevant publications at the time of the search.
4.3 Most active authors
To investigate MQ3 (Which SE researchers are most active in investigating cognitive biases?), we ranked all 109 authors by number of publications in our sample. Of these, 97 authored just one or two publications. The top four authors published five or more publications (Table 6); the remaining authors had at most three publications each and are therefore not included in Table 6. Magne Jorgensen is clearly the most prolific author in this area. Some of the active authors are frequent collaborators; for instance, Calikli and Bener co-authored eight publications, while Jorgensen and Molokken collaborated on four.
TABLE 3
CLASSIFICATION OF PRIMARY STUDIES BASED ON RESEARCH TYPE

Research Type | Description | Primary studies
Empirical | Empirical research studies can be defined as research based on observation (viz. interviews, case studies), to test a hypothesis (viz. experimentation) or to construct a hypothesis (viz. grounded theory). | P1, P2, P5, P6, P7, P9, P10, P11, P12, P13, P14, P16, P17, P18, P19, P20, P25, P26, P27, P30, P31, P33, P36, P40, P41, P42, P43, P44, P45, P46, P48, P49, P50, P51, P52, P53, P54, P55, P56, P57, P58, P59, P60, P61, P63, P64, P65, P67
Non-empirical | Non-empirical research covers studies that are conducted without any empirical methodology (viz. theoretical, conceptual or opinion papers). | P3, P4, P8, P15, P21, P22, P23, P24, P28, P29, P32, P34, P35, P37, P38, P39, P47, P62, P66
TABLE 4
QUALITY ASSESSMENT CHECKLIST

Quality criteria | Empirical | Non-empirical
1. Whether a motivation for the study was provided.
2. Whether the aim (objectives, research goal, focus) was reported.
3. Whether the context (SE knowledge area) in which the study was carried out was mentioned.
4. Whether the research design or methodology used to address the aims of the research was provided.
5. Whether a description of the samples used was provided.
6. Whether the paper positions itself in the current body of existing literature.
7. Whether the data collection method(s) was reported.
8. Whether the data analysis method(s) was reported.
9. Whether the threats to validity were reported.
10. Whether any relevance (industry or academia) was reported.
11. Whether the relationship between researchers and participants was mentioned.
12. Whether the findings / conclusions were reported.
TABLE 5
DATA EXTRACTION ELEMENTS

Element | RQ | Description
Year | MQ2 | The publication year of the study.
Source avenue | MQ5 | The name of the journal, conference or workshop where the study was published.
Publication type | MQ5 | We defined two categories: conference and journal. Workshop and symposium publications were classified as conference proceedings.
Author name | MQ3 | The authors of the study.
Subfield of SE | MQ4 | Classified based on the SWEBOK version 3 knowledge areas of SE.
Research methodology | MQ6 | Experiment, grounded theory, survey, case study, interview, think-aloud protocol analysis.
Cognitive bias | MQ1 | The targeted bias(es) forming the focus of investigation or discussion in the study.
Antecedents | SQ1 | The factors explored as antecedents of the investigated cognitive bias(es).
Effect of bias | SQ2 | The reported effects of the investigated cognitive bias(es).
Debiasing approaches | SQ3 | The debiasing treatment / technique / strategy reported to mitigate the effects of the investigated cognitive bias(es).
TABLE 6
MOST ACTIVE AUTHORS

Name of the author | # Primary studies
Jorgensen | 14
Bener | 8
Calikli | 8
Molokken | 5
4.4 SE knowledge areas investigated for cognitive biases
To answer MQ4 (Which SE knowledge areas are most often addressed by research on cognitive biases?), we categorized the primary studies using the knowledge areas specified in the Software Engineering Body of Knowledge [59], as shown in Table 7. The most frequently investigated knowledge area is SE management (22) followed by software construction (13). Many critical knowledge areas including requirements, design, testing and quality are under-represented. Table 7 does not include mathematical foundations, computing foundations, engineering foundations, SE economics, software configuration management or SE professionalism because no studies corresponded to these areas.
4.5 Preferred publication outlets
We summarize the findings concerning MQ5 (In which SE outlets is most cognitive bias research published?) in Table 8. Articles are scattered across many outlets, with 58% in journals or academic magazines and the rest in conference, workshop or symposium proceedings. The outlet with the most articles (5) is the Journal of Systems and Software.

TABLE 7
SE KNOWLEDGE AREAS INVESTIGATED

SE knowledge area | # Primary studies
SE Management | 22
Software Construction | 13
Software Design | 11
Software Requirements | 7
Software Testing | 6
SE (General) | 4
Software Quality | 1
Software Maintenance | 1
SE Process | 1
SE Models & Methods | 1
4.6 Frequently utilized research methodology
Table 9 addresses MQ6 (Which research methodologies are most frequently used to investigate cognitive biases in SE?). Most of the empirical studies employed laboratory experiments. Correspondingly, predominantly qualitative research approaches (e.g. case studies) are under-represented. Nine empirical papers reported multiple studies: seven reported multiple experiments, whereas two utilized a multimethodological approach. "Other approaches" covers studies that used pre-existing data sets or did not report sufficient details about the research methodology employed.
5 SYSTEMATIC REVIEW RESULTS
This section reports the findings of the quasi-SLR. Following the three SLR research questions, it is divided into antecedents of biases, effects of biases, and debiasing strategies.
5.1 Antecedents of cognitive biases
This section addresses SQ1 by reporting the antecedents of cognitive biases investigated in the primary studies. Table 10 summarizes the findings reported in this section.
5.1.1 Anchoring and adjustment bias
Anchoring and adjustment is a common heuristic in which one makes estimates by adjusting an initial value called an anchor. Anchoring bias is "the tendency, in forming perceptions or making quantitative judgments of some entity under conditions of uncertainty, to give excessive weight to the initial starting value (or anchor), based on the first received information or one's initial judgment, and not to modify this anchor sufficiently in light of later information" [60, p. 51]. In other words, anchoring bias is the tendency to stick too closely to the anchor [26]. Suppose, for example, that two developers are discussing how long a task will take. The first developer guesses two days. Further suppose that, if not for the initial estimate (anchor), the second developer would have guessed one week. However, she estimates three days by adjusting the anchor.
Several primary studies suggest potential antecedents of anchoring bias, including (lack of) expertise. According to Jain et al., experts are less susceptible to anchoring bias than novices [P17]. Meanwhile, uncertainty, lack of business knowledge and inflexible clients exacerbate anchoring during decision-making in project-based organizations [P65]. Other biases, including confirmation bias and availability bias, may also promote anchoring and adjustment [P18]. Meanwhile, one primary study found that system designers anchor on preconceived ideas, reducing their exploration of problems and solutions [P35]. Similarly, developers tend to re-use existing queries rather than writing new ones, even when the existing queries do not work in the new situation [P2].
Two primary studies ([P18], [P19]) investigated how developers adjust systems in response to change requests. They found that missing, weak or neglected traceability knowledge leads developers to anchor on their own understanding of the system, which causes poor adjustments (deficient modifications). Another study [P21] found that decision-makers struggle to adjust to new or changed environments because they anchor on their outdated understanding of a previous context.
The astute reader may have noticed something amiss here. Anchoring bias refers to insufficiently adjusting a numerical anchor to estimate a numerical property. "Anchoring" on pre-conceived ideas or outdated knowledge and "adjusting" queries and code is not anchoring and adjustment as understood in the cognitive biases literature. We will revisit this confusion in Section 6.2. For now, we organize results according to what the primary studies ostensibly investigate.
TABLE 8
TOP PUBLICATION OUTLETS

Source Avenue | # Primary studies
Journal of Systems and Software | 5
Journal of Software | 3
Systems Engineering Conference | 3
International Conference on Software Engineering | 3
Information and Software Technology Journal | 2
Empirical Software Engineering Journal | 2
International Conference on Evaluation and Assessment in Software Engineering | 2
IEEE Transactions on Software Engineering | 2
Communications of the ACM | 2
Information & Management Journal | 2
Information Systems Journal | 2
International Conference on Information Systems | 2
International Conference on Predictive Models in Software Engineering | 2
International Symposium on Empirical Software Engineering and Measurement | 2
Psychology of Programming Interest Group Workshop | 2
Advanced information systems engineering Journal | 1
International Journal of Project Management | 1
European Conference on Cognitive Science | 1
AI Magazine | 1
Americas Conference on Information Systems | 1
Canadian Conference on Electrical and Computer Engineering | 1
Document Engineering (Symposium) | 1
Ethics and Information Technology Journal | 1
European Conference on Information Systems | 1
European Software Process Improvement | 1
IASTED International Conference on Software Engineering | 1
Industrial Management & Data Systems Journal | 1
Information Systems Management Journal | 1
Information Systems Research Journal | 1
International Conference on Electronics, Communications and Computers | 1
International Conference on Information Science and Technology | 1
International Conference on Information, Business and Education Technology | 1
International Conference on Intelligent user interfaces | 1
International Workshop on Emerging Trends in Software Metrics | 1
Journal of Association for Information Systems | 1
Journal of Visual Languages & Computing | 1
Management Science Journal | 1
Proceedings of AGILE conference | 1
Research in Systems Analysis and Design: Models and Methods Journal | 1
Scandinavian Journal of Information Systems | 1
Science of Computer Programming Journal | 1
Software Quality Journal | 1
Transaction on Professional Communication Journal | 1
Ubiquity Journal | 1
Workshop on a General Theory of Software Engineering (GTSE) | 1
International Workshop on Cooperative and Human Aspects of Software Engineering | 1

TABLE 9
FREQUENTLY UTILIZED RESEARCH METHODOLOGY

Research methodology | # Primary studies
Experiment | 26
Case study | 6
Interview | 3
Grounded theory | 3
Protocol analysis | 3
Survey | 3
Other approaches | 8
5.1.2 Optimism bias
Optimism bias is the tendency to produce overly optimistic estimates, judgments and predictions [61]. Most of the primary studies addressing optimism bias involve practitioners in the context of effort estimation. Several factors appear to aggravate optimism bias. Practitioners in technical roles (e.g. developers) appear more optimistic than those in non-technical roles (e.g. software managers), at least when estimating web development tasks [P52]. The implication here is surprising: knowing more about specific tasks leads to less accurate effort estimates. Meanwhile, one primary study [P54] of students enrolled in a software development course had several interesting findings:
• Estimates of larger tasks (e.g. whole projects) are more accurate than estimates of decomposed tasks (e.g. a single user story).
• Estimates for more difficult subtasks are more optimistic, while estimates for easy subtasks are more pessimistic.
• Students are also optimistic about the accuracy of their estimates; that is, their confidence intervals are far too tight.
• Estimates are more accurate in hindsight; that is, after a task is completed.
Human perceptions and abilities also appear to affect optimism. For example, practitioners judge more concrete, near-future risks more accurately than more abstract, far-future risks [P25]. Contrastingly, optimism bias is related to the way some projects are initiated. For example, the winner's curse is a phenomenon in which the most competitive bid is selected [P61]. Rather than the most efficient or capable organization's bid, the most competitive bid is often the least realistic, or even the least honest.
5.1.3 Availability bias
Availability bias is the tendency to let information that is easier to recall unduly influence preconceptions or judgments [62]. For example, availability bias manifests in two ways when searching software documentation [P14]: professionals may use inappropriate keywords because they are familiar and easy to remember, or they may struggle to find answers in unfamiliar locations in the documentation. Nevertheless, prior knowledge has positive overall effects on efficiency and effectiveness in documentation searches [P14]. Availability bias can also manifest in project-based organizations when decisions about future estimates of project time, costs and other resources are based solely on events that are easy to recall, often because historical records or lessons learned from previously completed projects are unavailable [P65].
5.1.4 Confirmation bias
Confirmation bias is the tendency to pay undue attention to sources that confirm our existing beliefs while ignoring sources that challenge our beliefs [2]; for instance, software testers' tendency to choose test cases that confirm the current hypothesis instead of test cases that are inconsistent with it. One antecedent of confirmation bias is software practitioners' prior knowledge during knowledge retrieval from software documentation. Although using prior knowledge is time-effective, it can manifest confirmation bias because the search then focuses mainly on confirming answers the practitioners already believe they know [P14]. Recency of experience and training in logical reasoning and mathematical proof reading are reported as further antecedents of confirmation bias. For example, participants (developers/testers) who were experienced but inactive as a developer/tester showed less confirmation bias than those who were both experienced and active in their roles [P5]; however, the study does not report reasons for this observation. Calikli and Bener also report that participants who had training in logical reasoning and mathematical proof reading manifested less confirmation bias in software testing [P5], because they were trained to approach the task at hand more logically rather than only exercising the program in the required way.
5.1.5 Fixation
Fixation is the tendency to focus disproportionately on one aspect of a situation, object or event—particularly self-imposed or imaginary barriers [12]. Only one primary study explicitly investigated fixation [P20]. It found that presenting desiderata as requirements leads to more fixation on those desiderata than presenting them as a less-structured list of ideas.
5.1.6 Overconfidence bias
Overconfidence bias is the tendency to overestimate one's skills and abilities [63]. The manifestation of (over)confidence bias is attributed to the planning fallacy: the tendency to underestimate project completion time [P47]. A narrow focus on the completion of short-term tasks might lead project managers to neglect other useful information, producing an illusion of control and overconfidence in their abilities [P47]. An inability to reflect on one's own past experiences and on one's fundamental (primary) way of thinking and assessing a situation might also lead to overconfidence bias among software developers [P47].
5.1.7 Pessimism bias
Pessimism bias is the tendency to report the status of a task as being in worse shape than it actually is [19]. In SE, pessimism bias manifests when a project manager reports a situation as being in far worse condition than it really is [P33]. A major reason for such behavior is to acquire more resources (e.g., funds or developers) to complete the project successfully. It may also stem from a lack of confidence in one's own abilities, or from a desire to safeguard one's own or the team's reputation should the project miss the schedule stipulated in the contract [P33].
5.1.8 Hindsight bias
When people know the actual outcome of a process, they tend to regard that outcome as having been fairly predictable all along – or at least more predictable than they would have judged before knowing the outcome [64]. A weak historical basis of lessons learned is an antecedent of hindsight bias in the context of project-based organizations [P65]. However, the study does not clearly establish any link or rationale, and lacks empirical evidence concerning the manifestation of hindsight bias and its antecedent.
5.1.9 Mere exposure effect
The mere exposure effect is the tendency to develop a preference for things simply because they are familiar [65]. This bias is expressed when a project participant prefers to retain their current role and responsibilities in the project rather than take on a different role. Project participants often resist such change for fear of leaving their comfort zone or of accepting new responsibilities [P65].
5.1.10 Parkinson's Law effect
The Parkinson's Law effect is the tendency to procrastinate, postponing the execution of activities until the agreed end date [66]. According to the only primary study focusing on this effect, a lack of motivation during unusually long tasks, combined with poor monitoring of progress, might explain its manifestation, especially in project-based organizations [P65].
5.1.11 Halo effect
The halo effect is the tendency to use global evaluations to make judgments about specific traits [67]. For example, in a job interview, the interviewer might incorrectly infer that more charismatic candidates are more technically competent. This effect tends to occur when employees' performance is evaluated in software organizations. Subjective evaluation criteria are an antecedent of the halo effect because first impressions drive subjective evaluations; the first impression, however, might itself be shaped by an employee's personal problems or by inter- or intra-team communication problems [P65].
5.1.12 Sunk-cost fallacy
The sunk-cost fallacy is a logical error due to the tendency to invest more future resources in a situation in which a prior investment has been made, compared with a similar situation in which no prior investment has been made [68]. The sunk-cost fallacy manifests during software project decision-making when future decisions about cost or time estimates are based only on immediate benefits [P65]. In such scenarios, costs and time estimates are disregarded mainly due to inherent human characteristics such as stubbornness or remaining in one's comfort zone for an extended period [P65].
5.2 Effects of cognitive biases
This subsection addresses SQ2 by describing the effects of cognitive biases found in the primary studies. Table 11 summarizes these results.
5.2.1 Anchoring and adjustment bias
Anchoring and adjustment bias affects multiple aspects of software development including development, effort estimation, implementing change requests and prototyping. For example, in the context of artifact and query re-use, developers who anchored on existing solutions, using them as a starting base, introduced unrequired functionality into the final solution; this tendency also caused errors and previously omitted functionality to propagate to the final solution [P2], [P44]. Furthermore, anchoring bias affected the productivity estimates of software development teams: project managers anchored their effort, time and cost estimates to initial, lower (conservative) estimates rather than to the actual project status, which gave them the advantage of justifying the time and resources they had proposed for the project [P53].
System analysts, after clearly understanding a system, tend to anchor illogically on their current understanding rather than reconsidering the system's initial and present situation. Similarly, an analyst may adjust an initially prepared prototype rather than preparing a new one when faced with conflicting information [P4]. Research suggests multiple types of anchors besides point estimates when software developers deal with change requests, especially when traceability is missing [P19]. For example, rather than looking into the code base that implements the functionality related to the change, developers focus on (anchor on) those parts of the implementation that they worked on in the past and adjust the solution accordingly [P19]. The solution may then suffer incorrect changes (adjustments) because traceability was missing or weak. Beyond industrial contexts, the effects of anchoring and adjustment bias can also be observed among students in academic contexts. For example, information technology students in a software architecture class anchored on the content of an earlier course when proposing initial architectural solutions to a problem [P66].
5.2.2 Confirmation bias
The tendency to confirm prior beliefs rather than look for counter-evidence affects many areas of SE. For example, it led practitioners to a false belief about the completeness and correctness of their queries when searching software documentation [P14]: they searched only those parts of the documentation that confirmed the answers to their queries. Similarly, developers acquiring information to deal with a change request tend to confirm their prior knowledge rather than exploring the implementation completely [P19].
Technical people and business people interact with each other holding certain preconceptions: each group assumes (confirmation bias) that the other group is not going to buy their idea or understand their point [P29]. This leads to constant conflicts between the two groups during joint work. Moreover, tradeoff studies, where multiple alternative solutions are considered against multiple criteria, also suffer from confirmation bias: analysts and decision-makers tend to ignore results that do not confirm their preconceived ideas [P39]. One opinion study suggests that software practitioners such as testers and requirements analysts might fall prey to confirmation bias and desire the results they believe in during pilot runs before the actual task, thus risking the entire project [P28].
Confirmation bias in the software testing context leads to an increased number of production defects; a high defect rate may therefore be directly related to a high level of confirmation bias [P5], [P7]. People are supposedly more likely to design and run test cases that confirm the expected execution of a program rather than test cases that could identify failures [P34]. Debugging is also prone to confirmation bias; developers should therefore not assume too quickly that they have located the true cause of a bug [P34].
5.2.3 Hyperbolic discounting
Hyperbolic discounting is the tendency to prefer smaller rewards in the near future over larger rewards later on [69]. It is argued that, due to this bias, software designers and managers usually settle for quick fixes and simple refactoring over alternatives that might require more effort and planning [P38]. Although the advantage of a quick fix requiring less effort is more immediate, the implemented solution might not be the best one.
5.2.4 Availability bias
When mining essential information from software documentation, practitioners tend to search only those locations that are easy to recall or readily available; this strategy often fails to find the correct or required information. Developers also tend to rely on past experience that is easily recalled or available, which leads to systematic errors because an individual's past experiences may not be consistent with the current state of the system [P14], [P18]. Availability bias can lead to several problematic situations, including the following:
• Availability bias might cause over-representation of specific code, because code features that are developed or maintained by a specific developer or team, or that are easily remembered by them, will appear more frequent than they are [P34].
• Developers sometimes end up unrealistically overrating their individual contribution due to the manifestation of availability bias [P34].
• Availability bias might even contribute to certain controversies. For instance, due to prior experience, developers may choose languages that they are comfortable with despite better or more suitable languages being available [P34].
5.2.5 Framing effect
The framing effect is the tendency to react differently to situations that are fundamentally identical but presented (or framed) differently. One study [P20] found that framing desiderata as requirements reduced creativity in software designs (compared to framing them as "ideas").
5.2.6 Attentional bias
Attentional bias refers to how our recurring thoughts can bias our perceptions [70]. For example, suppose a project manager is conducting a performance review with a software developer. A developer with an anxiety disorder might pay more attention to criticism than praise, and might perceive the meeting as more negative and critical than a similar, healthy developer would. In other words, the anxious person's recurring, self-critical thoughts negatively bias his perception of the review.
One primary study ostensibly demonstrated an effect of attentional bias [P30]. Siau et al. investigated experts' interpretations of entity-relationship diagrams containing conflicts between their relationships and their labels. For example, one diagram showed a parent having zero-to-many children, when a person obviously needs at least one child to be a parent. The experts failed to notice these contradictions because they attended only to the relationships. The study claims that this is "attentional bias"; however, it does not explain what recurring thoughts were involved, or how they led to ignoring the surface semantics. This is another example of confusion in the literature (see Section 6.2). The bias at hand is more likely fixation: the experts' analysis was biased by their fixation on one aspect of the models.
5.2.7 Representativeness
The representativeness heuristic is the tendency to make a judgment by fitting an object into a stereotype or model based on a small number of properties [26]. Due to the representativeness heuristic, people tend to expect results they consider representative or typical, defaulting to that expectation without considering the actual size of the sample.
TABLE 10
ANTECEDENTS OF COGNITIVE BIASES

Cognitive Bias | Antecedents
Anchoring & Adjustment bias | Reusing previously written queries; difficult to identify referential points (anchors) [P2]. Missing, weak and disuse of traceability knowledge [P18], [P19]. Recalling domain related information from past knowledge [P19]. Not being able to adjust to the new environment [P21]. Development experience [P17]. Uncertainty of future actions, lack of business knowledge and historical basis and inflexible clients [P65]. Confirmation and availability bias occurrence during system design changes; forming primary understanding of design, requirements and tasks [P18].
Optimism bias | Technical roles (project manager, technology developers); effort estimation strategy [P52]. Task difficulty; task size [P54]. Estimates asking format [P49]. Psychologically distant project risks [P25]. Winner's curse in project bidding [P61].
Availability bias | Prior knowledge; earlier experience in searching [P14]. Lack of historical records of lessons learned [P65].
Confirmation bias | Selection of test cases due to the impossibility of performing exhaustive testing [P34]. Prior knowledge [P14]. Lack of training in logical reasoning and mathematical proof reading; experience and being active in a role (developer / tester) [P5].
Fixation | Framing desiderata as "requirements" [P20].
(Over-)Confidence bias | Inability to question the fundamental way of thinking [P47]. Planning fallacy [P47].
Pessimism bias | Wrong reporting due to lack of confidence or for acquiring more resources [P33].
Hindsight bias | Weak historical basis of lessons learned [P65].
Mere exposure effect | A new and different role assignment in a project than before, leaving the comfort zone and pessimism about the outcomes of change [P65].
Parkinson's law effect | Lack of motivation in the case of long-duration tasks with poor monitoring of progress [P65].
Halo effect | Subjective evaluation criteria being affected by the first impression [P65].
Sunk-cost fallacy | Extending decisions based on immediate benefits without giving due attention to the costs and time required for a change; stubbornness and comfort-zone characteristics of project managers [P65].
Similar to availability bias, representativeness can make some code-related features appear more frequent than others, which may be in contrast to the actual situation [P34]. Owing to representativeness, a software tester may also disregard a set of error-producing test cases, which can in turn increase defect density [P8].
5.2.8 Miserly information processing, bandwagon effect and status quo bias
Miserly information processing is the tendency to avoid deep or complex information processing [37]. The bandwagon effect is "the propensity for large numbers of individuals, in social and sometimes political situations, to align themselves or their stated opinions with the majority opinion as they perceive it" [60, p. 101], and status quo bias manifests when one irrationally prefers the status quo [71].
Miserly information processing may occur when a team gathering requirements from clients readily agrees with them without reconsidering the requirements [P22]. The bandwagon effect may take place when team members readily agree with the team leader's decision without reconsidering other possible options [P22]. Later, if the chosen option leads to unwanted circumstances and an overwhelming situation, the team members will defend their choices illogically – the status quo bias effect [P22]. These biased situations may lead to a less liked or lower-quality product, because accepting the requirements and commending the team leader for his decision were not done diligently. We should, however, caution the reader that these arguments are not backed by empirical observations and come from an opinion article.
5.2.9 Overconfidence bias
One primary study suggests that overconfidence may cause problems in eliciting information from users. Specifically, an overconfident analyst may cease data collection before all pertinent information has been found [P4].
5.3 Debiasing approaches
The primary studies propose debiasing techniques for only 10 of the 47 cognitive biases investigated. However, some
of these techniques target a family of similar or interrelated biases. This section describes the proposed debiasing approaches for these biases. Table 12 further summarizes all our findings. Here we must emphasize caution—none of the primary studies provide strong empirical evidence for the effectiveness of these debiasing techniques. All of the proposed techniques are merely suggestions.
5.3.1 Availability bias For the case where availability bias hindered searching software documentation, three techniques that may mitigate this problem are proposed: developing a “frequently asked questions” document, introducing spelling conventions, and ontology-based documentation [P14]. Ontologybased documentation refers to documents that formalize multiple relationships between discrete pieces of scattered information, to traversal and search [P14]. Another suggestion for mitigating availability bias is to maintain detailed records of software practitioners’ gained experience during software project development lifetime [P55]. Availability bias also affects software design. Projectspecific traceability – a “technique to describe and follow the life of any conceptual or physical software artifact throughout the various phases of software development” [P18, p. 111], may mitigate not only availability but also anchoring bias and confirmation bias [P17], [P18]. Framing the context to highlight relevant information may also help [P34]. Availability bias exacerbates failure rate in software project management. By explicitly uncovering and exploring new situations and possibilities of cost, time and effort estimation decisions via participatory decision-making, helps to promote retrospective reflection amongst project participants. This minimizes the possibility of decisions made solely on the most recent or available information [P67], [P65]. 5.3.2 Confirmation bias Despite being one of the most frequently investigated cognitive biases, few techniques for mitigating confirmation bias have been proposed. Explicitly asking for disconfirmatory evidence against routine guidelines is one approach [P34]. Similarly, asking designers to generate and simultaneously evaluate multiple alternative solutions should hinder premature convergence on early confirmatory evidence [P39]. As discussed in Section 5.3.1, traceability may also help [P17]. 5.3.3 Anchoring and adjustment bias Anchoring bias is especially problematic during effort estimation, which is a special case of forecasting. Forecasting is an area of intense study in the operations research community; however, this review only covers SE research. Still, forecasting can be divided into expert-based (i.e., based on human judgement) and model-based (i.e., based on a mathematical model). Since anchoring-and-adjustment only occurs in expert-based forecasting, adopting modelbased forecasting can be considered a debiasing technique (with the catch that these models are subject to statistical bias and variance) [P55], [P65]. However, model-based forecasting often requires information that is not available
in software projects, so several suggestions for debiasing expert-based effort estimation have been proposed. For example, planning poker [P40] – an estimation technique in which participants estimate tasks independently and reveal their estimates simultaneously – is specifically designed to prevent anchoring bias. Because developers construct their estimates simultaneously, there is no anchor to adjust from; the anchoring-and-adjustment heuristic cannot be applied, so developers are not susceptible to anchoring bias. Motivating practitioners to explore multiple solutions by planning the design problem and its solution simultaneously (co-evolution), and then selecting the best option, relaxes dependence on anchors arising from implicit, project-specific constraints [P35]. In addition, some have argued that anchoring on an initial estimate can be mitigated by having different facilitators (project members or stakeholders) in a company explicitly question the software project managers. In such scenarios, knowledge sharing facilitated by team members' technical competence helps project managers avoid anchoring their initial estimates on prior experience or self-assessments [P67], [P65]. One study [P39] also suggested warning participants about initial estimates and raising awareness of the potential bias. However, such weak interventions are unlikely to help [39]. A directed-questions approach is also suggested to mitigate anchoring bias; for instance: "What is your starting point for estimating (that) duration? Why did you start there?" [P4].
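To make the planning-poker mechanism described above concrete, the sketch below is our own illustration (not taken from any primary study; all function and variable names are hypothetical). It shows the essential property: estimates are formed privately and revealed at the same time, so no early number can serve as an anchor.

FIBONACCI_DECK = [1, 2, 3, 5, 8, 13, 21]

def nearest_card(estimate, deck=FIBONACCI_DECK):
    # Snap a raw estimate (in story points) to the closest card in the deck.
    return min(deck, key=lambda card: abs(card - estimate))

def planning_poker_round(private_estimates):
    # private_estimates: dict of participant name -> estimate made in isolation.
    # All cards are revealed at once; the round converges only if everyone agrees.
    revealed = {name: nearest_card(value) for name, value in private_estimates.items()}
    converged = len(set(revealed.values())) == 1
    return revealed, converged

# One round: three developers estimate the same user story independently.
cards, agreed = planning_poker_round({"dev_a": 4, "dev_b": 8, "dev_c": 13})
print(cards, agreed)  # {'dev_a': 3, 'dev_b': 8, 'dev_c': 13} False -> discuss extremes, re-estimate

When estimates diverge, as in the example run, the participants discuss the extremes and repeat the round, rather than adjusting from any single initial figure.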
5.3.4 Overconfidence and optimism bias
Overconfidence and optimism are two of the most investigated cognitive biases in the SE literature (see Fig. 1). Browne and Ramesh propose addressing numerous cognitive biases, including overconfidence, using directed questions; for instance: "Play the devil's advocate for a minute; can you think of any reasons why your solution may be wrong?" [P4]. "Directed questions attempt to elicit information through the use of schemes or checklists designed to cue information in the user's memory" [P4]. Another primary study argues that planning poker helps practitioners mitigate optimism [P64]. However, planning poker does not explicitly encourage converging on more realistic (or pessimistic) estimates; it seems more plausible that the range of estimates mitigates overconfidence. Double-loop learning, which encourages individuals and organizations to engage in self-reflective learning by explicitly confronting their initial assumptions and developing more appropriate ones, is yet another approach suggested for mitigating overconfidence bias [P47]. Meanwhile, one primary study found that the framing of estimation questions affects optimism. Specifically, asking "How much effort is required to complete X?" produced less optimistic estimates than asking "How much can be completed in Y work hours?" [P49]. Focusing on tasks (rather than time periods) could therefore be considered a debiasing technique.
TABLE 11
EFFECTS OF COGNITIVE BIASES

Cognitive Bias – Effects

Anchoring & Adjustment bias – Presence of non-required functionality and errors in the final query [P2]. Presence of non-required functionality and errors in the final solution [P44]. Depleting long-term productivity for companies [P53]. Inaccurate adjustments to the system and prototype might sabotage the process of understanding the system [P4]. Ignoring the actual change-related implementation while accommodating change requests [P19]. Adjusting according to recently exposed knowledge or information [P66].

Confirmation bias – False belief about the completeness and correctness of the answer/solution [P14]. Not exploring the system's implementation completely when dealing with a change request [P19]. Conflicts between technical and non-technical people [P29]. Rejecting results and measurements that do not conform to the analyst's beliefs [P39]. Higher defect rate [P5]. Higher number of post-release defects [P7]. Running relatively fewer fault-indicating test cases; wrong impression of finding the right root cause during debugging [P34]. Desire for a particular result in pilot runs [P28].

Hyperbolic discounting – Putting less effort into fixing and refactoring to have an immediate reward [P38].

Availability bias – Ignoring unfamiliar and less-used keywords and locations while seeking information in the software documentation [P14]. Relying on easily accessible past experience that might be inconsistent with the current system's [P18]. Misrepresentation of code features [P34].

Framing effects and Fixation – Lowering creativity [P20].

Attentional – Failure to consider alternative possibilities [P30].

Representativeness – Misrepresentation of code features [P34]. Possibility of dropping error-producing test cases [P8].

Miserly information processing – Readily agreeing to the client's requirements without a second thought [P22].

Bandwagon effect – Readily agreeing with the team-leader's suggestion without a second thought [P22].

Status quo bias – Illogical defense of previously made choices [P22].

Overconfidence bias – Affects the analyst's behavior in requirements gathering and causes problems in the software development phase [P4].
5.3.5 Representativeness
Several techniques for mitigating the effects of representativeness are proposed, as follows:
• Using consistent user interfaces in decision support systems [P21].
• Explicitly asking practitioners for alternative solutions (or ideas) [P55].
• Encouraging retrospective analysis and assessment of practitioners' judgments [P55].
• Warning practitioners about the bias a priori (which, again, is unlikely to work) [P55].
• Frequently training participants to avoid representativeness [P55].
5.3.6 Hindsight bias
Hindsight bias can be mitigated by distributing the responsibilities of a software project equally among all the involved project managers, rather than overwhelming any specific group of managers with critical decision-making responsibilities [P65]. Hindsight bias can also be mitigated by maintaining the sequence of software development process activities in a specific order [P65].

5.3.7 Mere exposure effect
Some of the proposed (although not empirically evaluated) approaches for mitigating the mere exposure effect are as follows [P65]:
• Trial and error through pilot tasks during software development processes.
• Explicitly focusing on the final goal or objective of the software project throughout its lifetime.
• Involving and considering the judgments, opinions, views and concerns of stakeholders, based on their knowledge.
5.3.8 Planning fallacy
The planning fallacy is the tendency to underestimate task-completion time based on unrealistic estimates [89]. A few techniques for mitigating the effects of the planning fallacy are proposed, as follows [P65]:
• Assessing the progress of all planned, ongoing and completed tasks in a project on a daily basis.
• Involving clients and other stakeholders early, during the planning, development and release phases.
• Considering stakeholders' and experts' judgments, based on past knowledge and experience, during time estimation.
• Ensuring flexibility in software project plans.

5.3.9 Halo effect
Some of the approaches proposed to mitigate the ill effects of the halo effect are as follows [P65]:
• Evaluating an artifact or the project participants against a checklist, with the help of other professionals.
• Using objective metrics and introducing transparency in sharing the personal problems faced by all the employees in a company.

5.3.10 Sunk-cost fallacy
The sunk-cost fallacy can be mitigated by [P65]:
• Introducing flexibility and accepting changes and alternatives in project plans.
• Involving stakeholders and experts during the project planning and development phases.
• Explicitly uncovering the risks involved in various cost-based estimations.
• Organizing team meetings and project demonstrations to keep project participants informed of any updates on a daily basis.
6 DISCUSSION

6.1 Implications for Researchers
The results of this study suggest four ways of improving research on cognitive biases in SE: conducting more qualitative and multimethodological research; integrating results better across studies; investigating neglected areas; and addressing confusion.
First, experiments are clearly the preferred method in our sample. This is appropriate insofar as most of the primary studies investigate causality, rather than establish facts or develop theories. However, it is crucial to better understand not only whether but also how and how much cognitive biases affect productivity, quality, and software engineering success [72]. Demonstrating a bias in a lab is
not the same as establishing a significant effect on real projects. Some biases (e.g. fixation) may completely derail projects (e.g. through complete failure to innovate) while others may have measurable but practically insignificant effects. Measuring effects on key dependent variables of interest (e.g. project success) rather than poor proxies (e.g. profitability) is essential for differentiating high-impact biases from low-impact ones. Additionally, better understanding the mechanisms through which biases sabotage success will help us create effective debiasing practices. Qualitative and exploratory research is needed to better understand where and how cognitive biases manifest in SE practice, and how biases and debiasing techniques are perceived by practitioners. Multimethodological approaches may also help. For example, combining a randomized controlled trial with a protocol study may not only demonstrate a causal relationship (experiment) but also reveal the cognitive mechanism underlying that relationship (protocol study).
Second, most cognitive biases have not been investigated in a software engineering context at all. For instance, the Dunning-Kruger effect is the tendency for less skilled people to overestimate their abilities [73]. For example, suppose a programmer is reviewing a sophisticated code library. The more skilled the programmer, the more she can recognize the subtle features of the library: the clean code, modular architecture, informative comments, and clever use of design patterns, recursion and code reuse. An incompetent programmer, in contrast, notices none of these features and simply thinks 'sure, I could have built this.' The Dunning-Kruger effect may help to explain conflicts between more and less skilled developers, or between technical and non-technical actors. Collaborative practices including pair programming, peer code review and participative design may ameliorate these conflicts. Of course this is speculation, but the point is that there are many more biases that might help explain important SE phenomena and that SE research has not yet investigated. Along the same lines, many biases have been investigated by only one or two studies. Similarly, several SE knowledge areas are not represented in the sample; that is, no studies investigate cognitive biases in knowledge areas such as configuration management. Since cognitive biases are universal human phenomena, some biases likely interfere with virtually every aspect of software engineering. Furthermore, most studies simply demonstrate biases; few develop and empirically evaluate debiasing techniques. As discussed in Section 3, simply warning participants about biases does not work, which makes debiasing techniques crucial for practical impact.
Third, research on cognitive biases in SE suffers from the same problem as research on cognitive biases in psychology: most studies are disconnected from each other. This is called the archipelago problem [75]:

Most of these illusions have been studied in complete isolation without any reference to the other ones ... Experimental methods as well as theoretical explanations of cognitive illusions thus resemble an archipelago spread out in a vast ocean without any sailor having visited more than one (or at the most two) of the small islands yet. In trying to further explore this largely unknown territory, we believe that cognitive illusions ... should be placed within the broader context of general cognitive processes. (p. 399)

TABLE 12
DEBIASING APPROACHES

Cognitive Bias – Debiasing technique / strategy

Availability bias – Appropriate documentation of information [P14]. Traceability [P18], [P17]. Appropriate framing of information to highlight problematic areas [P34]. Maintaining detailed records [P55]. Participatory retrospective decision-making meetings [P67]. Stakeholders' judgement and expert involvement [P65].

Confirmation bias – Traceability [P17]. Exploring disconfirmatory evidence explicitly [P34]. Generating multiple solutions [P39].

Anchoring / Adjustment bias – Traceability [P18], [P17]. Directed-question-based approach [P4]. Planning poker [P40]. Generating multiple solutions [P35]. Increasing awareness of the bias, warning participants, disregarding initial anchors or status quo [P39]. Statistical prediction methods [P55]. Technical competence and explicit questioning via facilitators [P67].

Overconfidence bias and optimism bias – Directed-question-based approach [P4]. Planning poker [P64]. Double-loop cognitive learning [P47]. Appropriately framing estimation questions [P49].

Representativeness – Explicitly asking for alternative solutions [P55]. Maintaining consistency in the user interface [P21].

Hindsight bias – Distributing equal responsibility among all project managers; maintaining the order of software project development process activities [P65].

Mere exposure effect – Trial and error through pilot tasks; explicit focus on the final goal of the project; stakeholders' judgment based on their knowledge [P65].

Planning fallacy – Daily assessment of planned and accomplished tasks; early involvement of clients in project planning and development; flexibility of project plans; process standardization; stakeholders' judgment [P65].

Halo effect – Evaluation of an artifact / team members with other professionals against a pre-prepared checklist; using objective metrics and transparency in sharing the personal problems faced by all the employees in a company [P65].

Sunk-cost fallacy – Being flexible and accepting alternative plans; stakeholders' and expert judgments; uncovering the risks involved in cost management; daily team meetings and project demonstrations [P65].
Cognitive phenomena rarely act in isolation. Better integrating across studies, biases and knowledge areas may be necessary to understand complex cognitive explanations for routine SE problems. For example, design creativity is inhibited by the nexus of framing effects, fixation and requirements engineering practices [3].
Fourth, research on cognitive biases in SE (and to a lesser extent in cognitive psychology) suffers from widespread confusion, which we address in the next subsection.
6.2 Widespread Confusion
Analyzing many of the primary studies was hindered by at least four kinds of confusion. This does not mean that all, or even some, of the primary studies are bad. Rather, these studies attempt to apply concepts from a social science discipline to an applied science context, which is often quite different from the context in which those concepts originated. Communication across scientific communities can be difficult, and some miscommunication is normal [99]. That said, this subsection attempts to clear up some of the more common misunderstandings.
First, cognitive biases are often confused with related phenomena such as heuristics and fallacies. Representativeness and anchoring-and-adjustment are heuristics; that is, simple but imperfect rules for judging and deciding. Heuristic reasoning is often correct, but can be biased in specific ways or under specific circumstances. For example, when estimating using anchoring-and-adjustment, we tend to adjust too conservatively. One heuristic can underlie many biases; for example, the representativeness heuristic is implicated in several common errors in probabilistic reasoning. Meanwhile, fallacies are specific errors in logic. For example, the sunk cost fallacy is a logical error in which past (lost) investment is used to justify further (irrational) investment. A fallacy is therefore situated in a specific argument. In contrast, we can think of a cognitive bias as a tendency for many people to repeat the same kind of fallacious argument in different contexts; commitment bias, for example, is essentially the tendency to commit the sunk cost fallacy. To summarize, a cognitive bias is a tendency for different people, in different contexts, to experience similar reasoning errors. Heuristics are one of several phenomena that cause cognitive biases. Fallacies are one way that some cognitive biases manifest in some situations. Cognitive biases can be caused by other phenomena, including cognitive illusions, and can produce other outcomes, including inaccurate estimates and irrational decisions.
Second, as we saw in Section 5, some primary studies conflate different biases. Specifically, several studies mischaracterized a situation in which an actor became unduly preoccupied with some artifact, which distorted their behavior. Research on cognitive biases is particularly susceptible to this kind of confusion because a host of biases concern paying too much attention to some aspects of a situation while ignoring others.
This brings us to the third area of confusion. We noted above that the problem of using different terms for the same bias (e.g. confidence and over-confidence) is more serious than it appears. For example, when we get preoccupied with a number, it is called anchoring bias. But when we get preoccupied with a default option, it is called default bias. We call preoccupation with presentation the framing effect. Preoccupation with information that supports our beliefs is confirmation bias, while preoccupation with the typical use of an object is functional fixedness. Preoccupation with the first few items in a list is primacy, while preoccupation with current conditions is status quo bias. This seems suspicious. All of these biases concern one aspect of a situation receiving too much attention and subsequently having too much influence on our reasoning. Perhaps they are simply different manifestations of the same underlying cognitive phenomenon, or perhaps not. Investigating the neurophysiological underpinnings of cognitive biases is obviously beyond the scope of SE research.
This raises the fourth kind of confusion. Since the cognitive mechanisms for many biases are not understood, both relationships and distinctions between biases are suspect. For instance, arguing that framing causes fixation is questionable because both the 'cause' and 'effect' may be the same phenomenon in different guises. One primary study addressed this by organizing similar biases into "biasplexes" [P23] to help reason about debiasing.
We have three practical suggestions for addressing these areas of confusion:
1. If an SE researcher wishes to use a theory from a reference discipline, more extensive reading is needed. Applying theories from reference disciplines to new contexts is difficult. Skimming a single paper or Wikipedia article is rarely sufficient preparation.
2. If possible, collaborating directly with a cognitive psychologist will solve many of these problems. A psychologist is likely not only to have more background in the underlying theory but also more training in empirical methods. (We discussed aspects of this paper with three different professors of psychology to make sure that our interpretations are reasonable and our nomenclature correct.)
3. Faced with a manuscript applying a theory from a reference discipline to SE, editors should consider inviting a review from a member of the reference discipline – in this case, a cognitive psychologist.
Only seven of the primary studies involved a psychologist in the core research group. More could have consulted psychologists, but collecting that data would require a different kind of study.
6.3 Implications for Practice
The most important take-aways of this research for practitioners are as follows:
1. Cognitive biases are universal human phenomena. They affect everyone and likely have deleterious effects on all aspects of software development, regardless of industry, platform, programming language or project management philosophy.
2. Emails, meetings and other ways of merely creating awareness that our reasoning is biased have no effect on mitigating cognitive biases.
3. Individuals can be debiased, one subject at a time, through extensive (and expensive) training such as that received by meteorologists and actuaries. Most computer science and software engineering degrees likely do not provide this kind of training.
4. Alternatively, biases can be mitigated by redesigning the person-task system to inhibit the bias (e.g. what planning poker does for anchoring bias). Redesigning the person-task system is the most feasible option for addressing a cognitive bias in most organizations.
Some (at least potentially) effective debiasing interventions include:
• Planning poker to prevent anchoring bias in effort estimation [P40].
• Reference class forecasting or model-based forecasting to mitigate optimism in effort estimation [P47].
• Ontology-based documentation to mitigate availability bias [P14].
• Directly asking for disconfirmatory evidence to reduce confirmation bias [P34].
• Generating multiple, diverse problem formulations and solution candidates to mitigate framing effects [P55].
• Using confirmation bias metrics to select less-biased testers [P7].
• For security, employing independent penetration testing specialists to reduce confirmation bias [76].
• Designating or encouraging a devil's advocate in retrospective meetings to avoid groupthink [77].
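To illustrate the reference class forecasting intervention listed above, the sketch below is our own illustration (not taken from any primary study; names and numbers are hypothetical). The idea is that, rather than trusting a single expert point estimate, the estimate is corrected using the distribution of overrun ratios observed in a reference class of comparable past tasks.

def reference_class_forecast(point_estimate, past_estimates, past_actuals, percentile=0.8):
    # Correct a point estimate using the distribution of historical
    # actual/estimate ratios; a high percentile guards against optimism.
    ratios = sorted(actual / estimate for actual, estimate in zip(past_actuals, past_estimates))
    index = int(percentile * (len(ratios) - 1))
    return point_estimate * ratios[index]

# Hypothetical history: five comparable tasks, their estimates and actual effort (hours).
estimated = [10, 20, 15, 40, 25]
actual = [11, 26, 24, 52, 35]
print(reference_class_forecast(30, estimated, actual))  # 42.0 -> the raw 30-hour estimate, uplifted

Because the correction comes from an external distribution rather than from the estimator's own judgment, it sidesteps both the optimistic inside view and any anchor set by the initial figure.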
6.4 Implications for Education
As explained above, reasoning can be debiased through extensive training. What constitutes extensive training varies depending on the bias. For example, extensive training in effort estimation might consist of estimating several thousand user stories, with immediate, unambiguous feedback. This is probably impractical, especially considering the great variety of software projects and the domain knowledge necessary to estimate tasks accurately.
However, extensive training in design creativity might be more practical. For example, consider an assignment where students have to design a suitable architecture for a given teaching case using UML. Now suppose students are instead asked to design four alternative architectures, all of which should be reasonable and quite different from each other. Or, suppose students are asked to design just two architectures, one uncontroversial and one radical. If most design-related assignments throughout their degrees call for multiple alternatives, generating several design candidates will seem natural. Students are thus more likely to carry this practice into their professional work, much like the programming styles and testing practices taught today. Implementing all of the alternatives is unnecessary.
Whether and how to teach the theory of cognitive biases or particular debiasing techniques is another matter. Rather than teaching it as a stand-alone topic, it may be more efficient to explain many cognitive biases in the context of common problems and practices. For example, a common topic in software project management courses is effort estimation. The instructor bemoans poor effort estimation and explains how Scrum and Extreme Programming prescribe planning poker. The students might then do a planning poker exercise on their term project. This presents planning poker through a method lens. In contrast, the instructor could present it through a theory lens: first explaining the problem (which is readily understood), followed by the key causes of the problem: 1) anchoring bias; 2) optimism bias; 3) pressure from management
and stakeholders to deliver faster. Then, specific debiasing techniques – planning poker for anchoring bias and reference class forecasting for optimism bias – can be described by the instructor and practiced by the students. This approach gives students a more nuanced, evidence-based conception of both the problem and potential solutions.
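To make the idea of "immediate, unambiguous feedback" discussed above more concrete, here is a small classroom-style sketch of our own (not from the primary studies; the task data, names and feedback format are hypothetical). It confronts a student with their estimation error after every task and with their average bias at the end of the drill.

def estimation_drill(tasks, ask_estimate):
    # tasks: list of (description, actual_hours) pairs from a course repository.
    # ask_estimate: callable returning the student's estimate for a description.
    errors = []
    for description, actual in tasks:
        estimate = ask_estimate(description)
        error = (estimate - actual) / actual
        errors.append(error)
        print(f"{description}: you said {estimate}h, it took {actual}h "
              f"({error:+.0%} {'over' if error > 0 else 'under'}-estimate)")
    bias = sum(errors) / len(errors)
    print(f"Average bias over {len(errors)} tasks: {bias:+.0%} "
          f"({'optimistic' if bias < 0 else 'pessimistic'} on the whole)")

# Example run with canned estimates instead of interactive input.
sample = [("Add login form", 6), ("Fix date-parsing bug", 3), ("Write REST client", 12)]
canned = iter([4, 2, 8])
estimation_drill(sample, lambda _description: next(canned))

The same loop could be run repeatedly across a term, so that the feedback is frequent enough to count as deliberate practice rather than a one-off awareness exercise.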
6.5 Limitations
The findings of this research should be considered in light of several limitations. Despite using a very broad search string in the major online databases, employing multi-level retrospective snowballing and analyzing authors' publication lists, our search might have missed relevant studies concerning cognitive biases in SE; for instance, studies in other domains, including sociology, management science and psychology, that investigate cognitive biases in the context of certain SE processes. It is also possible that some relevant studies were not indexed in the databases we used.
We excluded studies that did not explicitly focus on cognitive biases; for instance, studies that merely mention the term cognitive bias but do not make cognitive biases the focus of their central investigation. We also excluded studies that treated specific effects or errors caused by cognitive biases as cognitive biases themselves; for instance, studies that considered statistical estimation errors due to optimism bias exclusively as 'statistical bias' or 'estimation bias'. Subjective decisions about the inclusion or exclusion of a study were mitigated by involving at least two researchers in each phase of analysis. The areas of confusion discussed in Section 6.2 particularly hinder analysis by forcing extra interpretation.
Finally, our recommendations for debiasing SE professionals are primarily based on suggestions by, not empirical results of, the primary studies. We simply have insufficient empirical research on debiasing to attempt evidence-based guidelines at this stage.

6.6 Future research
Section 6.1 suggested four ways of improving research on cognitive biases in SE: conducting more qualitative and multimethodological research; integrating results better across studies; investigating neglected areas; and addressing confusion around names. We also highlight the benefits of collaborating with psychologists. More generally, this study explores the application of a broad psychological theory to software engineering. Developing an evidence-based cumulative body of knowledge necessitates more such studies focusing on different theoretical foundations for SE [78].
7 CONCLUSION
This study provides a comprehensive view of research on cognitive biases in software engineering. It identifies 67 primary studies exploring 47 cognitive biases. Constant and sustained interest in this area is clear, especially in software project management. Substantial opportunities for impactful research remain, not only in other areas (e.g. design, testing and DevOps) but also concerning other biases. Some of the key findings of this paper are:
• Although SE research addresses multiple antecedents and effects of cognitive biases, the SE literature lacks investigations of antecedents and effects that are supported by credible empirical evidence in human psychology research (e.g. time pressure as an antecedent of confirmation bias). This highlights the neglect of certain antecedents that are rooted in human psychology and critical in SE practice.
• This study found debiasing approaches for only 10 cognitive biases, and most of these approaches were situational and context-dependent in nature.
• The debiasing approaches were not based on credible theoretical knowledge from the human psychology domain and failed to target specific antecedents (or groups of antecedents) of the biases being addressed. Similarly, in some cases the proposed debiasing approaches failed to produce effective results. Surprisingly, this study found no evidence of debiasing approaches for more robust biases such as fixation and framing effects.
• Although the investigations reported in the primary studies mainly used experiment-based research approaches, this study found no replications or families of experiments that further test the effectiveness of debiasing techniques, the effects of biases, or the antecedents of biases in different contexts or situations during a software project's lifetime.
• Finally, we noticed that the primary studies were largely disconnected; that is, they did not build on previously developed knowledge and were simply standalone investigations. As a result, studies investigating the same cognitive biases are mostly unrelated and fail to agree on a common understanding of these phenomena in SE.
This study contributes primarily by providing a comprehensive view of the antecedents of cognitive biases in SE, their effects, and the debiasing techniques deployed to counter their harmful effects in SE practice. Although much evidence-based research has studied the antecedents and effects of cognitive biases in SE practice, there is an acute shortage of empirical research investigating the effectiveness of debiasing practices. In spite of the literature recognizing 47 cognitive biases in play, most mainstream research concentrates on only 10-12 biases. This study also highlights widespread areas of confusion concerning psychological nomenclature, distorted conceptualizations of certain cognitive biases based on different situational aspects, truncated understanding of the underlying cognitive mechanisms that trigger biases, and research using different terms for the same cognitive biases. Finally, this study presents the major implications of cognitive biases for SE research, practice and education, contributing to cumulative knowledge building.
In summary, cognitive biases help to explain many common problems in SE. While they are robust against simply raising awareness, professionals can often be debiased through simple interventions. The better we understand the theoretical foundations of common problems and practices, the easier it will be to develop effective interventions to mitigate biases and thereby alleviate the corresponding problems. Ultimately, we hope that this literature review will encourage many SE researchers to further explore the phenomena of cognitive biases in SE and will thus serve as a basis for future research.
References [1]
[2]
[3]
[4]
[5]
[6]
[7]
[8]
[9]
[10]
[11] [12]
[13]
[14]
M. G. Haselton, D. Nettle, D. R. Murray, M. G. Haselton, D. Nettle, and D. R. Murray, “The Evolution of Cognitive Bias,” in The Handbook of Evolutionary Psychology, Hoboken, NJ, USA: John Wiley {&} Sons, Inc., 2015, pp. 1–20. D. Arnott, “Cognitive biases and decision support systems development: A design science approach,” Inf. Syst. J., vol. 16, no. 1, pp. 55–78, 2006. R. Mohanani, P. Ralph, and B. Shreeve, “Requirements fixation,” in Proceedings of the 36th International Conference on Software Engineering - ICSE 2014, 2014, pp. 895–906. A. Tang and Antony, “Software designers, are you biased?,” in Proceeding of the 6th international workshop on SHAring and Reusing architectural Knowledge - SHARK ’11, 2011, p. 1. C. Mair and M. Shepperd, “Human judgement and software metrics,” in Proceeding of the 2nd international workshop on Emerging trends in software metrics - WETSoM ’11, 2011, p. 81. L. M. Leventhal, B. E. Teasley, and D. S. Rohlman, “Analyses of factors related to positive test bias in software testing,” Int. J. Hum. Comput. Stud., vol. 41, no. 5, pp. 717–749, Nov. 1994. G. Calikli and A. Bener, “Empirical analyses of the factors affecting confirmation bias and the effects of confirmation bias on software developer/tester performance,” in Proceedings of the 6th International Conference on Predictive Models in Software Engineering - PROMISE ’10, 2010, p. 1. G. J. Browne and V. Ramesh, “Improving information requirements determination: a cognitive perspective,” Inf. {&} Manag., vol. 39, no. 8, pp. 625–645, 2002. N. Chotisarn and N. Prompoon, “Forecasting software damage rate from cognitive bias in software requirements gathering and specification process,” in 2013 IEEE Third International Conference on Information Science and Technology (ICIST), 2013, pp. 951–956. M. Jorgensen and S. Grimstad, “Software Development Estimation Biases: The Role of Interdependence,” IEEE Trans. Softw. Eng., vol. 38, no. 3, pp. 677–693, May 2012. V. A. Gheorghiu, G. Molz, and R. Pohl, “Suggestions and illusions,” 2004. P. Ralph, “Toward a Theory of Debiasing Software Development,” Springer, Berlin, Heidelberg, 2011, pp. 92– 105. M. Fleischmann, M. Amirpur, A. Benlian, and T. Hess, “Cognitive Biases in Information Systems Research: a Scientometric Analysis,” ECIS 2014 Proc., no. 26, pp. 1–21, 2014. A. P. Sage and W. B. Rouse, Handbook of systems engineering and management. J. Wiley {&} Sons, 1999.
[15] [16]
[17] [18]
[19]
[20]
[21]
[22]
[23]
[24]
[25]
[26]
[27]
[28] [29]
[30] [31] [32] [33] [34]
[35]
C. E. Zsambok and G. Klein, Naturalistic Decision Making. Taylor and Francis, 2014. C. R. B. de Souza, H. Sharp, J. Singer, L.-T. Cheng, and G. Venolia, “Guest Editors’ Introduction: Cooperative and Human Aspects of Software Engineering,” IEEE Softw., vol. 26, no. 6, pp. 17–19, Nov. 2009. G. M. Weinberg, The psychology of computer programming. Dorset House Pub, 1998. D. Arnott and G. Pervan, “Eight key issues for the decision support systems discipline,” Decis. Support Syst., vol. 44, no. 3, pp. 657–672, 2008. A. P. Snow, M. Keil, and L. Wallace, “The effects of optimistic and pessimistic biasing on software project status reporting,” Inf. {&} Manag., vol. 44, no. 2, pp. 130–141, 2007. J. Parsons and C. Saunders, “Cognitive heuristics in software engineering applying and extending anchoring and adjustment to artifact reuse,” IEEE Trans. Softw. Eng., vol. 30, no. 12, pp. 873–888, Dec. 2004. W. Stacy and J. MacMillan, “Cognitive bias in software engineering,” Commun. ACM, vol. 38, no. 6, pp. 57–63, Jun. 1995. T. Hall, N. Baddoo, S. Beecham, H. Robinson, and H. Sharp, “A systematic review of theory use in studies investigating the motivations of software engineers,” ACM Trans. Softw. Eng. Methodol., vol. 18, no. 3, pp. 1–29, May 2009. S. S. J. O. Cruz, F. Q. B. da Silva, C. V. F. Monteiro, C. F. Santos, and M. T. dos Santos, “Personality in software engineering: preliminary findings from a systematic literature review,” in 15th Annual Conference on Evaluation {&} Assessment in Software Engineering (EASE 2011), 2011, pp. 1– 10. D. E. Leidner and T. Kayworth, “Review: a review of culture in information systems research: toward a theory of information technology culture conflict,” MIS Q., vol. 30, no. 2, pp. 357–399, 2006. P. Lenberg, R. Feldt, and L. G. Wallgren, “Behavioral software engineering: A definition and systematic literature review,” J. Syst. Softw., vol. 107, pp. 15–37, 2015. A. Tversky and D. Kahneman, “Judgment under Uncertainty: Heuristics and Biases,” in Utility, Probability, and Human Decision Making, Dordrecht: Springer Netherlands, 1975, pp. 141–162. T. (Ed) Gilovich, D. (Ed) Griffin, and D. (Ed) Kahneman, Heuristics and Biases. Cambridge: Cambridge University Press, 2002. G. B. Chapman and A. S. Elstein, “Cognitive processes and biases in medical decision making.” 2000. A. Shleifer, “Psychologists at the Gate: A Review of Daniel Kahneman’s Thinking, Fast and Slow,” J. Econ. Lit., vol. 50, no. 4, pp. 1080–1091, Dec. 2012. R. Pohl, Cognitive illusions : a handbook on fallacies and biases in thinking, judgement and memory. Psychology Press, 2004. E. J. Langer and E. J., “The illusion of control.,” J. Pers. Soc. Psychol., vol. 32, no. 2, pp. 311–328, 1975. I. L. Janis, Groupthink : psychological studies of policy decisions and fiascoes. Houghton Mifflin, 1982. A. Wilke and R. Mata, “Cognitive Bias,” in Encyclopedia of Human Behavior, Elsevier, 2012, pp. 531–535. G. Calikli and A. Bener, “Preliminary analysis of the effects of confirmation bias on software defect density,” in Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement - ESEM ’10, 2010, p. 1. E. Kutsch, H. Maylor, B. Weyer, and J. Lupson, “Performers,
[36]
[37] [38]
[39]
[40] [41]
[42] [43]
[44]
[45]
[46]
[47]
[48]
[49]
[50]
[51]
[52] [53]
[54]
trackers, lemmings and the lost: Sustained false optimism in forecasting project outcomes — Evidence from a quasiexperiment,” Int. J. Proj. Manag., vol. 29, no. 8, pp. 1070–1081, 2011. B. Flyvbjerg and Bent, “From Nobel Prize to Project Management: Getting Risks Right,” eprint arXiv:1302.3642, Feb. 2013. Stanovich and K. E., What intelligence tests miss: The psychology of rational thought. Yale University Press, 2009. P. Ralph and R. Mohanani, “Is requirements engineering inherently counterproductive?,” Proceedings of the Fifth International Workshop on Twin Peaks of Requirements and Architecture. IEEE Press, pp. 20–23, 2015. B. Fischoff, “Debiasing,” in Judgment under Uncertainty: Heuristics and Biases, D. Kahneman, P. Slovic, and A. Tversky, Eds. Cambridge, USA: Cambridge University Press, 1982. R. P. Larrick, “Debiasing,” in Blackwell handbook of judgment and decision making, John Wiley {&} Sons, 2008. E. Pronin, D. Y. Lin, and L. Ross, “The Bias Blind Spot: Perceptions of Bias in Self Versus Others,” Personal. Soc. Psychol. Bull., vol. 28, no. 3, pp. 369–381, Mar. 2002. E. Pronin, “Perception and misperception of bias in human judgment,” Trends Cogn. Sci., vol. 11, no. 1, pp. 37–43, 2007. N. C. Haugen, “An empirical study of using planning poker for user story estimation,” in Proceedings - AGILE Conference, 2006, 2006. D. Budgen, D. Budgen, M. Turner, P. Brereton, and B. Kitchenham, “Using Mapping Studies in Software Engineering.” K. Petersen, R. Feldt, S. Mujtaba, and M. Mattsson, “Systematic mapping studies in software engineering,” EASE’08 Proc. 12th Int. Conf. Eval. Assess. Softw. Eng., pp. 68– 77, 2008. B. Kitchenham and S. Charters, “Guidelines for performing Systematic Literature Reviews in Software Engineering,” Engineering, 2007. G. H. Travassos, P. S. M. dos Santos, P. G. Mian, A. C. D. Neto, and J. Biolchini, “An Environment to Support Large Scale Experimentation in Software Engineering,” in 13th IEEE International Conference on Engineering of Complex Computer Systems (iceccs 2008), 2008, pp. 193–202. B. Kitchenham, “What’s up with software metrics? – A preliminary mapping study,” J. Syst. Softw., vol. 83, no. 1, pp. 37–51, 2010. B. Kitchenham, R. Pretorius, D. Budgen, O. Pearl Brereton, M. Turner, M. Niazi, and S. Linkman, “Systematic literature reviews in software engineering – A tertiary study,” Inf. Softw. Technol., vol. 52, no. 8, pp. 792–805, 2010. D. Budgen, A. J. Burn, O. P. Brereton, B. A. Kitchenham, and R. Pretorius, “Empirical evidence about the UML: a systematic literature review,” Softw. Pract. Exp., vol. 41, no. 4, pp. 363–392, Apr. 2011. F. W. Neiva, J. M. N. David, R. Braga, and F. Campos, “Towards pragmatic interoperability to support collaboration: A systematic review and mapping of the literature,” Inf. Softw. Technol., vol. 72, pp. 137–150, 2016. V. R. Basili, G. Caldiera, and D. H. Rombach, “The Goal Question Metric Approach,” Encycl. Softw. Eng., vol. 1, 1994. I. Steinmacher, A. P. Chaves, and M. A. Gerosa, “Awareness Support in Distributed Software Development: A Systematic Review and Mapping of the Literature,” Comput. Support. Coop. Work, vol. 22, no. 2–3, pp. 113–158, Apr. 2013. P. A. Laplante, What every engineer should know about software engineering. Taylor {&} Francis, 2007.
[55]
[56]
[57]
[58] [59]
[60] [61]
[62]
[63]
[64]
[65]
[66] [67] [68]
[69] [70]
[71]
[72]
[73]
[74]
[75]
J. Cohen and Jacob, “Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit.,” Psychol. Bull., vol. 70, no. 4, pp. 213–220, 1968. T. Dyba, T. Dingsoyr, and G. K. Hanssen, “Applying Systematic Reviews to Diverse Study Types: An Experience Report,” in First International Symposium on Empirical Software Engineering and Measurement (ESEM 2007), 2007, pp. 225–234. M. Ivarsson and T. Gorschek, “A method for evaluating rigor and industrial relevance of technology evaluations,” Empir. Softw. Eng., vol. 16, no. 3, pp. 365–395, Jun. 2011. B. Kitchenham, “Procedures for Performing Systematic Reviews,” 2004. A. Abran, J. W. Moore, P. Bourque, R. Dupuis, and L. L. Tripp, Guide to the Software Engineering Body of Knowledge, vol. 19759, no. 6. 2004. VandenBos and G. R. (Ed), APA Dictionary of Psychology. American Psychological Association, 2007. R. A. Baron, “The cognitive perspective: a valuable tool for answering entrepreneurship’s basic ‘why’ questions,” J. Bus. Ventur., vol. 19, no. 2, pp. 221–239, 2004. R. M. Poses and M. Anthony, “Availability, Wishful Thinking, and Physicians’ Diagnostic Judgments for Patients with Suspected Bacteremia,” Med. Decis. Mak., vol. 11, no. 3, pp. 159–168, Aug. 1991. M. Nurminen, P. Suominen, S. Äyrämö, and T. Kärkkäinen, “Applying Semiautomatic Generation of Conceptual Models to Decision Support Systems Domain,” Software Engineering. ACTA Press. M. R. Leary, “Hindsight Distortion and the 1980 Presidential Election,” Personal. Soc. Psychol. Bull., vol. 8, no. 2, pp. 257– 263, Jun. 1982. J. A. O. G. da Cunha, F. Q. B. da Silva, H. P. de Moura, and F. J. S. Vasconcellos, “Decision-making in software project management,” in Proceedings of the 9th International Workshop on Cooperative and Human Aspects of Software Engineering CHASE ’16, 2016, pp. 26–32. C. N. Parkinson and O. Lancaster, “Parkinson’s law, or The pursuit of progress,” 2011. E. Long-Crowell, “The Halo Effect: Definition, Advantages & Disadvantages,” in Psychology, 2015, p. 104. H. R. Arkes and P. Ayton, “The sunk cost and Concorde effects: Are humans less rational than lower animals?,” Psychol. Bull., vol. 125, no. 5, p. 591, 1999. R. J. Wirfs-Brock, “Giving Design Advice,” IEEE Softw., vol. 24, no. 4, pp. 13–15, Jul. 2007. Y. Bar-Haim, D. Lamy, L. Pergamin, M. J. BakermansKranenburg, and M. H. van IJzendoorn, “Threat-related attentional bias in anxious and nonanxious individuals: A meta-analytic study.,” Psychol. Bull., vol. 133, no. 1, pp. 1–24, Jan. 2007. W. Samuelson and R. Zeckhauser, “Status quo bias in decision making,” J. Risk Uncertain., vol. 1, no. 1, pp. 7–59, Mar. 1988. P. Ralph and P. Kelly, “The dimensions of software engineering success,” in Proceedings of the 36th International Conference on Software Engineering - ICSE 2014, 2014, pp. 24– 35. J. Kruger and D. Dunning, “Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments.,” J. Pers. Soc. Psychol., 1999. K. Koffka, “Principles of gestalt psychology Chapter I Why Psychology? An introductory question,” Princ. Gestalt Psychol., pp. 1–14, 1935. V. Gheorghiu, G. Molz, and R. Pohl, “Suggestion and
[76] [77]
[78]
[79]
[80] [81]
[82] [83] [84]
[85]
[86]
[87]
[88] [89]
[90]
[91] [92] [93]
[94]
[95]
[96]
Illusion,” in Cognitive illusions : a handbook on fallacies and biases in thinking, judgement and memory, R. Pohl, Ed. Psychology Press, 2004, pp. 399–421. B. Arkin, S. Stender, and G. McGraw, “Software penetration testing,” IEEE Security and Privacy. 2005. C. MacDougall and F. Baum, “The Devil’s Advocate: A Strategy to Avoid Groupthink and Stimulate Discussion in Focus Groups,” Qual. Health Res., 1997. P. Ralph, M. Chiasson, and H. Kelley, “Social theory for software engineering research,” in ACM International Conference Proceeding Series, 2016. M. E. Oswald and S. Grosjean, “Confirmation Bias,” in Cognitive illusions: A handbook on fallacies and biases in thinking, Judgement and Memory, Hove, UK: Psychology Press, 2004, pp. 79–96. Plous and Scott, The psychology of judgment and decision making. Mcgraw-Hill Book Company, 1993. I. Watson, A. Basden, and P. Brandon, “The client-centred approach: expert system development,” Expert Syst., vol. 9, no. 4, pp. 181–188, Nov. 1992. A. Colman, A dictionary of psychology. Oxford, UK: Oxford University Press, 2009. D. G. Jansson and S. M. Smith, “Design fixation,” Des. Stud., vol. 12, no. 1, pp. 3–11, Jan. 1991. H. Schwartz, “Predictably Irrational: the hidden forces that shape our decisions,” Bus. Econ., vol. 43, no. 4, pp. 69–72, 2008. E. D. Smith, Y. J. Son, M. Piattelli-Palmarini, and A. Terry Bahill, “Ameliorating mental mistakes in tradeoff studies,” Syst. Eng., vol. 10, no. 3, pp. 222–240, 2007. A. Madni, “The Role of Human Factors in Expert Systems Design and Acceptance,” Hum. Factors, vol. 30, no. 4, pp. 395– 414, 1988. R. Bornstein and C. Carver-Lemley, “Mere Exposure Effect,” in Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, 2004, pp. 215–234. B. Robinson-Riegler and G. L. Robinson-Riegler, Cognitive psychology: Applying the science of the Mind. Pearson, 2004. L. J. Sanna and N. Schwarz, “Integrating temporal biases: The interplay of focal thoughts and accessibility experiences,” Psychol. Sci., vol. 15, no. 7, pp. 474–481, Jul. 2004. H. Omer and N. Alon, “The continuity principle: A unified approach to disaster and trauma,” Am. J. Community Psychol., vol. 22, no. 2, pp. 273–287, Apr. 1994. L. A. Granka, “The Politics of Search: A Decade Retrospective,” Inf. Soc., vol. 26, no. 5, pp. 364–374, Sep. 2010. B. Shore, “Systematic biases and culture in project failures,” Proj. Manag. J., vol. 39, no. 4, pp. 5–16, Nov. 2008. D. T. Gilbert, E. C. Pinel, T. D. Wilson, S. J. Blumberg, and T. P. Wheatley, “Immune neglect: A source of durability bias in affective forecasting.,” J. Pers. Soc. Psychol., vol. 75, no. 3, pp. 617–638, 1998. J. T. Jost and M. R. Banaji, “The role of stereotyping in system-justification and the production of false consciousness,” Br. J. Soc. Psychol., vol. 33, no. 1, pp. 1–27, Mar. 1994. E. Bozdag, “Bias in algorithmic filtering and personalization,” Ethics Inf. Technol., vol. 15, no. 3, pp. 209– 227, Sep. 2013. C. R. Kenley and D. C. Armstead, “Discounting models for long-term decision making,” Syst. Eng., vol. 7, no. 1, pp. 13– 24, 2004.
[97] [98]
[99]
N. Taylor, “Making actuaries less human: Lessons from behavioural finance,” Staple Inn Actuar. Soc. Meet., 2000. B. R. Forer, “The fallacy of personal validation: a classroom demonstration of gullibility,” J. Abnorm. Soc. Psychol., vol. 44, no. 1, pp. 118–123, 1949. T. S. Kuhn, The Structure of Scientific Revolutions, 2nd enl. ed. University of Chicago Press.
APPENDIX A: DEFINITIONS OF COGNITIVE BIASES INVESTIGATED IN SE Cognitive bias
Definition
Primary studies
Anchoring (and adjustment) bias
“The tendency, in forming perceptions or making quantitative judgments of some entity under conditions of uncertainty, to give excessive weight to the initial starting value (or anchor), based on the first received information or one’s initial judgment, and not to modify this anchor sufficiently in light of later information” [60, p. 51]. Adjustment bias is a psychological tendency that influences the way people intuitively assess probabilities [26]. The tendency of our perception to be affected by our recurring thoughts [70]. Availability bias refers to a tendency of being influenced by the information that is easy to recall and by the information that is recent or widely publicized [62]. “The tendency for large numbers of individuals, in social and sometimes political situations, to align themselves or their stated opinions with the majority opinion as they perceive it” [60, p. 101]. “The tendency to maintain a belief even after the information that originally gave rise to it has been refuted or otherwise shown to be inaccurate” [60, p. 112]. The tendency to overestimate one’s own skill, accuracy and control over one’s self and environment [63]. The tendency to search for, interpret, focus on and remember information in a way that confirms one's preconceptions [79].
P1, P2, P8, P11, P12, P17, P18, P19, P21, P23, P32, P36, P62, P55, P49, P44, P35, P59, P58, P40, P39, P53, P51, P45, P67, P66, P65
Attentional bias Availability bias Bandwagon effect
Belief perseverance (Over-)confidence bias Confirmation bias
Contrast effect Default bias End-user bias Endowment effect Fixation Framing effect Halo effect Hindsight bias
Hyperbolic discounting IKEA effect / I-designed-it-myself effect
The enhancement or reduction of a certain perception's stimuli when compared with a recently observed, contrasting object [80]. The tendency to choose preselected options over superior, unselected options [71]. End-user bias occurs when there is a mismatch between the developer and end-user’s conceptualizations of the subject matter domain [81]. “The tendency to demand much more to give up an object than one is willing to pay to acquire it” [82]. The tendency to disproportionately focus on one aspect of an event, object, or situation, especially self-imposed or imaginary obstacles [83]. “The tendency to give different responses to problems that have surface dissimilarities but that are really formally identical” [37] The halo effect can be defined as the tendency to use global evaluations to make judgments about specific traits [67]. When people know the actual outcome of a process, they tend to regard that outcome as having been fairly predictable all along – or at least more predictable than they would have judged before knowing the outcome [64]. The tendency of people to prefer options that offer smaller rewards with more immediate payoff to options with larger rewards promised for future [69]. The IKEA effect depicts how object valuation increases when that object has been self-created [63].
P30, P42 P8, P11, P12, P14, P18, P19, P24, P31, P41, P45, P39, P62, P34, P17, P66, P67, P65 P22, P23
P23, P29 P17, P21, P43, P4, P39, P47 P5, P6, P7, P8, P9, P10, P11, P13, P14, P15, P17, P18, P19, P21, P22, P28, P32, P41, P45, P46, P39, P57, P38, P34, P66 P38 P23 P27 P26 P20 P20, P21, P66 P37, P65 P59, P62, P51, P65
P38 P26
Information bias Infrastructure bias Invincibility bias Knowledge engineer Mere exposure effect Miserly information processing Misleading information Neglect of probability Normalcy effects (Over-)optimism bias Parkinson’s Law effect Pessimism bias Planning fallacy Popularity bias Primacy and recency effects Representativeness Selective perception Semantic fallacy
Semmelweis reflex Sequence impact Situation bias Status quo bias / System justification Subject-matter expert
The strong tendency to ask for more additional information, especially in times of uncertainty [68]. The location and availability of preexisting infrastructure such as roads and telecommunication facilities influences future economic and social development [84]. The tendency to over trust one’s own abilities [84]. As the knowledge-engineer (KE) becomes familiar with the problem domain, a mental framework takes shape resulting into a mismatch [85]. “Increased liking for a stimulus that follows repeated, unreinforced exposure to that stimulus” [86, p. 31]. The tendency to avoid deep or complex information processing [37].
P11, P16, P27, P29, P38
The tendency to blindly follow the provided information without being able to self-evaluate [87]. The tendency to discount any risks that people perceive as being less than certain [88]. Systematically “underestimating the probability or extent of expected disruption” during a disaster” [61]. The tendency to be over-optimistic, overestimating favorable and pleasing outcomes [88].
P22, P59, P51
Human tendency to procrastinate the execution of activities until the end date originally agreed [66]. The tendency to report the status of any task in worse shape than the actual reality [19]. The tendency to underestimate task-completion time [89]. The tendency to choose options that are socially or empirically more popular than other options [90]. The tendency of people to remember the first and the last few items more than those in the middle [69]. The tendency in which a person reduces many inferential tasks to simple similarity judgements [26]. The tendency to perceive events differently by different people [80]. The tendency to ignore the underlying semantics depicted by the information models and focus mainly on the structural constraints given, even for models that had obvious contradiction between the structural constraints and the underlying semantics [26]. Unthinking rejection of new information that contradicts established beliefs or paradigms [91]. Sequence impact is the tendency for people to overestimate the length or the intensity of future feeling states [92]. The human tendency to follow a previous course of action, only because it was used previously [2]. The tendency to irrationally prefer, maintain and defend current conditions, operating procedures, status, or social order [93]. The tendency of a decision-maker to influence the final outcome of the decision process, typically at higher levels in management hierarchy [62].
P39 P39 P27 P23, P65, P67 P22
P38 P23 P48, P59, P47, P23, P25, P33, P60, P61, P56, P64, P63, P52, P53, P50, P54 P65 P33, P59 P47, P65 P3 P38 P8, P31, P34, P62, P55, P45 P62 P42
P23 P59, P51 P21 P22, P23, P39 P27
Sunk cost fallacy
Technical bias
Time based bias Valence effect Validation bias Validity effect
Wishful thinking
The sunk-cost fallacy is a decision-making bias that reflects the tendency to invest more future resources in a situation in which a prior investment has been made, as compared with a similar situation in which a prior investment has not been made [68]. The bias caused by third party manipulation or popularity of certain individual factors such as personal judgments, organizational factors such as company policies and external factors such as advertiser requests [94]. Time-based bias involves a reduction of attention via short-term thinking and hyperbolic discounting errors [95]. The tendency to give undue weight to the degree to which an outcome is considered as positive or negative when estimating the probability of its occurrence [96]. The tendency in which a person will consider a statement or another piece of information to be correct if it has any personal meaning or significance to them [97]. “The validity effect occurs when the mere repetition of information affects the perceived truthfulness of that information the validity effect occurs similarly for statements of fact that are true and false in origin, as well as political or opinion statements” [98, p. 211]. The tendency to underestimate the likelihood of a negative outcome and vice versa [62].
P65
P3
P32, P38 P23 P27 P23
P23, P51, P59
APPENDIX B: LIST OF PRIMARY STUDIES
[P1] G. Allen and J. Parsons, "Is query reuse potentially harmful? Anchoring and adjustment in adapting existing database queries", Information Systems Research, vol. 21, no. 1, pp. 56-77, 2010.
[P2] G. Allen and J. Parsons, "A little help can be a bad thing: Anchoring and adjustment in adaptive query reuse", Proceedings of ICIS 2006, vol. 45, 2006.
[P3] E. Bozdag, "Bias in algorithmic filtering and personalization", Ethics and Information Technology, vol. 15, no. 3, pp. 209-227, 2013.
[P4] G. J. Browne and V. Ramesh, "Improving information requirements determination: a cognitive perspective", Information & Management, vol. 39, no. 8, pp. 625-645, 2002.
[P5] G. Calikli and A. Bener, "Empirical analyses of the factors affecting confirmation bias and the effects of confirmation bias on software developer/tester performance", In Proceedings of the 6th International Conference on Predictive Models in Software Engineering, p. 10, September 2010.
[P6] G. Calikli, A. Bener, and B. Arslan, "An analysis of the effects of company culture, education and experience on confirmation bias levels of software developers and testers", In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering, vol. 2, pp. 187-190, May 2010.
[P7] G. Calikli, A. Bener, T. Aytac, and O. Bozcan, "Towards a metric suite proposal to quantify confirmation biases of developers", In 2013 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, pp. 363-372, October 2013.
[P8] G. Calikli, A. Bener, B. Caglayan, and A. T. Misirli, "Modeling human aspects to enhance software quality management", In Proceedings of the 33rd International Conference on Information Systems, 2012.
[P9] G. Calikli and A. B. Bener, "Influence of confirmation biases of developers on software quality: an empirical study", Software Quality Journal, vol. 21, no. 2, pp. 377-416, 2013.
[P10] G. Calikli and A. Bener, "An algorithmic approach to missing data problem in modeling human aspects in software development", In Proceedings of the 9th International Conference on Predictive Models in Software Engineering, October 2013.
[P11] N. Chotisarn and N. Prompoon, "Forecasting software damage rate from cognitive bias in software requirements gathering and specification process", In 2013 IEEE Third International Conference on Information Science and Technology (ICIST), pp. 951-956, March 2013.
[P12] N. Chotisarn and N. Prompoon, "Predicting software damage rate from cognitive bias in software design process", In Proceedings of the 2013 International Conference on Information, Business and Education Technology (ICIBET 2013), Atlantis Press, March 2013.
[P13] P. Conroy and P. Kruchten, "Performance norms: An approach to rework reduction in software development", In 25th IEEE Canadian Conference on Electrical & Computer Engineering (CCECE), pp. 1-6, 2012.
[P14] K. A. de Graaf, P. Liang, A. Tang, and H. van Vliet, "The impact of prior knowledge on searching in software documentation", In Proceedings of the 2014 ACM Symposium on Document Engineering, pp. 189-198, September 2014.
[P15] J. DeFranco-Tommarello and F. P. Deek, "Collaborative problem solving and groupware for software development", Information Systems Management, vol. 21, no. 1, pp. 67-80, 2004. DOI: 10.1201/1078/43877.21.1.20041201/78987.7
[P16] I. Hadar, "When intuition and logic clash: The case of the object-oriented paradigm", Science of Computer Programming, vol. 78, no. 9, pp. 1407-1426, 2013.
[P17] R. Jain, J. Muro, and K. Mohan, "A cognitive perspective on pair programming", In Proceedings of AMCIS, p. 444, 2006.
[P18] K. Mohan and R. Jain, "Using traceability to mitigate cognitive biases in software development", Communications of the ACM, vol. 51, no. 9, pp. 110-114, 2008.
[P19] K. Mohan, N. Kumar, and R. Benbunan-Fich, "Examining communication media selection and information processing in software development traceability: An empirical investigation", IEEE Transactions on Professional Communication, vol. 52, no. 1, pp. 17-39, 2009.
[P20] R. Mohanani, P. Ralph, and B. Shreeve, "Requirements fixation", In Proceedings of the 36th International Conference on Software Engineering, pp. 895-906, May 2014.
[P21] M. Nurminen, P. Suominen, S. Äyrämö, and T. Kärkkäinen, "Applying semiautomatic generation of conceptual models to decision support systems domain", In Proceedings of the IASTED International Conference on Software Engineering, ACTA Press, 2009.
[P22] P. Ralph, "Possible core theories for software engineering", In 2nd Workshop on a General Theory of Software Engineering, pp. 35-38, May 2013.
[P23] P. Ralph, "Toward a theory of debiasing software development", In EuroSymposium on Systems Analysis and Design, Springer Berlin Heidelberg, pp. 92-105, 2011.
[P24] J. E. Robbins, D. M. Hilbert, and D. F. Redmiles, "Software architecture critics in Argo", In Proceedings of the 3rd International Conference on Intelligent User Interfaces, pp. 141-144, January 1998.
[P25] E. Shalev, M. Keil, J. S. Lee, and Y. Ganzach, "Optimism bias in managing IT project risks: A construal level theory perspective", In Proceedings of the 22nd European Conference on Information Systems, 2014.
[P26] O. Shmueli, N. Pliskin, and L. Fink, "Explaining over-requirement in software development projects: an experimental investigation of behavioral effects", International Journal of Project Management, vol. 33, no. 2, pp. 380-394, 2015.
[P27] B. Shore, "Bias in the development and use of an expert system: Implications for life cycle costs", Industrial Management & Data Systems, vol. 96, no. 4, pp. 18-26, 1996.
[P28] F. Shull, "Engineering values: From architecture games to agile requirements", IEEE Software, vol. 30, no. 2, pp. 2-6, 2013.
[P29] F. Shull, "Our best hope", IEEE Software, vol. 31, no. 4, pp. 4-8, 2014.
[P30] K. Siau, Y. Wand, and I. Benbasat, "The relative importance of structural constraints and surface semantics in information modeling", Information Systems, vol. 22, no. 2, pp. 155-170, 1997.
[P31] B. G. Silverman, "Critiquing human judgment using knowledge-acquisition systems", AI Magazine, vol. 11, no. 3, p. 60, 1990.
[P32] E. D. Smith and B. A. Terry, "Attribute substitution in systems engineering", Systems Engineering, vol. 13, no. 2, pp. 130-148, 2010.
[P33] A. P. Snow, M. Keil, and L. Wallace, "The effects of optimistic and pessimistic biasing on software project status reporting", Information & Management, vol. 44, no. 2, pp. 130-141, 2007.
[P34] W. Stacy and J. MacMillan, "Cognitive bias in software engineering", Communications of the ACM, vol. 38, no. 6, pp. 57-63, 1995.
[P35] A. Tang, "Software designers, are you biased?", In Proceedings of the 6th International Workshop on Sharing and Reusing Architectural Knowledge, pp. 1-8, May 2011.
[P36] A. Tang and M. F. Lau, "Software architecture review by association", Journal of Systems and Software, vol. 88, pp. 87-101, 2014.
[P37] P. Tobias and D. S. Spiegel, "Is design the preeminent protagonist in user experience?", Ubiquity, May 2009.
[P38] R. J. Wirfs-Brock, "Giving design advice", IEEE Software, vol. 24, no. 4, pp. 13-15, 2007.
[P39] E. D. Smith, Y. J. Son, M. Piattelli-Palmarini, and A. T. Bahill, "Ameliorating mental mistakes in tradeoff studies", Systems Engineering, vol. 10, no. 3, 2007.
[P40] N. C. Haugen, "An empirical study of using planning poker for user story estimation", In AGILE 2006 (AGILE'06), 9 pp., July 2006.
[P41] A. J. Ko and B. A. Myers, "A framework and methodology for studying the causes of software errors in programming systems", Journal of Visual Languages & Computing, vol. 16, no. 1, pp. 41-84, 2005.
[P42] K. Siau, Y. Wand, and I. Benbasat, "When parents need not have children—Cognitive biases in information modeling", In International Conference on Advanced Information Systems Engineering, pp. 402-420, May 1996.
[P43] S. Chakraborty, S. Sarker, and S. Sarker, "An exploration into the process of requirements elicitation: A grounded approach", Journal of the Association for Information Systems, vol. 11, no. 4, p. 212, 2010.
[P44] J. Parsons and C. Saunders, "Cognitive heuristics in software engineering: applying and extending anchoring and adjustment to artifact reuse", IEEE Transactions on Software Engineering, vol. 30, no. 12, pp. 873-888, 2004.
[P45] M. G. Pitts and G. J. Browne, "Improving requirements elicitation: An empirical investigation of procedural prompts", Information Systems Journal, vol. 17, no. 1, pp. 89-110, 2007.
[P46] G. Calikli, B. Aslan, and A. Bener, "Confirmation bias in software development and testing: An analysis of the effects of company size, experience and reasoning skills", In Proceedings of the 22nd Annual Psychology of Programming Interest Group Workshop, 2010.
[P47] C. Mair and M. Shepperd, "Human judgement and software metrics: vision for the future", In Proceedings of the 2nd International Workshop on Emerging Trends in Software Metrics, pp. 81-84, May 2011.
[P48] M. Jørgensen, "Identification of more risks can lead to increased over-optimism of and over-confidence in software development effort estimates", Information and Software Technology, vol. 52, no. 5, pp. 506-516, 2010.
[P49] M. Jørgensen and T. Halkjelsvik, "The effects of request formats on judgment-based effort estimation", Journal of Systems and Software, vol. 83, no. 1, pp. 29-36, 2010.
[P50] M. Jørgensen, K. H. Teigen, and K. Moløkken, "Better sure than safe? Over-confidence in judgement based software development effort prediction intervals", Journal of Systems and Software, vol. 70, no. 1, pp. 79-93, 2004.
[P51] M. Jørgensen, "Individual differences in how much people are affected by irrelevant and misleading information", In Second European Conference on Cognitive Science, Delphi, Greece, Hellenic Cognitive Science Society, 2007.
[P52] K. Moløkken and M. Jørgensen, "Expert estimation of web-development projects: are software professionals in technical roles more optimistic than those in non-technical roles?", Empirical Software Engineering, vol. 10, no. 1, pp. 7-30, 2005.
[P53] T. K. Abdel-Hamid, K. Sengupta, and D. Ronan, "Software project control: An experimental investigation of judgment with fallible information", IEEE Transactions on Software Engineering, vol. 19, no. 6, pp. 603-612, 1993.
[P54] T. Connolly and D. Dean, "Decomposed versus holistic estimates of effort required for software writing tasks", Management Science, vol. 43, no. 7, pp. 1029-1045, 1997.
[P55] M. Jørgensen and D. I. Sjøberg, "Software process improvement and human judgement heuristics", Scandinavian Journal of Information Systems, vol. 13, no. 1, p. 2, 2001.
[P56] K. Moløkken-Østvold and M. Jørgensen, "Group processes in software effort estimation", Empirical Software Engineering, vol. 9, no. 4, pp. 315-334, 2004.
[P57] G. Calikli and A. Bener, "Preliminary analysis of the effects of confirmation bias on software defect density", In Proceedings of the 4th International Symposium on Empirical Software Engineering and Measurement, 2010.
[P58] E. Løhre and M. Jørgensen, "Numerical anchors and their strong effects on software development effort estimates", Journal of Systems and Software, vol. 116, pp. 49-56, 2016.
[P59] M. Jørgensen and B. Faugli, "Prediction of overoptimistic predictions", In Proceedings of the 10th International Conference on Evaluation and Assessment in Software Engineering, British Computer Society, 2006.
[P60] M. Jørgensen and T. M. Gruschke, "Industrial use of formal software cost estimation models: Expert estimation in disguise?", In Proceedings of the Conference on Evaluation and Assessment in Software Engineering (EASE'05), pp. 1-7, 2005.
[P61] M. Jørgensen and S. Grimstad, "Over-optimism in software development projects: 'The winner's curse'", In 15th International Conference on Electronics, Communications and Computers (CONIELECOMP'05), pp. 280-285, February 2005.
[P62] M. Jørgensen and D. Sjøberg, "The importance of not learning from experience", In Proceedings of the Conference on European Software Process Improvement, pp. 2-2, 2000.
[P63] K. Moløkken and M. Jørgensen, "Software effort estimation: Unstructured group discussion as a method to reduce individual biases", In Proceedings of the 15th Annual Workshop of the Psychology of Programming Interest Group (PPIG 2003), pp. 285-296, 2003.
[P64] K. Moløkken-Østvold and N. C. Haugen, "Combining estimates with planning poker-an empirical study", In Proceedings of the 18th Australian Software Engineering Conference (ASWEC 2007), pp. 349-358, 2007.
[P65] J. A. O. G. da Cunha and H. P. de Moura, "Towards a substantive theory of project decisions in software development project-based organizations: A cross-case analysis of IT organizations from Brazil and Portugal", In Proceedings of the 10th Iberian Conference on Information Systems and Technologies (CISTI), pp. 1-6, 2015.
[P66] H. van Vliet and A. Tang, "Decision making in software architecture", Journal of Systems and Software, vol. 117, pp. 638-644, 2016.
[P67] J. A. O. da Cunha, F. Q. da Silva, H. P. de Moura, and F. J. Vasconcellos, "Decision-making in software project management: a qualitative case study of a private organization", In Proceedings of the 9th International Workshop on Cooperative and Human Aspects of Software Engineering, pp. 26-32, May 2016.