This is preprint version of paper published in Intelligence 50 (2015) 75-86. DOI: 10.1016/j.intell.2015.02.008. Please cite from final published version: Teovanović P., Knežević, G., Stankov, L. (2015). Individual differences in cognitive biases: Evidence against one-factor theory of rationality. Intelligence, 50, 75-86.

INDIVIDUAL DIFFERENCES IN COGNITIVE BIASES: EVIDENCE AGAINST ONE-FACTOR THEORY OF RATIONALITY 1

Predrag Teovanović 2 and Goran Knežević
University of Belgrade

Lazar Stankov
Australian Catholic University

ABSTRACT

In this paper we seek to gain an improved understanding of the structure of cognitive biases and their relationship with measures of intelligence and relevant non-cognitive constructs. We report on the outcomes of a study based on a heterogeneous set of seven cognitive biases – anchoring effect, belief bias, overconfidence bias, hindsight bias, base rate neglect, outcome bias and sunk cost effect. New scales for the assessment of these biases were administered to 243 undergraduate students along with measures of fluid (Gf) and crystallized (Gc) intelligence, the Cognitive Reflection Test (CRT), and Openness/Intellect (O/I) and Need for Cognition (NFC) scales. The expected experimental results were confirmed – i.e., each normatively irrelevant variable significantly influenced participants' responses. Also, with the exception of hindsight bias, all cognitive biases showed satisfactory reliability estimates (αs > .70). However, correlations among the cognitive bias measures were low (rs < .20). Although exploratory factor analysis produced two factors, their robustness was doubtful. Cognitive bias measures were also relatively independent (rs < .25) of Gf, Gc, CRT, O/I and NFC, and they defined separate latent factors. This pattern of results suggests that a major part of the reliable variance of cognitive bias tasks is unique, and implies that a one-factor model of rational behavior is not plausible.

Keywords: cognitive biases, rationality, judgment and decision making, intelligence, factor analysis

1 This research was supported by the Ministry of Education, Science and Technological Development of the Republic of Serbia, Project No. 179018.
2 [email protected]



1. INTRODUCTION

Intelligence encompasses a very broad range of cognitive processes, and empirical evidence for its generality derives from the presence of positive manifold and the finding of an average correlation of about .30 among a large collection of cognitive tests (see Carroll, 1993). Developments from outside the individual differences tradition may lead to the creation of new types of cognitive tasks that can enrich our understanding of intelligence. A good example has been the study of working memory (Baddeley & Hitch, 1974). Recently, Stankov (2013) has pointed out that some measures of rationality – e.g., measures of scientific and probabilistic reasoning (Stanovich, 2012) – may reach .25 to .35 correlations with tests of intelligence. Although there is a paucity of information about the psychometric properties of measures of rationality, Stankov (2013) stated that "…cognitive measures based on studies of decision making and principles of scientific and probabilistic reasoning are perhaps the most interesting recent addition to the study of intelligence…" (p. 728). He also pointed out that since probabilistic and scientific reasoning are known to be open to cognitive biases, which do not always show correlations with intelligence, it is important to study cognitive as well as non-cognitive aspects of the latter.

In this paper we examine the factorial structure of rational reasoning tasks used to assess seven cognitive biases and relate these to well-known psychometric measures of intelligence and aspects of personality and thinking dispositions. Two plausible outcomes can be anticipated. First, there may be sufficient evidence for communality among the bias measures, and one or more well-defined bias factors, correlated with tests of intelligence, may arise. This would place cognitive bias measures well within the traditional understanding of intelligence. Second, there may be poor support for the presence of either common factors or for the correlation of bias measures with tests of intelligence. While this outcome would not necessarily place cognitive biases outside the cognitive domain, their standing would become restricted to a relatively narrow domain of decision making. Under this latter scenario, cognitive biases would have a status similar to some of the measures from neuropsychology; they are employed to detect cognitive deficits but are infrequently used in mainstream intelligence assessment.

1.1. Cognitive Biases as Departures from Normative Models of Rationality

Empirical research in the areas of judgment and decision making, as well as memory and reasoning, has produced reliable evidence that the outcomes of cognitive processes often systematically depart from what is normatively predicted to be rational behavior. With the arrival of the heuristics and biases research program in the early 1970s, these findings have been referred to as cognitive biases 3 (see Method section for example tasks) that arise as a consequence of heuristics, that is, experience-based strategies that reduce complex cognitive tasks to simpler mental operations (Gilovich, Griffin, & Kahneman, 2002; Kahneman, Slovic, & Tversky, 1982; Tversky & Kahneman, 1974). By producing many cognitive bias tasks that depict circumstances under which relying on heuristics leads to systematic violations of normative models, this program emphasized the conditions of predictable irrationality (Ariely, 2009).

On the other hand, proponents of ecological rationality have argued that rational behavior should not be defined with respect to abstract normative standards, or – as Gigerenzer (2004) puts it – "rationality is not logical, but ecological" (p. 64). Within this paradigm, cognitive biases are not considered errors of cognitive processing, but rather a result of highly constrained and artificial experimental conditions, since cognitive bias tasks diverge considerably from those in the natural environment (Gigerenzer, 1996, 2004; Gigerenzer, Hoffrage, & Kleinbölting, 1991; Hertwig, Fanselow, & Hoffrage, 2003; Hoffrage, Hertwig, & Gigerenzer, 2000).

It was only in the late 1990s that researchers became cognizant of the considerable variability across participants on each of the cognitive bias tasks. After Stanovich and West's (1998, 2000) call for a debate about the role individual differences play in the deviation between the outcomes of cognitive processes and those of normative models, a growing body of correlational studies of cognitive biases emerged. Two separate topics, which can be distinguished from the perspective of differential psychology, are briefly summarized in the following sections.

3 Systematic departures from normative models are sometimes referred to as cognitive illusions (Pohl, 2004), thinking errors (Stanovich, 2009) and thinking biases (Stanovich & West, 2008).


1.2. Correlates of Cognitive Biases

Intelligence was undoubtedly the prime candidate for predicting individual differences in cognitive biases. The initial findings of modest negative correlations of intelligence tests with belief bias, confirmation bias, base rate neglect, outcome bias, overconfidence bias and hindsight bias were interpreted as clues about the importance of algorithmic limitations in the emergence of predictable fallacies (Stanovich & West, 1998, 2000). However, some studies suggest that at least two cognitive biases included in the present study – the anchoring effect (Furnham, Boo, & McClelland, 2012; Stanovich & West, 2008) and the sunk cost effect (Parker & Fischhoff, 2005; Stanovich & West, 2008) – may not be related to cognitive ability measures. As a result of the most comprehensive study on this subject, Stanovich and West (2008) have provided lists of cognitive biases that do and do not show an association with intelligence and have argued that a correlation should be expected only when considerable cognitive capacity is required in order to carry out the computation of a normatively correct response to a bias task.

Some other aspects of cognitive functioning are also related to cognitive biases. Previous research has shown that low scores on the Cognitive Reflection Test (CRT), which was devised as a measure of "the ability or disposition to resist reporting the response that first comes to mind" (Frederick, 2005, p. 36), are related to probability overestimation (Albaity, Rahman, & Shahidul, 2014), the conjunction fallacy (Hoppe & Kusterer, 2011; Oechssler, Roider, & Schmitz, 2009) and impatience in time-preference judgment (Albaity et al., 2014; Frederick, 2005). The CRT is also related to performance on a broad range of cognitive bias tasks, and it has predictive validity over and above intelligence (Toplak, West, & Stanovich, 2011, 2014). This is reminiscent of Stanovich's assertion that individual differences in the detection of the need to override heuristic responses, as assessed by the Actively Open-Minded Thinking and Need for Cognition scales (Stanovich, 2009, 2012; Stanovich & West, 2000, 2008), may be related to cognitive biases. Similarly, it is plausible to assume that the personality trait of Openness/Intellect, which is associated with cognitive performance, may also account for variance in performance on cognitive bias tasks.

1.3. Relationships among Cognitive Biases

The other topic deals with the generality of individual differences in cognitive biases and with the question of how these biases are related to each other. De Bruin, Parker, and Fischhoff (2007) have stated that positive manifold among cognitive bias tasks indicates an underlying ability construct, which they have termed decision-making competence. Stanovich and West (1998) were the first to report significant positive correlations among belief bias, base rate neglect and outcome bias (Experiment 1), as well as between overconfidence and hindsight bias (Experiment 4). Subsequent studies have shown that the reliability of composite scores derived from a relatively large set of bias tasks is poor (Toplak et al., 2011, 2014; West, Toplak, & Stanovich, 2008) and that correlations among cognitive biases are only of modest strength (Klaczynski, 2001; Stanovich & West, 1998, 2000). Eventually, it became clear that it is possible to extract at least two latent factors from the matrices of intercorrelations between various cognitive bias measures (De Bruin et al., 2007; Parker & Fischhoff, 2005; Weaver & Stewart, 2012). Taken together, these results indicate that the cognitive bias space is multidimensional.

A number of classifications of cognitive biases available in the literature today also suggest that the population of cognitive biases is heterogeneous. Conceptually, cognitive biases differ with respect to the normative models they violate. From a theoretical point of view, biases can be distinguished with regard to the cognitive processes they tap (Pohl, 2004; Stanovich, 2009), and whether they are considered consequences of heuristics, artificial procedures, biased error management (Haselton, Nettle, & Andrews, 2005), selective attention, motivation or psychophysical distortions (Baron, 2008). Similar points were made by other investigators (see Arnott, 2006; Carter, Kaufmann, & Michel, 2007; Stanovich, 2003). From the methodological point of view, performance on cognitive bias tasks can be evaluated in terms of consistency, by comparing related responses, or in terms of accuracy, relative to external criteria (De Bruin et al., 2007; Parker & Fischhoff, 2005). This corresponds to Kahneman's distinction between coherence rationality and reasoning rationality (Kahneman, 2000; Kahneman & Frederick, 2005).

It is important to note that empirical classification of cognitive biases can also be based on individual differences in information obtained from multiple tasks and measures. Although some previous studies in this field did employ factor analysis (De Bruin et al., 2007; Klaczynski, 2001; Liberali, Reyna, Furlan, Stein, & Pardo, 2012; Parker & Fischhoff, 2005; Weaver & Stewart, 2012), their primary aims were not to identify latent dimensions of the cognitive bias domain. For the purposes of the study presented here, multi-item instruments for the measurement of individual differences in seven cognitive biases were devised and administered along with measures of intelligence and relevant non-cognitive constructs. These cognitive biases were selected with the intention to increase the level of sample representativeness by covering a variety of categories in alternative taxonomies (see Table 1).


1.4. Aims

This study had three aims. The first was to estimate the reliability of cognitive bias measures. Significant effects of normatively irrelevant variables would confirm the experimental reliability of the examined cognitive biases, while satisfactory levels of internal consistency would indicate that individual differences in cognitive biases can be precisely measured. The second aim was to assess whether correlations between cognitive bias measures are high enough to extract meaningful common factor(s). However, a single latent factor of rationality was not expected, due to the pronounced theoretical and methodological heterogeneity of the cognitive bias domain (Arnott, 2006; Carter et al., 2007; Haselton et al., 2005; Kahneman & Frederick, 2005; Pohl, 2004; Stanovich, 2003, 2009). Furthermore, the degrees of common variance among individual cognitive biases reported in previous studies do not provide evidence for a strong unidimensional construct of rationality (De Bruin et al., 2007; Klaczynski, 2001; Parker & Fischhoff, 2005; Stanovich & West, 1998, 2000; Toplak et al., 2011, 2014; Weaver & Stewart, 2012; West et al., 2008). The third aim was to estimate correlations of cognitive biases with measures that address well-established constructs such as fluid and crystallized intelligence, openness and need for cognition, as well as cognitive reflection.

2. METHOD

2.1. Participants

The participants were 243 undergraduate students (22 males) who attended a first-year Introductory Methodology course at the Faculty of Special Education and Rehabilitation, University of Belgrade. Their mean age was 19.83 (SD=1.31). Participants gave their informed consent before taking part in the study.

2.2. Measures of Cognitive Biases

2.2.1. Anchoring Effect

Anchoring effect refers to a systematic influence of initially presented values on numerical judgments. A simple two-step procedure for the elicitation of this phenomenon, referred to as the standard paradigm of anchoring (Epley & Gilovich, 2001; Strack & Mussweiler, 1997), was first presented in the seminal work of Tversky and Kahneman (1974). In the first step of this procedure, subjects are asked to judge whether a certain quantity, such as the percentage of African countries in the United Nations, is higher or lower than a randomly generated number, e.g. one produced by spinning a wheel of fortune (comparative task). In the second step, participants are asked to provide their numerical estimates of the very same quantity (estimation task). The results showed notable effects of the arbitrary numbers on participants' estimates (e.g. the percentage of African countries in the United Nations was estimated to be 25 and 45 by groups that were presented with anchors of 10 and 65, respectively). Subsequent research revealed that the anchoring effect is "one of the most reliable and robust results of experimental psychology" (Kahneman, 2011, p. 119; for a recent review, see Furnham & Boo, 2011).

Table 1
Alternative Classifications of Cognitive Biases.

Cognitive Bias      | Normative Model    | Pohl (2004) | Stanovich (2009)      | Baron (2008)           | Arnott (2006) | Carter et al. (2007) | De Bruin et al. (2005)
--------------------|--------------------|-------------|-----------------------|------------------------|---------------|----------------------|-----------------------
Anchoring Effect    | Coherence          | Judgment    | Focal Bias            | Availability           | Adjustment    | Reference point      | Consistency
Belief Bias         | Logic              | Thinking    | Override Failure      | Availability           | /             | Confirmatory         | Accuracy
Overconfidence Bias | Calibration        | Judgment    | Egocentric Processing | Motivational Bias      | Confidence    | Control Illusion     | Accuracy
Hindsight Bias      | Coherence          | Memory      | Override Failure      | Imperfect correlation  | Memory        | Output Evaluation    | Consistency
Base Rate Neglect   | Probability theory | Thinking    | Mindware Gap          | Focus on one attribute | Statistical   | Base Rate            | Accuracy
Sunk Cost Effect    | Utility theory     | Judgment    | /                     | Imperfect correlation  | Situation     | Commitment           | Accuracy
Outcome Bias        | Coherence          | Judgment    | Override Failure      | Imperfect correlation  | /             | Confirmatory         | Consistency



It was also reported that the size of the anchoring effect depends on the relative anchor distance (Hardt & Pohl, 2003; Strack & Mussweiler, 1997) and that it most probably follows an inverted-U curve – i.e., moderate anchors produce more pronounced anchoring effects than extremely low and extremely high anchors (Wegener, Petty, Detweiler-Bedell, & Jarvis, 2001). Although elegant, the standard paradigm of anchoring lacks information about anchor-free estimates for each participant, and it is therefore suitable neither for the collection of anchoring effect measures nor for the control of anchor distance at the individual level. For these reasons, we employed an extended procedure in which anchor-free estimates were introduced prior to assessing the anchoring effect proper. Specifically, 24 general knowledge questions (e.g. "What is the number of African countries in the United Nations?", "What is the highest temperature ever recorded?", "How many castles are there in Vojvodina?") were presented on a computer screen in the initial phase, one at a time, and participants were instructed to state their estimates (E1) by using a numeric keypad. In the next phase, which followed immediately, the standard paradigm of anchoring was applied and participants were instructed to answer the same set of questions with an option to amend their initial responses. For each of the 24 questions, participants were administered a comparative task and a final estimation task. In the comparative task, participants were required to indicate whether their final response was higher or lower than the value of the anchor (A). Anchors were set automatically by multiplying anchor-free estimates (E1) by predetermined values (which ranged from 0.2 to 1.8 across questions). Afterwards, on the estimation task, participants stated their final response (E2) by using the numeric keypad.

On each question, a bias score was calculated as the difference between the two estimates (E2 − E1), divided by the distance of the anchor from the initial estimate (A − E1). Scores lower than 0 (lack of anchoring) or higher than 1 (total anchoring) were brought to the boundaries of the acceptable range (16.41% of all scores). The average bias score across the 24 questions was used as a measure of susceptibility to the anchoring effect.
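To make this scoring rule concrete, here is a minimal sketch (variable names and example values are hypothetical; this is an illustration, not the authors' code):

```python
def anchoring_index(e1, e2, anchor):
    """Per-question anchoring index: (E2 - E1) / (A - E1), clipped to [0, 1].

    Assumes anchor != e1, which holds when anchors are set by multiplying
    E1 by a factor other than 1 (here, factors between 0.2 and 1.8).
    """
    raw = (e2 - e1) / (anchor - e1)      # proportional shift toward the anchor
    return min(max(raw, 0.0), 1.0)       # <0 = lack of anchoring, >1 = total anchoring

# The bias measure is the mean index across the 24 questions;
# three illustrative (E1, E2, A) triples:
triples = [(30, 40, 54), (50, 45, 10), (12, 20, 21.6)]
bias = sum(anchoring_index(e1, e2, a) for e1, e2, a in triples) / len(triples)
print(round(bias, 3))  # -> 0.458
```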

2.2.2. Belief Bias

Belief bias is a predictable tendency to evaluate deductive arguments on the basis of the believability of the conclusion, rather than on the basis of logical validity (Evans, Barston, & Pollard, 1983). Eight pairs of syllogistic reasoning problems, some of them based on examples from previous research (Markovits & Nantel, 1989; Stanovich & West, 1998), were used in the present study. On eight inconsistent items, the logical validity of the argument and the empirical status of the conclusion were in conflict. Four of them were valid, but unbelievable (e.g. "All mammals walk. Whales are mammals. Therefore, whales walk.") and four were invalid, but believable (e.g. "All fish have gills. Catfish has gills. Therefore, catfish is a fish."). In the wording of the eight consistent items, which had the same formal argument structure as their inconsistent counterparts, logic was congruent with the believability of the conclusion. Four of them were both valid and believable (e.g. "All men are mortal. I am a man. Therefore, I am mortal.") and four were both invalid and unbelievable (e.g. "All trolley buses use electricity. Boilers use electricity. Therefore, trolley buses are boilers."). Participants were instructed to evaluate the syllogisms by indicating whether or not the conclusion necessarily follows from the premises, assuming that all premises are true.

Although previous studies had used the number of incorrect responses to inconsistent items as a measure of belief bias (e.g. Klaczynski & Daniel, 2005; Stanovich & West, 1998), we employed a slightly different scoring strategy. This was done in order to obtain more reliable measures of individual differences in susceptibility to belief bias, i.e. to control for the random variance that was expected considering the dichotomous response format. The responses to each pair of items were scored as biased if, and only if, the participant indicated a correct answer on a consistent item and an incorrect answer on the corresponding inconsistent item. In other words, incorrect responses on consistent items were treated as indicators of possible random responding to counterpart inconsistent items that exploit the same formal argument structure. Incorrect responses on consistent items were ascribed to random sources of variance (lack of attention, temporary memory deactivation, distraction, etc.), since it could not be assumed that these judgments were based either on a priori beliefs or on logical validity, simply because these two were consistent.
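A minimal sketch of this pair-scoring rule follows (hypothetical data structures; illustrative only, not the authors' implementation):

```python
def belief_bias_score(pairs):
    """Count biased pairs among (consistent_correct, inconsistent_correct) tuples.

    A pair is scored as biased only if the consistent item was solved correctly
    (ruling out random responding) while its inconsistent counterpart was not.
    """
    return sum(1 for cons_ok, incons_ok in pairs if cons_ok and not incons_ok)

# Eight syllogism pairs for one participant; True = correct evaluation
pairs = [(True, False), (True, True), (False, False), (True, False),
         (True, True), (True, False), (False, True), (True, True)]
print(belief_bias_score(pairs))  # -> 3 (possible scores range from 0 to 8)
```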



2.2.3. Overconfidence Bias

Overconfidence bias is a common inclination of people to overestimate their own abilities to successfully perform a particular task (Brenner, Koehler, Liberman, & Tversky, 1996). In order to avoid the experimental dependency that allows the association between measures of confidence and intelligence to be interpreted as artificial (Ackerman, Beier, & Bowen, 2002), it has been recommended to obtain these measures by using different instruments (Pallier et al., 2002). In this study, confidence rating scales were attached to the 21 items of the Letter Counting test (Stankov, 1994). Participants were required to count how many times target letters occurred while different combinations of letters were displayed serially on the screen, one second apart. After responding to each of the items, participants were instructed to assess their degree of confidence in that answer on an 11-point rating scale, ranging from 0% to 100% in steps of 10%. The difference between the confidence score, expressed as the mean percentage of confidence judgments across all the items in the test, and the accuracy score, expressed as the percentage of correct answers, was used as a measure of calibration (Stankov, 1999). Greater absolute values pointed to higher miscalibration, with positive values indicating overconfidence and negative values indicating underconfidence. However, only 9 participants (3.7%) had a negative calibration score, while the vast majority showed a strong overconfidence bias (72.8% of participants had a calibration score higher than 20).
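The calibration score described above is simply mean confidence minus accuracy, both expressed in percent; a short sketch with hypothetical inputs:

```python
def calibration_score(confidences, correct):
    """Mean confidence (0-100 ratings) minus percentage of correct answers.

    Positive values indicate overconfidence, negative values underconfidence.
    """
    mean_confidence = sum(confidences) / len(confidences)
    accuracy = 100.0 * sum(correct) / len(correct)
    return mean_confidence - accuracy

# 21 Letter Counting items; illustrative data for a single participant
confidences = [80, 90, 70, 100, 60] * 4 + [80]   # 11-point scale, 0-100 in steps of 10
correct = [1, 0, 1, 1, 0] * 4 + [1]              # 1 = correct answer
print(round(calibration_score(confidences, correct), 1))  # -> 18.1 (overconfident)
```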

2.2.4. Hindsight Bias

Hindsight bias is a propensity to perceive events as more predictable once they have occurred (Fischhoff, 1975). In other words, judgments made with the benefit of information about the outcome of an event differ systematically from judgments made without such knowledge. However, the meta-analytically estimated overall effect size of outcome information is small (Christensen-Szalanski & Willham, 1991; Guilbault, Bryant, Brockway, & Posavac, 2004), as is the internal consistency of hindsight bias measures (Pohl, 1999, as cited in Musch, 2003). Hindsight bias is usually regarded as a hard-to-avoid consequence of information processing and storage. Outcome information irreversibly changes knowledge representation (Hoffrage et al., 2000), or it serves as an anchor when trying to reconstruct forgotten estimates (Hardt & Pohl, 2003). In this study, a within-subject memory/recall design was employed. In the first phase, participants were instructed to provide an answer to a set of 14 questions (e.g. to find the exception in a set such as "cow, wolf, cat, donkey, deer"), and also to assess their answer on a confidence rating scale. In the second phase, participants were instructed to recall their initial confidence rating immediately after receiving feedback on the specific answer. Responses were coded as hindsighted if participants lowered their recalled confidence after negative feedback or raised it after positive feedback. The bias score was expressed as the proportion of hindsighted responses.
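The coding rule for hindsighted responses can be sketched as follows (hypothetical data; illustrative only):

```python
def is_hindsighted(initial, recalled, feedback_positive):
    """A recalled confidence rating is coded as hindsighted if it moved in the
    direction of the feedback: up after positive, down after negative feedback."""
    return recalled > initial if feedback_positive else recalled < initial

# 14 items per participant; the bias score is the proportion of hindsighted
# recalls, shown here with three illustrative (initial, recalled, feedback) triples
trials = [(70, 80, True), (60, 40, False), (50, 50, True)]
score = sum(is_hindsighted(i, r, f) for i, r, f in trials) / len(trials)
print(round(score, 2))  # -> 0.67
```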

2.2.5. Sunk Cost Effect

Sunk cost effect refers to the tendency to "continue an endeavor once an investment of money, effort, or time has been made" (Arkes & Blumer, 1985, p. 124). Eight hypothetical scenarios were used to describe various situations in which the decision maker had to choose between an option connected to an unrecoverable past expenditure (sunk-cost option) and an option that was worded as more beneficial (normative option). Most of the tasks were based on examples from the existing literature (De Bruin et al., 2007; Thaler, 1980), but were modified to engage undergraduate students (e.g. "It is the last day of your summer vacation. For that day, you have previously paid for a boat excursion to a nearby island. Suddenly, you are offered a dream activity for free: one hour of scuba diving with an instructor."). Participants were instructed to make a choice between the two options by using a rating scale that ranged from 1 (most likely to choose the normatively correct option) to 6 (most likely to choose the sunk-cost option). Average rating scores were used as measures of susceptibility to the sunk cost effect.

2.2.6. Base Rate Neglect

Base rate neglect refers to a tendency to ignore statistical information about prior probabilities in favor of specific evidence concerning the individual case (Kahneman & Tversky, 1973). For each of the 10 causal base rate problems in the study, participants were presented with two kinds of information: (a) base rates, i.e. background information (e.g. "The probability of winning a lottery ticket is 10%."), and (b) information concerning the specific case (e.g. "John, who was wearing his lucky shirt and who had found a four-leaf clover that day, decided to buy one lottery ticket."). Participants were required to estimate the probability of the criterion behavior (e.g. "What is the probability that John bought a winning ticket?") on an 11-point percentage scale. Individual cases were described by using information normatively irrelevant with respect to the criterion behavior, but which was supposed to cue stereotypical beliefs. The base rate neglect score was expressed as the proportion of responses that differed from the base rate information in the direction implied by the specific information (e.g. higher than 10% in the above example).
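A sketch of this proportion score (hypothetical responses on the percentage scale; for simplicity, every item here implies a higher-than-base-rate estimate):

```python
def base_rate_neglect_score(base_rates, estimates, implied_higher):
    """Proportion of estimates that deviate from the base rate in the direction
    implied by the normatively irrelevant individuating information."""
    deviations = sum(
        1 for base, est, higher in zip(base_rates, estimates, implied_higher)
        if (higher and est > base) or (not higher and est < base)
    )
    return deviations / len(base_rates)

# 10 causal base-rate problems; hypothetical estimates on the 0-100% scale
base_rates     = [10, 30, 50, 10, 20, 40, 10, 60, 25, 5]
estimates      = [40, 30, 70, 10, 50, 40, 30, 80, 25, 20]
implied_higher = [True] * 10   # specific information cues a higher estimate
print(base_rate_neglect_score(base_rates, estimates, implied_higher))  # -> 0.6
```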

2.2.7. Outcome Bias

Outcome bias is the tendency to judge the quality of a decision based on information about the outcome of that decision. These judgments are erroneous with respect to the normative assumption that "information that is available only after a decision is made is irrelevant to the quality of the decision" (Baron & Hersey, 1988, p. 569). In this study, a within-subject design was employed. In the first phase, participants were presented with descriptions of 10 decisions made by others under uncertain conditions (e.g. "Two seconds before the end of the basketball match, with the score 79 to 77 in favor of the opponent, a player of the Serbian national basketball team decided to attempt a shot that would earn three points"), along with the outcome of the decision, which was either positive (e.g. "...and he scored") or negative (e.g. "...and he missed"). Participants' task was to judge the quality of the decision by indicating whether the decision maker should pursue the same choice in similar situations, on a rating scale ranging from 1 (absolutely not) to 6 (absolutely yes). A week later, a parallel form of the questionnaire was administered, with the same 10 decisions but with different outcomes: if the outcome was positive in the first test, it was negative in the second, and vice versa. The answering scale was the same. Pairs of items were compared, and the measure of outcome bias was calculated as the difference between ratings of the same decisions with the positive and the negative outcome.
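The outcome bias measure described above reduces to a mean rating difference; a minimal sketch with hypothetical ratings:

```python
def outcome_bias_score(ratings_positive, ratings_negative):
    """Mean difference between ratings of identical decisions presented with a
    positive versus a negative outcome (1-6 scale); larger values = more bias."""
    diffs = [pos - neg for pos, neg in zip(ratings_positive, ratings_negative)]
    return sum(diffs) / len(diffs)

# 10 decision scenarios rated twice, one week apart (illustrative data)
ratings_positive = [5, 4, 6, 3, 5, 4, 5, 2, 4, 5]   # "...and he scored"
ratings_negative = [3, 4, 4, 2, 3, 3, 4, 2, 3, 4]   # "...and he missed"
print(outcome_bias_score(ratings_positive, ratings_negative))  # -> 1.1
```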

2.3. Cognitive Ability Measures

2.3.1. Raven's Progressive Matrices (RPM)

RPM is a well-known marker of fluid intelligence. A total of 18 items was drawn from both the standard and the advanced versions of the Raven's Progressive Matrices test (Raven, Court, & Raven, 1979). This version of the instrument was previously used by Pallier et al. (2002). For each task, participants were asked to identify the missing symbol that logically completes a 3x3 matrix by choosing from among five alternatives. Participants were allowed 6 minutes to complete the test.

2.3.2. Swaps Test

The original procedure proposed by Stankov (2000) was adapted by using visual symbols (e.g. bear, car, and hat images) instead of letters. Three symbols were presented in a line on the computer screen, followed by verbal instructions to mentally switch the positions of two of them. The 20 items differed in the number of such instructions, with a minimum of one and a maximum of four required swaps that were serially displayed on the screen. Participants were asked to indicate the arrangement of the three symbols after all the swaps had been completed, choosing their answer from among five alternatives.
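To make the Swaps task concrete, here is a small simulation of a single item (the symbols and swap sequence are illustrative, not the original stimuli):

```python
def apply_swaps(symbols, swaps):
    """Return the arrangement after mentally executing each swap instruction.

    symbols: initial left-to-right arrangement, e.g. ["bear", "car", "hat"]
    swaps:   (i, j) position pairs to exchange, applied in the order given
    """
    arr = list(symbols)
    for i, j in swaps:
        arr[i], arr[j] = arr[j], arr[i]
    return arr

# A four-swap item, the maximum difficulty in the adapted test
print(apply_swaps(["bear", "car", "hat"], [(0, 1), (1, 2), (0, 2), (0, 1)]))
# -> ['hat', 'bear', 'car']
```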

2.3.3. Three-Dimensional Space Test (3D-Space Test)

This 39-item test (Wolf, Momirović, & Džamonja, 1992) is an adapted version of an instrument that was part of the General Aptitude Test Battery, first published by the U.S. Employment Service in 1947. In each task, participants were presented with a two-dimensional stimulus figure containing dotted lines that indicate where folds can be made. They were instructed to select which one of four three-dimensional figures can be made by folding the stimulus figure. A time limit of 10 minutes was imposed.

2.3.4. Vocabulary Test

The Vocabulary test (Knežević & Opačić, 2011) measures verbal knowledge and concept information. The test consists of 56 items of increasing difficulty. Subjects were required to define words (e.g. "vignette", "credo", "isle") by choosing the answer from among six alternatives. There was no time limit for the completion of this test. On average, the participants completed it in 13 minutes (SD=2.03).

2.3.5. Analogies Test

The Analogies Test (Wolf et al., 1992) is a measure of analytical reasoning through the use of partial analogies. The test consists of 39 relatively easy multiple-choice items. Each item had a stem (e.g., "food : man = fuel : ?") and participants were asked to respond by choosing their answer from among four alternatives (e.g., "oil, automobile, gas, wheel"). A time limit of 2 minutes was imposed.

2.3.6. Synonyms-Antonyms Test

The Synonyms-Antonyms Test (Wolf et al., 1992) consists of 40 items, each representing a pair of words that are either synonyms or antonyms (e.g. "black – white", "thin – fat"). Subjects were asked to judge which of the two possible relations (i.e., synonym or antonym) is present. Time was limited to 2 minutes.

2.3.7. Cognitive Reflection Test (CRT)

The Cognitive Reflection Test (Frederick, 2005) is composed of three questions that trigger most participants to answer immediately and incorrectly. For example, to the question "If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?", as many as 78.1% of participants answered "100 minutes", although the correct answer is "5 minutes".



2.4. Non-Cognitive Measures

2.4.1. Openness/Intellect

Openness/Intellect (O/I) is an empirically derived dimension of personality. Its "central characteristic (is) the disposition to detect, explore and utilize abstract and sensory information" (DeYoung, 2011, p. 717). Among the Big Five personality traits, O/I is the most closely correlated with intelligence (Ackerman & Heggestad, 1997). For a quick assessment of this trait, 12 items from the NEO-FFI (McCrae & Costa, 2004) were used, each of which was rated by the participants on a five-point scale according to how well it described them (e.g. "Sometimes when I am reading poetry or looking at a work of art, I feel a chill or wave of excitement.").

2.4.2. Need for Cognition

Need for Cognition (NFC) represents a measure of cognitive style that reflects "the tendency for an individual to engage in and enjoy thinking" (Cacioppo & Petty, 1982, p. 116). People high on need for cognition are more analytical in their thinking strategies and generally more thoughtful. In this study, the 18-item Need for Cognition scale (Cacioppo, Petty, & Kao, 1984) was administered to participants with instructions to rate the extent to which they agree with each statement (e.g. "I prefer my life to be filled with puzzles that I must solve").

2.5. Procedure

All measures were computer-administered in two sessions, one week apart. Overall testing time was about 60 minutes per session. In the first session, the participants completed a battery of six cognitive ability tests and the first part of the outcome bias questionnaire. In the second session, the Need for Cognition and Openness/Intellect scales were administered, along with the Cognitive Reflection Test, the second part of the outcome bias questionnaire, and the instruments for measuring individual differences in the six other cognitive biases.

3. RESULTS

3.1. Reliability of Cognitive Bias Measures

Descriptive statistics for the seven cognitive bias measures are presented in Table 2, with higher scores indicating a more pronounced bias. The results confirm the experimental reliability of the examined cognitive bias phenomena. Mean bias scores deviated from normative values by 1.64 (for hindsight bias) to 3.58 (for base rate neglect) standard deviations, thus revealing large effect sizes of normatively irrelevant variables according to Cohen's conventions (Cohen, 1992).

Table 2
Descriptive Statistics for Cognitive Bias Tasks.

Cognitive Bias      | Normative Value | M     | SD    | Cohen's d
--------------------|-----------------|-------|-------|----------
Anchoring Effect    | 0               | 0.44  | 0.18  | 2.05
Belief Bias         | 0               | 4.05  | 2.34  | 2.05
Overconfidence Bias | 0               | 32.56 | 18.99 | 1.71
Hindsight Bias      | 0               | 0.33  | 0.20  | 1.64
Base Rate Neglect   | 0               | 0.78  | 0.22  | 3.58
Sunk Cost Effect    | 1               | 3.16  | 1.06  | 2.04
Outcome Bias        | 0               | 1.55  | 0.91  | 2.43

Note. For cognitive biases defined with respect to the consistency criterion, within-subject designs were employed with normatively irrelevant variables as experimental factors, and Cohen's ds were calculated by using the standard formula d = (M1 − M2)/SD. For cognitive biases defined with respect to the accuracy criterion, Cohen's ds were calculated by the formula d = (M − μ)/SD, where μ is the normative value.
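For example, applying the accuracy-criterion formula to the Base Rate Neglect row of Table 2 with the rounded values reported there approximately reproduces the tabled effect size: d = (M − μ)/SD = (0.78 − 0)/0.22 ≈ 3.5, close to the tabled 3.58, which was presumably computed from unrounded values.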

Results of the Kolmogorov-Smirnov test, displayed in Table 3, show that scores were approximately normally distributed for measures of anchoring effect, overconfidence bias and sunk cost effect. Outcome bias scores were positively skewed, while the distribution of base-rate neglect scores was negatively skewed and also leptokurtic. Platykurticity was observed for belief bias and hindsight bias measures.
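A normality check of this kind can be reproduced along the following lines (simulated data; the paper does not specify the software used, and the SPSS-style Z conversion is our assumption):

```python
import numpy as np
from scipy import stats

# Simulated stand-in for 243 anchoring-effect scores (illustrative only)
scores = np.random.default_rng(0).normal(loc=0.44, scale=0.18, size=243)

# One-sample Kolmogorov-Smirnov test against a normal distribution with the
# sample's own mean and standard deviation
d_stat, p = stats.kstest(scores, 'norm', args=(scores.mean(), scores.std()))
z = np.sqrt(len(scores)) * d_stat   # classic K-S Z, as reported by e.g. SPSS
print(f"KS Z = {z:.2f}, p = {p:.2f}")  # a non-significant p suggests normality
```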

Table 3
Discriminative Properties and Internal Consistency of Cognitive Bias Measures.

Cognitive Bias   | Potential Range | Observed Range | Skewness | Kurtosis | KS Z | KS p | N  | Alpha
-----------------|-----------------|----------------|----------|----------|------|------|----|------
Anchoring Effect | 0-1             | .06-.92        | 0.08     | -0.18    | 0.78 | .62  | 24 | .77
Belief Bias      | 0-8             | 0-8            | -0.12    | -1.12    | 2.05 |