Mentalistic Metatheory and Methodology

Donelson E. Dulany
Department of Psychology, University of Illinois

Presented at the Association for the Scientific Study of Consciousness, Berlin, Germany, June 2009.

For need of a term, and want of a better term, I have labeled this metatheory "mentalism" to suggest a focus on that aspect of mental activity given to us in any waking moment--consciousness. Mentalism proposes an analysis of mental episodes that can reveal lawfulness among those conscious states. Consciousness may symbolically represent a present out there in perception, a past in forms of remembrance, and a future in modes that vary from hopes and fears to plans and intentions. Consciousness may even symbolically represent past or future conscious states and mental episodes, as well as an abstracted sense of self, all in higher-order awareness. The metatheory is summarized here and has been elaborated variously in Dulany (1991, 1997, 2002, 2004, 2009).

The mentalistic metatheory at this point is most distinctive in these ways, ways that set it apart from computational, information processing, connectionist, and global workspace views:

-----Conscious states are the sole carriers of symbolic contents. This unique capability permits consciousness to explain a wide variety of mental activities, thus demonstrating the adaptive value of consciousness.

-----The metatheory is thus contrary to controversial claims for activity of a "cognitive unconscious," for symbolic activity outside consciousness.

-----Consciousness is not a separate "system" or "space," but rather the carrier, in various modes, of various kinds of contents temporally interrelated by nonconscious operations within two kinds of mental episodes.

-----The metatheory implies more analytic theories of the function of consciousness than do these more standard views.
A mentalistic methodology is one that can provide competitive support for a mentalistic theory, jointly with assertions of report validity, both of which are components of hypotheses experimentally examined. Credibility of those hypotheses, relative to others, can be jointly maximized with observance of the Duhem-Quine thesis, Bayesian inference, and the logic of theoretical networks (Dulany 1968, 1991).
The Mentalistic Metatheory

Nothing as complex as consciousness will submit to anything as simple as a definition. Living with consciousness, we may refine our intuitions with a metatheory that will imply and constrain mentalistic theories for empirical examination. In doing so, we may make the working assumption that conscious states are coordinate with brain states. That position is consistent with conscious states appearing in causal assertions, but it in no way entails metaphysical assumptions of a nonmaterial ontological status or of "free will" in the sense of indeterminism--metaphysical positions beyond the scope of the science as we know it. Both "hard problems" (e.g. Chalmers, 1996) remain hard.

Symbolic representation, too, resists definition and must be functionally specified. Most fundamentally, symbols (a) may activate other symbols (e.g. "latte" may activate "café") and (b) may appear as subjects and predicates of propositions ("A latte is on my desk"). Their functional specification is further elaborated by (c) participation in the special proposition "'This' refers to 'that'," and (d) in the intentions controlling actions that warrant that special assertion. I reach and sip that latte.

A mental state is a conscious state with a conscious mode that is the sole carrier of symbolic content. Symbolic content may be represented in an "identity" code, in which things are attentionally identified as such, or in a "literal" code that precedes and surrounds an attentional state, as a "fringe" (Dulany, 2001). In addition, there is a sense of agency. Any conscious mental state can be held with a sense of possession that can vary in degree and frequency over occasions and persons. At the moment, for example, "This perception of a computer screen is mine," and "I believe that ____."
Conscious modes may be propositional, e.g. I believe (or perceive) that ____. They carry a propositional content, an assertional form in which something is predicated of a subject. Conscious modes may also be sub-propositional, only a sense (feeling) of ____, and so they carry a sub-propositional content. In the vernacular, it is "that" vs. "of."

We can see then that any mental state is at the intersection of a set of variables--which may be experimentally varied or set at some value:

Agency -- e.g. likelihood or strength of that sense of possession.

Mode -- e.g. degree of belief, perception, or feeling.

Subject of proposition -- relation of the subject term's referent, e.g. action or object or event, to what is "scored correct."

Predicate of proposition -- subjective relation of the subject term's referent to some predicated event, e.g. "Relation of Action i to Outcome j."

We can use those two forms of modes carrying those two forms of content within two fundamental forms of mental episodes:

Deliberative mental episodes are those in which a propositional content is yielded by a nonconscious deliberative operation (e.g. inference, decision, judgment) from two or more propositional contents.

Evocative mental episodes are those in which a sub-propositional content is yielded by an associative-activational operation upon two or more sub-propositional contents. It is one form of automaticity.

The two forms of mental episodes have this in common: A conscious mental state is yielded by a nonconscious mental operation from other conscious mental states.
In notation we could say this:

Cs State_i,n+1 ← Ncs Op(Cs State_j,n, ..., Cs State_k,n-m)

And when these states quantitatively vary--as, for example, in a theory of propositional (intentional) control (Dulany, 1968) or a theory of propositional learning and causal reasoning (Carlson & Dulany, 1988)--a model of a set of mental episodes, refining the process assertions, would have this general form:

Cs States_i,n+1 = f(Cs States_j,n, ..., Cs States_k,n-m)
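As an illustration only, that general form can be sketched in code with an assumed interrelating function f. The linear weighted-sum f, the clipping to a belief scale, and the numeric values below are my own illustrative choices, not part of the theory:

```python
# Sketch of Cs States_{i,n+1} = f(Cs States_{j,n}, ..., Cs States_{k,n-m}).
# The nonconscious operation appears only as the interrelating function f;
# here f is an assumed weighted sum, clipped to a belief scale of [-1, 1].
def deliberative_episode(prior_beliefs, weights):
    """Yield a resulting degree of belief from prior degrees of belief."""
    total = sum(w * b for w, b in zip(weights, prior_beliefs))
    return max(-1.0, min(1.0, total))

# Two prior belief states of moderate strength yield a new belief state.
print(deliberative_episode([0.6, 0.8], [0.5, 0.5]))
```

The point of the sketch is only that the operation itself is represented by nothing more than the function interrelating conscious states, as the text asserts.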
The nonconscious mental operation is simply the relation among those conscious states, the neural process interrelating the neural bases of those states, and thus may be represented in a quantitative model by the interrelating function. Consciousness is not a separate system. When we analyze mental episodes, the way mental operations move us among conscious states, we could even think of this as a way of formalizing what James (1890/1950, p. 243) seems to have meant by the "flights and perchings" of consciousness.

What is the role of higher order awareness? The powerful ability of conscious states to carry symbolic content permits us to symbolically represent our own conscious states and mental episodes in higher order awareness ("metacognitively," "reflectively") by mental operations that are remembrance, categorization, inference, and even anticipation, hope, or fear. All of these may usefully, though imperfectly, provide a higher order representation of first-order awareness of a world beyond mental activity. This first-order mental state, however, in no way requires a higher order state to become a conscious state--in contrast to a range of higher-order-thought theories reviewed and critiqued in Gennaro (2004). Mental states within mental episodes are intrinsically conscious states.

What are the relations between forms of mental episodes? An evocative mental episode may become represented by a proposition in higher order awareness. For example, when driving, "'Red' (the perception or thought) associatively activates 'Stop' (the thought)," an evocative mental episode. But one can readily represent that with "Red meant stop." In this case, Evocative mental episode → Proposition. With repetition of a propositional form, the subject comes to evocatively activate its predicate. In learning to drive, the proposition "Red means stop," through repetition, becomes that evocative mental episode: "Red" associatively activates "Stop."
So, in this case, Proposition → Evocative mental episode.

A domain of mental episodes? They are bounded by the output of sensory transduction and the input to motor transduction. This is a common use of the term for a transforming process in engineering. There is a logical necessity for a process yielding a first sensory symbolic code from physical stimuli and a process yielding motor response from a last symbolic code in an intention, percept, or sensation. In the latter case, there may also be evocative episodes in which conscious sensory feedback participates in
regulation of motor activity. With automatization of action, however, deliberative episodes and intentions "drop out" rather than "drop down" to an unconscious (Dulany, 1997), consistent with diminishing fMRI activity for the controlling network (e.g. Chein & Schneider, 2005).

Lawfulness within the metatheory refers to the relations among the states within the mental episodes, the relations among mental episodes, and also the relations of those states and mental episodes to prior stimulus events and subsequent action.

The Self? The self is a vernacular concept in which a first person is in some sense actor or recipient. Within the metatheory, the "Self" can rather naturally refer to a cluster of perceptions, beliefs, feelings, and intentions with "I" as agent. It can also refer to a cluster of perceptions of, beliefs about, and feelings toward that "Me"--the recipient self. Realistically, too, these are too numerous for the Self at any moment to be other than a subset or summary of these states represented in higher-order awareness at some brief moment or limited time of assessment.

The status of nonconscious memories? When outside consciousness, memories consist of particular neural networks, a view consistent with this part of connectionist metatheory, but inconsistent with standard cognitive representations of what we know or believe as constituting propositional networks--ideational processes like those in consciousness. In learning, those networks are established, as a consequence of deliberative processing or by associative-activations, and then activated in various ways in remembering. This is contrary to the dominant computer metaphor, on which we learn by "storing" and remember by "retrieving." Despite a misleading vernacular, too, what we "know" or "believe" or "intend," as functionally specified, is not the same state when in consciousness at the moment and outside consciousness in memory.

A place for brain states within this metatheory?
Yes, in principle they can enrich these theoretical networks. At present, however, brain imaging lacks the ready sensitivity to the neural correlates of specific conscious states, those readily represented in higher order awareness and readily reported. Even with pattern analyzers for fMRI, there is an extended "training" of the analyzer, limited to only a few inputs--not a personal dictionary. Furthermore, among other limitations at present, distinctively specific representations of modal values, momentary agency, and
propositional forms seem currently to be beyond the technology (Dulany, 2008). We can, however, expect the technology to advance--and add to its present value in identifying broader functions and categories of content.

Aspects of this metatheory have been used by others explicitly, for example, Carlson (2002), Perruchet & Vinter (2002), and Tzelgov (1997), and less explicitly in many studies that investigate the roles of conscious states in a wide range of mental activities, or methodologically challenge various claims for unconscious perception, learning, memory, etc.
Mentalistic Methodology

Most fundamentally, this is a logic of competitive support for theoretical hypotheses describing roles for conscious states, together with hypotheses of report validity (mappings)--as well as auxiliary hypotheses of procedural control. The "theory" that predicts is actually a complex, and "The relation of what is said to what is experienced becomes a hypothesis within a theory inductively supportable in competition with alternative explanations" (Dulany, 1968, p. 381). This makes use of Cronbach and Meehl's (1955) classic "construct validation," which was monotheoretic, but elaborates it with competitive support.

A logic of phenomenal reports: We must first recognize that phenomenal reports of an experimental subject need to be reported in the experimenter's physicalistic data language, and the subject's conscious experience must be described in the experimenter's phenomenological theory language. If face validity is challenged, there is a logic for its competitive support along with support of the theory in which that phenomenal experience appears. A first-person data language for phenomenal experience is unacceptable in failing to meet one standard and essential requirement of a data language when data reports are challenged: generality over observers when provided the same experimental conditions. Interesting states of consciousness have an intrinsic variability over subjects within experimental conditions. This is one of the reasons for the failure of classical introspection (Titchener, 1912), in which the subject actually became a co-experimenter and was termed the "observer." Indeed, that variability of conscious states often makes reports superior to instructions for their mappings.
Assessments need to be theory focused. Mentalistic theories specify conscious states with agency, modes, propositional contents, and sub-propositional contents, all of which can vary. Verbal protocols (Ericsson & Simon, 1993) lack that focus, but can be valuable in identifying relevant mental episodes during preliminary work within a paradigm. Furthermore, the use of reports to identify conscious states, or even mental episodes, is not equivalent to asking subjects to explain their actions, and their failure to validly theorize does not in any way discredit validly reporting states of awareness--contrary to the often cited Nisbett & Wilson (1977).

As with any assessment, there are sensitivity and validity conditions to be met: precise understanding of the assessment, verbalizability of the experience, motivation to report accurately, subject-experimenter agreement on criteria of report, and assessment within memory limitations.

A report of an awareness at that moment--content, mode, or sense of agency--is a report of first-order awareness. For validity of report, there is need for that state to have been attentively established in memory. As has been well established in a literature beginning with Sperling (1960), and recognized by Block (2007), an awareness of a stimulus in literal code briefly precedes attentional identification--and may fail to become established in the memory required for reporting, despite other activations.

A report of awareness of a prior awareness--content, mode, mental sense of agency, or episode--is a report of higher-order awareness. For validity of report, this prior content needs to be established and maintained in memory, and the imperfections of memory are well known.

The intrinsic difficulty of meeting all of these conditions perfectly raises serious questions about claims for "null awareness" in studies reporting the subliminal as unconscious--perception, learning, or judgment--and accounts for the controversy.
Nevertheless, these conditions can be and have been met sufficiently for obtaining strong interrelationships among reports and among prior manipulations, reports, and actions.

Duhem-Quine thesis. In the philosophy of science (e.g. Suppe, 1989), this thesis has long drawn on Duhem (1906) and Quine (1951). We examine a larger hypothesis, H = {T, M, A}--Theory, Mappings, and Auxiliaries--and, in the famous quotation, "They go to the court of experience as a corporate body." So it is {T, M, A} → D, the data. We know, however, that any data may follow from an indefinite number of hypotheses, the well-known "under-determination of theory by data," and to assert "absolute confirmation" would be the logical fallacy of "affirming the consequent." Furthermore, if some predicted data are not found--{T, M, A} → D, but not-D is observed--disconfirmation disconfirms disjunctively: Not-D → not-T or not-M or not-A (or any combination). It could be a failure of theory, report validity, procedural control, or any combination. There is no absolute disconfirmation either.

This says a little formally what we generally know and need to keep in mind, because theories according causal roles to conscious states often stand in opposition to theories according those causal roles to unconscious states, and we need a methodology, not simply of confirmation or disconfirmation alone, but of competitive support.

Bayesian inference. In the absence of absolute confirmation of one alternative and absolute disconfirmation of another, we need the best available, the most rational, model for arriving at comparative credibility of alternative theories in the light of data. We need to accord relative credibility, empirically supported, to theories realistically interpreted--and that would thus inherently be a logic of competitive support. Although various forms of Bayesian analyses are offered in the philosophy of science (e.g. Shafer, 2001), a relatively simple representation will serve our purposes. In a common form of Bayes' theorem we can express the relative prior credibilities of two hypotheses, their relative predictability of a data set, and the relative credibility of the two hypotheses in the light of that data. We may elaborate this kind of comparison for D-Q aggregates in this way:
P[(T,M,A)1|D]     P[D|(T,M,A)1]     P[(T,M,A)1]
------------- = --------------- X -------------
P[(T,M,A)2|D]     P[D|(T,M,A)2]     P[(T,M,A)2]

a posteriori      likelihood        a priori
credibilities     ratio             credibilities
These prior credibilities would realistically reflect what we already know about prior evidence and rationale in support of theoretical hypotheses, together with hypotheses of report validity and procedural control as well. Clearly, too, the a posteriori credibility of one aggregate obtains a relative benefit in the degree the likelihood ratio benefits that aggregate. Sometimes that is enough.
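The odds form of the comparison above can be computed directly. As a minimal numerical sketch, the probabilities below are invented for illustration, not drawn from any study:

```python
def posterior_odds(prior_1, prior_2, likelihood_1, likelihood_2):
    """Posterior odds of aggregate 1 over aggregate 2 via the odds
    form of Bayes' theorem:
    P(H1|D)/P(H2|D) = [P(D|H1)/P(D|H2)] * [P(H1)/P(H2)]."""
    likelihood_ratio = likelihood_1 / likelihood_2
    prior_odds = prior_1 / prior_2
    return likelihood_ratio * prior_odds

# Illustrative values: equal priors, data twice as likely
# under aggregate 1, so aggregate 1 ends twice as credible.
print(posterior_odds(0.5, 0.5, 0.8, 0.4))  # 2.0
```

Note that when the likelihood ratio is "pushed to one," as discussed below, the posterior odds simply collapse to the prior odds.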
Further elaboration beyond the standard Bayesian analysis can be needed, however, once we recognize the "flexibility" of D-Q components. Revised auxiliary assumptions for simple data may be highly credible and allow the "disconfirmed" hypothesis to predict as well as the other--to "push the likelihood ratio to one"--with a Bayesian stand-off. In fact, ingenuity in doing so seems to reflect the degree to which there can be competition between theoretical views within the community--especially between explanations attributing effects to conscious symbolic states or to unconscious symbolic states with strong prior commitments (Dulany, 2003).

Logic of theoretical networks. From Hempel (1952) onward, e.g. Cronbach & Meehl (1955), Churchland (1986), Carruthers (2000), Dulany (1968 to 2007), the meaning of realistically interpreted theoretical constructs--e.g. a conscious state term--is conveyed by the set of theoretical and mapping assertions in which it appears. The richer the theory, the richer the meaning. Extending that rationale from meaning to support, we can also recognize this: The richer the theoretical network, the richer the network of data it can predict. And the richer the data network, the narrower the range of credible alternative interpretations with saving auxiliary assumptions.

Duhem-Quine, Bayes, Network Logic. When the hypothesis of interest is supported in opposition to a competing hypothesis, there may be attempts to save the competing hypothesis with a new set of auxiliary assumptions--something not uncommon when data have supported a theory giving a causal role to conscious states. We may denote an aggregate with that new auxiliary assumption in this way: (T,M,A)'. And in doing so, that may "push the likelihood ratio to one":

P[(T,M,A)1|D]      P[D|(T,M,A)1]      P[(T,M,A)1]
-------------- = ---------------- X --------------
P[(T,M,A)'2|D]     P[D|(T,M,A)'2]     P[(T,M,A)'2]

a posteriori       likelihood         a priori
credibilities      ratio              credibilities
On Bayesian logic, however, this comes with a Bayesian price. If the a priori of the hypothesis to be salvaged is now less credible, the a posteriori must remain less credible than that of the hypothesis of interest: P[(T,M,A)'2|D] < P[(T,M,A)1|D].
That’s rational competitive support.
But can we know those probabilities well and precisely enough? Traditional sources of knowledge must continue to inform our priors, and we may approximate a rational model with readily available ordinal judgments: all three ratios themselves; the a posteriori becoming ordinally greater or lesser, or even reversing, whenever the likelihood ratio is other than 1. Is this all too subjective to serve empirical objectivity? What is intrinsically subjective may become empirically and objectively warranted to the degree it is rationally revised--iteratively--in the light of defensible empirical data. Bayesian inference provides a normative model as guide.
Illustrative Case: Explicit and Implicit Learning

In essence, one mentalistic theory holds this (Dulany, 1997): Explicit learning proceeds by deliberative evaluation of conscious hypotheses, directly yielding propositional rules. Implicit learning proceeds by associative-activation between sub-propositional conscious contents establishing evocative mental episodes, which may be represented as propositional rules in higher order awareness. These are rules that may be acted upon. This contrasts with standard views of explicit learning as conscious learning and of implicit learning as unconscious learning and use of an unconscious grammar, as prominently presented by Reber (1993) and others.

For learning, subjects were presented instances of a finite state grammar to inspect in Dulany, Carlson, & Dewey (1984), and were given a series of G and NG instances to classify with G and NG feedback in Dulany & Pritchard (2007). In both cases, they were tested with the requirement that they classify as G or NG letter strings that could either satisfy or not satisfy the grammar--and also to report what about the string implied its G or NG status. For clearer separation of the explicit and implicit in D&P '07, the explicit were given longer and more reflective trial times and the implicit were required to respond with the "first that comes to mind." They were tested under explicit and implicit memory instructions, with old strings as well as with new strings--new combinations of old letters or new letters within the same grammar, with and without awareness of a new-old letter relation.

Principal results: With richness that comes from propositional rules varying in rule validities--each the probability of correct classification if acted upon--validities of rules in consciousness predicted correct classification, within all conditions, with mean r = .93, slope = 1.00, intercept = .02, and non-significant residual.
Transfer to novel forms occurred only under explicit conditions, where deliberative inference was available--and to novel letter strings only with instructed awareness of the relation between the new and old string forms.

Could we add auxiliary assumptions to a D-Q aggregate such that the standard theory would explain this richness of the data? The rule reports were only emergents of, or guesses following, an unconsciously abstracted grammar? No available and credible emergence or guess process explains (a) prediction without significant residual from subjects' rule validities or (b) the requirement of deliberative processing (explicit learning and memory), as well as awareness of old-new relations, for generalization to novelty.
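The kind of prediction reported in this case, validities of reported rules predicting correct classification, amounts to an ordinary correlation of predicted and observed proportions. The function below is a generic Pearson r, and the data values are invented for illustration, not the experiment's:

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented example: mean validity of reported rules vs. observed
# proportion correct, one pair per condition.
validities = [0.55, 0.70, 0.85, 0.95]
correct = [0.57, 0.69, 0.88, 0.94]
print(round(pearson_r(validities, correct), 3))
```

A slope near 1.00 and intercept near 0, as reported, would further say that validities predict classification at face value, not merely in rank order.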
Illustrative Case: Causal Inference by Propositional Learning

Traditionally, e.g. Cheng (1997), analyses of causal learning have focused on the associative relations among cause, ~cause, effect, ~effect, the models describing various combinations and transformations of conditional probabilities from a 2x2 table--with no focus on conscious states or inferences. In contrast, a mentalistic theory of propositional learning, Dulany (1979), describes deliberative inferences among a network of conscious beliefs, resulting in varying degrees of belief in a cause-effect relation--a process theory refined by a quantitative model. It is experimentally applied in Carlson & Dulany (1988) to identifying the suspect cause of a murder effect.

Subjects were presented two mysteries, each with 12 trials of clues, with different suspects provided 4 different ratios of incriminating and exonerating clues. On each trial they reported the theory's belief states, varying from extreme positive to extreme negative (β = +1 to -1).

(a) From degree of belief that a clue is associated with a suspect, βA, and the degree this clue implies guilt or innocence, the "forward implication," βF, one infers subjective evidence, βE, its implied guilt or innocence for this suspect:

βE_nij = βA_nij X βF_nij,

and these predictors strongly interacted in the prediction of βE, F(1,569) = 1379.61.

(b) "Convincingness" of that evidence is the product of subjective evidence for that clue, βE, and the degree of belief that it would be true or false only of the true murderer, the "backward implication," βB. Then from prior belief in this suspect's guilt-innocence and convincingness of that evidence for this suspect one may infer a revised belief:
βH_n+1,i = βH_ni + |βE_nij X βB_nij|(1 - βH_ni), if βE_nij > 0
βH_n+1,i = βH_ni, if βE_nij = 0
βH_n+1,i = βH_ni - |βE_nij X βB_nij|(1 + βH_ni), if βE_nij < 0.

Over all subjects and trials, the correlation of predicted and reported causal beliefs was .91, with a slope of .98 and near-zero intercept--with the predictions closely tracking reports over trials for the 4 different ratios.

Are there auxiliary assumptions for a D-Q aggregate that might permit an associative model to explain the richness of data in this case? (a) The subjects' reports are post-process explanations? This lacks credibility for sequential reports describable by equations none of the subjects knew. (b) Or the statements are post-process emergents? This lacks credibility in the absence of a mechanism for transforming simple associations into these theoretically described relations among belief states.
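The evidence and revision equations of this model can be rendered as a small sketch. The function names and the sequence of clue values are mine, for illustration only:

```python
def subjective_evidence(beta_a, beta_f):
    """Evidence a clue carries: belief the clue is associated with the
    suspect (beta_a) times its forward implication of guilt (beta_f)."""
    return beta_a * beta_f

def revise_belief(beta_h, beta_e, beta_b):
    """Revise belief in the suspect's guilt (beta_h, in [-1, 1]) given
    subjective evidence beta_e and backward implication beta_b."""
    convincingness = abs(beta_e * beta_b)
    if beta_e > 0:
        return beta_h + convincingness * (1 - beta_h)
    if beta_e < 0:
        return beta_h - convincingness * (1 + beta_h)
    return beta_h  # no evidence, belief unchanged

# Illustrative clue sequence: (beta_a, beta_f, beta_b) per trial,
# starting from a neutral prior belief of 0.
belief = 0.0
for a, f, b in [(0.8, 0.9, 0.7), (0.6, -0.5, 0.8)]:
    belief = revise_belief(belief, subjective_evidence(a, f), b)
print(round(belief, 3))
```

Note how the multiplicative weighting keeps revised belief inside the reported scale: positive evidence moves belief a fraction of the remaining distance to +1, negative evidence a fraction of the distance to -1.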
General Summation and Conclusions

A mentalistic metatheory addresses the most fundamental questions about mind--roles of the conscious and non-conscious in mental activity. In contrast to other metatheories, it specifies conscious states as the sole carriers of symbolic representation of a world beyond and of other thought. It is more analytic than other metatheories--recognizing a sense of possession, various propositional and sub-propositional modes and mental episodes, as well as symbols that can be in literal or identity codes. It therefore implies distinctive theories in a number of domains.

A mentalistic methodology specifies a logic of competitive support for hypotheses embodying theory as well as hypotheses of report validity. Phenomenal reports must be interpreted within the experimenter's physicalistic data language and conscious states within the experimenter's phenomenal theory language. Assessment must be theory focused and observe sensitivity and validity conditions.

On the Duhem-Quine thesis, hypotheses evaluated are aggregates of theory, mappings, and auxiliaries. Well-established problems of confirmation and disconfirmation call for a method of competitive support of aggregates.
Bayesian inference provides a normative standard of rational inference for examination of the comparative credibility of hypotheses in the light of data. Elaborating the logic of theoretical networks, we may say that the richer the theoretical network and the data network it implies, the less credible are alternative interpretations--and the stronger the competitive support. Examples are briefly presented of competitive support for theories of implicit and explicit learning and of causal learning.

Our emphasis should be on what consciousness explains, as the sole carrier of symbolic representations of the world beyond and within--and with this, the explanation of consciousness, its adaptive significance, becomes increasingly clear.
References

Block, N. (2007). Consciousness, accessibility, and the mesh between psychology and neuroscience. Behavioral and Brain Sciences, 30, 481-548.

Carlson, R.A. (2002). Conscious intentions in the control of skilled mental activity. The Psychology of Learning and Motivation, 41, 191-228.

Carlson, R., & Dulany, D. (1988). Diagnostic reasoning with circumstantial evidence. Cognitive Psychology, 20, 463-492.

Carruthers, P. (2000). Phenomenal consciousness: A naturalistic theory. Cambridge, UK: Cambridge University Press.

Chalmers, D. (1996). The conscious mind. New York: Oxford University Press.

Cheng, P.W. (1997). From covariation to causation: A causal power theory. Psychological Review, 104, 367-405.

Chein, J.M., & Schneider, W. (2005). Neuroimaging studies of practice-related change: fMRI and meta-analytic evidence of a domain-general control network for learning. Cognitive Brain Research, 25, 607-623.

Churchland, P.S. (1986). Neurophilosophy. Cambridge, MA: MIT Press.

Cronbach, L.J., & Meehl, P. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281-302.

Dulany, D.E. (1968). Awareness, rules, and propositional control: A confrontation with S-R behavior theory. In T. Dixon & D. Horton (Eds.), Verbal behavior and general behavior theory (pp. 340-387). New York: Prentice Hall.

Dulany, D.E. (1980). Outline of a theory of propositional learning. Unpublished manuscript, University of Illinois.

Dulany, D.E., Carlson, R.A., & Dewey, G.I. (1984). A case of syntactic learning and judgment: How conscious and how abstract? Journal of Experimental Psychology: General, 113, 541-555.

Dulany, D.E. (1991). Conscious representation and thought systems. In R.S. Wyer & T.K. Srull (Eds.), Advances in social cognition (Vol. 4, pp. 97-120). Hillsdale, NJ: Erlbaum.

Dulany, D.E. (1997). Consciousness in the explicit (deliberative) and implicit (evocative). In J. Cohen & J. Schooler (Eds.), Scientific approaches to consciousness (pp. 179-212). Mahwah, NJ: Lawrence Erlbaum Associates.

Dulany, D.E. (2001). Inattentional awareness. Psyche, 7(05), http://psyche.cs.monash.edu.au/v7/psyche-7-05-dulany.html.

Dulany, D.E. (2002). Mentalistic metatheory and strategies. Behavioral and Brain Sciences, 24, 337-338.

Dulany, D.E. (2003). Strategies for putting consciousness in its place. Journal of Consciousness Studies, 10(1), 33-43.

Dulany, D.E. (2004). Higher order representation in a mentalistic metatheory. In R.J. Gennaro (Ed.), Higher Order Thought Theories of Consciousness (pp. 315-338). Amsterdam & Philadelphia: John Benjamins.

Dulany, D.E., & Pritchard, E. (2007). Awareness and novelty in explicit (deliberative) and implicit (evocative) learning and memory of a finite state grammar. Association for the Scientific Study of Consciousness.

Dulany, D.E. (2008). How well are we moving toward a most productive science of consciousness? Journal of Consciousness Studies, 15(12), 77-100.

Dulany, D.E. (2009). Psychology and the study of consciousness. In T. Bayne, A. Cleeremans, & P. Wilken (Eds.), The Oxford Companion to Consciousness. Oxford, UK: Oxford University Press.

Ericsson, K.A., & Simon, H.A. (1993). Protocol analysis: Verbal reports as data. Cambridge, MA: The MIT Press.

Gennaro, R.J. (Ed.). (2004). Higher Order Thought Theories of Consciousness. Amsterdam & Philadelphia: John Benjamins.

James, W. (1890). The principles of psychology. New York: Henry Holt.

Nisbett, R.E., & Wilson, T.D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84, 231-259.

Perruchet, P., & Vinter, A. (2002). The self-organizing consciousness. Behavioral and Brain Sciences, 25, 297-329.

Reber, A.S. (1993). Implicit learning and tacit knowledge: An essay on the cognitive unconscious. New York: Oxford University Press.

Shafer, M.J. (2001). Bayesian confirmation of theories that incorporate idealizations. Philosophy of Science, 66(1), 36-52.

Sperling, G. (1960). The information available in brief visual presentations. Psychological Monographs, 74, 1-29.

Suppe, F. (1989). The semantic conception of theories and scientific realism. Champaign, IL: University of Illinois Press.

Titchener, E.B. (1912). Prolegomena to the study of introspection. American Journal of Psychology, 23, 427-448.

Tzelgov, J. (1997). Automatic but conscious: That is how we act most of the time. In R.S. Wyer & T.K. Srull (Eds.), Advances in social cognition (Vol. 10, pp. 217-230). Mahwah, NJ: Erlbaum.
I may be contacted at [email protected]