Far East Journal of Mathematical Sciences (FJMS) © 2018 Pushpa Publishing House, Allahabad, India http://www.pphmj.com http://dx.doi.org/10.17654/ Volume …, Number .., 2018, Pages …
ISSN: 0972-0871
RATIONAL CURVES MODELING PSYCHOPHYSICAL TESTS DATA: A COMPUTATIONAL APPROACH COMPATIBLE WITH TBM COGNITIVE MODEL

A. Aimi, F. Martuzzi and E. Bignetti

Department of Mathematical, Physical and Computer Sciences, University of Parma, Italy
Department of Food and Drug, University of Parma, Italy
Department of Veterinary Science, University of Parma, Italy

Abstract

"The Bignetti Model" (TBM) is a probabilistic-deterministic model postulating that free will is an illusion with a key role in cognition processes. TBM is in accordance with the evidence that rehearsal of the same stimulus in psychophysical tests facilitates the inter-trial priming effect. This paper aims at modeling the reaction times data sets obtained from a series of press/no-press decisional tasks, with an increasing presence of distractors, by means of rational functions. The goal is obtained by the interaction between numerical analysis tools and meaningful assumptions made for the cognition process. The resulting curves are conceivable in two different frameworks, compatible with TBM: Michaelis and Menten enzyme kinetics, assumed as metaphoric background, and Bayes' Theorem applied to mental information processing. This work represents an example of how computational mathematics can be applied in the context of cognitive psychology.

Received: December 23, 2017; Accepted: March 6, 2018
2010 Mathematics Subject Classification: 65D10, 91E10.
Keywords and phrases: cognitive model, psychophysical test, rational function, data fitting, enzyme kinetics, Bayes' theorem.
1. Introduction

In the current literature and in different contexts, from psychology to neurobiology, from cognitive sciences to experimental philosophy, it is still debated whether FW¹ is an illusion or not (see e.g. [3, 4, 26, 29, 5]). TBM [3] states that FW is an illusion playing a key role in cognition processes; it explains the decision mechanism of the so-called "voluntary" action (ACTION) and how experience is built up from the critical evaluation of the action's outcomes (COGNITION). In particular, in order to carry out an action decision, the agent's UM looks in the memory archives for an action paradigm that has been utilized in similar or identical experiences. Thereafter, the agent's CM, which self-attributes the responsibility of the action (FW illusion), can critically ameliorate the action protocol in the memory archives; this upgrading will be useful to fulfil the expectations in future action decision-making. Hence, ACTION and COGNITION occur in distinct but circularly interdependent moments (see Figure 1).
¹ Throughout the paper, the following abbreviations will be used: FW for free will, TBM for "The Bignetti Model", UM for unconscious mind, CM for conscious mind, PE for priming effect, RTs for reaction times, M&M for Michaelis and Menten, DT for decisional task, SRT for simple reaction times, STM for short-term memory, LTE for learning-through-experience.
Figure 1. A schematic view of the main events in TBM. ACTION: the so-called "voluntary" action in response to a stimulus is unconsciously decided on the basis of earlier experience. COGNITION: the action outcomes are consciously experienced; this experience may either confirm the expectations or further improve the knowledge that will facilitate better action in the future.

More specifically, rehearsal of a stimulus should facilitate a more and more efficient reaction due to the classic PE. In particular, the first reactions given in response to a stimulus manifest a cognitive modality (i.e. at first one needs to understand the stimulus and then elaborate a reaction), while, after some repetitions, the paradigm is known and reactions become faster, instinctive and more efficient, thus manifesting a reflexive modality. Even if a stimulus is not the same as the one before, the agent gains a cognitive added value; this experience will anyhow fix extra knowledge in memory that will somehow be used in analogous situations in the future (consider, as an example, the consequence of compulsory schooling in daily life).

According to TBM, the reaction in response to an external stimulus is elaborated by UM. To this aim, UM replicates the behavioral paradigm already experienced in analogous situations. If there is no past example in the memory archives, the reaction is quite aleatory and the probability of success is very low; however, the amount of experience can be deterministically improved action after action, a posteriori. When the action is carried out, a series of stimuli returns to the mind through sensory afferent fibers, so triggering the attention of CM. CM deludes itself into being the action decision-maker (see the Appendix for the role of the FW illusion at this point of TBM), so that it can critically evaluate the action outcomes by attributing a reward or a blame. Then, on a cause-effect basis, CM can update the paradigms in memory. This is a learning process that cannot change the action just executed, but may increase the probability of success of further UM reactions. Clearly, TBM exhibits a post-adaptive mechanism akin to the Darwinian evolutionary mechanism and to the operant mechanism of animal intelligence. Moreover, TBM describes a probabilistic-deterministic cognitive system, able to explain how a coherent and functional stream of thoughts may be elaborated in intelligent mental processing [3, 4].

Let us note that there is a strong analogy between probabilistic-deterministic behavior in cognition, as proposed by TBM, and other systems in nature. Koch's book [16] reports many bio-physical examples, at the molecular or cellular level, that analyze the relationship between aleatory processes related to individually taken components and collective mechanisms leading to predictable (deterministic) effects; other psycho-chemical examples are exhaustively reported in [2, 3, 4].

Recently, preliminary experiments, based on press/no-press psychophysical tests, have been carried out to challenge TBM [6]. Subjects were engaged to press or refrain from pressing a computer key in response, respectively, to sweet (SWEET) and salted (SALTED) food item images. The results showed a shortening of RTs with trial repetition; the experimental output, which is indicative of a progressive learning of the task paradigm by the subjects, is compatible with TBM expectations. Let us note that this kind of function was first described by Ebbinghaus [10] and then officially termed "learning curve" by Bryan and Harter [7, 13, 30]. In the literature, data obtained by a two-choice task [12] can be elaborated by statistical methods (e.g. ex-Gaussian, shifted Wald parameters or diffusion-based models [28, 23, 24]). Here, we propose an alternative computational approach capable of giving a continuous model for the acquired data, compatible with TBM and suitable both to show the inter-trial PE and to highlight its impairment caused by specific distractors, i.e. effects observable only by means of repetitive trials [21].
The above recalled preliminary results were so promising that they led us to stress TBM with further tests; hence, a more exhaustive set of data is now presented and, above all, the collected RTs are modeled by means of rational curves conceivable in two different frameworks exhibiting a noticeable analogy with TBM [4, 6].

The first one is a molecular example: the reactivity of enzymes present in catalytic amount in solution, interpreted by the famous kinetic study carried out by Michaelis and Menten in [25]. The mathematical M&M analysis leads to a curve that reports a series of initial enzyme reaction velocities $V_i$ monitored at different substrate concentrations. As a matter of fact, the series of $V_i$ corresponds to the averaged macroscopic observation of as many microscopic catalytic rates as there are enzyme molecules; therefore, the probabilistic enzyme/substrate collision gives rise to a predictable hyperbolic function of enzyme reaction rates.

The second one is the framework of information processing, successfully interpreted by Bayes' theory. Without any preceding experience (related to the prior probability), the probability of success of an action in response to a stimulus (i.e. the posterior probability) is practically null; conversely, after iterative stimulations and reactions, the agent can learn from the experience, thus upgrading his skill until the reactions tend to fulfil the expectations. In other words, the higher the probability of encountering the same stimulus, the higher the probability that the agent may upgrade his knowledge towards a deterministically efficient answer: the proof that the learning process progressively increases up to a maximally saturated level comes from the evidence that the ratio between prior and posterior probabilities tends to 1. Note that a similar concept can be found in [15], where a Bayesian learning model fitted to a variety of learning curves was introduced and where it is said that "As experience accumulates, one makes better decisions" and "With each repetition of the activity, one grows more informed and the decision gets better and better; hence the model generates a learning curve".
Summarizing, the aim of this work is to provide and investigate an explicative and paradigmatic continuous model for RTs data sets, compatible with TBM; the goal has been obtained by the interaction between numerical analysis tools and meaningful assumptions made for the cognition process. Hence, the work represents an example of how computational mathematics can be applied in the context of cognitive psychology.

The paper is organized as follows: in Section 2, the employed press/no-press psychophysical tests are described in detail, while in Section 3 strategies for rational function modeling are taken into account and a unifying continuous model for the collected data is finally obtained. Section 4 gives an interpretation of this model in terms of Bayes' Theorem applied to mental information processing. Results are deeply discussed from a cognitive point of view in Section 5 and conclusions are briefly reported in Section 6, while the principal points of TBM are recalled in the Appendix.

2. Psychophysical Tests Description

Press/no-press DTs are carried out by means of dedicated software, developed by M2 Scientific Computing (see [6] for more details), running on a standard desktop computer. Subjects (110 university students of both sexes²) are engaged in 48 sequential trials per DT; at each session, subjects must press the space bar in response to a SWEET food item or refrain from pressing it in response to a SALTED one (24 SWEET and 24 SALTED are randomly presented on the computer screen for 40 ms). The subject may answer the stimulus within 1 s; the software registers the subjects' RTs in ms. Afterwards, an instruction appears to press the bar again within 4 s in case the subject thinks he has given a wrong answer, in either direction (the real and presumed mistakes amount to 3-5% of the total answers).

² Informed consent was obtained for experimentation with human subjects.

To analyze the data, all RTs recorded at each trial are averaged and plotted as a function of trial number: in this way, an LTE curve [10] trend is emphasized by each DT data set. This kind of behavior is exhibited by a quite homogeneous population that has been engaged in the same task (see also preliminary experiments published in [6]). Note that the use of averaged RTs does not affect our study, because the standard deviations evaluated at each session showed a sufficiently small discrepancy of each individual recorded time from the average value.

At the end of each task, subjects are engaged in a calibration test to monitor the time required for their fastest, instinctive and automatic reflex. To this aim, the considered calibration test is the "traffic-lighter test", also named the "red & green" DT. With the green light, participants must press the enter key as fast as possible; these RTs are called SRT. The test is repeated several times to get a significant averaged value for each participant.

The considered DTs are made of 48 trials of increasing complexity:

DT1: 1 SWEET and 1 SALTED, presented 24 times each
DT1-control: 1 SWEET presented 24 times and 24 SALTED (not shown)
DT2: 2 SWEET and 2 SALTED, each repeated 12 times
DT3: 8 SWEET and 8 SALTED, each presented 3 times
DT4: 24 SWEET and 24 SALTED

As already observed for press/no-press DTs in [6], the rehearsal of identical trials triggers the PE, so the subjects' responses become faster and faster, giving rise to the classical learning curve (as in DT1 and DT1-control). Conversely, if the recurrence of identical SWEET is reduced due to the introduction in the task of larger and larger amounts of different SWEET, the learning curves are progressively impaired (as in DT2, DT3 and DT4); this effect is usually attributed to the increase of distractors in the task. In [6], it has been concluded that the distractors are SWEET, i.e. items of the same semantic nature as the stimuli. To confirm this hypothesis, further controls have been made by inserting in DT1 several non-food images (e.g. an umbrella, a car, a pencil, a portrait etc.); none of them changed the DT1 learning curve (not shown). Note that understanding the influence of distractors is a current research field [20].
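For concreteness, the construction of the averaged data sets used in the next Section can be sketched in Python as follows; the array layout and the synthetic numbers are purely illustrative assumptions, not the actual recordings produced by the software of [6].

import numpy as np

# Hypothetical layout: raw_rts[participant, session] holds one RT (in ms) per
# SWEET session for each of the 110 participants; synthetic stand-in data below.
rng = np.random.default_rng(0)
raw_rts = 288.0 + 2860.0 / (5.0 + np.arange(1, 25)) + rng.normal(0.0, 40.0, (110, 24))

S = np.arange(1, 25)                     # session index S_i, i = 1..24
RT = raw_rts.mean(axis=0)                # averaged RT_i over all participants
RT_sd = raw_rts.std(axis=0)              # per-session spread, used as a sanity check

data_set = list(zip(S, RT))              # the pairs {(S_i, RT_i)}, i = 1..24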
3. Rational Functions Based Model for DTs Data

Let us consider the four RTs data sets of the type $\{(S_i, RT_i)\}_{i=1}^{24}$, each related to the corresponding press/no-press DT1, DT2, DT3 and DT4 tests described in the previous Section, where $S_i$ is the index of the session where the $i$th SWEET appeared on the screen and $RT_i$ the corresponding RTs arithmetic average, evaluated w.r.t. all participants' responses. These four data sets are plotted in Figure 2, together with the corresponding standard logarithmic best fitting curve $RT(S) = a \log(S) + b$, obtained by means of the change of variable $x = \log(S)$ and analyzed in [6] as a preliminary approach.
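A minimal sketch of this preliminary logarithmic fit, assuming one averaged data set is available as arrays; the numbers are synthetic placeholders and numpy.polyfit on the transformed variable is just one convenient way to obtain a and b.

import numpy as np

S = np.arange(1, 25)                       # session indices S_i
# synthetic stand-in for one averaged DT data set (ms); real values come from the tests
RT = 288.0 + 2860.0 / (5.0 + S)

x = np.log(S)                              # change of variable x = log(S)
a, b = np.polyfit(x, RT, 1)                # linear best fit RT ~ a*x + b = a*log(S) + b
RT_log = a * np.log(S) + b                 # fitted logarithmic curve

Er2 = np.sum((RT_log - RT) ** 2) / np.sum(RT ** 2)   # cf. the error measure (1) below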
Figure 2. Averaged RTs in the four psychophysical tests: DT1 (top-left), DT2 (top-right), DT3 (bottom-left), DT4 (bottom-right), together with the corresponding standard logarithmic best fit, equipped with the squared relative error in Euclidean norm.
The accuracy of the logarithmic model can be checked a posteriori considering the squared relative error in Euclidean norm, i.e.

\[ E_r^2 := \frac{\sum_{i=1}^{24} \varepsilon_i^2}{\sum_{i=1}^{24} RT_i^2}, \quad \text{with } \varepsilon_i := RT(S_i) - RT_i, \tag{1} \]

which is shown in the same figure.

Now, we study the possibility of giving an alternative and unifying continuous model capable of approximating the four data sets producing errors (1) similar to those obtained by the logarithmic model, compatible with TBM and elaborated on the basis of M&M enzyme kinetics, considered as a metaphor to explain the cognitive process underlying the probabilistic-deterministic LTE curves. This metaphor will be clarified in Section 5, where results will be deeply discussed from a cognitive point of view. The background is that M&M's kinetics equation [25] is a mathematical formalism that describes the hyperbolic dependence of initial enzyme reaction rates as a function of the substrate concentration [S] in "steady-state experimental conditions" (i.e. the assays are carried out in solution with catalytic amounts of enzyme [E] and an excess of [S]). According to M&M's assumptions, the scheme of enzyme catalysis can be written in two steps:
\[ [E] + [S] \rightleftharpoons [ES] \Rightarrow [E] + [P], \]

where [ES] is the amount of binary complex and [P] is the amount of reaction product. The second step is considered the slowest step in the overall enzyme recycling process, while the production of the binary complex [ES] in the first step is a dynamic equilibrium regulated by the mass-action law. This equilibrium regenerates [ES] in the first step much faster than it is destroyed in the second one; therefore, at the very beginning of each kinetic assay, [ES] may be considered constant and the initial velocity is proportional to it, i.e.
\[ V_i = K_{cat} [ES], \]
where $K_{cat}$ is the rate constant. According to the collision theory, it is known that the substrate binding of a single enzyme molecule is an aleatory process. However, with many enzyme molecules, the ratio between the concentration of the binary complex and the total enzyme concentration, which in [17] is called fractional saturation and which represents the binding probability, i.e. the probability that the enzyme is bound to the substrate [8], can be proved to be a hyperbolic function of the substrate concentration [17]:
\[ \frac{[ES]}{[E_{tot}]} = \frac{[S]}{K_m + [S]}, \tag{2} \]

where $K_m$ represents the enzyme affinity for [S]. Therefore, [ES] in the different assays varies from 0, when $[S] = 0$, to the maximal value $[ES] = [E_{tot}]$, for [S] growing to infinity. As a consequence, also the dependence of $V_i$ on [S] can be deterministically predicted as

\[ V_i = \frac{V_{max} [S]}{K_m + [S]}, \tag{3} \]
where $V_{max} := K_{cat} [E_{tot}]$ is the saturation value for $V_i$. Note that $V_i / V_{max}$ is equal to the binding probability or, in other words, $V_i$ represents a probability curve in a-dimensional space.

Fixing this metaphoric background, the key point for the above declared goal is to investigate if it is possible to obtain, at first, a good continuous rational model for the DT1 data set, which is related to the DT without distractors, where the same SWEET image is repeated randomly 24 times in the 48 sessions of the test. This is described in the following Subsection.
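As a purely illustrative aside, the saturating behavior expressed by (2)-(3) can be evaluated numerically; the parameter values in this sketch are arbitrary demo choices, not quantities estimated in this paper.

import numpy as np

# Illustrative evaluation of the M&M relations (2)-(3)
Km, Vmax = 2.0, 10.0                         # arbitrary demo parameters
S_conc = np.linspace(0.0, 50.0, 201)         # substrate concentration [S]

binding_prob = S_conc / (Km + S_conc)        # [ES]/[Etot], eq. (2): saturates towards 1
V_init = Vmax * S_conc / (Km + S_conc)       # initial velocity V_i, eq. (3): saturates towards Vmax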
3.1. The rational model for DT1

We consider a rational function of degree 2 of the type

\[ RT(S) = C - \frac{V_{max} S}{K_m + S}, \tag{4} \]

depending on three real, not trivial, parameters: $C$, $V_{max}$, $K_m$. This function has an horizontal asymptote, given by

\[ RT_{min} := \lim_{S \to \infty} RT(S) = C - V_{max}, \tag{5} \]

and a vertical one in $S = -K_m$. Due to (4) and (5), parameter $C$ can be rewritten as

\[ C = RT(0) = RT_{min} + V_{max}. \tag{6} \]

Let us further observe that from (4) we can define

\[ V(S) := C - RT(S) = \frac{V_{max} S}{K_m + S}, \tag{7} \]

i.e., a M&M enzyme kinetics probability curve in a-dimensional space, which, in particular, has an horizontal asymptote, seen as saturation value; of course, it holds

\[ \lim_{S \to \infty} \frac{V(S)}{V_{max}} = 1. \tag{8} \]
In principle, we would have to determine the parameters $C$, $V_{max}$, $K_m$, but taking advantage of a meaningful assumption for the cognition process we can reduce the number of unknowns. In fact, supposing that, at infinity, $RT_{min}$ can be approximated by the averaged SRT latency (see Section 2) exhibited by all participants, i.e. 280 ms, we will succeed in obtaining an initial model, whose parameter $C$ will be used in a successive improvement of the rational curve. To this aim, we can proceed using equality (6) to rewrite (4) as

\[ RT(S) - RT_{min} = V_{max} - \frac{V_{max} S}{K_m + S} = \frac{V_{max} K_m}{K_m + S}. \tag{9} \]
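The algebraic identity in (9) follows directly from (4) and (6); a quick symbolic check, sketched with sympy and with symbols named after the paper's parameters:

import sympy as sp

S, C, Vmax, Km = sp.symbols('S C V_max K_m', positive=True)

RT = C - Vmax * S / (Km + S)          # the rational model (4)
RTmin = C - Vmax                      # its horizontal asymptote (5)

# difference between the left- and right-hand sides of (9); simplifies to 0
check = sp.simplify((RT - RTmin) - Vmax * Km / (Km + S))
print(check)                          # -> 0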
Now, we can apply an approximated best fitting procedure to the ratio in the right-hand side of (9), rewritten as $\frac{a_1}{a_2 + S}$, searching the unknowns $a_1, a_2$ as solution of the linearized problem

\[ \min_{(a_1, a_2) \in \mathbb{R}^2} \sum_{i=1}^{24} \left[ a_1 - (RT_i - RT_{min})(a_2 + S_i) \right]^2. \tag{10} \]

Having defined

\[ A = \begin{bmatrix} 1 & -(RT_1 - RT_{min}) \\ \vdots & \vdots \\ 1 & -(RT_{24} - RT_{min}) \end{bmatrix}, \quad a = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix}, \quad b = \begin{bmatrix} (RT_1 - RT_{min}) S_1 \\ \vdots \\ (RT_{24} - RT_{min}) S_{24} \end{bmatrix}, \tag{11} \]

this is done solving the linear system of order 2

\[ A^T A \, a = A^T b, \tag{12} \]

under the hypothesis that $A$ is a full rank matrix. This technique is similar to what is usually done for rational interpolation [31] and can be found, for instance, in [18] in the context of complex-curve fitting, as recently recalled in the applicative papers [9, 27].

In Figure 3 we show the rational model obtained for DT1, fixing the value of $RT_{min}$, together with the corresponding squared relative error in Euclidean norm (1). From this curve, we obtain that $C = RT(0) = 860$ ms.

The following step can be conceived as both a validation of the parameter $C$ and an improvement of the previous curve. Actually, going back to equation (4) and fixing the constant $C = 860$, we search the remaining two parameters $V_{max}$ and $K_m$ in the following way. From (7), we deduce the equality

\[ \frac{1}{C - RT(S)} = \frac{K_m}{V_{max}} \frac{1}{S} + \frac{1}{V_{max}}, \tag{13} \]
and with the change of variables $y = (C - RT(S))^{-1}$, $x = S^{-1}$, $a = K_m / V_{max}$ and $b = V_{max}^{-1}$, $V_{max}$ and $K_m$ can be determined after finding the standard linear best fitting coefficients $a, b$ of the regression line $y = ax + b$, given the double reciprocal of $(S_i, C - RT_i)$. The Pearson coefficient, which measures a priori the linear correlation, is very good because it turns out to be 0.96. It is worth noting that the use of the double reciprocal of given data in Biochemistry dates back to [19].
Figure 3. DT1 data and continuous $RT(S)$ model, having fixed the parameter $RT_{min}$, obtained by linearized rational best fitting.

In Figure 4 we show the rational model obtained for DT1, fixing the value of $C$, together with the corresponding squared relative error in Euclidean norm (1), which is smaller than the previous one. Moreover, it turns out that the value $RT_{min} = 288$ ms, which in this model is a computation output, is in good agreement with the averaged SRT latency exhibited by all participants. For these reasons, we will consider this last rational model, coming from the standard linear best fitting on the double reciprocal of the DT1 data, to continue the numerical analysis of the remaining data sets in the following Subsection.
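The two fitting steps just described can be condensed into a short numerical sketch; it assumes the averaged DT1 pairs $(S_i, RT_i)$ are available as arrays (synthetic placeholders are used here) and follows (10)-(13), with $RT_{min}$ fixed at the averaged SRT of 280 ms in step 1 and $C$ fixed in step 2.

import numpy as np

# Synthetic placeholder for the averaged DT1 data set {(S_i, RT_i)}, i = 1..24;
# in the paper these are the measured averaged reaction times (ms).
S = np.arange(1.0, 25.0)
rng = np.random.default_rng(1)
RT = 860.0 - 572.0 * S / (5.0 + S) + rng.normal(0.0, 10.0, S.size)

def sq_rel_error(model, data):
    # squared relative error in Euclidean norm, eq. (1)
    return np.sum((model - data) ** 2) / np.sum(data ** 2)

# Step 1: linearized rational best fit (10)-(12), with RT_min fixed at the
# averaged SRT latency (280 ms, Section 2).
RT_min = 280.0
A = np.column_stack([np.ones_like(S), -(RT - RT_min)])
b = (RT - RT_min) * S
a1, a2 = np.linalg.solve(A.T @ A, A.T @ b)           # normal equations (12)
RT_step1 = RT_min + a1 / (a2 + S)                    # initial model, cf. (9)
C = RT_min + a1 / a2                                 # C = RT(0) of the initial model

# Step 2: with C fixed, linear best fit on the double reciprocal of (S_i, C - RT_i),
# eq. (13): 1/(C - RT) = (Km/Vmax)*(1/S) + 1/Vmax.
x, y = 1.0 / S, 1.0 / (C - RT)
slope, intercept = np.polyfit(x, y, 1)
Vmax = 1.0 / intercept
Km = slope * Vmax
RT_step2 = C - Vmax * S / (Km + S)                   # final rational model (4)

print(C, Vmax, Km, sq_rel_error(RT_step2, RT))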
Figure 4. DT1 data and continuous $RT(S)$ model, having fixed the parameter $C$, obtained by linear best fitting on the double reciprocal of $(S_i, C - RT_i)$.

Remark. For the discrete data set $\{(S_i, V_i)\}_{i=1}^{24}$, where $V_i := C - RT_i$,
remembering the definitions given in (1), (4), (7), we can write

\[ V_i = \frac{V_{max} S_i}{K_m + S_i} + \varepsilon_i. \tag{14} \]

Equation (14) is the model addressed in optimal design for M&M kinetic studies [22], where statistical methods are used to optimally choose the substrate concentrations $[S]_i$ at which to monitor the initial enzyme reaction rates $V_i$, before applying a fitting process to obtain the unknown parameters assuming certain distributions for the errors $\varepsilon_i$.

3.2. The rational model for the remaining data sets
The last obtained curve, shown in Figure 4, is the starting point to deduce the remaining ones, operating as in a non-competitive inhibition framework of the M&M model [25]. Uncompetitive and mixed inhibition models have been computationally analyzed too (not shown), but the approximation of the four data sets obtained in the non-competitive context has turned out to be more accurate. Hence, we suppose that the remaining data sets follow a continuous model of the type

\[ RT(S) = C - \frac{V_{max}^{app} S}{K_m + S}, \tag{15} \]

where $C$, $V_{max}$, $K_m$ are the same as in the last DT1 data model and where, using a typical M&M enzyme kinetics notation,

\[ V_{max}^{app} := \frac{V_{max}}{\alpha}, \quad \text{with } \alpha := 1 + \frac{[I]}{K_i}, \tag{16} \]
where [I] is the amount of inhibitor and $K_i$ the affinity constant of the inhibitor for the enzyme, which is usually related to a large amount of substrate [S]. The problem now is to properly fix in a meaningful way these new parameters for our RTs modeling.

Table 1. Results for the rational $RT(S)$ and M&M probability $V(S)$ curves obtained from the computation. For Table readability, except for the values of [I] and $K_i$, the shown integer values are approximations rounding exact values. $V_{max} = 572$, $K_m = 5$, $K_i = 45$.

                   RT(1)   RT(48)   RT_min   V(1)   V(48)   V_max^app   E_r^2
DT1: [I] = 0        755     337      288     105     523       572      0.0075
DT2: [I] = 12       777     447      409      83     413       451      0.0202
DT3: [I] = 21       788     503      470      72     357       390      0.0090
DT4: [I] = 23       790     514      482      70     346       378      0.0132
In (16), [I] has been chosen as the number of SWEET images not conforming to the initial one, given as first stimulus: in this way, [I] can be conceived as related to the probability of encountering a distractor (i.e. inhibitor) during the DT tests, and therefore it is fixed differently from DT2 to DT4. The value of $K_i = 45$, instead, has been chosen as the index of the trial (each DT test is made of 48 trials) giving the least squared relative errors in Euclidean norm.

In Table 1, we show the results obtained from the computation. Note that the errors are similar to, if not slightly better than, those obtained by the logarithmic best fitting, already shown in Figure 2. In Figure 5, we present the rational approximation of the four data sets as obtained from the previously described procedure: the four curves follow the corresponding data trend, giving acceptable squared relative errors in Euclidean norm. The values of $1/V_{max}^{app}$ as a function of [I] are plotted in Figure 6. In Figure 7 we show the four M&M probability curves $V(S)$ defined similarly as in (7), all normalized w.r.t. $V_{max}$, while their double reciprocals are reported in Figure 8.
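A compact sketch of how the unifying model (15)-(16) can be evaluated with the Table 1 parameters; since the tabulated parameters are rounded, the printed values reproduce Table 1 only approximately.

import numpy as np

# Evaluation of the non-competitive inhibition model (15)-(16) with the Table 1
# parameters; C = 860 ms is the value obtained for DT1 in Subsection 3.1.
C, Vmax, Km, Ki = 860.0, 572.0, 5.0, 45.0
inhibitor = {"DT1": 0, "DT2": 12, "DT3": 21, "DT4": 23}   # [I] per decisional task

S = np.arange(1.0, 49.0)                                  # trial index 1..48
for task, I in inhibitor.items():
    Vmax_app = Vmax / (1.0 + I / Ki)                      # eq. (16)
    RT = C - Vmax_app * S / (Km + S)                      # eq. (15)
    # first trial, last trial and asymptote, to be compared (approximately) with Table 1
    print(task, round(RT[0]), round(RT[-1]), round(C - Vmax_app))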
Figure 5. Data sets and related rational curves $RT(S)$.
Figure 6. Values of $1/V_{max}^{app}$ as a function of [I].

Figure 7. M&M probability curves $V(S)/V_{max}$.
Figure 8. Double reciprocal of probability curves.
To conclude, let us remark that the linear best fitting on the double reciprocals of the data has been applied only to the no-inhibition DT1 case, while the remaining curves have been deduced from the first one, suitably fixing the additional parameters as described above. This choice was made in order to explore whether the non-competitive inhibition framework of the metaphoric M&M background could be capable of furnishing a sufficiently good unifying model for the collected data in the presence of distractors. Anyway, the linear best fitting on the double reciprocals of the DT2, DT3 and DT4 data sets, fixing the same value of $C$ used in DT1 as unifying normalization constant, has also been carried out, and the results are analogous to those obtained with our procedure: for instance, the squared relative error on the DT4 data turns out to be $E_r^2 = 0.0080$, therefore only slightly better than the corresponding one in Table 1 and the corresponding one in Figure 2. Summarizing, we can conclude that the RTs data rational model proposed in this paper gives results comparable with those obtained by the initially shown logarithmic best fitting model, which was our goal.
4. M&M Probability Curves and Bayes’ Theorem
From the equation of the M&M probability curves $V(S)/V_{max}$, after a straightforward manipulation, we can write

\[ \frac{V(S)}{V_{max}} = \frac{V(S-1)}{V_{max}} \cdot \frac{\frac{K_m}{S-1} + 1}{\frac{K_m}{S} + 1}. \tag{17} \]

Then, in the context of the press/no-press psychophysical tests here taken into account, (17) can be interpreted as Bayes' Theorem formula

\[ p(A \mid E) = p(A) \, \frac{p(E \mid A)}{p(E)}, \tag{18} \]

since $V(S)/V_{max}$ can be seen as the posterior probability, which is upgraded after each experience (read: trial), $V(S-1)/V_{max}$ as the prior probability and the remaining factor on the right-hand side of (17) as the ratio between $p(E \mid A)$ and $p(E)$. Note that $p(E)$ is independent of the action-cognition mechanism shown in Figure 1: for the two-choice DTs considered in this paper, we have $p(E) = 0.5$.
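The recursion (17) and the likelihood factor can be checked numerically; a small sketch, assuming the $K_m$ value fitted for DT1 and $p(E) = 0.5$ for the two-choice DTs.

import numpy as np

Km = 5.0                                 # affinity parameter fitted for DT1 (Table 1)
S = np.arange(1, 49)                     # trial index

posterior = S / (Km + S)                 # closed form V(S)/Vmax = S/(Km+S)

# recursive Bayes-like update (17), started from the first trial
post_rec = np.empty_like(posterior)
post_rec[0] = posterior[0]
for s in range(2, 49):
    factor = (Km / (s - 1) + 1) / (Km / s + 1)        # ratio p(E|A)/p(E) in (17)-(18)
    post_rec[s - 1] = post_rec[s - 2] * factor

assert np.allclose(post_rec, posterior)  # the recursion reproduces the closed form

p_E = 0.5                                # two-choice DT
likelihood = p_E * (Km / (S[1:] - 1) + 1) / (Km / S[1:] + 1)   # p(E|A) for S >= 2; tends to p(E)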
Figure 9. Likelihood function.
In Figure 9, we show the likelihood function $p(E \mid A)$: for growing values of $S$, $p(E \mid A)$ tends to the value of $p(E)$ or, equivalently, the ratio between posterior and prior probabilities converges to 1. Note that the obtained graph represents the tail of a Gaussian distribution: this is worthy of further investigation, scheduled as future work. Let us finally recall that formula (18) is also used in [11] to describe the reaction to a stimulus, in particular to explain how an organism changes the environment through interaction, having received a sensory signal from the environment itself.

5. Discussion
In a previous paper [6], some of us carried out preliminary psychophysical press/no-press experiments with SWEET and SALTED food items; the press reaction was due to SWEET appearing on the screen. The results showed a shortening of the RTs from trial to trial, i.e. a learning curve due to a PE. Herein, new experiments have been added, at first, to confirm the PE and, secondly, to give a deterministic prediction of the underlying psychological mechanism by analogy with two quite different biological systems: the molecular behavior of enzymes in steady-state reactions as explained by the M&M model, and mental information processing as explained by Bayes' theory.

The paradigm that underlies all DTs can be envisaged as the following sequence of compulsory steps:

- step 1: item identification
- step 2: action decision-making
- step 3: press/no press action

It is highly conceivable that each of the RTs may be the sum of the times required to account for the 3 steps in the above sequence; however, it seems that the PE is progressively impaired from DT1 to DT4.
Then, the challenge of this work is to find a unique model that may mathematically predict all the data and, in particular, the PE impairment, on the basis of a common paradigm as the one described above. A noticeable PE reduction occurs as the SWEET per task are changed from only one kind (repeated 24 times) to 24 kinds, all different, respectively from DT1 to DT4. In the literature, it is known that subjects engaged in press/no-press or go/no-go psychophysical experiments may be distracted by items different from the expected signal; if distractors belong to the same semantic category as the signal, the effect on PE impairment is particularly evident. This phenomenon was confirmed also by our data, since PE impairment increases from DT2 to DT4; conversely, by changing the SALTED items or randomly introducing in the tasks images of a totally different semantic category, like a ball, a car, an umbrella and so on, the PE observed in DT1 is not altered (data not shown).

An explanation of our collected RTs data rests on the hypothesis that each subject memorizes the paradigm of his task in STM (see [14] as a recent contribution to STM); the rehearsal of this paradigm in STM, at each trial, typically reinforces the expectations from the preceding trial to the next one, thus ameliorating the performance (i.e. RTs shortening). In DT1, steps 1 and 2 become simpler and simpler from trial to trial, thus significantly shortening RTs. Conversely, step 3 is a mechanical, purely muscular motion, so it is a physiological reflex that cannot decrease below an intrinsic limit (see Figure 10).

In the opposite situation, i.e. in DT4, all 24 SWEET are different, so the subject must recall the new SWEET image from long-term memory at every trial; then, the RTs of step 1 cannot be shortened. Moreover, step 2 cannot exhibit a great deal of automaticity, since it is not clear whether the continuous change of SWEET in step 1 would also implicate different decisions or not (i.e. a change in paradigm); due to this uncertainty, the subject has to recall in STM the rules of the original paradigm every time. A partial automaticity in step 2 will become visible only after several trials, when it becomes highly probable in the subject's mind that all the SWEET identified in step 1 belong to the category which requires a press response; in conclusion, step 2 can only partially contribute to the overall shortening of RTs (see Figure 11). All these effects cause in DT4 the greatest PE impairment, with the DT2 and DT3 curves standing in between.
Figure 10. Representation of DT1, according to the 3-compulsory-steps paradigm.

Figure 11. Representation of DT4, according to the 3-compulsory-steps paradigm.
To give a quantitative explanation of the PE impairment from DT1 to DT4, the data have been analyzed according to a computational approach compatible with two different scenarios: (a) the M&M enzyme kinetics equation and (b) Bayes' theory about mental information processing.

(a) In the M&M framework, a unifying continuous approximation of the RTs data has been obtained from suitable rational curves, in the presence of a non-competitive inhibition (see Section 3). The metaphoric assumptions are that:
(1) the subject’s mind corresponds to the M&M enzyme concentration in catalytic amount ([Etot ]