Matlab optimization of an IMPRINT model of human behavior

William D. Raymond Department of Psychology University of Colorado, Boulder Boulder, CO 80309-0345 303-492-5032 [email protected]

Bengt Fornberg Department of Applied Mathematics University of Colorado, Boulder Boulder, CO 80309 303-492-5915 [email protected]

Carolyn J. Buck-Gengler Department of Psychology University of Colorado, Boulder Boulder, CO 80309-0345 303-492-6943 [email protected]

Alice F. Healy Department of Psychology University of Colorado, Boulder Boulder, CO 80309-0345 303-492-5032 [email protected]

Lyle E. Bourne, Jr. Department of Psychology University of Colorado, Boulder Boulder, CO 80309-0345 303-492-4210 [email protected]

Keywords: Model optimization, genetic algorithm minimization, simulated annealing, IMPRINT, Matlab, digit data entry, cognitive and motoric modeling, individual differences

ABSTRACT: Models of human performance can be improved through optimization, although search of large parameter spaces can be exceedingly time-consuming. One approach to facilitating optimization is the use of the Matlab computing environment. In the current study, an IMPRINT model of a cognitive and motoric task that had been used in two experiments was translated into Matlab code that used matrix syntax. Two optimizers available within the environment (using genetic algorithms and simulated annealing) were then employed to heuristically search a 5-variable parameter space to validate the IMPRINT model's hand-derived parameter values. Multiple search results were consistent with each other and also with the experimental results. Because of the stochastic nature of the model, it was possible to use search result variability to examine relationships among model parameters. This analysis led to a better understanding of relationships among parameters in the model.

1. Introduction

Computational models of human performance seek to capture perceptual, cognitive, and motoric behavior across a range of contexts and conditions using a relatively small number of model parameters. Identifying parameter values that optimize model goodness-of-fit to human data can be challenging, even if the range of contexts and conditions is constrained, because a model's parameter space can be very large. The most basic approach to parameter optimization is direct computational exploration of the multivariate parameter space through repeated model runs. However, this method can be exceedingly time consuming, the more so as the length of each model run and the size of the parameter space increase. One avenue of attack that has been pursued to overcome search problems is to increase computational power.

In one recent study exploring the potential of large-scale computational resources for parameter optimization, Gluck, Scheutz, Gunzelmann, Harris, and Kershner (2007) used Wright-Patterson's High Performance Computing Center (HPCC) to explore a 4-parameter space defined by an ACT-R model of human performance on a simple arithmetic task under various levels of fatigue. Although the HPCC made a detailed search of the space feasible, incremental exploration of 1.7 million points in the space still consumed 96,000 processor hours over a period of several weeks.

The current study takes advantage of two opportunities for dramatically reducing optimization times, and thus increasing the power of parameter optimization, without increasing computational power. The first step was conversion of an IMPRINT model of human performance on a simple cognitive and motoric task into a programming environment, Matlab, designed for high-speed scientific computation. The IMPRINT model was initially developed to simulate human performance in a digit data entry task that had been used in two laboratory experiments. The second step was the use, on the converted model, of two specialized multivariate optimizers available in Matlab, which use simulated annealing (SA) and genetic algorithms (GA) as search techniques. The optimizers were used to repeatedly search the 5-parameter space defined by the model of human performance in order to approximate global optima for the parameters.

The model optimization reported here is part of a larger research program aimed at understanding the effects of training on human performance that includes the computer modeling of human performance (Gonzalez, Fu, Healy, Kole, & Bourne, 2006; Healy, 2006; Raymond, 2007).

2. The experimental tasks and results

The experiments were used in a study by Healy, Kole, Buck-Gengler, and Bourne (2004) to investigate the effects of prolonged practice on speed and accuracy in data entry.[1] In the experiments, right-handed subjects were presented with 10 blocks of 64 4-digit numbers on a computer screen, in two session halves of five blocks each, for a total of 640 trials. On each trial, subjects typed the number followed by the enter key as quickly and accurately as possible on the computer keyboard's number pad. In Experiment 1, each of 64 numbers was seen once per block in a given session half, so that each number was repeated five times per half, with different sets of 64 numbers in the two halves. In Experiment 2, numbers were not repeated: subjects saw 640 unique 4-digit numbers (64 per block) over the course of the experiment. In Experiment 1, subjects used the left hand to type all numbers; in Experiment 2, hand use (left, right) was crossed with session half (first, second). Subjects did not see the digits they typed and were given no feedback on accuracy.

[1] See Buck-Gengler, Raymond, Healy, and Bourne (2007) for a more detailed description of the model and its development. Note that although the model simulates both response times and accuracy, only parameters controlling response times were optimized, so accuracy results and modeling are not discussed here.

Subject response times were measured on each trial for each keystroke, up to and including the enter key. Total response times (TRTs; 4 digits plus the enter key) for correct responses varied for all subjects across individual trials, with TRTs on individual trials (as well as individual keystroke response times) showing a right-skewed distribution. Block means for TRTs decreased in the first half of both experiments, but decreased to a greater extent when numbers were repeated (Experiment 1), suggesting both skill learning and an additional repetition priming effect that produces specific sequence learning with repetitive practice. Subjects' TRTs changed with practice regardless of the hand used for typing, but TRTs were generally longer with the left hand than with the right hand. The first keystroke had, on average, longer response times (RTs) than subsequent keystrokes, which was taken as evidence of cognitive processing for reading and encoding the digits before typing the first keystroke; the remaining keystrokes thus largely reflected the physical component of typing. Mean RTs decreased across blocks on all keystrokes in the first half of both experiments, with most improvement on the first keystroke, suggesting both cognitive and motoric learning, with greater learning for the cognitive component than for the motoric component.

Without number repetition (Experiment 2), first-keystroke RTs increased across blocks in the second session half, although RTs for the other keystrokes decreased, suggesting eventual cognitive slowdown with continued practice. The onset of slowdown varied across individuals.

The third keystroke was longer, on average, than the second and fourth, suggesting that subjects divided the 4-digit numbers into two 2-digit chunks and performed additional cognitive processing between chunks (see Fendrich, Healy, & Bourne, 1991). However, about half of the subjects showed far less evidence of digit chunking than the other half, indicating that subjects made a fairly consistent strategy choice to divide numbers into chunks or not, although the likelihood of chunking varied within each group. There was no relation between a subject's strategy choice and RT change with practice (Raymond, Buck-Gengler, Healy, & Bourne, 2007).

3. The IMPRINT model


The data entry task was modeled in IMPRINT using goal-oriented programming (Buck-Gengler et al., 2007). In the IMPRINT model of the task, main and goal networks run in parallel to simulate a subject performing data entry. The main network simulates the display of numbers on the computer, and the goal network simulates human performance in typing each number. The main network sets experiment variables for typing hand and number repetition, allowing simulation of both experiments within the same model.

The goal network was based on a cognitive model of data entry that consists of three processing stages: (a) digits are read from the screen and a mental representation of the number is created; (b) the representation is used to guide development of a motor output plan; and (c) the motor plan is accessed and implemented to execute each keystroke in sequence. The model's stages are performed sequentially on contiguous series of digits until all digits have been processed.

The cognitive model was implemented as a sequence of IMPRINT tasks that comprise a trial. Each task contributes a fixed proportion of a trial's TRT, where TRT is selected once for each statistical subject from a normal distribution around an experimentally derived group mean. Task times (and thus TRTs as well) are varied across trials by multiplying each task's proportion of TRT by an amount drawn for the task from a right-skewed distribution. As in the experiments, all task times generated by the model become shorter with typing practice and, in the simulation of Experiment 1, with number repetition. Cognitive and motoric speed improvements with practice are governed by power functions of trial number. Improvement accumulates only on correct trials. Repetition priming is simulated with a power function of repetition number (i.e., block). Task times for left-hand typing are a constant multiple of right-hand task times. The onset of cognitive slowdown for each statistical subject is taken from an experimentally derived, right-skewed distribution, and slowdown for the first keystroke is implemented by adding a linear function of block after slowdown onset to task times for the first keystroke. Statistical subjects were randomly designated with equal probability as chunkers or non-chunkers, with chunkers having a higher probability of chunking on any trial than non-chunkers. The mathematical functionality of the goal network's simulation of subject performance was concentrated in "beginning" and "ending" effects for the IMPRINT tasks, with flow control implemented using the IMPRINT control architecture.

The values for the parameters controlling the effects modeled in the final version of the IMPRINT simulation reported in Buck-Gengler et al. (2007), referred to here as the "hand-derived" parameter values, were determined from the experimental results through the following iterative process. A single set of parameter values was used to simulate both experiments. A value was estimated from the data for a model parameter and, with the values of the other parameters held constant, the model was run to simulate a group of statistical subjects in each experiment. The results of the model runs were compared against the experimental results affected by the parameter. The parameter setting was then adjusted to minimize discrepancies between the experimental and model results, and the model was run again with the new value. Each parameter was set in this way. Occasionally, setting one parameter required readjusting parameters that had been previously set.
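To make the task-time scheme above concrete, the following is a minimal Matlab sketch of how times of this general form could be generated. All names, proportions, and distribution parameters here are illustrative assumptions, not values from the actual model; in particular, the model uses separate cognitive and motoric learning exponents for different tasks and adds repetition priming, hand, and slowdown effects that are omitted from this sketch.

```matlab
% Illustrative sketch of power-law learning applied to proportional task
% times (all numeric values are assumptions, not the model's).
nTrials   = 640;                        % 10 blocks x 64 numbers
nTasks    = 5;                          % e.g., one task per keystroke
propTRT   = [0.35 0.16 0.17 0.16 0.16]; % assumed fixed proportion of TRT per task
subjTRT   = 2.0 + 0.3*randn;            % subject mean TRT from a normal distribution
cogLearn  = -0.045;                     % single power-function exponent (simplified)
trial     = (1:nTrials)';
learning  = trial .^ cogLearn;                   % improvement with practice
noise     = exp(0.25*randn(nTrials, nTasks));    % right-skewed (lognormal) trial noise
taskTimes = subjTRT * (learning * propTRT) .* noise;  % nTrials x nTasks matrix
TRT       = sum(taskTimes, 2);                   % simulated total response times
```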

Goodness-of-fit of the model was assessed for each experiment with correlation coefficients and root mean square errors (RMSEs) on block means (for each RT measure in all conditions) between model outputs and the experimental data. In particular, correlations and RMSEs for Experiment 1 were calculated using block means of each of the 5 keystroke times (one for each digit and the enter key) (50 measures); the same measures were used for each of the 4 subject groups (hand use in first session half x hand use in second session half) in Experiment 2 (200 measures).
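As a sketch, assuming the model and experimental block means are collected in equal-length vectors (the variable names here are hypothetical), the two goodness-of-fit measures can be computed as:

```matlab
% Goodness-of-fit sketch: correlation and RMSE between model and
% experimental block means (modelMeans and dataMeans are hypothetical
% vectors holding the 50 or 200 block-mean measures described above).
c    = corrcoef(modelMeans(:), dataMeans(:));
r    = c(1, 2);                                        % correlation coefficient
rmse = sqrt(mean((modelMeans(:) - dataMeans(:)).^2));  % root mean square error
```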

4. Conversion from IMPRINT to Matlab

Matlab is a computing environment and programming language developed by The MathWorks. It evolved from FORTRAN in the late 1970s and has since become one of the most widely used programming environments in science and engineering, comprising powerful tools for graphics display, optimization, and code debugging and profiling. Matlab commands are initially interpreted, but statements are compiled on their first execution and subsequently reused. The programming language supports matrix operations, and when this capability is exercised, execution speeds approach the computer hardware's theoretical maximum.

Because functionality in the IMPRINT model had been implemented as code within the "beginning" and "ending" effects of the tasks, translating the IMPRINT model's algorithms into the Matlab programming language was straightforward. It was also possible to use matrix operations to implement the repetitive components of the data entry task. For example, the operations that apply to all numbers in a block (e.g., RT improvements that accrue through learning, quantified by a power function of trial number) are applied to an array as a whole, rather than to individual elements in turn.

The resulting Matlab implementation of the data entry model was shorter than the IMPRINT model and can easily be viewed in its entirety as a single program. It also ran in a fraction of the time required by the IMPRINT model. Fornberg, Raymond, and Best (2007) provide a comparison of aspects of the IMPRINT and Matlab models. In summary, when running on roughly similar processors (about 3 GHz, single processors), a full simulation of 32 statistical subjects in Experiment 1 (each with 10 blocks of 64 numbers) required about 24 minutes for the IMPRINT model but only 0.086 seconds for the Matlab implementation. The goodness-of-fit of the two models to the experimental data was comparable.

It should be stressed that this comparison might be misleadingly favorable to Matlab, because the model-specific operations of the IMPRINT model could be readily translated into Matlab's matrix syntax. Not all IMPRINT models could be translated as easily or run as efficiently in Matlab. Moreover, without the use of matrix operations, Matlab's speeds would likely drop by a factor of around 10.
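The factor-of-10 point can be illustrated with a minimal sketch (values here are illustrative): computing the power-function learning factor for a session's trials element by element versus as a single matrix-syntax expression.

```matlab
% Sketch of the vectorization gain: the power-law learning factor for all
% 640 trials, computed element by element vs. as one array operation.
trial = 1:640;
c = -0.045;                  % cognitive learning exponent (illustrative)

f = zeros(size(trial));      % loop version: roughly an order of magnitude slower
for k = 1:numel(trial)
    f(k) = trial(k)^c;
end

f = trial .^ c;              % vectorized version: one matrix-syntax expression
```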

5. Optimization method

Matlab can be supplemented with a large number of optional toolboxes, one of which includes both the SA and GA search techniques. Both are immensely successful strategies for heuristically searching high-dimensional parameter spaces, approximating global optima far more efficiently than an exhaustive search of the parameter space. Both search techniques borrow their key ideas from processes in nature: crystal formation through slow cooling in the case of SA, and biological evolution in the case of GA. SA was recognized by Computing in Science and Engineering as one of the top 10 algorithms of the century for its profound impact on many areas of science and engineering (see Dongarra & Sullivan, 2000); GA represents an equally powerful concept. Invoking either SA or GA for Matlab model optimization requires only 2-3 additional lines of Matlab code. Both GA and SA were used in the current study, so that the two search techniques could be compared.
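As a sketch of what such an invocation might look like (option names assume a recent release of the Global Optimization Toolbox; `objective` is the averaged-RMSE function defined below, and the bounds are those given in Table 1 below):

```matlab
% Sketch of invoking the two optimizers on the model's objective function.
lb = [-0.50 -0.20 0.00 1.00 0.00];   % lower bounds, in Table 1 order
ub = [ 0.00  0.00 0.50 3.00 0.20];   % upper bounds

opts = optimoptions('ga', 'PopulationSize', 30, 'MaxGenerations', 60);
[xGA, fGA] = ga(@objective, 5, [], [], [], [], lb, ub, [], opts);

x0 = lb + rand(1, 5) .* (ub - lb);   % random start within the search range
[xSA, fSA] = simulannealbnd(@objective, x0, lb, ub);
```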

The first step in parameter optimization was to define the parameter space to be searched. The parameters in the IMPRINT/Matlab models can be divided into constants that affect RTs at the trial level, the subject level, and the task level. Parameters affecting performance at the trial and subject levels (i.e., those governing the right-skewed distribution of keystroke RTs on individual trials, the distribution of subject average TRTs, the distribution of slowdown onset blocks across subjects, and the chunking group distribution and chunking rate distribution for each group) were not optimized. The parameters chosen for optimization were thus the constants in the functions that control effects on aggregate performance with practice at the task level, namely the effect of typing hand, the rates of cognitive and motoric learning, the rate of learning from sequence repetition, and the rate of cognitive slowdown after slowdown onset.

After selecting the parameters for optimization, ranges defining the search space were set. Each search range was bounded on one side by a theoretical constraint. The learning parameters, which are exponents of power functions, were required to be negative to ensure improvement with practice. To ensure improvement on repeated sequences, the repetition priming parameter was necessarily positive. The left-hand penalty parameter is a time multiplier of right-hand typing times, and so must be greater than 1. Finally, the cognitive slowdown parameter is a linear additive to TRT, and so must be positive to produce longer TRTs after the onset of slowdown. The other, theoretically unconstrained bound of each range was chosen to create a range about 10 times the hand-derived value, giving the search ample opportunity to find any nearby optima. Even so, the chosen values are generous pragmatic bounds on each parameter; for example, a cognitive learning parameter value outside the selected search range would produce TRT decreases that would be impossible for human subjects to achieve. The parameters defining the search space, the hand-derived IMPRINT values, and the search range selected for each parameter are shown in Table 1.

Table 1. Parameters, their values in the final IMPRINT model, and optimization search ranges.

Parameter name        Hand-derived value   Search range
Cognitive learning    -0.0450              [-0.50, 0.00]
Motoric learning      -0.0150              [-0.20, 0.00]
Repetition priming     0.0500              [ 0.00, 0.50]
Left-hand penalty      1.1250              [ 1.00, 3.00]
Cognitive slowdown     0.0125              [ 0.00, 0.20]

To optimize the performance of the model for both experiments, the objective function (i.e., the function minimized during search) was the average of the RMSE calculated from a simulation run of Experiment 1 and the RMSE calculated from a simulation run of Experiment 2. Correlation values were not included in the objective function because they were always very high. RMSEs for the simulations were calculated as in the goodness-of-fit evaluation of the IMPRINT model described earlier; that is, optimization was performed on RMSEs calculated from the block RT means across all subjects in all conditions. Because only 5 free parameters are used to fit a much larger number of data points (the 250 block RT means that define the objective function), overfitting is not expected to be an issue.
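A minimal sketch of this objective function (the experiment-simulation function names are hypothetical):

```matlab
% Objective function sketch: average RMSE over the two simulated
% experiments (simulateExperiment1/2 are hypothetical function names).
function err = objective(p)
    rmse1 = simulateExperiment1(p);   % RMSE over the 50 block means, Expt 1
    rmse2 = simulateExperiment2(p);   % RMSE over the 200 block means, Expt 2
    err   = (rmse1 + rmse2) / 2;      % quantity minimized by GA and SA
end
```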

Random variables are used in the model to simulate human performance variability. Consequently, RMSE values generated by the model with identical parameter settings fluctuate by approximately 10% from simulation run to simulation run, potentially posing problems for the optimizers.

To evaluate the effects of this variability on search results, the optimizers were run multiple times, each run starting at a different random point within the parameter search range. The GA and SA optimizers were each run 20 times, and the number of model evaluations was equated for the two optimizers. On each GA search a population of size 30 evolved through 60 generations, for a total of 1800 simulations of each experiment on a given run. The typical time for each GA run was about 5 minutes, for a total optimization time of 100 minutes. Each SA search run likewise involved 1800 simulations of each experiment and also took about 5 minutes. Searching was constrained to remain within the search ranges on all runs.

Matlab's GA routine assumes that every evaluation of the objective function with the same input values will give the same result, so that the final generation should contain the best result of the run. Because of the random noise in the model, however, this assumption is violated in the present case, and the best result in the final generation of a run is not necessarily the best result encountered across all model evaluations in the run. The optimization procedure was therefore modified to keep track of the best result encountered in all generations throughout each run. This set of best parameter values is reported as the optimal parameter set for that run of GA.
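One way to implement such a modification is a GA output function that records the best point seen in any generation. The following sketch assumes the standard `ga` output-function interface; the paper does not specify the actual mechanism used.

```matlab
% Sketch: track the best individual across all GA generations (the paper's
% actual implementation is not described; this is one plausible approach).
function [state, options, optchanged] = trackBest(options, state, flag)
    persistent bestScore bestX
    optchanged = false;
    if strcmp(flag, 'init'), bestScore = Inf; bestX = []; end
    [s, i] = min(state.Score);            % best individual this generation
    if s < bestScore
        bestScore = s;
        bestX = state.Population(i, :);   % remember the best point so far
    end
    if strcmp(flag, 'done')
        fprintf('Best objective found: %.4f\n', bestScore);
    end
end
% Registered via: optimoptions('ga', 'OutputFcn', @trackBest)
```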

6. Search results

The distributions of the 5 parameter values selected by the 20 search runs of each optimizer are shown in Figure 1. As the figure shows, the values for all parameters cluster closely around their hand-derived counterparts and lie well within the selected parameter search ranges. Moreover, the sets of parameter values from the 20 runs are highly correlated with each other and with the hand-derived values (all GA r2 > 0.99 and all SA r2 > 0.87), verifying that all search runs settle on points in the same general region of the parameter space, close to the point defined by the hand-derived values. One exception to this clustering is the set of values for the repetition priming parameter from the SA runs, to which we return later in this section. The objective function values for the 20 GA runs averaged .0587, and 12 runs had values somewhat better than the best IMPRINT RMSE average for the two experiments (.0592). SA did less well overall: the objective function values for the 20 SA runs averaged .0626, and only 2 SA runs produced values better than the IMPRINT model's RMSE average for the two experiments.

Variation in search results is presumably largely a consequence of the stochastic nature of the model. This variation can be useful for identifying trade-offs in parameter settings, by examining correlations between pairs of parameter values across all search runs. Correlations between all pairs of parameter values are shown for both search techniques in Table 2. Most correlations do not agree across optimizers: for many pairs one value is low, and for others the correlations run in opposite directions. However, two stronger relationships are consistent across GA and SA: motoric learning is highly negatively correlated with the left-hand penalty, and cognitive learning has a substantial negative correlation with cognitive slowdown.

The relationship between motoric learning and the left-hand penalty reflects the fact that as the model's rate of motoric learning is allowed to decrease (with increases in the parameter value), the left-hand penalty must also decrease. However, motoric learning has an effect on each item, whereas the left-hand penalty affects all items only when the left hand is used, so this correlation has no implications for the model. The relationship between cognitive learning and cognitive slowdown reflects the fact that as the rate of cognitive learning in the model is allowed to increase (with decreases in the parameter value), the rate of cognitive slowdown must also increase. This compensatory relationship is expected, because the cognitive learning parameter contributes the majority of the per-item RT decrease with practice, and only cognitive slowdown contributes to the per-block RT increases with practice. Note the similar relationships for SA between cognitive slowdown and the other two parameters that contribute to decreased RTs with practice.

Table 2. Correlations among optimal parameter values for 20 runs of GA and SA (each cell gives GA / SA).

                     Motoric learning  Repetition priming  Left-hand penalty  Cognitive slowdown
Cognitive learning   -0.276 / -0.038   -0.247 /  0.348     -0.013 / -0.043    -0.536 / -0.466
Motoric learning                       -0.058 /  0.277     -0.920 / -0.840     0.124 / -0.474
Repetition priming                                          0.150 / -0.204     0.397 / -0.211
Left-hand penalty                                                              0.053 /  0.543

Figure 2 makes use of Matlab's graphics capabilities to aid in visualizing the 5-parameter space. It shows the objective function for pairwise combinations of the five parameters as they vary across their search ranges, while the remaining three parameters are held fixed at their hand-derived values. The top right half of Figure 2 displays the objective function as surface plots; the same data are displayed as contour plots in the bottom left half, on which the hand-derived values appear as solid black dots.

The surfaces in the surface plots are extremely irregular; their bumpiness reflects the stochastic processes in the model. The hand-derived values are in all cases located in low regions of the RMSE surfaces, once again confirming that the manual procedure succeeded in approximating a local minimum. The graphs in Figure 2 that include repetition priming as a variable suggest that this dimension of the parameter space is rather flat. Because SA follows a single search trail through the space during optimization, it is prone to spending more time in such less interesting areas of the space when it wanders into them. As a consequence, the final SA values for repetition priming are more spread out across the search interval than are the values from GA, or the values from either optimizer for the other parameters.

It is important to note that the displays in Figure 2 represent an exhaustive search of only two variables at a time, and therefore do not approach a full 5-parameter space search. Each surface display was generated at a cost of 441 model evaluations. For around 1800 model evaluations (i.e., about four times the computational cost of generating just one of these surface plots, or around 5 minutes of computing time), either SA or GA can carry out a full 5-parameter heuristic search, arriving at an approximation of the global minimum of the objective function. Had it not been for the randomness in the model data and the random starting points within the search space on each run, every search run would have reached the same local minimum. Even so, both optimizers found relatively consistent approximate minima.
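A sketch of how one such pairwise display might be produced, using a 21 x 21 grid to give the 441 evaluations mentioned above (variable names are illustrative):

```matlab
% Sketch: objective function over two parameters on a 21 x 21 grid
% (441 evaluations), with the other three fixed at hand-derived values.
pFix = [-0.045 -0.015 0.05 1.125 0.0125];       % hand-derived values
x = linspace(-0.50, 0.00, 21);                  % cognitive learning range
y = linspace(-0.20, 0.00, 21);                  % motoric learning range
Z = zeros(numel(y), numel(x));
for i = 1:numel(x)
    for j = 1:numel(y)
        p = pFix; p(1) = x(i); p(2) = y(j);
        Z(j, i) = objective(p);                 % one model evaluation
    end
end
subplot(1, 2, 1); surf(x, y, Z);                % surface plot
subplot(1, 2, 2); contour(x, y, Z); hold on;
plot(pFix(1), pFix(2), 'k.', 'MarkerSize', 20); % hand-derived point
```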

7. Conclusions

Model optimization in this study was comfortably accomplished on a single-processor PC through the use of the specialized optimizers available in the Matlab computing environment. The use of Matlab was possible because it was relatively easy to translate the original IMPRINT model's algorithms into Matlab code that used Matlab's matrix processing capabilities. The resulting Matlab model ran extremely quickly compared to the IMPRINT model, so the specialized optimizing tools available in a Matlab toolbox could be employed to search the model's parameter space. Whether this option is available for another model depends on the translatability of that model into Matlab code.

In the current study, both the GA and SA optimizers performed well, even given the stochastic nature of the human performance model. In areas of the search space where the objective function was relatively flat, GA was found to be more robust than SA when the number of evaluations for the two was the same. SA follows a single search trail through the space, and so is more prone to spending time in less interesting areas of the space; limiting the number of model evaluations in each search run thus sometimes caused SA to settle on poorer approximations. GA, on the other hand, searches many adjacent areas simultaneously, reducing the likelihood of spending large resources in unproductive areas and generally allowing it to do a better job of finding parameter values that minimized the objective function.

The results of the optimizations validate that the hand-derived values are consistent with globally optimal values across the theoretically and pragmatically bounded 5-parameter subspace. These optimization techniques could, however, have been used to derive values close to the hand-derived values in a fraction of the time spent calibrating the IMPRINT model directly. Validation of the hand-derived values also has theoretical implications. For example, validation that the optimal rate of motoric learning is a fraction of the rate of cognitive learning supports the conclusion that the cognitive component of the task improves more with practice than does the motoric component. The variation between optimization runs also provided information about model parameter uncertainty ranges and allowed exploration of parameter covariance, resulting in the identification of close relationships between parameters. These results can lead to a better understanding of the relationships among parameters in the model.

The current focus has been on optimizing model goodness-of-fit to experimental data. However, goodness-of-fit is only one measure of a model's validity (see Roberts & Pashler, 2000); a good model should also predict behaviors that can be tested. The execution speed of the Matlab model makes it possible to efficiently explore the model's predictions for other practice scenarios, for example other numbers of repetitions, extended amounts of practice, or different patterns of hand use.

8. References

Buck-Gengler, C. J., Raymond, W. D., Healy, A. F., & Bourne, L. E., Jr. (2007). Modeling data entry in IMPRINT. In T. Kelley & L. Allender (Eds.), Proceedings of the Sixteenth Conference on Behavior Representation in Modeling and Simulation (BRIMS) (pp. 205-206). Orlando, FL: Simulation Interoperability Standards Organization.

Dongarra, J., & Sullivan, F. (2000). Top 10 algorithms of the century. Computing in Science and Engineering, 2, 22-23.

Fendrich, D. W., Healy, A. F., & Bourne, L. E., Jr. (1991). Long-term repetition effects for motoric and perceptual procedures. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 137-151.

Fornberg, B., Raymond, W. D., & Best, B. (2007). Model evaluation - ACT-R, IMPRINT, and Matlab: Timing and complexity comparisons on a keystroke data entry test problem. Unpublished manuscript, University of Colorado, Boulder.

Gluck, K., Scheutz, M., Gunzelmann, G., Harris, J., & Kershner, J. (2007). Combinatorics meets processing power: Large-scale computational resources for BRIMS. In T. Kelley & L. Allender (Eds.), Proceedings of the Sixteenth Conference on Behavior Representation in Modeling and Simulation (BRIMS) (pp. 73-83). Orlando, FL: Simulation Interoperability Standards Organization.

Gonzalez, C., Fu, W.-T., Healy, A. F., Kole, J. A., & Bourne, L. E., Jr. (2006). ACT-R models of training data entry skills. In Proceedings of the Fifteenth Conference on Behavior Representation in Modeling and Simulation (pp. 101-109). Orlando, FL: Simulation Interoperability Standards Organization.

Healy, A. F. (2006, August). What we know and what we need to know in learning science to achieve greater efficiency and effectiveness in training. Invited paper presented at the Army Science of Learning Workshop, Hampton, VA.

Healy, A. F., Kole, J. A., Buck-Gengler, C. J., & Bourne, L. E., Jr. (2004). Effects of prolonged work on data entry speed and accuracy. Journal of Experimental Psychology: Applied, 10, 188-199.

Raymond, W. D. (2007, February). Toward implementing a taxonomy of training effects in IMPRINT. IMPRINT Technical Interchange Meeting, Boulder, CO.

Raymond, W. D., Buck-Gengler, C. J., Healy, A. F., & Bourne, L. E., Jr. (2007, April). Predicting differences in individual learning and performance in a motor skill task. Paper presented at the 77th Annual Convention of the Rocky Mountain Psychological Association, Denver, CO.

Roberts, S., & Pashler, H. (2000). How persuasive is a good fit? A comment on theory testing. Psychological Review, 107, 358-367.

9. Acknowledgments

This research was supported in part by ARO Grant W911NF-05-1-0153 to the University of Colorado.

Author Biographies

WILLIAM D. RAYMOND is a Psychology Research Associate, University of Colorado, Boulder.

BENGT FORNBERG is Professor of Applied Mathematics, University of Colorado, Boulder.

CAROLYN J. BUCK-GENGLER is a Psychology Research Associate, University of Colorado, Boulder.

ALICE F. HEALY is College Professor of Distinction, Department of Psychology, University of Colorado, Boulder, and Director of the Center for Research on Training at the University of Colorado.

LYLE E. BOURNE, JR. is Emeritus Professor of Psychology, University of Colorado, Boulder.

[Figure 1 appears here: for each parameter, paired horizontal range lines for GA and SA, with hash marks at each run's result. Panels and displayed bounds: Cognitive learning [-0.20, 0.00]; Motoric learning [-0.10, 0.00]; Repetition priming [0.00, 0.30]; Left-hand penalty [1.00, 1.50]; Cognitive slowdown [0.00, 0.10].]

Figure 1. Outcomes of 20 GA and 20 SA optimizations of 5 model parameters. The horizontal line pairs represent the parameter search ranges (for which the numerical bounds are given under each parameter label). Vertical hash marks through the range lines indicate search results for the parameters from each run. Triangles above the range lines indicate parameter search results in solutions with low values of RMSE, averaged over the 2 experiments.
