Journal of Educational Psychology, 2014, Vol. 106, No. 3, 605–607
© 2014 American Psychological Association. DOI: 10.1037/a0035607
Introduction to the Special Section on Computer-Based Assessment of Cross-Curricular Skills and Processes

Samuel Greiff and Romain Martin
University of Luxembourg

Birgit Spinath
Heidelberg University
Keywords: computer-based assessment, cross-curricular skills, behavioral processes, domain-general
This special section presents a collection of articles that were submitted to the Journal of Educational Psychology in response to a call for papers on computer-based assessment of cross-curricular skills and processes. The development of innovative computer-based assessment instruments that target cross-curricular skills and processes, and the validation of these instruments within educational psychology, has been a field of ongoing scientific inquiry with substantial research activity in recent years. After a selective and stringent peer-review process, this special section includes six articles that report cutting-edge research and present a cross-section of different topics.

Why is a special section on computer-based assessment of cross-curricular skills and processes both timely and important to researchers interested in the field of educational psychology? There are a number of good reasons, but a major one is that the cognitive and interpersonal skills necessary for successful participation in society have undergone great changes in recent decades. Studies have shown that tasks at school, at university, and at work have become more demanding and less bound to single subjects or domains (e.g., Autor, Levy, & Murnane, 2003; Spitz-Oener, 2006). Tasks now more often involve cross-curricular, nonroutine, and complex skills and processes (e.g., problem solving) that are applicable in diverse situations and content areas (Greiff et al., 2013; Hautamäki et al., 2002). Mayer and Wittrock (2006) highlighted the importance of problem solving as one prime example of a cross-curricular skill for educational psychologists. In fact, they proposed that helping students become better problem solvers is one of the greatest challenges in educational psychology. Consequently, the development of assessment instruments to measure cross-curricular skills and processes, as well as their validation, has been an ongoing field of inquiry in psychometrics and educational psychology alike. However, the dynamic and interactive nature of these skills implies that their assessment may not lie within reach of classical paper-and-pencil instruments.

Fortunately, the advent of computers in virtually every setting of educational assessment has allowed innovative assessment procedures to emerge. In addition to offering increased flexibility, computer-administered tests record log-file data during task execution, thus providing further insight into behavioral processes that are not captured by final performance data. For instance, time on task may be used to better understand how students become involved in a proposed task or to yield information about the type and quality of cognitive processing that occurs while students work on an educational assessment.

From an applied perspective, computer-based assessment environments and the use of computer-generated log-file data are now found in a large range of educational settings, including international large-scale assessments such as the Programme for International Student Assessment (PISA) and the Programme for the International Assessment of Adult Competencies (PIAAC). Cross-curricular skills have become integral parts of the assessment frameworks for interactive problem solving (Organisation for Economic Co-operation and Development [OECD], 2010), collaborative problem solving (OECD, 2012), problem solving in technology-rich environments (OECD, 2009), and electronic reading assessment (OECD, 2011). In addition, process data are now implemented in the scoring procedures of large-scale educational assessments, for example, to correct for obvious guessing as identified through a lack of the required exploration behavior or to integrate behavioral data as potential performance indicators that go beyond merely scoring the number of correct answers.
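To make this concrete, the sketch below shows how raw log events might be reduced to two such process indicators: a time-on-task measure and a simple guessing flag based on exploration behavior. The event names, record layout, and exploration threshold are hypothetical illustrations, not the formats actually used in PISA or PIAAC.

```python
# A minimal sketch of deriving process indicators from assessment log files.
# Event names, fields, and the exploration threshold are hypothetical.
from dataclasses import dataclass


@dataclass
class LogEvent:
    item_id: str      # task the event belongs to
    timestamp: float  # seconds since the start of the test session
    action: str       # e.g., "item_start", "explore", "submit"


def time_on_task(events: list[LogEvent], item_id: str) -> float:
    """Seconds between the first and the last logged event of an item."""
    times = [e.timestamp for e in events if e.item_id == item_id]
    return max(times) - min(times) if times else 0.0


def likely_guess(events: list[LogEvent], item_id: str,
                 min_explorations: int = 3) -> bool:
    """Flag a response as a probable guess when the log shows fewer
    exploration actions than the task plausibly requires."""
    n_explore = sum(1 for e in events
                    if e.item_id == item_id and e.action == "explore")
    return n_explore < min_explorations


# Example: a student who submits after a single exploration step.
log = [
    LogEvent("task1", 0.0, "item_start"),
    LogEvent("task1", 4.2, "explore"),
    LogEvent("task1", 5.0, "submit"),
]
print(time_on_task(log, "task1"))  # 5.0
print(likely_guess(log, "task1"))  # True
```

In operational scoring, such a flag would of course be calibrated against the exploration demands of each specific task rather than a fixed threshold.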
As a consequence, research concerning the setup and use of computer-based assessment instruments in educational contexts is quickly emerging and has great relevance to researchers and practitioners alike. This special section in the Journal of Educational Psychology pays tribute to the general need for rigorous empirical research in this field. This need is illustrated through the assessment of cross-curricular skills in particular, stressing the importance of developing a theoretical understanding of these skills and the added value of computerized assessments gained through the setup of interactive and complex assessment environments and the use of log-file data. This special section is composed of articles that report on the development of theoretically sound and scientifically validated assessment instruments for cross-curricular skills and on the benefits of methodological advances associated with computer-based assessment, such as the benefits of log-file analyses for assessing classical and cross-curricular cognitive abilities. Articles are related to assessment issues in education, and some of them exhibit strong ties to international or national large-scale efforts.
Samuel Greiff and Romain Martin, Department of Psychology, University of Luxembourg, Luxembourg-Kirchberg, Luxembourg; Birgit Spinath, Department of Psychology, Heidelberg University, Heidelberg, Germany.

This research was funded by a grant from the Fonds National de la Recherche Luxembourg (ATTRACT "ASKI21").

Correspondence concerning this article should be addressed to Samuel Greiff, Department of Psychology, University of Luxembourg, 6, rue Richard Coudenhove Kalergi, 1359 Luxembourg-Kirchberg, Luxembourg. E-mail: [email protected]
For example, some contributions uncover behavioral interaction patterns that are not reflected in final performance data and relate these patterns to psychological theories and relevant educational outcomes within a large-scale assessment. Thus, the common denominator of all articles published in the special section is that they exploit the computer in an innovative manner and substantially widen the scope of our view on students' skills. Much of the research in this special section is embedded in the context of large-scale educational assessments, but some of it is experimental or draws on selective subgroups of student populations.

In the first contribution, Goldhammer, Naumann, Stelter, Tóth, Rölke, and Klieme (2014) provide insights into behavioral processes that, until recently, were exclusively addressed in experimental settings. They elaborate on the differential meaning of time on task in problem solving and in reading, based on a representative German sample from the PIAAC field trial. In a related way, the second contribution (Kupiainen, Vainikainen, Marjanen, & Hautamäki, 2014) investigates the role of time on task as an indicator of students' investment and its subsequent effect on students' achievement. Both articles make use of log files and show how this approach not only broadens the understanding of assessment but also advances theory in the field of educational psychology and may ultimately lead to the development of interventions that can be used in the classroom.

The third contribution (Csapó, Molnár, & Nagy, 2014) demonstrates that psychometric properties can be optimized through computer-based test delivery even in an assessment of school readiness at a very young age. This illustrates that computers can serve as assessment instruments across a wide range of age groups, a topic on which information has been very limited until now.

The fourth contribution (Ifenthaler, 2014) shows how computers can be used not only to collect large amounts of data but also to process and automatically score these data in the context of team-based processes and performance. In doing so, Ifenthaler investigates team effectiveness, an area that is highly relevant to the assessment of cooperation and collaboration in large-scale educational assessments; collaborative problem solving will be assessed in the PISA 2015 cycle.

The fifth contribution (Greiff, Kretzschmar, Müller, Spinath, & Martin, 2014) addresses complex problem solving, a phenomenon particularly relevant to cross-curricular skills. The authors relate complex problem solving to intelligence and computer skills and show, in three different samples, that the added value of complex problem solving cannot be traced back to an indirect assessment of computer skills. Instead, this added value seems to originate from the complex cognitive processes associated with computer-simulated problem-solving tasks. Further investigating complex problem solving, the sixth contribution (Sonnleitner, Brunner, Keller, & Martin, 2014) reports that computer-based simulations of complex cognitive processes may be less influenced by students' cultural backgrounds than paper-and-pencil tests of intelligence, thus yielding a less biased and fairer assessment of cognitive skills for disadvantaged groups or minorities.
These two articles provide strong empirical support for the claim that computer-based assessment allows psychologists to widen their scope to new cognitive constructs that cannot be accessed via classical paper-and-pencil measures.

The six contributions in this special section span a wide array of topics. They do not cover all topics relevant in the field, but they do cover important topics for scientists interested in new developments in educational psychology. All articles in this special section were subjected to the normal rigorous process of anonymous peer review. Articles in which one of the guest editors was involved as a contributing author were not reviewed or edited by any of the other guest editors; editorship and authorship were strictly separated in accordance with American Psychological Association guidelines.

Over 20 years ago, Bunderson, Inouye, and Olsen (1989) predicted that new generations of computer-based assessment instruments would swiftly evolve along with a rapid decline in paper-and-pencil testing. From today's perspective, the shift toward a new generation of tests that allow for the assessment of more general and transversal skills and that exploit process data as a standard procedure has been much slower than was anticipated in the late 1980s. Considering how long computers have been available, Williamson, Bejar, and Mislevy (2006) observed that the potential of computer-based assessment has been exploited more slowly than expected. However, assessment is now at a transition point at which many former barriers, such as the availability of computer equipment in the classroom or the general level of computer literacy (cf. digital natives; Prensky, 2001), have become less relevant. As a consequence, computer-based assessment is widely available and accepted today, even if its potential for added value still needs to be established and fostered.

The contributions in this special section are committed to advancing knowledge of computer-based assessment in educational contexts. In doing so, our goal for this special section is to enhance the understanding of the assessment of cross-curricular skills and processes at different educational levels and to explain the process of skill acquisition by developing and testing adequate models. We sincerely hope that you enjoy reading this special section of the Journal of Educational Psychology.
References

Autor, D. H., Levy, F., & Murnane, R. J. (2003). The skill content of recent technological change: An empirical exploration. Quarterly Journal of Economics, 118, 1279–1333. doi:10.1162/003355303322552801

Bunderson, C. V., Inouye, D. K., & Olsen, J. B. (1989). The four generations of computerized educational measurement. In R. L. Linn (Ed.), Educational measurement (pp. 367–407). New York, NY: Macmillan.

Csapó, B., Molnár, G., & Nagy, J. (2014). Computer-based assessment of school readiness and early reasoning. Journal of Educational Psychology, 106, 639–650. doi:10.1037/a0035756

Goldhammer, F., Naumann, J., Stelter, A., Tóth, K., Rölke, H., & Klieme, E. (2014). The time-on-task effect in reading and problem solving is moderated by item difficulty and ability: Insights from computer-based large-scale assessment. Journal of Educational Psychology, 106, 608–628. doi:10.1037/a0034716

Greiff, S., Kretzschmar, A., Müller, J. C., Spinath, B., & Martin, R. (2014). The computer-based assessment of complex problem solving and how it is influenced by students' information and communication technology literacy. Journal of Educational Psychology, 106, 666–680. doi:10.1037/a0035426

Greiff, S., Wüstenberg, S., Molnár, G., Fischer, A., Funke, J., & Csapó, B. (2013). Complex problem solving in educational settings—something beyond g: Concept, assessment, measurement invariance, and construct validity. Journal of Educational Psychology, 105, 364–379. doi:10.1037/a0031856

Hautamäki, J., Arinen, P., Eronen, S., Hautamäki, A., Kupiainen, S., Lindblom, B., . . . Scheinin, P. (2002). Assessing learning-to-learn: A framework. Helsinki, Finland: National Board of Education & University of Helsinki. Retrieved from http://www.oph.fi/download/47716_learning.pdf
Ifenthaler, D. (2014). Toward automated computer-based visualization and assessment of team-based performance. Journal of Educational Psychology, 106, 651–665. doi:10.1037/a0035505

Kupiainen, S., Vainikainen, M.-P., Marjanen, J., & Hautamäki, J. (2014). The role of time on task in computer-based low-stakes assessment of cross-curricular skills. Journal of Educational Psychology, 106, 627–638. doi:10.1037/a0035507

Mayer, R. E., & Wittrock, M. C. (2006). Problem solving. In P. A. Alexander & P. H. Winne (Eds.), Handbook of educational psychology (pp. 287–303). Mahwah, NJ: Erlbaum.

Organisation for Economic Co-operation and Development. (2009). PIAAC problem solving in technology-rich environments: A conceptual framework (OECD Education Working Papers No. 36). Paris, France: Author.

Organisation for Economic Co-operation and Development. (2010). PISA 2012 problem solving framework. Paris, France: Author.

Organisation for Economic Co-operation and Development. (2011). PISA 2009 results: Students on line: Digital technologies and performance. Paris, France: Author.
Organisation for Economic Co-operation and Development. (2012, April). PISA 2015 field trial collaborative problem solving framework. Paper presented at the 33rd PISA Governing Board meeting, Tallinn, Estonia.

Prensky, M. (2001). Digital natives, digital immigrants: Part 1. On the Horizon, 9, 1–6.

Sonnleitner, P., Brunner, M., Keller, U., & Martin, R. (2014). Differential relations between facets of complex problem solving and students' immigration background. Journal of Educational Psychology, 106, 681–695. doi:10.1037/a0035506

Spitz-Oener, A. (2006). Technical change, job tasks, and rising educational demands: Looking outside the wage structure. Journal of Labor Economics, 24, 235–270. doi:10.1086/499972

Williamson, D. M., Bejar, I. I., & Mislevy, R. J. (2006). Automated scoring of complex tasks in computer-based testing: An introduction. Mahwah, NJ: Erlbaum.
Received November 30, 2013
Revision received November 30, 2013
Accepted December 5, 2013