Learning Environments Research
DOI 10.1007/s10984-006-9010-z

PREFACE

Approaches in measuring learning environments

Richard J. Shavelson · Tina Seidel

Received: 9 September 2006 / Accepted: 14 September 2006
© Springer Science+Business Media B.V. 2007

R. J. Shavelson · T. Seidel
Stanford University, School of Education, 485 Lasuen Mall, Stanford, CA 94305-3096, USA
e-mail: [email protected]

T. Seidel (✉)
e-mail: [email protected]

This special issue of Learning Environments Research grew out of a symposium held at the biennial meeting of the European Association for Research on Learning and Instruction (EARLI) in Cyprus. The symposium brought together experts from The Netherlands, Germany, Australia and Finland to address new developments in the measurement of complex classroom learning environments. This special issue presents the results of that symposium.

In thinking about how best to introduce the special issue, a set of related propositions about teaching and learning came to mind. The first proposition is that, in order to understand the effects of educational programs (e.g. inquiry science teaching) on student outcomes, we need to account for the intervening instructional processes. The second proposition follows from the first: we need to observe and measure these instructional processes in order to understand them. The third proposition is that we suspect there are almost as many instruments for measuring learning environments as there are researchers doing this work. Hence the need for this special issue!

This special issue presents a series of articles that advance both multilevel conceptual frameworks and statistical models for measuring and understanding learning environments. To this end, they consider issues of reliability and validity not just at the level of the individual student, but also for measures at the aggregate level. In doing so, they advance our knowledge of learning environments and of how to measure them.

In advancing knowledge, learning environment researchers take different perspectives. And here lies the challenge: while advances are made by letting a thousand flowers bloom, at some point the gardener needs to cull the flowerbed and identify the best of the species for further development. So, as the field advances, we urge some culling. Perhaps one way to do this is to identify learning environment measures that the community of researchers agrees to include in their studies, alongside other home-grown measures, so that different studies provide comparable data to move our understanding of learning-environment measurement and theory forward. The articles herein provide a firm foundation for beginning that culling.

Moreover, with new multilevel conceptual frameworks, our gardeners are creating hybrids: new constructs are being developed out of the seeds of the flowerbed, and these need to be understood in their own right, both conceptually and psychometrically. Three of the articles in this special issue focus on the measurement characteristics of these new hybrids. Finally, we are coming to understand that flowers and their hybrids bloom better or worse depending on the environment in which they are planted. This gene x environment interaction is becoming increasingly important in the study of behavioural genetics and, as we shall see in the last article of this special issue, such interactions are also becoming important in the study of person x micro-climate interactions in the classroom.

Three themes emerge from the set of articles. The first theme is conceptual: how should we conceptualise learning environments and, from that conceptualisation, which measures are most likely to provide access to the data we seek? As we will see, the authors vary in theoretical stance, but one central point emerges: because teachers, students and observers have access to different features of learning environments, which source of measurement provides the appropriate data is driven by the conceptualisation and must be tested empirically.
Importantly, given the multilevel nature of education, decisions about information sources also depend on whether the focus is on individuals within classrooms or on between-classroom variation. Three articles address this first theme. Using the Questionnaire on Teacher Interaction (QTI) as an example, den Brok, Brekelmans and Wubbels show how the meaning of questionnaire items changes when different perspectives and degrees of personalisation are applied (a personal student perspective versus a class perspective). Kunter and Baumert demonstrate how factor structures vary when a teacher version versus a student version of a questionnaire on instructional quality is used. Beyond selecting appropriate information sources for measuring learning environments, issues of interaction between the information source (e.g. a student) and the measurement instrument (e.g. a questionnaire item) emerge. Cavanagh and Romanowski address this basic issue in developing measures for learning environments and demonstrate how to apply Item Response Theory to attitudinal student questionnaire data.

The second theme extends the issues of reliability and validity to multilevel frameworks. As we will see, differences in reliability and agreement indices emerge, as do questions about validity. Two articles in this special issue address this theme. Den Brok, Brekelmans and Wubbels use student agreement within classrooms as a measure of observer consistency and test several single-level and multilevel models. In the second article, Lüdtke, Trautwein, Kunter and Baumert critically reflect on measures currently used for assessing student agreement. They introduce and discuss different indices from the fields of educational and organisational psychology for assessing the reliability and agreement of student perceptions at multiple levels.

Finally, the third theme deals with relationships between the different perspectives (students, teachers, observers) and issues of validity in measuring learning environments. We see that each perspective has differential validity with regard to instructional processes and learning outcomes. Furthermore, micro-climates within classrooms are explored, raising the possibility that classrooms might not be the best unit for measuring learning environments. Two articles address this third theme. Kunter and Baumert show the value of taking students' and teachers' perspectives into account. In their analyses, they show that each perspective has different predictive validity with regard to student motivation, teacher motivation, student achievement, and the characteristics of tasks set in class. Finally, Seidel raises the question of whether it is valid to characterise a single classroom as one learning environment. To explore this possibility, she introduces latent class analysis to identify groups of students with similar prerequisite profiles and poses the question: are learning environments for these clusters the same, or do they vary within a classroom and, if they vary, do they vary systematically and predictably by cluster?

The set of articles in this special issue gives the reader an overview of current conceptual and methodological issues in measuring learning environments. As Erno Lehtinen pointed out as symposium discussant, the articles represent the state of the art in measuring learning environments. These researchers 'want to know' about learning environments and, at the same time, 'want to be sure' about the approaches taken. With this stance, we can expect the next generation of gardeners to continue to cull the flowerbed, and knowledge about measuring learning environments to improve systematically.
