MEASURING COMPUTER-SUPPORTED LEARNING ENVIRONMENTS

C. PAUL NEWHOUSE

DEVELOPMENT AND USE OF AN INSTRUMENT FOR COMPUTER-SUPPORTED LEARNING ENVIRONMENTS

Received 23 June 2000; accepted (in revised form) 12 January 2001

ABSTRACT. This article begins by constructing an argument from the literature to support the view that research into the use of computers in classrooms must consider the overall learning environment. This provides the rationale for the development and use of the New Classroom Environment Instrument (NCEI) in research conducted into the use of portable computers in a school. This study particularly focused on the impact of the computers on classroom learning environments. An example is given of the use of the instrument to investigate the use of the portable computers with classes of 13-year-old students. The intention of the article is not to present the results of the study but to demonstrate the use of the instrument and encourage others to use such instruments in educational computing research.

KEY WORDS: educational computing, evaluation, learning environment, portable computers, questionnaires

1. INTRODUCTION

The use of classroom learning environment instruments in educational research dates back to the 1970s with Walberg and Moos (Fraser & Walberg, 1991). Because such instruments have been used in research conducted over a wide range of areas of interest in education, it was natural that in the 1980s and 1990s they would be applied to research in educational computing. This has particularly been the case for research into the implementation of computer support for learning. However, the importance of considering the classroom learning environment in research concerned with the use of computers to support learning should be seen as more than just another application of learning environment instruments. At its core, such research must involve a consideration of the learning environment within which the computers are used because of the interactive nature of the technology.

1.1. Computers and the Classroom Environment

The use of computers in teaching and learning for the majority of children is most likely to occur in the classroom. Most experts in the field of educational computing (e.g. Lynch, 1990; Olson, 1988; Rieber, 1994) would characterise computers as interactive and thus admit them a place within the relationship structures of the classroom. Carter (1990) goes so far as to claim that "new technologies construct a totally new environment" (p. 34). However, the majority of classroom learning environments in schools which incorporate computers could be depicted using the model in Figure 1. Strictly speaking, the computer systems and non-interactive technology are part of the context of the curriculum but, because computers are two-way interactive, it is more helpful to highlight them by separation. The elements of the traditional classroom learning environment (shown in the box) provide a complex pattern of relationships. When computers are used within this environment, the complexity of this pattern of relationships increases, with all elements of the traditional classroom learning environment needing to interact with both the hardware and software.

Learning Environments Research 4: 115–138, 2001. © 2001 Kluwer Academic Publishers. Printed in the Netherlands.

Figure 1. A model of the relationship of computer systems to the elements of the classroom learning environment.

1.2. Computers in Schools

In the 1960s, educators and interested community members began to consider realistically the potential for the use of computers in education at both the tertiary and school levels. In the mid-1970s, access to computer processing (i.e. the use of a computer system to perform tasks requiring the user to submit one or more instructions) became available in Australian schools, initially limited to one terminal and later to a number of microcomputers. Today most Australian schools, as is the case in other developed countries, have substantial numbers of computers, in some cases to the extent of one computer per student (Fallshaw, 1995; Lyall, 1997). Gardner, Morrison, Jarman, Reilly and McNally (1994) give the rationale for computers in schools as the perception that computer literacy is needed for employability, to improve work output, to improve motivation, to improve 'learning environments', and to empower students by releasing them from tedious repetitive tasks.

1.3. Investigating Classroom Learning Environments

Learning environments are the social-psychological contexts within which learning takes place (Fraser & Walberg, 1991). Learning environments in schools typically involve one or more adult teachers in a classroom with various numbers of students, whether juvenile or adult. These people interact and form a variety of relationships, creating what Salomon (1994) calls "a system of interrelated factors that jointly affect learning in interaction with (but separately from) relevant individual and cultural differences" (p. 80). The learning environment has a physical as well as a relationship dimension. Physically it might be in a room, full of particular furniture and equipment, including computers.

1.4. Impact of Computer Use on Classroom Environment

It was argued earlier that using computers in classrooms changes the relationships within the environment, and thus the roles of teachers and students are likely to change. Changes in the roles of teachers and students will be reflected in changes in the organisation of the classroom and the teaching and learning strategies employed. Many educators (e.g. Collis, 1989; Van Den Akker, Keursten & Plomp, 1992) contend that, for students to make effective use of computers in classrooms, considerable change to what might be the normal classroom routine is required. Mercer and Fisher (1992) argue that "it is necessary to examine the total activity, including the way the teacher has set up the task and how she then supports its progress" (p. 344). So, an important factor is likely to be the types of tasks to which computers are put (Bergen & Kingston, 1994). Just as the tasks are important to computer use, so are the organisation of the classroom and the teaching and learning strategies employed (Hayward, 1993). Rowe (1993) contends that "organising lessons in which students are working on computers will involve making considerable changes to what might be the normal classroom routine" (p. 114). Collis (1989) suggests that:

    Instead of organizing the school around the notion of classrooms with desks aligned to face the teacher and the chalkboard, both centrally positioned at the front of the room, the school building may radically change in its interior. (p. 15)

McMahon and Duffy (1993) saw what they termed changes in the culture of the classroom in many schools involved in their study of the use of networked home and classroom computers. In particular, this involved a change in the relationship between students and teachers, with students having greater power than previously. It is indeed likely that, to realise the potential of computers in the support of teaching and learning, many revolutionary changes will need to be made in schools and classrooms (Salomon, 1994; Schofield, 1995), as foreshadowed by Rowe (1993):

    Obviously, the more effective uses of computers in education will require new patterns of interaction between students and teachers, changes in the social organisation of the classroom, the adaptation of curricula and alternative purposes and modes of student evaluation. (p. 5)

1.5. Computer-Supported Learning Environments

Computer-supported learning environments (CSLEs) are those in which computers are used either to maintain a learning environment or to support student learning in the Vygotskian sense (DeCorte, 1990; Mercer & Fisher, 1992; Mevarech & Light, 1992). Learning environments which involve computers can be configured in many different ways to provide access to computer processing. There are a number of fundamental parameters which can be used to describe the place of computer systems in a learning environment. These parameters include the ratio of computers to students, whether the computers are in the 'normal' classroom or a laboratory, whether they are networked or stand-alone, whether they are portable or fixed, whether students access them freely or through a roster system, and whether students work at the computers individually or in groups.

Many educators, particularly school reformers, perceive the ideal computer configuration as a learning environment in which it is possible at any time for every student to access an adequate level of computer processing and software relevant to their learning needs. In essence, this implies the potential for students to have unrestricted access to computer processing. In practice, it means that either the students are in a computer laboratory with the potential for one workstation each, or that each student has a portable computer. In many places, the term 'computer-rich' is used to describe portable computer programmes, but the term 'computer-saturated' could be more appropriate because it is a quantitative term, whereas the former could also be interpreted in a qualitative way.

1.6. Using Classroom Environment Instruments

Fraser and Walberg (1991) considered three approaches to measuring classroom environments. The research discussed in this article made use of two of these approaches, namely, apprehension of the environment and ethnographic case studies. The apprehension of the environment approach to studying classroom environments began in the 1970s with the independent work of Walberg and Moos, which resulted in the development of, respectively, the Learning Environment Inventory (LEI) and the Classroom Environment Scale (CES). Since then, these questionnaires have been applied in many studies (Fraser, 1994; Fraser & Walberg, 1991), and a number of environment instruments (e.g. the Individualised Classroom Environment Questionnaire (ICEQ), My Class Inventory (MCI), and Science Laboratory Environment Inventory (SLEI)) have been developed from them and tailored to particular learning environments or research questions.

Fraser and Walberg (1991) outline three broad areas of research using classroom environment instruments, two of which are relevant to the present discussion. Firstly, research has involved associations between student outcomes and classroom environment, with a meta-analysis showing that classroom environment perceptions account for appreciable amounts of variance in cognitive and affective outcomes (Teh & Fraser, 1995). Research into CSLEs almost always considers a number of affective outcomes, such as attitudes towards the use of computers, towards learning and towards particular subject areas. Therefore a classroom environment instrument is ideal for describing classroom environments and drawing conclusions about how these can relate to the use of computers and student attitudes. Secondly, research has involved investigations of whether students achieve better when in their preferred environments. This is often referred to as person-environment fit research, and was pioneered by Fraser and Fisher (1983b) in the early 1980s.
They found that student achievement for a range of outcomes is likely to improve if changes to the classroom are made to increase the congruence between the preferred and actual classroom environments. Fraser and Walberg (1991) conclude that "the practical implication of these findings for teachers is that class achievement of certain outcomes might be enhanced by attempting to change the actual classroom environment in ways which make it more congruent with that preferred by the class" (p. 16). It is this area of research that could be of most value to research in CSLEs.

While there has been much discussion about how the presence of computers, logically, should change the nature of the classroom environment, little research has occurred to support this discussion. As Teh and Fraser (1995) put it, "innovations in computer-assisted learning rarely have been evaluated in terms of their impact on the nature of the classroom learning environment as perceived by students" (p. 178). This is argued by Levine and Donitsa-Schmidt (1995), especially in the context of the concept of computer-supported learning. It was my belief that computers could be used to support learning environments that would match student preferences more closely and thus lead to improved student achievement on a range of outcomes.

There has been little application of this approach for studying classroom environments involving the impact of computers on learning environments (Fraser & Teh, 1994). A few attempts have been made to create learning environment instruments for use in classrooms (usually computer laboratories) in which computers are used (Levine & Donitsa-Schmidt, 1995). Teh and Fraser (1995) constructed the Geography Classroom Environment Inventory (GCEI), which has four scales: Gender Equity, Investigation, Innovation, and Resource Adequacy. They used the actual version to compare two types of classes, computer-assisted learning classes and control classes which did not use computers.
Maor and Fraser (1996) studied the use of computers to support inquiry-based learning in secondary science classes and concluded that the use of the particular software package encouraged teachers and students to develop a more investigative and open-ended learning environment. Recently, Newby and Fisher (2000) have reported on the use of the Computer Laboratory Environment Inventory (CLEI), which they used in research on the use of computer laboratories in a university business course. They developed a model to explain the relationship between student perceptions of the laboratory environment, student attitudes and achievement in the course.

There are few researchers who have used generic learning environment instruments in studies where the use of computers is important. Where computers are used to support learning in a variety of environments (not just a specialist computer laboratory), the impact of computer use on the 'normal' classroom environment requires the use of a generic instrument. This provides data on the classroom environment no matter to what degree, and in what ways, the computers are used. If the environment is considered to be of primary importance in teaching and learning, then there is little purpose in attempting to extract a 'computer' component. Salomon (1994) supports this view by arguing that it is not possible to study "the impact of computer use in the absence of the other factors" nor to assume that "one factor impacts outcomes independently of the others" (p. 80). The educational aim is to embed the computer support in the learning environment (DeCorte, 1990), rather than to try to isolate its effect on learning.

Using computers in learning is concerned with methods of using the technology to create environments and learning situations. The aim is to offer new learning opportunities or to improve the way in which current learning activities are implemented; the overall effectiveness of learning environments and episodes is therefore of paramount concern. It is important that the newness and ever-changing nature of computer-based technology does not overshadow the enduring nature of learning and the solid and ever-increasing base of knowledge about learning.

Research conducted into the provision of learning environments incorporating high access to computers has supported a number of important assumptions which should be considered in such research. Firstly, research must consider the whole learning environment in a classroom, with a particular focus on the relationships between teachers, students, curriculum, and educational technology (Miller & Olson, 1994; Rowe, 1993). Secondly, studies must be long-term, particularly where the application of computers is relatively new to the teachers (Becker, 1994; Welle-Strand, 1991). And thirdly, teacher perceptions, attitudes and beliefs concerning learning, schooling and computers are the most important factors in the successful implementation of computer support in these learning environments (Dwyer, 1994).

1.7. New Classroom Environment Instrument (NCEI)

The NCEI evolved from a study of the development of computer-supported learning environments in secondary mathematics classes (Newhouse, 1993) and was based on a well-accepted instrument, the Classroom Environment Scale (CES), developed by Rudolf Moos and Edison Trickett (1974). The instrument was then used extensively in a study of the implementation of portable computers in a secondary school. The study aimed to use the instrument to highlight any differences in the classroom environments, as perceived by the students, when various implementations of computer use were facilitated in the classroom. While the instrument could be administered to teachers to measure their perceptions, the earlier study had found that giving the same instrument to the classroom teacher was of little value (Newhouse, 1993).

1.7.1. Nature of the NCEI

In selecting a classroom environment instrument to use in the study, it was important to consider the reason for its use. There have been a number of studies for which an instrument was developed specifically for computer-based or computer-supported classroom environments (e.g. Maor & Fraser, 1996; Teh & Fraser, 1995). These instruments include scales which relate to using computers in a classroom. However, my study was concerned with the impact of portable computers on the 'normal' classroom environment. It was considered that computer use could be implemented in a variety of ways by different teachers with different classes of students. Therefore it was important to consider differences in the environment between classes using computers and those not, and between classes using the computers for different purposes. As a result, the instrument needed to be independent of computer use and to measure students' perceptions of general aspects of classroom environments which could apply to any classroom.

Over a period of three years, commencing in 1991, I investigated a range of available classroom environment instruments (e.g. those mentioned earlier, including the CES, ICEQ, MCI, SLEI, CLES, CCEI, and GCEI) and their use in educational research while developing the New Classroom Environment Instrument (NCEI). There were many such instruments available, and so I discussed their appropriateness firstly with Fraser and later with Fisher, and I read literature which they recommended (Fraser, 1981, 1989, 1994; Fraser & Fisher, 1983a, 1983b; Fraser, Malone & Neale, 1989; Fraser & McRobbie, 1995; Fraser & Walberg, 1991; Rentoul & Fraser, 1979; Taylor & Fraser, 1991). These investigations made it clear that it was not necessary to develop an entirely new instrument, but rather to modify existing instruments.
Further, it was clear that the instrument would mainly be used to describe the classroom environment and to compare preferred and actual class means from the perspective of person-environment fit. For these reasons, the original Classroom Environment Scale (CES) appeared to be the most appropriate to build upon. The NCEI was initially developed (Newhouse, 1993) by modifying the CES (Moos & Trickett, 1974) and later incorporating the Group Work scale from the Classroom Interaction Patterns Questionnaire (CIPQ) developed by Woods (1995). Therefore all items for the NCEI were either taken directly from one of these two parent instruments or were reworded items from those instruments. Both a preferred and an actual form of the instrument were developed.

1.7.2. Development of the NCEI

The original CES (Moos & Trickett, 1974) has nine scales, each with 10 items, using a True-False response format. A number of items were removed from the original CES because the earlier study I had conducted demonstrated that it had too many items for the age of the students involved. Firstly, one of the nine scales, Rule Clarity, was removed because it was considered too similar to the Teacher Control scale. The number of items for each of the remaining eight scales was reduced to seven. Items were removed which were thought to be more difficult for students to comprehend, particularly considering the level of language used. In the earlier study (Newhouse, 1993), and with the Grade 7 students (12-year-olds) in the first year of the portable computer study, students were asked to circle words which they had difficulty understanding. In addition, some items were removed to improve the reliability of the scales using these data.

From the earlier study, it was clear that many students had difficulty responding using the True-False response format; often students indicated that an item was neither True nor False. Firstly, a five-point Likert response format was considered, as used for the School-Level Environment Questionnaire (SLEQ). However, many students found it difficult to select from five alternative responses and were not sure what it meant to agree or disagree with a statement. A number of subsequent trials made use of response formats used by other classroom environment instruments, for example, the five-response frequency format of the SLEI (Fraser & McRobbie, 1995). The result was the adoption of the three-point frequency response format: Often, Sometimes, Almost Never. Students appeared to have least difficulty responding with this format, and scale reliability coefficients were higher.
In the final form of the NCEI (used from the second half of the first year of the portable computer study), the Task Orientation scale was removed and replaced with the Group Work scale. Analysis of the data collected from Grade 7 students had given low scale reliability for the Task Orientation scale (0.36 and 0.42 for the preferred and actual versions, respectively), and it appeared from item analysis that it would be more difficult to improve than any other scale. Woods (1995) had just completed the development of the CIPQ instrument, which had a Group Work scale. From discussions with teachers at the school and a reading of a number of studies (e.g. DeCorte, 1990; Dwyer, Ringstaff & Sandholtz, 1991; Johnson & Johnson, 1991; Mevarech & Light, 1992; Olson, 1988; Riel, 1989; Rowe, 1993; Rysavy & Sales, 1991; Wubbels, Brekelmans & Hooymayers, 1991), it was decided that it would be appropriate to include a scale related to working in groups in the classroom.

Woods' instrument used the five alternative responses of Very Often, Often, Sometimes, Seldom and Almost Never; the Very Often and Seldom responses were therefore removed. The instrument contained seven items for each scale, which matched the number of items for the other scales of the NCEI, and thus all items were initially included. However, when the items were shown to students in a class for feedback, it was decided to remove one item and replace it with a slightly modified version of another item of the scale; the wording of two other items was also changed. The wording of the actual version was improved and the preferred version was subsequently created. It was also ensured that there was at least one reverse-scored item for each scale.

In modifying the wording of the items, I attempted to maintain the content of each item while improving its readability for middle-school students and also emphasising student perceptions rather than just opinions. This final concern is best illustrated in the preferred version, where the word 'would' or 'would not' was used to replace words such as 'should' in the CES. For example, in the preferred version of the CES, Item 1 reads, "Students should put a lot of energy into what they do here". In the preferred version of the NCEI, Item 1 reads, "Students would put a lot of energy into what they do here". Students were asked to indicate what would constitute an ideal class for them rather than what should.
Some students indicated that the latter term evoked a sense of what they thought others, particularly adults, thought an ideal class should be like, rather than what they personally believed. As a result of all these changes, the final NCEI had 8 scales (Table I), with each scale represented by 7 items (56 items in total), using a three-point frequency response format: Often, Sometimes, Almost Never.
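The scoring rules described above (three response options coded 3, 2 and 1, at least one reverse-scored item per scale, and omitted responses given the midpoint score) can be sketched in code. This is an illustrative sketch only: which items are reverse-scored is hypothetical here, not taken from the published instrument.

```python
# Illustrative sketch of the NCEI item-scoring rules described above.
# Coding: Often = 3, Sometimes = 2, Almost Never = 1; reverse-scored
# items flip the coding; omitted responses score the midpoint, 2.
# The set of reverse-scored items passed in is hypothetical.

CODING = {"Often": 3, "Sometimes": 2, "Almost Never": 1}

def score_item(response, reverse=False):
    """Score one item; an omitted response gets the midpoint score of 2."""
    score = CODING.get(response, 2)
    return 4 - score if reverse else score

def scale_score(responses, reverse_items=frozenset()):
    """Sum the scores of the (seven) items making up one scale."""
    return sum(score_item(r, i in reverse_items)
               for i, r in enumerate(responses))
```

A student answering Often to all seven items of a scale with no reverse-scored items would score the maximum of 21; the same responses with one reverse-scored item would score 19.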

TABLE I
Scale Description and Sample Item for Each Scale of the New Classroom Environment Instrument

INV (Involvement). The extent to which students have attentive interest in class activities and participate in discussions. The extent to which students do additional work on their own and enjoy the class.
  Sample item: Students put a lot of energy into what they do here. (+)

AFF (Affiliation). The level of friendship that students feel for each other, that is, the extent to which they help each other with homework, get to know each other easily, and enjoy working together.
  Sample item: Students in this class get to know each other well. (+)

TS (Teacher support). The amount of help, concern, and friendship which the teacher directs towards the students. The extent to which the teacher talks openly with students, trusts them, and is interested in their ideas.
  Sample item: This teacher remains at the front of the class rather than moving about and talking with students. (–)

GW (Group work)(b). The extent to which students are able to work collectively in class on tasks and activities assigned by the teacher.
  Sample item: Students work by themselves rather than working together on projects in this class. (–)

COM (Competition). The emphasis placed on students' competing with each other for grades and recognition. An assessment of the difficulty of achieving good grades is included.
  Sample item: Students feel pressured to compete here. (+)

OO (Order and organisation). The emphasis on students behaving in an orderly and polite manner and on the overall organization of assignments and classroom activities. The degree to which students tend to remain calm and quiet.
  Sample item: This is a well-organised class. (+)

TC (Teacher control). How strict the teacher is in enforcing the rules, and the severity of the punishment for rule infractions. The number of rules and the ease of students getting into trouble.
  Sample item: There are very few rules to follow. (–)

INN (Innovation). How much students contribute to planning classroom activities, and the amount of unusual and varying activities and assignments planned by the teacher. The extent to which the teacher attempts to use new techniques and encourages creative thinking in the students.
  Sample item: New ideas are tried out in this class. (+)

Note. Items designated (+) are scored by allocating 3, 2, 1, respectively, for the responses Often, Sometimes, Almost Never. Items designated (–) are scored in the reverse manner. Omitted responses are given a score of 2.
(a) Descriptions taken from Moos and Trickett (1974, p. 3) and Woods (1995, p. 110).
(b) The GW scale was modified from the original CIPQ. All other scales were modified from the original CES.

1.7.3. Analysis of the NCEI

Responses to the instrument were entered into a separate spreadsheet for each class. The three possible responses for each item (Often, Sometimes, Almost Never) were coded 3, 2 and 1, respectively. Scale scores for each student, and class means and standard deviations, were calculated for each sub-scale. The spreadsheet data were imported into SPSS version 4.01 Macintosh (SPSS Inc., 1990) to calculate Cronbach alpha reliability coefficients as measures of internal consistency, and to investigate differences between class means using effect sizes and t-tests (either dependent or independent). The data were also imported into CA-Cricket Graph III® (Computer Associates International, 1992) to create graphs showing comparisons of class means for the scales.

In an international comparison of studies using various classroom environment instruments, Wubbels (1993) stated that, while a scale reliability coefficient of 0.7 or greater is regarded as acceptable, it is not uncommon to find reliability coefficients down to 0.2.

In particular, comparisons were made between class means on the preferred and actual versions of the instrument. The t-test results were reported at the 0.05 and sometimes the 0.001 level of significance. Effect sizes, or standardised mean differences, were calculated using the formula d = (mean difference)/(pooled standard deviation), which is discussed by Dunlap, Cortina, Vaslow and Burke (1996). Effect sizes above about 0.50 were regarded as moderate, and those above about 0.70 as large (Fraser, 1989). The analysis depended primarily on the use of effect sizes, which Thompson (1996) recommends should be general AERA editorial policy, in line with American Psychological Association policy. Rennie (1997) also discusses the merits of using effect sizes to consider differences between group means.

The interpretation of NCEI data is highly dependent on the beliefs, perceptions and previous experiences of the interpreter, because the data concern students' beliefs and perceptions. The NCEI's value is in the comparison of students' preferences for, and perceptions of, the classroom environment, and in changes in perceptions over time.
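As a rough illustration of the two statistics just mentioned, the following sketch computes a Cronbach alpha coefficient and a standardised mean difference using only the Python standard library. It assumes equal group sizes for the pooled standard deviation, which is one common reading of the formula; the original analysis used SPSS, not this code, and the data shapes are assumptions for illustration.

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach alpha: item_scores is a list of k lists, one per item,
    each holding that item's scores across the same n students.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(item_scores)
    n = len(item_scores[0])
    item_vars = sum(statistics.variance(item) for item in item_scores)
    totals = [sum(item[s] for item in item_scores) for s in range(n)]
    return (k / (k - 1)) * (1 - item_vars / statistics.variance(totals))

def effect_size(actual, preferred):
    """Standardised mean difference d = (actual mean - preferred mean)
    divided by the pooled standard deviation (equal group sizes assumed)."""
    pooled_sd = ((statistics.stdev(actual) ** 2
                  + statistics.stdev(preferred) ** 2) / 2) ** 0.5
    return (statistics.mean(actual) - statistics.mean(preferred)) / pooled_sd
```

With this sign convention, a negative effect size means the actual class mean fell below the preferred class mean, matching the mostly negative values reported in Table III.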

2. AN APPLICATION OF THE NCEI

The main study in which the NCEI was used was a three-year impact study of the implementation of a portable computer programme. The results of the study have been reported fully (Newhouse, 1998); this discussion is therefore designed to provide an example of the application of the NCEI and to further describe its development. The study was based on an ethnographic model of research using an interpretive methodology. Each year, the study focussed on those features of the psychosocial environment of the school which seemed to be important from an interpretation of data collected in the previous year.

To illustrate the use of the NCEI in the study, data from the second year are considered. This was the year in which the final version of the NCEI was used. Data were collected using the NCEI, interviews, questionnaires, and lesson observations. However, the testing of the instrument in 1993 is also presented.

In the second half of the first year of the study (1993), the NCEI was used in its final form for the first time. There were four Grade 8 classes (13-year-olds) involved in the study that year. The preferred and actual forms of the instrument were administered to each science class. The Cronbach alpha coefficients for each administration of the instrument are given in Table II. It was decided that the instrument was sufficiently reliable to use in a more comprehensive manner in the following year of the study. However, there was some concern about the Competition and Teacher Control scales.

In the second year of the study, it was intended to obtain data associated with as many Grade 8 teacher-class combinations as possible. In the first year of the study, there had been a strong emphasis, which had gained empirical support, on investigating teacher-class combinations, and it was decided to continue this emphasis. It appeared that different teachers were able to make use of the portable computer resource to support different types of learning environments. It was not yet clear whether students' perceptions of these learning environments were consistent and whether their preferences were associated with the manner in which the computers were used.

The preferred version of the NCEI was administered to five focus classes in February, with the actual version administered once at the end of March and again in November. The preferred version was administered a second time to one class in November. Class means and standard deviations on all scales for the administrations were calculated. Independent t-tests were used to examine the small differences between class means on the preferred

TABLE II Cronbach Alpha Reliability Coefficients for Administrations of the NCEI to Four Classes in 1993 and Five Classes in 1994 Scale

Involvement Affiliation Teacher support Group work Competition Order and organisation Teacher control Innovation

Pref. alpha Aug. 1993 (n = 95) 0.79 0.83 0.84 0.83 0.65 0.76 0.69 0.82

Actual alpha Aug. 1993 (n = 95) 0.69 0.66 0.75 0.73 0.23 0.54 0.40 0.70

Pref. alpha Feb. 1994 (n = 111) 0.60 0.82 0.75 0.79 0.33 0.71 0.38 0.63

Actual alpha March 1994 (n = 111) 0.39 0.65 0.45 0.80 0.10 0.65 0.39 0.59

Actual alpha Nov. 1994 (n = 102) 0.71 0.72 0.69 0.75 0.25 0.43 0.17 0.70

128

C. PAUL NEWHOUSE

TABLE III Effect Sizes and t-Test Results for a Comparison of Actual and Preferred Scores on the NCEI for Five Classes in Grade 8 Scale

Effect size for actual vs. preferred 1 = Lucy 2 = Tim 3 = Sue 4 = Brian 5 = Sandra Involvement –0.87* –0.41 –1.12* –0.77* –1.13* Affiliation –0.96* –0.35 –0.67* –0.63* –0.74* Teacher support –0.86* –0.76* –0.88* –0.76* –0.92* Group work –1.07* –0.29 –1.08* –1.36* –1.18* Competition –0.87* –0.67* 0.04 0.31 –0.14 Order and organisation –0.90* –0.61* –0.59 –0.09 –1.24* Teacher control 0.39 –0.14 0.04 –0.12 0.16 Innovation –0.94* –0.80 –1.13* –1.12* –1.13* Note. Effect sizes were calculated by subtracting preferred mean from actual mean and dividing by the pooled standard deviation. *p < 0.05.

scales, but none were statistically significant. The preferred and actual data from March were compared by calculating effect sizes and dependent ttests. These results are given in Table III. Effect sizes for class means for the two administrations of the actual version for each class are reported in Table IV. These results are also presented in Figures 2 and 3. These data were analysed to compare characteristics of the five teacher-class combinations and to create individual classroom environment profiles for the classes. Because Cronbach alpha coefficients calculated for the Competition and Teacher Control scales (Table II) suggest poor internal consistency, these scales were not used.
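The reliability screening that led to dropping the Competition and Teacher Control scales relied on Cronbach's alpha. The sketch below is illustrative only: the formula is the standard alpha (k/(k−1) multiplied by one minus the ratio of summed item variances to the variance of scale totals), but the student responses shown are hypothetical, not NCEI data.

```python
# Sketch of the Cronbach alpha calculation used to screen NCEI scales.
# Data are hypothetical: rows = students, columns = items on one scale.

def cronbach_alpha(item_scores):
    """item_scores: list of per-student lists of item responses."""
    k = len(item_scores[0])            # number of items on the scale
    # Variance of a list of values (the n vs n-1 divisor cancels in the ratio).
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([row[i] for row in item_scores]) for i in range(k)]
    total_var = var([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 1-4 Likert responses for a four-item scale, six students.
responses = [
    [3, 3, 4, 3],
    [2, 2, 2, 3],
    [4, 4, 4, 4],
    [1, 2, 1, 2],
    [3, 4, 3, 3],
    [2, 3, 2, 2],
]
alpha = cronbach_alpha(responses)
# A scale would be retained only if alpha indicated acceptable internal
# consistency; the Competition and Teacher Control coefficients in Table II
# fall well below the conventional 0.7 benchmark.
```

Applied to real NCEI responses, one such coefficient would be computed per scale per administration, yielding a table of the form of Table II.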

TABLE IV
Effect Sizes for a Comparison of NCEI Actual Environment Data in March and November for Five Classes in Grade 8

                          Effect size for March vs. November
Scale                     1 = Lucy   2 = Tim   3 = Sue   4 = Brian(a)   5 = Sandra
Involvement               0.33       0.15      0.28      –0.44         –0.20
Affiliation               –0.43      –0.27     –0.15     –0.39         –0.35
Teacher support           0.28       0.12      0.45      –0.55         –0.27
Group work                –0.13      –0.36     0.28      0.50          –0.51
Competition               0.16       0.94      –0.64     –0.75         0.28
Order and organisation    0.16       –0.05     –0.13     –1.55         0.44
Teacher control           –0.13      0.03      0.49      –0.22         –0.29
Innovation                0.54       0.16      0.09      0.11          0.05
Note. Effect sizes were calculated by subtracting the March actual mean from the November actual mean and dividing by the pooled standard deviation. No t-tests were conducted because the analysis was not designed to determine person-environment fit but rather to compare two sets of actual class means.
(a) Only seven students in Brian’s class completed the NCEI actual version in November.
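The effect sizes reported in Tables III and IV follow the rule given in the table notes: a difference between two class means divided by the pooled standard deviation, with dependent (paired) t-tests used for the actual-preferred comparison. A minimal sketch, using hypothetical per-student scores in place of NCEI class data:

```python
import math
from statistics import mean, stdev

def pooled_sd(a, b):
    """Pooled standard deviation of two score lists (Bessel-corrected)."""
    na, nb = len(a), len(b)
    return math.sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                     / (na + nb - 2))

def effect_size(actual, preferred):
    """Actual mean minus preferred mean, in pooled-SD units (Table III rule)."""
    return (mean(actual) - mean(preferred)) / pooled_sd(actual, preferred)

def paired_t(actual, preferred):
    """Dependent t statistic for matched actual/preferred scores."""
    diffs = [a - p for a, p in zip(actual, preferred)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

# Hypothetical per-student scores on one NCEI scale for a single class.
preferred = [4.0, 3.5, 3.8, 4.2, 3.9, 4.1]
actual = [2.8, 3.0, 2.5, 3.2, 2.9, 2.6]

d = effect_size(actual, preferred)   # negative: class fell short of preferences
t = paired_t(actual, preferred)
```

A negative effect size, as in most cells of Table III, indicates that the actual environment fell short of what the students preferred on that scale.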


Figure 2. Comparison of Actual NCEI class means for the five focus classes, early in the second year of the study.

Figure 3. Comparison of Actual NCEI class means for the five focus classes, late in the second year of the study.

2.1. Tim’s Science Class

In Tim’s class, the computers were used regularly as he worked towards integrating their use into the learning activities. Most of Tim’s lessons involved student-centred learning activities, with students working individually or in groups on experiments or activities. They were encouraged to record notes, reports and data using their computers. Tim commented that they were trying to do more student-centred activities in Grade 8 science and that “we should do more of this because the students respond well” but that “not all teachers like this approach”.


Figure 4. NCEI class means for Tim’s science class (Preferred, March Actual and November Actual).

The results from the NCEI are displayed graphically in Figure 4. The pattern of effect sizes for a comparison between actual and preferred scale scores suggests that the students wanted more Teacher Support and more Order and Organisation. From discussions with Tim, it seemed that he clearly did not want to encourage a teacher-controlled, competitive environment and thus, in a largely competitive school, it was not surprising that students should notice the lack of direction by the teacher. Students perceived Tim’s class to be more innovative than students in three of the other focus classes perceived theirs (Tim’s class had the smallest effect size for Innovation for the difference between preferred and actual scores in Table III). However, students still perceived less innovation than they would have preferred, as indicated by an effect size of –0.80.

2.2. Lucy’s English Class

An analysis of teaching strategies, taken from observation notes of the five lessons (see Table V), clearly shows that, in Lucy’s class, a lot of time was spent on teacher presentation, whole-class interaction and individual work. Minimal group work was observed with this class, even though Lucy was confident that she had conducted many lessons using group work. Lucy indicated that Lesson 3 was more typical of many of her lessons. In this lesson, there was a lot of individual work, much of it on the computers, and the lesson was more student-controlled. The other lessons were heavily teacher-controlled and made little, if any, use of the computers. Many students in Lucy’s class used the computers regularly, although Lucy did not appear to attempt to accommodate the use of the computers as part of learning activities to the extent she had in the previous year.


TABLE V
Percentage of Observed Teaching Strategies Used by Lucy over Five Lessons with a Grade 8 English Class

Strategy                      Lesson 1   Lesson 2   Lesson 3   Lesson 4   Lesson 5
Transition                    35%        19%        21%        2%         –
Teacher presentation          15%        21%        20%        18%        100%
Whole-class interaction       25%        60%        2%         35%        –
Individual student work       25%        –          56%        33%        –
Group work                    –          –          –          8%         –
% Teacher controlled          75%        100%       31%        54%        100%
Computer use/student (mins)   0          0          20         2          1

The NCEI data were analysed in more detail for Lucy’s class because both versions had been administered twice, once early in the year (March) and once late in the year (November). It appeared that, in terms of preferred-actual comparison, students were dissatisfied with her class (refer to Table III and Figure 5), particularly when compared with the other four focus classes. Lucy’s students perceived little change in the learning environment between March and November. However, it should be noted that there had been large differences between the two preferred versions for a number of scales, especially with the students appearing to want less Involvement. Lucy wanted to make use of group-work strategies and had been observed to do so in the preceding year. Although she felt that she still used group work a lot, this was only observed for about three minutes in six

Figure 5. NCEI class means for Lucy’s English class (Preferred, March Actual and November Actual).


lessons, and those students interviewed from her class did not share her perception. Lucy did admit to having less energy and enthusiasm for using the computers than in the previous year, and therefore it is likely that she implemented fewer of the high-energy group-work strategies and, for similar reasons, probably put less effort into involving students in creating the learning environment.

2.3. Sue’s Social Studies Class

An analysis of the strategies and the amount of computer use in the observed lessons for Sue’s class was conducted. Students spent most of the time working individually or taking part in whole-class interactive sessions. Group work was conducted in one lesson. In none of the lessons did all students use their computers: Sue had a few students who chose to use the computers regularly in class, but the majority did not. Profiles of NCEI class means, as represented in Figure 6, indicate a poor person-environment fit for this class, as was also the case for Lucy’s, Brian’s and Sandra’s classes. The scales for which there were large, statistically significant differences between the actual and preferred versions were Involvement, Affiliation, Teacher Support, Group Work and Innovation.

Figure 6. NCEI class means for Sue’s social studies class (Preferred, March Actual and November Actual).

2.4. Mathematics Classes

Students in Brian’s and Sandra’s mathematics classes did not use the computers at any stage in class. It appears that the environments created in these classes (refer to Figures 7 and 8) were very similar and gave students the perception that they were not involved in decisions about learning, that they should not collaborate with other students, and that learning activities were not innovative. Typically the lessons involved watching the teacher work examples on the board, reading the textbook and completing exercises from the textbook or board.

Figure 7. NCEI class means for Brian’s mathematics class (Preferred, March Actual and November Actual).

Figure 8. NCEI class means for Sandra’s mathematics class (Preferred, March Actual and November Actual).

2.5. Comparing the Five Focus Classes

For all classes except Tim’s science class, the preferred-actual comparison of class means resulted in large effect sizes and statistically significant differences on most NCEI scales (Table III). Statistically significant actual-preferred differences between class means on the Involvement, Affiliation, Group Work and Innovation scales occurred for all classes except Tim’s, and statistically significant actual-preferred differences occurred for all classes on the Teacher Support scale.


When considering the change in students’ perceptions of the classroom environments between March and November, there were few even moderate effect sizes. The March class means (Figure 2) show relatively similar profiles for all five classes, except that there is a large spread of class means for the Group Work scale: students in Tim’s class perceived much more group work than the other classes, and students in the two mathematics classes (Brian’s and Sandra’s) perceived the least. The November actual class means (Figure 3) show less uniformity, with a spread of class means on the Group Work scale and the mathematics classes having low class means for the Involvement, Teacher Support and Innovation scales. Students in Lucy’s and Tim’s classes continued to perceive their classes as being more innovative and varied than did the students in the other classes. The computers were not consistently an integral part of the learning environments associated with any of these classes, although they were integral to some lessons (one observed) in Tim’s science class. In Lucy’s and Sue’s classes, the computers were used by some students at some times, but did not appear to be necessary for the lesson, or even to be helpful for some students. Only in Tim’s class were the computers used consistently by most students, almost always to support student-centred learning strategies. This was also the only class with a good person-environment fit, particularly evident in the use of group work. This finding was consistent with the findings of other researchers (e.g. Becker, Ravitz & Wong, 1999) that computer support for learning is most effective in student-centred learning environments. Rowe (1993) goes as far as to say that “learning with personal computers is not compatible with traditional didactic methods of class teaching” (p. 114).

3. CONCLUSIONS

The New Classroom Environment Instrument (NCEI) proved to be a useful tool in the overall analysis of the classroom environments researched in the study. When combined with lesson observation and interview data, the NCEI provided a means of describing differences between teacher-class combinations and explaining the dynamics of the classroom. For example, when computers were consistently used to support student-centred approaches to learning, particularly involving group work, analysis indicated a good person-environment fit. Further, environments in which computers were used frequently were perceived to be more innovative and involving. Another example was that, over a school year, the student-preferred environment could change considerably, which could relate to teacher energy and unfulfilled expectations. This could also be reflected in the use of computers (e.g. Lucy’s class) in classes in which there are established expectations that require a high level of teacher energy to implement. Given the importance of the impact of computer support on the learning environment, educational computing research must continue to focus on the analysis of the classroom learning environment. The use of learning environment instruments such as the NCEI, in conjunction with other data sources, is recommended for such research.

REFERENCES

Becker, H. J. (1994). How exemplary computer-using teachers differ from other teachers: Implications for realizing the potential of computers in schools. Journal of Research on Computing in Education, 26, 291–321.
Becker, H. J., Ravitz, J. L., & Wong, Y. T. (1999). Teacher and teacher-directed student use of computers and software (Teaching, Learning, and Computing: 1998 National Survey, Report 3). Irvine, CA: Center for Research on Information Technology and Organizations, University of California, Irvine.
Bergen, C., & Kingston, P. (1994). A framework for analysing the contribution of educational technology to learning. British Journal of Educational Technology, 25(1), 58–60.
Carter, D. S. G. (1990). Knowledge transmitter, social scientist or reflective thinker: Three images of the practitioner in Western Australian high schools. Theory and Research in Social Education, XVIII, 274–317.
Collis, B. (1989, April). Using information technology to create new educational situations. Paper presented at the UNESCO International Congress on Education and Informatics, Paris.
Computer Associates International. (1992). CA-Cricket Graph III (Version 1.53). Islandia, NY: Author.
DeCorte, E. (1990). Learning with new information technologies in schools: Perspectives from the psychology of learning and instruction. Journal of Computer Assisted Learning, 6, 69–87.
Dunlap, W. P., Cortina, J. M., Vaslow, J. B., & Burke, M. J. (1996). Meta-analysis of experiments with matched groups or repeated measures designs. Psychological Methods, 1, 170–177.
Dwyer, D. (1994). Apple classrooms of tomorrow: What we’ve learned. Educational Leadership, 51(7), 4–10.
Dwyer, D. C., Ringstaff, C., & Sandholtz, J. H. (1991). Changes in teachers’ beliefs and practices in technology-rich classrooms. Educational Leadership, 48(8), 45–52.
Fallshaw, M. (1995). The use of laptops in senior mathematics. In L. W. Shears (Ed.), Computers and schools (pp. 91–97). Melbourne, Australia: Australian Council for Educational Research.
Fraser, B. (1981). Using environmental assessments to make better classrooms. Journal of Curriculum Studies, 13, 131–144.


Fraser, B. J. (1989). Twenty years of classroom climate work: Progress and prospect. Journal of Curriculum Studies, 21, 307–327.
Fraser, B. J. (1994). Research on classroom and school climate. In D. Gabel (Ed.), Handbook of research on science teaching and learning (pp. 493–541). New York: Macmillan.
Fraser, B. J., & Fisher, D. L. (1983a). Assessment of classroom psychosocial environment: Workshop manual. Perth, Australia: Western Australian Institute of Technology.
Fraser, B. J., & Fisher, D. L. (1983b). Use of actual and preferred classroom environment scales in person-environment fit research. Journal of Educational Psychology, 75, 303–313.
Fraser, B. J., & McRobbie, C. J. (1995). Science laboratory classroom environments at schools and universities: A cross-national study. Educational Research and Evaluation, 1, 289–317.
Fraser, B. J., & Teh, G. P. L. (1994). Effect sizes associated with micro-PROLOG-based computer-assisted learning. Computers in Education, 23, 187–196.
Fraser, B. J., & Walberg, H. J. (Eds.). (1991). Educational environments. Oxford, UK: Pergamon Press.
Fraser, B. J., Malone, J. A., & Neale, J. M. (1989). Assessing and improving the psychosocial environment of mathematics classrooms. Journal for Research in Mathematics Education, 20, 191–201.
Gardner, J., Morrison, H., Jarman, R., Reilly, C., & McNally, H. (1994). Personal portable computers and the curriculum (Practitioner Minipaper 13). Glasgow, UK: Scottish Council for Research in Education.
Hayward, P. A. (1993, April). When novelty isn’t enough: A case study of students’ reactions to technology in the classroom environment. Paper presented at the joint meeting of the Southern States Communication Association and the Central States Communication Association, Lexington, KY.
Johnson, D. W., & Johnson, R. T. (1991). Cooperative learning and classroom and school climate. In B. J. Fraser & H. J. Walberg (Eds.), Educational environments (pp. 55–74). Oxford, UK: Pergamon Press.
Levine, T., & Donitsa-Schmidt, S. (1995). Computer experience, gender, and classroom environment in computer-supported writing classes. Journal of Educational Computing Research, 13, 337–357.
Lyall, K. (1997, August 2–3). Laptop High. The Weekend Australian, Syte, pp. 1, 4.
Lynch, W. (1990). Social aspects of human-computer interaction. Educational Technology, 30(4), 26–31.
Maor, D., & Fraser, B. J. (1996). Use of classroom environment perceptions in evaluating inquiry-based computer-assisted learning. International Journal of Science Education, 18, 401–421.
McMahon, T. A., & Duffy, T. M. (1993). Computers extending the learning environment: Connecting home and school. Paper presented at the Convention of the Association for Educational Communications and Technology, New Orleans, LA.
Mercer, N., & Fisher, E. (1992). How do teachers help children to learn? An analysis of teachers’ interventions in computer-based activities. Learning and Instruction, 2, 339–355.
Mevarech, Z. R., & Light, P. H. (1992). Peer-based interaction at the computer: Looking backward, looking forward. Learning and Instruction, 2, 275–280.


Miller, L., & Olson, J. (1994). Putting the computer in its place: A study of teaching with technology. Journal of Curriculum Studies, 26(2), 121–141.
Moos, R. H., & Trickett, E. J. (1974). Classroom Environment Scale: Manual. Palo Alto, CA: Consulting Psychologists Press.
Newby, M., & Fisher, D. (2000). A model of the relationship between university computer laboratory environment and student outcomes. Learning Environments Research, 3, 51–66.
Newhouse, C. P. (1993, July). Are Australian classrooms ready for computers? Paper presented at the Australian Computers in Education Conference, Sydney, Australia.
Newhouse, C. P. (1998). Teachers’ responses and classroom learning environments associated with student access to portable computers. Unpublished doctoral thesis, Curtin University of Technology, Perth, Australia.
Olson, J. (1988). Schoolworlds – microworlds. Oxford, UK: Pergamon Press.
Rennie, L. (1997, July). A significant issue in interpreting and reporting quantitative research. Paper presented at the annual conference of the Australasian Science Education Research Association, Adelaide, Australia.
Rentoul, A. J., & Fraser, B. J. (1979). Conceptualization of enquiry-based or open classroom learning environments. Journal of Curriculum Studies, 11, 233–245.
Rieber, L. P. (1994). Computers, graphics, and learning. Dubuque, IA: Wm. C. Brown Communications.
Riel, M. (1989). The impact of computers in classrooms. Journal of Research on Computing in Education, 22, 180–190.
Rowe, H. A. H. (1993). Learning with personal computers. Melbourne, Australia: Australian Council for Educational Research.
Rysavy, S. D. M., & Sales, G. C. (1991). Cooperative learning in computer-based instruction. Educational Technology Research and Development, 39(2), 70–79.
Salomon, G. (1994). Differences in patterns: Studying computer enhanced learning environments. In S. Vosniadou, E. DeCorte, & H. Mandl (Eds.), Technology-based learning environments (pp. 79–88). Heidelberg, Germany: Springer-Verlag.
Schofield, J. W. (1995). Computers and classroom culture. New York: Cambridge University Press.
SPSS Inc. (1990). SPSS for the Macintosh 4.0 (Version 4.04) [Computer software]. Chicago: Author.
Taylor, P. C., & Fraser, B. J. (1991, April). CLES: An instrument for assessing constructivist learning environments. Paper presented at the annual meeting of the National Association for Research in Science Teaching, Fontana, WI.
Teh, G. P., & Fraser, B. J. (1995). Development and validation of an instrument for assessing the psychosocial environment of computer-assisted learning classrooms. Journal of Educational Computing Research, 12, 177–193.
Thompson, B. (1996). AERA editorial policies regarding statistical significance testing: Three suggested reforms. Educational Researcher, 25(2), 26–30.
Van Den Akker, J., Keursten, P., & Plomp, T. (1992). The integration of computer use in education. International Journal of Educational Research, 17, 65–76.
Welle-Strand, A. (1991). Evaluation of the Norwegian Program of Action: The impact of computers in the classroom and how schools learn. Computers and Education, 16, 29–35.
Woods, J. D. (1995). Teaching effectiveness: Using students’ perceptions of teaching style and preferred learning style to enhance teaching performance. Unpublished doctoral thesis, Curtin University of Technology, Perth, Australia.


Wubbels, T. (1993). Cross-national study of learning environments. In D. L. Fisher (Ed.), The study of learning environments, Volume 7 (pp. 112–120). Perth, Australia: Curtin University of Technology.
Wubbels, T., Brekelmans, M., & Hooymayers, H. (1991). Interpersonal teacher behavior in the classroom. In B. J. Fraser & H. J. Walberg (Eds.), Educational environments (pp. 141–160). Oxford, UK: Pergamon Press.

C. PAUL NEWHOUSE

School of Education
Edith Cowan University
2 Bradford Street
Mt Lawley, Western Australia 6050
Australia
E-mail: [email protected]