The Effects of Honors Program Participation on Experiences of Good Practices and Learning Outcomes
Tricia A. Seifert, Ernest T. Pascarella, Nicholas Colangelo, Susan G. Assouline
Journal of College Student Development, Volume 48, Number 1, January/February 2007, pp. 57-74 (Article)
Published by Johns Hopkins University Press. DOI: 10.1353/csd.2007.0007
The Effects of Honors Program Participation on Experiences of Good Practices and Learning Outcomes

Tricia A. Seifert, Ernest T. Pascarella, Nicholas Colangelo, Susan Assouline

Using multi-institution data and a longitudinal, pretest-posttest design, this study investigated the impact of honors programs on student experiences of good practices in undergraduate education as well as cognitive development in the first year of college. We found students in honors programs advantaged in terms of the good practice measures related to the in-class college experience. Additionally, we found significant positive effects of honors programs on critical thinking, mathematics, and composite cognitive development. We also found conditional effects in which honors program participation seemed to have a greater impact for men and students of color on some learning outcomes.

In an era of competition among colleges and universities, many institutions have sought to increase the quality of their student body by recruiting more academically gifted students to campus. For those institutions that are not able to increase their overall selectivity (e.g., public institutions that are mission-bound to accept local high school graduates meeting certain criteria), colleges and universities have increasingly created honors programs, and more recently honors colleges, as a means to market themselves to high-achieving students
(e.g., Bulakowski & Townsend, 1995; Byrne, 1998; Harrison-Cook, 1999; Long, 2002; Reihman, Varhus, & Whipple, 1990; Shushok, 2003). It is understandable why colleges and universities would be interested in recruiting these students. As Bridget Long (2002) of the Harvard Graduate School of Education recently noted, high-achieving students offer positive peer effects to the campus milieu. Additionally, a host of external stakeholders often point to the successes of high-achieving students in the labor market as a measure of overall institutional effectiveness. Finally, public institutions have heralded honors programs as combating the statewide brain drain while fulfilling their missions of providing postsecondary educational options for all residents, regardless of ability. From a student perspective, with college tuition outpacing increases in the median family income, honors programs provide the opportunities of an Ivy League education at a state university price, thus decreasing the degree of overall stratification between colleges (Galinova, 2005; Long; Samuels, 2001). Although honors programs take many forms, they exist largely to enhance the impact
Tricia A. Seifert is a postdoctoral research scholar; Ernest T. Pascarella is the Mary Louise Petersen Chair in Higher Education and co-director of the Center for Research on Undergraduate Education; Nicholas Colangelo is the Myron and Jacqueline Blank Chair of Gifted Education and Director of the Belin-Blank Center for Gifted Education; Susan Assouline is the Associate Director of the Belin-Blank Center for Gifted Education; all at the University of Iowa. This investigation was conducted as part of the National Study of Student Learning (NSSL), which was supported by Grant No. R117G10037 from the US Department of Education to the National Center on Postsecondary Teaching, Learning, and Assessment. We also received generous support from the Center of Inquiry on the Liberal Arts at Wabash College. January/February 2007 ◆ vol 48 no 1
Seifert, Pascarella, Colangelo, & Assouline
of undergraduate education for particularly talented and motivated students (Austin, 1985; Harrison-Cook, 1999; Long, 2002; Pflaum, Pascarella, & Duby, 1985; Reed, 1988; Sederberg, 2005; Shushok, 2003). Long reported that honors programs exist at all but the most and least competitive institutions. Honors programs are most heavily concentrated at public four-year institutions; only six percent of public two-year colleges have honors programs. The rate at which institutions have sought to create honors programs differs by institutional type, with public institutions establishing 25% of the honors programs since 1989. There has been far greater growth in honors programs at public two-year colleges, with a 40% increase since 1989 (Long).

In addition to varying by institutional setting, honors programs also vary by organizational structure and programmatic offerings. Long (2002) noted public four-year institutions have increasingly modified their honors “programs” to become honors “colleges.” Long (2002) and Sederberg (2005) identified honors colleges as more likely to have special residential opportunities and scholarships for honors students. Even among honors colleges, they relate to the broader university in different ways. Some honors colleges have a centralized “overlay” structure of the university’s undergraduate program, whereas others are freestanding colleges with their own faculty and curriculum (Sederberg). From a programmatic perspective, most honors programs offer “general” honors courses within the core education curriculum, with fewer programs offering “departmental” honors (Long; Sederberg). Over half of the programs offer some combination of special seminars or colloquia as well as an honors senior thesis (Long; Sederberg). The heterogeneity among honors programs and honors colleges motivated the National Collegiate Honors Council (NCHC) to develop “Basic Characteristics of a Fully
Developed Honors Program” over a decade ago and more recently the “Basic Characteristics of a Fully Developed Honors College,” which was formally endorsed in 2005 (Sederberg, 2005). For more information regarding these characteristics, visit the NCHC website at http://www.nchchonors.org/basic.htm

Honors programs, however, are not without their critics. Murray Sperber, professor of English and American Studies at Indiana University, argued in a 2000 article in the Chronicle of Higher Education that honors programs remove the best students and professors from the general classroom where their contributions would enrich the educational experiences of all students. Additionally, critics assert that honors programs redirect scarce resources from programs that serve the neediest of students and place them in programs serving the most able students (VanPoolen-Larsen, 1991). This criticism points to the paradox of cultural beliefs that undergird American higher education. On one hand, Americans hold the egalitarian notion that higher education should be the right of every American irrespective of wealth or social standing, whereas on the other hand they equally believe in the meritocratic system in which the best educational opportunities are distributed to the most motivated and talented students (Galinova, 2005).

Given the high visibility of honors programs and, depending on one’s vantage point, their laudable goals or levied criticisms, relatively little research has examined the extent to which honors program participation influences student learning. The small body of research that does exist has been largely descriptive and anecdotal (Reihman et al., 1990), single-institution in nature (Denk, 1998), or has focused on predicting the success of honors program participants in terms of college grades or persistence (Astin, 1993; McDonald & Gawkoski, 1979; Pflaum et al., 1985). Certainly institutional persistence and academic performance are nontrivial indicators of success in college, but they are not necessarily reliable or generalizable indicators of intellectual or cognitive development (Pascarella & Terenzini, 1991, 2005).

Our review of the existing literature uncovered only three studies that specifically addressed the effects of honors programs on the intellectual or cognitive outcomes of college (Astin, 1993; Ory & Braskamp, 1988; Shushok, 2003). All three studies used student self-reports or self-reported gains to measure cognitive growth. Ory and Braskamp examined the effects of three different academic programs: the honors program, the regular curriculum, and a transition program for academically disadvantaged students. They predicted that programs that facilitate higher levels of student involvement would be associated with higher levels of student satisfaction and perceived gains in intellectual development. They found that honors program participation had a significant positive effect on student self-reported gains in intellectual development during college. However, the design of their study failed to preclude the possibility that this “effect” could also be attributable to differences in the precollege characteristics of honors and nonhonors students.

Astin (1993) and Shushok (2003) reported consistent findings but with more internally valid research designs than the Ory and Braskamp (1988) study. As part of his comprehensive investigation of What Matters in College, Astin analyzed 25,000 students at 217 colleges to conclude that, net of important confounding influences, honors program participation had a small, positive influence on growth during college in self-reported analytic/problem-solving skills. Similarly, in a recent single-institution study, Shushok reported students in honors programs demonstrated significantly higher levels of self-reported growth in the areas of liberal arts and science and technology than did their nonhonors peers. It is also worth noting that among honors program participants Shushok found students of color reported greater gains in science and technology than their White peers. Shushok found no difference in terms of self-reported gains in critical thinking and analytical skills between honors participants and their nonhonors peers.

Interestingly, despite these differences in self-reported gains, honors and nonhonors students alike reported a marked homogeneity in their undergraduate experiences (Shushok, 2003). In other words, honors students had similar classroom and out-of-classroom experiences as their nonhonors peers. This, argues Shushok, suggests the larger self-reported gains of honors students may reflect a “Pygmalion” effect. That is, honors program students reported greater gains largely because of their perceptions of what an honors program is supposed to do, not because of the actual value added by the program.

The Astin (1993), Ory and Braskamp (1988), and Shushok (2003) investigations constitute important contributions to our understanding of the impact of honors programs—in part because of the paucity of empirical evidence. At the same time, the studies are limited by the questions of psychometric validity and internal design validity inherent in the use of student self-reports or student self-reported gains (Pascarella, 2001; Pascarella & Terenzini, 2005). The present study sought to address this problem, and thus build on existing evidence, by estimating the impact of honors programs with a longitudinal, pretest–posttest design containing two characteristics absent in the existing body of research. First, it employed empirically vetted measures of good practices in undergraduate education (Chickering & Gamson, 1987,
1991), and second it used standardized measures of student intellectual and cognitive development.

The study tested four major hypotheses. The first hypothesis anticipated that, net of individual precollege/background characteristics (e.g., tested academic ability, secondary school achievement and involvement, demographic characteristics, and educational plans) and other influences (credit hours taken, work responsibilities, on- or off-campus residence, course-taking patterns, institution attended), students in honors programs during the first year of college would be more likely to experience “good practices” in undergraduate education than their nonhonors peers. The second hypothesis was that, net of individual precollege/background characteristics (including scores on parallel precollege measures of each outcome) and other influences, honors program participants would demonstrate higher end-of-first-year scores on standardized measures of composite cognitive development, reading comprehension, mathematics, and critical thinking than would their nonhonors program counterparts. Because the study had a longitudinal design, with a pretest and posttest score on each dependent measure (i.e., composite cognitive development, reading comprehension, mathematics, and critical thinking), the second hypothesis is essentially the same as saying that honors program students would demonstrate larger first-year gains on these measures than their nonhonors program counterparts (Pascarella, Wolniak, & Pierson, 2004). The third hypothesis anticipated that any significant positive effects of honors program participation on first-year cognitive outcomes (hypothesis two) would be reduced to nonsignificance after accounting for good practice measures. This essentially means that the net influence of honors program participation on first-year cognitive/intellectual growth is explained by the fact that
students in such programs are more likely to be exposed to “good practices” in undergraduate education than are other students (Lacy, 1978). The fourth and final hypothesis was less specific than the first three. This hypothesis anticipated that the net cognitive effects of honors program participation might well be conditional rather than general. That is, the influence of honors program participation on first-year cognitive outcomes might differ in magnitude for different kinds of students (e.g., men versus women, White students versus students of color, and the like).

To answer these questions, we analyzed longitudinal data from the first year of the 18-institution National Study of Student Learning (NSSL). Although NSSL is a decade old, our review of existing literature uncovered no other multi-institutional data set with characteristics permitting as internally valid an estimate of the influence of honors programs on learning outcomes as those of NSSL (i.e., a longitudinal, pretest–posttest design, empirically vetted measures of good practices in undergraduate education, standardized measures of learning and cognitive development, and provision for statistical control of extensive confounding influences).
Method

Samples and Data Collection

The institutional sample was 18 four-year colleges and universities located in 15 states throughout the country. Institutions were chosen from the National Center for Education Statistics Integrated Postsecondary Education Data System (IPEDS) to represent differences in colleges and universities nationwide on such characteristics as institutional type and control (e.g., private and public research universities, private liberal arts colleges, comprehensive universities, and historically Black colleges), size, location, commuter versus residential
character, and ethnic distribution of the undergraduate student body. This sampling technique provided a sample of institutions with a wide range of selectivity—from some of the most selective institutions in the country to essentially open-admission institutions. The individuals in the sample were students who had been followed during their first year of college as participants in the NSSL, a federally funded longitudinal investigation of factors influencing learning, cognitive development, and other college outcomes. The initial sample of 3,303 students was selected randomly from the incoming first-year class at each participating institution. Students received a cash stipend for their participation in each phase of the data collection.

The first data collection was conducted in the fall of 1992 as the students were entering college. The data collected included an NSSL precollege survey that gathered information on student demographic characteristics, precollege experiences, and educational aspirations and expectations about college. Participants also completed the reading comprehension, mathematics knowledge, and critical thinking tests of the Collegiate Assessment of Academic Proficiency [CAAP] developed by ACT (American College Testing Program [ACT], 1990). Each of the three 40-minute tests consisted of multiple-choice items. In the spring of 1993, each participant completed the same three CAAP tests as well as the College Student Experiences Questionnaire [CSEQ] (Pace, 1990) and an NSSL follow-up questionnaire on their first year of college. The CSEQ and the NSSL questionnaires gathered extensive information about each student’s classroom and non-classroom experiences during the preceding school year. This included whether or not one participated in an honors program. Usable data on the follow-up were available for approximately
2,000 students from the original sample of 3,303 (a 60.5% response rate). Because of attrition from the sample and a differential response rate by sex, ethnicity, and institution, we developed a sample weighting algorithm to adjust for potential response bias. The responses of the follow-up participants within each institution were weighted up to that institution’s end-of-first-year population by sex (male or female) and race/ethnicity (African American, Caucasian, Hispanic, or other). Additional analyses indicated that students who persisted in the study through the first follow-up and those who dropped out of the study but persisted at the institution differed in only chance ways with respect to precollege cognitive test scores, age, race, and socioeconomic background (Pascarella, Edison, Nora, Hagedorn, & Terenzini, 1998).
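The weighting procedure described above is a form of poststratification: within each institution, respondents in each sex-by-race/ethnicity cell are weighted up to that cell's share of the institution's end-of-first-year population. The sketch below is illustrative only; all counts are invented, and the actual NSSL algorithm may differ in its details.

```python
# Hypothetical sketch of poststratification weighting for one institution.
# Cells are (sex, race/ethnicity) pairs; counts below are invented.

def cell_weights(population_counts, sample_counts):
    """Weight per cell = (population share of cell) / (sample share of cell)."""
    pop_total = sum(population_counts.values())
    samp_total = sum(sample_counts.values())
    weights = {}
    for cell, pop_n in population_counts.items():
        samp_n = sample_counts.get(cell, 0)
        if samp_n == 0:
            continue  # no respondents in this cell to weight
        weights[cell] = (pop_n / pop_total) / (samp_n / samp_total)
    return weights

population = {("F", "White"): 400, ("M", "White"): 350,
              ("F", "AfrAm"): 150, ("M", "AfrAm"): 100}
sample     = {("F", "White"): 60,  ("M", "White"): 30,
              ("F", "AfrAm"): 20,  ("M", "AfrAm"): 10}

w = cell_weights(population, sample)
# Cells underrepresented in the sample (e.g., men) receive weights above 1,
# so the weighted sample mirrors the institution's population composition.
```

Applying these weights makes each weighted cell's share of the sample equal its share of the population, which is the sense in which responses are "weighted up" to the end-of-first-year population.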
Dependent Variables

The study contained two sets of dependent variables. The first consisted of 20 individual measures of empirically validated good practices in undergraduate education taken from Chickering and Gamson’s (1987, 1991) principles of good practice in undergraduate education and research on effective teaching and influential peer interactions in college (Pascarella & Terenzini, 1991, 2005). These included measures such as: (a) student-faculty contact, (b) emphasis on cooperative learning, (c) active learning/time on task, (d) prompt feedback to students, (e) high expectations of faculty for student learning, (f) quality of teaching, and (g) influential interactions with other students. Detailed operational definitions and psychometric properties of all good practice measures are provided in Appendix A in Pascarella, Wolniak, Seifert, Cruce, and Blaich (2005).

A substantive body of evidence supports the predictive validity of Chickering and Gamson’s (1987, 1991) principles of good
practice. In the presence of extensive controls for confounding influences, these principles of good practice have been significantly and positively linked to a myriad of desired cognitive and noncognitive outcomes during college and with career and personal benefits after college (Astin, 1993; Chickering & Reisser, 1993; Kuh, Schuh, Whitt, & Associates, 1991; Pascarella & Terenzini, 1991, 2005). Recent research supporting the predictive validity of specific principles of good practice includes the following: (a) student-faculty contact (Anaya, 1999; Frost, 1991; Kuh & Hu, 2001; Terenzini, Springer, Yaeger, Pascarella, & Nora, 1994); (b) emphasis on cooperative learning (Cabrera et al., 2002; Johnson, Johnson, & Smith, 1998a, 1998b; Qin, Johnson, & Johnson, 1995); (c) active learning (Grayson, 1999; Hake, 1998; Kuh, Pace, & Vesper, 1997; Lang, 1996; Murray & Lang, 1997); (d) academic effort/time on task (Astin; Ethington, 1998; Hagedorn, Siadat, Nora, & Pascarella, 1997; Johnstone, Ashbaugh, & Warfield, 2002; Watson & Kuh, 1996); (e) prompt feedback to students (d’Apollonia & Abrami, 1997; Feldman, 1997); (f) high expectations (Arnold, Kuh, Vesper, & Schuh, 1993; Astin; Bray, Pascarella, & Pierson, 2004; Whitmire & Lawrence, 1996); and (g) diversity experiences (Kitchener, Wood, & Jensen, 2000; Pascarella, Palmer, Moye, & Pierson, 2001; Terenzini et al., 1994; Umbach & Kuh, 2006). We augment the Chickering and Gamson (1987, 1991) framework by adding the empirically supported dimension, quality of teaching (Feldman, 1997; Hines, Cruickshank, & Kennedy, 1985; Pascarella, Edison, Nora, Hagedorn, & Braxton, 1996; Wood & Murray, 1999).

The second set of dependent measures was end-of-first-year scores on the CAAP reading comprehension, mathematics, and critical thinking tests. Designed by ACT, each of the three multiple-choice tests measures
academic competencies and skills thought to be generic to a general education curriculum in the first two years of college (ACT, 1990). In addition to scores on the three CAAP tests, we created a fourth score, termed “composite cognitive development,” which combined the reading, mathematics, and critical thinking scores into a single score. The alpha reliability was .83.

The CAAP reading comprehension module is a 40-minute, multiple-choice test composed of 36 items that assesses reading comprehension as a product of skill in inferring, reasoning, and generalizing. The test consists of four 900-word prose passages designed to represent the level and kinds of reading students commonly encounter in college curricula, including topics in fiction, humanities, social sciences, and natural sciences. Alpha reliabilities range from .76 to .87.

The CAAP mathematics module is a 40-minute, multiple-choice test composed of 35 items designed to measure a student’s ability to solve mathematical problems. The test emphasizes quantitative reasoning rather than formula memorization and includes algebra (four levels), coordinate geometry, trigonometry, and introductory calculus. Alpha reliabilities range from .79 to .81.

The CAAP critical thinking module is a 40-minute, multiple-choice test composed of 32 items. It is designed to measure a student’s ability to clarify, analyze, evaluate, and extend arguments. The test consists of four passages in a variety of formats (e.g., case studies, debates, dialogues, experimental results, statistical arguments, editorials). Each passage contains a series of arguments that support a general conclusion. Alpha reliabilities range from .81 to .83.
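The alpha reliability of .83 quoted above for the composite is a Cronbach's alpha computed over the three subtest scores. A minimal sketch of the computation (the score matrix below is invented for illustration, not NSSL data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_students, n_items) matrix of scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of summed score
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy scores: rows = students; columns = reading, mathematics, critical thinking
scores = np.array([[60, 55, 58],
                   [65, 62, 64],
                   [70, 68, 69],
                   [58, 54, 57]])
alpha = cronbach_alpha(scores)
```

Because the three toy subtests move together almost perfectly, this example yields a very high alpha; the study's real composite of .83 reflects strong but not perfect consistency among the three CAAP modules.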
Independent Variables

The independent variable in all analyses was
a dummy variable indicating whether or not a student participated in an honors program during the first year of college. Approximately 13% of the sample was involved in honors programs. The information was taken from the NSSL follow-up instrument. This independent variable is admittedly a surface measure of participation, as we cannot disaggregate the depth or breadth of a student’s involvement in the honors program.
Control Variables

This study is strengthened immeasurably by our ability to control for confounding influences in the regression specification. Specifically, in all analyses we controlled for a host of individual background characteristics: gender; age; parents’ education and income; race; precollege educational aspirations; precollege academic ability; whether the college attended was one’s first choice; self-reported high school GPA; high school involvement, a composite measure of time spent during high school in eight separate activities (studying alone, socializing with friends, talking with teachers outside of class, working for pay, exercising or sports, studying with friends, volunteer work, and extracurricular activities); precollege academic motivation; and a parallel precollege measure for each end-of-first-year cognitive measure estimated. We also held constant a number of college experiences: cumulative number of college credit hours completed; hours worked on- and off-campus for pay; and course-taking patterns in the natural sciences, mathematics, social sciences, technical courses, and the arts and humanities. In an effort to take into account the differences in the scope of honors programs (at both a structural and an effectual level), we entered a set of dummy variables for the institution attended. Finally, in the final stage of the analyses, we entered all the good practice measures as controls so as to estimate the unique
effect of honors programs on the cognitive measures. The breadth of the controls entered into the regression equations resulted in a conservative estimate of the effects of honors programs on each of the dependent measures of interest. Descriptive statistics for all variables in the models for honors and nonhonors students are presented in Table 1.
Analyses

The analyses were conducted in three stages. In the first stage, we regressed each good practice measure on the dummy variable representing honors program participation and an extensive set of student background and college experience controls as well as a set of dummy variables for institution attended. The latter set of dummy variables served to control for any contextual effects and allowed us to interpret effects net of the institution that the student attended. In the second stage of the analyses, we estimated the net total and direct effects of honors program participation on each of the four cognitive outcomes (end-of-first-year composite cognitive development, reading comprehension, mathematics, and critical thinking). To estimate total effects of honors participation, we regressed each of the four cognitive outcomes on the independent variable and the same set of controls used in the first stage of the analyses. The only exception was that we also controlled for prior ability in the cognitive area by using an exact parallel precollege measure of each cognitive outcome. To estimate the direct effects of honors program participation on the cognitive outcomes, we added the good practice measures, individually estimated in the first stage of the analysis, as a block of variables. This allowed us to estimate the unique effect of honors program participation on the cognitive measures, net of the types of in- and out-of-classroom experiences had by the student.
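The second-stage "total effects" model described above is an OLS regression of each posttest outcome on the honors dummy, the parallel pretest, the control block, and institution dummies. The sketch below is a toy version with simulated data and only a few stand-in controls, not a reproduction of the study's full specification; it also illustrates the effect-size convention the authors use (the metric coefficient divided by the outcome's standard deviation).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Invented illustrative data: pretest score, honors dummy, 3 institutions.
pretest = rng.normal(180, 12, n)
honors = (rng.random(n) < 0.13).astype(float)   # ~13% honors participation
inst = rng.integers(0, 3, n)                    # institution attended
inst_dummies = np.eye(3)[inst][:, 1:]           # drop one reference category

# Simulated posttest with a true honors advantage of 2 points.
posttest = 5 + 0.95 * pretest + 2.0 * honors + rng.normal(0, 5, n)

# Total-effects model: posttest on honors dummy + pretest + institution
# dummies (stand-ins for the study's much fuller control set).
X = np.column_stack([np.ones(n), honors, pretest, inst_dummies])
beta, *_ = np.linalg.lstsq(X, posttest, rcond=None)
b_honors = beta[1]   # net point advantage for honors participants

# Effect size as in the study: metric coefficient / SD of the outcome.
effect_size = b_honors / posttest.std(ddof=1)
```

Because the pretest is in the model, `b_honors` can be read as the honors participants' net first-year gain advantage, which is the equivalence the authors invoke via Pascarella et al. (2004).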
Table 1. Descriptive Statistics for Variables in Analysis, Weighted

Variable | Nonhonors M | SD | Min | Max | Honors M | SD | Min | Max

Background Characteristics and Controls
Precollege academic ability (pretest CAAP composite) | 182.78 | 12.55 | 150.00 | 216.00 | 194.54 | 11.53 | 152.00 | 217.00
Precollege reading comprehension | 61.84 | 5.43 | 47.00 | 73.00 | 65.65 | 5.10 | 49.00 | 73.00
Precollege mathematics | 58.60 | 4.36 | 47.00 | 73.00 | 62.62 | 4.49 | 49.00 | 73.00
Precollege critical thinking | 62.34 | 5.24 | 47.00 | 73.00 | 66.28 | 4.28 | 49.00 | 73.00
Precollege educational aspirations | 0.90 | 0.29 | 0.00 | 1.00 | 0.95 | 0.22 | 0.00 | 1.00
Precollege academic motivation | 3.37 | 0.52 | 1.38 | 5.00 | 3.38 | 0.54 | 2.00 | 4.63
College attended was first choice | 0.68 | 0.47 | 0.00 | 1.00 | 0.62 | 0.49 | 0.00 | 1.00
Age of respondent | 20.43 | 5.26 | 17.00 | 88.00 | 19.50 | 3.22 | 17.00 | 86.00
Proportion of White students | 0.60 | 0.49 | 0.00 | 1.00 | 0.67 | 0.47 | 0.00 | 1.00
Proportion of female students | 0.55 | 0.50 | 0.00 | 1.00 | 0.56 | 0.50 | 0.00 | 1.00
Self-reported high school GPA | 4.47 | 1.19 | 1.00 | 6.00 | 5.28 | 0.88 | 2.00 | 6.00
Parents’ total education | 9.85 | 3.92 | 2.00 | 18.00 | 10.62 | 3.54 | 2.00 | 18.00
Respondent’s family’s total income | 8.16 | 3.41 | 1.00 | 14.00 | 8.89 | 3.07 | 1.00 | 14.00
Scaled measure of high school involvement | 7.47 | 1.71 | 3.00 | 12.00 | 7.93 | 1.83 | 3.00 | 12.00
Proportion of students who lived on campus | 0.47 | 0.50 | 0.00 | 1.00 | 0.68 | 0.47 | 0.00 | 1.00
Hours completed during academic year | 4.94 | 1.42 | 1.00 | 6.00 | 5.35 | 1.12 | 1.00 | 6.00
Hours per week employed on campus | 1.78 | 1.46 | 1.00 | 9.00 | 1.64 | 1.16 | 1.00 | 9.00
Hours per week employed off-campus | 3.14 | 2.74 | 1.00 | 9.00 | 2.15 | 2.06 | 1.00 | 9.00
Courses taken in the natural sciences | 1.34 | 1.44 | 0.00 | 26.00 | 1.94 | 1.85 | 0.00 | 28.00
Courses taken in mathematics | 1.44 | 1.21 | 0.00 | 27.00 | 1.65 | 1.39 | 0.00 | 25.00
Courses taken in the social sciences | 2.62 | 1.96 | 0.00 | 41.00 | 2.56 | 1.89 | 0.00 | 44.00
Courses taken in technical/applied areas | 0.63 | 1.27 | 0.00 | 36.00 | 0.48 | 0.92 | 0.00 | 17.00
Courses taken in the arts & humanities | 2.78 | 2.45 | 0.00 | 63.00 | 2.97 | 2.42 | 0.00 | 49.00
Attend same college | 3.16 | 0.85 | 1.00 | 4.00 | 3.17 | 0.84 | 1.00 | 4.00

End-of-Year-1 Good Practices
Quality of nonclassroom interactions with faculty scale | 16.31 | 3.78 | 5.00 | 25.00 | 16.50 | 3.89 | 5.00 | 25.00
Faculty concern with teaching/student development scale | 17.18 | 3.16 | 5.00 | 25.00 | 18.10 | 3.48 | 5.00 | 25.00
Instructional emphasis on cooperative learning scale | 9.45 | 2.65 | 4.00 | 16.00 | 9.44 | 2.67 | 4.00 | 16.00
Course-related interaction with peers scale | 25.42 | 3.75 | 11.00 | 41.00 | 26.95 | 4.46 | 14.00 | 41.00
Academic effort/involvement scale | 79.53 | 14.67 | 40.00 | 143.00 | 81.70 | 15.23 | 49.00 | 131.00
Number of essay exams in courses | 2.84 | 1.02 | 1.00 | 5.00 | 2.73 | 1.07 | 1.00 | 5.00
Instructor use of high-order questioning techniques scale | 9.86 | 2.33 | 4.00 | 16.00 | 10.01 | 2.39 | 5.00 | 16.00
Emphasis on high-order examination questions | 12.37 | 2.67 | 5.00 | 20.00 | 12.34 | 2.68 | 6.00 | 20.00
Computer use | 7.93 | 2.35 | 3.00 | 12.00 | 8.32 | 2.26 | 3.00 | 12.00
Instructor feedback to students scale | 4.37 | 1.42 | 2.00 | 8.00 | 4.53 | 1.45 | 2.00 | 8.00
Course challenge/effort scale | 16.06 | 2.77 | 6.00 | 24.00 | 16.11 | 2.67 | 9.00 | 24.00
Institutional scholarly/intellectual emphasis scale | 15.51 | 2.90 | 3.00 | 21.00 | 15.59 | 3.04 | 4.00 | 21.00
Number of textbooks/assigned books read | 3.12 | 0.87 | 1.00 | 5.00 | 3.37 | 0.89 | 1.00 | 5.00
Number of term papers/written reports | 2.88 | 0.99 | 1.00 | 5.00 | 3.05 | 1.01 | 1.00 | 5.00
Instructional skill/clarity | 14.32 | 2.80 | 5.00 | 20.00 | 14.82 | 2.85 | 5.00 | 20.00
Instructional organization and preparation | 16.03 | 2.58 | 5.00 | 20.00 | 16.52 | 2.62 | 8.00 | 20.00
Quality of interactions with students scale | 26.03 | 5.17 | 7.00 | 35.00 | 26.81 | 5.21 | 9.00 | 35.00
Noncourse-related peer interactions scale | 24.02 | 5.42 | 10.00 | 40.00 | 25.26 | 4.42 | 13.00 | 40.00
Cultural and interpersonal involvement scale | 84.06 | 16.72 | 38.00 | 149.00 | 85.04 | 15.28 | 46.00 | 142.00

End-of-Year-1 Cognitive Measures
Posttest reading comprehension | 62.71 | 5.42 | 47.00 | 73.00 | 66.43 | 4.92 | 47.00 | 73.00
Posttest mathematics | 58.76 | 4.20 | 48.00 | 72.00 | 62.63 | 4.38 | 49.00 | 72.00
Posttest critical thinking | 62.62 | 5.66 | 47.00 | 73.00 | 66.60 | 4.68 | 47.00 | 73.00
Posttest composite cognitive measure | 184.09 | 12.97 | 150.00 | 217.00 | 195.68 | 11.76 | 153.00 | 215.00
Because we controlled for a parallel pretest measure of each cognitive outcome, a significant positive effect of honors program participation on end-of-first-year cognitive outcomes would be equivalent to honors participants demonstrating larger cognitive gains during the first year of college than their nonhonors counterparts (Pascarella et al., 2004).

The final stage in the analyses sought to determine the presence of conditional effects of honors program participation on first-year cognitive outcomes. We created cross-products of honors participation with student gender, race, parental income, and college choice. This set of cross-product terms was then individually added to the total effects equation described above. A significant R2 increase associated with the set of cross-products indicated the presence of conditional effects. In the instances where a significant R2 increase was found, we then individually examined the nature of significant conditional effects.

We displayed two types of regression coefficients in the tables. The unstandardized regression coefficient (i.e., metric regression coefficient) can be thought of as the difference in points on the dependent measure for honors students, controlling for all other variables in the regression equation. In an effort to provide a coefficient that can be compared across dependent measures, we also included the effect size. We calculated the effect size by dividing the unstandardized regression coefficient by the standard deviation of the dependent variable. The result indicates the amount of a standard deviation an honors program participant is advantaged (if the effect size is positive) or disadvantaged (if the effect size is negative) compared to their nonhonors peers (Hays, 1994).
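The conditional-effects procedure described above, adding a block of cross-product terms and testing the associated R2 increase, can be sketched as follows. The data are simulated, and a single honors-by-gender cross-product stands in for the study's full set of interaction terms.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 600

# Invented data: honors and female dummies, and an outcome in which the
# honors "effect" is larger for men (i.e., a conditional effect exists).
honors = (rng.random(n) < 0.13).astype(float)
female = (rng.random(n) < 0.55).astype(float)
y = 60 + 3.0 * honors - 2.0 * honors * female + rng.normal(0, 4, n)

def r_squared(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

base = np.column_stack([np.ones(n), honors, female])   # "total effects" model
full = np.column_stack([base, honors * female])        # + cross-product block

r2_base = r_squared(base, y)
r2_full = r_squared(full, y)

# F statistic for the R^2 increase from the block of q added cross-products
q = 1
df = n - full.shape[1]
F = ((r2_full - r2_base) / q) / ((1 - r2_full) / df)
```

A significant F for the block signals a conditional effect; one would then inspect the individual cross-product coefficients, as the authors do, to see for whom (here, men versus women) the honors effect is larger.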
Results

With statistical controls in place for an extensive set of student background characteristics, high school involvement, full- or part-time college enrollment, place of residence during college, type of first-year coursework, work responsibilities, and the specific institution attended, we found with respect to our first hypothesis that honors program students reported significantly greater exposure to 6 of the 20 established good practices during the first year of college than did their nonhonors counterparts. These effects are detailed in Table 2. The six good practice measures focused largely on the quality of the in-class academic experience and included scales for: (a) the extent of course-related interaction with peers, (b) academic effort/involvement, (c) number of textbooks/assigned readings, (d) instructor use of higher-order questioning techniques, (e) instructor feedback to students, and (f) instructor skill and clarity. In short, we found evidence supporting the notion that honors programs do, in fact, provide a more intensive and challenging first-year academic experience that is also characterized by more effective instruction. Interestingly, although honors program students reported expending greater academic effort and reading in larger quantities, we found they reported fewer essay exams than their nonhonors peers. This finding highlights the complexity of the effects of honors program participation on experiences of good practices in undergraduate education.

With respect to our second hypothesis, participation in honors programs also appeared to enhance cognitive growth during the first year of college. As detailed in Table 3, net of an extensive set of student precollege characteristics (including a pretest measure of the outcome), full- or part-time enrollment, place of residence during college, type of coursework, work responsibilities, and the specific institution attended, honors program participation had significant, positive total effects on the measure of composite cognitive development.
Seifert, Pascarella, Colangelo, & Assouline
Table 2.
Statistically Significant Estimated Effects of Honors Program Participation on Good Practices in Undergraduate Education (a) (N = 2008)

Honors participation vs. no honors participation

Good Practice Variable                                     Unstandardized            Effect      R² Total
                                                           Regression Coef. (b)      Size (c)    Model

Cooperation Among Students
  Course-related interaction with peers                    2.459*                    .166        .230

Active Learning / Time on Task
  Academic effort/involvement                              0.939**                   .241        .183
  Number of essay exams in courses                         –0.163*                   –.159       .186
  Instructor use of higher-order questioning techniques    0.354*                    .151        .129

Prompt Feedback
  Instructor feedback to students                          0.313**                   .219        .120

High Expectations
  Number of textbooks/assigned readings                    0.119*                    .135        .192

Quality of Teaching
  Instructional skill and clarity                          0.567**                   .202        .126
(a) Equations also include controls for: gender; age; parents' education and income; a dummy variable for race (White/students of color); precollege educational aspirations; tested precollege academic ability (composite of CAAP reading comprehension, mathematics, and critical thinking test scores); whether the college attended was one's first choice; self-reported high school GPA; high school involvement (a composite measure of time spent during high school on eight separate activities: studying, socializing with friends, talking with teachers outside of class, working for pay, exercising or playing sports, studying with friends, volunteer work, and extracurricular activities); precollege academic motivation; on-campus residence; cumulative number of college credit hours completed; hours working on campus for pay; hours working off campus for pay; course-taking patterns in natural sciences, math, social sciences, technical courses, and arts & humanities; and a set of dummy variables for institution attended.

(b) The metric regression coefficient represents the average difference between students participating in honors programs and their nonparticipating peers on each good practice variable, statistically adjusted for the controls listed in footnote (a) above.

(c) The effect size is computed by dividing the metric regression coefficient by the pooled standard deviation of the good practice variable. It indicates the fraction of a standard deviation by which students participating in honors programs are advantaged or disadvantaged (depending on the sign) on the good practice variable relative to their nonparticipating peers.
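The effect size computation described in the footnote is a single division; the sketch below uses made-up numbers, not values from the table:

```python
# Hypothetical illustration of the effect size formula (values are not from the study)
metric_coefficient = 0.50   # adjusted honors-vs-nonhonors difference on the scale
pooled_sd = 2.00            # pooled standard deviation of the good practice variable

effect_size = metric_coefficient / pooled_sd
print(effect_size)  # 0.25 -> honors students about a quarter SD higher
```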
*p