ARTICLE IN PRESS

Economics of Education Review 27 (2008) 632–645 www.elsevier.com/locate/econedurev

Community college student success: What institutional characteristics make a difference?

Juan Carlos Calcagno (a,*), Thomas Bailey (b), Davis Jenkins (b), Gregory Kienzl (c), Timothy Leinbach (b)

(a) Mathematica Policy Research, P.O. Box 2393, Princeton, NJ 08543-2393, USA
(b) Community College Research Center, Teachers College, Columbia University, 525 West 120th Street, Box 174, New York, NY 10027, USA
(c) College of Education, University of Illinois at Urbana-Champaign, 1310 South Sixth Street, Champaign, IL 61820, USA

Received 19 September 2005; accepted 18 July 2007

Abstract

Most of the models developed to examine student persistence and attainment in postsecondary education largely fail to account for the influence of institutional factors, particularly when attendance is observed at multiple institutions. Multi-institutional attendance is common for students who begin at a community college, but until now an empirical framework for estimating the contribution of more than one institution's characteristics to students' educational outcomes has been largely absent from the literature. One goal of this study is to determine which institutional characteristics are correlated with positive community college outcomes for students who attend one or more colleges, as measured by the individual student's probability of completing a certificate or degree or transferring to a baccalaureate institution. Using individual-level data from the National Education Longitudinal Study of 1988 (NELS:88) and institutional-level data from the Integrated Postsecondary Education Data System (IPEDS), we find consistent results across different specifications; namely, a negative relationship between relatively large institutional size and high proportions of part-time faculty and minority students, on the one hand, and the attainment of community college students, on the other.
© 2007 Elsevier Ltd. All rights reserved.

JEL classification: I21; I28

Keywords: Input output analysis; Resource allocation; Expenditures

1. Introduction

Community colleges are a crucial point of access to higher education for many individuals seeking additional education beyond high school. Previous research even suggests that if community colleges were not available, many of these individuals would not have enrolled in postsecondary education (Rouse, 1995). The community college access mission is built on low tuition, convenient location, flexible scheduling, an open-door admissions policy, and programs and services designed to support at-risk students facing a variety of social and academic barriers to postsecondary success (Cohen & Brawer, 1996). Yet while community colleges continue to


play a crucial role in providing further educational opportunities to a wide variety of students, access alone is not sufficient. Many community college students who seek a postsecondary credential never finish a degree program. For example, of students whose first postsecondary enrollment was at a community college in the 1995–1996 academic year, only 36% had completed a certificate, associate's degree, or bachelor's degree within 6 years. Another 22% were still enrolled in college (about three-fifths of those at a 4-year institution), leaving the remaining 42% of first-time community college students with varying amounts of postsecondary education but no formal degree or certificate. Low-income, minority, and first-generation college students all have even lower 6-year completion rates, and those in these populations who do complete tend to earn certificates rather than associate's or bachelor's degrees.¹ As policymakers, educators, accreditors, and scholars increasingly turn their attention to student persistence and completion, the most salient question now is: How can community colleges improve the completion rates of their students?

One strategy would be to establish more selective admissions criteria. Extensive research has shown that students who have stronger high school records, who come from higher-income families, whose parents also went to college, who do not delay college entry after high school, who attend full time, and who do not interrupt their college studies are more likely to graduate (Adelman, 1999, 2005, 2006; Bailey & Alfonso, 2005; Cabrera, Burkum, & La Nasa, 2005). Adopting this strategy, however, would defeat the primary purpose of these open-door institutions. In many states, students can attend community college even if they do not have a high school diploma or equivalent, and in many colleges a majority of students, after being assessed, are determined not to be prepared for college-level coursework.
The primary challenge facing community colleges, then, is not how to attract better students (although surely many would like to do that), but rather how to do a better job with the types of students they already have. Indeed, there is some evidence that colleges differ in their effectiveness in helping students to graduate, since community college graduation rates vary significantly even after controlling for characteristics of the student body (Bailey, Calcagno, Jenkins, Leinbach, & Kienzl, 2006).

¹ Authors' calculations from the Beginning Postsecondary Student Longitudinal Study of 1995–1996.

The goal of the analysis presented here is to identify specific institutional characteristics of community colleges that are correlated with successful educational outcomes, such as degree attainment, transfer to a 4-year institution, or credit accumulation. To this end, we examine several institutional characteristics that are controlled by the colleges or state policymakers: college size; tuition levels; the use of part-time faculty; overall expenditures per student; the distribution of those expenditures among functions such as instruction, administration, and student services; the extent to which the college focuses on certificates as opposed to associate's degrees; and the level of financial aid.

The structure of this study is as follows. Section 2 reviews the existing literature on the factors associated with student success at both baccalaureate institutions and community colleges. In Section 3, we introduce the empirical model used to estimate institutional influences on the completion rates of community college students. The findings are presented in Section 4, and the results of several robustness tests are given in Section 5. The study concludes with a discussion of implications, limitations, and areas for future research.

2. Existing research

Two of the most commonly used conceptual frameworks of persistence and completion are based on Tinto's Student Integration Model (1993) and Bean's Student Attrition Model (1985). The central institutional implication of these models is that administrators and faculty should try to foster the academic and social engagement of their students in and with the colleges.
Drawing on the various studies that have emerged from these models, Titus (2004, 2006) developed a list of institutional characteristics of 4-year colleges that appear to influence student persistence, including control (public or private), whether the college is residential, college size, sources of revenue, and patterns of expenditure.² To test the influence of these characteristics, he merged two nationally representative datasets (BPS:96/98 and IPEDS 1995) and linked each student's record with the institutional information from the 4-year college where the student enrolled. He concluded that persistence is higher at more selective, residential, and larger institutions (Titus, 2004) and, in a subsequent paper, found that higher expenditure per full-time-equivalent student is associated with greater persistence, although, within expenditures, colleges with relatively higher administrative expenditures tended to have lower persistence. Graduation rates were also higher at colleges in which a larger share of revenue came from tuition (Titus, 2006).

² According to Titus (2004, 2006), the other variables that are significantly correlated with student persistence are individual-level characteristics, such as degree goals, academic performance in high school, and the behavior of students' peers.

With these results in mind, this analysis contributes to the college persistence literature in three ways. First, we use a production function method with both college-level and individual-level variables to analyze the institutional correlates of persistence and completion (including transfer to a 4-year college) in community colleges. Education economists have used production functions for more than 30 years.³ The method allows researchers to understand the influence of individual and institutional characteristics on educational outcomes, but production functions have not been widely used to examine higher education outcomes like persistence, completion, or credit accumulation.⁴ Of the few studies that have explicitly analyzed the influence of institutional characteristics on individual-level higher education outcomes, all but one studied 4-year colleges.

³ As summarized by Hanushek (1986), this strand of research typically focuses on two fundamental questions: "Do schools make a difference?" and "Does money matter?"
⁴ When this approach is used, the institution is typically treated as the unit of analysis, and the influence of institutional characteristics (including average student characteristics) on the college's graduation rate is estimated (Astin, Tsui, & Avalos, 1996; Ryan, 2004; Scott, Bailey, & Kienzl, 2006). Only one study has conducted this type of analysis for community colleges (Bailey et al., 2006); it concluded that institutions with a larger enrollment and a high share of minority students, part-time students, and women have lower graduation rates. Its results also confirm that greater instructional expenditures are related to greater graduation rates.

Second, many students now attend more than one postsecondary institution (Adelman, 1999, 2005, 2006). Given the growth of multi-institutional attendance, particularly among students who enter higher education through community colleges, it is important to examine the entire undergraduate experience of students. This study incorporates
institutional characteristics of every institution attended. Third, unobserved institutional factors like leadership, faculty relations, and the local political environment may have a bearing on students' outcomes. In order to estimate consistent coefficients for the observable variables, the unaccounted-for variation in institutional-level factors is partitioned out using a random effects model, which is discussed in more detail in the next section.

3. Empirical model and data

3.1. Econometric models

Following the discussion in the last section, this study used one binary outcome and one continuous outcome to measure student success.⁵ Taken in order, attainment of any degree (certificate, associate's, or bachelor's) or transfer to a 4-year institution is considered a successful outcome for our sample of community college students. This outcome takes the value of unity if any of the aforementioned outcomes is observed, and zero otherwise. The probability that a community college student will succeed is modeled as

$y^{*}_{ic} = X'_{ic}\beta + v_{ic}$, with $i = 1, 2, \ldots, N$ and $c = 1, 2, \ldots, C$,  (1)

and $y_{ic} = 1$ if $y^{*}_{ic} > 0$, otherwise $y_{ic} = 0$, where i denotes each student and c is the cluster (in this case, the community college); $y^{*}$ is the unobservable individual propensity to graduate; y is the observed outcome; X is a vector of exogenous student and institutional characteristics that affect the outcome; and $v_{ic}$ is the unobserved component. Under the usual assumptions for the error component (mean zero, variance $\sigma^2_v$ normalized to one), we could pool the data and use a standard probit model (henceforth, Model 1). Maximum likelihood estimation guarantees asymptotically unbiased estimates, but the standard errors will be misleading. Therefore, a robust variance–covariance matrix is needed to account for serial correlation within institutions (Guilkey & Murphy, 1993).

⁵ In the discussion that follows, we use the terms graduate, complete, and success interchangeably for readability—in this context they all refer to earning a certificate or degree or transferring to a baccalaureate institution.


Model 1 assumes that heterogeneity in students' probability of graduating is affected only by observable characteristics of the institution. However, unobserved institutional factors like students' academic preparedness, faculty and administration leadership and relations, and the local political environment may have a bearing on students' outcomes. Thus, Model 2 is designed to account for the institution-level unobserved factors that may affect an individual's propensity to graduate. We decompose the error term in Eq. (1) as follows:

$v_{ic} = \alpha_c + u_{ic}$  (2)

where $\alpha_c$ is the unobserved institution-specific effect and $u_{ic}$ is the usual idiosyncratic error term. Both $\alpha_c$ and $u_{ic}$ are assumed to be independent and identically distributed random variables with mean zero and variances $\sigma^2_\alpha$ and 1, respectively. These assumptions imply that $\mathrm{Var}(v_{ic}) = \sigma^2_\alpha + 1$ and $\rho = \mathrm{corr}(v_{ic}, v_{is}) = \sigma^2_\alpha/(\sigma^2_\alpha + 1)$, where $\rho$ is interpretable as the proportion of the total error variance contributed by the unobserved heterogeneity. Further, if the error terms are independent of the vector of covariates X and we assume a standard normal distribution for $u_{ic}$, we obtain a random effects probit model for the outcome.⁶ Maximum likelihood estimation with respect to $\beta/\sigma_u$ and $\rho$ provides asymptotically unbiased estimates.

For Models 1 and 2, only institutional data from the first year of students' enrollment in a community college are used. In so doing, the characteristics of any subsequent institutions in which students later enrolled are ignored. According to the NELS:88 data, however, over 40% of community college students enroll in more than one institution during their time in postsecondary education. We contend that these institutions play a contributing role in students' outcomes, so an attendance weight was created and applied to each institutional characteristic (Model 3). In each case, the attendance weight is proportional to the full-time-equivalent (FTE) months enrolled at each institution relative to the FTE months enrolled at all institutions, prior to the student outcome event (certificate, degree, transfer, or last enrollment if no outcome was observed).⁷

⁶ Wooldridge (2002) provides an excellent review of the model and its properties.
⁷ Model 3 allows students to change institutions; hence, we expect the effect of the unobserved heterogeneity to be small. Nonetheless, results should be compared with those of Model 1.

In addition to the binary outcome measure, we conducted a parallel analysis using cumulative
number of credits earned as the dependent variable (Model 4). The rationale for doing so is that a dummy variable as a measure of community college student success can hide important individual and institutional information. This alternative measure of success has a methodological advantage in that common linear regression can be used. However, the distribution of this dependent variable is non-linear, since community college students have a high propensity to drop out after earning fewer than 10 credits.⁸ To normalize the distribution, the variable was transformed into logs.

3.2. Dataset and variables

We began with the National Education Longitudinal Study of 1988 (NELS:88) for student and enrollment information. NELS:88 follows a nationally representative sample of eighth graders in the spring of 1988.⁹ The dataset contains rich demographic measures, standardized high school test scores, and socioeconomic status (SES) measures of the respondents that allow us to control for the college sorting process.¹⁰ The NELS:88 database also includes college transcripts of most individuals who enrolled in postsecondary education by 2000. With transcript data, the true enrollment patterns of college students, including the number and type of institutions attended, the intensity of attendance at each institution, and the type and date of each credential earned, can be observed.

⁸ Excellent examples of these distributions and a detailed analysis can be found in Kane and Rouse (1995).
⁹ The NELS:88 sample contains mostly students who entered college soon after high school graduation, following the traditional pattern of postsecondary enrollment. Therefore, the sample is not a representative cross-section of all community college students, but by design (of the survey) includes only cohorts of younger postsecondary students.
¹⁰ The NELS:88 sample design involved stratification and clustered probability sampling. We used the survey design correction included with Stata statistical software for estimating the models. However, we were not able to account for stratification of the survey in our Model 2. Although we obtained the proper design-based point estimates, our standard errors might be misleading.

Institutional variables were drawn from the Integrated Postsecondary Education Data System (IPEDS), which contains information on aggregate student characteristics, faculty, enrollment, and finances. From these data, a file of institutional characteristics for each college over the span of

NELS:88 was created and then merged with the student characteristics file, using the institution identifier and school year of enrollment to assign the appropriate institutional characteristics.¹¹ The set of institutional characteristics can be divided into four groups: general institutional characteristics, which are under the control of the colleges or state policymakers; compositional characteristics of the student body; financial variables relating to revenue and expenditures; and fixed location characteristics.

¹¹ NELS:88 reports students' colleges by IPEDS ID number, so we were able to associate the characteristics of the college, reported in IPEDS, with the individuals in the NELS:88 sample.

3.2.1. General institutional characteristics
These factors, which are at least in principle under the control of the college or state policymakers, are institution size, the proportion of the faculty working part time, and the balance between certificates and associate's degrees awarded. Of these variables, institution size has been the most studied, but the direction of its relationship with student educational outcomes remains inconclusive. As mentioned earlier, Titus (2004) found that larger 4-year institutions have significant positive effects on persistence, which he explained by the belief that larger institutions have stronger institutional socialization capabilities. On the other hand, it would seem easier to create a socially and academically engaged environment in a small institution, so a negative relationship between size and persistence would also be consistent with the engagement model (Bailey et al., 2006; Pascarella & Terenzini, 2005; Toutkoushian & Smart, 2001). We used a step function based on intervals of full-time-equivalent enrollment to capture non-linear effects of institutional size.

The use of part-time faculty is a key cost-saving strategy for community colleges, and indeed for most 4-year colleges. Heavy use of adjuncts is generally considered a poor educational practice, and accreditors set minimum percentages for full-time faculty. However, the empirical evidence on the effectiveness of adjunct faculty is mixed: Jacoby (2006) found a negative effect, while Ehrenberg and Zhang (2005) found no effect.¹² We used the percentage of the faculty accounted for by part-timers as our variable for this feature. While a high proportion of part-time professors arguably makes it difficult for colleges to develop the type of environment envisioned by the engagement model, part-time practitioners may be particularly effective in occupational fields, which are more important for community colleges than for 4-year institutions.

¹² Ehrenberg and Zhang (2005) found no evidence that increasing the percentage of part-time faculty members at 2-year colleges adversely influences institutional graduation rates. However, the authors used College Board data and included only those community colleges that report average SAT scores.

Lastly, the mix of degrees (certificates versus associate's degrees) conferred by community colleges is a rough indication of the college's mission. Colleges that confer more certificates than associate's degrees are viewed as emphasizing short-term workforce development rather than more academic, transfer-oriented programs, which implies a more occupationally oriented mission. Some researchers have argued that such a mission can lead to lower graduation and transfer rates (Brint & Karabel, 1989; Dougherty, 1994). We tested this hypothesis by including the ratio of certificates to associate's degrees in an analysis limited to students in associate's degree programs.

3.2.2. Student compositional characteristics
Institutional compositional variables are captured through indirect or peer effects. For example, full-time students would be less likely to persist if they attend a college dominated by part-timers. Other compositional characteristics of the institution include the percentages of part-time, female, and minority (African American, Hispanic, and Native American) students. The expected relationships between these characteristics and student outcomes are as follows. Research on peer effects suggests that college students benefit when they take classes with or study with high-performing students (Winston & Zimmerman, 2004), but most of this work has focused on selective 4-year colleges.
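The general institutional measures described in Section 3.2.1 can be derived mechanically from an IPEDS-style institution file. The sketch below uses invented data and illustrative column names (they are not actual IPEDS field names): a step function of FTE enrollment built from interval dummies, the part-time faculty share, and a certificate-versus-associate's mission flag.

```python
import pandas as pd

# Hypothetical IPEDS-style institution file (field names are illustrative).
colleges = pd.DataFrame({
    "unitid": [1, 2, 3, 4],
    "fte_undergrad": [800, 2100, 4300, 9500],
    "faculty_part_time": [40, 90, 150, 420],
    "faculty_full_time": [60, 80, 110, 280],
    "certificates": [120, 60, 45, 200],
    "associate_degrees": [80, 150, 220, 610],
})

# Step function for size: dummies over FTE-enrollment intervals, so the
# estimated effect of size need not be linear.
size_bins = [0, 1000, 2500, 5000, float("inf")]
size_labels = ["<=1000", "1001-2500", "2501-5000", ">5000"]
colleges["size_band"] = pd.cut(colleges["fte_undergrad"],
                               bins=size_bins, labels=size_labels)
size_dummies = pd.get_dummies(colleges["size_band"], prefix="fte")

# Proportion of the faculty working part time.
total_faculty = colleges["faculty_part_time"] + colleges["faculty_full_time"]
colleges["prop_pt_faculty"] = colleges["faculty_part_time"] / total_faculty

# Mission proxy: awards more certificates than associate's degrees.
colleges["cert_heavy"] = (
    colleges["certificates"] > colleges["associate_degrees"]
).astype(int)

print(colleges[["unitid", "size_band", "prop_pt_faculty", "cert_heavy"]])
```

The interval dummies (rather than raw enrollment) are what allow the regressions to pick up the non-monotone size effects discussed above.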
Assuming that this conclusion holds for community colleges, we would expect colleges with high proportions of women, higher-income students, and full-time students to have higher graduation rates, even after controlling for individual characteristics, since members of all of these groups tend to be more successful students (Toutkoushian & Smart, 2001). Research on 4-year colleges that does not control for individual characteristics tends to confirm these relationships, although Titus (2004), who did control for individual characteristics, found no effect. The engagement model would also predict that a prevalence of part-time students would weaken persistence, since a large share of part-time students makes it more difficult to develop the socially and


academically engaged environment called for by this perspective. On campuses with a highly heterogeneous population, it might also be more difficult to establish an environment conducive to engagement. Titus (2004) tested a measure of racial diversity and found no effect for 4-year colleges.

3.2.3. Financial characteristics
These characteristics include federal student aid per FTE; average undergraduate in-state tuition; and average expenditures per FTE on instruction, academic support,¹³ student services,¹⁴ and administration.¹⁵ The federal aid measure, which is comprised primarily of Pell Grants awarded to low- and middle-income students, also acts as a proxy for the relative income level of the student body.¹⁶ Based on the findings of previous research, we expected expenditures on instruction and academic support to have a positive effect on the probability of success of community college students. Titus (2004) and Ryan (2004) found negative effects for administrative expenditures at 4-year institutions and argued that, although these expenses are necessary for day-to-day work, they might divert funds from more effective expenditures like instruction. Finally, we expected sizable effects if institutions spending large amounts on student services succeeded in compensating for the deficiencies their students face (Astin, 1993). However, it is possible that colleges may spend more on student services and still not be able to help their students overcome the multiple barriers to success that they face (Ryan, 2004).

¹³ Academic support includes expenses for activities and services that support the institution's primary missions of instruction, research, and public service, such as the display of educational materials in libraries, museums, or galleries.
¹⁴ Student services include expenses for admissions, registrar activities, and activities whose primary purpose is to contribute to students' emotional and physical well-being and to their intellectual, cultural, and social development outside the context of the formal instructional program.
¹⁵ Administration includes expenses for the day-to-day operational support of the institution.
¹⁶ Ehrenberg and Zhang (2005) also used the percentage of college students receiving a Pell Grant at the institution.

3.2.4. Fixed location characteristics
Our fourth category consists of just one variable: the college's location in an urban, suburban, or rural area. There is no strong argument for expecting any particular effect here. Perhaps suburban colleges might be expected to have more


resources, especially in states where colleges collect revenue from local taxes, but this possibility ought to be accounted for by our expenditure variables. We included this variable to control for any factors that might be captured by a college's location.¹⁷

¹⁷ We also estimated our models with state dummies to capture any unobserved factors shared by institutions in the same state. However, the results did not change after controlling for state fixed effects.

3.2.5. Individual variables
The primary focus of this analysis is the effect of institutional factors, but selected individual characteristics were added to control for differences in the college sorting and completion processes. Individual-level characteristics were chosen based on studies showing that students who have stronger high school records, who come from higher-income families, and whose parents also went to college are more likely to graduate (Adelman, 1999, 2005, 2006; Bailey & Alfonso, 2005; Cabrera et al., 2005). To start, basic controls for gender and race/ethnicity were included in the models. Also, to control for the college sorting process (better students attending higher-quality community colleges), measures of SES and cognitive ability were added. To measure SES, we used a composite variable in NELS:88 that incorporates parental education, parental occupation, and total household income. Academic readiness was approximated using 10th-grade composite test scores. Lastly, findings from previous research indicate that students who delay college entry after high school have lower levels of academic integration, which in turn decreases the likelihood of persisting and attaining a degree (Pascarella & Terenzini, 2005; Tinto, 1993). To test for this, dummy variables indicating delayed college attendance were added to the models.

3.2.6. Sample
Our initial NELS:88 sample contained 2438 students whose initial postsecondary education was in one of 686 community colleges. However, regressions with full information included only 2196 students in 536 community colleges. Missing values corresponded mainly to high school tracking variables like test scores (173 observations), but also resulted from missing values in institutional variables merged from IPEDS (69 observations).¹⁸

¹⁸ Compared with the final sample presented in Table 2, students with missing data are, on average, more likely to be

We estimated our four models on two different samples. The first comprised all students whose initial postsecondary education was at a community college. The second was a subset containing community college students enrolled initially in an associate's degree program. In the latter case, we excluded a certificate as a successful outcome, since students in an associate's degree program generally do not have earning a certificate as their goal. While the purpose of this research is to better understand the effect of institutional characteristics on community college student outcomes, we recognize that community college students are quite heterogeneous, especially in terms of their educational goals.¹⁹ Conducting a separate analysis for associate's degree program students is one way to address this heterogeneity.

4. Findings

Descriptive statistics for each sample group are provided in Table 1. A majority of community college students in the sample were enrolled in urban institutions, and 43% were enrolled in large (more than 5000 full-time-equivalent, or FTE, undergraduates) institutions. The student body at the average public 2-year institution was composed of 21% minority students (black, Hispanic, and Native American), 56% female students, and 37% part-time students. Also, the average community college student enrolled at an institution where students received, on average, $1073 in Pell Grants and were charged $1327 in tuition. Similarly, the average institution spent $2925 on instruction, $472 on academic support, $608 on student services, and $1329 on administrative expenses per FTE student. Note also that the institutional-level variable means are reasonably similar for the associate's degree sample.

In terms of student characteristics, the sample overrepresented racial/ethnic minorities (28%). Students in the sample are also more likely to come from disadvantaged backgrounds as measured by SES and academic

(footnote 18, continued) Hispanic (22%), low SES (24%), and to delay enrollment after high school (45%). They are less likely to be white (59%).
¹⁹ A survey question asking first-time beginning community college students their primary reason for enrolling produced the following response distribution: job skills, 23%; degree or certificate, 21%; transfer, 39%; personal enrichment, 17% (Source: Beginning Postsecondary Students Longitudinal Study 1996–2001, authors' calculations).

Table 1
Descriptive statistics for institutional characteristics (percentage or mean by sample type)

Variable                                          All        Associate's degree

General institutional characteristics
  1001–2500 FTE undergraduates                    25.73%     25.04%
  2501–5000 FTE undergraduates                    24.96%     24.93%
  More than 5000 FTE undergraduates               42.80%     42.79%
  Proportion part-time faculty                    51.55%     52.61%
  Institution awards more certificates
    than associate degrees                        10.59%      9.67%

Student compositional characteristics
  Proportion FTE minority                         21.24%     21.63%
  Proportion FTE female                           56.25%     56.43%
  Proportion FTE part-time                        37.14%     36.42%

Financial characteristics
  Federal aid (Pell Grants) (b)                    1.073      1.085
  In-state tuition (a)                             1.327      1.371
  Instructional expenditures (b)                   2.925      2.840
  Academic support (b)                             0.472      0.466
  Student services (b)                             0.608      0.609
  Administrative expenditures (b)                  1.329      1.328

Fixed locational characteristics
  College is located in urban area                51.35%     47.67%
  College is located in suburban area             45.89%     49.53%
  College is located in rural area                 2.76%      2.80%

Student characteristics
  Female                                          49.66%     50.36%
  White                                           71.71%     68.44%
  Black                                            8.63%     10.61%
  Hispanic                                        15.66%     17.92%
  Asian                                            3.17%      2.40%
  Delayed enrollment                              32.20%     23.93%
  SES: lowest quartile                            17.24%     20.10%
  SES: second quartile                            28.97%     30.29%
  SES: third quartile                             32.55%     27.59%
  SES: highest quartile                           21.25%     22.01%
  Test scores: lowest quartile                    19.20%     18.38%
  Test scores: second quartile                    31.03%     31.78%
  Test scores: third quartile                     32.47%     34.68%
  Test scores: highest quartile                   17.30%     15.16%

Observations                                      2196       1188

Source: Estimates based on NELS:88 and IPEDS.
(a) In $1000s.
(b) In $1000s per FTE undergraduate.

readiness. Similar to the institutional-level characteristics, the subset of students in associate’s degree programs shows comparable statistics. Table 2 shows the distribution of completion rates by each type of degree attainment or transfer status across both samples. Overall, 48% of all

J.C. Calcagno et al. / Economics of Education Review 27 (2008) 632–645

community college students attained some outcome between 1992 and 2000. Interestingly, 5% of community college students in the sample received a certificate as their highest outcome, 11% received an associate's degree, 15% transferred to a 4-year institution, and 18% received a bachelor's or post-baccalaureate degree before 2000. Recall that the NELS:88 sample contains mostly students who entered college soon after high school graduation, following the traditional pattern of postsecondary enrollment. The sample is therefore not a representative cross-section of all community college students, but by design is representative of a cohort of young adults.

Estimates from the regression models applied to the entire sample of community college students and to the subset of associate's degree students are shown in Tables 3 and 4, respectively.20 The second column of Table 3 presents results of Model 1, the pooled probit regression (Eq. (1)). We used a robust variance–covariance matrix to account for serial correlation within clusters. Consistent with previous research (Bailey et al., 2006; Pascarella & Terenzini, 2005), we found that students enrolled in medium- and large-sized community colleges (1001–5000 FTE undergraduates and more than 5000 FTE undergraduates, respectively) were between 15% and 18% less likely to have a successful outcome than the reference students in small institutions (fewer than 1000 FTE undergraduates). Similarly, students enrolled in institutions with large proportions of part-time faculty and minority populations were less likely to attain a degree. A $1,000 increase in academic support per FTE decreased the probability of graduating by 12% among NELS:88 students, although the result was statistically weak.

When unobservable institution-specific effects were accounted for in Model 2, medium and large institutional size and the percentages of part-time faculty and minority students remained negatively associated with the probability of completing or transferring. However, the magnitude of the relationship was somewhat smaller than before. For instance, students enrolled in medium-sized institutions (1000–5000 FTE) were now between 12% and 14% less likely to have a successful outcome than students in small institutions. Similarly, students enrolled in an institution with a student body comprising 75% minority students were 9% less likely to succeed than students enrolled in institutions with only 25% minority students, compared to 13% less likely in Model 1.21 However, the statistical association between academic support and the probability of graduating found in Model 1 vanished.

The third column of Table 3 presents results for Model 3. After accounting for multiple institutional attendance by the students, the pattern of significant institutional covariates remained, but the coefficients were larger than those from the previous two models. Nevertheless, across specifications, institutional size and the proportions of part-time faculty and minority students were negatively associated with our binary measure of educational success.
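Footnote 21's convention (marginal effects for proportion variables represent a change from 0 to 1) means that statements like "75% versus 25% minority" scale the reported effect by 0.5. The sketch below, in Python with illustrative coefficient values that are not the paper's estimates, shows how a probit index translates such a discrete change into a probability difference.

```python
import math

def probit_prob(xb: float) -> float:
    """Standard-normal CDF: P(success | x) = Phi(x'b)."""
    return 0.5 * (1.0 + math.erf(xb / math.sqrt(2.0)))

# Illustrative values only (NOT estimates from the paper):
xb_base = 0.10          # probit index for an otherwise-average student
beta_minority = -0.50   # hypothetical coefficient on proportion FTE minority

# Discrete change in the proportion-minority regressor from 0.25 to 0.75
p_low = probit_prob(xb_base + beta_minority * 0.25)
p_high = probit_prob(xb_base + beta_minority * 0.75)
effect = p_high - p_low  # change in success probability for a 0.50 increase

# The tables' "0 to 1" marginal effect would be the analogous change
# computed over the full unit interval rather than this half-unit move.
```

With these illustrative numbers, the half-unit change lowers the success probability by roughly ten percentage points; the 9–19% differences quoted in the text are read in the same way.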
The fourth column of Table 3 shows the results of the re-estimated Model 2 (random effects model with multiple institutions) on the log number of credits earned. Consistent with previous estimates, institutional size and part-time faculty were negatively associated with the outcome, but the proportion of minority students was no longer statistically associated with cumulative credits earned. In addition, unlike the other three models previously

20 Individual-level characteristics were included as covariates. Results are not shown here, but are available upon request from the authors.

21 For variables originally expressed as a proportion, like part-time faculty and the proportions of minority, female, and part-time students, the marginal effects represent a unit change from 0 to 1.

Table 2
Degree completion of community college students by highest outcome

                                            Percentage by sample type
Highest outcome                             All        Associate's degree
Dropout                                     45.4       44.2
Still enrolled                              6.5        4.6
Certificate                                 4.8        3.6
Associate's                                 10.6       14.7
Transfer                                    15.2       15.5
Bachelor's or post-baccalaureate            17.5       17.5
Positive outcome                            48.0       47.7
Observations                                2196       1188

Source: Estimates based on NELS:88.
Notes: A positive outcome for all students includes any certificate or degree earned or transfer to a 4-year institution with or without a credential. We excluded a certificate as a positive outcome for associate's degree students since these students generally do not have earning a certificate as their goal.

Table 3
Marginal effects of institutional variables on community college student outcomes

                                       Model 1             Model 2             Model 3              Model 4
                                       Pooled probit       Random effect       Pooled probit,       Random effect
                                                           probit              multiple inst.       linear model
Variable                               dy/dx (S.E.)        dy/dx (S.E.)        dy/dx (S.E.)         dy/dx (S.E.)

1001–2500 FTE undergraduates           0.157** (0.061)     0.122** (0.056)     0.176*** (0.064)     0.209* (0.112)
2501–5000 FTE undergraduates           0.176*** (0.063)    0.139** (0.057)     0.200*** (0.064)     0.250** (0.114)
More than 5000 FTE undergraduates      0.153** (0.064)     0.150*** (0.057)    0.180*** (0.063)     0.123 (0.110)
Proportion part-time faculty           0.220** (0.092)     0.155*** (0.058)    0.237** (0.102)      0.307** (0.135)
Certificate degree oriented            0.001 (0.056)       0.054 (0.046)       0.002 (0.057)        0.052 (0.104)
Proportion FTE minority                0.269** (0.123)     0.186** (0.087)     0.293** (0.14)       0.076 (0.194)
Proportion FTE female                  0.351 (0.267)       0.011 (0.257)       0.237 (0.26)         0.605 (0.544)
Proportion FTE part-time               0.237 (0.224)       0.018 (0.121)       0.093 (0.258)        0.606** (0.274)
Federal aid (Pell Grants)a             0.054 (0.064)       0.041 (0.05)        0.112 (0.083)        0.160 (0.120)
In-state tuitionb                      0.026 (0.019)       0.004 (0.017)       0.019 (0.018)        0.029 (0.036)
Instructional expendituresa            0.004 (0.02)        0.000 (0.016)       0.007 (0.02)         0.031 (0.036)
Academic supporta                      0.119* (0.068)      0.013 (0.056)       0.125* (0.068)       0.097 (0.117)
Student servicesa                      0.057 (0.066)       0.030 (0.049)       0.018 (0.052)        0.037 (0.122)
Administrative expendituresa           0.024 (0.034)       0.032 (0.028)       0.025 (0.046)        0.027 (0.061)
College is located in urban area       0.044 (0.041)       0.054 (0.039)       0.057 (0.041)        0.093 (0.066)
College is located in rural area       0.043 (0.066)       0.059 (0.081)       0.016 (0.08)         0.101 (0.208)
Individual-level characteristicsc      Yes                 Yes                 Yes                  Yes
Unweighted observations                2196                2196                2117                 2196
Number of institutions                 536                 536                 –                    536
Log-likelihood                         1310.20             1331.55             1266.44              –
Pseudo-R2                              0.137               0.139               0.139                0.089
Estimated rho                          –                   0.117               –                    0.072

Source: Authors' estimates based on NELS:88 and IPEDS, various years.
Notes: The dependent variable in Models 1–3 is attainment of any degree (certificate, associate's, or bachelor's) or transfer to a 4-year institution; in Model 4 the dependent variable is the logarithm of the cumulative number of credits earned. ***, **, and * indicate statistical significance at the 1%, 5%, and 10% levels.
a In $1000s per FTE undergraduate.
b In $1000s.
c Models include individual-level controls for gender, race, SES, ability, and delayed enrollment.

discussed, there was evidence of a negative relationship between the proportion of part-time students and cumulative credits earned.

We now focus our attention on a more homogeneous population: students initially enrolled in an associate's degree program (Table 4). Examining the effects of the first institution only, the results generally mirror the patterns for the entire sample of community college students. Model 1 indicates that institutional size, part-time faculty, and minority student population were negatively associated with the probability of completion or transfer for associate's degree students. Interestingly, relatively large expenditures on academic support by community colleges were also negatively associated with the probability of completing a degree or transferring to a 4-year college. Perhaps academic support at community colleges is ineffective, or perhaps it reflects a recognizable effort on the part of colleges to address academic deficiencies of their students that have not been captured by the test score variable. Regardless of the cause, the irregular pattern of significance of this finance-related factor across the various models suggests a weak association with students' educational outcomes.

The second column in Table 4 controls for unobservable institution-specific effects (Model 2). Again, the results are similar to those for all community college students. For example, increases in institutional size have a strong negative relationship with student success: students enrolled in medium-sized institutions are between 13% and 19% less likely to graduate than those enrolled in community colleges with 1000 FTE students or fewer.

Table 4
Marginal effects of institutional variables on associate's degree student outcomes

                                       Model 1             Model 2             Model 3              Model 4
                                       Pooled probit       Random effect       Pooled probit,       Random effect
                                                           probit              multiple inst.       linear model
Variable                               dy/dx (S.E.)        dy/dx (S.E.)        dy/dx (S.E.)         dy/dx (S.E.)

1001–2500 FTE undergraduates           0.15* (0.08)        0.126* (0.073)      0.046 (0.045)        0.252* (0.145)
2501–5000 FTE undergraduates           0.232*** (0.081)    0.191*** (0.074)    0.097** (0.049)      0.306** (0.147)
More than 5000 FTE undergraduates      0.197** (0.078)     0.152** (0.074)     0.129** (0.058)      0.232 (0.148)
Proportion part-time faculty           0.180** (0.077)     0.118* (0.074)      0.113* (0.073)       0.229* (0.126)
Certificate degree oriented            0.022 (0.08)        0.011 (0.067)       0.001 (0.043)        0.012 (0.129)
Proportion FTE minority                0.308* (0.158)      0.289** (0.116)     0.387*** (0.124)     0.003 (0.239)
Proportion FTE female                  0.294 (0.371)       0.256 (0.352)       0.245 (0.226)        0.029 (0.661)
Proportion FTE part-time               0.420* (0.223)      0.329* (0.171)      0.227 (0.168)        0.693** (0.341)
Federal aid (Pell Grants)a             0.005 (0.075)       0.002 (0.07)        0.020 (0.057)        0.334** (0.137)
In-state tuitionb                      0.014 (0.027)       0.007 (0.022)       0.006 (0.014)        0.044 (0.044)
Instructional expendituresa            0.003 (0.018)       0.019 (0.024)       0.014 (0.017)        0.032 (0.046)
Academic supporta                      0.178** (0.089)     0.097 (0.074)       0.115** (0.065)      0.036 (0.135)
Student servicesa                      0.042 (0.08)        0.061 (0.069)       0.015 (0.047)        0.223 (0.143)
Administrative expendituresa           0.017 (0.042)       0.036 (0.039)       0.011 (0.033)        0.130* (0.071)
College is located in urban area       0.018 (0.051)       0.009 (0.038)       0.010 (0.03)         0.009 (0.077)
College is located in rural area       0.123 (0.086)       0.181 (0.108)       0.064 (0.065)        0.208 (0.21)
Individual-level characteristicsc      Yes                 Yes                 Yes                  Yes
Unweighted observations                1188                1188                1114                 1188
Number of institutions                 423                 423                 –                    423
Log-likelihood                         682.75              692.03              625.36               –
Pseudo-R2                              0.171               0.173               0.191                0.079
Estimated rho                          –                   0.134               –                    0.045

Source: Authors' estimates based on NELS:88 and IPEDS, various years.
Notes: The dependent variable in Models 1–3 is attainment of any degree (certificate, associate's, or bachelor's) or transfer to a 4-year institution; in Model 4 the dependent variable is the logarithm of the cumulative number of credits earned. ***, **, and * indicate statistical significance at the 1%, 5%, and 10% levels.
a In $1000s per FTE undergraduate.
b In $1000s.
c Models include individual-level controls for gender, race, SES, ability, and delayed enrollment.
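The log-likelihood and pseudo-R² rows at the bottom of Tables 3 and 4 are linked by a simple formula; a standard choice for limited dependent variable models, McFadden's pseudo-R², is 1 − LL(model)/LL(null). A minimal sketch with hypothetical log-likelihood values (illustrative, not taken from the tables):

```python
def mcfadden_pseudo_r2(ll_model: float, ll_null: float) -> float:
    """McFadden's pseudo-R^2: 1 - LL(model) / LL(constant-only model)."""
    return 1.0 - ll_model / ll_null

# Hypothetical log-likelihoods (illustrative only):
ll_null = -1500.0    # constant-only model
ll_block1 = -1320.0  # + individual-level characteristics (Block 1)
ll_block2 = -1290.0  # + institutional-level variables (Block 2)

r2_block1 = mcfadden_pseudo_r2(ll_block1, ll_null)  # 0.12
r2_block2 = mcfadden_pseudo_r2(ll_block2, ll_null)  # 0.14
improvement = r2_block2 - r2_block1  # gain attributable to the institutional block
```

Comparing the fit with and without a block of covariates in this way is exactly the block comparison reported later in Table 5.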

Note also that for students enrolled in associate's degree programs, having more part-time faculty has a negative influence on their educational success, although the result was statistically weak.

The results from Model 3 are shown in the third column of Table 4. The statistical significance of institutional size remains, but the degree of association was less than in the previous two models. Students in associate's degree programs were also negatively affected by increases in the proportion of part-time faculty, but as above the relationship diminished when the characteristics of all the institutions attended are controlled for. On the other hand, an increase in the percentage of minority students at the institution was associated with an even lower probability of completing or transferring. For example, students enrolled in an institution with a student body that is 75% minority are almost 19% less likely to succeed than are students enrolled in an institution with only 25% minority students (see footnote 15).22

22 At the suggestion of a referee, we included a new explanatory variable in all models that measures the number of transitions between community colleges a student made during the 8-year window of observation. The goal is to capture the impact of transferring between community colleges. When added to the regressions for all community college students (as in Table 3), the new variable was not statistically significant, and the size and statistical significance of all other coefficients remained the same. However, when the new variable was added to the regressions for the subset of associate's degree students (Table 4), we found a positive and statistically significant effect, but only small changes in the institutional coefficients. Although these results suggest that more transitions are associated with positive outcomes, it cannot necessarily be assumed that the transitions are occurring for academic reasons. Further research is necessary to determine the causes of these shifts between colleges.

Finally, the fourth column of Table 4 reports the results from the continuous measure (the log of the total number of credits earned) after accounting for multiple institutional attendance. For the most part, the findings are similar to those from Table 3. There are, however, two key differences. First, an increase in Pell Grants was negatively associated with the (log) number of credits earned by students in associate's degree programs. Second, there was a weak relationship between administrative expenditures per FTE undergraduate and student success. Any importance placed on this finding is nominal given its level of significance and uniqueness among the other results.

Table 5
Measures of fit analysis: individual versus institutional characteristicsa

            Community college students        Associate's degree students
Model       Block 1b      Block 2c            Block 1b      Block 2c
Model 1     0.113         0.139               0.129         0.171
Model 2     0.121         0.132               0.148         0.173
Model 3     0.115         0.139               0.141         0.191

Source: Authors' estimates based on NELS:88 and IPEDS, various years.
a Fit of the model is measured as pseudo-R2.
b Block 1 corresponds to models with individual-level characteristics only.
c Block 2 adds the institutional-level variables to Block 1.

The relative effects of individual student characteristics compared with institutional characteristics are shown in Table 5. Although measures of fit in limited dependent variable models do not have the same interpretation as in a linear regression model, they do provide some signal about the accuracy with which the model fits the data (Maddala, 1983). We fitted each model with a constant term, and then sequentially added the individual characteristics (Block 1) and the institutional variables (Block 2), computing the pseudo-R2 from the log-likelihood values of each model. The results suggest that the addition of 16 institutional covariates improves the fit of the model, although the improvement is relatively small. This finding indicates that individual student characteristics have a greater bearing on individual graduation rates than do

institutional characteristics, or at least the institutional characteristics that are measured by IPEDS. Data on more specific institutional policies, practices, and programs may show these characteristics to be more influential than the macro-level characteristics, such as institutional size, student composition, and overall expenditures, that were used in this study.

5. Robustness and limitations of the results

The first robustness test examines whether the pooled probit is a more appropriate specification than the random effect probit model. For this test, rho (ρ), which represents the proportion of the total variance contributed by unobserved heterogeneity at the institution level, is reported in the last row of the Model 2 column of Tables 3 and 4. Taken in order, 12% of the unexplained variance in the outcome of all community college students can be attributed to the unobserved institution-specific effect. Similarly, 13% of the unexplained variance in the outcomes of students in the associate's degree sub-sample can be attributed to unobserved institutional-level effects. After estimating ρ, we compared the pooled probit and the random effect probit models using a likelihood ratio test of the null hypothesis ρ = 0. The likelihood ratio statistic is distributed chi-square with one degree of freedom. The estimated values of 16.43 for the community college students and 7.11 for the associate's degree sub-sample provide strong statistical evidence at the 1% level that the random effect probit is the more appropriate model.

A second important sensitivity analysis demonstrates the value added by including NELS:88 data on individual students in the regressions. We conducted a series of regressions, shown in Table 6, that first correlate student outcomes with institutional factors only (Model 1) and then sequentially add student demographics (Model 2), SES (Model 3), and test scores (Model 4).
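The likelihood ratio test of ρ = 0 described above compares twice the log-likelihood difference between the random effect and pooled probit models to a chi-square distribution with one degree of freedom. A quick check of the paper's reported statistics, using only the Python standard library (for 1 degree of freedom the chi-square survival function reduces to erfc):

```python
import math

def chi2_sf_1df(x: float) -> float:
    """P(X > x) for X ~ chi-square with 1 df, via P = erfc(sqrt(x / 2))."""
    return math.erfc(math.sqrt(x / 2.0))

# Likelihood ratio statistics as reported in the text
lr_all = 16.43    # all community college students
lr_assoc = 7.11   # associate's degree sub-sample

p_all = chi2_sf_1df(lr_all)
p_assoc = chi2_sf_1df(lr_assoc)
# Both p-values fall below 0.01, rejecting rho = 0 in favor of
# the random effect probit specification.
```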
For this example, we re-estimate the random effect model on the entire student sample (Table 3, column 3). Results for Model 1, without any individual-level variables, suggest that institutional size is not statistically correlated with community college student outcomes, but it becomes statistically significant after controlling for students' demographic characteristics. Overall, the coefficients on the institutional size dummies and part-time faculty increase in absolute value after controlling for the

Table 6
Marginal effects from random effects model with stepwise inclusion of individual characteristics

                                            Model 1            Model 2            Model 3            Model 4
Variable                                    dy/dx (S.E.)       dy/dx (S.E.)       dy/dx (S.E.)       dy/dx (S.E.)

1001–2500 FTE undergraduates                0.078 (0.048)      0.131*** (0.044)   0.126*** (0.049)   0.122** (0.056)
2501–5000 FTE undergraduates                0.091* (0.054)     0.152*** (0.045)   0.161*** (0.051)   0.139** (0.057)
More than 5000 FTE undergraduates           0.080 (0.049)      0.148*** (0.045)   0.169*** (0.051)   0.150*** (0.057)
Proportion part-time faculty                0.141*** (0.051)   0.096** (0.044)    0.100** (0.05)     0.155*** (0.058)
Certificate degree oriented                 0.063 (0.042)      0.061 (0.038)      0.060 (0.042)      0.054 (0.046)
Proportion FTE minority undergraduates      0.239*** (0.071)   0.081 (0.066)      0.087 (0.075)      0.186** (0.087)
Proportion FTE female undergraduates        0.196 (0.225)      0.214 (0.198)      0.180 (0.225)      0.011 (0.257)
Proportion FTE part-time undergraduates     0.152 (0.105)      0.083 (0.092)      0.101 (0.105)      0.018 (0.121)
Federal aid (Pell Grants)a                  0.045 (0.044)      0.027 (0.038)      0.030 (0.044)      0.041 (0.05)
In-state tuitionb                           0.006 (0.015)      0.001 (0.013)      0.001 (0.014)      0.004 (0.017)
Instructional expendituresa                 0.014 (0.014)      0.006 (0.012)      0.007 (0.014)      0.000 (0.016)
Academic supporta                           0.047 (0.049)      0.045 (0.042)      0.046 (0.048)      0.013 (0.056)
Student servicesa                           0.013 (0.042)      0.002 (0.036)      0.003 (0.041)      0.030 (0.049)
Administrative expendituresa                0.012 (0.025)      0.012 (0.022)      0.013 (0.025)      0.032 (0.028)
College is located in urban area            0.058* (0.026)     0.039* (0.023)     0.040 (0.026)      0.054 (0.039)
College is located in rural area            0.019 (0.072)      0.018 (0.064)      0.013 (0.072)      0.059 (0.081)
Student demographicsc                       No                 Yes                Yes                Yes
Student SES                                 No                 No                 Yes                Yes
Student test scores                         No                 No                 No                 Yes
Unweighted observations                     2389               2381               2368               2196
Number of institutions                      548                548                547                536

Source: Authors' estimates based on NELS:88 and IPEDS, various years.
Notes: The dependent variable in all models is attainment of any degree (certificate, associate's, or bachelor's) or transfer to a 4-year institution. ***, **, and * indicate statistical significance at the 1%, 5%, and 10% levels.
a In $1000s per FTE undergraduate.
b In $1000s.
c Demographic variables include controls for gender, race, and delayed enrollment.

full set of individual-level factors, suggesting that the uncontrolled estimates were biased towards zero. On the other hand, the coefficient on the proportion of minority students increases after adding 10th-grade test scores, suggesting that it was previously capturing some negative influence of student ability.

One limitation of the analysis is the use of aggregate institutional measures to capture the variation in multiple educational outcomes: attainment of a certificate or associate's degree or transfer to a 4-year college. We recognize that these outcomes may be the result of different institutional priorities or realities, but community colleges are typically evaluated on their combined performance on these outcomes rather than on any single one or subset. Thus, since the main objective of the study is to determine the institutional contribution to individual completion, we use the most generous definition of student success.

Some analytic problems also remain. For example, we still must rely on the crude institutional measures available in IPEDS. So, while we may know that an individual is from a low-income family, we have no reliable information on the economic background of the typical student at that individual's college. In addition, we do not have measures of specific institutional policies, such as the types of student services or pedagogic strategies used to improve retention and completion. Finally, the NELS:88 sample is made up almost entirely of traditional-age college students, and therefore provides no information on older students, who comprise an important part of community college enrollments.

Alternatively, the coefficients on some variables may reflect a response to perceived student need as well as some exogenously determined institutional policy. For example, as noted earlier, colleges whose students face multiple barriers may spend more on student services. While we added student characteristics to control for the college sorting process, there may be important factors that are not

measured in our datasets. In this case, even if student services are effective in increasing retention, the negative effect of the initial student characteristics may offset the positive program effect, resulting in a coefficient that suggests no effect. The reader is also cautioned that some unobservable student heterogeneity may determine both whether a student chooses a community college and the type of community college chosen (e.g., large, urban, or occupationally oriented), which may affect the estimated influence of the institutional measures included in the models on student completion.

6. Discussion

The overarching goal of this study is to measure the institutional characteristics that affect the success of individual community college students, an analysis that has been largely absent from the literature. We estimate the institutional effect on a student's likelihood of completing a postsecondary credential or transferring to a 4-year college while controlling for individual characteristics such as a student's socioeconomic background and scores on standardized tests administered in high school. Our estimation strategy addressed two methodological challenges: unobserved institutional effects and multiple-institution attendance. Our results are generally consistent across both populations and specifications.

What do these findings imply about the policy, compositional, and financial variables that we analyzed? First, we observe an inverse relationship between school size and students' likelihood of completing or transferring to a 4-year college. This finding contrasts with some findings about 4-year colleges, but is consistent with other institutional analyses of 2-year colleges. Our finding is also consistent with the notion that a more professional atmosphere and personalized services, such as having a greater proportion of full-time rather than part-time faculty and expanding academic support services, seem to benefit the traditional-age student population in the NELS:88 sample.
Second, compositional factors have some influence as well. The findings from this study provide some support for the hypothesis that colleges with a larger share of minority students have lower graduation rates, a result that is consistent with research using institutional data at both community colleges and 4-year colleges. Given that we are controlling for race, test scores, and socioeconomic status, this is a result worth further examination. Lastly, with the exception of academic support, the selected expenditure and tuition levels are generally not related to differences in graduation probabilities. This finding, however, stands in contrast to most of the findings for 4-year college students and is also a candidate for further study.

Ultimately, we find that individual characteristics are more strongly related to completion probabilities than are institutional factors. There may be several explanations for the apparent greater importance of individual characteristics. First, the findings suggest that well-prepared students with economic resources are likely to survive and do well in a variety of institutions. On the other hand, students facing many challenges, including personal and financial responsibilities, may have trouble even in strong colleges. Second, individual variables are measured with much more precision than institutional variables, especially with respect to their influence on individuals. Students' individual characteristics obviously influence their experience, but colleges often comprise subcultures that are probably more important to individual students than the average characteristics of the whole institution. Finally, our institutional characteristics are not able to capture effective institutional policies. Pedagogic strategies, successful guidance and academic counseling efforts, faculty culture, organizational characteristics, and many other factors are probably more influential than the broad characteristics measured by IPEDS.

Research on the relationship between institutional characteristics and institutional effectiveness is crucial to understanding how community colleges can increase their very low completion and transfer rates. There are several possible directions for future research.
Certainly additional refinements of the type of analysis presented here using IPEDS and national longitudinal data, such as NELS:88 or the BPS:96/2001, will be important. State unit record data systems can provide much larger samples, including significant samples within individual institutions, although clearly the number of institutions will be much smaller. But within states it will be easier to have more comprehensive measures of institutional features. Evaluations of individual programs, such as particular strategies for remediation, can also play a role. Finally, additional insights can be gained by conducting qualitative research that searches for institutional features and

policies that seem to be related to differences in institutional effectiveness.

Acknowledgments

This research was funded by the Ford Foundation. The work reported here has also benefited from research funded by the Lumina Foundation for Education (as part of the Achieving the Dream: Community Colleges Count initiative) and the US Department of Education (as part of the National Assessment of Vocational Education). The Community College Research Center was founded as a result of a generous grant from the Alfred P. Sloan Foundation, which continues to support our work. An earlier version of this paper was presented at the 2005 American Educational Research Association Annual Meeting and at the Council for the Study of Community Colleges 47th Annual Conference. We also wish to thank Mariana Alfonso, Clive Belfield, Cliff Adelman, Lauren O'Gara, Lisa Rothman, Wendy Schwartz and three referees for their comments on an earlier draft.

References

Adelman, C. (1999). Answers in the tool box: Academic intensity, attendance patterns, and bachelor's degree attainment. Washington, DC: National Center for Education Statistics.
Adelman, C. (2005). Moving into town–and moving on: The community college in the lives of traditional-age students. Washington, DC: National Center for Education Statistics.
Adelman, C. (2006). The toolbox revisited: Paths to degree completion from high school through college. Washington, DC: National Center for Education Statistics.
Astin, A. (1993). What matters in college: Four critical years revisited. San Francisco: Jossey-Bass.
Astin, A., Tsui, L., & Avalos, J. (1996). Degree attainment rates at American colleges and universities: Effects of race, gender, and institutional type. Los Angeles: Higher Education Research Institute, UCLA.
Bailey, T., & Alfonso, M. (2005). Paths to persistence: An analysis of research on program effectiveness at community colleges. Indianapolis: Lumina Foundation for Education.
Bailey, T., Calcagno, J., Jenkins, D., Leinbach, D., & Kienzl, G. (2006). Is Student-Right-to-Know all you should know? An analysis of community college graduation rates. Research in Higher Education, 47(5).
Bean, J. (1985). Interaction effects based on class level in an explanatory model of college student dropout syndrome. American Educational Research Journal, 22(1), 35–64.
Brint, S., & Karabel, J. (1989). The diverted dream: Community colleges and the promise of educational opportunity in America, 1900–1985. New York: Oxford University Press.


Cabrera, A., Burkum, K., & La Nasa, S. (2005). Pathways to a four-year degree: Determinants of transfer and degree completion. In A. Seidman (Ed.), College student retention: A formula for student success. Westport: ACE/Praeger Series on Higher Education.
Cohen, A., & Brawer, F. (1996). The American community college. San Francisco: Jossey-Bass.
Dougherty, K. (1994). The contradictory college: The conflicting origins, impacts, and futures of the community college. Albany: State University of New York Press.
Ehrenberg, R., & Zhang, L. (2005). Do tenured and tenure-track faculty matter? Journal of Human Resources, 40(3), 647–659.
Guilkey, D., & Murphy, J. (1993). Estimation and testing in the random effects probit model. Journal of Econometrics, 59(3), 301–317.
Hanushek, E. (1986). The economics of schooling: Production and efficiency in public schools. Journal of Economic Literature, 24(3), 1142–1177.
Jacoby, D. (2006). Effects of part-time faculty employment on community college graduation rates. Journal of Higher Education, 77(6), 1081–1103.
Kane, T., & Rouse, C. (1995). Labor-market returns to two- and four-year college. American Economic Review, 85(3), 600–614.
Maddala, G. (1983). Limited dependent and qualitative variables in econometrics. New York: Cambridge University Press.
Pascarella, E., & Terenzini, P. (2005). How college affects students: A third decade of research, Vol. 2. San Francisco: Jossey-Bass.
Rouse, C. (1995). Democratization or diversion? The effect of community colleges on educational attainment. Journal of Business and Economic Statistics, 13(2), 217–224.
Ryan, J. (2004). The relationship between institutional expenditures and degree attainment at baccalaureate institutions. Research in Higher Education, 45(2), 97–114.
Scott, M., Bailey, T., & Kienzl, G. (2006). Relative success? Determinants of college graduation rates in public and private colleges in the US. Research in Higher Education, 47(3), 249–279.
Tinto, V. (1993). Leaving college: Rethinking the causes and cures of student attrition. Chicago: University of Chicago Press.
Titus, M. (2004). An examination of the influence of institutional context on student persistence at 4-year colleges and universities: A multilevel approach. Research in Higher Education, 45(7), 673–700.
Titus, M. (2006). Understanding the influence of the financial context of institutions on student persistence at 4-year colleges and universities: A multilevel approach. Journal of Higher Education, 77(2), 353–375.
Toutkoushian, R., & Smart, J. (2001). Do institutional characteristics affect student gains from college? Review of Higher Education, 25(1), 39–61.
Winston, G., & Zimmerman, D. (2004). Peer effects in higher education. In C. Hoxby (Ed.), College choice: The economics of where to go, when to go, and how to pay for it. Chicago: University of Chicago Press.
Wooldridge, J. (2002). Econometric analysis of cross section and panel data. Cambridge: MIT Press.