Methodology, measurement and data
Enrollment in higher education has risen dramatically in Latin America, especially in Chile. Yet graduation and persistence rates remain low. One way to improve graduation and persistence is to use data and analytics to identify students at risk of dropout, target interventions, and evaluate interventions’ effectiveness at improving student success. We illustrate the potential of this approach using data from eight Chilean universities. Results show that data available at matriculation are only weakly predictive of persistence, while prediction improves dramatically once data on university grades become available. Some predictors of persistence are under policy control. Financial aid predicts higher persistence, and being denied a first-choice major predicts lower persistence. Student success programs are ineffective at some universities; they are more effective at others, but when effective they often fail to target the highest-risk students. Universities should use data regularly and systematically to identify high-risk students, target them with interventions, and evaluate those interventions’ effectiveness.
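The abstract's core finding is about predictive power before versus after grades arrive. As a hypothetical illustration only (cohort, outcomes, and predictor values all invented, not the paper's data), the sketch below ranks students by two candidate risk signals and compares their discrimination with a rank-based AUC:

```python
def auc(scores, labels):
    """Rank-based AUC: probability a random persister outranks a random dropout."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented toy cohort: 1 = persisted, 0 = dropped out.
persisted   = [1, 1, 1, 0, 1, 0, 1, 0, 1, 0]
# Matriculation-era predictor (e.g., an admission test score): weakly aligned.
admit_score = [610, 595, 575, 600, 602, 590, 588, 570, 565, 585]
# First-year university GPA: far more aligned with persistence.
first_gpa   = [3.4, 3.1, 3.0, 2.1, 3.2, 2.4, 3.6, 3.05, 2.9, 2.5]

print(auc(admit_score, persisted))  # modestly above chance
print(auc(first_gpa, persisted))    # much higher
```

In this toy the matriculation signal barely separates the groups while GPA nearly does, mirroring the pattern the abstract reports; a real analysis would use a fitted model (e.g., logistic regression) rather than a single raw predictor.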
Clustered observational studies (COSs) are a critical analytic tool for educational effectiveness research. We present a design framework for the development and critique of COSs. The framework is built on the counterfactual model for causal inference and promotes designing COSs that emulate the target randomized trial that would have been conducted were it feasible. We emphasize the key role that understanding the assignment mechanism plays in study design. We review methods for statistical adjustment and highlight a recently developed form of matching designed specifically for COSs. We review how regression models can be profitably combined with matching and note best practices for estimating statistical uncertainty. Finally, we review how sensitivity analyses can determine whether conclusions are sensitive to bias from potential unobserved confounders. We demonstrate these concepts with an evaluation of a summer school reading intervention in Wake County, North Carolina.
Many interventions in education occur in settings where treatments are applied to groups. For example, a reading intervention may be implemented for all students in some schools and withheld from students in other schools. When such treatments are non-randomly allocated, outcomes across the treated and control groups may differ due to the treatment or due to baseline differences between groups. When this is the case, researchers can use statistical adjustment to make treated and control groups similar in terms of observed characteristics. Recent work in statistics has developed matching methods designed for contexts where treatments are clustered. This form of matching, known as multilevel matching, may be well suited to many education applications where treatments are assigned to schools. In this article, we provide an extensive evaluation of multilevel matching and compare it to multilevel regression modeling. We evaluate multilevel matching methods in two ways. First, we use these matching methods to recover treatment effect estimates from three clustered randomized trials using a within-study comparison design. Second, we conduct a simulation study. We find evidence that generally favors an analytic approach to statistical adjustment that combines multilevel matching with regression adjustment. We conclude with an empirical application.
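Multilevel matching as studied in the paper operates at both the school and student levels; the minimal sketch below shows only the cluster-level step, pairing each treated school with its most similar control school on aggregate covariates. All school identifiers and covariate values are invented, and real multilevel matching also balances students within matched schools and typically uses a richer distance than the plain Euclidean one here:

```python
# Toy school-level records: (id, mean prior score, share low-income). Invented data.
treated_schools = [("T1", 0.20, 0.55), ("T2", -0.40, 0.80)]
control_schools = [("C1", 0.25, 0.50), ("C2", -0.35, 0.85),
                   ("C3", 0.90, 0.20), ("C4", -0.90, 0.95)]

def distance(a, b):
    """Euclidean distance over the covariate part of a school record."""
    return sum((x - y) ** 2 for x, y in zip(a[1:], b[1:])) ** 0.5

def greedy_match(treated, controls):
    """Match each treated school to its nearest unused control (greedy, no replacement)."""
    pool = list(controls)
    pairs = {}
    for t in treated:
        best = min(pool, key=lambda c: distance(t, c))
        pairs[t[0]] = best[0]
        pool.remove(best)
    return pairs

print(greedy_match(treated_schools, control_schools))
```

Here each treated school finds a close covariate neighbor (and the dissimilar controls C3 and C4 go unused), which is exactly the kind of observed-characteristic balance that the subsequent regression adjustment then refines.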
Researchers commonly interpret effect sizes by applying benchmarks proposed by Cohen over a half century ago. However, effects that are small by Cohen’s standards are large relative to the impacts of most field-based interventions. These benchmarks also fail to consider important differences in study features, program costs, and scalability. In this paper, I present five broad guidelines for interpreting effect sizes that are applicable across the social sciences. I then propose a more structured schema with new empirical benchmarks for interpreting a specific class of studies: causal research on education interventions with standardized achievement outcomes. Together, these tools provide a practical approach for incorporating study features, cost, and scalability into the process of interpreting the policy importance of effect sizes.
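The benchmarks in question apply to standardized mean differences. As a reminder of the quantity being interpreted, here is a minimal Cohen's-d calculation on invented achievement scores (the numbers are illustrative only, not from any study in the paper):

```python
def mean(xs):
    return sum(xs) / len(xs)

def pooled_sd(a, b):
    """Pooled standard deviation of two groups (Bessel-corrected variances)."""
    va = sum((x - mean(a)) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mean(b)) ** 2 for x in b) / (len(b) - 1)
    return (((len(a) - 1) * va + (len(b) - 1) * vb) / (len(a) + len(b) - 2)) ** 0.5

def cohens_d(treat, control):
    """Standardized mean difference: (mean_t - mean_c) / pooled SD."""
    return (mean(treat) - mean(control)) / pooled_sd(treat, control)

# Invented test scores for a treated and a control group.
treat   = [52, 55, 49, 58, 54, 51]
control = [50, 48, 53, 47, 51, 49]
print(round(cohens_d(treat, control), 2))
```

The paper's argument is that the same numeric d can warrant very different interpretations depending on the study design, the cost of the intervention, and its scalability, so the computation above is only the starting point.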
Using rich longitudinal data from one of the largest teacher education programs in Texas, we examine the measurement of pre-service teacher (PST) quality and its relationship with entry into the K–12 public school teacher workforce. Drawing on rubric-based observations of PSTs during clinical teaching, we find that little of the variation in observation scores is attributable to actual differences between PSTs. Instead, differences in scores largely reflect differences in the rating standards of field supervisors. We also find that men and PSTs of color receive systematically lower scores. Finally, higher-scoring PSTs are slightly more likely to enter the teacher workforce and substantially more likely to be hired at the same school as their clinical teaching placement.
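The finding that little score variation is attributable to actual PST differences is a variance-decomposition claim. A stylized single-factor version of that logic is sketched below with invented rubric scores (the paper's actual analysis uses more elaborate multilevel models that also separate out field-supervisor effects):

```python
# Invented rubric scores: each pre-service teacher rated on several occasions.
scores_by_pst = {
    "A": [3.0, 3.2, 2.9],
    "B": [3.1, 3.0, 3.3],
    "C": [2.8, 3.1, 3.0],
}

def variance_shares(groups):
    """Split total sum of squares into between-group and within-group shares."""
    allx = [x for g in groups.values() for x in g]
    grand = sum(allx) / len(allx)
    between = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2 for g in groups.values())
    within = sum((x - sum(g) / len(g)) ** 2 for g in groups.values() for x in g)
    total = between + within
    return between / total, within / total

b, w = variance_shares(scores_by_pst)
print(f"between-PST share: {b:.2f}, within-PST share: {w:.2f}")
```

In this toy, most of the variance sits within rather than between PSTs, which is the shape of the paper's result: observed score differences say more about rating occasions and raters than about the candidates themselves.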
After increasing in the 1970s and 1980s, time to bachelor’s degree has declined since the 1990s. We document this fact using data from three nationally representative surveys. We show that this pattern is occurring across school types and for all student types. Using administrative student records from 11 large universities, we confirm the finding and show that it is robust to alternative sample definitions. We discuss what might explain the decline in time to bachelor’s degree by considering trends in student preparation, state funding, student enrollment, study time, and student employment during college.
The Community Eligibility Provision (CEP) is a policy change to the federally administered National School Lunch Program that allows schools serving low-income populations to classify all students as eligible for free meals, regardless of individual circumstances. This has implications for the common practice of using free and reduced-price meal (FRM) data to proxy for student disadvantage in education research and policy applications. We document empirically how the CEP has affected the value of FRM eligibility as a proxy for student disadvantage. At the individual student level, we show that there is essentially no effect of the CEP. However, the CEP does meaningfully change the information conveyed by the share of FRM-eligible students in a school. It is this latter measure that is most relevant for policy uses of FRM data.
Note: Portions of this paper were previously circulated under the title “Using Free Meal and Direct Certification Data to Proxy for Student Disadvantage in the Era of the Community Eligibility Provision.” We have since split the original paper into two parts. This is the first part.
Important educational policy decisions, like whether to shorten or extend the school year, often require accurate estimates of how much students learn during the year. Yet, related research relies on a mostly untested assumption: that growth in achievement is linear throughout the entire school year. We examine this assumption using a data set containing math and reading test scores for over seven million students in kindergarten through 8th grade across the fall, winter, and spring of the 2016-17 school year. Our results indicate that assuming linear within-year growth is often not justified, particularly in reading. Implications for investments in extending the school year, summer learning loss, and racial/ethnic achievement gaps are discussed.
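The linearity assumption the paper tests can be checked by comparing per-month growth rates between adjacent test checkpoints. The sketch below does this with invented test dates and average scale scores for a single grade (not the paper's data); equal rates would support linear within-year growth:

```python
from datetime import date

# Invented checkpoints: (test date, average scale score) for fall, winter, spring.
checkpoints = [(date(2016, 9, 15), 200.0),
               (date(2017, 1, 15), 206.0),
               (date(2017, 5, 15), 208.0)]

def monthly_rates(points):
    """Per-month growth rate between consecutive test checkpoints."""
    rates = []
    for (d0, s0), (d1, s1) in zip(points, points[1:]):
        months = (d1 - d0).days / 30.44  # average month length in days
        rates.append((s1 - s0) / months)
    return rates

fall_winter, winter_spring = monthly_rates(checkpoints)
print(round(fall_winter, 2), round(winter_spring, 2))
```

In this toy, growth is markedly faster before winter than after, the kind of nonlinearity that would bias projections (e.g., of summer learning loss) built on a straight-line assumption.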
High school graduation rates have increased dramatically in the past two decades. Some skepticism has arisen, however, because of the confluence of the graduation rise and the start of high-stakes accountability for graduation rates under No Child Left Behind (NCLB). In this study we provide some of the first evidence about the role of accountability versus strategic behavior, especially the degree to which the recent graduation rate rise represents increased human capital. First, using a national difference-in-differences (DD) analysis of within-state, cross-district variation in proximity to state graduation rate thresholds, we confirm that NCLB accountability increased graduation rates. However, we find limited evidence that this is due to strategic behavior. To test for a lowering of graduation standards, we examine graduation rates in states with and without graduation exams and trends in GEDs; neither analysis suggests that the graduation rate rise is due to strategic behavior. We also examine the effects of “credit recovery” courses using Louisiana micro data; while our results suggest an increase in credit recovery, consistent with some lowering of standards, the size of the effect is not nearly enough to explain the rise in graduation rates. Finally, we examine other forms of strategic behavior by schools, though these can only explain inflation of school/district-level graduation rates, not national rates. Overall, the evidence suggests that the rise in the national graduation rate reflects some strategic behavior, but also a substantial increase in the nation’s stock of human capital. Graduation accountability was a key contributor.
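The DD logic behind the national analysis compares changes over time for districts differently exposed to accountability pressure. A minimal sketch with entirely invented graduation rates (the real design uses proximity to state thresholds, many districts, and regression controls):

```python
def did(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """Difference-in-differences: treated group's change minus control group's change."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(post_treat) - mean(pre_treat)) - (mean(post_ctrl) - mean(pre_ctrl))

# Invented district graduation rates (percent), before/after accountability, for
# districts near the state graduation-rate threshold vs. districts far above it.
near_pre, near_post = [70.0, 72.0, 68.0], [76.0, 78.0, 74.0]
far_pre,  far_post  = [80.0, 82.0, 78.0], [82.0, 84.0, 80.0]
print(did(near_pre, near_post, far_pre, far_post))
```

The far-from-threshold districts supply the counterfactual trend; any extra gain among near-threshold districts is attributed to the accountability pressure, which is the sense in which the paper "confirms that NCLB accountability increased graduation rates."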
Many kindergarten teachers place students in higher and lower “ability groups” to learn math and reading. Ability group placement should depend on student achievement, but critics charge that placement is biased by socioeconomic status (SES), gender, and race/ethnicity. We predict group placement in the Early Childhood Longitudinal Study of the Kindergarten class of 2010-11, using linear and ordinal regression models with classroom fixed effects. The best predictors of group placement are test scores, but girls, high-SES students, and Asian Americans receive higher placements than their test scores alone would predict. One third of students move groups during kindergarten, and some movement is predicted by changes in test scores, but high-SES students move up more than score gains would predict, and Hispanic children move up less. Net of SES and test scores, there is no bias in the placement of African American children. Differences in teacher-reported behaviors explain the higher placement of girls, but do little to explain the higher or lower placement of other groups. Although achievement is the best predictor of ability group placement, there are signs of bias.
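The bias test in this abstract amounts to asking whether a subgroup's placements sit above or below what test scores alone would predict. A stylized single-classroom version is sketched below with invented scores, placements, and groups (the paper uses ordinal models with classroom fixed effects across a national sample):

```python
# Invented classroom: (test score, ability-group placement 1=low..3=high, subgroup).
students = [(45, 1, "girl"), (50, 2, "girl"), (62, 3, "girl"),
            (48, 1, "boy"),  (55, 2, "boy"),  (60, 2, "boy")]

def fit_line(xs, ys):
    """Ordinary least squares slope and intercept for a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

scores = [s for s, _, _ in students]
places = [p for _, p, _ in students]
slope, intercept = fit_line(scores, places)

# Mean residual by subgroup: positive means placed above what scores predict.
means = {}
for label in ("girl", "boy"):
    resid = [p - (slope * s + intercept) for s, p, g in students if g == label]
    means[label] = sum(resid) / len(resid)
    print(label, round(means[label], 2))
```

In this toy, girls land above their score-predicted placement and boys below, the same residual pattern the paper reads as a sign of bias once achievement is accounted for.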