Methodology, measurement and data
Research showing that high-quality preschool benefits children's early learning and later life outcomes has led to increased state engagement in public preschool. However, mixed results from evaluations of two programs, Tennessee's Voluntary Pre-K program and Head Start, have left many policymakers unsure about how to ensure productive investments. This report presents the most rigorous evidence on the effects of preschool and clarifies how the Tennessee and Head Start findings relate to the larger body of research. That research shows that high-quality preschool enhances children's school readiness, producing substantial early learning gains relative to children who do not attend preschool, and that these benefits can last far into children's later years of school and life. The issue, therefore, is not whether preschool "works," but how to design and implement programs that ensure public preschool investments consistently deliver on their promise.
Estimates of teacher “value-added” suggest teachers vary substantially in their ability to promote student learning. Prompted by this finding, many states and school districts have adopted value-added measures as indicators of teacher job performance. In this paper, we conduct a new test of the validity of value-added models. Using administrative student data from New York City, we apply commonly estimated value-added models to an outcome teachers cannot plausibly affect: student height. We find the standard deviation of teacher effects on height is nearly as large as that for math and reading achievement, raising obvious questions about validity. Subsequent analysis finds these “effects” are largely spurious variation (noise), rather than bias resulting from sorting on unobserved factors related to achievement. Given the difficulty of differentiating signal from noise in real-world teacher effect estimates, this paper serves as a cautionary tale for their use in practice.
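A toy simulation can illustrate the abstract's central point. The sketch below is purely hypothetical (simulated data, not the paper's New York City sample, and a deliberately naive class-mean estimator rather than the models the authors estimate): even when teachers have no true effect on an outcome, naive "value-added" estimates show sizable spread from sampling noise alone.

```python
# Hypothetical illustration: with zero true teacher effects on the
# outcome (think "height"), class-mean value-added estimates still
# vary, purely because of student-level sampling noise.
import numpy as np

rng = np.random.default_rng(0)
n_teachers, class_size = 200, 25

# Outcome is pure student-level noise; every teacher's true effect is 0.
scores = rng.normal(0.0, 1.0, size=(n_teachers, class_size))

# Naive "value-added": each teacher's class mean.
va_estimates = scores.mean(axis=1)
sd_estimates = va_estimates.std()

# Theory: the SD of these spurious estimates is about
# 1 / sqrt(class_size) = 0.2 SD of the outcome -- all noise, no signal.
```

The spread shrinks as class sizes (or years of data per teacher) grow, which is why separating signal from noise is the crux of using such estimates in practice.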
Despite wide achievement gaps across California between students from different racial and socioeconomic backgrounds, some school districts have excelled at supporting the learning of all their students. This analysis identifies these positive outlier districts—those in which students of color, as well as White students, consistently achieve at higher levels than students from similar racial/ethnic backgrounds and from families of similar income and education levels in most other districts. These results are predicted, in significant part, by the qualifications of districts’ teachers, as measured by their certification and experience. In particular, the proportion of underprepared teachers—those teaching on emergency permits, waivers, and intern credentials—is associated with decreased achievement for all students, while teaching experience is associated with increased achievement, especially for students of color.
We show that grit, a skill that has been shown to be highly predictive of achievement, is malleable in childhood and can be fostered in the classroom environment. We evaluate a randomized educational intervention implemented in two independent elementary school samples. Outcomes are measured via a novel incentivized real effort task and performance in standardized tests. We find that treated students are more likely to exert effort to accumulate task-specific ability, and hence, more likely to succeed. In a follow-up 2.5 years after the intervention, we estimate an effect of about 0.2 standard deviations on a standardized math test.
Despite large schooling and learning gains in many developing countries, children in highly deprived areas are often unlikely to achieve even basic literacy and numeracy. We study how much of this problem can be resolved using a multi-pronged program combining several distinct interventions known to be effective in isolation. We conducted a cluster-randomized trial in The Gambia evaluating a literacy and numeracy intervention designed for primary-aged children in remote parts of poor countries. The intervention combines para-teachers delivering after-school supplementary classes, scripted lesson plans, and frequent monitoring focused on improving teacher practice (coaching). A similar intervention previously demonstrated large learning gains in a cluster-randomized trial in rural India. After three academic years, Gambian children receiving the intervention scored 46 percentage points (3.2 SD) better on a combined literacy and numeracy test than control children. This intervention holds great promise for addressing low learning levels in other poor, remote settings.
Up to three-fourths of college students can be classified as "non-traditional," yet whether typical policy interventions improve their education and labor market outcomes is understudied. I use a regression discontinuity design to estimate the impacts of a state financial aid program aimed at non-traditional students. Eligibility has no impact on degree completion for students intending to enroll in community colleges or four-year colleges but increases bachelor's degree completion for students interested in large, for-profit colleges by four percentage points. I find no impacts on employment or earnings for all applicants. This research highlights the challenges of promoting human capital investment for adults.
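The regression discontinuity logic can be sketched in a few lines. The simulation below is an invented illustration (the running variable, cutoff, and baseline completion rate are assumptions, not the paper's data): students just below an eligibility cutoff receive aid, and comparing completion rates in a narrow band around the cutoff recovers the local effect.

```python
# Hypothetical RD sketch: a 4-percentage-point true effect of aid
# eligibility on completion, recovered by comparing outcomes in a
# narrow bandwidth around the eligibility cutoff.
import numpy as np

rng = np.random.default_rng(1)
n = 400_000
score = rng.uniform(-1, 1, n)       # running variable, cutoff at 0
eligible = score < 0                # aid eligibility below the cutoff

# Completion: 20% baseline, +4 pp for eligible students.
completed = rng.random(n) < (0.20 + 0.04 * eligible)

bw = 0.1                            # bandwidth around the cutoff
near = np.abs(score) < bw
effect = (completed[near & eligible].mean()
          - completed[near & ~eligible].mean())
```

In practice, researchers use local linear regression on each side of the cutoff rather than raw means, and check robustness across bandwidths; the difference-in-means version here only conveys the identification idea.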
This paper presents new experimental estimates of the impact of low-ability peers on own outcomes using nationally representative data from China. We exploit the random assignment of students to junior high school classrooms and find that the proportion of low-ability peers, defined as having been retained during primary school (“repeaters”), has negative effects on non-repeaters’ cognitive and non-cognitive outcomes. An exploration of the mechanisms shows that a larger proportion of repeater peers is associated with reduced after-school study time. The negative effects are driven by male repeaters and are more pronounced among students with less strict parental monitoring at home.
The sustaining environments hypothesis refers to the popular idea, stemming from theories in developmental, cognitive, and educational psychology, that the long-term success of early educational interventions is contingent on the quality of the subsequent learning environment. Several studies have investigated whether specific kindergarten classroom and other elementary school factors account for patterns of persistence and fadeout of early educational interventions. These analyses focus on the statistical interaction between an early educational intervention – usually whether the child attended preschool – and several measures of the quality of the subsequent educational environment. The key prediction of the sustaining environments hypothesis is a positive interaction between these two variables. To quantify the strength of the evidence for such effects, we meta-analyze existing studies that have attempted to estimate interactions between preschool and later educational quality in the United States. We then attempt to establish the consistency of the direction and a plausible range of estimates of the interaction between preschool attendance and subsequent educational quality by using a specification curve analysis in a large, nationally representative dataset that has been used in several recent studies of the sustaining environments hypothesis. The meta-analysis yields small positive interaction estimates ranging from approximately .00 to .04, depending on the specification. The specification curve analyses yield interaction estimates of approximately 0. Results suggest that the current mix of methods used to test the sustaining environments hypothesis cannot reliably detect realistically sized effects. Our recommendations are to combine large sample sizes with strong causal identification strategies, and to study combinations of interventions that have a strong probability of showing large main effects.
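The key regression behind this hypothesis test can be written down compactly. The sketch below is simulated and illustrative only (coefficients and variable names are invented): the outcome is regressed on preschool attendance, later educational quality, and their product, and the coefficient on the product term is the quantity the meta-analysis and specification curve target. Here the true interaction is set to zero, matching the paper's near-zero estimates.

```python
# Illustrative interaction test: outcome on preschool, later quality,
# and their product. True interaction is 0 by construction, so the
# estimate should hover near zero in a large sample.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
preschool = rng.integers(0, 2, n).astype(float)  # attended preschool?
quality = rng.normal(0, 1, n)                    # later classroom quality

# Main effects only; no true preschool x quality interaction.
y = 0.3 * preschool + 0.2 * quality + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), preschool, quality,
                     preschool * quality])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
interaction = beta[3]   # sustaining-environments coefficient
```

Because the interaction coefficient has a much larger standard error than the main effects, detecting a realistically small interaction (the paper's .00 to .04 range) demands very large samples, which is exactly the paper's methodological point.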
Researchers commonly interpret effect sizes by applying benchmarks proposed by Cohen over a half century ago. However, effects that are small by Cohen’s standards are large relative to the impacts of most field-based interventions. These benchmarks also fail to consider important differences in study features, program costs, and scalability. In this paper, I present five broad guidelines for interpreting effect sizes that are applicable across the social sciences. I then propose a more structured schema with new empirical benchmarks for interpreting a specific class of studies: causal research on education interventions with standardized achievement outcomes. Together, these tools provide a practical approach for incorporating study features, cost, and scalability into the process of interpreting the policy importance of effect sizes.
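A minimal numerical example of the problem the paper addresses, using invented numbers: an intervention effect of 0.08 standard deviations looks "negligible" against Cohen's classic benchmarks (0.2 small, 0.5 medium, 0.8 large), yet the paper argues such effects are large relative to most field-based education interventions.

```python
# Sketch: Cohen's d for a hypothetical intervention with a true
# effect of 0.08 SD. The classic benchmarks would call this trivial;
# empirical benchmarks for education RCTs would not.
import numpy as np

rng = np.random.default_rng(3)
treat = rng.normal(0.08, 1.0, 5_000)   # treated outcomes
ctrl = rng.normal(0.00, 1.0, 5_000)    # control outcomes

pooled_sd = np.sqrt((treat.var(ddof=1) + ctrl.var(ddof=1)) / 2)
d = (treat.mean() - ctrl.mean()) / pooled_sd   # Cohen's d
```

The computation is trivial; the paper's contribution is the interpretive schema around it, incorporating study features, cost, and scalability rather than the raw magnitude alone.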
Policymakers are increasingly including early-career earnings data in consumer-facing college search tools to help students and families make more informed post-secondary education decisions. We offer new evidence on the degree to which existing college-specific earnings data equips consumers with useful information by documenting the level of selection bias in the earnings metrics reported in the U.S. Department of Education’s College Scorecard. Given growing interest in reporting earnings by college and major, we focus on the degree to which earnings differences across four-year colleges and universities can be explained by differences in major composition across institutions. We estimate that more than three-quarters of the variation in median earnings across institutions is explained by observable factors, and accounting for differences in major composition explains over 30 percent of the residual variation in earnings after controlling for institutional selectivity, student composition, and local cost of living differences. We also identify large variations in the distribution of earnings within colleges; as a result, comparisons of early-career earnings can be extremely sensitive to whether the median, 25th, or 75th percentiles are presented. Taken together, our findings indicate that consumers can easily draw misleading conclusions about institutional quality when using publicly available earnings data to compare institutions.
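The core variance-decomposition exercise can be sketched with simulated data (the composition measure, coefficients, and earnings scale below are invented, not the Scorecard data): regress college-level median earnings on a composition measure and read off the share of cross-college variation it explains.

```python
# Illustrative decomposition: how much cross-college variation in
# median earnings a single composition measure can "explain" (R^2).
import numpy as np

rng = np.random.default_rng(4)
n_colleges = 500
share_stem = rng.uniform(0, 1, n_colleges)   # invented major-mix measure

# Earnings (in $1,000s): strongly driven by major composition plus noise.
earnings = 40 + 20 * share_stem + rng.normal(0, 5, n_colleges)

X = np.column_stack([np.ones(n_colleges), share_stem])
beta, *_ = np.linalg.lstsq(X, earnings, rcond=None)
resid = earnings - X @ beta
r2 = 1 - resid.var() / earnings.var()   # share of variance explained
```

When observables explain most of the cross-institution variance, as the paper estimates, raw earnings comparisons mostly reflect who enrolls and what they study rather than institutional quality.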