Standards, accountability, assessment, and curriculum
Starting in 2009, the U.S. public education system undertook a massive effort to institute new high-stakes teacher evaluation systems. We examine the effects of these reforms on student achievement and attainment at a national scale by exploiting the staggered timing of implementation across states. We find precisely estimated null effects, on average, that rule out impacts as small as 1.5 percent of a standard deviation for achievement and 1 percentage point for high school graduation and college enrollment. We also find little evidence of heterogeneous effects across an index measuring system design rigor, specific design features, and district characteristics.
We examine the effects of disseminating academic performance data—either status, growth, or both—on parents’ school choices and their implications for racial, ethnic, and economic segregation. We conduct an online survey experiment featuring a nationally representative sample of parents and caretakers of children ages 0-12. Participants choose among three randomly sampled elementary schools drawn from the same school district. Only growth information—alone, and not in concert with status information—has clear and consistent desegregating consequences. Because states that include growth in their school accountability systems have generally done so as a supplement to, and not a replacement for, status, there is little reason to expect that this development will influence choice behavior in a manner that meaningfully reduces school segregation.
High school graduation rates in the United States are at an all-time high, yet many of these graduates are deemed not ready for postsecondary coursework when they enter college. This study examines the short-, medium-, and long-term effects of remedial courses in middle school using a regression discontinuity design. While the short-term test score benefits of taking a remedial course in English language arts in middle school fade quickly, I find significant positive effects on the likelihood of taking college credit-bearing courses in high school, enrolling in college, enrolling in more selective colleges, persisting in college, and attaining a degree.
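The regression discontinuity logic behind this study can be sketched with a minimal simulation. Everything below is an illustrative assumption, not the study's actual data or specification: students just below a test-score cutoff are assigned to the remedial course, and a local linear regression within a bandwidth around the cutoff recovers the treatment effect at the threshold.

```python
# Minimal sharp RDD sketch on simulated data. The cutoff rule, bandwidth,
# and 0.1 SD effect size are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
running = rng.uniform(-1, 1, n)          # test score centered at the cutoff
treated = (running < 0).astype(float)    # below the cutoff -> remedial course
outcome = 0.5 * running + 0.1 * treated + rng.normal(0, 0.3, n)

h = 0.25                                  # bandwidth around the cutoff
mask = np.abs(running) < h
# Local linear regression with separate slopes on each side of the cutoff.
X = np.column_stack([
    np.ones(mask.sum()),
    treated[mask],
    running[mask],
    treated[mask] * running[mask],
])
beta, *_ = np.linalg.lstsq(X, outcome[mask], rcond=None)
print(round(beta[1], 2))  # estimated jump (treatment effect) at the cutoff
```

The coefficient on the treatment indicator is the estimated discontinuity in the outcome at the cutoff, which is the causal quantity an RDD identifies.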
One of the most visible yet insufficiently understood policy decisions in education concerns the optimal amount of instruction time needed to improve academic performance. This paper examines an unexpected, exogenous regulatory change that shortened the school calendar of non-fee-paying schools (public and charter schools) in the Madrid region of Spain by two weeks during the 2017/2018 school year. Using difference-in-differences regression, we find that this regulatory change led to a significant deterioration in academic performance, particularly in Spanish and English. We further explore non-linear (quantile) effects across the distribution of standardized exam scores, finding that the disruption caused by the new regulations most affected students in the upper quartile of the distribution. Overall, we find a reduction in the achievement gap across non-fee-paying schools and an increase in the gap between non-fee-paying and fee-paying (private) schools.
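The difference-in-differences comparison described above can be illustrated with a two-by-two sketch on simulated data. The group labels and the -0.3 SD "true" effect are assumptions for illustration, not the paper's estimates: the DiD estimate is the before/after change in scores for the affected schools minus the same change for unaffected schools.

```python
# Two-by-two difference-in-differences sketch on simulated data. Group
# labels and the -0.3 SD effect are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n = 4000
treated = rng.integers(0, 2, n)  # 1 = non-fee-paying school (calendar cut)
post = rng.integers(0, 2, n)     # 1 = after the 2017/2018 calendar change
score = (0.2 * treated + 0.1 * post
         - 0.3 * treated * post          # simulated treatment effect
         + rng.normal(0, 0.5, n))

def cell_mean(t, p):
    """Mean score in one group-by-period cell."""
    mask = (treated == t) & (post == p)
    return score[mask].mean()

# DiD: change for treated schools minus change for untreated schools.
did = (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))
print(round(did, 2))
```

Under the parallel-trends assumption, the second difference removes both the fixed gap between school types and the common time trend, isolating the effect of the shortened calendar.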
An administrative rule allowed students who failed an exam to retake it shortly after, creating strong "teach to the test" incentives to raise these students' scores for the retake. We develop a model that accounts for truncation and find that these students score 0.14 standard deviations higher on the retest. Using a regression discontinuity design, we estimate that thirty percent of these gains persist to the following year. These results provide evidence that test-focused instruction, or "cramming," raises contemporaneous performance, but a large portion of the gains fades out. Our findings highlight that persistence should be accounted for when comparing educational interventions.
Millions of high school students who take an Advanced Placement (AP) course in one of over 30 subjects can earn college credit by performing well on the corresponding AP exam. Using data from four metro-Atlanta public school districts, we find that 15 percent of students’ AP courses do not result in an AP exam. We predict that up to 32 percent of the AP courses that do not result in an AP exam would have earned a score of 3 or higher, which generally commands college credit at colleges and universities across the United States. Next, we examine disparities in AP exam-taking rates by demographics and course-taking patterns. Most immediately relevant for policy, we find evidence consistent with a positive impact of school district exam subsidies on AP exam-taking rates. In fact, students on free and reduced-price lunch (FRL) in districts that provide a higher subsidy to FRL students than to non-FRL students are more likely to take an AP exam than their non-FRL counterparts, after controlling for demographic and academic covariates.
The educative Teacher Performance Assessment (edTPA), a performance-based examination intended to ensure that prospective PreK-12 teachers are ready to teach, has gained popularity in recent years. This research offers the first causal evidence on the effects of this nationwide initiative on teacher supply and on the student outcomes of new teachers. We leverage the quasi-experimental setting created by states' differing adoption timing and analyze multiple data sources containing a national sample of prospective teachers and of students taught by new teachers in the US. We find that the new licensure requirement reduced the number of graduates from teacher preparation programs by 14%. The negative effect is stronger for non-white prospective teachers at less-selective universities. Contrary to the policy's intention, we find evidence that edTPA has adverse effects on student learning.
How do college non-completers list schooling on their resumes? The negative signal of not completing might outweigh the positive signal of attending but not persisting. If so, job-seekers might hide non-completed schooling on their resumes. To test this, we match resumes from an online jobs board to administrative educational records. We find that fully one in three job-seekers who attended college but did not earn a degree omit their only post-secondary schooling from their resumes. We further show that these are not casual omissions but strategic decisions systematically related to schooling characteristics, such as selectivity and years of enrollment. We also find evidence of lying and show which degrees listed on resumes are most likely untrue. Lastly, we discuss implications: our results show not only that the commonly held assumption that employers perfectly observe schooling does not hold, but also that we can learn which college experiences students believe employers value most.
In multisite experiments, we can quantify treatment effect variation with the cross-site treatment effect variance. However, there is no standard method for estimating cross-site treatment effect variance in multisite regression discontinuity designs (RDDs). This research fills this gap in the literature by systematically exploring and evaluating methods for estimating the cross-site treatment effect variance in multisite RDDs. Specifically, we formalize a fixed intercepts/random coefficients (FIRC) RDD model and develop a random effects meta-analysis (Meta) RDD model for estimating cross-site treatment effect variance. We find that a restricted FIRC model works best when the running variable's relationship to the outcome is stable across sites but can be biased otherwise. In those instances, we recommend using either the unrestricted FIRC model or the meta-analysis model: the unrestricted FIRC model generally performs better when the average number of in-bandwidth observations is below 120, and the meta-analysis model performs better when it is above 120. We apply our models to a high school exit exam policy in Massachusetts that required students who passed the exit exam but were still determined to be nonproficient to complete an "Education Proficiency Plan" (EPP). We find that the EPP policy had a positive local average treatment effect, on average across sites, on whether students completed a math course in their senior year, but that the impact varied enough that a third of schools could have had a negative effect.
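The meta-analytic route to cross-site treatment effect variance can be illustrated with the standard DerSimonian-Laird estimator applied to site-level effect estimates and their standard errors. All numbers below are simulated for illustration; the paper's Meta RDD model is more elaborate than this sketch.

```python
# DerSimonian-Laird sketch: estimate the cross-site effect variance (tau^2)
# from simulated site-level estimates and standard errors. All values are
# illustrative assumptions, not the paper's actual model or data.
import numpy as np

rng = np.random.default_rng(1)
k = 40                                        # number of sites
true_effects = 0.2 + rng.normal(0, 0.15, k)   # assumed cross-site SD of 0.15
se = rng.uniform(0.05, 0.15, k)               # site-level standard errors
est = true_effects + rng.normal(0, se)        # noisy site-level estimates

w = 1 / se**2                                  # inverse-variance weights
theta_bar = np.sum(w * est) / np.sum(w)        # fixed-effect pooled mean
Q = np.sum(w * (est - theta_bar) ** 2)         # Cochran's Q statistic
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
print(round(tau2**0.5, 3))  # estimated cross-site effect SD
```

When `Q` exceeds its null expectation of `k - 1`, the excess heterogeneity is attributed to genuine cross-site variation in effects, which is the quantity the Meta RDD model targets.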
Since 2010, most US states have aligned their education standards by adopting the Common Core State Standards (CCSS) for math and English Language Arts. The CCSS did not target other subjects such as science and social studies. We estimate spillovers of the CCSS on student achievement in non-targeted subjects using models with state and year fixed effects. Using student achievement data from the NAEP, we show that the CCSS had a negative effect on student achievement in non-targeted subjects. This negative effect is largest for underprivileged students, exacerbating racial and socioeconomic achievement gaps. Using teacher surveys, we show that the CCSS caused a reduction in instructional focus on non-targeted subjects.