Methodology, measurement and data
At least sixteen US states have taken steps toward holding teacher preparation programs (TPPs) accountable for teacher value-added to student test scores. Yet it is unclear whether teacher quality differences between TPPs are large enough to make an accountability system worthwhile. Several statistical practices can make differences between TPPs appear larger and more significant than they are. We reanalyze TPP evaluations from six states (New York, Louisiana, Missouri, Washington, Texas, and Florida) using appropriate methods implemented by our new caterpillar command for Stata. Our results show that teacher quality differences between most TPPs are negligible, roughly 0.01 to 0.03 standard deviations in student test scores, even in states where larger differences were reported previously. While ranking all of a state’s TPPs may not be possible or desirable, in some states and subjects we can find a single TPP whose teachers stand out as significantly above or below average. Such exceptional TPPs may reward further study.
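A minimal Python sketch of the kind of display the abstract describes (not the authors' Stata caterpillar command): program-level value-added estimates sorted by size and plotted with confidence intervals, so overlap with zero is easy to see. All estimates and standard errors below are simulated for illustration only.

```python
# Simulated caterpillar plot of TPP value-added estimates with 95% intervals.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_programs = 25
true_effects = rng.normal(0.0, 0.02, n_programs)   # small true differences (SD units)
std_errors = rng.uniform(0.01, 0.04, n_programs)   # sampling error varies by program size
estimates = true_effects + rng.normal(0.0, std_errors)

order = np.argsort(estimates)
x = np.arange(n_programs)
plt.errorbar(x, estimates[order], yerr=1.96 * std_errors[order], fmt="o", capsize=3)
plt.axhline(0.0, linestyle="--")
plt.xlabel("Teacher preparation program (sorted by estimate)")
plt.ylabel("Estimated value-added (student test SD)")
plt.title("Caterpillar plot of simulated TPP estimates")
plt.show()
```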
We examine U.S. children whose parents won the lottery to trace out the effect of financial resources on college attendance. The analysis leverages federal tax and financial aid records and substantial variation in win size and timing. While per-dollar effects are modest, the relationship is weakly concave, with a high upper bound for amounts greatly exceeding college costs. Effects are smaller among low-SES households, not sensitive to how early in adolescence the shock occurs, and not moderated by financial aid crowd-out. The results imply that households derive consumption value from college and household financial constraints alone do not inhibit attendance.
The estimation of test score “gaps” and gap trends plays an important role in monitoring educational inequality. Researchers decompose gaps and gap changes into within- and between-school portions to generate evidence on the role schools play in shaping these inequalities. However, existing decomposition methods assume an equal-interval test scale and are a poor fit to coarsened data such as proficiency categories. This leaves many potential data sources ill-suited for decomposition applications. We develop two decomposition approaches that overcome these limitations: an extension of V, an ordinal gap statistic, and an extension of ordered probit models. Simulations show V decompositions have negligible bias with small within-school samples. Ordered probit decompositions have negligible bias with large within-school samples but more serious bias with small within-school samples. More broadly, our methods enable analysts to (1) decompose the difference between two groups on any ordinal outcome into portions within and between some third categorical variable, and (2) estimate scale-invariant between-group differences that adjust for a categorical covariate.
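A rough sketch of the idea behind the ordered probit approach, using hypothetical simulated data rather than the paper's estimator: regress a coarsened outcome (e.g., a proficiency category) on a group indicator plus school dummies, so the group coefficient is a within-school gap on the latent probit scale, independent of the test's reporting scale.

```python
# Simulated ordered probit "gap" estimate that adjusts for school membership.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n_schools, n_per_school = 20, 50
school = np.repeat(np.arange(n_schools), n_per_school)
group = rng.binomial(1, 0.5, n_schools * n_per_school)      # two student groups
school_effect = rng.normal(0.0, 0.5, n_schools)[school]
latent = 0.3 * group + school_effect + rng.normal(size=school.size)
category = np.digitize(latent, [-0.5, 0.5, 1.5])            # coarsen into 4 ordered levels

exog = pd.get_dummies(pd.Series(school, name="school"), prefix="sch", drop_first=True).astype(float)
exog.insert(0, "group", group)
result = OrderedModel(category, exog, distr="probit").fit(method="bfgs", disp=False)
print("Within-school gap (probit scale):", round(result.params["group"], 3))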
For years, Georgia's HOPE Scholarship program provided full tuition scholarships to high achieving students. State budgetary shortfalls reduced its generosity in 2011. Under the new rules, only students meeting more rigorous merit-based criteria would retain the original scholarship covering full tuition, now called Zell Miller, with other students seeing aid reductions of approximately 15 percent. We exploit the fact that two of the criteria were high school GPA and SAT/ACT score, which students could not manipulate when the change took place. We compare already-enrolled students just above and below these cutoffs, making use of advances in multi-dimensional regression discontinuity, to estimate effects of partial aid loss. We show that, after the changes, aid flowed disproportionately to wealthier students, and find no evidence that the financial aid reduction affected persistence or graduation for these students. The results suggest that high-achieving students, particularly those already in college, may be less price sensitive than their peers.
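A simplified sketch of a discontinuity design with two running variables, using simulated data and one common shortcut (collapsing the two centered running variables into a single binding distance) rather than the authors' multi-dimensional estimator; the cutoffs and bandwidth below are hypothetical.

```python
# Local linear RD on a "binding score" built from two centered running variables.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
gpa = rng.uniform(3.0, 4.0, n)             # hypothetical running variables and cutoffs
sat = rng.uniform(1000, 1600, n)
gpa_cut, sat_cut = 3.7, 1200

dist = np.minimum((gpa - gpa_cut) / 0.3, (sat - sat_cut) / 200)  # standardized binding distance
treated = (dist >= 0).astype(float)                              # retains the full scholarship
persist = 0.70 + 0.00 * treated + 0.05 * dist + rng.normal(0, 0.1, n)  # simulated null effect

bw = 0.25
in_bw = np.abs(dist) <= bw
X = sm.add_constant(np.column_stack([treated, dist, treated * dist])[in_bw])
fit = sm.OLS(persist[in_bw], X).fit()
print("Estimated jump at the cutoff:", round(fit.params[1], 3))
```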
Recent interest in promoting and supporting replication efforts assumes that there is well-established methodological guidance for designing and implementing these studies. However, no such consensus exists in the methodology literature. This article addresses this gap by describing design-based approaches for planning systematic replication studies. Our general approach is derived from the Causal Replication Framework (CRF), which formalizes the assumptions under which replication success can be expected. The assumptions may be understood broadly as replication design requirements and individual study design requirements. Replication failure occurs when one or more CRF assumptions are violated. In design-based approaches to replication, CRF assumptions are systematically tested to evaluate the replicability of effects, as well as to identify sources of effect variation when replication failure is observed. In direct replication designs, replication failure is evidence of bias or incorrect reporting in individual study estimates, while in conceptual replication designs, replication failure occurs because of effect variation due to differences in treatments, outcomes, settings, and participant characteristics. The paper demonstrates how multiple research designs may be combined in systematic replication studies, as well as how diagnostic measures may be used to assess the extent to which CRF assumptions are met in field settings.
Researchers are rarely satisfied to learn only whether an intervention works; they also want to understand why and under what circumstances interventions produce their intended effects. These questions have led to increasing calls for implementation research to be included in high quality studies with strong causal claims. Of critical importance is determining whether an intervention can be delivered with adherence to a standardized protocol, and the extent to which an intervention protocol can be replicated across sessions, sites, and studies. When an intervention protocol is highly standardized and delivered through verbal interactions with participants, a set of natural language processing (NLP) techniques termed semantic similarity can be used to provide quantitative summary measures of how closely intervention sessions adhere to a standardized protocol, as well as how consistently the protocol is replicated across sessions. Given the intense methodological, budgetary, and logistical challenges of conducting implementation research, semantic similarity approaches have the benefit of being low-cost, scalable, and context-agnostic. In this paper, we demonstrate how semantic similarity approaches may be utilized in an experimental evaluation of a coaching protocol on teacher pedagogical skills in a simulated classroom environment. We discuss strengths and limitations of the approach, and the most appropriate contexts for applying this method.
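A minimal sketch of the general idea: score how closely each session transcript tracks a standardized protocol using the cosine similarity of text vectors. TF-IDF vectors are used here only for simplicity; the paper's semantic similarity measures may rely on different NLP representations, and the texts below are toy placeholders.

```python
# Cosine similarity between a protocol description and session transcripts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

protocol = "greet the teacher, review the goal, model the strategy, practice, give feedback"
sessions = [
    "greeted the teacher, reviewed the goal, modeled the strategy, practiced, gave feedback",
    "talked about the weather and upcoming holidays for most of the session",
]

vectors = TfidfVectorizer().fit_transform([protocol] + sessions)
adherence = cosine_similarity(vectors[0], vectors[1:]).ravel()   # protocol vs. each session
print({f"session_{i + 1}": round(score, 2) for i, score in enumerate(adherence)})
```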
Virtual charter schools provide full-time, tuition-free K-12 education through internet-based instruction. Although virtual schools offer a personalized, content-appropriate experience, most research suggests these schools are negatively associated with achievement. Few studies account for differential rates of student mobility, which may produce biased estimates if mobility is jointly associated with virtual school enrollment and subsequent test scores. We account for student mobility in an evaluation of a single, large, anonymous virtual charter school. We estimate treatment effects of the virtual school on student achievement using a hybrid of exact and nearest-neighbor propensity score matching. Relative to their matched peers, we estimate that virtual students produce similar ELA scores and significantly worse math scores after one year. Among a limited sample of students observed for four years, we estimate that virtual students ultimately produce higher ELA scores and similar math scores relative to matched peers. We argue these findings are more reliable indicators of the independent effect of virtual schooling on student achievement because the match on student mobility is a proxy for otherwise unobservable negative selection factors.
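A stylized sketch, on fabricated data, of hybrid matching in the spirit described above: each virtual-school student is matched to a comparison student in the same exact stratum (here, grade and a prior-mobility flag, both hypothetical) with the nearest propensity score. This is an illustration of the technique, not the study's matching specification.

```python
# Hybrid exact + nearest-neighbor propensity score matching on simulated students.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 4000
df = pd.DataFrame({
    "grade": rng.integers(3, 9, n),
    "moved_last_year": rng.binomial(1, 0.2, n),
    "prior_score": rng.normal(0, 1, n),
})
logit = -1.5 + 0.8 * df.moved_last_year + 0.3 * df.prior_score
df["virtual"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

covariates = ["grade", "moved_last_year", "prior_score"]
ps_model = LogisticRegression().fit(df[covariates], df.virtual)
df["pscore"] = ps_model.predict_proba(df[covariates])[:, 1]

matches = []
for _, cell in df.groupby(["grade", "moved_last_year"]):   # exact part of the match
    treated, control = cell[cell.virtual == 1], cell[cell.virtual == 0]
    if treated.empty or control.empty:
        continue
    for _, t in treated.iterrows():                        # nearest propensity score, with replacement
        matches.append((t.name, (control.pscore - t.pscore).abs().idxmin()))
print(f"Matched {len(matches)} virtual students to comparison students.")
```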
The Community Eligibility Provision (CEP) is a policy change to the federally-administered National School Lunch Program that allows schools serving low-income populations to classify all students as eligible for free meals, regardless of individual circumstances. This has implications for the use of free and reduced-price meal (FRM) data to proxy for student disadvantage in education research and policy applications, which is a common practice. We document empirically how the CEP has affected the value of FRM eligibility as a proxy for student disadvantage. At the individual student level, we show that there is essentially no effect of the CEP. However, the CEP does meaningfully change the information conveyed by the share of FRM-eligible students in a school. It is this latter measure that is most relevant for policy uses of FRM data.
Note: Portions of this paper were previously circulated under the title “Using Free Meal and Direct Certification Data to Proxy for Student Disadvantage in the Era of the Community Eligibility Provision.” We have since split the original paper into two parts. This is the first part.
International assessments are important to benchmark the quality of education across countries. However, on low-stakes tests, students’ incentives to invest their maximum effort may be minimal. Research stresses that ignoring students’ effort when interpreting results from low-stakes assessments can lead to biased interpretations of test performance across groups of examinees. We use data from the Programme for International Student Assessment (PISA), a low-stakes test, to analyze the extent to which student effort helps to explain test score heterogeneity across countries and gender groups. Our results highlight the importance of accounting for differences in student effort to understand cross-country heterogeneity in performance and variations in gender achievement gaps across nations. We find that, once we account for differential student effort across gender groups, the estimated gender achievement gap in math and science could be up to 12 and 6 times wider, respectively, and up to 49 percent narrower in reading, in favor of boys. In math and science, the gap widens in most countries, even among some of the top 20 most gender-equal countries. Altogether, our effort measures on average explain between 36 and 40 percent of the cross-country variation in test scores.
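A schematic sketch of the comparison this abstract describes, using fabricated data and a hypothetical effort proxy: estimate the gender gap with and without controlling for test-taking effort and see how the estimated gap shifts. It illustrates the logic only, not the paper's effort measures or models.

```python
# Gender gap in a simulated math score, before and after adjusting for effort.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 10000
df = pd.DataFrame({"boy": rng.binomial(1, 0.5, n)})
df["effort"] = rng.normal(0, 1, n) - 0.3 * df.boy            # boys exert less effort on average
df["math"] = 0.10 * df.boy + 0.25 * df.effort + rng.normal(0, 1, n)

raw = smf.ols("math ~ boy", data=df).fit()
adjusted = smf.ols("math ~ boy + effort", data=df).fit()
print("Raw gender gap:     ", round(raw.params["boy"], 3))
print("Effort-adjusted gap:", round(adjusted.params["boy"], 3))
```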