Methodology, measurement and data

Brendan Bartanen, Aliza N. Husain, David D. Liebowitz.

School principals are viewed as a critical lever for improving student outcomes, but important methodological questions remain about how to measure principals' effects. We propose a framework for measuring principals' contributions to student outcomes and apply it empirically using data from Tennessee, New York City, and Oregon. We find that using contemporaneous student outcomes to assess principal performance is flawed: value-added models misattribute to principals changes in student performance caused by factors over which principals have minimal control. Further, little to none of the variation in average student test scores or attendance is explained by persistent effectiveness differences between principals.
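
A minimal sketch of the measurement problem on synthetic data (assumed magnitudes, not the authors' specification): a naive principal value-added estimate built from school-year mean outcomes absorbs school-year shocks the principal does not control, so its variance overstates persistent effectiveness differences.

```python
# Illustrative simulation (assumed magnitudes, not the authors' model):
# naive principal "value-added" from school-year mean outcomes absorbs
# school-year shocks outside the principal's control.
import numpy as np

rng = np.random.default_rng(0)
n_principals, n_years, n_students = 50, 4, 100

true_effect = rng.normal(0, 0.05, n_principals)   # small persistent principal effect
estimates = np.empty(n_principals)
for p in range(n_principals):
    year_means = []
    for _ in range(n_years):
        shock = rng.normal(0, 0.15)               # school-year shock, not the principal's doing
        scores = true_effect[p] + shock + rng.normal(0, 1, n_students)
        year_means.append(scores.mean())
    estimates[p] = np.mean(year_means)            # naive value-added estimate

print("variance of true effects:    ", true_effect.var().round(4))
print("variance of naive estimates: ", estimates.var().round(4))  # inflated by shocks + noise
```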

Jeremy Singer.

After near-universal school closures in the United States at the start of the pandemic, lawmakers and educational leaders made plans for when and how to reopen schools for the 2020-21 school year. Educational researchers quickly assessed how a range of public health, political, and demographic factors were associated with school reopening decisions and parent preferences for in-person and remote learning. I review this body of literature to highlight what we can learn from its findings, limitations, and influence on public discourse. Studies consistently highlighted the influence of partisanship, teachers’ unions, and demographics, with mixed findings on COVID-19 rates. The literature offers useful insights but calls for more evidence, and it highlights both the benefits and the limitations of rapid research with large-scale quantitative data.

Joshua B. Gilbert, James S. Kim, Luke W. Miratrix.

Analyses that reveal how treatment effects vary allow researchers, practitioners, and policymakers to better understand the efficacy of educational interventions. In practice, however, standard statistical methods for addressing Heterogeneous Treatment Effects (HTE) fail to capture the HTE that may exist within outcome measures. In this study, we present a novel application of the Explanatory Item Response Model (EIRM) for assessing what we term “item-level” HTE (IL-HTE), in which a unique treatment effect is estimated for each item in an assessment. Simulation results reveal that when IL-HTE are present but ignored in the model, standard errors can be underestimated and false positive rates can increase. We then apply the EIRM to assess the impact of a literacy intervention focused on promoting transfer in reading comprehension, using a digital formative assessment delivered online to approximately 8,000 third-grade students. We demonstrate that allowing for IL-HTE can reveal treatment effects at the item level that a null average treatment effect would mask; the EIRM can thus provide fine-grained information for researchers and policymakers on the potentially heterogeneous causal effects of educational interventions.
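
A hedged sketch of the masking phenomenon, using a simple logistic data simulation rather than the authors' EIRM (a mixed-effects item response model): item-specific treatment effects are large but offsetting, so a sum-score analysis shows nothing while item-level contrasts do. All magnitudes are assumptions.

```python
# Illustrative simulation of item-level HTE (IL-HTE): offsetting item effects
# produce a null average effect that a sum-score analysis cannot distinguish
# from no effect at all. Not the authors' EIRM specification.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_items = 2000, 10
treat = rng.integers(0, 2, n)
ability = rng.normal(0, 1, n)
item_easiness = rng.normal(0, 0.5, n_items)
item_effect = np.linspace(-0.6, 0.6, n_items)     # heterogeneous by item, mean ~0

logits = (ability[:, None] + item_easiness[None, :]
          + treat[:, None] * item_effect[None, :])
responses = (rng.random((n, n_items)) < 1 / (1 + np.exp(-logits))).astype(float)

# total-score analysis: the offsetting item effects cancel out
total = responses.sum(axis=1)
print("sum-score p-value:", stats.ttest_ind(total[treat == 1], total[treat == 0]).pvalue)

# item-level analysis: recovers nonzero effects on individual items
for j in (0, n_items - 1):                        # most negatively / positively affected items
    p = stats.ttest_ind(responses[treat == 1, j], responses[treat == 0, j]).pvalue
    print(f"item {j} p-value: {p:.4f}")
```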

Michael Dinerstein, Isaac M. Opper.

What happens when employers would like to screen their employees but only observe a subset of output? We specify a model in which heterogeneous employees respond by producing more of the observed output at the expense of the unobserved output. Though this substitution distorts output in the short term, we derive three sufficient conditions under which the heterogeneous response improves screening efficiency: 1) all employees place similar value on staying in their current role; 2) the employees' utility functions satisfy a variation of the traditional single-crossing condition; 3) employer and worker preferences over output are similar. We then assess these predictions empirically by studying a change to teacher tenure policy in New York City, which increased the role that a single measure -- test score value-added -- played in tenure decisions. We show that in response to the policy, teachers increased test score value-added and decreased output that did not enter the tenure decision. The increase in test score value-added was largest for the teachers with more ability to improve students' untargeted outcomes, increasing their likelihood of getting tenure. We find that the endogenous response to the policy announcement reduced the screening efficiency gap -- defined as the reduction of screening efficiency stemming from the partial observability of output -- by 28%, effectively shifting some of the cost of partial observability from the post-tenure period to the pre-tenure period.
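
An illustrative simulation of the core mechanism, with assumed functional forms that are not the paper's structural model: workers shift effort toward the observed output when screening stakes rise, and if higher-ability workers shift more (a single-crossing-like response), screening on the distorted signal can still improve.

```python
# Illustrative sketch (assumed functional forms, not the paper's model):
# screening on a partially observed output before vs. after workers game it.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
ability = rng.normal(1.0, 0.3, n)            # total productivity across both outputs
noise = rng.normal(0, 0.3, n)

y1_low = 0.5 * ability + noise               # low stakes: effort split evenly
# high stakes: substitution toward observed output, stronger for high-ability
# workers (a single-crossing-like response), which sharpens the signal
y1_high = 0.7 * ability + 0.3 * (ability - 1.0) + noise

for name, y1 in [("low stakes ", y1_low), ("high stakes", y1_high)]:
    tenured = y1 > np.quantile(y1, 0.2)      # deny tenure to bottom 20% of the signal
    print(name, "mean ability of tenured:", round(ability[tenured].mean(), 3))
```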

Peter M. Steiner, Patrick Sheehan, Vivian C. Wong.

Given recent evidence challenging the replicability of results in the social and behavioral sciences, critical questions have been raised about appropriate measures for determining replication success when comparing effect estimates across studies. At issue is the fact that conclusions about replication success often depend on the measure used to evaluate correspondence in results. Despite the importance of choosing an appropriate measure, there is still no widespread agreement about which measures should be used. This paper addresses these questions by formally describing the most commonly used measures for assessing replication success, and by comparing their performance in different contexts according to their replication probabilities – that is, the probability of obtaining replication success given study-specific settings. The measures may be characterized broadly as conclusion-based approaches, which assess the congruence of two independent studies’ conclusions about the presence of an effect, and distance-based approaches, which test for a significant difference or equivalence between two effect estimates. We also introduce a new measure for assessing replication success, the correspondence test, which combines a difference test and an equivalence test in the same framework. To help researchers plan prospective replication efforts, we provide closed-form expressions for power calculations that can be used to determine the minimum detectable effect size (and thus the sample sizes) for each study so that a predetermined minimum replication probability can be achieved. Finally, we use a replication dataset from the Open Science Collaboration (2015) to demonstrate the extent to which conclusions about replication success depend on the correspondence measure selected.
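
A minimal sketch of the measure families described above, for two independent effect estimates reported as (b, se). The equivalence margin `delta` and the alpha level are illustrative assumptions; the authors' correspondence test combines the difference and equivalence pieces within one framework.

```python
# Minimal sketch of common replication-success measures for two independent
# effect estimates (b, se). The margin `delta` and alpha are assumptions.
import numpy as np
from scipy.stats import norm

def replication_measures(b1, se1, b2, se2, alpha=0.05, delta=0.2):
    # conclusion-based: do both studies reach the same significance conclusion?
    z_crit = norm.ppf(1 - alpha / 2)
    agree = (abs(b1 / se1) > z_crit) == (abs(b2 / se2) > z_crit)

    # distance-based: test for a significant difference between the estimates
    se_diff = np.sqrt(se1**2 + se2**2)
    differ = abs((b1 - b2) / se_diff) > z_crit

    # equivalence (TOST): both one-sided tests must reject |b1 - b2| >= delta
    z_lo = (b1 - b2 + delta) / se_diff
    z_hi = (b1 - b2 - delta) / se_diff
    equivalent = (z_lo > norm.ppf(1 - alpha)) and (z_hi < -norm.ppf(1 - alpha))

    return {"conclusions_agree": agree, "estimates_differ": differ,
            "estimates_equivalent": equivalent}

# two studies can agree on significance yet be neither significantly
# different nor demonstrably equivalent -- the measures can disagree
print(replication_measures(b1=0.30, se1=0.08, b2=0.22, se2=0.09))
```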

Ariana Audisio, Rebecca Taylor-Perryman, Tim Tasker, Matthew P. Steinberg.

Teachers are the most important school-specific factor in student learning. Yet little evidence links teacher professional learning programs, and the various strategies or components that comprise them, to student achievement. In this paper, we examine a teacher fellowship model for professional learning designed and implemented by Leading Educators, a national nonprofit organization that aims to bridge research and practice to improve instructional quality and accelerate learning across school systems. During the 2015-16 and 2016-17 school years, Leading Educators conducted its fellowship program for teachers and school leaders to provide educators with ongoing, collaborative, job-embedded professional development and to improve student achievement. Relying on quasi-experimental methods, we find that a school’s participation in the fellowship model increased student proficiency rates in math and English language arts on state achievement exams. Further, student achievement benefited from more sustained teacher participation in the fellowship model, and the impact on student achievement varied with the share of a school’s teachers who participated and with the extent to which teachers independently selected into the fellowship model or were appointed to participate by school leaders. Taken together, the findings should inform professional learning organizations, schools, and policymakers on the design, implementation, and impact of teacher professional learning.
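
For intuition only, a two-period difference-in-differences comparison on synthetic proficiency rates. The abstract does not specify the estimator, so this design and all magnitudes are assumptions, not the paper's method.

```python
# Hedged illustration: two-period difference-in-differences on synthetic
# proficiency rates for fellowship vs. non-fellowship schools. The paper's
# exact quasi-experimental estimator may differ.
import numpy as np

rng = np.random.default_rng(3)
n_schools = 200
fellow = rng.integers(0, 2, n_schools).astype(bool)

pre = rng.normal(0.50, 0.10, n_schools)                          # baseline proficiency rate
post = pre + rng.normal(0.02, 0.05, n_schools) + 0.04 * fellow   # common trend + assumed effect

did = ((post[fellow].mean() - pre[fellow].mean())
       - (post[~fellow].mean() - pre[~fellow].mean()))
print(f"difference-in-differences estimate: {did:.3f}")          # ~0.04 by construction
```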

Dominique J. Baker, Karly S. Ford, Samantha Viano, Marc P. Johnston-Guerrero.

How scholars name different racial groups carries powerful implications for understanding what researchers study. We explored how education researchers used racial terminology in recently published, high-profile, peer-reviewed studies. Our sample included all original empirical studies published in the non-review AERA journals from 2009 to 2019. We found that two-thirds of articles used at least one racial category term, with an increase from about half to almost three-quarters of published studies between 2009 and 2019. Other trends include the growing popularity of the term Black, the emergence of gender-expansive terms such as Latinx, the prevalence of the term Hispanic in quantitative studies, and the paucity of studies using terms that connote missing race data or that describe Indigenous and multiracial peoples.

Todd Pugatch, Elizabeth Schroeder, Nicholas Wilson.

We design a commitment contract for college students, "Study More Tomorrow," and conduct a randomized controlled trial testing a model of its demand. The contract commits students to attend peer tutoring if their midterm grade falls below a pre-specified threshold and, in contrast to other commitment devices for studying tested in the literature, carries a financial penalty for noncompliance. We find demand for the contract, with take-up of 10% among students randomly assigned a contract offer. Contract demand is not higher among students randomly assigned a lower contract price, plausibly because a lower contract price also means a lower commitment benefit. Students with the highest perceived utility from peer tutoring show greater demand for commitment, consistent with our model. Contrary to the model's predictions, we fail to find evidence of increased demand among present-biased students or among those with a higher self-reported tendency to procrastinate. Our results show that college students are willing to pay for study commitment devices; the sources of this demand, however, do not align fully with behavioral theories.
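
A toy beta-delta (quasi-hyperbolic) example of why present-biased students should, in theory, value commitment. The payoff numbers are illustrative assumptions, not the paper's calibration.

```python
# Toy beta-delta example (illustrative numbers, not the paper's calibration):
# today's self wants future study; a present-biased future self may not follow
# through, which is the gap a commitment contract is meant to close.
cost_now, benefit_later, delta = 4.0, 10.0, 1.0

def follows_through(beta):
    # tomorrow's self studies only if the discounted benefit beats the immediate cost
    return beta * delta * benefit_later > cost_now

for beta in (1.0, 0.3):                      # time-consistent vs. present-biased
    plans = benefit_later - cost_now > 0     # today's self: both payoffs lie in the future, so beta cancels
    print(f"beta={beta}: plans to study: {plans}, follows through: {follows_through(beta)}")
# when plans and follow-through diverge (beta=0.3), a sophisticated student
# should be willing to pay for a contract that penalizes skipping tutoring
```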

Daniel Rodriguez-Segura, Beth E. Schueler.

A significant share of education and development research uses data collected by workers called “enumerators.” It is well documented that “enumerator effects”—inconsistent practices between the individual people who administer measurement tools—can be a key source of error in survey data collection. It is less well understood, however, whether the same problem afflicts academic assessments or performance tasks. We leverage a remote, phone-based mathematics assessment of primary school students in Kenya and a survey of their parents, with enumerators randomized to students to study the presence of enumerator effects. We find that both the academic assessment and the survey were prone to enumerator effects, and we use simulation to show that these effects were large enough to lead to spurious results at a troubling rate in the context of impact evaluation. We therefore recommend that assessment administrators randomize enumerators at the student level and focus on training enumerators to minimize bias.
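
A Monte Carlo sketch of why this matters for impact evaluation, with assumed effect magnitudes: when each study arm is served by different enumerators rather than enumerators being randomized across students, enumerator-specific scoring bias shows up as a spurious "treatment effect" far more often than the nominal 5% rate.

```python
# Sketch of the enumerator-effects problem (assumed magnitudes): if study arms
# get different enumerators, enumerator noise masquerades as a treatment effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, n_enum, reps, alpha = 400, 8, 1000, 0.05

def false_positive_rate(randomize_across_arms):
    rejections = 0
    for _ in range(reps):
        enum_effect = rng.normal(0, 0.3, n_enum)       # enumerator-specific scoring bias
        treat = np.repeat([0, 1], n // 2)
        if randomize_across_arms:
            enum = rng.integers(0, n_enum, n)          # enumerators randomized across students
        else:                                          # arms served by disjoint enumerator sets
            enum = np.where(treat == 1, rng.integers(0, n_enum // 2, n),
                            rng.integers(n_enum // 2, n_enum, n))
        score = enum_effect[enum] + rng.normal(0, 1, n)  # true treatment effect is zero
        p = stats.ttest_ind(score[treat == 1], score[treat == 0]).pvalue
        rejections += p < alpha
    return rejections / reps

print("randomized enumerators:  ", false_positive_rate(True))    # ~0.05
print("arm-assigned enumerators:", false_positive_rate(False))   # far above 0.05
```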

Edward J. Kim.

This study introduces the signal-weighted teacher value-added model (SW VAM), a value-added model that weights student-level observations based on each student’s capacity to signal their assigned teacher’s quality. Specifically, the model leverages the repeated appearance of a given student to estimate student reliability and sensitivity parameters; traditional VAMs represent the special case in which all students exhibit identical parameters. Simulation results indicate that SW VAMs outperform traditional VAMs at recovering true teacher quality when the assumption of student parameter invariance is met, but have mixed performance under alternative assumptions about the true data-generating process, depending on data availability and the choice of priors. Evidence from an empirical data set suggests that SW VAM and traditional VAM results may disagree meaningfully in practice. These findings suggest that SW VAMs have promising potential to recover true teacher value-added in practical applications and, as a version of value-added models that attends to student differences, can be used to test the validity of traditional VAM assumptions in empirical contexts.
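
A stripped-down sketch of the weighting intuition, not the paper's estimator: if some students' residuals carry more signal about their teacher, precision-style weights recover teacher quality better than the equal-weight average a traditional VAM implies. All magnitudes are assumptions.

```python
# Minimal sketch of the signal-weighting idea (assumed magnitudes, not the
# paper's full model): a traditional VAM is the equal-weights special case.
import numpy as np

rng = np.random.default_rng(5)
n_teachers, n_students = 30, 25

teacher_q = rng.normal(0, 0.2, n_teachers)                     # true teacher quality
reliability = rng.uniform(0.2, 1.0, (n_teachers, n_students))  # per-student signal quality
noise_sd = 1.0 / np.sqrt(reliability)                          # low reliability = noisy signal
resid = teacher_q[:, None] + rng.normal(0, noise_sd)           # student-level score residuals

trad = resid.mean(axis=1)                                      # traditional VAM: equal weights
w = reliability / reliability.sum(axis=1, keepdims=True)       # precision-style weights
sw = (w * resid).sum(axis=1)                                   # signal-weighted estimate

print("corr(traditional, truth):    ", np.corrcoef(trad, teacher_q)[0, 1].round(3))
print("corr(signal-weighted, truth):", np.corrcoef(sw, teacher_q)[0, 1].round(3))
```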
