
Methodology, measurement and data

Reagan Mozer, Luke W. Miratrix, Jackie Eunjung Relyea, James S. Kim.

In a randomized trial that collects text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by human raters. An impact analysis can then be conducted to compare treatment and control groups, using the hand-coded scores as a measured outcome. This process is both time- and labor-intensive, which creates a persistent barrier for large-scale assessments of text. Furthermore, enriching one’s understanding of a found impact on text outcomes via secondary analyses can be difficult without additional scoring efforts. Machine-based text analytic and data mining tools offer one potential avenue to help facilitate research in this domain. For instance, we could augment a traditional impact analysis that examines a single human-coded outcome with a suite of automatically generated secondary outcomes. By analyzing impacts across a wide array of text-based features, we can then explore what an overall change signifies in terms of how the text has evolved due to treatment. In this paper, we propose several different methods for supplementary analysis in this spirit. We then present a case study of using these methods to enrich an evaluation of a classroom intervention on young children’s writing. We argue that our rich array of findings moves us from “it worked” to “it worked because” by revealing how observed improvements in writing were likely due, in part, to the students having learned to marshal evidence and speak with more authority. Relying exclusively on human scoring, by contrast, is a lost opportunity.

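A minimal sketch of the kind of supplementary analysis proposed here, assuming an illustrative feature set and a simple multiplicity correction rather than the authors' implementation:

```python
# A sketch, not the authors' implementation: compute a suite of automatically
# generated text features and test each for a treatment-control difference,
# with a simple Bonferroni correction for the many secondary outcomes.
# The feature set below is an illustrative assumption.
import numpy as np
from scipy import stats

def text_features(doc):
    """Cheap, automatically computable features of one document."""
    words = doc.split()
    return {
        "n_words": len(words),
        "mean_word_length": float(np.mean([len(w) for w in words])) if words else 0.0,
        "n_sentences": doc.count(".") + doc.count("!") + doc.count("?"),
        "type_token_ratio": len(set(words)) / len(words) if words else 0.0,
    }

def supplementary_impact_analysis(docs, treated):
    """Welch t-test of treated vs. control documents on each automatic feature."""
    feats = [text_features(d) for d in docs]
    treated = np.asarray(treated, dtype=bool)
    results = {}
    for name in feats[0]:
        x = np.array([f[name] for f in feats], dtype=float)
        _, p = stats.ttest_ind(x[treated], x[~treated], equal_var=False)
        results[name] = {"diff": x[treated].mean() - x[~treated].mean(),
                         "p_adj": min(1.0, p * len(feats[0]))}
    return results

docs = [
    "The evidence shows that sharks are apex predators because they hunt together.",
    "Research suggests sharks matter: they keep the ocean food web in balance.",
    "I like sharks. They are big.",
    "Sharks are cool and they swim fast.",
]
print(supplementary_impact_analysis(docs, treated=[1, 1, 0, 0]))
```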


Julie Cohen, Anandita Krishnamachari, Vivian C. Wong.

Many novice teachers learn to teach “on the job,” leading to burnout and attrition among teachers and negative outcomes for students in the long term. Pre-service teacher education is tasked with optimizing teacher readiness, but there is a lack of causal evidence about effective ways to prepare new teachers. In this paper, we use a mixed reality simulation platform to evaluate the causal effects and robustness of an individualized, directive coaching model for candidates enrolled in a university-based teacher education program, as well as for undergraduates considering teaching as a profession. Across five conceptual replication studies, we find that targeted, directive coaching significantly improves candidates’ instructional performance during simulated classroom sessions, and that coaching effects are robust across different teaching tasks, study timing, and modes of delivery. However, coaching effects are smaller for a sub-population of participants not formally enrolled in a teacher preparation program. These participants differed from teacher candidates in multiple ways, including their demographic characteristics and their prior experiences learning about instructional methods. We highlight implications for research and practice.



Matthew A. Lenard, Mikko Silliman.

We study the effects of informal social interactions on academic achievement and behavior using idiosyncratic variation in peer groups stemming from changes in bus routes across elementary, middle, and high school. In early grades, a one standard-deviation change in the value-added of same-grade bus peers corresponds to a 0.01 SD change in academic performance and a 0.03 SD change in behavior; by high school, these magnitudes grow to 0.04 SD and 0.06 SD. These findings suggest that student interactions outside the classroom—especially in adolescence—may be an important factor in the education production function.

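One way to picture the estimation this implies, as a rough sketch with hypothetical data and a bare OLS specification rather than the paper's model:

```python
# A sketch under stated assumptions (hypothetical data, a bare-bones OLS rather than
# the paper's specification): regress a student's outcome, in standard-deviation
# units, on the standardized leave-one-out mean value-added of same-grade bus peers.
import numpy as np

def loo_peer_mean(values, group_ids):
    """Leave-one-out mean of `values` within each peer group (e.g., route x grade)."""
    values, group_ids = np.asarray(values, float), np.asarray(group_ids)
    out = np.full_like(values, np.nan)
    for g in np.unique(group_ids):
        idx = np.where(group_ids == g)[0]
        if len(idx) > 1:
            out[idx] = (values[idx].sum() - values[idx]) / (len(idx) - 1)
    return out

def peer_effect(outcome, own_value_added, group_ids):
    """OLS slope of the outcome on standardized leave-one-out peer value-added."""
    x = loo_peer_mean(own_value_added, group_ids)
    keep = ~np.isnan(x)
    z = (x[keep] - x[keep].mean()) / x[keep].std()
    X = np.column_stack([np.ones(keep.sum()), z])
    beta, *_ = np.linalg.lstsq(X, np.asarray(outcome, float)[keep], rcond=None)
    return beta[1]  # change in the outcome per 1 SD change in peer value-added

# Hypothetical toy data: 60 students spread over three bus-route-by-grade groups.
rng = np.random.default_rng(0)
groups = np.repeat([1, 2, 3], 20)
va = rng.normal(size=60)
y = 0.03 * loo_peer_mean(va, groups) + rng.normal(size=60)
print(peer_effect(y, va, groups))
```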


Luke Keele, Matthew Lenard, Lindsay Page.

In education settings, treatments are often non-randomly assigned to clusters, such as schools or classrooms, while outcomes are measured for students. This research design is called the clustered observational study (COS). We examine the consequences of common support violations in the COS context. Common support violations occur when the covariate distributions of treated and control units do not overlap. Such violations are likely to occur in a COS, especially with a small number of treated clusters. One common technique for dealing with common support violations is trimming treated units. We demonstrate how this practice can yield nonsensical results in some COSs. More specifically, we show how trimming the data can result in an uninterpretable estimand. We use data on Catholic schools to illustrate concepts throughout.

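The generic trimming practice under scrutiny can be sketched as follows, with hypothetical cluster-level covariates and a plain logistic propensity model standing in for the real design:

```python
# A sketch of the generic practice the paper examines, with hypothetical cluster-level
# covariates and a plain logistic propensity model (not the authors' data or code):
# estimate propensity scores, then trim units outside the region of common support.
import numpy as np
from sklearn.linear_model import LogisticRegression

def trim_to_common_support(X, treated):
    """Keep only units whose propensity score lies in the overlap of the treated
    and control score ranges; return the mask and the scores."""
    treated = np.asarray(treated, dtype=bool)
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    lo = max(ps[treated].min(), ps[~treated].min())
    hi = min(ps[treated].max(), ps[~treated].max())
    return (ps >= lo) & (ps <= hi), ps

# Hypothetical cluster-level data: 10 treated and 40 control schools, two covariates,
# with the treated schools shifted so that overlap is only partial.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(1.5, 1, (10, 2)), rng.normal(0.0, 1, (40, 2))])
t = np.array([1] * 10 + [0] * 40)
keep, ps = trim_to_common_support(X, t)
print(f"kept {keep[t == 1].sum()} of 10 treated clusters after trimming")
# The paper's caution: once treated clusters are dropped, the estimand refers only to
# the retained clusters, which with few treated clusters can become uninterpretable.
```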


Michael Gilraine, Jeffrey Penney.

An administrative rule allowed students who failed an exam to retake it shortly after, triggering strong “teach to the test” incentives to raise these students' test scores for the retake. We develop a model that accounts for truncation and find that these students score 0.14 standard deviations higher on the retest. Using a regression discontinuity design, we estimate that thirty percent of these gains persist to the following year. These results provide evidence that test-focused instruction, or “cramming,” raises contemporaneous performance, but a large portion of these gains fades out. Our findings highlight that persistence should be accounted for when comparing educational interventions.

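A minimal sketch of a local-linear regression discontinuity estimate of this kind, with a hypothetical running variable, cutoff, and bandwidth rather than the paper's specification:

```python
# A sketch of a local-linear regression discontinuity estimate: compare next-year
# scores for students just below vs. just above the pass cutoff on the original exam.
# Variable names, the cutoff, and the bandwidth are illustrative assumptions.
import numpy as np

def rd_estimate(running, outcome, cutoff=0.0, bandwidth=0.5):
    """Fit intercept + slope on each side of the cutoff within the bandwidth and
    return the jump in the outcome at the cutoff."""
    r = np.asarray(running, float) - cutoff
    y = np.asarray(outcome, float)
    inside = np.abs(r) <= bandwidth
    at_cutoff = {}
    for side, mask in [("below", inside & (r < 0)), ("above", inside & (r >= 0))]:
        X = np.column_stack([np.ones(mask.sum()), r[mask]])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        at_cutoff[side] = beta[0]  # predicted outcome at the cutoff from this side
    return at_cutoff["below"] - at_cutoff["above"]  # effect of failing (retake eligibility)

# Hypothetical data: exam score relative to the pass mark and next year's test score,
# built so that roughly 0.04 SD of the retest gain persists below the cutoff.
rng = np.random.default_rng(2)
score = rng.uniform(-1, 1, 5000)
next_year = 0.2 * score + 0.04 * (score < 0) + rng.normal(0, 1, 5000)
print(rd_estimate(score, next_year))
```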


Ishtiaque Fazlul, Todd R. Jones, Jonathan Smith.

Millions of high school students who take an Advanced Placement (AP) course in one of over 30 subjects can earn college credit by performing well on the corresponding AP exam. Using data from four metro-Atlanta public school districts, we find that 15 percent of students’ AP courses do not result in an AP exam. We predict that up to 32 percent of the AP courses that do not result in an AP exam would result in a score of 3 or higher, which generally commands college credit at colleges and universities across the United States. Next, we examine disparities in AP exam-taking rates by demographics and course-taking patterns. Most immediately relevant for policy, we find evidence consistent with a positive impact of school district exam subsidies on AP exam-taking rates. In fact, students on free and reduced-price lunch (FRL) in the districts that provide a higher subsidy to FRL students than to non-FRL students are more likely to take an AP exam than their non-FRL counterparts, after controlling for demographic and academic covariates.



Kelli A. Bird, Benjamin L. Castleman, Zachary Mabel, Yifeng Song.

Colleges have increasingly turned to predictive analytics to target at-risk students for additional support. Most of the predictive analytic applications in higher education are proprietary, with private companies offering little transparency about their underlying models. We address this lack of transparency by systematically examining two important dimensions: (1) different approaches to sample and variable construction and how these affect model accuracy; and (2) how the selection of predictive modeling approaches, ranging from methods many institutional researchers would be familiar with to more complex machine learning methods, impacts model performance and the stability of predicted scores. The relative ranking of students’ predicted probability of completing college varies substantially across modeling approaches. While we observe substantial gains in performance from models trained on a sample structured to represent the typical enrollment spells of students and with a robust set of predictors, we observe similar performance between the simplest and most complex models.

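The model comparison described here can be sketched as follows, using synthetic student records and two stand-in models (a logistic regression and a gradient-boosted classifier); these choices are assumptions for illustration, not the authors' pipeline:

```python
# A sketch, not the authors' pipeline: fit a simple model and a more complex one on
# the same (synthetic) student records, then compare predictive performance and how
# similarly the two models rank students' predicted completion probabilities.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 8))  # stand-ins for credits earned, GPA, enrollment flags, etc.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 0).astype(int)  # completed college
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

simple = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
complex_model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

p_simple = simple.predict_proba(X_te)[:, 1]
p_complex = complex_model.predict_proba(X_te)[:, 1]

print("AUC, simple model :", roc_auc_score(y_te, p_simple))
print("AUC, complex model:", roc_auc_score(y_te, p_complex))
# Stability of the ranking of students across modeling approaches:
print("rank correlation of predicted scores:", spearmanr(p_simple, p_complex)[0])
```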


David M. Houston, Michael B. Henderson, Paul E. Peterson, Martin R. West.

Do Americans hold a consistent set of opinions about their public schools and how to improve them? From 2013 to 2018, over 5,000 unique respondents participated in more than one consecutive iteration of the annual, nationally representative Education Next poll, offering an opportunity to examine individual-level attitude stability on education policy issues over a six-year period. The proportion of participants who provide the same response to the same question over multiple consecutive years greatly exceeds the amount expected to occur by chance alone. We also find that teachers offer more consistent responses than their non-teaching peers. By contrast, we do not observe similar differences in attitude stability between parents of school-age children and their counterparts without children.

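A minimal sketch of the observed-versus-chance comparison behind this claim, using hypothetical panel responses:

```python
# A sketch with hypothetical data: the share of panel respondents who give the same
# answer to a question in consecutive years, versus the agreement rate expected by
# chance if answers were drawn independently from the observed response distributions.
import numpy as np

def observed_vs_chance_stability(year1, year2):
    year1, year2 = np.asarray(year1), np.asarray(year2)
    observed = float(np.mean(year1 == year2))
    # Chance agreement: sum over response options of p_option(year1) * p_option(year2).
    options = np.union1d(year1, year2)
    p1 = np.array([np.mean(year1 == o) for o in options])
    p2 = np.array([np.mean(year2 == o) for o in options])
    return observed, float(p1 @ p2)

# Hypothetical 5-point responses from the same respondents in two consecutive waves,
# where 60 percent of respondents simply repeat their earlier answer.
rng = np.random.default_rng(4)
wave1 = rng.integers(1, 6, size=2000)
wave2 = np.where(rng.random(2000) < 0.6, wave1, rng.integers(1, 6, size=2000))
print(observed_vs_chance_stability(wave1, wave2))
```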


Peter Q. Blair, Papia Debroy, Justin Heck.

Over the past four decades, income inequality grew significantly between workers with bachelor’s degrees and those with high school diplomas (often called “unskilled”). We argue that, rather than being unskilled, these workers are STARs because they are skilled through alternative routes—namely their work experience. Using the skill requirements of a worker’s current job as a proxy for their actual skill, we find that though both groups of workers make transitions to occupations requiring skills similar to those of their previous occupations, workers with bachelor’s degrees have dramatically better access to higher-wage occupations where the skill requirements exceed the workers’ observed skill. This measured opportunity gap offers a fresh explanation of income inequality by degree status and reestablishes the important role of on-the-job training in human capital formation.



Dorottya Demszky, Jing Liu, Zid Mancenido, Julie Cohen, Heather C. Hill, Dan Jurafsky, Tatsunori Hashimoto.

In conversation, uptake happens when a speaker builds on the contribution of their interlocutor by, for example, acknowledging, repeating or reformulating what they have said. In education, teachers' uptake of student contributions has been linked to higher student achievement. Yet measuring and improving teachers' uptake at scale is challenging, as existing methods require expensive annotation by experts. We propose a framework for computationally measuring uptake by (1) releasing a dataset of student-teacher exchanges extracted from US math classroom transcripts annotated for uptake by experts; (2) formalizing uptake as pointwise Jensen-Shannon Divergence (pJSD), estimated via next utterance classification; (3) conducting a linguistically motivated comparison of different unsupervised measures; and (4) correlating these measures with educational outcomes. We find that although repetition captures a significant part of uptake, pJSD outperforms repetition-based baselines, as it is capable of identifying a wider range of uptake phenomena like question answering and reformulation. We apply our uptake measure to three different educational datasets with outcome indicators. Unlike baseline measures, pJSD correlates significantly with instruction quality in all three, providing evidence for its generalizability and for its potential to serve as an automated professional development tool for teachers.

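To make the divergence underlying the proposed measure concrete, the sketch below computes an ordinary Jensen-Shannon divergence between a toy conditional and marginal next-utterance distribution; the paper's pointwise estimator based on next utterance classification differs in its details, so this illustrates the underlying quantity only:

```python
# A conceptual sketch of the Jensen-Shannon divergence underlying the uptake measure.
# The paper estimates a pointwise variant (pJSD) via a next-utterance classifier; the
# distributions below are simplified assumptions used only to illustrate the quantity.
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions (base 2)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def jsd(p, q):
    """Jensen-Shannon divergence: symmetric and bounded in [0, 1] with log base 2."""
    m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy example: a distribution over a teacher's next words conditioned on the student's
# turn, versus the unconditional (marginal) distribution. Higher divergence means the
# teacher's reply is more tied to what the student just said, i.e., higher uptake.
p_given_student = [0.6, 0.3, 0.1, 0.0]
p_marginal = [0.25, 0.25, 0.25, 0.25]
print(jsd(p_given_student, p_marginal))
```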