- Jing Liu
Providing consistent, individualized feedback to teachers is essential for improving instruction but can be prohibitively resource-intensive in most educational contexts. We develop an automated tool based on natural language processing to give teachers feedback on their uptake of student contributions, a high-leverage teaching practice that supports dialogic instruction and makes students feel heard. We conduct a randomized controlled trial as part of an online computer science course, Code in Place (n=1,136 instructors), to evaluate the effectiveness of the feedback tool. We find that the tool improves instructors’ uptake of student contributions by 24% and present suggestive evidence that the tool also improves students’ satisfaction with the course. These results demonstrate the promise of our tool to complement existing efforts in teachers’ professional development.
In conversation, uptake happens when a speaker builds on the contribution of their interlocutor by, for example, acknowledging, repeating, or reformulating what they have said. In education, teachers' uptake of student contributions has been linked to higher student achievement. Yet measuring and improving teachers' uptake at scale is challenging, as existing methods require expensive annotation by experts. We propose a framework for computationally measuring uptake by (1) releasing a dataset of student-teacher exchanges extracted from US math classroom transcripts annotated for uptake by experts; (2) formalizing uptake as pointwise Jensen-Shannon Divergence (pJSD), estimated via next utterance classification; (3) conducting a linguistically motivated comparison of different unsupervised measures; and (4) correlating these measures with educational outcomes. We find that although repetition captures a significant part of uptake, pJSD outperforms repetition-based baselines, as it is capable of identifying a wider range of uptake phenomena like question answering and reformulation. We apply our uptake measure to three different educational datasets with outcome indicators. Unlike baseline measures, pJSD correlates significantly with instruction quality in all three, providing evidence for its generalizability and for its potential to serve as an automated professional development tool for teachers.
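The paper estimates pJSD via next utterance classification over learned representations; that estimator is not reproduced here. As a toy illustration of the underlying divergence only, the sketch below computes the Jensen-Shannon divergence between unigram word distributions of a student turn and a teacher turn. The unigram framing and the example utterances are our own simplifications, not the paper's method.

```python
from collections import Counter
from math import log2

def jsd(p, q):
    """Jensen-Shannon divergence (base-2, so bounded in [0, 1]) between two
    discrete distributions given as dicts mapping outcome -> probability."""
    support = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in support}

    def kl(a, b):
        # Kullback-Leibler divergence, summing only over a's support.
        return sum(a[w] * log2(a[w] / b[w]) for w in a if a[w] > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def unigram_dist(utterance):
    """Normalized unigram distribution over whitespace tokens (toy tokenizer)."""
    counts = Counter(utterance.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Hypothetical exchange: a teacher turn that reformulates the student's idea
# shares vocabulary with it, so the divergence is well below 1.
student = "i think the answer is twelve"
teacher = "why do you think the answer is twelve"
print(round(jsd(unigram_dist(student), unigram_dist(teacher)), 3))
```

Identical turns give a divergence of 0 and turns with disjoint vocabulary give 1, which is why low divergence can serve as a rough proxy for building on a prior contribution.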
We provide novel evidence on the causal impacts of student absences in middle and high school on state test scores, course grades, and educational attainment using a rich administrative dataset that tracks the date and class period of each absence. We use two similar but distinct identification strategies that address potential endogeneity due to time-varying student-level shocks by exploiting within-student, between-subject variation in class-specific absences. We also leverage information on the timing of absences to show that absences that occur after the annual window for state standardized testing do not affect test scores, providing a further check of our identification strategy. Both approaches yield similar results. We find that absences in middle and high school harm contemporaneous student achievement and longer-term educational attainment: On average, missing 10 classes reduces math or English Language Arts test scores by 3-4% of a standard deviation and course grades by 17-18% of a standard deviation. Ten total absences across all subjects in 9th grade reduce both the probability of on-time graduation and the probability of ever enrolling in college by 2%. Learning loss due to school absences can have profound economic and social consequences.
Valid and reliable measurements of teaching quality facilitate school-level decision-making and policies pertaining to teachers. Using nearly 1,000 word-for-word transcriptions of 4th- and 5th-grade English language arts classes, we apply novel text-as-data methods to develop automated measures of teaching to complement classroom observations traditionally done by human raters. This approach is free of rater bias and enables the detection of three instructional factors that are well aligned with commonly used observation protocols: classroom management, interactive instruction, and teacher-centered instruction. The teacher-centered instruction factor is a consistent negative predictor of value-added scores, even after controlling for teachers’ average classroom observation scores. The interactive instruction factor predicts positive value-added scores. Our results suggest that the text-as-data approach has the potential to enhance existing classroom observation systems by collecting far more data on teaching at lower cost and higher speed, and by detecting multifaceted classroom practices.
Classroom teachers in the US are absent on average approximately six percent of a school year. Despite the prevalence of teacher absences, surprisingly little research has assessed the key source of replacement instruction: substitute teachers. Using detailed administrative and survey data from a large urban school district, we document the prevalence, predictors, and variation in substitute coverage across schools. Less advantaged schools systematically exhibit lower rates of substitute coverage than their more advantaged peers. Observed school, teacher, and absence characteristics account for only part of this school-level variation. In contrast, substitute teachers’ preferences for specific schools, driven mainly by student behavior and support from teachers and school administrators, explain a sizable share of the unequal distribution of coverage rates above and beyond standard measures in administrative data.
Valid and reliable measurements of teaching quality facilitate school-level decision-making and policies pertaining to teachers, but conventional classroom observations are costly, prone to rater bias, and hard to implement at scale. Using nearly 1,000 word-for-word transcriptions of 4th- and 5th-grade English language arts classes, we apply novel text-as-data methods to develop automated, objective measures of teaching to complement classroom observations. This approach is free of rater bias and enables the detection of three instructional factors that are well aligned with commonly used observation protocols: classroom management, interactive instruction, and teacher-centered instruction. The teacher-centered instruction factor is a consistent negative predictor of value-added scores, even after controlling for teachers’ average classroom observation scores. The interactive instruction factor predicts positive value-added scores.
With 55 million students in the United States out of school due to the COVID-19 pandemic, education systems are scrambling to meet the needs of schools and families, including planning how best to approach instruction in the fall given students may be farther behind than in a typical year. Yet, education leaders have little data on how much learning has been impacted by school closures. While the COVID-19 learning interruptions are unprecedented in modern times, existing research on the impacts of missing school (due to absenteeism, regular summer breaks, and school closures) on learning can nonetheless inform projections of potential learning loss due to the pandemic. In this study, we produce a series of projections of COVID-19-related learning loss and its potential effect on test scores in the 2020-21 school year based on (a) estimates from prior literature and (b) analyses of typical summer learning patterns of five million students. Under these projections, students are likely to return in fall 2020 with approximately 63-68% of the learning gains in reading relative to a typical school year and with 37-50% of the learning gains in math. However, we estimate that losing ground during the COVID-19 school closures would not be universal, with the top third of students potentially making gains in reading. Thus, in preparing for fall 2020, educators will likely need to consider ways to support students who are academically behind and further differentiate instruction.
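The projection logic described in this abstract amounts to simple retention arithmetic: a student's projected fall score equals their prior score plus some fraction of a typical year's learning gain. The sketch below illustrates only that arithmetic; the 200-point baseline and 12-point annual gain are hypothetical placeholders, not the study's data or estimates.

```python
def projected_fall_score(prior_score, typical_annual_gain, retained_fraction):
    """Project a fall score when only `retained_fraction` of a typical
    school year's learning gain is retained over the closure period."""
    return prior_score + typical_annual_gain * retained_fraction

# Entirely hypothetical scale: baseline score 200, typical annual gain 12.
# Reading retention of 63-68% and math retention of 37-50% come from the
# abstract's projected ranges.
reading = [projected_fall_score(200, 12, f) for f in (0.63, 0.68)]
math = [projected_fall_score(200, 12, f) for f in (0.37, 0.50)]
print("projected reading range:", reading)
print("projected math range:", math)
```

Because the retained fraction is lower in math than in reading, the projected math range sits below the reading range on any common scale, which is the pattern the abstract reports.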
Although program evaluations using rigorous quasi-experimental or experimental designs can inform decisions about whether to continue or terminate a given program, they often have limited ability to reveal the mechanisms by which complex interventions achieve their effects. To illuminate these mechanisms, this paper analyzes novel text data from thousands of school improvement planning and implementation reports from Washington State, deploying computer-assisted techniques to extract measures of school improvement processes. Our analysis identified 15 coherent reform strategies that varied greatly across schools and over time. The prevalence of identified reform strategies was largely consistent with school leaders’ own perceptions of reform priorities as expressed in interviews. Several reform strategy measures were significantly associated with reductions in student chronic absenteeism and improvements in student achievement. Lastly, we discuss the opportunities and pitfalls of using novel text data to study reform processes.
Teachers’ impact on student long-run success is only partially explained by their contributions to students’ short-run academic performance. For this study, we explore a second dimension of teacher effectiveness by creating measures of teachers’ contributions to student class attendance. We find systematic variation in teacher effectiveness at reducing unexcused class absences at the middle and high school level. These differences across teachers are as stable as those for student achievement, but teacher effectiveness on attendance only weakly correlates with effectiveness on achievement. We link these measures of teacher effectiveness to students’ long-run outcomes. A teacher with high value-added to attendance has a stronger impact on students’ likelihood of finishing high school than does a teacher with high value-added to achievement. Moreover, teachers with high value-added to attendance can motivate students to pursue higher academic goals, as measured by Advanced Placement course taking. These positive effects are particularly salient for low-achieving and low-attendance students.