- Matthew Kraft
Narrative accounts of classroom instruction suggest that external interruptions, such as intercom announcements and visits from staff, are a regular occurrence in U.S. public schools. We study the frequency, nature, and duration of external interruptions in the Providence Public School District (PPSD) using original data from a district-wide survey and classroom observations. We estimate that a typical classroom in PPSD is interrupted over 2,000 times per year, and that these interruptions and the disruptions they cause result in the loss of between 10 and 20 days of instructional time. Administrators appear to systematically underestimate the frequency and negative consequences of these interruptions. We propose several organizational approaches schools might adopt to reduce external interruptions to classroom instruction.
Researchers commonly interpret effect sizes by applying benchmarks proposed by Cohen over a half century ago. However, effects that are small by Cohen’s standards are large relative to the impacts of most field-based interventions. These benchmarks also fail to consider important differences in study features, program costs, and scalability. In this paper, I present five broad guidelines for interpreting effect sizes that are applicable across the social sciences. I then propose a more structured schema with new empirical benchmarks for interpreting a specific class of studies: causal research on education interventions with standardized achievement outcomes. Together, these tools provide a practical approach for incorporating study features, cost, and scalability into the process of interpreting the policy importance of effect sizes.
In recent years, states have sought to increase accountability for public school teachers by implementing a package of reforms centered on high-stakes evaluation systems. We examine the effect of these reforms on the supply and quality of new teachers. Leveraging variation across states and time, we find that accountability reforms reduced the number of newly licensed teacher candidates and increased the likelihood of unfilled teaching positions, particularly in hard-to-staff schools. Evidence also suggests that reforms increased the quality of new labor supply by reducing the likelihood new teachers attended unselective undergraduate institutions. Decreases in job security, satisfaction, and autonomy are likely mechanisms for these effects.
We explore the potential for mobile technology to facilitate more frequent and higher-quality teacher-parent communication among a sample of 132 New York City public schools. We provided participating schools with free access to a mobile communication app and randomized schools to receive intensive training and guidance for maximizing the efficacy of the app. User supports led to substantially higher levels of communication within the app in the treatment year, but had few subsequent effects on perceptions of communication quality or student outcomes. Treatment teachers used the app less frequently the following year, when they no longer received communication tips and reminders. We analyze internal user data to suggest organizational policies schools might adopt to increase the take-up and impact of mobile communication technology.
This paper describes and evaluates a web-based coaching program designed to support teachers in implementing Common Core-aligned math instruction. Web-based coaching programs can be operated at relatively low cost, are scalable, and make it more feasible to pair teachers with coaches who have expertise in their content area and grade level. Results from our randomized field trial document sizable and sustained effects on both teachers’ ability to analyze instruction and on their instructional practice, as measured by the Mathematical Quality of Instruction (MQI) instrument and student surveys. However, these improvements in instruction did not result in corresponding increases in math test scores as measured by state standardized tests or interim assessments. We discuss several possible explanations for this pattern of results.
We examine the dynamic nature of teacher skill development using panel data on principals’ subjective performance ratings of teachers. Past research on teacher productivity improvement has focused primarily on one important but narrow measure of performance: teachers’ value-added to student achievement on standardized tests. Unlike value-added, subjective performance ratings provide detailed information about specific skill dimensions and are available for the many teachers in non-tested grades and subjects. Using a within-teacher returns to experience framework, we find, on average, large and rapid improvements in teachers’ instructional practices throughout their first ten years on the job as well as substantial differences in improvement rates across individual teachers. We also document that subjective performance ratings contain important information about teacher effectiveness. In the district we study, principals appear to differentiate teacher performance throughout the full distribution instead of just in the tails. Furthermore, prior performance ratings and gains in these ratings provide additional information about teachers’ ability to improve test scores that is not captured by prior value-added scores. Taken together, our study provides new insights on teacher performance improvement and variation in teacher development across instructional skills and individual teachers.
Starting in 2011, Boston Public Schools (BPS) implemented major reforms to its teacher evaluation system with a focus on promoting teacher development. We administered independent district-wide surveys in 2014 and 2015 to capture BPS teachers’ perceptions of the evaluation feedback they receive. Teachers generally reported that evaluators were fair and accurate, but that they struggled to provide high-quality feedback. We conduct a randomized controlled trial to evaluate the district’s efforts to improve this feedback through an intensive training program for evaluators. We find little evidence the program affected evaluators’ feedback, teacher retention, or student achievement. Our results suggest that improving the quality of evaluation feedback may require more fundamental changes to the design and implementation of teacher evaluation systems.