Heather C. Hill
Federal policy has both incentivized and supported better use of research evidence by educational leaders. However, the extent to which these leaders are well-positioned to understand foundational principles from research design and statistics, including those that underlie the What Works Clearinghouse ratings of research studies, remains an open question. To investigate educational leaders’ knowledge of these topics, we developed a construct map and items representing key concepts, then conducted surveys containing those items with a small pilot sample (n=178) and a larger nationally representative sample (n=733) of educational leaders. We found that leaders’ knowledge was surprisingly inconsistent across topics. We also found that most items were answered correctly by fewer than half of respondents, with cognitive interviews suggesting that some of those correct answers derived from guessing or test-taking techniques. Our findings identify a roadblock to policymakers’ contention that educational leaders should use research in decision-making.
Despite calls for more evaluative research in teacher education, formal assessments of the effectiveness of novel teacher education practices remain rare. One reason is that we lack designs and measurement approaches that appropriately meet the challenges of causal inference in the field. In this article, we seek to fill this gap. We first outline the difficulties of doing evaluative work in teacher education. We then describe a set of replicable practices for developing measures of key teaching outcomes, and propose evaluative research designs that can be adapted to suit the needs of the field. Finally, we identify community-wide initiatives that are necessary to advance useful evaluative research.
More than half of U.S. children fail to meet proficiency standards in mathematics and science in fourth grade. Teacher professional development and curriculum improvement are two of the primary levers that school leaders and policymakers use to improve children’s science, technology, engineering, and mathematics (STEM) learning, yet until recently, the evidence base for understanding their effectiveness was relatively thin. In recent years, a wealth of rigorous new studies using experimental designs have investigated whether and how STEM instructional improvement programs work. This article highlights contemporary research on how to improve classroom instruction and subsequent student learning in STEM. Instructional improvement programs that feature curriculum integration, teacher collaboration, and attention to content knowledge, pedagogical content knowledge, and how students learn are all linked to stronger student achievement outcomes. We discuss implications for policy and practice.
How should teachers spend their STEM-focused professional learning time? To answer this question, we analyzed a recent wave of rigorous new studies of STEM instructional improvement programs. We found that programs work best when focused on building knowledge teachers can use during instruction: knowledge of the curriculum materials they will use, knowledge of content and how content can be represented for learners, and knowledge of how students learn that content. We argue that such learning opportunities improve teachers’ professional knowledge and skill, potentially by supporting teachers in making more informed in-the-moment instructional decisions.
This paper describes and evaluates a web-based coaching program designed to support teachers in implementing Common Core-aligned math instruction. Web-based coaching programs can be operated at relatively low cost, are scalable, and make it more feasible to pair teachers with coaches who have expertise in their content area and grade level. Results from our randomized field trial document sizable and sustained effects on both teachers’ ability to analyze instruction and on their instructional practice, as measured by the Mathematical Quality of Instruction (MQI) instrument and student surveys. However, these improvements in instruction did not result in corresponding increases in math test scores as measured by state standardized tests or interim assessments. We discuss several possible explanations for this pattern of results.
We present results from a meta-analysis of 95 experimental and quasi-experimental preK-12 science, technology, engineering, and mathematics (STEM) professional development and curriculum programs, seeking to understand what content, activities, and formats relate to stronger student outcomes. Across rigorously conducted studies, we found an average weighted impact estimate of +0.21 standard deviations. Programs saw stronger outcomes when they helped teachers learn to use curriculum materials; focused on improving teachers' content knowledge, pedagogical content knowledge and/or understanding of how students learn; incorporated summer workshops; and included teacher meetings to troubleshoot and discuss classroom implementation. We discuss implications for policy and practice.
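For readers unfamiliar with how a pooled figure such as the +0.21 standard deviation estimate is typically produced, the sketch below shows a fixed-effect, inverse-variance weighted mean of study effect sizes, a common pooling approach in meta-analysis. The specific weighting model, the function name, and the effect sizes and standard errors are illustrative assumptions, not data or methods from the paper.

```python
# Minimal sketch: fixed-effect, inverse-variance weighted mean effect size,
# a standard pooling approach in meta-analysis. All numbers below are
# illustrative placeholders, not the study's actual data.

def weighted_mean_effect(effects, std_errors):
    """Return the inverse-variance weighted average of study effect sizes
    and the standard error of that pooled estimate."""
    weights = [1.0 / (se ** 2) for se in std_errors]          # precision weights
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5                   # SE of pooled estimate
    return pooled, pooled_se

# Hypothetical per-study effect sizes (in standard deviation units) and SEs
effects = [0.15, 0.30, 0.22, 0.18]
std_errors = [0.05, 0.08, 0.06, 0.07]

estimate, se = weighted_mean_effect(effects, std_errors)
print(f"Pooled effect: {estimate:+.2f} SD (SE = {se:.2f})")
```

Precision weighting of this kind gives larger, more precisely estimated studies more influence on the average; a random-effects model would instead add a between-study variance component to each weight.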