Methodology, measurement and data
A common rationale for offering online courses in K-12 schools is that they allow students to take courses not offered at their schools; however, there has been little research on how online courses are used to expand curricular options when operating at scale. We assess the extent to which students and schools use online courses for this purpose by analyzing statewide, student-course level data from high school students in Florida, which has the largest virtual sector in the nation. We introduce a “novel course” framework to address this question. We define a virtual course as “novel” if it is only available to a student virtually, not face-to-face through their own home high school. We find that 7% of high school students in 2013-14 enrolled in novel online courses. Novel courses were more commonly used by higher-achieving students, in rural schools, and in schools with relatively few Advanced Placement/International Baccalaureate offerings.
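The “novel course” definition reduces to a set-membership check: a virtual enrollment counts as novel when the course is absent from the student’s home school’s face-to-face catalog. A minimal sketch with hypothetical schools, students, and courses (not the authors’ data or code):

```python
# Face-to-face catalogs by home school (toy data, for illustration only).
catalog = {
    "School A": {"Algebra I", "English II", "Biology"},
    "School B": {"Algebra I", "English II", "Biology", "AP Statistics"},
}

# Student-course records for virtual enrollments: (student, home school, course).
virtual_enrollments = [
    ("s1", "School A", "AP Statistics"),  # not offered at School A -> novel
    ("s2", "School B", "AP Statistics"),  # offered at School B -> not novel
    ("s3", "School A", "Biology"),        # offered at School A -> not novel
]

def is_novel(home_school, course):
    """A virtual course is novel if the home school's face-to-face
    catalog does not include it."""
    return course not in catalog[home_school]

novel_flags = {(s, c): is_novel(h, c) for s, h, c in virtual_enrollments}
print(novel_flags)
```

The statewide analysis applies the same test at scale, matching each student’s virtual enrollments against their home high school’s course offerings.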
Researchers decompose test score “gaps” and gap-changes into within- and between-school portions to generate evidence on the role that schools play in shaping educational inequality. However, existing decomposition methods (a) assume an equal-interval test scale and (b) are a poor fit to coarsened data such as proficiency categories. We develop two decomposition approaches that overcome these limitations: an extension of V, an ordinal gap statistic (Ho, 2009), and an extension of ordered probit models (Reardon et al., 2017). Simulations show V decompositions have negligible bias with small within-school samples. Ordered probit decompositions have negligible bias with large within-school samples but more serious bias with small within-school samples. These methods are applicable to decomposing any ordinal outcome by any categorical grouping variable.
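The V statistic at the center of the first approach can be computed directly from coarsened category counts. A minimal sketch (toy counts, not the authors’ decomposition code) using Ho’s (2009) definition V = sqrt(2) * Phi^-1(AUC), where AUC is the probability that a randomly drawn member of group a outscores a randomly drawn member of group b, counting ties as one half:

```python
from math import sqrt
from statistics import NormalDist

def v_gap(counts_a, counts_b):
    """Ordinal gap statistic V (Ho, 2009), sketched for coarsened data.

    counts_a, counts_b: counts per ordered category (e.g. proficiency
    levels, lowest to highest) for groups a and b.
    """
    n_a, n_b = sum(counts_a), sum(counts_b)
    p_a = [c / n_a for c in counts_a]
    p_b = [c / n_b for c in counts_b]
    auc = 0.0
    for i, pa in enumerate(p_a):
        # Probability a member of b falls below category i, plus half the ties.
        auc += pa * (sum(p_b[:i]) + 0.5 * p_b[i])
    return sqrt(2) * NormalDist().inv_cdf(auc)

# Identical distributions -> AUC = 0.5 -> V = 0.
print(v_gap([10, 20, 30], [10, 20, 30]))  # 0.0
```

The decomposition extends this building block by computing V within schools and between school compositions; the sketch above shows only the underlying gap statistic.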
Many kindergarten teachers place students in higher and lower “ability groups” to learn math and reading. Ability group placement should depend on student achievement, but critics charge that placement is biased by socioeconomic status (SES), gender, and race/ethnicity. We predict group placement in the Early Childhood Longitudinal Study, Kindergarten Class of 2010-11, using linear and ordinal regression models with classroom fixed effects. The best predictors of group placement are test scores, but girls, high-SES students, and Asian Americans receive higher placements than their test scores alone would predict. One third of students move groups during kindergarten, and some movement is predicted by changes in test scores, but high-SES students move up more than score gains would predict, and Hispanic children move up less. Net of SES and test scores, there is no bias in the placement of African American children. Differences in teacher-reported behaviors explain the higher placement of girls, but do little to explain the higher or lower placement of other groups. Although achievement is the best predictor of ability group placement, there are signs of bias.
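The linear classroom-fixed-effects regression described here can be illustrated with the standard within-classroom demeaning transformation. A simulated sketch (hypothetical data and coefficients; the study itself uses ECLS-K records and also ordinal models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated students nested in classrooms: placement depends on test score,
# an SES indicator, and an unobserved classroom effect.
n_class, per_class = 50, 12
classroom = np.repeat(np.arange(n_class), per_class)
score = rng.normal(size=n_class * per_class)
ses = rng.binomial(1, 0.4, size=n_class * per_class).astype(float)
class_effect = rng.normal(size=n_class)[classroom]
placement = (1.0 * score + 0.3 * ses + class_effect
             + rng.normal(scale=0.5, size=score.size))

def demean_within(x, groups):
    """Subtract each classroom's mean (the 'within' transformation),
    which absorbs the classroom fixed effects."""
    means = np.bincount(groups, weights=x) / np.bincount(groups)
    return x - means[groups]

X = np.column_stack([demean_within(score, classroom),
                     demean_within(ses, classroom)])
y = demean_within(placement, classroom)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # roughly [1.0, 0.3], the simulated coefficients
```

Demeaning within classrooms removes any classroom-level confounder, so the estimates compare students only to classmates, mirroring the fixed-effects logic of the paper.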
Enrollment in higher education has risen dramatically in Latin America, especially in Chile. Yet graduation and persistence rates remain low. One way to improve graduation and persistence is to use data and analytics to identify students at risk of dropout, target interventions, and evaluate interventions’ effectiveness at improving student success. We illustrate the potential of this approach using data from eight Chilean universities. Results show that data available at matriculation are only weakly predictive of persistence, while prediction improves dramatically once data on university grades become available. Some predictors of persistence are under policy control. Financial aid predicts higher persistence, and being denied a first-choice major predicts lower persistence. Student success programs are ineffective at some universities; they are more effective at others, but when effective they often fail to target the highest risk students. Universities should use data regularly and systematically to identify high-risk students, target them with interventions, and evaluate those interventions’ effectiveness.
Clustered observational studies (COSs) are a critical analytic tool for educational effectiveness research. We present a design framework for the development and critique of COSs. The framework is built on the counterfactual model for causal inference and promotes the concept of designing COSs that emulate the targeted randomized trial that would have been conducted were it feasible. We emphasize the key role that understanding the assignment mechanism plays in study design. We review methods for statistical adjustment and highlight a recently developed form of matching designed specifically for COSs. We review how regression models can be profitably combined with matching and note best practices for estimating statistical uncertainty. Finally, we review how sensitivity analyses can determine whether conclusions are sensitive to bias from potential unobserved confounders. We demonstrate these concepts with an evaluation of a summer school reading intervention in Wake County, North Carolina.
Many interventions in education occur in settings where treatments are applied to groups. For example, a reading intervention may be implemented for all students in some schools and withheld from students in other schools. When such treatments are non-randomly allocated, outcomes across the treated and control groups may differ due to the treatment or due to baseline differences between groups. When this is the case, researchers can use statistical adjustment to make treated and control groups similar in terms of observed characteristics. Recent work in statistics has developed matching methods designed for contexts where treatments are clustered. This form of matching, known as multilevel matching, may be well suited to many education applications where treatments are assigned to schools. In this article, we provide an extensive evaluation of multilevel matching and compare it to multilevel regression modeling. We evaluate multilevel matching methods in two ways. First, we use these matching methods to recover treatment effect estimates from three clustered randomized trials using a within-study comparison design. Second, we conduct a simulation study. We find evidence that generally favors an analytic approach to statistical adjustment that combines multilevel matching with regression adjustment. We conclude with an empirical application.
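The cluster-level matching step can be sketched as greedy nearest-neighbor matching of treated schools to control schools on school-level covariates. This toy example (simulated data; not the multilevel matching software the article evaluates) uses Mahalanobis distance and matches without replacement:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical school-level data: two covariates (e.g. baseline achievement,
# percent low-income), a non-random treatment indicator, and a mean outcome.
n_schools = 40
X = rng.normal(size=(n_schools, 2))
treated = rng.binomial(1, 0.3, size=n_schools).astype(bool)
outcome = (X @ np.array([0.5, -0.2]) + 0.4 * treated
           + rng.normal(scale=0.3, size=n_schools))  # true effect = 0.4

# Mahalanobis distance between schools on the covariates.
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))

def mahalanobis(a, b):
    d = a - b
    return float(d @ cov_inv @ d)

# Greedy 1:1 nearest-neighbor matching of treated to control schools.
controls = list(np.flatnonzero(~treated))
pairs = []
for t in np.flatnonzero(treated):
    j = min(controls, key=lambda c: mahalanobis(X[t], X[c]))
    pairs.append((t, j))
    controls.remove(j)  # match without replacement

# Matched estimate of the effect of treatment on the treated.
att = np.mean([outcome[t] - outcome[c] for t, c in pairs])
print(round(att, 2))
```

The article’s preferred approach goes further, combining such cluster-level matching with regression adjustment on the matched sample to remove residual covariate imbalance.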
Researchers commonly interpret effect sizes by applying benchmarks proposed by Cohen over a half century ago. However, effects that are small by Cohen’s standards are large relative to the impacts of most field-based interventions. These benchmarks also fail to consider important differences in study features, program costs, and scalability. In this paper, I present five broad guidelines for interpreting effect sizes that are applicable across the social sciences. I then propose a more structured schema with new empirical benchmarks for interpreting a specific class of studies: causal research on education interventions with standardized achievement outcomes. Together, these tools provide a practical approach for incorporating study features, cost, and scalability into the process of interpreting the policy importance of effect sizes.
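The raw ingredients of such a schema, a standardized effect size and a cost-scaled version of it, are simple to compute. A hedged sketch with made-up scores and a hypothetical program cost (the specific scaling is illustrative, not the paper’s benchmarks):

```python
from statistics import mean, stdev

# Hypothetical treatment and control achievement scores.
treat = [0.6, 0.2, 0.9, 0.4, 0.7, 0.5]
control = [0.5, 0.1, 0.8, 0.3, 0.6, 0.4]

def pooled_sd(a, b):
    """Pooled sample standard deviation across the two groups."""
    na, nb = len(a), len(b)
    return (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
            / (na + nb - 2)) ** 0.5

# Standardized mean difference (Cohen's d).
effect_size = (mean(treat) - mean(control)) / pooled_sd(treat, control)

# A crude cost-scaled comparison: effect per $100 spent per student
# (cost figure is invented for illustration).
cost_per_student = 150.0
effect_per_100_dollars = effect_size / (cost_per_student / 100.0)

print(round(effect_size, 2), round(effect_per_100_dollars, 2))
```

The point of the guidelines is that neither number is interpretable against Cohen’s fixed cutoffs alone; the same effect size means something different depending on the study design, the outcome measure, the cost, and how the program scales.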
Using rich longitudinal data from one of the largest teacher education programs in Texas, we examine the measurement of pre-service teacher (PST) quality and its relationship with entry into the K–12 public school teacher workforce. Drawing on rubric-based observations of PSTs during clinical teaching, we find that little of the variation in observation scores is attributable to actual differences between PSTs. Instead, differences in scores largely reflect differences in the rating standards of field supervisors. We also find that men and PSTs of color receive systematically lower scores. Finally, higher-scoring PSTs are slightly more likely to enter the teacher workforce and substantially more likely to be hired at the same school as their clinical teaching placement.
After increasing in the 1970s and 1980s, time to bachelor’s degree has declined since the 1990s. We document this fact using data from three nationally representative surveys. We show that this pattern is occurring across school types and for all student types. Using administrative student records from 11 large universities, we confirm the finding and show that it is robust to alternative sample definitions. We discuss what might explain the decline in time to bachelor’s degree by considering trends in student preparation, state funding, student enrollment, study time, and student employment during college.
The Community Eligibility Provision (CEP) is a policy change to the federally administered National School Lunch Program that allows schools serving low-income populations to classify all students as eligible for free meals, regardless of individual circumstances. This has implications for the use of free and reduced-price meal (FRM) data to proxy for student disadvantage in education research and policy applications, which is a common practice. We document empirically how the CEP has affected the value of FRM eligibility as a proxy for student disadvantage. At the individual student level, we show that there is essentially no effect of the CEP. However, the CEP does meaningfully change the information conveyed by the share of FRM-eligible students in a school. It is this latter measure that is most relevant for policy uses of FRM data.
Note: Portions of this paper were previously circulated under the title “Using Free Meal and Direct Certification Data to Proxy for Student Disadvantage in the Era of the Community Eligibility Provision.” We have since split the original paper into two parts. This is the first part.