David M. Quinn
Frames shape public opinion on policy issues, with implications for policy adoption and agenda-setting. What impact do common issue frames for racial equity in education have on voters’ support for racially equitable education policy? Across survey experiments with two independent representative polls of California voters, framing effects were moderated by voters’ prior policy preferences. Among respondents concerned with tax policy, a frame emphasizing the economic benefits of equity elicited higher priority for racial equity in education. Among respondents concerned with social justice, an “equal opportunity” frame elicited higher priority ratings. However, exploratory analyses showed frames only mattered when respondents held mixed policy preferences. Among respondents who (a) valued both tax policy and social justice issues, or who (b) valued neither, both frames were equally impactful.
Scholars argue the “racial achievement gap” frame perpetuates deficit mindsets. Previously, we found teachers gave lower priority to racial equity when disparities were framed as “achievement gaps” versus “inequality in educational outcomes.” In this brief, we analyze data from two survey experiments using a teacher sample and an MTurk sample. We find: (1) the effect of “achievement gap” (AG) language on equity prioritization is moderated by implicit bias, with larger negative effects among teachers holding stronger anti-Black/pro-White stereotypes, (2) the negative effect of AG language replicates with non-teachers, and (3) AG language causes respondents to express more negative racial stereotypes.
The estimation of test score “gaps” and gap trends plays an important role in monitoring educational inequality. Researchers decompose gaps and gap changes into within- and between-school portions to generate evidence on the role schools play in shaping these inequalities. However, existing decomposition methods assume an equal-interval test scale and are a poor fit to coarsened data such as proficiency categories. This leaves many potential data sources ill-suited for decomposition applications. We develop two decomposition approaches that overcome these limitations: an extension of V, an ordinal gap statistic, and an extension of ordered probit models. Simulations show V decompositions have negligible bias with small within-school samples. Ordered probit decompositions have negligible bias with large within-school samples but more serious bias with small within-school samples. More broadly, our methods enable analysts to (1) decompose the difference between two groups on any ordinal outcome into portions within and between the categories of some third variable, and (2) estimate scale-invariant between-group differences that adjust for a categorical covariate.
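To make the ordinal gap statistic concrete: in the methodological literature, V is defined from the probability that a randomly chosen member of one group outranks a randomly chosen member of the other (splitting ties), mapped through the inverse normal CDF and scaled by √2 so it reads like a standardized mean difference. The sketch below, with hypothetical function names, computes V from coarsened proficiency-category proportions; it illustrates only the base statistic, not the paper's within/between-school decomposition.

```python
from math import sqrt
from statistics import NormalDist

def ordinal_auc(p_a, p_b):
    """P(random A member outranks random B member), splitting ties.

    p_a, p_b: category proportions for each group, ordered low to high
    (e.g., shares at Below Basic, Basic, Proficient, Advanced).
    """
    cum_b = 0.0   # cumulative share of group B strictly below category k
    auc = 0.0
    for pa, pb in zip(p_a, p_b):
        auc += pa * (cum_b + 0.5 * pb)  # half-credit for ties in category k
        cum_b += pb
    return auc

def v_gap(p_a, p_b):
    """Ordinal gap V = sqrt(2) * Phi^{-1}(AUC); 0 when distributions match."""
    return sqrt(2) * NormalDist().inv_cdf(ordinal_auc(p_a, p_b))

# Example: group A more concentrated in the top category than group B
v = v_gap([0.2, 0.8], [0.5, 0.5])  # positive V: A tends to outrank B
```

Because V depends only on the ordering of categories, it is invariant to any monotone rescaling of the test score metric, which is what makes it usable with coarsened proficiency data.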
A vast research literature documents racial bias in teachers’ evaluations of students. Theory suggests bias may be larger on grading scales with vague or overly general criteria versus scales with clearly specified criteria, raising the possibility that well-designed grading policies may mitigate bias. This study offers relevant evidence through a randomized web-based experiment with 1,549 teachers. On a vague grade-level evaluation scale, teachers rated a student writing sample lower when it was randomly signaled to have a Black author versus a White author. However, there was no evidence of racial bias when teachers used a rubric with more clearly defined evaluation criteria. Contrary to expectation, I found no evidence that the magnitude of grading bias depends on teachers’ implicit or explicit racial attitudes.
The “achievement gap” has long dominated mainstream conversations about race and education. Some scholars warn that the discourse around racial gaps perpetuates stereotypes and promotes the adoption of deficit-based explanations that fail to appreciate the role of structural inequities. I investigate these concerns through three randomized experiments. Results indicate that a TV news story about racial achievement gaps (versus a control or counter-stereotypical video) led viewers to express more exaggerated stereotypes of Black Americans as lacking education (study 1: ES=.30 SD; study 2: ES=.38 SD) and may have increased viewers’ implicit stereotyping of Black students as less competent than White students (study 1: ES=.22 SD; study 2: ES=.12 SD, n.s.). The video did not affect viewers’ explicit competence-related racial stereotyping, the explanations they gave for achievement inequalities, or their prioritization of ending achievement inequalities. After two weeks, the effect on stereotype exaggeration faded. Future research should probe how we can most productively frame educational inequality by race.
Theory suggests that teachers’ implicit racial attitudes affect their students, but we lack large-scale evidence on US teachers’ implicit biases and their correlates. Using nationwide data from Project Implicit, we find that teachers’ implicit White/Black biases (as measured by the implicit association test) vary by teacher gender and race. Teachers’ adjusted bias levels are lower in counties with larger shares of Black students. In the aggregate, counties in which teachers hold higher levels of implicit and explicit racial bias have larger adjusted White/Black test score inequalities and White/Black suspension disparities.