Isaac M. Opper
What happens when employers would like to screen their employees but only observe a subset of output? We specify a model in which heterogeneous employees respond by producing more of the observed output at the expense of the unobserved output. Though this substitution distorts output in the short term, we derive three sufficient conditions under which the heterogeneous response improves screening efficiency: 1) all employees place similar value on staying in their current role; 2) the employees' utility functions satisfy a variation of the traditional single-crossing condition; 3) employer and worker preferences over output are similar. We then assess these predictions empirically by studying a change to teacher tenure policy in New York City, which increased the role that a single measure -- test score value-added -- played in tenure decisions. We show that in response to the policy, teachers increased test score value-added and decreased output that did not enter the tenure decision. The increase in test score value-added was largest for the teachers with more ability to improve students' untargeted outcomes, increasing their likelihood of getting tenure. We find that the endogenous response to the policy announcement reduced the screening efficiency gap -- defined as the reduction of screening efficiency stemming from the partial observability of output -- by 28%, effectively shifting some of the cost of partial observability from the post-tenure period to the pre-tenure period.
There is an emerging consensus that teachers impact multiple student outcomes, but it remains unclear how to measure the multiple dimensions of teacher effectiveness and summarize them in simple metrics for research or personnel decisions. We present a multidimensional empirical Bayes framework and illustrate how to use noisy estimates of teacher effectiveness to assess the dimensionality and predictive power of teachers' true effects. We find that it is possible to efficiently summarize many dimensions of effectiveness and most summary measures lead to similar teacher rankings; however, focusing on any one specific measure alone misses important dimensions of teacher quality.
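The core of a multidimensional empirical Bayes approach is shrinking each teacher's vector of noisy effect estimates toward the population mean, using a signal covariance recovered by subtracting sampling noise from the covariance of the estimates. The sketch below illustrates the mechanics on simulated data; all parameter values and the homoskedastic-noise setup are hypothetical illustrations, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n_teachers, n_dims = 200, 3

# Hypothetical true effects across three correlated dimensions of quality.
Sigma_true = 0.04 * np.array([[1.0, 0.5, 0.3],
                              [0.5, 1.0, 0.4],
                              [0.3, 0.4, 1.0]])
true = rng.multivariate_normal(np.zeros(n_dims), Sigma_true, n_teachers)

# Noisy estimates: true effects plus independent sampling error.
noise_var = 0.02
est = true + rng.normal(0.0, np.sqrt(noise_var), (n_teachers, n_dims))

# Empirical Bayes step: recover the signal covariance by subtracting the
# (assumed known) sampling variance from the covariance of the estimates.
S_est = np.cov(est, rowvar=False) - noise_var * np.eye(n_dims)

# Posterior mean under a normal prior: shrink each estimate vector by
# B = Sigma (Sigma + noise * I)^{-1}.
B = S_est @ np.linalg.inv(S_est + noise_var * np.eye(n_dims))
shrunk = est @ B.T

# Shrinkage trades a little bias for a large variance reduction,
# lowering mean squared error relative to the raw estimates.
mse_raw = np.mean((est - true) ** 2)
mse_eb = np.mean((shrunk - true) ** 2)
```

Because the dimensions are correlated, the multivariate shrinkage borrows strength across outcomes: a teacher's estimate on one dimension informs the posterior on the others.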
We show that natural disasters affect a region’s aggregate human capital through at least four channels. In addition to causing out-migration, natural disasters reduce student achievement, lower high school graduation rates, and decrease post-secondary attendance. We estimate that disasters that cause at least $500 in per capita property damage reduce the net present value (NPV) of an affected county’s human capital by an average of $505 per person. These negative effects on human capital are not restricted to large disasters: less severe events – disasters with property damages of $100-$500 per capita – also cause significant and persistent reductions in student achievement and post-secondary attendance.
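A net-present-value figure like the one above discounts a stream of future per-person losses back to the present. The sketch below shows the arithmetic with a purely hypothetical loss stream and discount rate (chosen only so the total lands near the scale of the paper's estimate; they are not the paper's inputs).

```python
def npv(cash_flows, rate):
    """Net present value of a stream of annual losses, discounted to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical illustration: a $40 annual per-person earnings loss lasting
# 20 years, discounted at 5%, yields an NPV of roughly $500 per person.
loss = npv([40.0] * 20, 0.05)
```

The same function makes clear why persistent effects matter: doubling the duration of a small annual loss raises the NPV far more than doubling a loss that lasts only a year or two.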
We consider programs with a limited number of seats, such as a job training program or a supplemental tutoring program, and explore the implications that peer effects have for which individuals should be assigned to the limited seats. In the frequently-studied case in which all applicants are assigned to a group, the average outcome is not changed by shuffling the group assignments if the peer effect is linear in the average composition of peers. However, when there are fewer seats than applicants, the presence of linear-in-means peer effects can dramatically influence the optimal choice of who gets to participate. We illustrate how peer effects impact optimal seat assignment, both under a general social welfare function and under two commonly used social welfare functions. We next use data from a recent job training RCT to provide evidence of large peer effects in the context of job training for disadvantaged adults. Finally, we combine the two results to show that the program's effectiveness varies greatly depending on whether the assignment choices account for or ignore peer effects.
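The selection logic can be illustrated with a minimal linear-in-means sketch: each admitted person gains an own treatment effect plus a term proportional to the mean peer contribution of the admitted group, so total welfare depends on the composition of who is admitted. All functional forms and parameter values below are hypothetical illustrations, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, gamma = 100, 20, 0.8  # applicants, seats, peer-effect strength (hypothetical)

tau = rng.normal(1.0, 0.5, n)  # own treatment gain for each applicant
a = rng.normal(0.0, 1.0, n)    # contribution to peers' outcomes

def total_outcome(sel):
    # Linear-in-means: each of the k admitted people gains tau_i plus
    # gamma times the mean peer contribution of the admitted group.
    return tau[sel].sum() + gamma * k * a[sel].mean()

# Naive rule: fill seats by own gains only, ignoring peer effects.
naive = np.argsort(-tau)[:k]
# Accounting for peer effects: each admitted person contributes
# tau_i + gamma * a_i to the group total, so rank on that sum.
optimal = np.argsort(-(tau + gamma * a))[:k]
```

With a single group, the total simplifies to the sum of `tau_i + gamma * a_i` over the admitted set, so the peer-aware ranking weakly dominates the naive one; with multiple groups or nonlinear effects the assignment problem becomes substantially harder.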
Researchers often include covariates when they analyze the results of randomized controlled trials (RCTs), valuing the increased precision of the estimates over the risk of inducing small-sample bias when doing so. In this paper, we develop a sufficient condition which ensures that the inclusion of covariates does not cause small-sample bias in the effect estimates. Using this result as a building block, we develop a novel approach that uses machine learning techniques to reduce the variance of the average treatment effect estimates while guaranteeing that the effect estimates remain unbiased. The framework also highlights how researchers can use data from outside the study sample to improve the precision of the treatment effect estimate by using the auxiliary data to better model the relationship between the covariates and the outcomes. We conclude with a simulation, which highlights the value of using the proposed approach.
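The auxiliary-data idea can be sketched as follows: fit an outcome model on data independent of the trial, then take the difference in mean residuals between arms. Because the model never touches the trial's treatment assignment, the adjusted estimator stays unbiased while covariate-driven variance is removed. The data-generating process and the linear model below are hypothetical illustrations, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Auxiliary data from outside the trial, used only to model E[Y | X].
n_aux, n = 2000, 500
X_aux = rng.normal(size=(n_aux, 3))
y_aux = X_aux @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n_aux)
beta, *_ = np.linalg.lstsq(X_aux, y_aux, rcond=None)

# Randomized trial sample with a true treatment effect of 1 (hypothetical DGP).
X = rng.normal(size=(n, 3))
T = rng.integers(0, 2, n)
y = X @ np.array([1.0, -2.0, 0.5]) + 1.0 * T + rng.normal(size=n)

# Subtracting predictions fit on independent data from both arms leaves the
# difference-in-means unbiased while stripping out covariate-driven variance.
resid = y - X @ beta
tau_adj = resid[T == 1].mean() - resid[T == 0].mean()
tau_raw = y[T == 1].mean() - y[T == 0].mean()
```

The same logic extends to flexible machine learning models in place of the linear fit, provided the model is trained on data independent of the trial's treatment assignment (for example, via sample splitting when no auxiliary data exist).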