Data science applications are increasingly entwined in students’ educational experiences. One prominent application of data science in education is predicting students’ risk of failing a course or dropping out of college. There is growing interest among higher education researchers and administrators in whether learning management system (LMS) data, which capture very detailed information on students’ engagement in and performance on course activities, can improve such models. We systematically evaluate whether incorporating LMS data into course performance prediction models improves their performance, conducting the analysis within an entire state community college system. Among students with prior academic history in college, administrative-data-only models substantially outperform LMS-data-only models and are quite accurate at predicting whether students will struggle in a course. Among first-time students, LMS-data-only models outperform administrative-data-only models, and models that combine both data sources achieve the highest performance. We also show that models achieve similar performance with a small, judiciously selected set of predictors, and that models trained on system-wide data perform about as well as models trained on individual courses.
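The head-to-head comparison of predictor sources described above can be sketched with synthetic data. This is a minimal illustration, not the study's method: the features (prior GPA, LMS logins), distributions, and numbers are all invented, and each "model" is just a single predictor scored with AUC.

```python
import random

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank formulation:
    the fraction of (positive, negative) pairs the score orders correctly."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)
rows = []
for _ in range(2000):
    risk = random.random()                               # latent chance of struggling
    prior_gpa = 4.0 - 3.0 * risk + random.gauss(0, 0.4)  # "administrative" predictor
    lms_logins = 50 * (1 - risk) + random.gauss(0, 15)   # "LMS engagement" predictor
    struggled = int(random.random() < risk)
    rows.append((prior_gpa, lms_logins, struggled))

labels = [y for _, _, y in rows]
admin_auc = auc([-g for g, _, _ in rows], labels)  # lower GPA -> higher risk
lms_auc = auc([-l for _, l, _ in rows], labels)    # fewer logins -> higher risk
print(f"admin-only AUC = {admin_auc:.3f}, LMS-only AUC = {lms_auc:.3f}")
```

Because the synthetic administrative signal is constructed to be less noisy than the LMS signal, the sketch reproduces the qualitative pattern for returning students: the administrative predictor scores higher. Which source wins in practice is exactly the empirical question the paper answers.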
Colleges can send signals about their quality by adopting new, more alluring names. We study how this affects college choice and labor market performance of college graduates. Administrative data show name-changing colleges enroll higher-aptitude students, with larger effects for alluring-but-misleading name changes and among students with less information. A large resume audit study suggests a small premium for new college names in most jobs, and a significant penalty in lower-status jobs. We characterize student and employer beliefs using web-scraped text, surveys, and other data. Our study shows signals designed to change beliefs can have real, lasting impacts on market outcomes.
We exploit historical natural experiments to test whether universities increase economic mobility and equality. We use "runner-up" counties that were strongly considered to become university sites but were not selected for as-good-as-random reasons as counterfactuals for university counties. University establishment causes greater intergenerational income mobility but also increases cross-sectional income inequality. We highlight four findings to explain this seeming paradox: universities hollow out the local labor market and provide greater opportunities to achieve top incomes, both of which increase cross-sectional inequality, and increase educational attainment and connections to high-SES people, which prevent inequality from perpetuating into intergenerational immobility.
Graduate education is among the fastest-growing segments of the U.S. higher education system. This paper provides up-to-date causal evidence on labor market returns to Master’s degrees and examines heterogeneity in returns by field, student demographics, and initial labor market conditions. We use rich administrative data from Ohio and an individual fixed effects model that compares students’ earnings trajectories before and after earning a Master’s degree. Findings show that obtaining a Master’s degree increased quarterly earnings by about 12% on average, but returns vary widely across graduate fields. We also find gender and racial disparities in returns, with higher average returns for women than for men and for White than for Black graduates. In addition, by comparing returns among students who graduated before and during the Great Recession, we show that economic downturns appear to reduce but not eliminate the positive returns to Master’s degrees.
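The individual fixed effects approach described above corresponds to a standard earnings specification; the notation below is illustrative and not taken from the paper:

```latex
\ln(Y_{it}) = \alpha_i + \delta_t + \beta \, MA_{it} + X_{it}'\gamma + \varepsilon_{it}
```

Here \(Y_{it}\) is quarterly earnings of student \(i\) in quarter \(t\), \(\alpha_i\) absorbs time-invariant individual differences (the "before vs. after" comparison within a student), \(\delta_t\) captures common time shocks, and \(MA_{it}\) indicates having completed a Master’s degree by quarter \(t\). An estimate of \(\beta \approx 0.12\) would correspond to the roughly 12% average return the paper reports.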
For decades, pundits, politicians, college administrators, and academics have lamented the dismal rates of civic engagement among students who enroll in courses and eventually major in science, technology, engineering, and mathematics (i.e., STEM) fields. However, the research supporting this conclusion has faced distinct challenges in terms of data quality. Does STEM actually decrease the odds that young people will be actively involved in democracy? This paper assesses the relationship between studying STEM and voting. To do so, we create a dataset of over 23 million students in the U.S. matched to national validated voting records. The novel dataset is the largest known individual-level dataset in the U.S. connecting high school and college students to voting outcomes. It also contains a rich set of demographic and academic variables to account for many of the common issues related to students' selection into STEM coursework. We consider two measures of STEM participation: Advanced Placement (AP) Exam taking in high school and college major. Using both measures, we find that, unconditionally, STEM students are slightly more likely to vote than their non-STEM peers. After including the rich set of controls, the sign reverses and STEM students are slightly less likely to vote than their non-STEM peers. However, these estimated relationships between STEM and voting are small in magnitude, about the same effect size as a single get-out-the-vote mailer, and we can rule out even very modest causal effects of marginally more STEM coursework on voting for the typical STEM student. We cannot rule out modest effects for a few subfields. Our analyses demonstrate that, on average, marginally more STEM coursework in high school and college does not contribute to the dismally low participation rates among young people in the U.S.
To boost college graduation rates, policymakers often advocate for academic supports such as coaching or mentoring. Proactive and intensive coaching interventions are effective, but are costly and difficult to scale. We evaluate a relatively lower-cost group coaching program targeted at first-year college students placed on academic probation. Participants attend a workshop where coaches aim to normalize failure and improve self-confidence. Coaches also facilitate a process whereby participants reflect on their academic difficulties, devise solutions to address their challenges, and create an action plan. Participants then hold a one-time follow-up meeting with their coach or visit a campus resource. Using a difference-in-discontinuity design, we show that the program raises students’ first-year GPA by 14.6% of a standard deviation, and decreases the probability of first-year dropout by 8.5 percentage points. Effects are concentrated among lower-income students who also experience a significant increase in the probability of graduating. Finally, using administrative data we provide the first evidence that coaching/mentoring may have substantial long-run effects as we document significant gains in lower-income students’ earnings 7–9 years following entry to the university. Our findings indicate that targeted, group coaching can be an effective way to improve marginal students’ academic and early career outcomes.
Performance-based funding models for higher education, which tie state support for institutions to performance on student outcomes, have proliferated in recent decades. Some states have designed these policies to also address educational attainment gaps by including bonus payments for traditionally low-performing groups. Using a Synthetic Control Method research design, we examine the impact of these funding regimes on race-based completion gaps in Tennessee and Ohio. We find no evidence that performance-based funding narrowed race-based completion gaps. In fact, contrary to their intended purpose, we find that performance-based funding widened existing gaps in certificate completion in Tennessee. Across both states, the estimated impacts on associate degree outcomes are also directionally consistent with performance-based funding exacerbating racial inequities in associate degree attainment.
Financing college expenses through an income share agreement (ISA) is an arrangement in which the student agrees to pay a fixed percentage of future earned income for a designated period of time in exchange for college funding. Using administrative and survey data for all eligible applicants to a university ISA program, I estimate adverse selection into the ISA and provide preliminary estimates of moral hazard among ISA participants. Identification of adverse selection comes from observing the full set of eligible students who apply to the program. There is evidence of selection on the offered income share rate (which is determined by the student’s major) as well as on parent characteristics, though not parent income. Surprisingly, there is no evidence of adverse selection on student ability as measured by SAT scores and college grades. I find no differential selection on other student characteristics, including demographics and measures of debt aversion, risk aversion, and time preference. Controlling for observable factors, ISA participation increases the likelihood of college graduation by 3 percentage points and decreases starting salary by $5,000 on average.
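The payment structure an ISA implies can be made concrete with a short arithmetic sketch. All dollar figures, the income share, the term length, and the payment cap below are hypothetical illustrations, not the terms of the program studied:

```python
def total_isa_payments(annual_incomes, income_share, payment_cap):
    """Sum income-share payments over the ISA term, stopping once
    cumulative payments reach the cap."""
    paid = 0.0
    for income in annual_incomes:
        paid = min(paid + income_share * income, payment_cap)
    return paid

funding = 10_000                                     # hypothetical amount received
incomes = [50_000, 55_000, 60_000, 65_000, 70_000]   # hypothetical five-year term
share = 0.04                                         # hypothetical 4% income share
cap = 2.0 * funding                                  # stop at 2x the funding received

print(total_isa_payments(incomes, share, cap))       # -> 12000.0
```

In this example the student repays $12,000 on $10,000 of funding, well below the cap. Note that the income share rate is the margin the paper highlights: in the program studied it is set by the student’s major, and that rate is precisely where evidence of selection appears.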
Beliefs about relative academic performance may shape field specialization and explain gender gaps in STEM enrollment, but little causal evidence exists. To test whether these beliefs are malleable and salient enough to change behavior, I run a randomized controlled trial with 5,700 undergraduates across seven introductory STEM courses. Providing relative performance information shrinks gender gaps in biased beliefs substantially and closes ten percent of the gender gap in subsequent STEM course-taking. The gap closes due to men taking fewer STEM credits; women’s behavior is unchanged, implying that male overconfidence rather than female underconfidence contributes to gaps in specialization. Beliefs matter, but may not be a useful target for facilitating female STEM participation.
This paper examines how the pandemic affected the enrollment patterns, fields of study, and academic outcomes of students in the California Community College system, the largest higher education system in the country. Enrollment dropped precipitously during the pandemic: the total number of enrolled students fell by 11 percent from fall 2019 to fall 2020 and by another 7 percent from fall 2020 to fall 2021, a loss of nearly 300,000 students. Our analysis reveals that enrollment reductions were largest among Black/African-American and Latinx students and were larger among continuing students than first-time students. We find no evidence that having a large online presence prior to the pandemic protected colleges from these declines. Enrollment changes were substantial across a wide range of fields and were large for both vocational courses and academic courses that can be transferred to four-year institutions. In terms of course performance, changes in completion rates, withdrawal rates, and grades occurred primarily in the spring of 2020. These findings on the pandemic’s effects at community colleges have implications for policy, impending budgetary pressures, and future research.