Lindsay C. Page
Verification is a federally mandated process that requires selected students to further attest that the information reported on their FAFSA is accurate and complete. In this brief, we estimate the institutional costs of administering the FAFSA verification mandate and consider how those costs vary by institution type and sector. Using data from 2014, we estimate that compliance costs to institutions in that year totaled nearly $500 million, with the burden falling disproportionately on public institutions, and community colleges in particular. Specifically, we estimate that 22% of an average community college’s financial aid office operating budget is devoted to verification procedures, compared to 15% at public four-year institutions. Our analysis is timely, given that rates of FAFSA verification have increased in recent years.
Success in postsecondary education requires students to engage with their institution both academically and administratively. As with the transition to college, the administrative requirements students face once enrolled can be substantial. Missteps with required processes can threaten students’ ability to persist. During the 2018-19 academic year, Georgia State University implemented an artificially intelligent text-based chatbot to provide proactive outreach and support to help undergraduates navigate administrative processes and take advantage of campus resources. A team of centralized university administrators orchestrated outreach “campaigns” to support students across three broad domains: (1) academic supports; (2) social and career supports; and (3) administrative processes. We investigate GSU’s implementation of this persistence-focused chatbot through an experimental study. Of the three message domains, outreach was most effective when focused on administrative processes, many of which were time-sensitive and for which outreach could be targeted specifically to students for whom it was relevant based on administrative data. In contrast, outreach to encourage take-up of other supports had little effect on student behavior. By the end of the academic year, rates of FAFSA filing and registration for the subsequent fall semester were approximately three percentage points higher among treated students, suggesting positive effects on year-to-year college persistence. The positive effects on fall enrollment persisted into summer 2019, at which time the GSU administration judged the study results compelling enough to conclude the experiment and roll the chatbot system out to all students. We situate our findings in the literature on nudge-type efforts to support college access and success to draw lessons regarding their effective use.
Many interventions in education occur in settings where treatments are applied to groups. For example, a reading intervention may be implemented for all students in some schools and withheld from students in other schools. When such treatments are non-randomly allocated, outcomes across the treated and control groups may differ due to the treatment or due to baseline differences between groups. When this is the case, researchers can use statistical adjustment to make treated and control groups similar in terms of observed characteristics. Recent work in statistics has developed matching methods designed for contexts where treatments are clustered. This form of matching, known as multilevel matching, may be well suited to many education applications where treatments are assigned to schools. In this article, we provide an extensive evaluation of multilevel matching and compare it to multilevel regression modeling. We evaluate multilevel matching methods in two ways. First, we use these matching methods to recover treatment effect estimates from three clustered randomized trials using a within-study comparison design. Second, we conduct a simulation study. We find evidence that generally favors an analytic approach to statistical adjustment that combines multilevel matching with regression adjustment. We conclude with an empirical application.
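The matching-plus-adjustment approach described above can be illustrated with a simplified sketch. This is not the authors' implementation and uses hypothetical simulated data; real multilevel matching (e.g., as implemented in R's matchMulti) matches at both the school and student levels, whereas this sketch matches schools only, then applies regression adjustment within the matched sample:

```python
# Illustrative sketch only: school-level matching followed by regression
# adjustment, on hypothetical simulated data. All names and parameters
# here are assumptions for illustration, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

n_schools, n_per = 40, 25
school_x = rng.normal(size=n_schools)                  # school-level covariate
treated = school_x + rng.normal(size=n_schools) > 0    # non-random assignment
effect = 0.5                                           # true treatment effect

# Student outcomes depend on the school covariate and treatment status
y = np.concatenate([
    school_x[s] + effect * treated[s] + rng.normal(size=n_per)
    for s in range(n_schools)
])
school_id = np.repeat(np.arange(n_schools), n_per)

# Step 1: pair each treated school with the nearest untreated school
# on the school-level covariate (greedy nearest-neighbor matching)
t_idx = np.where(treated)[0]
c_idx = np.where(~treated)[0]
pairs = []
available = set(c_idx.tolist())
for t in t_idx[np.argsort(-school_x[t_idx])]:
    if not available:
        break
    best = min(available, key=lambda c: abs(school_x[c] - school_x[t]))
    pairs.append((t, best))
    available.remove(best)

# Step 2: regression adjustment within the matched sample
keep = np.isin(school_id, [s for pr in pairs for s in pr])
X = np.column_stack([
    np.ones(keep.sum()),
    treated[school_id[keep]].astype(float),  # treatment indicator
    school_x[school_id[keep]],               # covariate adjustment
])
beta, *_ = np.linalg.lstsq(X, y[keep], rcond=None)
print(round(float(beta[1]), 2))  # matched + adjusted effect estimate
```

Matching restricts the analysis to the region of covariate overlap; the regression step then adjusts for residual imbalance, which is the combination the article's evidence favors.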
Clustered observational studies (COSs) are a critical analytic tool for educational effectiveness research. We present a design framework for the development and critique of COSs. The framework is built on the counterfactual model for causal inference and promotes the concept of designing COSs that emulate the targeted randomized trial that would have been conducted were it feasible. We emphasize the key role that understanding the assignment mechanism plays in study design. We review methods for statistical adjustment and highlight a recently developed form of matching designed specifically for COSs. We review how regression models can be profitably combined with matching and note best practices for estimating statistical uncertainty. Finally, we review how sensitivity analyses can determine whether conclusions are sensitive to bias from potential unobserved confounders. We demonstrate these concepts with an evaluation of a summer school reading intervention in Wake County, North Carolina.
We examine through a field experiment whether outreach and support provided through an AI-enabled chatbot can reduce summer melt and improve first-year college enrollment at a four-year university and at a community college. At the four-year college, the chatbot increased overall success with navigating financial aid processes, such that student take-up of educational loans increased by four percentage points. This financial aid effect was concentrated among would-be first-generation college goers, for whom loan acceptances increased by eight percentage points. In addition, the outreach increased first-generation students’ success with course registration and fall semester enrollment, each by three percentage points. For the community college, where the randomized experiment could not be robustly implemented due to limited cell phone number information, we present a qualitative analysis of organizational readiness for chatbot implementation. Together, our findings suggest that proactive outreach to students is likely to be most successful when targeted to those who may be struggling (for example, in keeping up with required administrative tasks). Yet such targeting requires university systems to have ready access to, and the ability to make use of, their administrative data.
English-only college education in non-English speaking countries is a rapidly growing phenomenon that has been dubbed the most important trend in the internationalization of higher education. Despite its worldwide popularity, there is little empirical evidence about how the transition to English-only instruction affects students’ academic outcomes. Using a natural experiment at a selective university in Central Asia and a difference-in-differences strategy, we estimate the causal effect of switching to English-only instruction on students’ college outcomes. We find that the introduction of English-only instruction led to a decrease in GPA and in the probability of graduation, and an increase in the number of failed course credits. Although negative, the effects were short-lived. The difference-in-differences estimates and the examination of potential mechanisms suggest that, at least in selective universities in non-English speaking countries, the switch to English-only instruction may affect college outcomes negatively at the time of transition but may not necessarily imply longer-run negative effects.
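The difference-in-differences logic behind a design like this can be sketched with made-up numbers (these values are illustrative assumptions, not the study's data): the estimator compares the pre/post change in the affected cohorts against the pre/post change in an unaffected comparison group, netting out any trend common to both.

```python
# Minimal difference-in-differences sketch with hypothetical GPA means.
# "treated" = cohorts facing the English-only switch; "control" = an
# unaffected comparison group. All numbers are invented for illustration.
gpa = {
    ("treated", "pre"): 3.20,
    ("treated", "post"): 3.05,   # drop after the switch
    ("control", "pre"): 3.10,
    ("control", "post"): 3.08,   # small secular trend shared by both groups
}

did = (
    (gpa[("treated", "post")] - gpa[("treated", "pre")])
    - (gpa[("control", "post")] - gpa[("control", "pre")])
)
print(round(did, 2))  # prints -0.13: the change net of the common trend
```

The key identifying assumption is parallel trends: absent the switch, treated cohorts' outcomes would have moved like the comparison group's.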