Machine learning tools have drawn increasing interest from public policy practitioners, yet our understanding of the effectiveness and equity of such tools when paired with human decision makers is limited. We implement a randomized controlled trial to evaluate the effects of a reputable algorithmic decision aid tool used by a U.S. child welfare agency. In the absence of a tool, workers more often investigate families predicted as high-risk. When the tool is available, however, workers surprisingly become less likely to investigate predicted high-risk families and more likely to investigate low-risk families compared to the control group. Despite this counterintuitive result, we find potentially beneficial effects of tool use on repeat allegations, Black-white racial disparities, and improved targeting of child welfare visits. We analyze rich text data from team discussions to understand mechanisms for tool use on worker decision making, and link hospital records to estimate downstream effects on child injury. Our results highlight the potential unintended impacts of human-algorithm interaction and have direct implications for the rollout of predictive risk modeling in other high-stakes contexts.