Abstract
In the context of increasing delegation of administrative discretion to advanced technologies, this study quantitatively assesses key public values that may be at risk when governments employ automated decision systems (ADS). Drawing on the public value failure framework coupled with an experimental methodology, we measure and compare the salience of three such values: fairness, transparency, and human responsiveness. Using a preregistered design, we administer a survey experiment, inspired by prominent ADS applications in child welfare and criminal justice, to 1,460 American adults. The results provide clear causal evidence that certain public value failures associated with artificial intelligence have significant negative effects on citizens' evaluations of government. We find substantial negative citizen reactions when fairness and transparency are not realized in the implementation of ADS. These results hold across both policy context and political ideology and persist even when respondents are not themselves personally affected.