Decision, Vol 11(3), Jul 2024, 404-419; doi:10.1037/dec0000229
What information leads individuals to trust an algorithm? We examine this question across four experiments that consistently found that explanations and relative performance information increased ratings of trust in an algorithm relative to a human expert. When participants learn of the algorithm’s shortcomings, we find that trust can be broken but, importantly, also restored. Strikingly, despite these increases and restorations of trust, few individuals changed their overall preferred agent for each commonplace task (e.g., driving a car), suggesting a conceptual ceiling on the extent to which people will trust algorithmic decision aids. Initial agent preferences were thus “sticky” and largely resistant to change, despite large numeric shifts in trust ratings. We discuss the theoretical and practical implications of this work for research on trust in algorithms and identify important contributions to understanding when information can improve people’s willingness to trust decision aid algorithms.