information for practice

news, new scholarship & more from around the world

Trusting an algorithm can be a tricky and sticky thing.

Decision, Vol 11(3), Jul 2024, 404-419; doi:10.1037/dec0000229

What information guides individuals to trust an algorithm? We examine this question across four experiments, which consistently found that explanations and relative performance information increased ratings of trust in an algorithm relative to a human expert. When participants learned of the algorithm’s shortcomings, trust was broken but, importantly, could be restored. Strikingly, despite these increases and restorations of trust, few individuals changed their overall preferred agent for each commonplace task (e.g., driving a car), suggesting a conceptual ceiling on the extent to which people will trust algorithmic decision aids. Initial preferences for an algorithm were thus “sticky” and largely resistant to change, despite large numeric shifts in trust ratings. We discuss the theoretical and practical implications of this work for research on trust in algorithms and identify important contributions to understanding when information can improve people’s willingness to trust decision aid algorithms. (PsycInfo Database Record (c) 2024 APA, all rights reserved)

Read the full article ›

Posted in: Journal Article Abstracts on 08/29/2024 | Link to this post on IFP

© 1993-2025 Dr. Gary Holden. All rights reserved.

gary.holden@nyu.edu
@Info4Practice