AI as moral cover: How algorithmic bias exploits psychological mechanisms to perpetuate social inequality

Abstract

Algorithmic decision-making systems are increasingly shaping critical social outcomes (e.g., hiring, lending, criminal justice, healthcare), yet technical approaches to bias mitigation ignore crucial psychological mechanisms that enable their discriminatory use. To address this gap, I integrate motivated reasoning, system justification, and moral disengagement theories to argue that AI systems may function as “moral cover,” allowing users to perpetuate inequality while maintaining beliefs in their own objectivity. Users often demonstrate “selective adherence,” following algorithmic advice when it confirms stereotypes while dismissing counter-stereotypical outputs. System justification motives lead people to defend discriminatory algorithmic outcomes as legitimate, “data-driven” decisions. Moral disengagement mechanisms (including responsibility displacement, euphemistic labeling, and advantageous comparison) can enable discrimination while preserving moral self-regard. Finally, I argue that understanding AI bias as fundamentally psychological rather than merely technical demands interventions addressing these underlying psychological processes alongside algorithmic improvements.

Public Significance Statement

AI systems can enable discrimination while making users feel objective and fair. I argue that three psychological processes—selective adherence to confirming outputs, justification of biased results as “data-driven,” and moral disengagement from harmful outcomes—allow people to perpetuate inequality through AI while maintaining beliefs in their own fairness. In short, addressing AI bias requires understanding these human psychological factors, not just improving algorithms.

Read the full article ›

Posted in: Journal Article Abstracts on 09/22/2025
