The Moral Psychology of Artificial Intelligence

Current Directions in Psychological Science, Ahead of Print.
Artificial intelligences (AIs), although often perceived as mere tools, have increasingly advanced cognitive and social capacities. In response, psychologists are studying people’s perceptions of AIs as moral agents (entities that can do right and wrong) and moral patients (entities that can be the targets of right and wrong actions). This article reviews the extent to which people see AIs as moral agents and patients and how they feel about such AIs. We also examine how characteristics of ourselves and of the AIs affect attributions of moral agency and patiency. We find multiple factors that contribute to attributions of moral agency and patiency in AIs, some of which overlap with attributions of morality to humans (e.g., mind perception) and some of which are unique (e.g., sci-fi fan identity). We identify several future directions, including studying agency and patiency attributions to the latest generation of chatbots and to the likely more advanced future AIs now being rapidly developed.

Read the full article ›

Posted in: Journal Article Abstracts on 12/02/2023
