
information for practice

news, new scholarship & more from around the world



Beyond algorithmic trust: interpersonal aspects on consent delegation to LLMs

Consent-GPT: is it ethical to delegate procedural consent to conversational AI?

In their article ‘Consent-GPT: is it ethical to delegate procedural consent to conversational AI?’, Allen et al.1 explore the ethical complexities involved in handing over parts of the process of obtaining medical consent to conversational Artificial Intelligence (AI) systems, that is, AI-driven large language models (LLMs) trained to interact with patients, inform them about upcoming medical procedures and assist in the process of obtaining informed consent.1 They focus specifically on challenges related to accuracy (4–5), trust (5), privacy (5), click-through consent (5) and responsibility (5–6), alongside some pragmatic considerations (6). While the authors competently navigate these critical issues and present several key perspectives, we posit that their discussion of trust in what they refer to as ‘Consent-GPT’ significantly underestimates one vital factor: the interpersonal aspect of trust.

Admittedly, this interpersonal aspect is not completely overlooked….

Read the full article ›

Posted in: Journal Article Abstracts on 02/06/2024


© 1993-2025 Dr. Gary Holden. All rights reserved.
