Decision, Vol 12(4), Oct 2025, 287-314; doi:10.1037/dec0000264
Recent research and technological advances have raised ethical questions about human–machine interaction, particularly how users evaluate machine decision-makers and how those evaluations may differ from their evaluations of human decision-makers. The literature contains notably few experiments on how people view ethical decision making by a machine compared with a human, particularly when contextual features of a given scenario yield contradictory prescriptions, from a utilitarian versus a deontological perspective, about what ought to be done. The current research used the consequences, norms, and inaction model to explore differences in participants’ perceptions of human and machine decision-makers across 12 new scenarios in which utilitarian and deontological norms toward (in)action are or are not aligned. Across two studies, participants preferred machines over humans as makers of utilitarian ethical decisions. This preference was qualified by participants’ own preference for action: in Study 1, those with a tendency toward inaction were more likely to endorse an action when the referent was a machine and the benefits of a decision outweighed its costs. Participants in Study 1 were also more accepting of normative violations from a human than from a machine. Results from Study 2 largely replicated those of Study 1 but also suggest that ethical decision making may not be binary, as the consequences, norms, and inaction model assumes, and may instead require continuous criteria to capture nuance in decision-making processes when individuals are paired with human and machine referents.