Artificial intelligence (AI) combined with smart sensors or conversational agents is becoming part of our lives and has the potential to improve aging in place by supporting independent living. Trust and willingness to use AI seem essential for actual adoption. Explainable AI (XAI), originating from the recognition that AI infrastructures often operate in an opaque, "black-boxed" way, might assist in understanding the underlying logic of AI-made decisions. However, it is unknown what older adults think about XAI and what explainability they consider necessary.
I conducted 28 semi-structured interviews to explore XAI in the worlds of older adults. Inductive analysis was applied to examine what older adults know about AI, how they imagine our society with AI, what XAI means to them, and how they value explainability.
The analysis resulted in nine themes: four concerning older adults' knowledge of AI, including definitions, knowledge acquisition, attitudes, and expectations; and five capturing their views on XAI, in which XAI is not desirable, XAI is necessary, or collaboration is preferred.
These visions of XAI differ from current technological discourses. For older adults, XAI is not only technological but a constellation between humans and machines. Most argue that a form of joint decision-making is important. As a follow-up, I recommend exploring the enactment of XAI in real life and investigating what form or degree of XAI is needed, and for whom.