In its most recent version, ChatGPT (Chat Generative Pre-trained Transformer), OpenAI’s large language model (LLM), responded to the author’s prompt about its capacity to help and to harm public health. In a nutshell, it can help by (i) providing up-to-date and accurate information, (ii) supporting research and analysis, (iii) improving access to healthcare information, (iv) supporting telemedicine and (v) improving health communication. Conversely, it can harm through (i) misinformation, (ii) misdiagnosis, (iii) bias, (iv) privacy breaches and (v) disruption of healthcare professional–patient relationships. More fundamental than the five attempts proposed in a recent article on Artificial Intelligence (AI) robotics in the medical and health industries for elderly companionship1 is the philosophical question of the nature of communication, or of intelligence itself. How, then, will the AI robot communicate with its supposed elderly or patient companion? How does one begin to understand what may be coined here as Intelligent Patient Companionship (IPC)? LLMs, and the supposed ‘intelligence’ with which they communicate, beget reflexive scrutiny.