Background: Large language models (LLMs) are increasingly applied in healthcare, yet concerns remain that their nursing care recommendations may reflect patients' sociodemographic attributes rather than clinical needs.

Objective: To investigate potential biases in nursing care plans generated by LLMs, we examined whether outputs differ systematically by patients' sociodemographic characteristics and assessed the implications for equitable nursing care.

Methods: We prompted GPT with a standardized clinical scenario to generate care plans for 96 sociodemographic identity combinations, yielding 9,600 generated plans in total (100 per combination). We conducted statistical analyses (t-tests and ANOVA) of how text length and the frequency of physiological and psychological nursing terms varied across sociodemographic factors. Data processing and visualization were performed in Python.

Results: The analysis revealed significant sociodemographic biases in LLM-generated nursing care plans. Female patients received shorter care plans (t = 4.864, P
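
As a minimal sketch of the kind of comparison the Methods describe (not the authors' code), the snippet below computes care-plan word counts and runs a t-test on length by patient sex. The file name and column names ("sex", "care_plan") are assumptions for illustration only.

```python
# Hypothetical illustration of the length comparison described in the Methods.
# Assumes a CSV export of the 9,600 generated plans with columns "sex" and
# "care_plan"; these names are placeholders, not the study's actual data schema.
import pandas as pd
from scipy import stats

plans = pd.read_csv("generated_care_plans.csv")
plans["word_count"] = plans["care_plan"].str.split().str.len()

female = plans.loc[plans["sex"] == "female", "word_count"]
male = plans.loc[plans["sex"] == "male", "word_count"]

# Welch's t-test (no equal-variance assumption) on plan length by sex
t_stat, p_value = stats.ttest_ind(female, male, equal_var=False)
print(f"t = {t_stat:.3f}, P = {p_value:.4f}")
```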