Artificial intelligence (AI) is transforming clinical practice while simultaneously raising concerns about trust. Drawing on complexity theory, this paper argues that the crisis of trust in medical AI is rooted in multiple forms of uncertainty, including non-causal statistical relations, system-level complexity and the irreducibility of clinical judgement. It introduces a ‘U-map’ (Uncertainty Map), a conceptual tool that links specific forms of uncertainty to role-appropriate clinical uses such as screening, triage or deliberation aids. Using this map, the paper calibrates the claims made for AI models against the distinct epistemic roles they play in clinical practice, and develops a multidimensional account of trust that spans technological reliability, institutional governance and cultural–emotional orientations. On this basis, the paper sketches a posthuman model of care in which human–machine collaboration and distributed accountability offer a more adequate response to the normative and epistemic challenges posed by medical AI.