Growing numbers of asylum seekers across Europe have heightened pressure on governments to deploy technologies that help immigration systems meet the humanitarian standards of international law. This article analyses the potential of hybrid intelligence (HI), a machine learning (ML) system that both supervises and is supervised by human intelligence, to assist asylum seekers and immigration officers alike in performing fair and just assessments, and it addresses the theoretical underpinnings of what hybridity entails from the perspective of stakeholders and humanitarian systems. While some aspects of ML show promise for reducing bias in immigration decisions, the technology suffers from inherent biases of its own; in addition, technological mediation poses unforeseen, unintended, and subtle threats to humanitarian missions. By analysing ML algorithms currently employed in refugee status determination (RSD) pilot programs and immigration control, the article synthesizes common complications of using assistive technology in RSD, with particular attention to the resulting theoretical reconfigurations of refugee identity. Conceptually, the article extends the model that biometrics researchers and ethnographers have termed ‘ID entity’ by analysing the latent consequences of technological mediation in asylum cases, drawing on use cases such as the German and Canadian immigration services’ pilot programs and automated border-screening pilot projects such as iBorderCtrl, among others. Finally, several hypothetical scenarios are presented to concretize and advance theoretical inquiry into the use of HI in asylum seeker interviews, with special focus on the requisite criterion of a well-founded fear of persecution.