
Artificial intelligence (AI) models are known to confidently conjure up fake citations. When OpenAI released GPT-5, a suite of large language models (LLMs), last month, the company said it had reduced the frequency of fake citations and other kinds of ‘hallucination’, as well as of ‘deceptions’, in which an AI claims to have performed a task it hasn’t. With GPT-5, OpenAI is bucking an industry-wide trend: newer AI models designed to mimic human reasoning have tended to generate more hallucinations than their predecessors. GPT-5 beat its predecessors on a benchmark that tests a model’s ability to produce responses backed by accurate citations. But hallucinations remain inevitable, because of how LLMs work: they generate text by predicting a statistically plausible next word, not by looking up facts.
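
To see why, consider a toy sketch of next-token generation. The mechanism, sampling each word from a learned probability distribution over continuations, is how real LLMs operate, but the vocabulary, probabilities and names below are invented purely for illustration:

```python
import random

# A toy "language model": for each token it knows only a distribution
# over plausible next tokens. A real LLM learns billions of such
# statistics from text; everything here is made up for illustration.
NEXT_TOKEN = {
    "<cite>": [("Smith", 0.5), ("Chen", 0.3), ("Garcia", 0.2)],
    "Smith": [("et al.,", 1.0)],
    "Chen": [("et al.,", 1.0)],
    "Garcia": [("et al.,", 1.0)],
    "et al.,": [("2019,", 0.4), ("2021,", 0.35), ("2023,", 0.25)],
    "2019,": [("Nature.", 0.5), ("Science.", 0.5)],
    "2021,": [("Nature.", 0.5), ("Science.", 0.5)],
    "2023,": [("Nature.", 0.5), ("Science.", 0.5)],
}

TERMINAL = {"Nature.", "Science."}  # tokens that end the 'reference'


def sample(options):
    """Draw one token according to its probability."""
    r = random.random()
    for token, p in options:
        r -= p
        if r < 0:
            return token
    return options[-1][0]


def generate_citation():
    """Chain next-token predictions until a journal name is produced."""
    token, out = "<cite>", []
    while token not in TERMINAL:
        token = sample(NEXT_TOKEN[token])
        out.append(token)
    return " ".join(out)


# Every individual step is statistically plausible, yet the assembled
# reference need not correspond to any real paper: a hallucination.
print(generate_citation())  # e.g. "Chen et al., 2021, Science."
```

The point of the sketch is that nothing in the generation loop ever consults a database of real papers; each word is chosen only because it plausibly follows the previous one, which is why a fluent but entirely fabricated citation can emerge.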