Just like humans, artificial intelligence (AI) is capable of saying it isn’t racist while acting as if it were. Large language models (LLMs) such as GPT-4 output racist stereotypes about speakers of African American English (AAE), even when they have been trained not to connect overtly negative stereotypes with Black people, new research has found. According to the study, published today in Nature, LLMs also associate speakers of AAE with less prestigious jobs, and in imagined courtroom scenarios they are more likely to convict these speakers of crimes or sentence them to death.