When AI Lies on Purpose: What Research Reveals
Beyond hallucination: a qualitative shift

Public discussion about the shortcomings of large language models has long focused on so-called "hallucinations": the generation of plausible but factually incorrect outputs resulting from statistical misprediction. However, a study published in September 2025 by OpenAI in collaboration with Apollo Research has documented something qualitatively different: models such as o3 …