
OpenAI’s latest reasoning models, o3 and o4-mini, represent significant advancements in the ever-evolving world of Artificial Intelligence (AI). However, these new models hallucinate, generating false or fabricated information, at substantially higher rates than their predecessor, o1.
The problem of AI hallucinations, where models generate plausible but false information, has long been recognized as one of the most persistent challenges in the development of Artificial Intelligence. Traditionally, each new generation of models has hallucinated less than its predecessors. The recent release of OpenAI’s o3 and o4-mini, however, appears to be an outlier that breaks this pattern of progress.
These new models, built for state-of-the-art performance on complex reasoning tasks, have turned out to be unexpectedly overconfident in the answers they give. Internal evaluations from OpenAI reveal that both o3 and o4-mini hallucinate more frequently than earlier reasoning models such as o1, o1-mini, and o3-mini, as well as OpenAI’s conventional “non-reasoning” model, GPT-4o.
Internal testing also shows that o3 hallucinates in 33% of responses on OpenAI’s PersonQA benchmark, roughly double the rate of previous models (16% for o1 and 14.8% for o3-mini). o4-mini fares even worse, hallucinating in 48% of responses, nearly half of all cases.
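As a rough illustration of what these percentages measure (not OpenAI’s actual evaluation code or PersonQA data), a hallucination rate on a QA benchmark is essentially the share of graded responses flagged as containing a fabricated claim. The grading results below are hypothetical stand-ins:

```python
# Illustrative sketch only: the graded responses are made-up stand-ins,
# not OpenAI's PersonQA data or grading methodology.

def hallucination_rate(graded_responses):
    """Fraction of responses flagged as containing fabricated information."""
    if not graded_responses:
        return 0.0
    flagged = sum(1 for r in graded_responses if r["hallucinated"])
    return flagged / len(graded_responses)

# Hypothetical grading results for two models on the same 100 questions.
model_a = [{"hallucinated": False}] * 67 + [{"hallucinated": True}] * 33
model_b = [{"hallucinated": False}] * 52 + [{"hallucinated": True}] * 48

print(f"model_a: {hallucination_rate(model_a):.0%}")  # 33%
print(f"model_b: {hallucination_rate(model_b):.0%}")  # 48%
```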
The regression is puzzling because o3 and o4-mini excel at coding and math tasks. For example, o3 scores 69.1% on the SWE-bench coding benchmark, outperforming many rivals, according to OpenAI’s report.
According to OpenAI’s system card, o3 tends to make more claims overall, which results in both more accurate assertions and a larger number of inaccurate or hallucinated ones. This suggests that the model’s increased verbosity and willingness to make claims may be directly related to its higher hallucination rate.
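A quick back-of-the-envelope sketch shows why this trade-off arises: if a model asserts more claims per answer, the absolute counts of both correct and hallucinated claims grow, even when per-claim accuracy barely changes. The figures below are invented for illustration and are not drawn from OpenAI’s system card:

```python
# Toy numbers only: a more verbose model produces more correct claims AND
# more hallucinated claims, despite similar per-claim accuracy.

def claim_counts(total_claims, per_claim_accuracy):
    correct = round(total_claims * per_claim_accuracy)
    return correct, total_claims - correct

for name, claims, accuracy in [("terse model", 100, 0.90),
                               ("verbose model", 250, 0.88)]:
    correct, wrong = claim_counts(claims, accuracy)
    print(f"{name}: {correct} accurate claims, {wrong} hallucinated claims")
# terse model: 90 accurate claims, 10 hallucinated claims
# verbose model: 220 accurate claims, 30 hallucinated claims
```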
In practice, this could fuel misinformation and erode trust, especially in fields like healthcare and finance, where accuracy is critical. Users in those domains may find older models like o1 safer despite their weaker reasoning.
The unexpected regression in factual reliability raises important questions about the trade-offs involved in enhancing AI reasoning capabilities, and about the challenge of ensuring accuracy in increasingly sophisticated AI systems. One possibility is that the models’ advanced reasoning methods prioritize complex problem-solving over factual accuracy.
This situation is particularly concerning because OpenAI itself has acknowledged the uncertainty surrounding the increase in hallucinations in its newer models. In its technical documentation, the company says that “more research is needed” to understand why hallucinations escalate as reasoning models are scaled up.
For now, older models like o1 remain safer for factual queries, while o3 and o4-mini are best suited for tasks where creativity outweighs precision. Transparency about these limitations will be important to maintain trust as the world of AI continues to evolve.
Niko Felix, an OpenAI spokesperson, said in an email to TechCrunch: “Addressing hallucinations across all our models is an ongoing area of research, and we’re continually working to improve their accuracy and reliability.”