    Artificial Intelligence & The Future

    OpenAI’s new reasoning models found to hallucinate more frequently

By precious | April 25, 2025 (Updated: June 13, 2025)

OpenAI’s latest reasoning models, o3 and o4-mini, represent significant advances in the ever-evolving world of Artificial Intelligence (AI). However, the new models hallucinate, generating false or fabricated information, at substantially higher rates than their predecessor, o1.

The problem of AI hallucinations, where models generate plausible but false information, has long been recognized as one of the most persistent challenges in the development of Artificial Intelligence. Historically, each new generation of AI models has hallucinated less than the one before it. The recent release of OpenAI’s o3 and o4-mini, however, breaks that pattern of progress.

The new models, built for state-of-the-art performance on complex reasoning tasks, have proven unexpectedly prone to confident but inaccurate answers. Internal evaluations from OpenAI reveal that both o3 and o4-mini hallucinate more frequently than earlier reasoning models such as o1, o1-mini, and o3-mini, as well as OpenAI’s conventional “non-reasoning” model GPT-4o.

On OpenAI’s PersonQA benchmark, o3 hallucinates in 33% of responses, roughly double the rate of previous models (16% for o1 and 14.8% for o3-mini). o4-mini performs even worse, hallucinating in 48% of responses, nearly half of all cases.
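To make those figures concrete, here is a minimal sketch of how a hallucination rate like the ones above could be computed from graded benchmark responses. The data structure, grading, and sample data are hypothetical illustrations, not OpenAI’s actual PersonQA harness.

```python
# Hypothetical sketch: computing a hallucination rate from graded responses.
# This is NOT OpenAI's PersonQA harness; names and data are illustrative.

from dataclasses import dataclass

@dataclass
class GradedResponse:
    question: str
    answer: str
    is_hallucination: bool  # True if the answer asserts false information

def hallucination_rate(responses: list[GradedResponse]) -> float:
    """Fraction of graded responses that contained a false claim."""
    if not responses:
        return 0.0
    return sum(r.is_hallucination for r in responses) / len(responses)

# Example: 33 hallucinated answers out of 100 reproduces the 33% figure
# reported for o3 on PersonQA.
sample = [GradedResponse("q", "a", i < 33) for i in range(100)]
print(f"{hallucination_rate(sample):.0%}")  # prints "33%"
```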

The regression is puzzling because o3 and o4-mini excel at coding and math tasks. For example, o3 scores 69.1% on the SWE-bench coding benchmark, outperforming many rivals, according to OpenAI’s report.

According to OpenAI’s system card, o3 tends to make more claims overall, which results in both more accurate assertions and more inaccurate or hallucinated statements. This suggests that the model’s increased verbosity and willingness to make claims may be directly tied to its higher hallucination rate.
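A toy calculation illustrates the system card’s point: if a model asserts more claims at a similar per-claim accuracy, both its correct and its incorrect claim counts grow. The numbers below are illustrative, not drawn from OpenAI’s data.

```python
# Toy illustration (not OpenAI's analysis): more assertions at the same
# per-claim accuracy yield more correct claims AND more errors.

def claim_counts(total_claims: int, per_claim_accuracy: float) -> tuple[int, int]:
    """Return (correct, incorrect) claim counts for a given assertion volume."""
    correct = round(total_claims * per_claim_accuracy)
    return correct, total_claims - correct

# A terser model vs. a more assertive one at the same 80% per-claim accuracy:
print(claim_counts(10, 0.80))  # (8, 2)
print(claim_counts(20, 0.80))  # (16, 4) -- more right answers, more errors
```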

In practice, this regression could fuel misinformation and erode trust, especially in fields like healthcare and finance where accuracy is critical. Organizations in those fields may find older models like o1 safer despite their weaker reasoning.

The unexpected regression in factual reliability raises important questions about the trade-offs involved in enhancing AI reasoning capabilities, and about the challenges of ensuring accuracy in increasingly sophisticated AI systems. One possibility is that the advanced reasoning methods behind these models prioritize complex problem-solving over factual accuracy.

This is particularly concerning because OpenAI itself has acknowledged uncertainty about the increase in hallucinations in its newer models. In its technical documentation, the company says that “more research is needed” to understand why hallucinations worsen as reasoning models are scaled up.

For now, older models like o1 remain the safer choice for factual queries, while o3 and o4-mini are better suited to tasks where creativity outweighs precision. Transparency about these limitations will be essential to maintaining trust as the world of AI continues to evolve.
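As a rough illustration of that guidance, one could route queries by whether factual precision or open-ended reasoning matters more. The routing policy below is a hypothetical sketch based on the figures reported in this article, not an official recommendation.

```python
# Hypothetical routing sketch: choose a model by task requirements.
# Model names mirror those discussed above; the policy is illustrative.

def pick_model(needs_factual_precision: bool) -> str:
    # Reported PersonQA hallucination rates: o1 ~16%, o4-mini ~48%.
    return "o1" if needs_factual_precision else "o4-mini"

print(pick_model(True))   # o1      -- factual query, favor reliability
print(pick_model(False))  # o4-mini -- creative task, favor reasoning
```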

Niko Felix, an OpenAI spokesperson, said in an email to TechCrunch: “Addressing hallucinations across all our models is an ongoing area of research, and we’re continually working to improve their accuracy and reliability.”

Tags: advanced reasoning AI, AI creative vs factual, AI false information, AI hallucination problem, AI hallucination research, AI in Finance, AI in healthcare, AI misinformation risks, AI model regression, AI model reliability, AI performance trade-offs, AI system card, AI trust issues, factual accuracy AI, GPT-4o comparison, hallucination rates in AI, Niko Felix OpenAI, o1 vs o3, o3 vs o1-mini, o4-mini hallucination rate, OpenAI model testing, OpenAI o3 model, OpenAI o4-mini, OpenAI PersonQA benchmark, reasoning models AI, SWE-bench coding test

    I’m Precious Amusat, Phronews’ Content Writer. I conduct in-depth research and write on the latest developments in the tech industry, including trends in big tech, startups, cybersecurity, artificial intelligence and their global impacts. When I’m off the clock, you’ll find me cheering on women’s footy, curled up with a romance novel, or binge-watching crime thrillers.
