Phronews
    Artificial Intelligence & The Future

NVIDIA's Latest Chips Show Significant Gains in AI Training Performance

By oluchi | June 10, 2025 (Updated: June 11, 2025)
    Image Generated from Communications Today

The artificial intelligence ecosystem is evolving at an exhilarating pace, as demand for more powerful and efficient AI models drives hardware innovation. At the forefront of this revolution stands NVIDIA, constantly pushing the limits of what AI systems can do.

Data released on June 4 by MLCommons, a non-profit organization dedicated to AI performance evaluation, showed that NVIDIA's chips, especially those built on the Blackwell architecture, are delivering significant gains in AI training performance, outperforming their predecessors and setting a new bar for the industry.

    The Gold Standard: Understanding MLPerf Training Benchmarks

MLCommons developed MLPerf, a benchmark suite designed by a consortium of over 125 AI leaders from academia, research labs, and industry. It provides unbiased evaluations of training and inference performance for hardware, software, and services, and is continually updated to reflect the latest advances in AI.

MLPerf is widely recognized as the industry's most trusted and rigorous benchmark for evaluating AI performance. Its tests measure how quickly a platform can train various AI models to predetermined quality thresholds, spanning workloads from image recognition and object detection to natural language processing and, most notably, large language model pre-training and fine-tuning.
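The time-to-train idea described above can be sketched in a few lines. This is a hedged, minimal illustration of the concept, not MLPerf reference code; `train_step`, `evaluate`, and the loop structure are assumptions for the sake of the example:

```python
import time

def time_to_train(state, train_step, evaluate, target_quality, max_steps=100_000):
    """Minimal sketch of a time-to-train measurement: wall-clock seconds
    until the model first reaches a predetermined quality threshold.
    `train_step` and `evaluate` are placeholder callables."""
    start = time.perf_counter()
    for _ in range(max_steps):
        state = train_step(state)
        if evaluate(state) >= target_quality:
            return time.perf_counter() - start
    raise RuntimeError("quality threshold not reached within max_steps")

# Toy usage: "training" just increments a counter toward the threshold.
elapsed = time_to_train(0, lambda s: s + 1, lambda s: s / 10, target_quality=1.0)
```

The key point is that the benchmark fixes the quality target and measures wall-clock time, so faster hardware and better software both show up directly in the score.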

The MLPerf Training v5.0 suite introduced a new and more demanding benchmark: Llama 3.1 405B pre-training. Representative of current state-of-the-art LLMs, the model's 405 billion parameters make it a true stress test for the hardware and software stacks needed to keep up with the escalating demands of training next-generation AI.

    MLPerf Training v5.0: Blackwell Dominates Across the Board

The recent MLCommons training benchmarks provide compelling empirical evidence of Blackwell's supremacy. On a per-chip basis, Blackwell chips delivered 2.6x the training performance of the previous-generation Hopper chips (which pack 80 billion transistors) on large AI systems. In a remarkable demonstration of its prowess, a cluster of 2,496 Blackwell GPUs, part of the NVIDIA DGX GB200 NVL72 system, completed the Llama 3.1 405B pre-training benchmark in 27 minutes.

Its predecessor, Hopper, would have required over three times as many units to complete the task in a comparable time. This underscores the efficiency of Blackwell's architectural advances, including its high-density liquid-cooled racks and fifth-generation NVIDIA NVLink and NVLink Switch interconnect technologies for scale-up.

Beyond raw processing speed, the MLCommons results also highlighted NVIDIA's exceptional scaling efficiency: when expanding from 512 to 2,496 GPUs, NVIDIA's GB200 NVL72 systems demonstrated 90% strong-scaling efficiency.
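Strong-scaling efficiency compares the speedup actually achieved to the ideal (linear) speedup when the same fixed workload is spread over more GPUs. A minimal sketch of the calculation follows; the 118.5-minute small-cluster time is an illustrative assumption, not a published figure, while the 512 and 2,496 GPU counts and the 27-minute result come from the article:

```python
def strong_scaling_efficiency(n_small, t_small, n_large, t_large):
    """Actual speedup divided by ideal linear speedup for a fixed workload."""
    ideal_speedup = n_large / n_small    # e.g. 2496 / 512 = 4.875x
    actual_speedup = t_small / t_large   # how much faster the larger run was
    return actual_speedup / ideal_speedup

# Hypothetical: if 512 GPUs took ~118.5 min and 2,496 GPUs took 27 min,
# the efficiency lands right around the reported 90%.
eff = strong_scaling_efficiency(512, 118.5, 2496, 27.0)
```

Efficiencies near 1.0 mean adding GPUs translates almost directly into shorter training times, which is why interconnect bandwidth matters so much at this scale.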

    Dominance Across Diverse AI Workloads

    The NVIDIA AI platform delivered the highest performance at scale on all seven benchmarks in the MLPerf Training v5.0 suite. They included:

• LLM Pre-Training (Llama 3.1 405B) — trained within 20.8 minutes
• LLM Fine-Tuning (Llama 2 70B-LoRA) — trained within 0.56 minutes
• Text-to-Image (Stable Diffusion v2) — trained within 1.4 minutes
• Graph Neural Network (R-GAT) — trained within 0.84 minutes
• Recommender (DLRM-DCNv2) — trained within 0.7 minutes
• Natural Language Processing (BERT) — trained within 0.3 minutes
• Object Detection (RetinaNet) — trained within 1.4 minutes
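For quick reference, the seven at-scale results above can be collected programmatically (times in minutes, exactly as quoted; this is a simple recap of the list, not additional MLPerf data):

```python
# MLPerf Training v5.0 time-to-train results quoted above, in minutes.
results_min = {
    "LLM Pre-Training (Llama 3.1 405B)": 20.8,
    "LLM Fine-Tuning (Llama 2 70B-LoRA)": 0.56,
    "Text-to-Image (Stable Diffusion v2)": 1.4,
    "Graph Neural Network (R-GAT)": 0.84,
    "Recommender (DLRM-DCNv2)": 0.7,
    "Natural Language Processing (BERT)": 0.3,
    "Object Detection (RetinaNet)": 1.4,
}

shortest = min(results_min, key=results_min.get)  # quickest benchmark to train
longest = max(results_min, key=results_min.get)   # most demanding benchmark
```

The spread is striking: the heaviest workload, Llama 3.1 405B pre-training, takes roughly 70 times longer than the lightest (BERT), even at full scale.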

    Implications for the Future of AI

• Faster AI Progress: With training completing faster, researchers and developers can try out more ideas and iterate on their models more quickly than before, accelerating the pace of AI advancement.
• Broader Access to Big AI: As training becomes more efficient, more organizations will find it feasible to work with large-scale models like Llama 3.1.
• NVIDIA Stays Ahead: The MLPerf Training v5.0 results solidify NVIDIA's position as the leader in AI training hardware, vital for the company as demand for its AI technology continues to skyrocket.
• Interconnects and Software Matter: The scaling NVIDIA achieved shows how important fast chip-to-chip interconnects (like NVLink) and optimized software (like NVIDIA's CUDA-X libraries) are for getting the most out of the hardware.

As AI models grow larger and more complex, the ability to train them quickly and efficiently is paramount. NVIDIA's MLPerf v5.0 results are not just about faster chips; they point to where AI hardware is headed and how it will shape the AI landscape for years to come.

