Phronews
    Artificial Intelligence & The Future

    NVIDIA Latest Chips Show Significant Gains in AI Training Performance

By oluchi · June 10, 2025 (Updated: June 11, 2025)
    Image Generated from Communications Today

The artificial intelligence ecosystem is evolving at an exhilarating pace, with demand for more powerful and efficient AI models driving the wheels of innovation. At the forefront of this revolution stands NVIDIA, constantly pushing the limits of what AI can do.

Recent data released on June 4 by MLCommons, a non-profit organization dedicated to AI performance evaluations, showed that NVIDIA's chips, especially those built on the Blackwell architecture, are delivering significant gains in AI training performance, outperforming their predecessors and setting a new bar for the industry.

    The Gold Standard: Understanding MLPerf Training Benchmarks

MLCommons develops MLPerf, a benchmark suite designed by a consortium of more than 125 AI leaders from academia, research labs, and industry to provide unbiased evaluations of training and inference performance for hardware, software, and services. The suite is continually updated to reflect the latest advances in AI.

MLPerf is widely recognized as the industry's most trusted and rigorous benchmark for evaluating AI performance. Its tests measure how quickly a platform can train various AI models to predetermined quality thresholds, encompassing diverse workloads from image recognition and object detection to natural language processing and, most notably, large language model pre-training and fine-tuning.
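MLPerf's core "time to train" metric can be sketched in a few lines. The harness below is an illustrative simplification, not MLPerf code; `train_one_epoch`, the toy accuracy curve, and the 0.75 target are all hypothetical stand-ins for a real training loop and quality threshold.

```python
import time

def time_to_train(train_one_epoch, evaluate, target_quality):
    """Run training until the evaluation metric reaches the target,
    then return elapsed wall-clock seconds (the quantity MLPerf reports)."""
    start = time.monotonic()
    while evaluate() < target_quality:
        train_one_epoch()
    return time.monotonic() - start

# Toy stand-in: "accuracy" improves by 0.1 per epoch until it clears 0.75.
state = {"acc": 0.0}
elapsed = time_to_train(
    train_one_epoch=lambda: state.update(acc=state["acc"] + 0.1),
    evaluate=lambda: state["acc"],
    target_quality=0.75,
)
print(f"reached target in {elapsed:.4f}s at acc={state['acc']:.1f}")
```

The key design point is that the clock stops only when the model reaches a fixed quality bar, so submitters cannot trade accuracy for speed.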

The MLPerf Training v5.0 suite introduced a new and more demanding benchmark: Llama 3.1 405B pre-training. This model, representative of current state-of-the-art LLMs, has a whopping 405 billion parameters, making it a true stress test for the modern hardware and software stacks needed to keep up with the escalating demands of training next-generation AI.
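To get a feel for why 405 billion parameters stresses hardware, here is a back-of-envelope memory estimate. The bytes-per-parameter figures are common rules of thumb (bf16 weights at 2 bytes, Adam-style optimizer state at roughly 12 bytes for an fp32 master copy plus two moments), not MLPerf or NVIDIA specifications, and the estimate deliberately ignores activations and gradients.

```python
# Rough memory footprint of a 405B-parameter model during training.
PARAMS = 405e9

weights_gb = PARAMS * 2 / 1e9         # bf16 weights: 2 bytes/param
optimizer_gb = PARAMS * 12 / 1e9      # fp32 master weights + two Adam moments
total_gb = weights_gb + optimizer_gb  # excludes activations and gradients

print(f"weights: {weights_gb:,.0f} GB")
print(f"optimizer state: {optimizer_gb:,.0f} GB")
print(f"total: {total_gb:,.0f} GB")
```

Even under these conservative assumptions the state runs to several terabytes, which is why training a model of this size requires thousands of GPUs sharded together.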

    MLPerf Training v5.0: Blackwell Dominates Across the Board

The recent MLCommons training benchmarks provide compelling, empirical evidence of Blackwell's supremacy. On a per-chip basis, Blackwell delivered 2.6X the training performance of the previous-generation Hopper chips (which pack 80 billion transistors) on large AI training workloads. In a remarkable demonstration of its prowess, a cluster of 2,496 Blackwell GPUs, part of the NVIDIA DGX GB200 NVL72 system, completed the Llama 3.1 405B pre-training benchmark in 27 minutes.

Its predecessor, Hopper, would have required over three times as many chips to complete the same task in that time. This underscores how much more efficient Blackwell's architectural advancements are, with its high-density liquid-cooled racks and fifth-generation NVIDIA NVLink and NVLink Switch interconnect technologies for scale-up.
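The chip-count comparison above translates directly into a GPU-budget gap. The arithmetic below uses only the figures quoted in the text (2,496 Blackwell GPUs, 27 minutes, "over three times as many" Hopper chips); the 3x multiplier is therefore a lower bound, not a measured Hopper result.

```python
# GPU-minutes needed for the Llama 3.1 405B pre-training run, per the article.
blackwell_gpus, minutes = 2496, 27
hopper_gpus = blackwell_gpus * 3  # lower bound from "over three times as many"

blackwell_gpu_minutes = blackwell_gpus * minutes
hopper_gpu_minutes = hopper_gpus * minutes

print(f"Blackwell: {blackwell_gpu_minutes:,} GPU-minutes")
print(f"Hopper (lower bound): {hopper_gpu_minutes:,} GPU-minutes")
```

In other words, matching the wall-clock time would cost at least three times the GPU-minutes, before accounting for the extra power, cooling, and interconnect overhead of a larger cluster.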

Beyond raw processing speed, the MLCommons results also highlighted NVIDIA's exceptional scaling efficiency: when expanding from 512 to 2,496 GPUs, the GB200 NVL72 system demonstrated 90% strong scaling efficiency.
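Strong scaling efficiency compares the speedup actually achieved against the ideal linear speedup from adding GPUs. The sketch below uses the 512-to-2,496 GPU counts from the text, but the training times (120 and 27.35 minutes) are hypothetical values chosen only to illustrate what a 90% figure means; MLCommons publishes the real per-submission numbers.

```python
def strong_scaling_efficiency(n_small, t_small, n_large, t_large):
    """Actual speedup divided by ideal (linear) speedup when growing a cluster."""
    ideal = n_large / n_small   # e.g. 2496 / 512 = 4.875x
    actual = t_small / t_large  # measured wall-clock improvement
    return actual / ideal

# Hypothetical example: 120 min on 512 GPUs vs 27.35 min on 2,496 GPUs.
eff = strong_scaling_efficiency(512, 120.0, 2496, 27.35)
print(f"{eff:.0%}")
```

At 100% efficiency, 4.875x the GPUs would cut the time by exactly 4.875x; a 90% figure means only about 10% of that ideal gain is lost to communication and synchronization overhead, which is unusually good at this scale.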

    Dominance Across Diverse AI Workloads

The NVIDIA AI platform delivered the highest performance at scale on all seven benchmarks in the MLPerf Training v5.0 suite:

• LLM Pre-Training (Llama 3.1 405B) – trained in 20.8 minutes
• LLM Fine-Tuning (Llama 2 70B-LoRA) – trained in 0.56 minutes
• Text-to-Image (Stable Diffusion v2) – trained in 1.4 minutes
• Graph Neural Network (R-GAT) – trained in 0.84 minutes
• Recommender (DLRM-DCNv2) – trained in 0.7 minutes
• Natural Language Processing (BERT) – trained in 0.3 minutes
• Object Detection (RetinaNet) – trained in 1.4 minutes

    Implications for the Future of AI

• Faster AI Progress: With models training at faster speeds, researchers and AI developers can try out more ideas and improve their models more quickly than before, accelerating AI advancement.
• More Organizations Can Work with Big AI: As the training process becomes more efficient, it will be easier for more organizations to work with large-scale AI models like Llama 3.1.
• NVIDIA Stays Ahead: MLPerf Training v5.0 solidifies NVIDIA's position as the undisputed leader in AI training hardware, which is vital for the company as demand for its AI technology continues to skyrocket.
• Emphasis on Interconnects and Software: The remarkable scaling NVIDIA achieved shows how important superfast connections between chips (like NVLink) and smart software (like NVIDIA's CUDA-X libraries) are for getting the most out of the hardware.

As AI models grow larger and more complex, being able to train them quickly and efficiently is paramount. NVIDIA's MLPerf v5.0 results are not just about faster chips; they point to where AI hardware is headed and how it will shape the AI landscape for years to come.

    oluchi

    I am a content writer with over three years of experience. I specialize in creating clear, engaging, and value-driven content across diverse niches, and I’m now focused on the tech and business space. My strong research skills, paired with a natural storytelling ability, enable me to break down complex topics into compelling, reader-friendly articles. As an avid reader and music lover, I bring creativity, insight, and a sharp eye for detail to every piece I write.
