    Artificial Intelligence & The Future

    NVIDIA's Latest Chips Show Significant Gains in AI Training Performance

    By oluchi | June 10, 2025 (Updated: June 11, 2025)
    Image: Communications Today

    The artificial intelligence ecosystem is evolving at an exhilarating pace, with demand for more powerful and efficient AI models driving the wheels of innovation. At the forefront of this revolution stands NVIDIA, constantly pushing the limits of what AI can do.

    Data released on June 4 by MLCommons, a non-profit organization dedicated to AI performance evaluation, showed that NVIDIA's chips, especially those built on the Blackwell architecture, are delivering significant gains in AI performance, outperforming their predecessors and setting a new bar for the industry.

    The Gold Standard: Understanding MLPerf Training Benchmarks

    MLCommons developed MLPerf, a benchmark suite designed by a consortium of more than 125 AI leaders from academia, research labs, and industry to provide unbiased evaluations of training and inference performance across hardware, software, and services. The suite is continually updated to keep pace with the latest advances in AI.

    MLPerf is widely recognized as the industry's most trusted and rigorous benchmark for evaluating AI performance. Its tests measure how quickly a platform can train various AI models to predetermined quality thresholds, spanning diverse workloads from image recognition and object detection to natural language processing and, most prominently, large language model pre-training and fine-tuning.

    The MLPerf Training v5.0 suite introduced a new, more demanding benchmark: Llama 3.1 405B pre-training. Representative of current state-of-the-art LLMs, the model's 405 billion parameters make it a true stress test for the modern hardware and software stacks needed to keep up with the escalating demands of training next-generation AI.
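To see why 405 billion parameters stresses hardware so severely, a rough back-of-the-envelope sketch helps. The byte counts below are common training conventions (FP16 weights and gradients plus FP32 Adam optimizer state), not figures from the MLPerf report:

```python
# Rough memory estimate for training a 405B-parameter model.
# Assumed conventions: FP16 weights and gradients (2 bytes each),
# FP32 master weights plus two Adam moments (4 + 4 + 4 = 12 bytes).
params = 405e9

weights_gb = params * 2 / 1e9    # FP16 weights
grads_gb   = params * 2 / 1e9    # FP16 gradients
optim_gb   = params * 12 / 1e9   # FP32 optimizer state

total_gb = weights_gb + grads_gb + optim_gb
print(f"~{total_gb / 1e3:.1f} TB of model state before activations")
```

Several terabytes of state before a single activation is stored is why such a run only fits on large multi-GPU clusters with fast interconnects.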

    MLPerf Training v5.0: Blackwell Dominates Across the Board

    The recent MLCommons training benchmarks provide compelling empirical evidence of Blackwell's lead. On a per-chip basis, Blackwell delivered 2.6x the training performance of the previous-generation Hopper chips (which pack 80 billion transistors) on large AI training workloads. In a remarkable demonstration of its prowess, a cluster of 2,496 Blackwell GPUs, part of the NVIDIA DGX GB200 NVL72 system, completed the Llama 3.1 405B pre-training benchmark in 27 minutes.

    Its predecessor, Hopper, would have required more than three times as many units to complete the same task in the same time. The result underscores the efficiency of Blackwell's architectural advancements, including high-density liquid-cooled racks and fifth-generation NVIDIA NVLink and NVLink Switch interconnect technologies for scale-up.
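The unit-count comparison follows from simple arithmetic. A minimal sketch, assuming only the reported 2.6x per-chip figure and the 2,496-GPU Blackwell cluster size:

```python
# How many Hopper GPUs would be needed to match the Blackwell run?
blackwell_gpus = 2496
per_chip_speedup = 2.6  # reported Blackwell-vs-Hopper per-chip factor

# Under perfect (linear) scaling, matching Blackwell's time takes
# 2.6x as many Hopper chips.
ideal_hopper = blackwell_gpus * per_chip_speedup

# Real clusters scale sub-linearly, so the practical count is higher
# still, consistent with the "over three times as many units" figure.
print(f"ideal Hopper count: {ideal_hopper:,.0f} "
      f"({ideal_hopper / blackwell_gpus:.1f}x the Blackwell count)")
```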

    Beyond raw speed, the MLCommons results also highlighted NVIDIA's exceptional scaling efficiency: when expanding from 512 to 2,496 GPUs, NVIDIA's GB200 NVL72 system demonstrated 90% strong scaling efficiency.
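Strong scaling efficiency measures how close the actual speedup from adding GPUs comes to the ideal linear speedup. A minimal sketch of the calculation; the training times below are hypothetical, chosen only so the ratio lands near the reported 90%:

```python
def strong_scaling_efficiency(base_gpus, base_time, scaled_gpus, scaled_time):
    """Efficiency = actual speedup / ideal (linear) speedup."""
    ideal_speedup = scaled_gpus / base_gpus
    actual_speedup = base_time / scaled_time
    return actual_speedup / ideal_speedup

# Hypothetical run times at the GPU counts reported in the article.
eff = strong_scaling_efficiency(base_gpus=512, base_time=120.0,
                                scaled_gpus=2496, scaled_time=27.4)
print(f"{eff:.0%}")  # roughly 90%
```

An efficiency near 1.0 means almost every added GPU contributes its full share of speedup, which is hard to sustain at thousands of chips without fast interconnects.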

    Dominance Across Diverse AI Workloads

    The NVIDIA AI platform delivered the highest performance at scale on all seven benchmarks in the MLPerf Training v5.0 suite:

    • LLM Pre-Training (Llama 3.1 405B) – trained in 20.8 minutes
    • LLM Fine-Tuning (Llama 2 70B-LoRA) – trained in 0.56 minutes
    • Text-to-Image (Stable Diffusion v2) – trained in 1.4 minutes
    • Graph Neural Network (R-GAT) – trained in 0.84 minutes
    • Recommender (DLRM-DCNv2) – trained in 0.7 minutes
    • Natural Language Processing (BERT) – trained in 0.3 minutes
    • Object Detection (RetinaNet) – trained in 1.4 minutes

    Implications for the Future of AI

    • Faster AI Progress: With training completing at higher speeds, researchers and developers can try out more ideas and iterate on their models more quickly than before, accelerating AI advancement.
    • More People Can Work with Big AI: As training becomes more efficient, more organizations will find it feasible to work with large-scale models like Llama 3.1.
    • NVIDIA Stays Ahead: The MLPerf Training v5.0 results solidify NVIDIA's position as the undisputed leader in AI training hardware, which is vital for the company as demand for its AI technology continues to skyrocket.
    • Emphasis on Interconnects and Software: The scaling NVIDIA achieved shows how important fast chip-to-chip interconnects (like NVLink) and mature software (like NVIDIA's CUDA-X libraries) are for getting the most out of the hardware.

    As AI models grow larger and more complex, being able to train them quickly and efficiently is paramount. NVIDIA's MLPerf v5.0 results are not just about faster chips; they point to where AI hardware is headed and how it will shape the AI landscape for years to come.

