    Artificial Intelligence & The Future

    NVIDIA's Latest Chips Show Significant Gains in AI Training Performance

    By oluchi | June 10, 2025 (Updated: June 11, 2025)
    Image Generated from Communications Today

    The artificial intelligence ecosystem is evolving at an exhilarating pace, with demand for ever more powerful and efficient AI models driving innovation. At the forefront of this push stands NVIDIA, constantly extending the limits of what AI hardware can do.

    Recent data released on June 4 by MLCommons, a non-profit organization dedicated to AI performance evaluation, showed that NVIDIA's chips, especially those built on the Blackwell architecture, are delivering significant gains in AI training performance, outperforming their predecessors and setting a new bar for the industry.

    The Gold Standard: Understanding MLPerf Training Benchmarks

    MLCommons developed the MLPerf benchmark suite, designed by a consortium of more than 125 AI leaders from academia, research labs, and industry, to provide unbiased evaluations of training and inference performance across hardware, software, and services. The suite is continually updated to keep pace with the latest advances in AI.

    MLPerf is widely recognized as the industry's most trusted and rigorous benchmark for evaluating AI performance. Its tests measure how quickly a platform can train various AI models to predetermined quality thresholds, covering workloads that range from image recognition and object detection to natural language processing and, most notably, large language model pre-training and fine-tuning.
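    To make the time-to-train idea concrete, the sketch below shows in rough Python form what such a measurement loop looks like: train until a predetermined quality threshold is reached, then report the elapsed wall-clock time. The train_step, evaluate, and target_metric names are illustrative assumptions, not the actual MLPerf harness, which is far more involved.

        import time

        def time_to_train(model, train_step, evaluate, target_metric, max_steps=1_000_000):
            # Conceptual sketch of MLPerf's time-to-train metric (hypothetical API,
            # not the real MLPerf harness): run training steps until the model
            # reaches a predetermined quality threshold, then report wall-clock time.
            start = time.perf_counter()
            for step in range(1, max_steps + 1):
                train_step(model)  # one optimizer step (assumed callable)
                if step % 500 == 0 and evaluate(model) >= target_metric:
                    return time.perf_counter() - start  # elapsed seconds to reach quality
            raise RuntimeError("quality threshold not reached within max_steps")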

    The MLPerf Training v5.0 suite introduced a new and more demanding benchmark: Llama 3.1 405B pre-training. Representative of current state-of-the-art LLMs, the model's 405 billion parameters make it a true stress test for the modern hardware and software stacks needed to keep up with the escalating demands of training next-generation AI.

    MLPerf Training v5.0: Blackwell Dominates Across the Board

    The recent MLCommons training benchmarks provide compelling empirical evidence of Blackwell's lead. On a per-chip basis, the results show Blackwell delivering roughly 2.6 times the training performance of the previous-generation Hopper chips, the 80-billion-transistor parts that have been used to train large AI systems. In a remarkable demonstration of its capability, a cluster of 2,496 Blackwell GPUs, part of the NVIDIA DGX GB200 NVL72 system, completed the Llama 3.1 405B pre-training benchmark in 27 minutes.

    Its predecessor, Hopper, would have required more than three times as many GPUs to complete the same task in a comparable time. This underscores how much the Blackwell architecture's advancements contribute, from its high-density liquid-cooled racks to the fifth-generation NVIDIA NVLink and NVLink Switch interconnect technologies used for scale-up.
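    As a rough sanity check on that claim, the quoted per-chip speedup can be turned into an estimated Hopper GPU count, under the simplifying assumption of ideal linear scaling, which real clusters never quite achieve:

        # Back-of-the-envelope estimate; assumes ideal linear scaling.
        blackwell_gpus = 2_496    # GPUs that finished Llama 3.1 405B pre-training in ~27 minutes
        per_chip_speedup = 2.6    # Blackwell vs. Hopper, per the MLPerf Training v5.0 results

        hopper_gpus_needed = blackwell_gpus * per_chip_speedup
        print(f"~{hopper_gpus_needed:,.0f} Hopper GPUs to match the 27-minute run")
        # Prints roughly 6,490 under ideal scaling; real-world scaling losses push the
        # requirement higher, in line with the "more than three times as many GPUs" figure.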

    Beyond raw processing speed, the MLCommons results also highlighted NVIDIA's exceptional scaling efficiency: in expanding from 512 to 2,496 GPUs, NVIDIA's GB200 NVL72 system demonstrated 90% strong-scaling efficiency.
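    Strong-scaling efficiency compares the speedup actually obtained against the ideal speedup from adding GPUs to the same workload. The article does not quote the 512-GPU baseline time, so the snippet below simply solves for the baseline that a 90% figure would imply; the derived number is illustrative, not an additional published result.

        # Strong scaling: E = (T_small * N_small) / (T_large * N_large)
        n_small, n_large = 512, 2_496   # GPU counts quoted in the results
        t_large_min = 27.0              # minutes for the 2,496-GPU Llama 3.1 405B run
        efficiency = 0.90               # 90% strong-scaling efficiency, per the article

        # Solve for the 512-GPU time this efficiency implies (not a published number):
        t_small_min = efficiency * t_large_min * n_large / n_small
        print(f"Implied 512-GPU training time: {t_small_min:.0f} minutes")  # ~118 minutes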

    Dominance Across Diverse AI Workloads

    The NVIDIA AI platform delivered the highest performance at scale on all seven benchmarks in the MLPerf Training v5.0 suite. They included:

    • LLM Pre-Training (Llama 3.1 405B) – trained within 20.8 minutes
    • LLM Fine-Tuning (Llama 2 70B-LoRA) – trained within 0.56 minutes
    • Text-to-Image (Stable Diffusion v2) – trained within 1.4 minutes
    • Graph Neural Network (R-GAT) – trained within 0.84 minutes
    • Recommender (DLRM-DCNv2) – trained within 0.7 minutes
    • Natural Language Processing (BERT) – trained within 0.3 minutes
    • Object Detection (RetinaNet) – trained within 1.4 minutes

    Implications for the Future of AI

    • Faster AI progress: With training completing more quickly, researchers and developers can try out more ideas and iterate on their models faster than before, accelerating the pace of AI advancement.
    • More organizations can work with big AI: As training becomes more efficient, it will be easier for more organizations to work with large-scale AI models like Llama 3.1.
    • NVIDIA stays ahead: The MLPerf Training v5.0 results solidify NVIDIA's position as the undisputed leader in AI training hardware, which is vital as demand for its AI technology continues to skyrocket.
    • Emphasis on interconnects and software: The scaling NVIDIA achieved shows how important fast chip-to-chip interconnects (like NVLink) and mature software (like NVIDIA's CUDA-X libraries) are for getting the most out of the hardware.

    As AI models grow larger and more complex, being able to train them quickly and efficiently is paramount. NVIDIA's showing in MLPerf Training v5.0 is not just about faster chips; it points to where AI hardware is headed and how it will shape the AI landscape for years to come.
