Nvidia introduced what it calls the world's most advanced enterprise AI infrastructure — Nvidia DGX SuperPOD built with Nvidia Blackwell Ultra GPUs — which provides enterprises across industries with AI-factory supercomputing for state-of-the-art agentic AI reasoning.
Enterprises can use the new Nvidia DGX GB300 and Nvidia DGX B300 systems, integrated with Nvidia networking, to deploy out-of-the-box DGX SuperPOD AI supercomputers that offer FP4 precision and faster AI reasoning to supercharge token generation for AI applications.
AI factories provide purpose-built infrastructure for agentic, generative and physical AI workloads, which can require significant computing resources for AI pretraining, post-training and test-time scaling for applications running in production.
“AI is advancing at light speed, and companies are racing to build AI factories that can scale to meet the processing demands of reasoning AI and inference time scaling,” said Jensen Huang, founder and CEO of Nvidia, in a statement. “The Nvidia Blackwell Ultra DGX SuperPOD provides out-of-the-box AI supercomputing for the age of agentic and physical AI.”
DGX GB300 systems feature Nvidia Grace Blackwell Ultra Superchips — which include 36 Nvidia Grace CPUs and 72 Nvidia Blackwell Ultra GPUs — and a rack-scale, liquid-cooled architecture designed for real-time agent responses on advanced reasoning models.
Air-cooled Nvidia DGX B300 systems harness the Nvidia B300 NVL16 architecture to help data centers everywhere meet the computational demands of generative and agentic AI applications.
To meet growing demand for advanced accelerated infrastructure, Nvidia also unveiled Nvidia Instant AI Factory, a managed service featuring the Blackwell Ultra-powered Nvidia DGX SuperPOD. Equinix will be the first to offer the new DGX GB300 and DGX B300 systems in its preconfigured liquid- or air-cooled AI-ready data centers located in 45 markets around the world.
Nvidia DGX SuperPOD With DGX GB300 Powers the Age of AI Reasoning
DGX SuperPOD with DGX GB300 systems can scale up to tens of thousands of Nvidia Grace Blackwell Ultra Superchips — connected via NVLink, Nvidia Quantum-X800 InfiniBand and Nvidia Spectrum-X Ethernet networking — to supercharge training and inference for the most compute-intensive workloads.
DGX GB300 systems deliver up to 70 times more AI performance than AI factories built with Nvidia Hopper systems, along with 38TB of fast memory, to offer unmatched performance at scale for multistep reasoning in agentic AI and reasoning applications.
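For a rough sense of what 38TB of fast memory means at FP4 precision, here is a back-of-the-envelope sketch (illustrative arithmetic only, not a figure from Nvidia's announcement): FP4 weights occupy half a byte each, so the quoted memory could in principle hold on the order of 76 trillion FP4 parameters, before accounting for activations, KV cache, or runtime overhead.

```python
# Illustrative capacity estimate for 38TB of fast memory at FP4 precision.
# Assumptions (not from the announcement): decimal TB, weights only,
# no activations, KV cache, or framework overhead.

TB = 10**12                   # one terabyte, decimal convention
fast_memory_bytes = 38 * TB
bytes_per_fp4_param = 0.5     # FP4 = 4 bits = half a byte per parameter

max_params = fast_memory_bytes / bytes_per_fp4_param
print(f"~{max_params / 1e12:.0f} trillion FP4 parameters")  # ~76 trillion
```

The same arithmetic halves again for FP8 and halves once more for FP16, which is one reason lower-precision formats matter for serving very large reasoning models.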
The 72 Grace Blackwell Ultra GPUs in each DGX GB300 system are connected by fifth-generation NVLink technology to become one massive, shared memory space through the NVLink Switch system.
Each DGX GB300 system features 72 Nvidia ConnectX-8 SuperNICs, delivering accelerated networking speeds of up to 800Gb/s — double the performance of the previous generation. Eighteen Nvidia BlueField-3 DPUs pair with Nvidia Quantum-X800 InfiniBand or Nvidia Spectrum-X Ethernet to accelerate performance, efficiency and security in large-scale AI data centers.
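Taking the figures above at face value, the aggregate network line rate per system works out as follows (a hedged peak-rate calculation; real throughput depends on topology and protocol overhead):

```python
# Aggregate peak line rate for one DGX GB300 system, using the quoted
# figures: 72 ConnectX-8 SuperNICs at 800Gb/s each. Protocol overhead
# and fabric topology will reduce achievable throughput in practice.

num_nics = 72
gbps_per_nic = 800

total_gbps = num_nics * gbps_per_nic   # 57,600 Gb/s peak
total_tbps = total_gbps / 1000         # 57.6 Tb/s
total_GBps = total_gbps / 8            # 7,200 GB/s (bits to bytes)

print(f"{total_tbps} Tb/s aggregate, ~{total_GBps:.0f} GB/s")
```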
DGX B300 Systems Accelerate AI for Every Data Center
The Nvidia DGX B300 system is an AI infrastructure platform designed to bring energy-efficient generative AI and AI reasoning to every data center.
Accelerated by Nvidia Blackwell Ultra GPUs, DGX B300 systems deliver 11 times faster AI performance for inference and a 4x speedup for training compared with the Hopper generation.
Each system provides 2.3TB of HBM3e memory and includes advanced networking with eight Nvidia ConnectX-8 SuperNICs and two BlueField-3 DPUs.
Nvidia Software Accelerates AI Development and Deployment
To enable enterprises to automate the management and operations of their infrastructure, Nvidia also introduced Nvidia Mission Control — AI data center operation and orchestration software for Blackwell-based DGX systems.
Nvidia DGX systems support the Nvidia AI Enterprise software platform for building and deploying enterprise-grade AI agents. This includes Nvidia NIM microservices, such as the new Nvidia Llama Nemotron open reasoning model family announced today, and Nvidia AI Blueprints, frameworks, libraries and tools used to orchestrate and optimize the performance of AI agents.
Nvidia Instant AI Factory offers enterprises an Equinix managed service featuring the Blackwell Ultra-powered Nvidia DGX SuperPOD with Nvidia Mission Control software.
With dedicated Equinix facilities around the globe, the service will provide businesses with fully provisioned, intelligence-generating AI factories optimized for state-of-the-art model training and real-time reasoning workloads — eliminating months of pre-deployment infrastructure planning.
Availability
Nvidia DGX SuperPOD with DGX GB300 or DGX B300 systems is expected to be available from partners later this year.
Nvidia Instant AI Factory is planned to be available starting later this year.