Dell Technologies (NYSE: DELL), a leading provider of AI infrastructure, has announced a major expansion of the Dell AI Factory with NVIDIA, aimed at helping enterprises scale AI adoption faster and more efficiently.
Why It Matters
As AI becomes central to enterprise strategy, the need for powerful, scalable infrastructure and easy-to-deploy solutions has never been greater. Dell and NVIDIA are responding with innovations across servers, networking, data platforms, and services to streamline AI deployment from experimentation to full-scale implementation.
Next-Gen AI Infrastructure
Dell has introduced new PowerEdge servers optimised for NVIDIA’s Blackwell GPUs, enabling up to 4x faster large language model (LLM) training compared to previous systems. Highlights include:
● Air-cooled PowerEdge XE9780/XE9785 and liquid-cooled XE9780L/XE9785L support up to 256 NVIDIA Blackwell Ultra GPUs per rack.
● PowerEdge XE9712 with NVIDIA GB300 NVL72 delivers up to 50x more AI inference output and 5x higher throughput, while Dell's new PowerCool technology improves power efficiency.
● PowerEdge XE7745, coming in July 2025, will support NVIDIA RTX Pro 6000 Blackwell Server Edition GPUs for demanding AI applications such as robotics and digital twins.
● Future PowerEdge systems in Dell's scalable rack designs will also support the NVIDIA Vera CPU and the Vera Rubin platform.
Dell is also expanding its networking lineup with high-speed switches, including the Dell PowerSwitch SN5600 and SN2201 Ethernet switches, part of the NVIDIA Spectrum-X platform, and NVIDIA Quantum-X800 InfiniBand switches, delivering up to 800 Gbps of throughput. These will be backed by Dell deployment and ProSupport services.
Smarter Data for Smarter AI
The Dell AI Data Platform has been enhanced to keep pace with the scale of modern AI workloads:
● Dell ObjectScale now supports dense deployments with better performance and lower data centre costs, thanks to integration with NVIDIA BlueField-3 DPUs and Spectrum-4 networking.
● A high-performance solution combining Dell PowerScale, Project Lightning, and PowerEdge XE servers is optimised for large-scale inference using NVIDIA’s NIXL Libraries.
● S3 over RDMA support on ObjectScale boosts throughput by 230%, slashes latency by 80%, and reduces CPU usage by 98% (a brief client-side sketch follows this list).
● Dell is also launching a new integrated solution with the NVIDIA AI Data Platform to speed up insights and enable advanced agentic AI applications.
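Because ObjectScale exposes a standard S3-compatible API, existing object-storage clients can simply be pointed at it; the RDMA-accelerated data path is handled by the platform and RDMA-capable networking rather than by application code. The minimal Python sketch below uses boto3 against a hypothetical ObjectScale endpoint, with placeholder credentials and bucket names that are illustrative, not taken from the announcement.

# Minimal sketch: reading and writing objects on an S3-compatible
# Dell ObjectScale endpoint with boto3. Endpoint, credentials and
# bucket/key names are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectscale.example.internal",  # hypothetical endpoint
    aws_access_key_id="OBJECTSCALE_ACCESS_KEY",
    aws_secret_access_key="OBJECTSCALE_SECRET_KEY",
)

# Upload a training artefact to an example bucket.
with open("batch-001.parquet", "rb") as f:
    s3.put_object(Bucket="ai-datasets", Key="embeddings/batch-001.parquet", Body=f)

# Read it back for inference or fine-tuning.
obj = s3.get_object(Bucket="ai-datasets", Key="embeddings/batch-001.parquet")
payload = obj["Body"].read()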
Simplified AI Deployment
Dell now offers direct access to the NVIDIA AI Enterprise software suite, including tools like NVIDIA NIM, NeMo microservices, and the Llama Nemotron reasoning models, to build scalable agentic AI workflows. Red Hat OpenShift will also be available for added flexibility and security in deployments.
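To illustrate how these pieces are consumed in practice, NVIDIA NIM microservices expose an OpenAI-compatible HTTP API, so a model deployed on Dell infrastructure can be queried with standard client libraries. The Python sketch below assumes a self-hosted NIM container; the endpoint URL and model identifier are placeholders, not values from the announcement.

# Minimal sketch: querying a self-hosted NVIDIA NIM microservice through
# its OpenAI-compatible API. base_url and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://nim.example.internal:8000/v1",  # hypothetical NIM endpoint
    api_key="not-required-for-local-deployments",
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # example NIM model identifier
    messages=[{"role": "user", "content": "Summarise this quarter's GPU utilisation report."}],
    max_tokens=256,
)
print(response.choices[0].message.content)

Agentic workflows built with NeMo microservices typically layer retrieval, guardrails, and orchestration on top of model calls like this one.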
New Managed Services
The new Dell Managed Services for AI Factory simplifies operations with 24/7 monitoring, updates, and support across the full AI stack, helping customers reduce resource constraints and speed up time-to-value.
Executive Perspectives
Michael Dell emphasised the goal of making AI more accessible:
“Our job is to make AI more accessible. With the Dell AI Factory with NVIDIA, enterprises can manage the entire AI lifecycle, from deployment to training, at any scale.”

NVIDIA CEO Jensen Huang added:
“AI factories are the infrastructure of modern industry. Together with Dell, we’re delivering the broadest line of Blackwell AI systems for every use case — from cloud to edge.”
Availability Timeline
● Now available: Dell Managed Services for the AI Factory with NVIDIA.
● May 2025: NVIDIA AI Enterprise software available directly from Dell.
● July 2025: PowerEdge XE7745 with RTX Pro 6000 Blackwell GPUs.
● 2H 2025: Air-cooled XE9780/XE9785, XE9712 with GB300 NVL72, PowerSwitch SN5600/SN2201 Ethernet, Quantum-X800 InfiniBand, and ObjectScale with BlueField-3 and S3 over RDMA.
● Later in 2025: Liquid-cooled XE9780L/XE9785L and the Dell high-performance inference solution.