
HPE expands NVIDIA partnership to boost AI platforms & storage


Hewlett Packard Enterprise has announced expanded integration with NVIDIA to enhance its AI portfolio and support organisations throughout the AI lifecycle.

HPE detailed several updates to its AI offerings, including support for NVIDIA's recent technologies in the HPE Private Cloud AI platform, improvements to storage with the Alletra Storage MP X10000, and server and software enhancements. These integrations are aimed at speeding up the deployment of AI solutions by enterprises, service providers, and research institutions.

The HPE Private Cloud AI, co-developed with NVIDIA, now incorporates feature branch model updates from NVIDIA AI Enterprise, and is aligned with the NVIDIA Enterprise AI Factory validated design. This update provides AI developers with the ability to test, validate, and optimise workloads by leveraging the full capabilities of NVIDIA's software, including frameworks and microservices for pre-trained models.

Antonio Neri, President and Chief Executive Officer of HPE, said: "Our strong collaboration with NVIDIA continues to drive transformative outcomes for our shared customers. By co-engineering cutting-edge AI technologies elevated by HPE's robust solutions, we are empowering businesses to harness the full potential of these advancements throughout their organisation, no matter where they are on their AI journey. Together, we are meeting the demands of today, while paving the way for an AI-driven future."

Jensen Huang, Founder and Chief Executive Officer of NVIDIA, added: "Enterprises can build the most advanced NVIDIA AI factories with HPE systems to ready their IT infrastructure for the era of generative and agentic AI. Together, NVIDIA and HPE are laying the foundation for businesses to harness intelligence as a new industrial resource that scales from the data center to the cloud and the edge."

Joseph Yang, General Manager of HPC and AI for APAC and India at HPE, commented: "As AI-driven solutions continue to grow in demand across the APAC region, this deepened integration between HPE and NVIDIA will accelerate enterprises' ability to leverage AI at scale. With innovations like HPE Private Cloud AI and the Alletra Storage MP X10000, businesses in APAC will be able to seamlessly streamline AI development, from data ingestion to model training and continuous learning, all while ensuring performance, security, and efficiency."

The HPE Private Cloud AI platform aims to help organisations standardise their approach to AI across departments, reducing risk and supporting scaling from developer environments to production-ready generative AI applications. The new feature branch support lets businesses test and experiment with upcoming model features, while the existing production branch support keeps production deployments on stable, validated software as part of a multi-layered strategy.

HPE Alletra Storage MP X10000 now offers a software development kit (SDK) compatible with the NVIDIA AI Data Platform reference design. This SDK facilitates the integration of enterprise unstructured data directly with NVIDIA's ecosystem, supporting data ingestion, inference, training, and ongoing learning processes. The system leverages remote direct memory access (RDMA) technology to transfer data efficiently between the X10000, GPU memory, and system memory, increasing the speed and effectiveness of AI workflows.

The new storage SDK enables flexible inline data processing, metadata enrichment, and data management, while also providing a modular, composable approach to scaling deployment as organisational needs evolve. This integration supports customers in unifying storage and intelligence layers for real-time data access from core to cloud environments.
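The announcement does not include code from the X10000 SDK itself. Purely as an illustrative sketch of the direct storage-to-GPU-memory path that RDMA-style transfers enable, the snippet below uses NVIDIA's open-source KvikIO library (built on GPUDirect Storage); the file path and buffer size are placeholder assumptions, not details from HPE.

```python
# Illustrative sketch only: reading data from storage straight into GPU memory
# via NVIDIA's open-source KvikIO library (GPUDirect Storage). This is not the
# HPE X10000 SDK; the file path and array size are placeholder assumptions.
import cupy
import kvikio

# Destination buffer allocated in GPU memory.
gpu_buffer = cupy.empty(1_000_000, dtype=cupy.float32)

# Read the file directly into GPU memory, bypassing a CPU bounce buffer
# when GPUDirect Storage / RDMA is available on the system.
f = kvikio.CuFile("/mnt/dataset/embeddings.bin", "r")
try:
    bytes_read = f.read(gpu_buffer)
    print(f"Read {bytes_read} bytes directly into GPU memory")
finally:
    f.close()
```

The point of the pattern is that the data path runs from storage to GPU memory without staging through host memory, which is the efficiency gain the RDMA integration described above is targeting.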

On the compute front, HPE ProLiant Compute DL380a Gen12 servers have ranked first in more than 50 industry benchmark scenarios, spanning language models such as GPT-J and Llama2-70B and computer vision models such as ResNet50 and RetinaNet. The server will soon be available with up to ten NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, designed for intensive enterprise AI workloads such as multimodal AI inference, physical AI, and advanced design and video applications.

Key features of the DL380a Gen12 include both air-cooled and direct liquid-cooled (DLC) options, advanced security with post-quantum cryptography readiness, and automated management tools for proactive system health and energy efficiency. Additional benchmark-topping servers include the HPE ProLiant Compute DL384 Gen12 with dual-socket NVIDIA GH200 NVL2 and the HPE Cray XD670 with eight NVIDIA H200 SXM GPUs, both achieving high rankings in recent benchmarks.

HPE's OpsRamp Software has also been updated to support NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. The SaaS platform enables IT teams to observe AI infrastructure health and performance, automate workflows, and gain AI-supported analytics. Integration with NVIDIA's infrastructure ecosystem allows for detailed monitoring of GPU metrics and energy optimisation for distributed AI workloads.
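OpsRamp's own integration is not shown in code in the announcement. As a rough illustration of the kind of per-GPU telemetry such observability platforms aggregate, the sketch below queries utilisation, memory, and power draw through NVIDIA's NVML Python bindings (nvidia-ml-py); it is a generic example, not OpsRamp's API.

```python
# Hypothetical illustration of per-GPU telemetry collection using NVIDIA's
# NVML bindings (pip install nvidia-ml-py). Monitoring platforms gather and
# aggregate metrics like these across a fleet of AI servers.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)      # % of time GPU/memory busy
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)              # bytes used / total
        power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # milliwatts -> watts
        print(f"GPU {i} ({name}): util={util.gpu}% "
              f"mem={mem.used / 2**30:.1f}/{mem.total / 2**30:.1f} GiB "
              f"power={power_w:.0f} W")
finally:
    pynvml.nvmlShutdown()
```

Energy and utilisation figures like these are the raw inputs behind the energy-optimisation and workload-placement analytics described for distributed AI estates.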

Through these enhancements, HPE and NVIDIA seek to offer organisations across different sectors the tools to manage data pipelines, model training, and AI optimisation more efficiently, supporting the adoption and scaling of AI technologies in a secure and tailored manner.
