To help clients embrace generative AI, IBM is extending its high-performance computing (HPC) offerings, giving enterprises more power and versatility to carry out research, innovation and business transformation.
With the general availability of NVIDIA H100 Tensor Core GPU instances on IBM Cloud, businesses will have access to a powerful platform for AI applications, including large language model (LLM) training. This new GPU joins IBM Cloud’s existing lineup of accelerated computing offerings to leverage during every stage of an enterprise’s AI implementation.
Clients looking to transform with AI can apply IBM’s watsonx AI studio, data lakehouse, and governance toolkit to even the most demanding, compute-intensive applications, raising the ceiling for innovation—even in the most highly regulated industries.
NVIDIA H100 on IBM Cloud builds on IBM’s work to support generative AI model training and inferencing. Last year, IBM began making the NVIDIA A100 Tensor Core GPUs available to clients through IBM Cloud, giving clients immense processing headroom to innovate with AI via the watsonx platform, or as GPUaaS for custom needs.
The new NVIDIA H100 Tensor Core GPU takes this progression a step further; NVIDIA reports it can deliver up to 30x faster inference performance than the A100.
It has the potential to give IBM Cloud customers a range of processing capabilities while also addressing the cost of enterprise-wide AI tuning and inferencing.
Businesses can start small, using NVIDIA L40S and L4 Tensor Core GPUs to train small-scale models, fine-tune models, or deploy applications such as chatbots, natural language search, and forecasting tools. As their needs grow, IBM Cloud customers can adjust their spend accordingly, eventually harnessing the H100 for the most demanding AI and HPC use cases.
The NVIDIA H100 Tensor Core GPU instances on IBM Cloud are now available in multi-zone regions (MZRs) in North America, Latin America, Europe, Japan and Australia.
For more information about this news, visit www.ibm.com.