https://www.denvrdata.com/

NVIDIA H100

Choose from single-GPU instances, 8-GPU NVLink systems, or scale-out training clusters with 3,200 Gb/s InfiniBand.


H100 Specifications

Fourth-generation Tensor Cores accelerate every precision, including FP64, TF32, FP32, FP16, INT8, and FP8, reducing memory usage and increasing throughput while maintaining accuracy for LLMs.

80 GB HBM3 Memory: run 70B+ parameter models on a single GPU.

900 GB/s NVLink Bandwidth: fourth-generation NVLink for multi-GPU scaling.

1,979 TFLOPS FP16: 3,958 TFLOPS at FP8 with the Transformer Engine.

9x Faster Pre-Training: vs. A100 on large language models.

NVIDIA H100 on Denvr AI Cloud


Distributed Training

Scale across 8-GPU NVLink nodes and multi-node clusters with 3,200 Gb/s InfiniBand. Bare metal access delivers maximum training throughput with no virtualization overhead.
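To get a rough feel for why that interconnect matters, here is a back-of-the-envelope estimate of inter-node gradient synchronization time. The node count, precision, and perfect-utilization assumption are all hypothetical; this is an illustrative floor, not a benchmark.

```python
# Rough lower bound on inter-node gradient sync time using ring all-reduce.
# Assumptions (illustrative, not measured): FP16 gradients for a 70B-parameter
# model, 3,200 Gb/s of InfiniBand per node, perfect bandwidth utilization,
# and no compute/communication overlap.
grad_bytes = 70e9 * 2            # 70B params at 2 bytes each (FP16) = 140 GB
nodes = 4                        # hypothetical 4-node (32x H100) cluster
node_bw = 3200 / 8 * 1e9         # 3,200 Gb/s = 400 GB/s, in bytes per second

# A ring all-reduce pushes 2*(N-1)/N times the payload through each node's link.
traffic_per_node = 2 * (nodes - 1) / nodes * grad_bytes
seconds = traffic_per_node / node_bw
print(f"~{seconds:.3f} s per full gradient sync (ideal)")  # ~0.525 s
```

Real jobs overlap communication with backward-pass compute, so the wall-clock penalty is usually smaller than this number suggests.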


LLM Inference

Serve 70B+ parameter models on a single GPU, or scale to 8 GPUs with 640 GB of aggregate VRAM for 500B+ parameter models at FP8 precision.
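The VRAM sizing behind that claim can be sanity-checked with simple arithmetic. This counts weights only; KV cache, activations, and runtime overhead need headroom on top, so treat it as an illustrative floor rather than a guarantee.

```python
# Weight footprint of a 500B-parameter model at FP8 vs. an 8x H100 node.
# Weights only; KV cache, activations, and runtime overhead need headroom too.
params = 500e9
weight_gb = params * 1 / 1e9      # FP8 stores 1 byte per parameter: 500 GB
total_vram_gb = 8 * 80            # 8 GPUs x 80 GB HBM3 = 640 GB
headroom_gb = total_vram_gb - weight_gb
print(weight_gb, total_vram_gb, headroom_gb)  # 500.0 640 140.0
```

The same arithmetic explains the single-GPU claim above: a 70B-parameter model at FP8 needs roughly 70 GB of weights, which fits in 80 GB of HBM3.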


Production AI

H100 delivers 9x faster training vs. A100 at $2.10/hr on-demand. For extended context or larger models, consider the H200 with 141 GB HBM3e.
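Taking the listed on-demand rates and the 9x speedup at face value (actual speedups vary by model, batch size, and precision), the cost-per-job arithmetic works out as follows; the 90-hour A100 job length is a made-up example.

```python
# Cost per training job if an H100 finishes 9x faster than an A100.
# Rates come from this page; the 9x factor is the vendor's claim, and the
# 90-hour A100 job length is a hypothetical example.
a100_rate, h100_rate = 1.30, 2.10     # $/GPU/hr on-demand
a100_hours = 90.0
h100_hours = a100_hours / 9           # 9x faster: 10 hours

a100_cost = a100_rate * a100_hours
h100_cost = h100_rate * h100_hours
print(f"A100: ${a100_cost:.2f}  H100: ${h100_cost:.2f}")  # A100: $117.00  H100: $21.00
```

Even at a higher hourly rate, a large enough speedup makes the faster GPU cheaper per completed job.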


Managed Storage

High-performance Weka filesystem and local NVMe available. No external storage to provision for datasets, checkpoints, or model artifacts.

| Platform | GPUs | GPU VRAM | vCPUs | Memory | Local Storage | Interconnect | On-Demand |
| --- | --- | --- | --- | --- | --- | --- | --- |
| NVIDIA H100 SXM | 8 | 80 GB | 208 | 1,024 GB | 6x 3.8 TB NVMe | InfiniBand 3,200 Gb/s | $2.10 / GPU / hr |

Configurations

Per-minute billing with on-demand and reserved options. All configurations available as bare metal, VM, or model endpoints.
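Per-minute billing means short jobs are prorated. A sketch of the arithmetic, assuming cost is simply the hourly rate prorated by the minute (the exact proration rules are Denvr's, not stated here):

```python
# Prorated cost of an 8-GPU H100 node used for 37 minutes at $2.10/GPU/hr.
# The rate comes from the table above; the simple proration is an assumption,
# and the 37-minute duration is a hypothetical example.
rate_per_gpu_hr = 2.10
gpus = 8
minutes = 37
cost = rate_per_gpu_hr / 60 * minutes * gpus
print(f"${cost:.2f}")  # $10.36
```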

Related GPUs

Compare Denvr GPU options by workload and performance requirements.

| | NVIDIA A100 | NVIDIA H100 | NVIDIA H200 |
| --- | --- | --- | --- |
| Optimized For | Distributed training, multi-node scaling | Large model training, high-throughput inference | Extended context, large batch inference |
| VRAM | 80 GB | 80 GB | 141 GB |
| Memory Bandwidth | 2,039 GB/s | 3,350 GB/s | 4,800 GB/s |
| FP64/FP32 | 19.5 TFLOPS | 67 TFLOPS | 67 TFLOPS |
| FP16 | 312 TFLOPS | 1,979 TFLOPS | 1,979 TFLOPS |
| FP8 | - | 3,958 TFLOPS | 3,958 TFLOPS |
| NVLink | 600 GB/s | 900 GB/s | 900 GB/s |
| On-Demand Pricing | $1.30 / GPU / hr | $2.10 / GPU / hr | Reserved only |

Infrastructure you can trust at scale

As an NVIDIA Cloud Partner, we build and operate AI clusters following NVIDIA Reference Architectures. Your models and data are protected by strict privacy safeguards and SOC 2 Type 2 security practices.


Ready to get started?

Denvr AI Cloud is for innovators, creators, entrepreneurs, and business leaders. 

Start your AI journey today!
