Lambda

H100_sxm5x4

| 320 GB VRAM | gpu_4x_h100_sxm5

Explore Lambda's H100 cloud instance specifications and benchmarks. Compare hardware configurations and performance metrics to optimize your AI and ML workloads.

Hardware Specifications

GPU Configuration
GPU Type: H100
GPU Interconnect: SXM5
GPU Model Name: NVIDIA H100 80GB HBM3
Driver Version: 550.127.05
GPU VRAM: 320 GB
Power Limit: 700.00 W
GPU Temperature: 26 °C
GPU Clock Speed: 345 MHz
Memory Clock Speed: 2619 MHz
Pstate: P0

CPU Configuration
Model Name: Intel(R) Xeon(R) Platinum 8480+
Vendor ID: GenuineIntel
CPUs: 104
CPU Clock Speed: 4000.00 MHz
Threads Per Core: 2
Cores Per Socket: 52
Sockets: 1

Memory
Total: 885 GB
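
These values can be reproduced from inside a running instance with standard tooling. Below is a minimal sketch, assuming `nvidia-smi`, `lscpu`, and `free` are on the PATH (the NVIDIA utility ships with the driver listed above; the other two are standard on Ubuntu):

```python
import subprocess

def gpu_specs():
    """Query per-GPU name, driver, VRAM, power limit, temperature, clocks, and P-state."""
    fields = "name,driver_version,memory.total,power.limit,temperature.gpu,clocks.sm,clocks.mem,pstate"
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    # One CSV line per GPU; this 4x H100 instance should return four rows.
    return [line.split(", ") for line in out.strip().splitlines()]

def cpu_and_memory_specs():
    """Dump CPU topology (lscpu) and total memory (free -h) as raw text."""
    lscpu = subprocess.run(["lscpu"], capture_output=True, text=True, check=True).stdout
    mem = subprocess.run(["free", "-h"], capture_output=True, text=True, check=True).stdout
    return lscpu, mem

if __name__ == "__main__":
    for i, row in enumerate(gpu_specs()):
        print(f"GPU {i}:", row)
    cpu, mem = cpu_and_memory_specs()
    print(cpu)
    print(mem)
```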

Disk Specifications

Storage
Total: 11692.00 GB

Available Disks

Disk 1
Model: vda
Size: 11 TB
Type: HDD
Mount Point: Unmounted

Disk 2
Model: vdb
Size: 428 KB
Type: HDD
Mount Point: Unmounted
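
Both disks are reported as unmounted, so before using the 11 TB volume you would typically inspect the block devices first. A minimal sketch, assuming `lsblk` (util-linux) is available on the Ubuntu image:

```python
import json
import subprocess

def list_block_devices():
    """Return block devices (name, size, type, mountpoint) parsed from lsblk's JSON output."""
    out = subprocess.run(
        ["lsblk", "--json", "-o", "NAME,SIZE,TYPE,MOUNTPOINT"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)["blockdevices"]

if __name__ == "__main__":
    # Expect entries for vda and vdb with no mountpoint until you format/mount them yourself.
    for dev in list_block_devices():
        print(dev["name"], dev["size"], dev["type"], dev.get("mountpoint"))
```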

Software Specifications

OS: Ubuntu
OS Version: 22.04.5 LTS (Jammy Jellyfish)
CUDA Driver: 12.4
Docker Version: 27.4.0
Python Version: 3.10.12
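
A quick sanity check of this stack after boot might look like the sketch below. Note that PyTorch is not part of the listed software; the `torch` block is only an illustration of how you could confirm GPU visibility after installing a framework yourself.

```python
import shutil
import subprocess

def check_stack():
    """Print versions of the tools listed in the software table, where present."""
    for cmd in (["python3", "--version"], ["docker", "--version"]):
        if shutil.which(cmd[0]):
            print(subprocess.run(cmd, capture_output=True, text=True).stdout.strip())
    # PyTorch is NOT preinstalled per the table above; this runs only if you added it yourself.
    try:
        import torch
        print("torch:", torch.__version__)
        print("CUDA available:", torch.cuda.is_available())
        print("CUDA runtime built against:", torch.version.cuda)
        print("GPUs visible:", torch.cuda.device_count())
    except ImportError:
        print("torch not installed")

if __name__ == "__main__":
    check_stack()
```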

Benchmarks

ffmpeg: 121 ms
Coremark: 34971.149 iterations per second
llama2 Inference: 81.16 tokens per second
TensorFlow MNIST Training: 1.314
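
The llama2 figure is a throughput number: generated tokens divided by wall-clock time. The benchmark methodology is not documented on this page, so the sketch below is only a generic illustration of the metric; `generate_fn` is a hypothetical placeholder for whatever inference call you use.

```python
import time

def tokens_per_second(generate_fn, prompt, n_runs=3):
    """Average tokens-per-second over a few runs.

    generate_fn(prompt) is a placeholder for your inference call and must
    return the list of generated tokens. This is not the benchmark used to
    produce the 81.16 tokens/sec figure above, just the same kind of metric.
    """
    rates = []
    for _ in range(n_runs):
        start = time.perf_counter()
        tokens = generate_fn(prompt)
        elapsed = time.perf_counter() - start
        rates.append(len(tokens) / elapsed)
    return sum(rates) / len(rates)
```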

Launch instance

Cloud: Lambda
GPU Type: H100
Shadeform Instance Type: H100_sxm5x4
Cloud Instance Type: gpu_4x_h100_sxm5
Spin Up Time: 5-10 minutes
Hourly Price: $12.36
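
For automation, the same instance type can also be requested programmatically. The sketch below uses the `requests` library; the endpoint URL, authentication scheme, and payload field names reflect my understanding of Lambda's public cloud API and should be checked against the current API documentation, and the region and SSH key name are placeholders to replace with your own.

```python
import os
import requests

# Assumed endpoint and fields; verify against Lambda's current API docs before use.
API_KEY = os.environ["LAMBDA_API_KEY"]          # assumed env var holding your Lambda Cloud API key
LAUNCH_URL = "https://cloud.lambdalabs.com/api/v1/instance-operations/launch"

payload = {
    "instance_type_name": "gpu_4x_h100_sxm5",   # Cloud Instance Type from the table above
    "region_name": "us-east-1",                 # placeholder region; pick one with capacity
    "ssh_key_names": ["my-ssh-key"],            # placeholder SSH key registered with your account
}

resp = requests.post(LAUNCH_URL, json=payload, auth=(API_KEY, ""), timeout=30)
resp.raise_for_status()
print(resp.json())
```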
