Explore Lambda's H100 cloud instance specifications and benchmarks. Compare hardware configurations and performance metrics to optimize your AI and ML workloads.
LLM Benchmark Comparison
Hardware Specifications
| GPU Configuration | Value |
|---|---|
| GPU Type | H100 |
| GPU Interconnect | PCIe |
| GPU Model Name | NVIDIA H100 PCIe |
| Driver Version | 535.129.03 |
| GPU VRAM (GB) | 80 |
| Power Limit (W) | 350.00 |
| GPU Temperature (°C) | 30 |
| GPU Clock Speed (MHz) | 345 |
| Memory Clock Speed (MHz) | 1593 |
| Pstate | P0 |
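Most of the GPU rows above can be read back programmatically via NVML. The snippet below is a minimal sketch, assuming the `nvidia-ml-py` package (imported as `pynvml`) is installed on the instance; it is not Lambda's own collection tooling.

```python
# Minimal sketch: query the GPU fields from the table above via NVML.
# Assumes the nvidia-ml-py package is installed (pip install nvidia-ml-py).
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

print("Model:", pynvml.nvmlDeviceGetName(gpu))                 # NVIDIA H100 PCIe
print("Driver:", pynvml.nvmlSystemGetDriverVersion())          # 535.129.03
print("VRAM (GiB):", pynvml.nvmlDeviceGetMemoryInfo(gpu).total // 2**30)
print("Power limit (W):", pynvml.nvmlDeviceGetPowerManagementLimit(gpu) / 1000)  # NVML reports mW
print("Temperature (°C):", pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU))
print("SM clock (MHz):", pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_SM))
print("Memory clock (MHz):", pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_MEM))
print("Pstate:", f"P{pynvml.nvmlDeviceGetPerformanceState(gpu)}")

pynvml.nvmlShutdown()
```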
| CPU Configuration | Value |
|---|---|
| Model Name | Intel(R) Xeon(R) Platinum 8480+ |
| Vendor ID | GenuineIntel |
| CPUs | 26 |
| CPU Clock Speed (MHz) | 4000.00 |
| Threads Per Core | 1 |
| Cores Per Socket | 1 |
| Sockets | 26 |
| Memory | Value |
|---|---|
| Total | 221 GB |
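The CPU and memory figures are easy to confirm on a running instance. A small sketch, assuming a standard Linux userland with `lscpu` available:

```python
# Sketch: read the CPU and memory figures summarized in the tables above.
import subprocess

lscpu = subprocess.run(["lscpu"], capture_output=True, text=True).stdout
wanted = ("Model name", "CPU(s)", "Thread(s) per core",
          "Core(s) per socket", "Socket(s)", "CPU max MHz")
for line in lscpu.splitlines():
    if line.strip().startswith(wanted):
        print(line.strip())

# MemTotal is the first line of /proc/meminfo, reported in kB.
with open("/proc/meminfo") as f:
    mem_kb = int(f.readline().split()[1])
print(f"Total memory: {mem_kb / 1024**2:.0f} GiB")
```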
Disk Specifications
| Storage | Value |
|---|---|
| Total | 1024.00 GB |
Available Disks
Disk 1
| Property | Value |
|---|---|
| Model | vda |
| Size | 1 TB |
| Type | HDD |
| Mount Point | Unmounted |
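The disk table can be reproduced with `lsblk`; a minimal sketch (the ROTA column is what distinguishes rotational disks from SSDs):

```python
# Sketch: list block devices to reproduce the disk table above.
import subprocess

out = subprocess.run(["lsblk", "-o", "NAME,SIZE,TYPE,ROTA,MOUNTPOINT"],
                     capture_output=True, text=True).stdout
print(out)  # e.g. a "vda  1T  disk" row with no mount point for the unmounted 1 TB volume
```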
Software Specifications
| Software | Value |
|---|---|
| OS | Ubuntu |
| OS Version | 22.04.3 LTS (Jammy Jellyfish) |
| CUDA Version | 12.2 |
| Docker Version | 24.0.7 |
| Python Version | Python 3.10.12 |
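These software versions can be confirmed directly on the instance. A minimal sketch, assuming `lsb_release`, `nvidia-smi`, and `docker` are on the PATH:

```python
# Sketch: print the version information summarized in the table above.
import platform
import subprocess

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

print("OS:", run(["lsb_release", "-ds"]))        # Ubuntu 22.04.3 LTS
print("NVIDIA driver:", run(["nvidia-smi", "--query-gpu=driver_version",
                             "--format=csv,noheader"]))
print("Docker:", run(["docker", "--version"]))   # Docker version 24.0.7, ...
print("Python:", platform.python_version())      # 3.10.12
```

The supported CUDA version (12.2 here) appears in the header of plain `nvidia-smi` output rather than in a query field.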
Benchmarks
| Benchmark | Value |
|---|---|
| ffmpeg | 232 |
| CoreMark (Iterations per sec) | 35030.360 |
| Llama 2 Inference (Tokens per sec) | 73.96 |
| TensorFlow MNIST Training | 4.281 |
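Lambda's exact benchmark harness is not shown on this page, so the sketch below only illustrates the usual way a tokens-per-second figure like the Llama 2 number is computed: generated tokens divided by wall-clock generation time. The model size, precision, and generation settings are assumptions; it requires the Hugging Face `transformers` library, PyTorch, and access to a Llama 2 checkpoint.

```python
# Illustrative sketch only: measure decode throughput as tokens per second.
# Assumes transformers + torch are installed and the checkpoint is accessible.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"   # assumption: exact model variant not stated above
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

inputs = tok("Explain the NVIDIA H100 in one paragraph.", return_tensors="pt").to("cuda")

torch.cuda.synchronize()
start = time.perf_counter()
out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

new_tokens = out.shape[1] - inputs["input_ids"].shape[1]
print(f"{new_tokens / elapsed:.2f} tokens/sec")
```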
nvidia-smi output
nvidia-smi topo -m output
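Both listings come from standard NVIDIA tooling; a small sketch that captures them (assuming `nvidia-smi` is on the PATH):

```python
# Sketch: capture the two listings referenced by the headings above.
import subprocess

smi = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
topo = subprocess.run(["nvidia-smi", "topo", "-m"], capture_output=True, text=True).stdout

print(smi)   # per-GPU table: driver, CUDA version, memory, utilization
print(topo)  # topology matrix: interconnect (e.g. PCIe bridges) between devices
```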