Explore Nebius's H100 cloud instance specifications and benchmarks. Compare hardware configurations and performance metrics to optimize your AI and ML workloads.
LLM Benchmark Comparison
Hardware Specifications
GPU Configuration | Value |
---|---|
GPU Type | H100 |
GPU Interconnect | SXM5 |
GPU Model Name | NVIDIA H100 80GB HBM3 |
Driver Version | 535.54.03 |
GPU VRAM (GB) | 80
Power Limit (W) | 700.00 |
GPU Temperature (°C) | 28 |
GPU Clock Speed (MHz) | 345 |
Memory Clock Speed (MHz) | 2619 |
Pstate | P0 |
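The GPU rows above can be regenerated on the instance itself through nvidia-smi's query interface. The sketch below assumes only that `nvidia-smi` is on the PATH (it ships with the 535.54.03 driver listed above); the field names are the standard `--query-gpu` identifiers.

```python
# Sketch: regenerate the GPU table with nvidia-smi's query interface.
# Assumes nvidia-smi is on PATH (installed with driver 535.54.03).
import subprocess

FIELDS = [
    "name",            # GPU Model Name
    "driver_version",  # Driver Version
    "memory.total",    # GPU VRAM
    "power.limit",     # Power Limit (W)
    "temperature.gpu", # GPU Temperature (°C)
    "clocks.sm",       # GPU Clock Speed (MHz)
    "clocks.mem",      # Memory Clock Speed (MHz)
    "pstate",          # Pstate
]

def gpu_info() -> list[dict[str, str]]:
    """Return one dict per GPU, keyed by the query field names above."""
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={','.join(FIELDS)}", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [dict(zip(FIELDS, (v.strip() for v in line.split(","))))
            for line in out.strip().splitlines()]

if __name__ == "__main__":
    for gpu in gpu_info():
        for field, value in gpu.items():
            print(f"{field}: {value}")
```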
CPU Configuration | Value |
---|---|
Model Name | Intel Xeon Processor (Icelake) |
Vendor ID | GenuineIntel |
CPUs | 20 |
CPU Clock Speed (MHz) | 4200.00
Threads Per Core | 2 |
Cores Per Socket | 10 |
Sockets | 1 |
Memory | Value |
---|---|
Total | 157 GB
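The CPU and memory figures can be checked in the same way. How the values on this page were collected is not documented; the sketch below reads the equivalent fields from `lscpu` and `/proc/meminfo`, which are standard on Ubuntu 22.04.

```python
# Sketch: read CPU and memory figures from standard Linux interfaces.
# The exact tooling behind the tables above is an assumption; lscpu and
# /proc/meminfo expose the same information on Ubuntu 22.04.
import subprocess

def cpu_summary() -> dict[str, str]:
    wanted = {"Model name", "Vendor ID", "CPU(s)", "CPU max MHz",
              "Thread(s) per core", "Core(s) per socket", "Socket(s)"}
    info = {}
    for line in subprocess.run(["lscpu"], capture_output=True, text=True,
                               check=True).stdout.splitlines():
        key, _, value = line.partition(":")
        if key.strip() in wanted:
            info[key.strip()] = value.strip()
    return info

def total_memory_gib() -> float:
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) / 1024**2  # kB -> GiB
    raise RuntimeError("MemTotal not found")

if __name__ == "__main__":
    print(cpu_summary())
    print(f"Total memory: {total_memory_gib():.0f} GiB")
```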
Disk Specifications
Storage | Value |
---|---|
Total | 500.00 GB
Available Disks
Disk 1 | Value
---|---|
Model | vda
Size | 500 GB
Type | HDD |
Mount Point | Unmounted |
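The disk row can be reproduced with `lsblk`. The snippet below is a sketch that assumes the util-linux `lsblk` with JSON output, as shipped with Ubuntu 22.04; note that virtio disks such as `vda` commonly report as rotational, which may be why the type shows as HDD.

```python
# Sketch: list block devices the way the disk table above describes them.
# Assumes util-linux lsblk with JSON output support (present on Ubuntu 22.04).
import json
import subprocess

def disks() -> list[dict]:
    out = subprocess.run(
        ["lsblk", "--json", "--output", "NAME,SIZE,TYPE,ROTA,MOUNTPOINT"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [d for d in json.loads(out)["blockdevices"] if d["type"] == "disk"]

if __name__ == "__main__":
    for d in disks():
        # ROTA is a boolean in recent lsblk JSON, "0"/"1" in older versions.
        kind = "HDD" if str(d["rota"]).lower() in ("1", "true") else "SSD"
        mount = d["mountpoint"] or "Unmounted"
        print(f'{d["name"]}: {d["size"]}, {kind}, mount point: {mount}')
```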
Software Specifications
Software | Value |
---|---|
OS | Ubuntu |
OS Version | 22.04.3 LTS (Jammy Jellyfish) |
CUDA Driver | 12.2
Docker Version | 24.0.6 |
Python Version | Python 3.10.12 |
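The software versions can be confirmed with the usual version commands; a minimal sketch, assuming the standard CLIs (`lsb_release`, `docker`, `nvidia-smi`) are installed on the image:

```python
# Sketch: confirm the reported software versions on the instance.
# Assumes lsb_release, docker and nvidia-smi are installed and on PATH.
import platform
import subprocess

def run(*cmd: str) -> str:
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()

if __name__ == "__main__":
    print("OS:", run("lsb_release", "-ds"))        # e.g. Ubuntu 22.04.3 LTS
    print("Docker:", run("docker", "--version"))   # e.g. Docker version 24.0.6
    print("Python:", platform.python_version())    # e.g. 3.10.12
    # The CUDA version supported by the driver (12.2 here) is shown in the
    # plain `nvidia-smi` banner; the query below returns the driver version.
    print("Driver:", run("nvidia-smi", "--query-gpu=driver_version",
                         "--format=csv,noheader"))
```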
Benchmarks
Benchmark | Value |
---|---|
FFmpeg | 65
CoreMark (Iterations per sec) | 28756.290
Llama 2 Inference (Tokens per sec) | 72.15
TensorFlow MNIST Training | 1.589
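The exact harness behind the Llama 2 inference number is not documented on this page. As a rough illustration only, a tokens-per-second figure is typically derived by dividing the number of generated tokens by the wall-clock generation time; in the sketch below, `generate` is a hypothetical placeholder for whatever inference call the benchmark actually used.

```python
# Illustration only: how a tokens-per-second figure like the Llama 2 row is
# typically computed. `generate` is a placeholder, not the benchmark's code.
import time

def generate(prompt: str, max_new_tokens: int) -> list[str]:
    # Placeholder: stand-in for a real model call (e.g. llama.cpp or a
    # Hugging Face transformers generate() call).
    return ["tok"] * max_new_tokens

def tokens_per_second(prompt: str, max_new_tokens: int = 256) -> float:
    start = time.perf_counter()
    tokens = generate(prompt, max_new_tokens)
    elapsed = time.perf_counter() - start
    return len(tokens) / elapsed

if __name__ == "__main__":
    print(f"{tokens_per_second('Hello'):.2f} tokens/sec")
```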
nvidia-smi output
nvidia-smi topo -m output
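Both outputs can be captured directly on the instance; a minimal sketch, again assuming `nvidia-smi` is on the PATH:

```python
# Sketch: capture the two nvidia-smi outputs referenced above.
# Assumes nvidia-smi is on PATH (installed with driver 535.54.03).
import subprocess

def capture(*cmd: str) -> str:
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(capture("nvidia-smi"))                # device status table
    print(capture("nvidia-smi", "topo", "-m"))  # GPU/NIC topology matrix
```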