Explore IMWT's H100 cloud instance specifications and benchmarks. Compare hardware configurations and performance metrics to optimize your AI and ML workloads.
Hardware Specifications
GPU Configuration | Value |
---|---|
GPU Type | H100 |
GPU Interconnect | PCIe |
GPU Model Name | NVIDIA H100 PCIe |
Driver Version | 560.35.03 |
GPU VRAM (GB) | 80 |
Power Limit (W) | 350.00 |
GPU Temperature (°C) | 32 |
GPU Clock Speed (MHz) | 345 |
Memory Clock Speed (MHz) | 1593 |
Pstate | P0 |
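To sanity-check these figures on a live instance, the short sketch below (an illustration, not something bundled with the image) pulls the same properties straight from the driver using nvidia-smi's standard `--query-gpu` fields.

```python
# Minimal sketch: re-read the GPU properties reported above via nvidia-smi.
import subprocess

QUERY_FIELDS = [
    "name",            # GPU Model Name
    "driver_version",  # Driver Version
    "memory.total",    # GPU VRAM
    "power.limit",     # Power Limit (W)
    "temperature.gpu", # GPU Temperature (°C)
    "clocks.sm",       # GPU Clock Speed (MHz)
    "clocks.mem",      # Memory Clock Speed (MHz)
    "pstate",          # Performance state
]

def query_gpu() -> dict:
    """Return the first GPU's properties as a field -> value mapping."""
    out = subprocess.run(
        ["nvidia-smi",
         f"--query-gpu={','.join(QUERY_FIELDS)}",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    # nvidia-smi prints one CSV line per GPU; this instance exposes a single H100.
    values = [v.strip() for v in out.splitlines()[0].split(",")]
    return dict(zip(QUERY_FIELDS, values))

if __name__ == "__main__":
    for field, value in query_gpu().items():
        print(f"{field}: {value}")
```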
CPU Configuration | Value |
---|---|
Model Name | Intel(R) Xeon(R) Platinum 8352Y CPU @ 2.20GHz |
Vendor ID | GenuineIntel |
CPUs | 20 |
CPU Clock Speed (MHz) | 4399.99 |
Threads Per Core | 1 |
Cores Per Socket | 10 |
Sockets | 2 |
Memory | Value |
---|---|
Total | 125 GB |
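The CPU count and memory total can be confirmed with nothing but the Python standard library; the sketch below assumes a Linux guest (as on this Ubuntu image) so that /proc/meminfo is available.

```python
# Illustrative check of the CPU and memory figures above using the standard library.
import os

def total_memory_gib() -> float:
    """Read MemTotal from /proc/meminfo and convert kB -> GiB."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                kib = int(line.split()[1])
                return kib / (1024 ** 2)
    raise RuntimeError("MemTotal not found in /proc/meminfo")

if __name__ == "__main__":
    print(f"Logical CPUs : {os.cpu_count()}")             # expected: 20
    print(f"Total memory : {total_memory_gib():.0f} GiB")  # expected: ~125
```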
Disk Specifications
Storage | Value |
---|---|
Total | 1228.80 GB |
Available Disks
Property | Value |
---|---|
Disk 1 | |
Device | sda |
Size | 1.2 TB |
Type | HDD |
Mount Point | Unmounted |
Software Specifications
Software | Value |
---|---|
OS | Ubuntu |
OS Version | 22.04.4 LTS (Jammy Jellyfish) |
CUDA Version | 12.6 |
Docker Version | 27.2.0 |
Python Version | Python 3.10.12 |
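A quick way to verify the software stack from inside the instance is to print the relevant versions; the sketch below assumes lsb_release, docker, and nvidia-smi are available on the PATH.

```python
# Print the software versions listed above for a quick sanity check.
import platform
import subprocess

def run(cmd: list[str]) -> str:
    """Run a command and return its trimmed stdout."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()

if __name__ == "__main__":
    print("OS            :", run(["lsb_release", "-ds"]))   # e.g. Ubuntu 22.04.4 LTS
    print("Python        :", platform.python_version())     # e.g. 3.10.12
    print("Docker        :", run(["docker", "--version"]))  # e.g. 27.2.0
    print("GPU driver    :", run(["nvidia-smi",
                                  "--query-gpu=driver_version",
                                  "--format=csv,noheader"]))  # e.g. 560.35.03
```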
Benchmarks
Benchmark | Value |
---|---|
FFmpeg | 119 |
CoreMark (Iterations per sec) | 23091.133 |
Llama 2 Inference (Tokens per sec) | 42.61 |
TensorFlow MNIST Training | 1.914 |
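The harness behind these numbers is not reproduced here; as a rough illustration of how a figure such as the Llama 2 tokens-per-second result is derived, the sketch below times a generation call and divides the number of produced tokens by the elapsed wall-clock time. `generate_text` is a hypothetical placeholder for whatever inference call you actually benchmark.

```python
# Rough illustration: tokens per second = tokens generated / wall-clock seconds.
# `generate_text` is a hypothetical stand-in for a real inference call
# (llama.cpp, vLLM, transformers, ...).
import time
from typing import Callable

def tokens_per_second(generate_text: Callable[[str, int], list[int]],
                      prompt: str, max_new_tokens: int = 256) -> float:
    start = time.perf_counter()
    token_ids = generate_text(prompt, max_new_tokens)  # generated token ids
    elapsed = time.perf_counter() - start
    return len(token_ids) / elapsed

if __name__ == "__main__":
    # Dummy generator so the sketch runs standalone; replace with a real model call.
    def fake_generate(prompt: str, n: int) -> list[int]:
        time.sleep(0.01)
        return list(range(n))

    print(f"{tokens_per_second(fake_generate, 'Hello'):.2f} tokens/sec")
```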
nvidia-smi output
nvidia-smi topo -m output