NVIDIA H200 SXM 141 GB vs NVIDIA Tesla M40

Comparison of the NVIDIA H200 SXM 141 GB (141 GB HBM3e, 16,896 CUDA cores) and the NVIDIA Tesla M40 (12 GB GDDR5, 3,072 CUDA cores).


Performance Rating

The NVIDIA H200 SXM 141 GB outperforms the NVIDIA Tesla M40 by 1,080.04% in the overall GPU ARK performance rating.

NVIDIA H200 SXM 141 GB

67.4

NVIDIA Tesla M40

5.7
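All of the percentage gaps on this page follow from the plain ratio of the two values. A minimal sketch of that arithmetic, applied to the rating scores above (the quoted 1,080.04% presumably comes from unrounded scores):

```python
def percent_gap(a: float, b: float) -> float:
    """Percentage by which value a exceeds value b."""
    return (a / b - 1.0) * 100.0

# GPU ARK rating scores shown above
print(f"{percent_gap(67.4, 5.7):.1f}%")  # ~1082.5% from the rounded scores
```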

Contents:

Memory · ML Performance · Compute Power · Architecture & Compatibility · ML Software Support · Clocks & Performance · Power Consumption · Rendering · Benchmarks · Additional

Memory

Memory Size

🔥 +1,075% 141 GB
12 GB

Memory Type

HBM3e GDDR5

Memory Bandwidth

🔥 4.89 TB/s
288.4 GB/s

Memory Bus Width

6,144-bit 384-bit
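The bandwidth figures are consistent with the usual bus-width times data-rate product. A rough sketch, assuming the listed memory clocks are one quarter of the effective per-pin transfer rate (quad-pumped GDDR5 on the M40; the H200's HBM3e figure lines up the same way):

```python
def peak_bandwidth_gbs(bus_bits: int, clock_mhz: float, pump: int = 4) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer x effective rate."""
    return bus_bits / 8 * clock_mhz * pump / 1000

print(peak_bandwidth_gbs(6144, 1593))  # ~4894 GB/s -> the 4.89 TB/s above (H200)
print(peak_bandwidth_gbs(384, 1502))   # ~288.4 GB/s (Tesla M40)
```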

ML Performance

FP16 (Half Precision)

🔥 267.6 TFLOPS
No

BF16 (Brain Float)

No No

TF32 (TensorFloat)

No No
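One caveat on this block: Hopper-class GPUs do support BF16 and TF32 in hardware, so the "No" entries for the H200 most likely reflect missing database fields rather than the silicon. On a live system the support matrix can be queried directly; a minimal sketch with PyTorch (assumes torch with CUDA is installed):

```python
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"Compute capability: {major}.{minor}")   # 9.0 on H200, 5.2 on M40
    print("BF16 supported:", torch.cuda.is_bf16_supported())
    print("TF32 eligible:", major >= 8)             # TF32 needs Ampere or newer
```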

Compute Power

FP32 (Single Precision)

🔥 +879% 66.91 TFLOPS
6.832 TFLOPS

FP64 (Double Precision)

🔥 +15,567% 33.45 TFLOPS
0.2135 TFLOPS

CUDA Cores

🔥 +450% 16,896
3,072

RT Cores

No No
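Both FP32 numbers match the standard peak-throughput formula: two FLOPs per CUDA core per cycle (one fused multiply-add) at boost clock. A quick check:

```python
def peak_fp32_tflops(cuda_cores: int, boost_mhz: float) -> float:
    """Peak FP32 TFLOPS: 2 FLOPs (one FMA) per core per cycle."""
    return 2 * cuda_cores * boost_mhz / 1e6

print(peak_fp32_tflops(16896, 1980))  # 66.91 TFLOPS (H200 SXM)
print(peak_fp32_tflops(3072, 1112))   # 6.83 TFLOPS (Tesla M40)
```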

Architecture & Compatibility

GPU Architecture

Hopper Maxwell 2.0

SM (Streaming Multiprocessor)

🔥 132
24

PCIe Version

PCIe 5.0 x16 PCIe 3.0 x16

ML Software Support

CUDA Compute Capability

🔥 9.0
5.2
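Compute capability is what gates toolchain support: kernels must be compiled for (or JIT-compatible with) the target architecture. A small sketch mapping the two cards to the corresponding nvcc -gencode flags (the lookup table is illustrative; the flag syntax is standard nvcc):

```python
# Compute capability per card, as listed above.
COMPUTE_CAPABILITY = {
    "NVIDIA H200 SXM (Hopper)": (9, 0),
    "NVIDIA Tesla M40 (Maxwell 2.0)": (5, 2),
}

for card, (major, minor) in COMPUTE_CAPABILITY.items():
    sm = f"{major}{minor}"
    print(f"{card}: -gencode arch=compute_{sm},code=sm_{sm}")
```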

Clocks & Performance

Base Clock

🔥 +58% 1,500 MHz
948 MHz

Boost Clock

🔥 +78% 1,980 MHz
1,112 MHz

Memory Clock

🔥 +6% 1,593 MHz
1,502 MHz

Power Consumption

TDP/TGP

700 W
🔥 -64% 250 W

Recommended PSU

1100 W
🔥 -45% 600 W

Power Connector

8-pin EPS 8-pin EPS
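Despite the much higher TDP, the H200 is far more power-efficient per unit of compute. A back-of-the-envelope perf-per-watt figure from the peak FP32 numbers above (datasheet peaks, not a measured workload):

```python
cards = {
    "H200 SXM": (66.91, 700),   # (peak FP32 TFLOPS, TDP in watts)
    "Tesla M40": (6.832, 250),
}
for name, (tflops, watts) in cards.items():
    print(f"{name}: {tflops / watts * 1000:.0f} GFLOPS/W")
# H200: ~96 GFLOPS/W vs Tesla M40: ~27 GFLOPS/W
```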

Rendering

Texture Units (TMU)

🔥 +175% 528
192

ROP

No No

L2 Cache

🔥 +1,567% 50 MB
3 MB

Benchmarks

MLPerf, llama2-70b-99.9 (UNSET)

3,534 tokens/s

MLPerf, llama2-70b-99.9 (fp16)

3,553 tokens/s

MLPerf, llama2-70b-99.9 (fp8)

2,444 tokens/s

MLPerf, llama3.1-405b (fp16)

40.8 tokens/s

MLPerf, llama3.1-405b (fp8)

25.3 tokens/s

MLPerf, llama3.1-8b (fp8)

5,161 tokens/s

llama.cpp, gpt-oss 20B Q4_K - Medium

47.0 tokens/s

llama.cpp, llama 7B Q4_0

36.7 tokens/s

llama.cpp, llama-2-7b-Q4_0

41.7 tokens/s

llama.cpp, qwen3 32B Q4_K - Medium

7.19 tokens/s

llama.cpp, qwen3moe 30B.A3B Q4_K - Medium

35.1 tokens/s

MLPerf, deepseek-r1 (fp8)

1,113 tokens/s

MLPerf, mixtral-8x7b (fp8)

7,132 tokens/s
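Tokens-per-second figures like these come from timing a generation run and dividing token count by wall-clock time; note that the MLPerf rows report aggregate offline throughput, not single-request latency. A generic timing-harness sketch (the generate callable is a placeholder for whatever backend is under test):

```python
import time

def tokens_per_second(generate, n_tokens: int) -> float:
    """Time a callable that emits n_tokens tokens and return the rate."""
    start = time.perf_counter()
    generate(n_tokens)  # placeholder: e.g. a llama.cpp or vLLM generate call
    return n_tokens / (time.perf_counter() - start)
```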

Additional

Slots

🔥 SXM Module
Dual-slot

Release Date

Nov. 18, 2024 Nov. 10, 2015

Display Outputs

No outputs
No outputs
