Barco MXRT-1450 vs NVIDIA H200 SXM 141 GB

Comparison of the Barco MXRT-1450 (80 shader cores) and the NVIDIA H200 SXM 141 GB (141 GB HBM3e, 16,896 CUDA cores).

Performance Rating

Barco MXRT-1450: 0.0

NVIDIA H200 SXM 141 GB: 67.4

Contents:

Memory · ML Performance · Compute Power · Architecture & Compatibility · ML Software Support · Clocks & Performance · Power Consumption · Rendering · Benchmarks · Additional

Memory

Memory Size

No
🔥 141 GB

Memory Type

GDDR3 HBM3e

Memory Bandwidth

9.6 GB/s
🔥 4.89 TB/s

Memory Bus Width

64-bit 6,144-bit
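The bandwidth figures above follow directly from bus width, memory clock, and transfers per clock. A minimal sketch; the transfer multipliers (2 for double-data-rate GDDR3, 4 per clock for HBM3e) are assumptions inferred from the listed clocks and bandwidths, not vendor specs:

```python
def mem_bandwidth_gbs(bus_width_bits: int, clock_mhz: float, transfers_per_clock: int) -> float:
    """Peak bandwidth in GB/s: (bus width in bytes) * clock * transfers per clock."""
    return bus_width_bits / 8 * clock_mhz * 1e6 * transfers_per_clock / 1e9

# Barco MXRT-1450: 64-bit GDDR3 at 600 MHz, double data rate (assumed)
print(mem_bandwidth_gbs(64, 600, 2))      # ~9.6 GB/s

# NVIDIA H200: 6,144-bit HBM3e at 1,593 MHz, 4 transfers/clock (assumed)
print(mem_bandwidth_gbs(6144, 1593, 4))   # ~4,894 GB/s, i.e. ~4.89 TB/s
```

Both results match the table, which is a useful sanity check that the listed clocks and bus widths are mutually consistent.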

ML Performance

FP16 (Half Precision)

No
🔥 267.6 TFLOPS

BF16 (Brain Float)

No No

TF32 (TensorFloat)

No No

Compute Power

FP32 (Single Precision)

96.0 GFLOPS
🔥 66.91 TFLOPS

FP64 (Double Precision)

No
🔥 33.45 TFLOPS

CUDA Cores

80
🔥 +21,020% 16,896

RT Cores

No No
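The FP32 numbers above are consistent with the standard peak-throughput formula, cores × 2 FLOPs per FMA per clock × clock. A sketch, assuming the H200 runs at its 1,980 MHz boost clock and the MXRT-1450 at 600 MHz:

```python
def peak_fp32_tflops(cores: int, clock_mhz: float) -> float:
    """Peak FP32 throughput in TFLOPS: cores * 2 FLOPs (one FMA) per clock."""
    return cores * 2 * clock_mhz * 1e6 / 1e12

print(peak_fp32_tflops(16896, 1980))        # ~66.91 TFLOPS (H200)
print(peak_fp32_tflops(80, 600) * 1000)     # ~96 GFLOPS (MXRT-1450)
```

Note the three-orders-of-magnitude gap: the MXRT-1450's peak is measured in GFLOPS, not TFLOPS.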

Architecture & Compatibility

GPU Architecture

TeraScale 2 Hopper

SM (Streaming Multiprocessor)

No
🔥 132

PCIe Version

PCIe 2.0 x1 PCIe 5.0 x16

ML Software Support

CUDA Version

No 9.0 (compute capability)

Clocks & Performance

Base Clock

No
🔥 1,500 MHz

Boost Clock

No
🔥 1,980 MHz

Memory Clock

600 MHz
🔥 +166% 1,593 MHz

Power Consumption

TDP/TGP

🔥 -98% 15 W
700 W

Recommended PSU

🔥 -82% 200 W
1100 W

Power Connector

None 8-pin EPS
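The 🔥 percentage badges throughout this comparison appear to be plain relative differences; a sketch of that computation, assuming the formula (b − a) / a:

```python
def pct_delta(a: float, b: float) -> float:
    """Relative difference of b versus a, in percent."""
    return (b - a) / a * 100

print(round(pct_delta(700, 15)))     # -98     (TDP: 15 W vs 700 W)
print(round(pct_delta(80, 16896)))   # 21020   (CUDA cores: 80 vs 16,896)
print(round(pct_delta(600, 1593)))   # 166     (memory clock, MHz)
```

All three reproduce the badges shown in the tables, so the site's deltas are straight ratios rather than benchmark-weighted scores.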

Rendering

Texture Units (TMU)

8
🔥 +6,500% 528

ROP

No No

L2 Cache

128 KB
🔥 50 MB

Benchmarks

MLPerf, llama2-70b-99.9 (UNSET)

3,534 tokens/s

MLPerf, llama2-70b-99.9 (fp16)

3,553 tokens/s

MLPerf, llama2-70b-99.9 (fp8)

2,444 tokens/s

MLPerf, llama3.1-405b (fp16)

40.8 tokens/s

MLPerf, llama3.1-405b (fp8)

25.3 tokens/s

MLPerf, llama3.1-8b (fp8)

5,161 tokens/s

MLPerf, deepseek-r1 (fp8)

1,113 tokens/s

MLPerf, mixtral-8x7b (fp8)

7,132 tokens/s

Additional

Slots

Single-slot
🔥 SXM Module

Release Date

Jan. 31, 2011 Nov. 18, 2024

Display Outputs

1x DMS-59
No outputs
