AMD Radeon HD 8370D IGP vs NVIDIA H200 SXM 141 GB

Comparison of the AMD Radeon HD 8370D integrated GPU (128 shader cores, system-shared memory) with the NVIDIA H200 SXM 141 GB (141 GB HBM3e, 16,896 CUDA cores).


Performance Rating (relative chart; peers shown include the A100, H200, MI325X, RX 7900 XTX, MI250, and Instinct MI300X)

AMD Radeon HD 8370D IGP: not rated
NVIDIA H200 SXM 141 GB: 67.4

Contents: Memory, ML Performance, Compute Power, Architecture & Compatibility, ML Software Support, Clocks & Performance, Power Consumption, Rendering, Benchmarks, Additional

Memory

Memory Size

N/A (system-shared memory)
🔥 141 GB

Memory Type

System shared
HBM3e

Memory Bandwidth

System Dependent
🔥 4.89 TB/s

Memory Bus Width

N/A
6,144-bit
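As a sanity check, the quoted bandwidth roughly follows from bus width times per-pin data rate. The 6.4 Gbps HBM3e rate below is an assumed round figure for illustration, not a number from this comparison:

```python
# Rough bandwidth estimate: bus width (bits) x per-pin data rate (Gbps) / 8.
# The 6.4 Gbps HBM3e data rate is an assumption, not a spec from this page.
BUS_WIDTH_BITS = 6144   # H200 memory bus width from the row above
DATA_RATE_GBPS = 6.4    # assumed per-pin data rate

bandwidth_gbs = BUS_WIDTH_BITS * DATA_RATE_GBPS / 8  # bits -> bytes
print(f"~{bandwidth_gbs / 1000:.2f} TB/s")  # ~4.92 TB/s, close to the quoted 4.89
```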

ML Performance

FP16 (Half Precision)

Not supported
🔥 267.6 TFLOPS

BF16 (Brain Float)

Not supported on either GPU

TF32 (TensorFloat)

Not supported on either GPU

Compute Power

FP32 (Single Precision)

0.1946 TFLOPS
🔥 66.91 TFLOPS (+34,283%)

FP64 (Double Precision)

Not supported
🔥 33.45 TFLOPS

Shader Cores (CUDA Cores on NVIDIA)

128
🔥 16,896 (+13,100%)
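The percentage deltas in this section are plain ratio uplifts; a quick sketch reproduces the figures quoted above:

```python
def uplift_pct(a: float, b: float) -> float:
    """Percentage increase going from value a to value b."""
    return (b / a - 1) * 100

# FP32 throughput and core counts from the rows above
print(f"+{uplift_pct(0.1946, 66.91):,.0f}%")  # FP32: +34,283%
print(f"+{uplift_pct(128, 16896):,.0f}%")     # cores: +13,100%
```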

RT Cores

None on either GPU

Architecture & Compatibility

GPU Architecture

TeraScale 3
Hopper

SM (Streaming Multiprocessor)

N/A
🔥 132

PCIe Version

IGP (integrated)
PCIe 5.0 x16

ML Software Support

CUDA Version

Not supported
9.0

Clocks & Performance

Base Clock

N/A
🔥 1,500 MHz

Boost Clock

N/A
🔥 1,980 MHz

Memory Clock

N/A
🔥 1,593 MHz

Power Consumption

TDP/TGP

🔥 65 W (-91%)
700 W

Recommended PSU

N/A
1,100 W

Power Connector

None
8-pin EPS
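The TDP gap looks different once normalized by throughput; a short sketch of FP32 efficiency using the peak-TFLOPS and TDP figures above:

```python
# FP32 performance per watt, from the peak-TFLOPS and TDP rows above.
h200_gflops_per_w = 66.91 * 1000 / 700   # ~95.6 GFLOPS/W
igp_gflops_per_w = 0.1946 * 1000 / 65    # ~3.0 GFLOPS/W

print(f"H200: {h200_gflops_per_w:.1f} GFLOPS/W, "
      f"HD 8370D: {igp_gflops_per_w:.1f} GFLOPS/W")
```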

Rendering

Texture Units (TMU)

8
🔥 528 (+6,500%)

ROP

Not listed for either GPU

L2 Cache

N/A
🔥 50 MB

Benchmarks

MLPerf, llama2-70b-99.9 (UNSET)

3,534 tokens/s

MLPerf, llama2-70b-99.9 (fp16)

3,553 tokens/s

MLPerf, llama2-70b-99.9 (fp8)

2,444 tokens/s

MLPerf, llama3.1-405b (fp16)

40.8 tokens/s

MLPerf, llama3.1-405b (fp8)

25.3 tokens/s

MLPerf, llama3.1-8b (fp8)

5,161 tokens/s

Llama.cpp, Backend: AMD ROCm HIP

GLM-4.7-Flash-IQ4_XS
Prompt Processing 512 / 1024 / 2048: 1,139 / 1,080 / 964.38 tokens/s
Text Generation 128: 59.44 tokens/s

Llama-3.1-Tulu-3-8B-Q8_0
Text Generation 128: 26.29 tokens/s

MiniMax-M2.5-UD-TQ1_0
Prompt Processing 512 / 1024 / 2048: 237.79 / 236.95 / 230.31 tokens/s
Text Generation 128: 35.79 tokens/s

Qwen3-8B-Q8_0
Prompt Processing 512 / 1024 / 2048: 1,364 / 1,338 / 1,278 tokens/s
Text Generation 128: 25.86 tokens/s

gpt-oss-20b-Q8_0
Prompt Processing 512 / 1024 / 2048: 1,726 / 1,721 / 1,680 tokens/s
Text Generation 128: 71.45 tokens/s

Llama.cpp, Backend: Vulkan

GLM-4.7-Flash-IQ4_XS
Prompt Processing 512 / 1024 / 2048: 953.45 / 914.59 / 834.67 tokens/s
Text Generation 128: 70.92 tokens/s

Llama-3.1-Tulu-3-8B-Q8_0
Prompt Processing 512 / 1024 / 2048: 1,130 / 1,100 / 1,061 tokens/s
Text Generation 128: 26.17 tokens/s

MiniMax-M2.5-UD-TQ1_0
Prompt Processing 512 / 1024 / 2048: 225.25 / 224.52 / 212.58 tokens/s
Text Generation 128: 46.20 tokens/s

Mistral-7B-Instruct-v0.3-Q8_0
Text Generation 128: 27.44 tokens/s

Qwen3-8B-Q8_0
Prompt Processing 512 / 1024 / 2048: 1,115 / 1,099 / 1,030 tokens/s
Text Generation 128: 25.79 tokens/s

gpt-oss-20b-Q8_0
Prompt Processing 512 / 1024 / 2048: 1,420 / 1,427 / 1,416 tokens/s
Text Generation 128: 78.56 tokens/s

MLPerf, deepseek-r1 (fp8)

1,113 tokens/s

MLPerf, mixtral-8x7b (fp8)

7,132 tokens/s
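Throughput figures like these convert directly into wall-clock time as tokens divided by tokens per second; for example, using one of the prompt-processing results above:

```python
def seconds_for(tokens: int, tokens_per_s: float) -> float:
    """Wall-clock seconds to process `tokens` at a given throughput."""
    return tokens / tokens_per_s

# gpt-oss-20b-Q8_0, Prompt Processing 2048 on the ROCm HIP backend above
print(f"{seconds_for(2048, 1680):.2f} s")  # ~1.22 s for a 2,048-token prompt
```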

Additional

Slots

IGP
🔥 SXM Module

Release Date

July 7, 2013
Nov. 18, 2024

Display Outputs

Motherboard Dependent
No outputs
