AMD Radeon HD 8370D IGP vs NVIDIA GeForce RTX 5070 Ti SUPER

Comparison of the AMD Radeon HD 8370D IGP (integrated graphics, 128 cores) and the NVIDIA GeForce RTX 5070 Ti SUPER (16 GB GDDR7, 8,960 cores).


Performance Rating

AMD Radeon HD 8370D IGP: not rated
NVIDIA GeForce RTX 5070 Ti SUPER: 20.0

(Rating chart also plots reference points for A100, H200, MI325X, RX 7900 XTX, MI250, and Instinct MI300X.)

Contents: Memory · ML Performance · Compute Power · Architecture & Compatibility · ML Software Support · Clocks & Performance · Power Consumption · Rendering · Benchmarks · Additional

Memory

                      HD 8370D IGP        RTX 5070 Ti SUPER
Memory Size           n/a                 🔥 16 GB
Memory Type           system shared       GDDR7
Memory Bandwidth      system dependent    896.0 GB/s
Memory Bus Width      n/a                 256-bit
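The 896.0 GB/s figure is consistent with the bus width and the effective GDDR7 data rate. A quick sanity check, assuming the commonly quoted 28 Gbps per pin (i.e. 16× the card's 1,750 MHz memory clock, which is an assumption about this card's memory configuration, not a stated spec):

```python
# Sanity check: memory bandwidth from bus width and per-pin data rate.
bus_width_bits = 256   # memory bus width from the spec table
data_rate_gbps = 28    # assumed effective GDDR7 rate per pin (16 x 1,750 MHz)

# Total bandwidth: pins * Gbps per pin, divided by 8 to convert bits to bytes.
bandwidth_gbs = bus_width_bits * data_rate_gbps / 8
print(bandwidth_gbs)  # 896.0
```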

ML Performance

                        HD 8370D IGP    RTX 5070 Ti SUPER
FP16 (Half Precision)   n/a             🔥 43.94 TFLOPS
BF16 (Brain Float)      No              No
TF32 (TensorFloat)      No              No

Compute Power

                          HD 8370D IGP     RTX 5070 Ti SUPER
FP32 (Single Precision)   0.1946 TFLOPS    🔥 43.94 TFLOPS (+22,480%)
FP64 (Double Precision)   n/a              🔥 0.6866 TFLOPS
CUDA Cores                128              🔥 8,960 (+6,900%)
RT Cores                  n/a              🔥 70
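The percentage gaps quoted in the tables follow directly from the raw spec values. A short check of the FP32, CUDA-core, and TDP deltas:

```python
# Recompute the percentage differences shown in the comparison tables.
def pct_change(a, b):
    """Percent change going from value a to value b."""
    return (b - a) / a * 100

print(round(pct_change(0.1946, 43.94)))  # FP32 TFLOPS -> 22480
print(round(pct_change(128, 8960)))      # CUDA cores  -> 6900
print(round(pct_change(350, 65)))        # TDP (negative: lower is better) -> -81
```

The -81% TDP row reads in the IGP's favor: the integrated part draws 65 W against the discrete card's 350 W.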

Architecture & Compatibility

                                HD 8370D IGP       RTX 5070 Ti SUPER
GPU Architecture                TeraScale 3        Blackwell 2.0
SM (Streaming Multiprocessor)   n/a                🔥 70
PCIe Version                    n/a (integrated)   PCIe 5.0 x16

ML Software Support

               HD 8370D IGP    RTX 5070 Ti SUPER
CUDA Version   n/a             12.0

Clocks & Performance

               HD 8370D IGP    RTX 5070 Ti SUPER
Base Clock     n/a             🔥 2,295 MHz
Boost Clock    n/a             🔥 2,452 MHz
Memory Clock   n/a             🔥 1,750 MHz

Power Consumption

                  HD 8370D IGP    RTX 5070 Ti SUPER
TDP/TGP           🔥 65 W (-81%)  350 W
Recommended PSU   n/a             750 W
Power Connector   none (IGP)      1x 16-pin

Rendering

                      HD 8370D IGP    RTX 5070 Ti SUPER
Texture Units (TMU)   8               🔥 280 (+3,400%)
ROPs                  n/a             🔥 70
L2 Cache              n/a             🔥 48 MB

Benchmarks

Llama.cpp results, in tokens per second (PP n = Prompt Processing, n-token prompt; TG 128 = Text Generation, 128 tokens):

Backend: AMD ROCm HIP

Model                           PP 512    PP 1024   PP 2048   TG 128
GLM-4.7-Flash-IQ4_XS            1,139     1,080     964.38    59.44
Llama-3.1-Tulu-3-8B-Q8_0        n/a       n/a       n/a       26.29
MiniMax-M2.5-UD-TQ1_0           237.79    236.95    230.31    35.79
Qwen3-8B-Q8_0                   1,364     1,338     1,278     25.86
gpt-oss-20b-Q8_0                1,726     1,721     1,680     71.45

Backend: Vulkan

Model                           PP 512    PP 1024   PP 2048   TG 128
GLM-4.7-Flash-IQ4_XS            953.45    914.59    834.67    70.92
Llama-3.1-Tulu-3-8B-Q8_0        1,130     1,100     1,061     26.17
MiniMax-M2.5-UD-TQ1_0           225.25    224.52    212.58    46.20
Mistral-7B-Instruct-v0.3-Q8_0   n/a       n/a       n/a       27.44
Qwen3-8B-Q8_0                   1,115     1,099     1,030     25.79
gpt-oss-20b-Q8_0                1,420     1,427     1,416     78.56
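The two backends can be compared directly from the reported figures: for gpt-oss-20b-Q8_0, the ROCm HIP backend is faster at prompt processing while Vulkan pulls ahead in text generation. A small script over those numbers (values copied from the results above):

```python
# Relative throughput of the Vulkan backend vs ROCm HIP for gpt-oss-20b-Q8_0,
# using the tokens-per-second results reported in the benchmark section.
hip    = {"pp512": 1726, "pp1024": 1721, "pp2048": 1680, "tg128": 71.45}
vulkan = {"pp512": 1420, "pp1024": 1427, "pp2048": 1416, "tg128": 78.56}

for test in hip:
    ratio = vulkan[test] / hip[test]
    print(f"{test}: Vulkan is {ratio:.2f}x HIP")
# pp512: Vulkan is 0.82x HIP
# pp1024: Vulkan is 0.83x HIP
# pp2048: Vulkan is 0.84x HIP
# tg128: Vulkan is 1.10x HIP
```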

Additional

                  HD 8370D IGP            RTX 5070 Ti SUPER
Slots             n/a (integrated)        Dual-slot
Release Date      July 7, 2013            July 17, 2025
Display Outputs   motherboard dependent   1x HDMI 2.1b, 3x DisplayPort 2.1b
