AMD Radeon HD 8370D IGP vs SPARKLE Arc A310 OmniView

Comparison of the AMD Radeon HD 8370D integrated GPU (128 cores, system-shared memory) and the SPARKLE Arc A310 OmniView (768 cores, 4 GB GDDR6).


Performance Rating

[Rating chart: the SPARKLE Arc A310 OmniView scores 2.2 on the site's performance scale, shown against reference GPUs (A100, H200, MI325X, RX 7900 XTX, MI250, Instinct MI300X); no rating is listed for the AMD Radeon HD 8370D IGP.]

Contents:

- Memory
- ML Performance
- Compute Power
- Architecture & Compatibility
- ML Software Support
- Clocks & Performance
- Power Consumption
- Rendering
- Benchmarks
- Additional

Memory

| | AMD Radeon HD 8370D IGP | SPARKLE Arc A310 OmniView |
|---|---|---|
| Memory Size | none (system shared) | 4 GB |
| Memory Type | system shared | GDDR6 |
| Memory Bandwidth | system dependent | 124.0 GB/s |
| Memory Bus Width | n/a | 64-bit |
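The Arc A310's 124.0 GB/s figure is consistent with its 64-bit bus and the 1,937 MHz memory clock listed under Clocks & Performance, assuming GDDR6's usual 8 data transfers per pin per memory-clock cycle. A quick sanity check:

```python
# GDDR6 moves 8 bits per pin per memory-clock cycle,
# so effective per-pin rate = memory clock x 8.
memory_clock_mhz = 1937                                # from the spec table
bus_width_bits = 64
effective_gbps_per_pin = memory_clock_mhz * 8 / 1000   # ~15.5 Gbps
bandwidth_gbs = effective_gbps_per_pin * bus_width_bits / 8
print(f"{bandwidth_gbs:.1f} GB/s")                     # -> 124.0 GB/s
```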

ML Performance

| | AMD Radeon HD 8370D IGP | SPARKLE Arc A310 OmniView |
|---|---|---|
| FP16 (Half Precision) | No | 6.144 TFLOPS |
| BF16 (Brain Float) | No | No |
| TF32 (TensorFloat) | No | No |

Compute Power

| | AMD Radeon HD 8370D IGP | SPARKLE Arc A310 OmniView |
|---|---|---|
| FP32 (Single Precision) | 0.1946 TFLOPS | 3.072 TFLOPS (+1,479%) |
| FP64 (Double Precision) | No | 0.768 TFLOPS |
| Shading Units | 128 | 768 (+500%) |
| RT Cores | No | 6 |

Note: neither GPU is an NVIDIA part, so "CUDA cores" does not apply; the core counts above are generic shading units.
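The TFLOPS entries above follow from the standard peak-throughput formula, cores × clock × 2 (one fused multiply-add counts as 2 FLOPs). The 2× FP16 and ¼ FP64 rates are ratios implied by the table, not something I can confirm independently:

```python
# Peak FP32 = shading units x boost clock (GHz) x 2 FLOPs per FMA.
cores = 768                      # Arc A310 shading units
boost_clock_ghz = 2.0            # 2,000 MHz boost clock
fp32_tflops = cores * boost_clock_ghz * 2 / 1000
fp16_tflops = fp32_tflops * 2    # 2x FP32 rate, as implied by the table
fp64_tflops = fp32_tflops / 4    # 1/4 FP32 rate, as implied by the table
print(fp32_tflops, fp16_tflops, fp64_tflops)  # 3.072 6.144 0.768
```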

Architecture & Compatibility

| | AMD Radeon HD 8370D IGP | SPARKLE Arc A310 OmniView |
|---|---|---|
| GPU Architecture | TeraScale 3 | Xe-HPG (Generation 12.7) |
| SM (Streaming Multiprocessor) | No | No |
| PCIe Version | integrated (no PCIe connector) | PCIe 4.0 x8 |

ML Software Support

| | AMD Radeon HD 8370D IGP | SPARKLE Arc A310 OmniView |
|---|---|---|
| CUDA Version | not supported | not supported |

CUDA is NVIDIA-only, so neither of these GPUs supports it.

Clocks & Performance

| | AMD Radeon HD 8370D IGP | SPARKLE Arc A310 OmniView |
|---|---|---|
| Base Clock | — | 2,000 MHz |
| Boost Clock | — | 2,000 MHz |
| Memory Clock | — | 1,937 MHz |

Power Consumption

| | AMD Radeon HD 8370D IGP | SPARKLE Arc A310 OmniView |
|---|---|---|
| TDP/TGP | 65 W (-13%) | 75 W |
| Recommended PSU | — | 250 W |
| Power Connector | — | none |

Rendering

| | AMD Radeon HD 8370D IGP | SPARKLE Arc A310 OmniView |
|---|---|---|
| Texture Units (TMU) | 8 | 32 (+300%) |
| ROPs | — | 6 |
| L2 Cache | — | 4 MB |

Benchmarks

Llama.cpp throughput, in tokens per second:

| Backend | Model | Test | Tokens/s |
|---|---|---|---|
| AMD ROCm HIP | GLM-4.7-Flash-IQ4_XS | Prompt Processing 1024 | 1,080 |
| AMD ROCm HIP | GLM-4.7-Flash-IQ4_XS | Prompt Processing 2048 | 964.38 |
| AMD ROCm HIP | GLM-4.7-Flash-IQ4_XS | Prompt Processing 512 | 1,139 |
| AMD ROCm HIP | GLM-4.7-Flash-IQ4_XS | Text Generation 128 | 59.44 |
| AMD ROCm HIP | Llama-3.1-Tulu-3-8B-Q8_0 | Text Generation 128 | 26.29 |
| AMD ROCm HIP | MiniMax-M2.5-UD-TQ1_0 | Prompt Processing 1024 | 236.95 |
| AMD ROCm HIP | MiniMax-M2.5-UD-TQ1_0 | Prompt Processing 2048 | 230.31 |
| AMD ROCm HIP | MiniMax-M2.5-UD-TQ1_0 | Prompt Processing 512 | 237.79 |
| AMD ROCm HIP | MiniMax-M2.5-UD-TQ1_0 | Text Generation 128 | 35.79 |
| AMD ROCm HIP | Qwen3-8B-Q8_0 | Prompt Processing 1024 | 1,338 |
| AMD ROCm HIP | Qwen3-8B-Q8_0 | Prompt Processing 2048 | 1,278 |
| AMD ROCm HIP | Qwen3-8B-Q8_0 | Prompt Processing 512 | 1,364 |
| AMD ROCm HIP | Qwen3-8B-Q8_0 | Text Generation 128 | 25.86 |
| AMD ROCm HIP | gpt-oss-20b-Q8_0 | Prompt Processing 1024 | 1,721 |
| AMD ROCm HIP | gpt-oss-20b-Q8_0 | Prompt Processing 2048 | 1,680 |
| AMD ROCm HIP | gpt-oss-20b-Q8_0 | Prompt Processing 512 | 1,726 |
| AMD ROCm HIP | gpt-oss-20b-Q8_0 | Text Generation 128 | 71.45 |
| Vulkan | GLM-4.7-Flash-IQ4_XS | Prompt Processing 1024 | 914.59 |
| Vulkan | GLM-4.7-Flash-IQ4_XS | Prompt Processing 2048 | 834.67 |
| Vulkan | GLM-4.7-Flash-IQ4_XS | Prompt Processing 512 | 953.45 |
| Vulkan | GLM-4.7-Flash-IQ4_XS | Text Generation 128 | 70.92 |
| Vulkan | Llama-3.1-Tulu-3-8B-Q8_0 | Prompt Processing 1024 | 1,100 |
| Vulkan | Llama-3.1-Tulu-3-8B-Q8_0 | Prompt Processing 2048 | 1,061 |
| Vulkan | Llama-3.1-Tulu-3-8B-Q8_0 | Prompt Processing 512 | 1,130 |
| Vulkan | Llama-3.1-Tulu-3-8B-Q8_0 | Text Generation 128 | 26.17 |
| Vulkan | MiniMax-M2.5-UD-TQ1_0 | Prompt Processing 1024 | 224.52 |
| Vulkan | MiniMax-M2.5-UD-TQ1_0 | Prompt Processing 2048 | 212.58 |
| Vulkan | MiniMax-M2.5-UD-TQ1_0 | Prompt Processing 512 | 225.25 |
| Vulkan | MiniMax-M2.5-UD-TQ1_0 | Text Generation 128 | 46.20 |
| Vulkan | Mistral-7B-Instruct-v0.3-Q8_0 | Text Generation 128 | 27.44 |
| Vulkan | Qwen3-8B-Q8_0 | Prompt Processing 1024 | 1,099 |
| Vulkan | Qwen3-8B-Q8_0 | Prompt Processing 2048 | 1,030 |
| Vulkan | Qwen3-8B-Q8_0 | Prompt Processing 512 | 1,115 |
| Vulkan | Qwen3-8B-Q8_0 | Text Generation 128 | 25.79 |
| Vulkan | gpt-oss-20b-Q8_0 | Prompt Processing 1024 | 1,427 |
| Vulkan | gpt-oss-20b-Q8_0 | Prompt Processing 2048 | 1,416 |
| Vulkan | gpt-oss-20b-Q8_0 | Prompt Processing 512 | 1,420 |
| Vulkan | gpt-oss-20b-Q8_0 | Text Generation 128 | 78.56 |
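One pattern worth pulling out of these results: on gpt-oss-20b-Q8_0 the Vulkan backend trails ROCm HIP on prompt processing but leads on text generation. A small sketch computing the deltas, using the numbers reported above:

```python
# Tokens/s for gpt-oss-20b-Q8_0, taken from the benchmark results above.
rocm   = {"pp512": 1726, "tg128": 71.45}
vulkan = {"pp512": 1420, "tg128": 78.56}

for test in ("pp512", "tg128"):
    delta = (vulkan[test] / rocm[test] - 1) * 100
    print(f"{test}: Vulkan vs ROCm HIP {delta:+.1f}%")
# pp512: -17.7% (prompt processing), tg128: +10.0% (text generation)
```

So backend choice here trades roughly 18% prompt-processing throughput for about 10% faster generation.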

Additional

| | AMD Radeon HD 8370D IGP | SPARKLE Arc A310 OmniView |
|---|---|---|
| Slots | integrated (IGP) | single-slot |
| Release Date | July 7, 2013 | Nov. 18, 2024 |
| Display Outputs | motherboard dependent | 4× mini-DisplayPort 2.0, 4× HDMI 2.1 |