Qwen3.5 GGUF Models – RDNA4 R9700 Benchmark

Exact GGUF files used in the Comprehensive LLM Benchmark on AMD Radeon AI PRO R9700 (RDNA4).

These files are provided so anyone with an R9700 can reproduce the results exactly.

Note: The 35B file was downloaded from unsloth/Qwen3.5-35B-A3B-GGUF on 2025-02-25. Unsloth has since updated the file; the current version on their repo is larger (20.71 GiB vs 18.34 GiB). This repo preserves the original file used in the benchmark.
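
Since the upstream file has changed, it is worth confirming that a local copy matches the benchmarked snapshot before comparing numbers. The commands below are a sketch: the repo id and local paths are placeholders, and `huggingface-cli` comes from the `huggingface_hub` package.

```shell
# Download the pinned files from this repo (repo id is a placeholder).
huggingface-cli download <this-repo-id> --local-dir ./models

# Quick size check: the benchmarked 35B file is 18.34 GiB;
# unsloth's current upload is 20.71 GiB, so a size mismatch means
# you have the newer file, not the one used here.
ls -l ./models/*.gguf

# A checksum comparison is more robust where hashes are published:
sha256sum ./models/*.gguf
```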

Models

| Model | Type | Total Params | Active/Token | File Size | Quantization | Source |
|---|---|---|---|---|---|---|
| Qwen3.5-35B-A3B | MoE | 34.66B | ~3.5B | 18.34 GiB | UD-Q4_K_XL (file_type=Q4_K_M) | unsloth |
| Qwen3.5-27B | Dense | 26.90B | 26.90B | 15.59 GiB | Q4_K_M | unsloth |

System Configuration

| Component | Details |
|---|---|
| GPU | AMD Radeon AI PRO R9700 (gfx1201, RDNA4, 32 GB GDDR6, 64 CUs) |
| Memory bandwidth | 640 GB/s (MCLK 1258 MHz) |
| PCIe | PCIe 5.0 x16, 32 GT/s |
| CPU | AMD Ryzen 9 9900X 12-Core |
| RAM | 64 GB DDR5 |
| OS | Ubuntu 24.04.4 LTS, Kernel 6.19.8 |
| Mesa (RADV) | 25.2.8 |
| llama.cpp | commit dc8d14c58 (build 8554) |
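
To reproduce the software stack, llama.cpp can be pinned to the commit above and built with its Vulkan backend; RADV vs AMDVLK is then selected per run through the Vulkan ICD loader. This is a sketch: `GGML_VULKAN` is the standard CMake switch for the Vulkan backend, and the ICD JSON path shown is the usual RADV location on Ubuntu but may differ on your system.

```shell
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
git checkout dc8d14c58                 # build 8554, as benchmarked
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j

# Pick the driver per run via the Vulkan ICD loader, e.g. RADV:
# VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/radeon_icd.x86_64.json \
#   ./build/bin/llama-bench ...
```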

Best Results

Qwen3.5-35B-A3B (MoE)

| Metric | RADV | AMDVLK |
|---|---|---|
| Best decode | 149.5 t/s | 163.7 t/s (gfx+rm_kq=1) |
| Best prefill pp2048 | 3,075 t/s (ub=2048) | 2,170 t/s |
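
The pp2048 / ub=2048 configuration maps onto `llama-bench` flags roughly as below. A sketch: the model filename is a placeholder, and `-ub` sets the micro-batch size that produced the best MoE prefill above.

```shell
# Prefill (pp2048) and decode throughput with a 2048-token micro-batch:
./build/bin/llama-bench \
  -m ./models/Qwen3.5-35B-A3B.gguf \
  -p 2048 -n 128 -ub 2048
```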

Qwen3.5-27B (Dense)

| Metric | RADV | AMDVLK |
|---|---|---|
| Best decode | 32.5 t/s (ASPM perf) | 33.2 t/s (ASPM perf) |
| Best prefill pp2048 | 993 t/s (Mesa 25.3.6) | 207 t/s |
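
The "ASPM perf" rows refer to the Linux PCIe Active State Power Management policy, which on most distributions is exposed through the `pcie_aspm` sysfs knob. A sketch (requires root; the exact effect depends on firmware and kernel config):

```shell
# Current policy is shown in brackets, e.g. [default] performance powersave:
cat /sys/module/pcie_aspm/parameters/policy

# Switch to the performance policy used for the "ASPM perf" runs:
echo performance | sudo tee /sys/module/pcie_aspm/parameters/policy
```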

Key Findings

- `rm_kq=1` is the single most impactful code change: +1% RADV, +2% AMDVLK MoE, +13% AMDVLK dense
- PCIe `ASPM=performance` gives +10.8% dense decode on RADV
- The gfx queue helps AMDVLK MoE (+4.7%) but hurts AMDVLK dense (-8%)
- RADV wins overall (best prefill, competitive decode)
- Dense models reach 79-83% bandwidth utilization with optimizations
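
The bandwidth-utilization figure can be sanity-checked with back-of-envelope arithmetic: during decode, a dense model streams roughly its entire weight file once per generated token, so utilization ≈ file size × decode rate ÷ peak bandwidth. The sketch below uses the dense-model numbers from the tables above; this naive model lands slightly above the reported 79-83% range, since unit conventions (GiB vs GB) and how sustained vs best-case decode rates are counted shift the result by a few percent.

```shell
# Dense Qwen3.5-27B: 15.59 GiB weights, 32.5 t/s best RADV decode,
# 640 GB/s peak bandwidth. Each token reads ~all weight bytes once.
awk 'BEGIN {
  bytes = 15.59 * 2^30          # GGUF file size in bytes
  tps   = 32.5                  # decode rate, tokens/s
  bw    = 640e9                 # peak memory bandwidth, bytes/s
  printf "utilization = %.1f%%\n", 100 * bytes * tps / bw
}'
```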

Full Results

See BENCHMARK_COMPREHENSIVE_R9700.md for the complete benchmark with 50+ configurations tested.

License

Apache 2.0 (following the original model license).

Credits
