Qwopus3.5-9B-v3 – hipfire quantized

Jackrong/Qwopus3.5-9B-v3 quantized for hipfire, a Rust-native inference engine for AMD RDNA GPUs.

Files

File           Quant        Size    Speed (5700 XT)  Notes
qwopus-9b.hf4  HF4 (4-bit)  4.5 GB  ~41 tok/s        Faster, fits 8 GB VRAM
qwopus-9b.hf6  HF6 (6-bit)  6.9 GB  ~34 tok/s        Better quality, needs 8 GB
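The file sizes above follow from simple bits-per-weight arithmetic. A rough sketch, assuming ~9.0e9 parameters and decimal gigabytes (real files carry extra overhead for quantization scales and metadata, so they land a little higher):

```python
# Back-of-the-envelope size check for the quants in the table above.
# Assumes ~9.0e9 parameters; actual files include quantization scales,
# zero-points, and metadata, so real sizes are slightly larger.
def quant_size_gb(params: float, bits: int) -> float:
    return params * bits / 8 / 1e9  # bits -> bytes -> decimal GB

for name, bits in [("HF4", 4), ("HF6", 6)]:
    print(f"{name}: ~{quant_size_gb(9.0e9, bits):.2f} GB")
```

This predicts ~4.50 GB for HF4 and ~6.75 GB for HF6, consistent with the listed 4.5 GB and 6.9 GB once overhead is included.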

Usage

# Install hipfire
curl -L https://raw.githubusercontent.com/Kaden-Schutt/hipfire/master/scripts/install.sh | bash

# Pull and run
hipfire pull qwopus:9b        # HF4 (default)
hipfire pull qwopus:9b-hf6    # HF6 (higher quality)
hipfire run qwopus:9b

About

Qwopus3.5-9B-v3 is a Qwen3.5-9B finetune by Jackrong. It uses the same DeltaNet hybrid architecture (linear attention + full attention layers) and ChatML chat template as the base Qwen3.5 model.
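For reference, ChatML wraps each turn in `<|im_start|>role ... <|im_end|>` markers and leaves an open assistant turn for the model to complete. A minimal sketch of what a formatted prompt looks like (the `format_chatml` helper is illustrative, not a hipfire API):

```python
# Minimal ChatML prompt builder (illustrative helper, not part of hipfire).
# Each message becomes <|im_start|>role\ncontent<|im_end|>, and the prompt
# ends with an open assistant turn for the model to complete.
def format_chatml(messages: list[dict]) -> str:
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    return prompt + "<|im_start|>assistant\n"

print(format_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]))
```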

hipfire runs this model natively on AMD GPUs (RDNA 1-4) without ROCm runtime overhead. Kernels JIT-compile for the detected GPU architecture.

Hardware

Any AMD GPU with HIP SDK support:

  • RDNA 1: RX 5500/5600/5700
  • RDNA 2: RX 6600/6700/6800/6900
  • RDNA 3: RX 7600/7800/7900
  • RDNA 4: RX 9070
  • APUs: Strix Halo, Strix Point
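The JIT step above compiles kernels against the GPU's LLVM gfx target. A rough sketch of how generations map to example targets (the exact targets vary per card, and hipfire's internal mapping may differ; `kernel_cache_key` is a hypothetical illustration of keying a JIT cache on architecture):

```python
# Rough RDNA generation -> example LLVM gfx target (a sketch; exact
# targets vary per card, and hipfire's internal table may differ).
RDNA_GFX = {
    "RDNA 1": "gfx1010",  # e.g. RX 5700 XT (Navi 10)
    "RDNA 2": "gfx1030",  # e.g. RX 6800/6900 (Navi 21)
    "RDNA 3": "gfx1100",  # e.g. RX 7900 (Navi 31)
    "RDNA 4": "gfx1201",  # e.g. RX 9070 (Navi 48)
}

def kernel_cache_key(gen: str, kernel: str) -> str:
    # JIT caches typically key compiled kernels on (arch, kernel name).
    return f"{RDNA_GFX[gen]}:{kernel}"

print(kernel_cache_key("RDNA 3", "hf4_matmul"))  # -> gfx1100:hf4_matmul
```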

License

Same as upstream: Apache 2.0

Model tree for schuttdev/hipfire-qwopus-9b

Qwen/Qwen3.5-9B (base) → Jackrong/Qwopus3.5-9B-v3 (finetune) → this model