Qwen3-Bifrost-SOL-4B-GUFF

Qwen3 Bifrost SOL 4B is a specialized, fine-tuned variant of the Qwen3-4B base model crafted for blockchain coding and smart contract development within the Solana ecosystem. It was trained using the Solana Vanguard Challenge dataset, which comprises 1,000 in-depth questions covering a broad spectrum of topics: fundamental blockchain concepts, advanced on-chain programming in Rust (including security, state management, CPIs, and PDAs), as well as client-side integration in TypeScript with tools like @solana/web3.js, wallet adapters, and Metaplex for NFTs.

The model was trained over 11 hours and 22 minutes on an NVIDIA GeForce RTX 3090. Development is ongoing, with further fine-tuning, benchmarking, and future extensions planned (such as C# coverage via Solnet). Intended for research and development, Bifrost SOL 4B should not be deployed in production environments without thorough testing, as it may still produce unexpected or biased outputs despite alignment efforts using SFT and DPO.

Model Files

| File Name | Quant Type | File Size |
| --- | --- | --- |
| Qwen3-Bifrost-SOL-4B.BF16.gguf | BF16 | 8.05 GB |
| Qwen3-Bifrost-SOL-4B.F16.gguf | F16 | 8.05 GB |
| Qwen3-Bifrost-SOL-4B.F32.gguf | F32 | 16.1 GB |
| Qwen3-Bifrost-SOL-4B.Q2_K.gguf | Q2_K | 1.67 GB |
| Qwen3-Bifrost-SOL-4B.Q3_K_L.gguf | Q3_K_L | 2.24 GB |
| Qwen3-Bifrost-SOL-4B.Q3_K_M.gguf | Q3_K_M | 2.08 GB |
| Qwen3-Bifrost-SOL-4B.Q3_K_S.gguf | Q3_K_S | 1.89 GB |
| Qwen3-Bifrost-SOL-4B.Q4_K_M.gguf | Q4_K_M | 2.5 GB |
| Qwen3-Bifrost-SOL-4B.Q4_K_S.gguf | Q4_K_S | 2.38 GB |
| Qwen3-Bifrost-SOL-4B.Q5_K_M.gguf | Q5_K_M | 2.89 GB |
| Qwen3-Bifrost-SOL-4B.Q5_K_S.gguf | Q5_K_S | 2.82 GB |
| Qwen3-Bifrost-SOL-4B.Q6_K.gguf | Q6_K | 3.31 GB |
| Qwen3-Bifrost-SOL-4B.Q8_0.gguf | Q8_0 | 4.28 GB |
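These GGUF files can be run locally with llama.cpp. A minimal sketch, assuming a recent llama.cpp build (providing the `llama-cli` binary) and the `huggingface-cli` tool are installed; the quant choice and prompt here are only illustrative:

```shell
# Download a single quant (Q4_K_M is a common size/quality trade-off) from the Hub
huggingface-cli download prithivMLmods/Qwen3-Bifrost-SOL-4B-GUFF \
  Qwen3-Bifrost-SOL-4B.Q4_K_M.gguf --local-dir .

# Run a one-shot generation with llama.cpp
llama-cli -m Qwen3-Bifrost-SOL-4B.Q4_K_M.gguf \
  -p "Explain what a PDA is in a Solana program." -n 512
```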

Quants Usage

(Sorted by size, which does not necessarily reflect quality; IQ-quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):



Model Tree

Finetuned from: Qwen/Qwen3-4B