# Qwen3-4B Instruct Alpha

Finetuned from `unsloth/Qwen3-4B-Instruct-2507` using QLoRA + Unsloth. Finance-domain specialized: only finance- and investing-related examples were retained for training, and `<think>` blocks were stripped from all assistant turns.
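The `<think>`-stripping step can be sketched roughly as below; this is an illustrative helper, not the exact preprocessing script used for this model.

```python
import re

# Matches a Qwen3-style reasoning block plus any trailing whitespace.
# The tag name follows Qwen3's format; everything else is an assumption.
THINK_RE = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

def strip_think(messages):
    """Remove <think> blocks from assistant turns, leaving other turns untouched."""
    cleaned = []
    for msg in messages:
        if msg["role"] == "assistant":
            msg = {**msg, "content": THINK_RE.sub("", msg["content"]).lstrip()}
        cleaned.append(msg)
    return cleaned
```

Applied over every conversation in the dataset, this leaves only the final answers as training targets.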
## Key Differences from NoThink-V2

Unlike NoThink-V2, which trained on the full general-purpose dataset, Alpha applies a finance/investing keyword filter before training, resulting in a smaller but domain-focused dataset. It also incorporates a custom first-party dataset (`VladHong/Alpha-Instruct`) alongside the TeichAI sources.
## Training Data
| Dataset | Raw | After Finance Filter |
|---|---|---|
| TeichAI/gemini-3-pro-preview-high-reasoning-250x | 248 | 120 |
| TeichAI/gemini-3-pro-preview-high-reasoning-1000x | 1,018 | 671 |
| TeichAI/claude-4.5-opus-high-reasoning-250x | 250 | 159 |
| TeichAI/claude-sonnet-4.5-high-reasoning-250x | 247 | 91 |
| TeichAI/gpt-5.2-high-reasoning-250x | 249 | 242 |
| VladHong/Alpha-Instruct | 336 | 265 |
| **Total** | **2,348** | **1,548** |
Roughly 1,425 examples remained after MinHash deduplication (Jaccard threshold 0.8). The finance filter covers equities, bonds, funds, crypto, macro indicators, derivatives, retirement accounts, and more.
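The filter-then-deduplicate pipeline can be sketched as follows. The keyword list, shingle size, and permutation count here are assumptions for illustration, not the exact values used for this model.

```python
import hashlib

# Illustrative subset of finance/investing keywords (assumed, not the real list).
FINANCE_KEYWORDS = {
    "stock", "bond", "etf", "dividend", "portfolio", "crypto",
    "inflation", "options", "futures", "401k", "ira", "yield",
}

def is_finance(example):
    """Keep an example if any keyword appears in its conversation text."""
    text = " ".join(m["content"].lower() for m in example["messages"])
    return any(kw in text for kw in FINANCE_KEYWORDS)

def minhash_signature(text, num_perm=64):
    """Toy MinHash over word 3-shingles, using salted MD5 as the hash family."""
    words = text.lower().split()
    shingles = {" ".join(words[i:i + 3]) for i in range(max(1, len(words) - 2))}
    return [
        min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16) for s in shingles)
        for seed in range(num_perm)
    ]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Deduplication then drops any example whose estimated Jaccard similarity to an already-kept example is at or above the 0.8 threshold.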
## Training Details
| Parameter | Value |
|---|---|
| Method | QLoRA (4-bit NF4) + Unsloth |
| LoRA rank | 16 |
| LoRA alpha | 16 |
| Epochs | 1 |
| Steps | 179 |
| Batch size | 2 per device × 4 gradient accumulation = 8 effective |
| Learning rate | 1e-4 (cosine schedule) |
| Max seq length | 1024 |
| Optimizer | AdamW 8-bit |
| Hardware | Kaggle Tesla T4 (14.56 GB VRAM) |
| Training time | ~70.6 minutes |
| Trainable params | 33M / 4.05B (0.81%) |
| Peak VRAM | 5.47 GB (1.66 GB for LoRA) |
Training used `train_on_responses_only`, so the loss was computed on assistant completions only.
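The responses-only loss idea reduces to label masking: positions outside assistant turns get the ignore index so they contribute nothing to the cross-entropy loss. Unsloth's `train_on_responses_only` helper does this against the real chat template; the sketch below just shows the core mechanism.

```python
IGNORE_INDEX = -100  # PyTorch cross-entropy skips targets with this value

def mask_non_assistant(labels, assistant_mask):
    """Copy labels, replacing non-assistant positions with IGNORE_INDEX.

    labels: per-token target ids for the whole conversation
    assistant_mask: True where the token belongs to an assistant completion
    """
    return [
        lab if is_asst else IGNORE_INDEX
        for lab, is_asst in zip(labels, assistant_mask)
    ]
```

With this masking, prompts and system/user turns still provide context through attention, but gradients flow only from the model's own answers.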
## Files

- `*.gguf` — IQ4_XS quantized, ready for LM Studio / Ollama / llama.cpp
- `lora-adapter/` — raw LoRA weights for merging with the base model
## Usage (Ollama)

`ollama run VladHong/Qwen3-4B-Instruct-Alpha`
## License Note

The base model is Apache 2.0. The training data includes AI-generated content and a custom first-party dataset; review upstream dataset terms before commercial use.