---
library_name: transformers
license: apache-2.0
language:
- pt
- en
base_model:
- unsloth/Qwen3-4B-Base
pipeline_tag: text-generation
datasets:
- nvidia/OpenMathReasoning
---

# 🧠 DogeAI-v2.0-4B-Reasoning

**"The Small Model That Thinks Big."**

DogeAI-v2.0-4B-Reasoning is a high-efficiency model optimized for **Chain-of-Thought (CoT)** reasoning. Built by [AxionLab-Co](https://huggingface.co), it merges a specialized reasoning LoRA onto the powerful **Qwen3-4B-Base** architecture, delivering structured, step-by-step analytical capabilities in a compact 4B-parameter footprint.

### 🚀 Key Highlights

- **Architecture:** Decoder-only Transformer (Qwen3 Base).
- **Core Strength:** Multi-step logical reasoning and structured problem solving.
- **Hardware Friendly:** Optimized for local inference with low VRAM usage (see the quantized loading sketch below).
- **Final Merge:** The LoRA adapter is fully merged, so there is no adapter dependency; the model is ready for production use or GGUF conversion.
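
As a rough illustration of low-VRAM local inference, the sketch below loads the model in 4-bit precision. It assumes the optional `bitsandbytes` package is installed, and the specific quantization settings (`nf4`, bfloat16 compute) are illustrative defaults rather than an official recommendation.

```python
# Minimal 4-bit loading sketch for low-VRAM machines.
# Quantization settings are illustrative, not an official recommendation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "AxionLab-Co/DogeAI-v2.0-4B-Reasoning"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for stability
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```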

---

## 🎯 Use Cases

- **Complex Problem Solving:** Math, logic, and analytical tasks.
- **Detailed Explanations:** When you need the "why" and the "how", not just the "what".
- **Local Agents:** High-performance reasoning for edge devices and local LLM setups.

---

## 🛠️ Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "AxionLab-Co/DogeAI-v2.0-4B-Reasoning"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,  # Recommended for Qwen3
)

prompt = "Solve this step-by-step: If a train leaves at 2 PM at 60mph, and another..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.3,  # Lower temperature recommended for reasoning
    do_sample=True,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
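
For interactive local use, you may want to stream tokens as they are generated so the model's step-by-step reasoning is visible in real time. The sketch below reuses `model` and `tokenizer` from the Quick Start and relies only on `transformers.TextStreamer`; the prompt is just an example.

```python
from transformers import TextStreamer

# Print tokens to stdout as they are produced, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

inputs = tokenizer(
    "Explain step-by-step: why is 91 not a prime number?",
    return_tensors="pt",
).to(model.device)

_ = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.3,
    do_sample=True,
    streamer=streamer,
)
```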

---

## 🏗️ Training & Methodology

Our goal at AxionLab was to prioritize depth of thought over mere textual fluency.

- **Dataset:** A curated mix of synthetic CoT datasets and manually pre-processed logical reasoning prompts.
- **Fine-tuning:** Performed on Kaggle GPUs using PEFT (LoRA), with a focus on preserving the base model's knowledge while injecting structured logic (see the sketch after this list).
- **Optimization:** Mixed precision (fp16) with a final `merge_and_unload` for seamless deployment.
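
For readers who want to reproduce a similar pipeline, here is a minimal sketch of the LoRA fine-tune-and-merge workflow described above, using the `peft` library. The rank, alpha, target modules, and output path are placeholder values, not the actual training recipe.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base model in fp16 (mixed precision, as noted above).
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen3-4B-Base",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach a LoRA adapter; hyperparameters here are illustrative placeholders.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)

# ... fine-tune the adapter on the CoT data here (e.g. with transformers.Trainer) ...

# Final step: fold the adapter into the base weights so the released
# checkpoint carries no LoRA dependency.
merged = model.merge_and_unload()
merged.save_pretrained("DogeAI-v2.0-4B-Reasoning")
```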

---

## 📊 Evaluation Results

In qualitative testing, DogeAI-v2.0-4B shows:

- **Higher logical consistency** compared to the stock Qwen3-4B-Base.
- **Reduced hallucination** in multi-step word problems.
- **Structured verbosity:** it "thinks" before it answers.

---

## ⚠️ Limitations & Bias

- **Reasoning loops:** The model may occasionally over-explain simple tasks.
- **Safety:** No specific safety RLHF has been applied. Use external safety guardrails in production (a toy example follows this list).
- **Factuality:** While logic is improved, the model can still hallucinate complex facts.
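
As a purely illustrative example of wrapping generation with an external check (reusing `model` and `tokenizer` from the Quick Start), the sketch below screens output against a placeholder blocklist. Real deployments should use a dedicated moderation model or service instead of keyword matching.

```python
# Toy post-generation guardrail; the blocklist is a placeholder.
blocked_terms = ["<example-unsafe-term>"]

def guarded_generate(prompt: str, **gen_kwargs) -> str:
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, **gen_kwargs)
    text = tokenizer.decode(outputs[0], skip_special_tokens=True)
    # Withhold the response if it matches the (placeholder) blocklist.
    if any(term.lower() in text.lower() for term in blocked_terms):
        return "Response withheld by safety filter."
    return text

print(guarded_generate("Explain step-by-step why 17 is prime.", max_new_tokens=256))
```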

---

## 🤝 Contact & Collaboration

Developed with ❤️ by AxionLab-Co. We are an independent, community-driven lab focused on efficient AI.

- **Organization:** AxionLab-official
- **Feedback:** Open a Discussion on this repo!
- **Language Support:** Primarily English; Portuguese support is available but may vary in reasoning depth.