# VibeThinker-1.5B + rStar-Coder

A fine-tune of VibeThinker-1.5B on 80K Python samples from the microsoft/rStar-Coder dataset.
## Training

- LoRA: r=16, alpha=32
- 1 epoch, batch size 16
- Cost: roughly $3-4 on a single A100
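The hyperparameters above would correspond to a PEFT configuration along these lines. This is a sketch, assuming the Hugging Face `peft` library; the dropout value and target modules are assumptions, not stated in this card:

```python
from peft import LoraConfig

# Hypothetical reconstruction of the adapter config used for this fine-tune.
# Only r and lora_alpha are stated in the card; the rest are assumptions.
lora_config = LoraConfig(
    r=16,               # LoRA rank, as listed above
    lora_alpha=32,      # LoRA scaling factor, as listed above
    lora_dropout=0.05,  # assumed, not stated in this card
    task_type="CAUSAL_LM",
)
```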
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("prometheus04/vibethinker-1.5b-rstar-coder", torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("prometheus04/vibethinker-1.5b-rstar-coder")

# Generate a completion for a coding prompt
inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```