Models
- pantelis-ninja/unsloth-Qwen2.5-3B-Instruct-1_r-8_lr-0.0005_ms-25_gas-1_batch-size-8 (Text Generation, 3B)
- pantelis-ninja/unsloth-Qwen2.5-3B-Instruct-2_r-8_lr-0.0005_ms-25_gas-2_batch-size-8 (Text Generation, 3B)
- pantelis-ninja/unsloth-Qwen2.5-3B-Instruct-3_r-8_lr-0.0005_ms-25_gas-3_batch-size-8 (Text Generation, 3B)
- pantelis-ninja/unsloth-Qwen2.5-3B-Instruct-4_r-8_lr-0.0005_ms-25_gas-4_batch-size-8 (Text Generation, 3B)
Pantelis Sarantos (pantelis-ninja)

AI & ML interests
None yet
Recent Activity
- Liked a dataset 11 days ago: fka/prompts.chat
- Reacted to danielhanchen's post with 🚀 11 days ago:
You can now fine-tune Qwen3.5 for free with our notebook! 🔥
You just need 5GB VRAM to train Qwen3.5-2B LoRA locally!
Unsloth trains Qwen3.5 1.5x faster with 50% less VRAM.
GitHub: https://github.com/unslothai/unsloth
Guide: https://unsloth.ai/docs/models/qwen3.5/fine-tune
Qwen3.5-4B Colab: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_5_(4B)_Vision.ipynb
- Updated a model 12 months ago: pantelis-ninja/gemma-3-1b-it-metamath25k-lora

Organizations
None yet