danielhanchen posted an update 6 days ago
You can now fine-tune embedding models in our free Unsloth notebook! 🤗

Fine-tuning embedding models improves retrieval & RAG by aligning vectors to your domain-specific notion of similarity, improving search, clustering, and recommendations on your data.

⭐ Blog + Notebooks: https://unsloth.ai/docs/new/embedding-finetuning

Unsloth trains embedding models 1.8-3.3x faster with 20% less VRAM, 2x longer context & no accuracy loss vs. FA2 setups.

We'd like to thank Hugging Face and Unsloth contributor electroglyph for making this possible!
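To make "aligning vectors to your domain-specific notion of similarity" concrete: retrieval ranks documents by the similarity of their embedding vectors to the query's, usually cosine similarity, so fine-tuning that pulls related domain pairs closer together directly improves ranking. Below is a minimal, self-contained sketch of that scoring step with toy 3-dimensional vectors (not Unsloth's implementation; real embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (range -1..1)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": after domain fine-tuning, the query and the relevant
# document should point in more similar directions than unrelated ones.
query      = [0.9, 0.1, 0.0]
relevant   = [0.8, 0.2, 0.1]
irrelevant = [0.0, 0.2, 0.9]

assert cosine_similarity(query, relevant) > cosine_similarity(query, irrelevant)
```

Fine-tuning objectives for embedding models (e.g. contrastive losses over query/positive pairs) optimize exactly this quantity: raise the score of matching pairs, lower it for mismatches.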
danielhanchen posted an update 8 days ago
danielhanchen posted an update 13 days ago
You can now run reinforcement learning (RL) training with 7× longer context and no accuracy loss, via our new batching algorithms.

Long reasoning chains in RL are costly, but now we enable you to train gpt-oss with GRPO & reach 380K context on a 192GB GPU.

Blog: https://unsloth.ai/docs/new/grpo-long-context
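For readers new to GRPO (Group Relative Policy Optimization): its core idea is to sample several completions per prompt and score each one against its own group, using the group's mean reward as the baseline instead of a learned value model. A minimal sketch of that advantage computation (the standard GRPO formula, not Unsloth's batching code):

```python
from statistics import mean, stdev

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: normalize each sampled completion's
    reward by the mean and std of its own group of samples."""
    mu = mean(rewards)
    sigma = stdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# One prompt, four sampled completions with scalar rewards:
rewards = [1.0, 0.0, 0.5, 0.5]
advs = grpo_advantages(rewards)

# The best completion gets a positive advantage, the worst a negative one,
# and the advantages are centered around zero.
assert advs[0] > 0 > advs[1]
assert abs(sum(advs)) < 1e-6
```

Because the baseline comes from the group itself, memory scales with the sampled reasoning chains rather than a separate critic, which is why long-context batching matters so much for RL on reasoning models.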
danielhanchen posted an update 28 days ago
danielhanchen posted an update about 1 month ago
danielhanchen posted an update about 1 month ago
danielhanchen posted an update about 1 month ago
danielhanchen posted an update about 2 months ago
danielhanchen posted an update about 2 months ago
Mistral's new Ministral 3 models can now be run & fine-tuned locally! (16GB RAM)
Ministral 3 models have vision support and best-in-class performance for their size.
14B Instruct GGUF: unsloth/Ministral-3-14B-Instruct-2512-GGUF
14B Reasoning GGUF: unsloth/Ministral-3-14B-Reasoning-2512-GGUF

🐱 Step-by-step Guide: https://docs.unsloth.ai/new/ministral-3
All GGUF, BnB, FP8 etc. variant uploads: https://huggingface.co/collections/unsloth/ministral-3
danielhanchen posted an update 2 months ago
deep-cogito-team published an article 2 months ago: Introducing Cogito v2.1
danielhanchen posted an update 3 months ago