Chief Scientist at Shirova AI, focused on advancing open-source AI. Experienced in LLM fine-tuning, model architecture, and research, with a strong interest in building scalable and efficient models.
6 Open-Source Libraries to Fine-Tune LLMs

1. Unsloth
GitHub: https://github.com/unslothai/unsloth
- Fastest way to fine-tune LLMs locally
- Optimized for low VRAM (even laptops)
- Plug-and-play with Hugging Face models
3. TRL (Transformer Reinforcement Learning)
GitHub: https://github.com/huggingface/trl
- RLHF, DPO, PPO for LLM alignment
- Built on the Hugging Face ecosystem
- Essential for post-training optimization
4. DeepSpeed
GitHub: https://github.com/microsoft/DeepSpeed
- Train massive models efficiently
- Memory + speed optimization
- Industry standard for scaling
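DeepSpeed's memory savings are driven by a JSON/dict config. A sketch of a ZeRO stage-2 configuration with example values; you would pass this to `deepspeed.initialize` or to a Hugging Face `Trainer` via its `deepspeed` argument:

```python
# Illustrative DeepSpeed ZeRO stage-2 config (all values are examples)
ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 8,
    "fp16": {"enabled": True},  # mixed-precision training
    "zero_optimization": {
        "stage": 2,  # shard optimizer state and gradients across GPUs
        "offload_optimizer": {"device": "cpu"},  # spill optimizer to CPU RAM
    },
}
```

Stage 3 additionally shards the model parameters themselves, which is what enables training models far larger than a single GPU's memory.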
6. PEFT
GitHub: https://github.com/huggingface/peft
- Fine-tune with minimal compute
- LoRA, adapters, prefix tuning
- Best for cost-efficient training