6 Open-Source Libraries to Fine-Tune LLMs

1. Unsloth
GitHub: https://github.com/unslothai/unsloth
→ Fastest way to fine-tune LLMs locally
→ Optimized for low VRAM (even laptops)
→ Plug-and-play with Hugging Face models

3. TRL (Transformer Reinforcement Learning)
GitHub: https://github.com/huggingface/trl
→ RLHF, DPO, PPO for LLM alignment
→ Built on the Hugging Face ecosystem
→ Essential for post-training optimization

4. DeepSpeed
GitHub: https://github.com/microsoft/DeepSpeed
→ Train massive models efficiently
→ Memory + speed optimization
→ Industry standard for scaling

6. PEFT
GitHub: https://github.com/huggingface/peft
→ Fine-tune with minimal compute
→ LoRA, adapters, prefix tuning
→ Best for cost-efficient training
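To make the list concrete, here is a minimal sketch of how PEFT and TRL plug together for a LoRA fine-tune. The model and dataset names are illustrative placeholders (any Hugging Face causal LM and chat dataset should work), and the hyperparameters are common defaults, not recommendations:

```python
# Minimal sketch: LoRA fine-tuning with PEFT + TRL's SFTTrainer.
# Model and dataset below are placeholders, not part of the list above.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # example chat dataset

peft_config = LoraConfig(
    r=16,                          # LoRA rank: lower = fewer trainable params
    lora_alpha=32,                 # scaling factor for the LoRA updates
    target_modules="all-linear",   # attach adapters to every linear layer
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",     # any HF causal LM id works here
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="./lora-out", per_device_train_batch_size=2),
)
trainer.train()
```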
This is the best set of AI and ML books and a full guide to learning machine learning from the ground up. It is the study material I used myself, so I thought it would be helpful to share it with others. Like, share, and add it to your collection: Ujjwal-Tyagi/ai-ml-foundations-book-collection.
We are hiring at Shirova AI. We need AI researchers and engineers to work in our research lab. Shirova AI is a research lab based in India, so we can help our researchers relocate to nearby workspaces or let them work from home without ever coming to the lab. We're building our founding team, so the pay will be good and there is a lot to learn. Don't hesitate to email us at: careers@shirova.com
I am sharing my study material for AI & ML. These books are a real "bible" and give a very strong foundation. I have also included guidance, an introduction, and my master notes in the dataset repo card! I hope you find them helpful; if you have any queries, just start a discussion and I am always there to help you out! Ujjwal-Tyagi/ai-ml-foundations-book-collection
We should really have a release date range slider on the /models page. Tired of "trending/most downloaded" being the best way to sort and still seeing models from 2023 on the first page just because they're embedded in enterprise pipelines and get downloaded repeatedly. "Recently Created/Recently Updated" don't solve the discovery problem considering the amount of noise to sift through.
Slight caveat: Trending actually does have some recency bias, but it's not strong/precise enough.
Public reports allege that Anthropic gobbled up trillions of tokens of copyrighted material and public data to build their castle. 🏰📄 Now that they're sitting on top, they're begging for special laws to protect their profits while pulling the ladder up behind them. 🪜🚫
But the hypocrisy meter just broke! 📉 They are accusing Chinese labs like DeepSeek, MiniMax, and Kimi of "huge distillation attacks." The reality is that you can't just loot the entire internet's library, lock the door, and then sue everyone else for reading through the window. Stop trying to gatekeep tech you didn't own in the first place. Read the complete article on it: https://huggingface.co/blog/Ujjwal-Tyagi/the-dark-underbelly-of-anthropic
Qwen 3.5 is here! It supports 1M context length by default and delivers much better performance, competitive with Claude Opus 4.6: Qwen/Qwen3.5-397B-A17B. Here is its GGUF: unsloth/Qwen3.5-397B-A17B-GGUF. Follow me and turn on notifications for the latest news!
🧬 Experimenting with "Dynamic Chaos" in Tamil SLMs
Hi everyone! I just published a new experimental study on Small Language Model (SLM) resilience.
I took the Qwen2.5-0.5B model and put it through a "Chaos Phase" to see how much of its weights a tiny model can lose before its understanding of classical Tamil grammar breaks.
Key highlights of the study:
Target Data: Fine-tuned on the Thirukkural (1,330 couplets + modern explanations).
The Chaos Step: Applied 20% random weight pruning, but implemented "Layer Protection" for the token embeddings and LM head to keep the characters readable.
Compression: 4-bit (Q4_K_M) quantization for extreme efficiency.
Result: A surrealist classical Tamil model that is ultra-light (~300MB) and ultra-fast!
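If you want to try the chaos step on another model, here's a rough sketch of how it works (simplified: the module-name matching and masking are approximate, not the exact script from the run):

```python
# Rough sketch of the "Chaos Step": 20% random weight pruning with the
# token embeddings and LM head protected so the vocabulary stays intact.
# Module names below are approximate (they match Qwen2-style checkpoints).
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")

PROTECTED = ("embed_tokens", "lm_head")  # "Layer Protection": skip these
PRUNE_FRACTION = 0.20                    # 20% of weights zeroed at random

with torch.no_grad():
    for name, param in model.named_parameters():
        if any(p in name for p in PROTECTED) or param.dim() < 2:
            continue  # leave embeddings, LM head, biases, and norms untouched
        mask = torch.rand_like(param) < PRUNE_FRACTION
        param[mask] = 0.0  # unstructured random pruning

model.save_pretrained("qwen-0.5b-chaos")
```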
There is a new open-source music generation model called HeartMuLa. It offers strong, competitive performance compared to Suno and supports English, Chinese, Japanese, Korean, and Spanish. It is optimized to run easily on RTX GPUs and other consumer-grade hardware.
Model: HeartMuLa/HeartMuLa-oss-3B
Code: https://github.com/HeartMuLa/heartlib
So, Korean labs are also making great progress behind the Chinese ones. Here are two of their open-source AI models that are actually good at coding: upstage/Solar-Open-100B and skt/A.X-K1
I’m excited to release hawky-ai-Qwen3-0.6B-Marketing-MoT, a specialized SLM designed for deep strategic reasoning in performance marketing.
While small at 0.6B parameters, this model punches way above its weight class by utilizing a Mixture of Thoughts (MoT) framework. It doesn't just give you an answer; it thinks through the logic of Meta Ads scaling, GA4 attribution, and unit economics before providing a strategic recommendation.
Key Features:
Thinking-First: Trained on 1,500+ critical thinking scenarios.
MoT Framework: 5 distinct reasoning styles (Linear, Exploratory, Critical, Deconstructive, Analogical).
SLM Speed: Perfect for low-latency, high-precision marketing audits.
Check it out on Hugging Face: 🔗 Sri-Vigneshwar-DJ/hawky-ai-Qwen3-0.6B-Marketing-MoT
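Quick-start sketch, assuming the standard transformers chat pipeline works for this checkpoint (the prompt is just an example):

```python
# Hedged usage sketch for the model above; assumes a standard chat-format
# causal LM. The prompt and generation settings are illustrative only.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Sri-Vigneshwar-DJ/hawky-ai-Qwen3-0.6B-Marketing-MoT",
)

messages = [
    {"role": "user",
     "content": "My Meta Ads CPA doubled after scaling budget 3x. What should I check first?"}
]
out = pipe(messages, max_new_tokens=512)
print(out[0]["generated_text"][-1]["content"])  # the assistant's reply
```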
I am very excited to see the release of nyuuzyou/gitee-code. This is exactly what I have been looking for. Thank you to @nyuuzyou for his hard work on this.
I’m looking for AI engineers and researchers to join my company as part of the core team. We’ll be working on cutting-edge research and hands-on implementation across LLMs and related systems. I’m especially interested in founding engineers for my AI startup who want to build from the ground up and shape both the product and the research direction. If this sounds interesting to you, message me on Discord (username: "ujjwal_tyagi.shirova"). Please attach your resume and details of your open-source projects (if any are related to LLMs) on Discord; avoid sharing them here as a reply to this post.
Introducing Hawky-AI H1 4B PM: The First Open-Source LLM for Performance Marketing 🎯
Hey HF Community! 👋
Just released the first LLM fine-tuned specifically for Performance Marketing.

What is it? Gemma 3 4B distilled from Claude Opus 4.5 with expert-level marketing knowledge.

Covers:
📱 Meta Ads (campaign structure, bidding, scaling, creative fatigue)
🔍 Google Ads (Quality Score, Performance Max, lead gen)
📊 Measurement (ROAS vs MER, incrementality, LTV:CAC)
🎨 Creative Strategy (hook rates, A/B testing, funnel creative)

Why we built it: Generic LLMs say "optimize your targeting" — not helpful. This model gives specific frameworks like "frequency at 4.5 + CTR drop = creative fatigue, here's the fix..."

Technical:
Base: Gemma 3 4B
Method: QLoRA (r=64)
Teacher: Claude Opus 4.5
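For anyone curious, a sketch of what a QLoRA (r=64) setup like this typically looks like; everything beyond r=64 (checkpoint id, alpha, target modules) is an assumption on top of the details above, not the exact training config:

```python
# Sketch of a QLoRA (r=64) setup: 4-bit NF4 base model + LoRA adapters.
# Checkpoint id is a guess; the post only says "Gemma 3 4B".
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # QLoRA's NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-4b-it", quantization_config=bnb_config
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=64,                         # the rank stated in the post
    lora_alpha=128,               # assumption: alpha = 2*r is a common choice
    target_modules="all-linear",  # assumption: adapters on all linear layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```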