Transformers v5 just landed! 🚀 It significantly unifies and reduces modeling code across architectures, while opening the door to a whole new class of performance optimizations.
My favorite new feature? 🤔 The new dynamic weight loader + converter. Here’s why 👇
Over the last few months, the core Transformers maintainers built an incredibly fast weight loader, capable of converting tensors on the fly while loading them in parallel threads. This means we’re no longer constrained by how parameters are laid out inside the safetensors weight files.
In practice, this unlocks two big things:
- Much more modular modeling code. You can now clearly see how architectures build on top of each other (DeepSeek v2 → v3, Qwen v2 → v3 → MoE, etc.). This makes shared bottlenecks obvious and lets us optimize the right building blocks once, for all model families.
- Performance optimizations beyond what torch.compile can do alone. torch.compile operates on the computation graph, but it can't change parameter layouts. With the new loader, we can restructure weights at load time: fusing MoE expert projections, merging attention QKV projections, and enabling more compute-dense kernels that simply weren't possible before. See the sketch below for the QKV idea.
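To make the QKV merging concrete, here's a toy sketch of load-time weight fusion. This is my own illustration with made-up shapes, not the actual loader code:

```python
import torch

# Toy illustration of load-time QKV fusion (not the actual Transformers
# loader): three attention projections are concatenated into one fused
# matrix so a single large GEMM replaces three smaller ones.
hidden = 64
sd = {  # stand-in for tensors streamed from a safetensors checkpoint
    "q_proj.weight": torch.randn(hidden, hidden),
    "k_proj.weight": torch.randn(hidden, hidden),
    "v_proj.weight": torch.randn(hidden, hidden),
}

# Fuse along the output dimension at load time -> shape (3 * hidden, hidden)
fused = torch.cat(
    [sd["q_proj.weight"], sd["k_proj.weight"], sd["v_proj.weight"]], dim=0
)

x = torch.randn(8, hidden)                 # a batch of activations
q, k, v = (x @ fused.T).chunk(3, dim=-1)   # one matmul instead of three

# Sanity check: matches the unfused computation
assert torch.allclose(q, x @ sd["q_proj.weight"].T, atol=1e-5)
```

Because the restructuring happens while the weights are being loaded, the modeling code only ever sees the fused layout, and torch.compile gets a denser graph to work with.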
Personally, I'm honored to have contributed in this direction, including the work on optimizing MoE implementations and making modeling code more torch-exportable, so these optimizations can be ported cleanly across runtimes.
Overall, Transformers v5 is a strong signal of where the community and industry are converging: Modularity and Performance, without sacrificing Flexibility.
Transformers v5 makes its signature from_pretrained an entrypoint where you can mix and match:
- Parallelism
- Quantization
- Custom kernels
- Flash/Paged attention
- Continuous batching
- ...
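For instance, here's a minimal sketch of mixing a few of these together. The checkpoint name is just an example, and the exact kwargs and supported combinations depend on your transformers version and hardware:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Illustrative mix-and-match from_pretrained call; the checkpoint is an
# example and supported kwarg combinations depend on your setup.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B-Instruct",
    device_map="auto",                        # dispatch across devices
    attn_implementation="flash_attention_2",  # or "sdpa", "eager"
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # quantization
)
```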
After 2 months of refinement, I'm happy to announce that a lot of Transformers' modeling code is now significantly more torch-compile & export-friendly 🔥
Why it had to be done 👇 PyTorch's Dynamo compiler is increasingly becoming the default interoperability layer for ML systems. Anything that relies on torch.export or torch.compile, from model optimization to cross-framework integrations, benefits directly when models can be captured as a single dynamo-traced graph!
Transformers models are now easier to: ⚙️ Compile end-to-end with torch.compile backends 📦 Export reliably via torch.export and torch.onnx.export 🚀 Deploy to ONNX / ONNX Runtime, Intel Corporation's OpenVINO, NVIDIA AutoDeploy (TRT-LLM), AMD's Quark, Meta's Executorch and more hardware-specific runtimes.
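As a quick smoke test, here's a minimal capture sketch. The checkpoint is just an example, and export arguments may differ per model and version:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example checkpoint; any decoder-only model should work similarly.
name = "HuggingFaceTB/SmolLM2-135M"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()
inputs = tok("Hello world", return_tensors="pt")

# Full-graph compile: any remaining dynamo graph break would error out here.
compiled = torch.compile(model, fullgraph=True)
with torch.no_grad():
    compiled(**inputs)

# Ahead-of-time capture for downstream runtimes (ONNX, ExecuTorch, ...).
exported = torch.export.export(model, args=(), kwargs=dict(inputs))
print(type(exported))  # torch.export.ExportedProgram
```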
This work aims at unblocking entire TorchDynamo-based toolchains that rely on exporting Transformers across runtimes and accelerators.
We are doubling down on Transformers' commitment to be a first-class citizen of the PyTorch ecosystem: more exportable, more optimizable, and easier to deploy everywhere.
There are definitely some edge cases we haven't addressed yet, so don't hesitate to try compiling / exporting your favorite transformers and to open issues / PRs.
PR in the comments! More updates coming soon!
🚀 Optimum: The Last v1 Release 🚀
Optimum v1.27 marks the final major release in the v1 series. As we close this chapter, we're laying the groundwork for a more modular and community-driven future:
- Optimum v2: A lightweight core package for porting Transformers, Diffusers, or Sentence-Transformers to specialized AI hardware/software/accelerators.
- Optimum‑ONNX: A dedicated package where the ONNX/ONNX Runtime ecosystem lives and evolves, faster-moving and decoupled from the Optimum core.
🎯 Why this matters:
- A clearer governance path for ONNX, fostering stronger community collaboration and improved developer experience.
- Innovation at a faster pace in a more modular, open-source environment.
💡 What this means:
- More transparency, broader participation, and faster development driven by the community and key actors in the ONNX ecosystem (PyTorch, Microsoft, Joshua Lochner 👀, ...)
- A cleaner, more maintainable core Optimum, focused on extending HF libraries to specialized AI hardware/software/accelerator tooling, used by our partners (Intel Corporation, Amazon Web Services (AWS), AMD, NVIDIA, FuriosaAI, ...)
🛠️ Major updates I worked on in this release:
✅ Added support for Transformers v4.53 and SmolLM3 in ONNX/ONNX Runtime.
✅ Fixed batched inference/generation for all supported decoder model architectures (LLMs).
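For instance, batched generation through ONNX Runtime now works out of the box for decoder models. A minimal sketch, where the checkpoint and options are illustrative:

```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForCausalLM

# Example checkpoint; export=True converts the PyTorch weights to ONNX on the fly.
name = "HuggingFaceTB/SmolLM3-3B"
tok = AutoTokenizer.from_pretrained(name)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token
tok.padding_side = "left"  # left padding is required for batched decoding

model = ORTModelForCausalLM.from_pretrained(name, export=True)

batch = tok(
    ["Hello, my name is", "The capital of France is"],
    return_tensors="pt",
    padding=True,
)
out = model.generate(**batch, max_new_tokens=20)
print(tok.batch_decode(out, skip_special_tokens=True))
```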
✨ Big shoutout to @echarlaix for leading the refactoring work that cleanly separated ONNX exporter logic and enabled the creation of Optimum‑ONNX.
If you've followed robotics over the past 18 months, you've likely noticed that it's increasingly becoming the next frontier AI will unlock.
At Hugging Face—in robotics and across all AI fields—we believe in a future where AI and robots are open-source, transparent, and affordable; community-built and safe; hackable and fun. We've had so much mutual understanding and passion working with the Pollen Robotics team over the past year that we decided to join forces!
You can already find our open-source humanoid robot platform Reachy 2 on the Pollen website, and the Pollen community and team here on the Hub.
1. OCR a grocery list or train a titan while sipping coffee? ☕
2. Camera Snap 📷: Capture life's chaos: your cat's face or that weird receipt. Proof you're a spy!
3. OCR 🔍: PDFs beg for mercy as GPT-4o extracts text.
4. Image Gen 🎨: Prompt "neon superhero me"
5. PDF 📄: Double-page OCR or single-page sniping
The new DeepSite space is really insane for vibe coders: enzostvs/deepsite
With the wave of vibe-coding-optimized LLMs like the latest open-source DeepSeek model (V3-0324), you can basically prompt out of the box and create any app or game in one shot.
It feels so powerful to me: no more complex frameworks or under-the-hood prompt engineering to get a working text-to-app tool.
AI is eating the world and *open-source* AI is eating AI itself!
PS: even more meta, the DeepSite app and the DeepSeek model are both fully open source => time to start recursively improving?
PPS: you still need some inference hosting unless you're running the 671B-param model at home, so check out the very nice list of HF Inference Providers for this model: deepseek-ai/DeepSeek-V3-0324