Abdennacer Badaoui (badaoui)
Building on HF
96 followers · 243 following
abdennacer0 · Abdennacer-Badaoui · abdennacer-badaoui-412a02224
AI & ML interests
None yet
Recent Activity
Reacted to IlyasMoutawwakil's post with 🔥, about 5 hours ago:
Transformers v5 just landed! 🚀 It significantly unifies and reduces modeling code across architectures, while opening the door to a whole new class of performance optimizations. My favorite new feature? 🤔 The new dynamic weight loader + converter. Here's why 👇

Over the last few months, the core Transformers maintainers built an incredibly fast weight loader, capable of converting tensors on the fly while loading them in parallel threads. This means we're no longer constrained by how parameters are laid out inside the safetensors weight files. In practice, this unlocks two big things:

- Much more modular modeling code. You can now clearly see how architectures build on top of each other (DeepSeek v2 → v3, Qwen v2 → v3 → MoE, etc.). This makes shared bottlenecks obvious and lets us optimize the right building blocks once, for all model families.
- Performance optimizations beyond what torch.compile can do alone. torch.compile operates on the computation graph, but it can't change parameter layouts. With the new loader, we can restructure weights at load time: fusing MoE expert projections, merging attention QKV projections, and enabling more compute-dense kernels that simply weren't possible before.

Personally, I'm honored to have contributed in this direction, including the work on optimizing MoE implementations and making modeling code more torch-exportable, so these optimizations can be ported cleanly across runtimes.

Overall, Transformers v5 is a strong signal of where the community and industry are converging: Modularity and Performance, without sacrificing Flexibility. Transformers v5 makes its signature from_pretrained an entrypoint where you can mix and match:

- Parallelism
- Quantization
- Custom kernels
- Flash/Paged attention
- Continuous batching
- ...

Kudos to everyone involved! I highly recommend the release notes and the blog post:

Release notes: https://github.com/huggingface/transformers/releases/tag/v5.0.0
Blog post: https://huggingface.co/blog/transformers-v5
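To make the load-time restructuring concrete, here is a minimal, self-contained Python sketch of the QKV-merging idea the post describes: three separate projection weights are concatenated into one fused layout while the state dict is being loaded. Every name here (FusedQKV, convert_qkv, the q_proj/k_proj/v_proj keys) is a hypothetical illustration, not the actual Transformers v5 internals.

```python
import torch
import torch.nn as nn


class FusedQKV(nn.Module):
    """Toy attention projection computing Q, K, V in one fused matmul."""

    def __init__(self, hidden: int):
        super().__init__()
        self.qkv = nn.Linear(hidden, 3 * hidden, bias=False)

    def forward(self, x: torch.Tensor):
        # One compute-dense matmul, then split back into three tensors.
        return self.qkv(x).chunk(3, dim=-1)


def convert_qkv(state_dict: dict) -> dict:
    """Load-time converter: rewrite separate q/k/v weights into the fused layout."""
    fused = torch.cat(
        [
            state_dict.pop("q_proj.weight"),
            state_dict.pop("k_proj.weight"),
            state_dict.pop("v_proj.weight"),
        ],
        dim=0,  # nn.Linear stores weights as (out_features, in_features)
    )
    state_dict["qkv.weight"] = fused
    return state_dict


hidden = 8
# Stand-in for what sits in a safetensors file: three separate projections.
checkpoint = {
    "q_proj.weight": torch.randn(hidden, hidden),
    "k_proj.weight": torch.randn(hidden, hidden),
    "v_proj.weight": torch.randn(hidden, hidden),
}
reference = {name: w.clone() for name, w in checkpoint.items()}

model = FusedQKV(hidden)
model.load_state_dict(convert_qkv(checkpoint))

x = torch.randn(2, hidden)
q, k, v = model(x)
# The fused projection reproduces the three independent matmuls.
assert torch.allclose(q, x @ reference["q_proj.weight"].T, atol=1e-6)
assert torch.allclose(k, x @ reference["k_proj.weight"].T, atol=1e-6)
assert torch.allclose(v, x @ reference["v_proj.weight"].T, atol=1e-6)
```

The payoff is the one the post points at: a single larger matmul replaces three smaller kernel launches, a layout change torch.compile cannot make on its own because it sees only the computation graph, not the parameter storage.

And a hedged sketch of the mix-and-match entrypoint. Kwargs such as dtype, device_map, and attn_implementation do exist in recent Transformers releases; treat the checkpoint name and this exact combination as illustrative rather than a definitive v5 recipe.

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B",         # example checkpoint; any causal LM works
    dtype=torch.bfloat16,        # weight dtype chosen at load time
    device_map="auto",           # device placement/sharding (needs accelerate)
    attn_implementation="sdpa",  # attention backend, e.g. "sdpa" or "flash_attention_2"
)
```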
Upvoted an article, about 1 month ago: "Shadow AI - Where are the CIOs?"
Liked a Space, about 2 months ago: transformers-community/circle-ci-viz
badaoui's Spaces (2)
Tcid 👁 · Running · A dashboard
Test 🌖 · Sleeping · for testing