AI & ML interests

The AI community building the future.

Recent Activity

sayakpaul updated a dataset about 17 hours ago: huggingface/diffusers-metadata
alvarobartt updated a dataset about 22 hours ago: huggingface/DEH-image-scan-data
lewtun submitted a paper about 23 hours ago: Single-minus gluon tree amplitudes are nonzero

Articles

Billing API: #36 opened 4 days ago by andrewday
albertvillanova posted an update 3 days ago
5 years already working on democratizing AI 🤗
Grateful to be part of such an awesome team making it happen every day.
juanjucm posted an update 17 days ago
Last week, zai-org dropped zai-org/GLM-4.7-Flash. Now, we bring it to Microsoft Foundry!

- 🏆 30B-A3B MoE, the strongest model in the 30B class. It excels at coding tasks, agentic workflows and reasoning.
- 🤏🏻 Lighter version of its 358B big brother, balancing performance and efficiency.

Not light enough for you? We are also adding unsloth/GLM-4.7-Flash-GGUF to the catalog, with GPU and CPU support powered by llama.cpp 🔥

Go join the hype and deploy them from the Hugging Face collection on Microsoft Foundry!
evalstate posted an update 17 days ago
Hugging Face MCP Server v0.3.1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Streamable HTTP used for Gradio Connectivity
- SSE Transport (as Server) removed
- Proxy Configuration added for launch of sub-agent tools

danieldk posted an update 18 days ago
kernels 0.12 is out! 🎉

Changes:

* Support for kernel version branches to gracefully roll out kernel API changes.
* Support for PyTorch 2.10.
* kernel-builder is now merged into the kernels repo.
* Initial support for standardized kernel benchmarks.

https://github.com/huggingface/kernels/releases/tag/v0.12.0
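For context, fetching and calling a Hub kernel stays a one-liner. A minimal sketch, following the library's README example (the kernels-community/activation repo and a CUDA device are assumptions):

```python
# Minimal sketch of loading a Hub kernel with `kernels`.
# Assumes a CUDA device and the kernels-community/activation repo.
import torch
from kernels import get_kernel

# Downloads the compiled kernel for the current platform from the Hub.
activation = get_kernel("kernels-community/activation")

x = torch.randn((10, 10), dtype=torch.float16, device="cuda")
y = torch.empty_like(x)
activation.gelu_fast(y, x)  # writes the activation into y in place
print(y)
```
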
IlyasMoutawwakil posted an update 18 days ago
Transformers v5 just landed! 🚀
It significantly unifies and reduces modeling code across architectures, while opening the door to a whole new class of performance optimizations.

My favorite new feature? 🤔
The new dynamic weight loader + converter. Here’s why 👇

Over the last few months, the core Transformers maintainers built an incredibly fast weight loader, capable of converting tensors on the fly while loading them in parallel threads. This means we’re no longer constrained by how parameters are laid out inside the safetensors weight files.

In practice, this unlocks two big things:
- Much more modular modeling code. You can now clearly see how architectures build on top of each other (DeepSeek v2 → v3, Qwen v2 → v3 → MoE, etc.). This makes shared bottlenecks obvious and lets us optimize the right building blocks once, for all model families.
- Performance optimizations beyond what torch.compile can do alone. torch.compile operates on the computation graph, but it can’t change parameter layouts. With the new loader, we can restructure weights at load time: fusing MoE expert projections, merging attention QKV projections, and enabling more compute-dense kernels that simply weren’t possible before.
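
To make the second point concrete, here is a toy sketch in plain PyTorch (illustrative shapes only, not the actual loader code) of why merging attention QKV projections at load time helps: three small matmuls become one compute-dense one.

```python
# Toy illustration of load-time QKV fusion; not the actual loader code.
import torch

hidden = 64
q_w = torch.randn(hidden, hidden)
k_w = torch.randn(hidden, hidden)
v_w = torch.randn(hidden, hidden)

# Restructure at load time: concatenate the three projection weights
# so a single matmul produces Q, K and V together.
qkv_w = torch.cat([q_w, k_w, v_w], dim=0)

x = torch.randn(2, 8, hidden)
q, k, v = (x @ qkv_w.T).split(hidden, dim=-1)

# Same result as the three separate projections:
assert torch.allclose(q, x @ q_w.T, atol=1e-5)
assert torch.allclose(k, x @ k_w.T, atol=1e-5)
assert torch.allclose(v, x @ v_w.T, atol=1e-5)
```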

Personally, I'm honored to have contributed in this direction, including the work on optimizing MoE implementations and making modeling code more torch-exportable, so these optimizations can be ported cleanly across runtimes.

Overall, Transformers v5 is a strong signal of where the community and industry are converging: Modularity and Performance, without sacrificing Flexibility.

Transformers v5 makes its signature from_pretrained an entrypoint where you can mix and match:
- Parallelism
- Quantization
- Custom kernels
- Flash/Paged attention
- Continuous batching
- ...
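
As a rough sketch of what that looks like in practice (the checkpoint name is an arbitrary example, and which kwargs apply depends on your hardware and installed extras):

```python
# Sketch of mixing from_pretrained options; the checkpoint is an example.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B-Instruct",   # example checkpoint
    dtype=torch.bfloat16,           # target dtype for the converted weights
    device_map="auto",              # dispatch across available devices
    attn_implementation="sdpa",     # or e.g. "flash_attention_2" if installed
)
```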

Kudos to everyone involved! I highly recommend:
Release notes: https://github.com/huggingface/transformers/releases/tag/v5.0.0
Blog post: https://huggingface.co/blog/transformers-v5
IlyasMoutawwakil posted an update 23 days ago
After 2 months of refinement, I'm happy to announce that a lot of Transformers' modeling code is now significantly more torch-compile & export-friendly 🔥

Why it had to be done 👇
PyTorch's Dynamo compiler is increasingly becoming the default interoperability layer for ML systems. Anything that relies on torch.export or torch.compile, from model optimization to cross-framework integrations, benefits directly when models can be captured as a single dynamo-traced graph!

Transformers models are now easier to:
⚙️ Compile end-to-end with torch.compile backends
📦 Export reliably via torch.export and torch.onnx.export
🚀 Deploy to ONNX / ONNX Runtime, Intel Corporation's OpenVINO, NVIDIA AutoDeploy (TRT-LLM), AMD's Quark, Meta's Executorch and more hardware-specific runtimes.

This work aims at unblocking entire TorchDynamo-based toolchains that rely on exporting Transformers across runtimes and accelerators.

We are doubling down on Transformers' commitment to be a first-class citizen of the PyTorch ecosystem: more exportable, more optimizable, and easier to deploy everywhere.

There are definitely some edge cases that we still haven't addressed, so don't hesitate to try compiling / exporting your favorite transformers and to open issues / PRs.
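
If you want to try this yourself, a minimal sketch (using gpt2 as an arbitrary small checkpoint) is to ask Dynamo for a full graph, which errors out on any remaining graph break:

```python
# Minimal graph-capture check; "gpt2" is just a small example checkpoint.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
input_ids = torch.randint(0, model.config.vocab_size, (1, 16))

# fullgraph=True raises on any graph break instead of silently falling
# back to eager, so it surfaces exactly the edge cases mentioned above.
compiled = torch.compile(model, fullgraph=True)
with torch.no_grad():
    out = compiled(input_ids=input_ids)
```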

PR in the comments! More updates coming soon!
tomaarsen posted an update 2 months ago
🐦‍🔥 I've just published Sentence Transformers v5.2.0! It introduces multi-processing for CrossEncoder (rerankers), multilingual NanoBEIR evaluators, similarity score outputs in mine_hard_negatives, Transformers v5 support and more. Details:

- CrossEncoder multi-processing: Similar to SentenceTransformer and SparseEncoder, you can now use multi-processing with CrossEncoder rerankers. Useful for multi-GPU and CPU settings, and simple to configure: just device=["cuda:0", "cuda:1"] or device=["cpu"]*4 on the model.predict or model.rank calls (see the sketch after this list).

- Multilingual NanoBEIR Support: You can now use community translations of the tiny NanoBEIR retrieval benchmark instead of only the English one, by passing dataset_id, e.g. dataset_id="lightonai/NanoBEIR-de" for the German benchmark.

- Similarity scores in Hard Negatives Mining: When mining for hard negatives to create a strong training dataset, you can now pass output_scores=True to get similarity scores returned. This can be useful for some distillation losses!

- Transformers v5: This release works with both Transformers v4 and the upcoming v5. In the future, Sentence Transformers will only work with Transformers v5, but not yet!

- Python 3.9 deprecation: Now that Python 3.9 has lost security support, Sentence Transformers no longer supports it.
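
As a small sketch of the first item above (the model id is just an example reranker; any CrossEncoder checkpoint works):

```python
# Sketch of CrossEncoder multi-processing, per the release notes;
# device=["cpu"] * 2 spreads prediction over two CPU worker processes.
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L6-v2")  # example reranker
pairs = [
    ("how do I install python", "Download an installer from python.org."),
    ("how do I install python", "The Eiffel Tower is in Paris."),
]
scores = model.predict(pairs, device=["cpu"] * 2)
print(scores)  # one relevance score per pair; higher = more relevant
```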

Check out the full changelog for more details: https://github.com/huggingface/sentence-transformers/releases/tag/v5.2.0

I'm quite excited about what's coming. There's a huge draft PR with a notable refactor in the works that should bring some exciting support. Specifically, better multimodality, rerankers, and perhaps some late interaction in the future!
angt posted an update 2 months ago
installama.sh at the TigerBeetle 1000x World Tour!

Last week I had the chance to give a short talk during the TigerBeetle 1000x World Tour (organized by @jedisct1 👏), a fantastic event celebrating high-performance engineering and the people who love pushing systems to their limits!

In the talk, I focused on the CPU and Linux side of things, with a simple goal in mind: making the installation of llama.cpp instant, automatic, and optimal, no matter your OS or hardware setup.

For the curious, here are the links worth checking out:
Event page: https://tigerbeetle.com/event/1000x
GitHub repo: https://github.com/angt/installama.sh
Talk: https://youtu.be/pg5NOeJZf0o?si=9Dkcfi2TqjnT_30e

More improvements are coming soon. Stay tuned!
angt posted an update 2 months ago
I'm excited to share that https://installama.sh is up and running! 🚀

On Linux / macOS / FreeBSD it is easier than ever:
curl https://installama.sh | sh


And Windows just joined the party 🥳
irm https://installama.sh | iex

Stay tuned for new backends on Windows!
angt posted an update 3 months ago
🚀 installama.sh update: Vulkan & FreeBSD support added!

The fastest way to install and run llama.cpp has just been updated!

We are expanding hardware and OS support to make local AI even more accessible. This includes:

🌋 Vulkan support for Linux on x86_64 and aarch64.
😈 FreeBSD support (CPU backend) on x86_64 and aarch64 too.
✨ Lots of small optimizations and improvements under the hood.

Give it a try right now:
curl angt.github.io/installama.sh | MODEL=unsloth/Qwen3-4B-GGUF:Q4_0 sh