Collections

Discover the best community collections!

Collections trending this week
100 Coder/Programming - MOE, Reasoning, Reg, Imatrix, Fused.
Models (0.8B to 87B) in regular, "reasoning", "Brainstorm", MOE (1x to 8x / 128 experts), and expanded to create better and stronger code, faster.
Gemma 3
All versions of Google's new multimodal models including QAT in 1B, 4B, 12B, and 27B sizes. In GGUF, dynamic 4-bit and 16-bit formats.
Qwen2.5
Qwen2.5 language models, including pretrained and instruction-tuned variants in 7 sizes: 0.5B, 1.5B, 3B, 7B, 14B, 32B, and 72B.
Inference Optimized Checkpoints (with Model Optimizer)
A collection of generative models quantized and optimized for inference with Model Optimizer.
V-JEPA 2
A frontier video understanding model developed by FAIR, Meta, which extends the pretraining objectives of V-JEPA (https://ai.meta.com/blog/v-jepa-yann).
TimesFM Release
TimesFM (Time Series Foundation Model) is a pretrained time-series foundation model developed by Google Research for time-series forecasting.
Llama 3.1
This collection hosts the transformers and original repos of the Llama 3.1, Llama Guard 3, and Prompt Guard models.