| modelId | author | last_modified (UTC) | downloads | likes | library_name | tags | pipeline_tag | createdAt (UTC) | card | entities |
|---|---|---|---|---|---|---|---|---|---|---|
AllThingsIntel/Apollo-V0.1-4B-Thinking | AllThingsIntel | 2025-11-02T01:26:06Z | 16,634 | 39 | null | [
"safetensors",
"gguf",
"qwen3",
"AllThingsIntel",
"Apollo",
"Thinking",
"en",
"base_model:Qwen/Qwen3-4B-Thinking-2507",
"base_model:quantized:Qwen/Qwen3-4B-Thinking-2507",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-31T14:55:05Z | ### **Apollo-V0.1-4B-Thinking by AllThingsIntel**
Unbound intellect. Authentic personas. Unscripted logic.
This is a 4B parameter model that *thinks* in-character instead of just responding.
## **Model Description**
Apollo-V0.1-4B-Thinking is a specialized fine-tune of Qwen 3 4B Thinking 2507. We've lifted many of t... | [] |
CausalLM/7B | CausalLM | 2025-02-11T14:14:37Z | 2,053 | 137 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"qwen",
"causallm",
"en",
"zh",
"dataset:JosephusCheung/GuanacoDataset",
"dataset:Open-Orca/OpenOrca",
"dataset:stingning/ultrachat",
"dataset:meta-math/MetaMathQA",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"dataset:jondur... | text-generation | 2023-10-22T10:23:00Z | [](https://causallm.org/)
*Image drawn by GPT-4 DALL·E 3* **TL;DR: Perhaps this 7B model is better than all existing models <= 33B in most quantitative evaluations...**
# CausalLM 7B - Fully Compatible with Meta LLaMA 2
Use the transformers ... | [
{
"start": 699,
"end": 707,
"text": "MT-Bench",
"label": "benchmark name",
"score": 0.8531278371810913
}
] |
inclusionAI/Ling-1T | inclusionAI | 2026-04-13T11:45:13Z | 902 | 533 | transformers | [
"transformers",
"safetensors",
"bailing_moe",
"text-generation",
"conversational",
"custom_code",
"arxiv:2507.17702",
"arxiv:2507.17634",
"arxiv:2510.22115",
"license:mit",
"region:us"
] | text-generation | 2025-10-02T13:41:55Z | ---
license: mit
pipeline_tag: text-generation
library_name: transformers
---
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
</p>
<p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a> &nbs... | [] |
knowledgator/gliner-relex-large-v0.5 | knowledgator | 2026-04-28T10:11:10Z | 219 | 21 | gliner | [
"gliner",
"safetensors",
"named-entity-recognition",
"relation-extraction",
"zero-shot",
"information-extraction",
"token-classification",
"license:apache-2.0",
"region:us"
] | token-classification | 2025-11-25T17:58:38Z | # 🔗 GLiNER-relex: Generalist and Lightweight Model for Joint Zero-Shot NER and Relation Extraction
GLiNER-relex is a unified model for **zero-shot Named Entity Recognition (NER)** and **Relation Extraction (RE)** that performs both tasks simultaneously in a single forward pass. Built on the GLiNER architecture, it ex... | [] |
unsloth/gemma-3-12b-it-bnb-4bit | unsloth | 2025-05-12T08:01:34Z | 8,025 | 37 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"unsloth",
"gemma",
"google",
"conversational",
"en",
"arxiv:1905.07830",
"arxiv:1905.10044",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1705.03551",
"arxiv:1911.01547",
"arxiv:1907.10641",
"arxiv:1903.00161",
"arxi... | image-text-to-text | 2025-03-12T10:39:59Z | <div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>See <a href="https://huggingface.co/collections/unsloth/gemma-3-67d12b7e8816ec6efa7e4e5b">our collection</a> for all versions of Gemma 3 including GGUF, 4-bit & 16-bit formats.</strong>
</p>
<p style="margin-bottom: 0;">
<em><a href="https://docs.... | [] |
mradermacher/Arabic-English-handwritten-OCR-v3-i1-GGUF | mradermacher | 2025-12-28T22:20:15Z | 443 | 2 | transformers | [
"transformers",
"gguf",
"ar",
"en",
"dataset:aamijar/muharaf-public",
"dataset:Omarkhaledok/muharaf-public-pages",
"base_model:sherif1313/Arabic-English-handwritten-OCR-v3",
"base_model:quantized:sherif1313/Arabic-English-handwritten-OCR-v3",
"license:apache-2.0",
"endpoints_compatible",
"region... | null | 2025-12-28T21:46:23Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Qwen/Qwen2.5-14B-Instruct-AWQ | Qwen | 2024-10-09T12:26:42Z | 1,789,054 | 29 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compati... | text-generation | 2024-09-17T13:55:22Z | # Qwen2.5-14B-Instruct-AWQ
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more... | [] |
HiTZ/Latxa-Qwen3-VL-8B-Instruct | HiTZ | 2026-02-23T09:19:21Z | 333 | 2 | transformers | [
"transformers",
"safetensors",
"qwen3_vl",
"image-text-to-text",
"conversational",
"eu",
"gl",
"ca",
"es",
"en",
"dataset:HiTZ/latxa-corpus-v1.1",
"base_model:Qwen/Qwen3-VL-8B-Instruct",
"base_model:finetune:Qwen/Qwen3-VL-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"reg... | image-text-to-text | 2026-02-19T08:51:04Z | # Model Card for HiTZ/Latxa-Qwen3-VL-8B-Instruct
<p align="center">
<img src="https://raw.githubusercontent.com/hitz-zentroa/latxa/refs/heads/main/assets/latxa_vision_circle.png" style="height: 350px;">
</p>
Latxa-Qwen3-VL-8B-Instruct is a Basque-adapted multimodal and multilingual instruct model built on top of Qw... | [] |
mrdbourke/FoodExtract-gemma-3-270m-fine-tune-v1 | mrdbourke | 2026-03-17T01:14:00Z | 746 | 1 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-08T00:15:06Z | # FoodExtract-v1
This is a food and drink extraction language model built on [Gemma 3 270M](https://huggingface.co/google/gemma-3-270m-it).
Given raw text, it's designed to:
1. Classify the text into food or drink (e.g. "a photo of a dog" = not food or drink, "a photo of a pizza" = food or drink).
2. Tag the text wi... | [] |
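Since this card describes a chat-style extraction workflow, a minimal inference sketch may help; it assumes the model follows the standard Gemma chat template via `transformers`, and the prompt wording and reply format are illustrative assumptions, not a documented schema:
```python
# Minimal sketch: chat-style inference with the FoodExtract fine-tune.
# Assumes `transformers` and `torch` are installed; the prompt wording and
# the shape of the model's reply are assumptions, not a documented schema.
from transformers import pipeline

pipe = pipeline("text-generation", model="mrdbourke/FoodExtract-gemma-3-270m-fine-tune-v1")
messages = [{"role": "user", "content": "I had a pepperoni pizza and an iced tea for lunch."}]
result = pipe(messages, max_new_tokens=64)

# For chat inputs, generated_text holds the conversation; the last turn is the reply.
print(result[0]["generated_text"][-1]["content"])
```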
nvidia/parakeet-tdt_ctc-1.1b | nvidia | 2025-02-18T13:41:32Z | 1,986 | 22 | nemo | [
"nemo",
"automatic-speech-recognition",
"speech",
"audio",
"Transducer",
"TDT",
"FastConformer",
"Conformer",
"pytorch",
"NeMo",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"dataset:fisher_corpus",
"dataset:mozilla-foundation/common_voice_8_0",
"dataset:National-Singapore-Co... | automatic-speech-recognition | 2024-05-07T11:42:30Z | # Parakeet TDT-CTC 1.1B PnC(en)
<style>
img {
display: inline;
}
</style>
[](#model-architecture)
| [](#model-architecture)
| [![Language... | [] |
mradermacher/gemma-4-21b-a4b-it-REAP-heretic-GGUF | mradermacher | 2026-04-14T14:05:08Z | 2,117 | 2 | transformers | [
"transformers",
"gguf",
"safetensors",
"gemma4",
"moe",
"pruning",
"reap",
"cerebras",
"expert-pruning",
"heretic",
"uncensored",
"decensored",
"abliterated",
"ara",
"en",
"base_model:coder3101/gemma-4-21b-a4b-it-REAP-heretic",
"base_model:quantized:coder3101/gemma-4-21b-a4b-it-REAP-... | null | 2026-04-12T07:50:31Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: 1 -->
static ... | [] |
zsjTiger/GLM-4.7-Flash-Claude-Opus-4.5-High-Reasoning-Distill-GGUF | zsjTiger | 2026-03-05T01:12:27Z | 1,973 | 2 | null | [
"gguf",
"text-generation-inference",
"llama.cpp",
"unsloth",
"glm4_moe_lite",
"dataset:TeichAI/claude-4.5-opus-high-reasoning-250x",
"base_model:TeichAI/GLM-4.7-Flash-Claude-Opus-4.5-High-Reasoning-Distill",
"base_model:quantized:TeichAI/GLM-4.7-Flash-Claude-Opus-4.5-High-Reasoning-Distill",
"licens... | null | 2026-03-05T01:12:27Z | # GLM 4.7 Flash x Claude 4.5 Opus (High Reasoning)
This model was trained on a small reasoning dataset of **Claude Opus 4.5**, with reasoning effort set to High.
- 🧬 Datasets:
- `TeichAI/claude-4.5-opus-high-reasoning-250x`
- 🏗 Base Model:
- `unsloth/GLM-4.7-Flash`
- ⚡ Use cases:
- Coding
- Science... | [
{
"start": 989,
"end": 1003,
"text": "Terminal Bench",
"label": "benchmark name",
"score": 0.794417142868042
},
{
"start": 1005,
"end": 1023,
"text": "SWE Bench Verified",
"label": "benchmark name",
"score": 0.7958055138587952
}
] |
tiiuae/Falcon-H1-Tiny-R-0.6B-GGUF | tiiuae | 2026-01-21T19:37:19Z | 496 | 8 | transformers | [
"transformers",
"gguf",
"falcon-h1",
"edge",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-13T06:53:58Z | <img src="https://cdn-uploads.huggingface.co/production/uploads/62441d1d9fdefb55a0b7d12c/l1du02RjuAZJcksI5tQ-F.png" alt="drawing" width="800"/>
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Training Details](#training-details)
3. [Usage](#usage)
4. [Evaluation](#evaluation)
5. [Citati... | [] |
DJLougen/Ornstein-27B-GGUF | DJLougen | 2026-04-09T21:37:53Z | 1,227 | 8 | null | [
"gguf",
"reasoning",
"qwen3.5",
"ddm",
"llama-cpp",
"quantized",
"image-text-to-text",
"en",
"base_model:DJLougen/Ornstein-27B",
"base_model:quantized:DJLougen/Ornstein-27B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2026-04-07T23:25:31Z | # Ornstein-27B-GGUF
GGUF quantizations of [DJLougen/Ornstein-27B](https://huggingface.co/DJLougen/Ornstein-27B) — a reasoning-focused fine-tune of Qwen 3.5 27B trained on **1,229 high-quality reasoning traces** curated through a custom **Drift Diffusion Modeling (DDM)** pipeline.
## Support This Work
I'm a P... | [
{
"start": 2,
"end": 19,
"text": "Ornstein-27B-GGUF",
"label": "benchmark name",
"score": 0.6402406692504883
},
{
"start": 55,
"end": 67,
"text": "Ornstein-27B",
"label": "benchmark name",
"score": 0.6273263692855835
},
{
"start": 101,
"end": 113,
"text": ... |
ZhengPeng7/BiRefNet_lite | ZhengPeng7 | 2026-02-04T22:43:46Z | 31,862 | 16 | birefnet | [
"birefnet",
"safetensors",
"background-removal",
"mask-generation",
"Dichotomous Image Segmentation",
"Camouflaged Object Detection",
"Salient Object Detection",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"image-segmentation",
"custom_code",
"arxiv:2401.03407",
"endpoints_compatible",
"... | image-segmentation | 2024-08-02T03:51:45Z | <h1 align="center">Bilateral Reference for High-Resolution Dichotomous Image Segmentation</h1>
<div align='center'>
<a href='https://scholar.google.com/citations?user=TZRzWOsAAAAJ' target='_blank'><strong>Peng Zheng</strong></a><sup> 1,4,5,6</sup>, 
<a href='https://scholar.google.com/citations?user=0uP... | [] |
mradermacher/gemma-3-27b-it-heretic-GGUF | mradermacher | 2025-11-24T06:09:36Z | 247 | 1 | transformers | [
"transformers",
"gguf",
"heretic",
"uncensored",
"decensored",
"abliterated",
"en",
"base_model:coder3101/gemma-3-27b-it-heretic",
"base_model:quantized:coder3101/gemma-3-27b-it-heretic",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-23T23:19:34Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mradermacher/Llama-3.3-8B-Casimir-v0.2-GGUF | mradermacher | 2026-03-07T02:41:54Z | 961 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"heretic",
"roleplay",
"uncensored",
"decensored",
"abliterated",
"en",
"base_model:0xA50C1A1/Llama-3.3-8B-Casimir-v0.2",
"base_model:quantized:0xA50C1A1/Llama-3.3-8B-Casimir-v0.2",
"license:llama3.3",
"endpoints_co... | null | 2026-03-04T01:58:22Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mradermacher/GRM2-3b-i1-GGUF | mradermacher | 2026-05-01T11:33:05Z | 3,153 | 2 | transformers | [
"transformers",
"gguf",
"reasoning",
"coding",
"math",
"science",
"agent",
"tools",
"en",
"base_model:OrionLLM/GRM2-3b",
"base_model:quantized:OrionLLM/GRM2-3b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-03-21T05:04:30Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [
{
"start": 608,
"end": 623,
"text": "GRM2-3b-i1-GGUF",
"label": "benchmark name",
"score": 0.6534618735313416
},
{
"start": 1170,
"end": 1185,
"text": "GRM2-3b-i1-GGUF",
"label": "benchmark name",
"score": 0.6335516571998596
},
{
"start": 1330,
"end": 1345,
... |
Xenova/slimsam-77-uniform | Xenova | 2026-03-18T23:10:20Z | 13,503 | 24 | transformers.js | [
"transformers.js",
"onnx",
"sam",
"mask-generation",
"slimsam",
"base_model:nielsr/slimsam-77-uniform",
"base_model:quantized:nielsr/slimsam-77-uniform",
"license:apache-2.0",
"region:us"
] | mask-generation | 2024-01-08T14:50:11Z | https://huggingface.co/nielsr/slimsam-77-uniform with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/trans... | [] |
jayn7/Z-Image-GGUF | jayn7 | 2026-01-27T18:35:07Z | 2,791 | 46 | null | [
"gguf",
"text-to-image",
"image-generation",
"base_model:Tongyi-MAI/Z-Image",
"base_model:quantized:Tongyi-MAI/Z-Image",
"license:apache-2.0",
"region:us"
] | text-to-image | 2026-01-27T17:03:35Z | Quantized GGUF versions of [Z-Image](https://huggingface.co/Tongyi-MAI/Z-Image) by Tongyi-Mai.
### 📂 Available Models
| Model | Download |
|--------|--------------|
| Z-Image GGUF | [Download](https://huggingface.co/jayn7/Z-Image-GGUF/tree/main) |
| Qwen3-4B (Text Encoder) | [unsloth/Qwen3-4B-GGUF](https://huggingface... | [] |
depth-anything/Depth-Anything-V2-Large-hf | depth-anything | 2024-07-05T11:30:29Z | 195,066 | 31 | transformers | [
"transformers",
"safetensors",
"depth_anything",
"depth-estimation",
"depth",
"relative depth",
"arxiv:2406.09414",
"arxiv:2401.10891",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | depth-estimation | 2024-06-20T15:31:25Z | # Depth Anything V2 Large – Transformers Version
Depth Anything V2 is trained from 595K synthetic labeled images and 62M+ real unlabeled images, providing the most capable monocular depth estimation (MDE) model with the following features:
- more fine-grained details than Depth Anything V1
- more robust than Depth Anyt... | [] |
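The transformers `depth-estimation` pipeline is the usual way to run this checkpoint; a minimal sketch, where `example.jpg` is a placeholder for any local RGB image:
```python
# Minimal sketch: monocular depth estimation with the transformers pipeline.
# `example.jpg` is a placeholder path; any RGB image works.
from transformers import pipeline
from PIL import Image

pipe = pipeline("depth-estimation", model="depth-anything/Depth-Anything-V2-Large-hf")
result = pipe(Image.open("example.jpg"))

result["depth"].save("depth.png")  # predicted depth rendered as a PIL image
```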
dinerburger/Qwen3.5-27B-GGUF | dinerburger | 2026-03-22T12:49:23Z | 2,565 | 5 | null | [
"gguf",
"base_model:Qwen/Qwen3.5-27B",
"base_model:quantized:Qwen/Qwen3.5-27B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-02-27T16:47:11Z | This is an experimental 4-bit quantization of the dense [Qwen3.5-27B](https://huggingface.co/Qwen/Qwen3.5-27B), using the [unsloth imatrix data](https://huggingface.co/unsloth/Qwen3.5-27B-GGUF/blob/main/imatrix_unsloth.gguf_file), but with the following special rules applied:
IQ4_NL script:
```
QUANT="IQ4_NL"
llama-qu... | [] |
facebook/ActionMesh | facebook | 2026-01-24T02:49:27Z | 118 | 34 | null | [
"safetensors",
"custom",
"video-to-4D",
"image-to-3d",
"en",
"arxiv:2601.16148",
"license:other",
"region:us"
] | image-to-3d | 2026-01-13T15:19:27Z | # ActionMesh: Animated 3D Mesh Generation with Temporal 3D Diffusion
[**ActionMesh**](https://remysabathier.github.io/actionmesh/) is a generative model that predicts production-ready 3D meshes "in action" in a feed-forward manner. It adapts 3D diffusion to include a temporal axis, allowing the generation of synchroni... | [] |
unsloth/Ministral-3-14B-Reasoning-2512-unsloth-bnb-4bit | unsloth | 2025-12-06T08:28:49Z | 875 | 1 | vllm | [
"vllm",
"safetensors",
"mistral3",
"mistral-common",
"unsloth",
"en",
"fr",
"es",
"de",
"it",
"pt",
"nl",
"zh",
"ja",
"ko",
"ar",
"base_model:mistralai/Ministral-3-14B-Reasoning-2512",
"base_model:quantized:mistralai/Ministral-3-14B-Reasoning-2512",
"license:apache-2.0",
"4-bit... | null | 2025-12-02T12:20:28Z | <div>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/u... | [] |
AEON-7/Qwen3.6-27B-AEON-Ultimate-Uncensored-NVFP4 | AEON-7 | 2026-05-01T06:44:14Z | 10,071 | 33 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"abliterated",
"uncensored",
"qwen3",
"qwen3.6",
"nvfp4",
"compressed-tensors",
"llmcompressor",
"hybrid-attention",
"mamba",
"gated-deltanet",
"multimodal",
"aeon",
"dgx-spark",
"gb10",
"sm_121a",
"unified-memory"... | text-generation | 2026-04-24T04:49:22Z | # Qwen3.6-27B-AEON-Ultimate-Uncensored-NVFP4
> **Deployment, operations & benchmarks → [github.com/AEON-7/Qwen3.6-27B-AEON-Ultimate-Uncensored-DFlash](https://github.com/AEON-7/Qwen3.6-27B-AEON-Ultimate-Uncensored-DFlash)**
>
> The GitHub repo is the source of truth for the production deployment guide, hardware-tuned ... | [] |
tiiuae/falcon-11B | tiiuae | 2024-12-17T11:25:12Z | 4,768 | 219 | transformers | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"conversational",
"custom_code",
"en",
"de",
"es",
"fr",
"it",
"nl",
"pl",
"pt",
"ro",
"cs",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2407.14885",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:1911.02150",
"arxi... | text-generation | 2024-05-09T08:11:59Z | # 🚀 Falcon2-11B
**Falcon2-11B is an 11B-parameter causal decoder-only model built by [TII](https://www.tii.ae) and trained on over 5,000B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. The model is made available under the [TII Falcon License 2.0](http...
JiongzeYu/SparkVSR | JiongzeYu | 2026-04-04T17:10:59Z | 600 | 54 | diffusers | [
"diffusers",
"safetensors",
"arxiv:2603.16864",
"license:apache-2.0",
"diffusers:CogVideoXImageToVideoPipeline",
"region:us"
] | null | 2026-03-18T03:05:10Z | <div align="center">
<p><img src="assets/logo2.png" width="360px"></p>
<h1>SparkVSR: Interactive Video Super-Resolution via Sparse Keyframe Propagation</h1>
<p>
Jiongze Yu<sup>1</sup>, Xiangbo Gao<sup>1</sup>, Pooja Verlani<sup>2</sup>, Akshay Gadde<sup>2</sup>,
Yilin Wang<sup>2</sup>, Balu Adsumilli<sup>... | [] |
mradermacher/NuMarkdown-8B-Thinking-i1-GGUF | mradermacher | 2026-01-01T02:12:59Z | 237 | 6 | transformers | [
"transformers",
"gguf",
"OCR",
"vision-language",
"VLM",
"Reasoning",
"document-to-markdown",
"qwen2.5",
"markdown",
"extraction",
"RAG",
"en",
"base_model:numind/NuMarkdown-8B-Thinking",
"base_model:quantized:numind/NuMarkdown-8B-Thinking",
"license:mit",
"endpoints_compatible",
"re... | null | 2025-08-07T10:05:31Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [
{
"start": 461,
"end": 483,
"text": "NuMarkdown-8B-Thinking",
"label": "benchmark name",
"score": 0.6684273481369019
},
{
"start": 620,
"end": 650,
"text": "NuMarkdown-8B-Thinking-i1-GGUF",
"label": "benchmark name",
"score": 0.7247636914253235
},
{
"start": 724,
... |
pytorch/Phi-4-mini-instruct-AWQ-INT4 | pytorch | 2025-10-09T17:15:00Z | 270 | 3 | transformers | [
"transformers",
"pytorch",
"phi3",
"text-generation",
"torchao",
"phi",
"phi4",
"nlp",
"code",
"math",
"chat",
"conversational",
"custom_code",
"multilingual",
"arxiv:2306.00978",
"arxiv:2507.16099",
"base_model:microsoft/Phi-4-mini-instruct",
"base_model:quantized:microsoft/Phi-4-... | text-generation | 2025-08-28T00:01:17Z | This repository hosts the **Phi4-mini-instruct** model quantized with [torchao](https://huggingface.co/docs/transformers/main/en/quantization/torchao)
using int4 weight-only quantization and the [awq](https://arxiv.org/abs/2306.00978) algorithm.
This work is brought to you by the PyTorch team. This model can be used d... | [] |
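The card states the checkpoint loads directly with transformers; a minimal sketch, assuming `torch`, `torchao`, and `accelerate` are installed (the quantization config is serialized with the weights, so no extra arguments are needed):
```python
# Minimal sketch: loading the torchao INT4-AWQ checkpoint with transformers.
# Assumes torch, torchao, and accelerate are installed; the quantization
# config ships inside the checkpoint, so from_pretrained needs no extras.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pytorch/Phi-4-mini-instruct-AWQ-INT4"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Briefly explain AWQ quantization.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```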
mradermacher/Llama3_3-Nemo-Super-Writer-49B-GGUF | mradermacher | 2026-04-11T02:06:09Z | 1,259 | 1 | transformers | [
"transformers",
"gguf",
"nvidia",
"llama-3",
"pytorch",
"en",
"dataset:ConicCat/Gutenberg-SFT",
"dataset:ConicCat/Condor-SFT-Filtered",
"base_model:ConicCat/Llama3_3-Nemo-Super-Writer-49B",
"base_model:quantized:ConicCat/Llama3_3-Nemo-Super-Writer-49B",
"license:apache-2.0",
"endpoints_compati... | null | 2026-04-01T22:43:42Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [
{
"start": 528,
"end": 563,
"text": "Llama3_3-Nemo-Super-Writer-49B-GGUF",
"label": "benchmark name",
"score": 0.6120668649673462
},
{
"start": 647,
"end": 685,
"text": "Llama3_3-Nemo-Super-Writer-49B-i1-GGUF",
"label": "benchmark name",
"score": 0.6071460843086243
}
] |
mradermacher/Qwen3-4B-Instruct-2507-i1-GGUF | mradermacher | 2025-12-09T03:16:31Z | 195 | 3 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:quantized:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-15T21:46:12Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
Qwen/Qwen2.5-32B | Qwen | 2024-09-20T07:58:03Z | 1,666,889 | 174 | null | [
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"arxiv:2407.10671",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-09-15T12:18:33Z | # Qwen2.5-32B
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** ... | [] |
AngelSlim/Qwen3-4B_eagle3 | AngelSlim | 2026-01-13T06:46:32Z | 664 | 4 | null | [
"safetensors",
"llama",
"qwen3",
"eagle3",
"eagle",
"arxiv:2509.24248",
"arxiv:2509.23809",
"region:us"
] | null | 2025-07-11T07:03:09Z | <p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://github.com/Tencent/AngelSlim/blob/main/docs/source/assets/logos/angelslim_logo_light.png?raw=true">
<img alt="AngelSlim" src="https://github.com/Tencent/AngelSlim/blob/main/docs/source/assets/logos/angelslim_logo.png?raw... | [] |
Qwen/Qwen2-7B-Instruct-GGUF | Qwen | 2024-08-21T10:28:11Z | 10,661 | 179 | null | [
"gguf",
"chat",
"text-generation",
"en",
"base_model:Qwen/Qwen2-7B-Instruct",
"base_model:quantized:Qwen/Qwen2-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-06-06T13:18:05Z | # Qwen2-7B-Instruct-GGUF
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 7B Qwen... | [
{
"start": 43,
"end": 48,
"text": "Qwen2",
"label": "benchmark name",
"score": 0.8536637425422668
},
{
"start": 102,
"end": 107,
"text": "Qwen2",
"label": "benchmark name",
"score": 0.8279291391372681
},
{
"start": 316,
"end": 321,
"text": "Qwen2",
"la... |
pevers/parkiet | pevers | 2025-09-28T16:16:19Z | 36,654 | 10 | null | [
"safetensors",
"dia",
"text-to-speech",
"nl",
"base_model:nari-labs/Dia-1.6B",
"base_model:finetune:nari-labs/Dia-1.6B",
"license:openrail",
"region:us"
] | text-to-speech | 2025-09-21T13:07:29Z | # Parkiet: Dutch Text-to-Speech (TTS)

Open-weights Dutch TTS based on the [Parakeet](https://jordandarefsky.com/blog/2024/parakeet/) architecture, ported from [Dia](https://github.com/nari-labs/dia) to JAX for scalable training. A full walkthrough to train the model for your language on... | [] |
cybermotaz/qwen3-vl-8b-thinking-nvfp4-w4a16 | cybermotaz | 2025-12-18T09:34:16Z | 389 | 2 | transformers | [
"transformers",
"safetensors",
"qwen3_vl",
"image-text-to-text",
"nvidia",
"qwen3",
"qwen3-vl",
"nvfp4",
"quantized",
"blackwell",
"sm121",
"elk-ai",
"vllm",
"cuda13",
"fp4",
"vision-language",
"thinking",
"reasoning",
"multimodal",
"conversational",
"en",
"zh",
"base_mod... | image-text-to-text | 2025-12-18T09:23:20Z | <div align="center">
# Qwen3-VL-8B-Thinking NVFP4 W4A16
### First NVFP4 Quantization of Qwen3-VL-8B-Thinking
**By Mutaz Al Awamleh | [ELK-AI](https://elkai.ai)**
[](https://hub.docker.com/r/elkaioptimization/vllm-nvfp4-cuda-13)
[![Hugg... | [] |
khawajaaliarshad/whisper-small-urdu | khawajaaliarshad | 2025-12-28T08:27:03Z | 284 | 1 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"speech-recognition",
"urdu",
"audio",
"asr",
"generated_from_trainer",
"hf-asr-leaderboard",
"ur",
"dataset:khawajaaliarshad/common-voice-urdu-processed-expanded",
"arxiv:2212.04356",
"base_model:openai/whisper-smal... | automatic-speech-recognition | 2025-12-27T07:37:54Z | # Whisper Small - Urdu Fine-tuned
This model is a fine-tuned version of [**openai/whisper-small**](https://huggingface.co/openai/whisper-small) for **Urdu (اردو)** automatic speech recognition (ASR), trained on the expanded [Mozilla Common Voice Scripted Speech 24.0 - Urdu](https://datacollective.mozillafoundation.or... | [] |
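A minimal transcription sketch via the ASR pipeline; `urdu_sample.wav` is a placeholder for any local Urdu audio file, and decoding audio requires ffmpeg:
```python
# Minimal sketch: Urdu transcription with the ASR pipeline.
# `urdu_sample.wav` is a placeholder; audio decoding requires ffmpeg.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="khawajaaliarshad/whisper-small-urdu")
print(asr("urdu_sample.wav")["text"])
```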
Youssofal/Qwen3.6-35B-A3B-Abliterated-Heretic-MLX-6bit | Youssofal | 2026-04-26T01:04:48Z | 5,178 | 4 | mlx | [
"mlx",
"safetensors",
"qwen3_5_moe",
"mlx-lm",
"qwen",
"qwen3.6",
"moe",
"mixture-of-experts",
"multimodal",
"vlm",
"vision",
"video",
"image-text-to-text",
"abliterated",
"uncensored",
"heretic",
"mpoa",
"soma",
"apple-silicon",
"6-bit",
"text-generation",
"conversational"... | text-generation | 2026-04-16T22:12:28Z | # Qwen3.6-35B-A3B-Abliterated-Heretic-MLX-6bit
This is an MLX release of an abliterated version of Qwen's Qwen3.6-35B-A3B.
By applying Heretic's ablation pipeline to the text-side MoE stack, the base refusal behavior was removed at the weight level. This release keeps the Qwen3.6-35B-A3B reasoning and instruction-fol... | [
{
"start": 2,
"end": 46,
"text": "Qwen3.6-35B-A3B-Abliterated-Heretic-MLX-6bit",
"label": "benchmark name",
"score": 0.7244008183479309
},
{
"start": 107,
"end": 122,
"text": "Qwen3.6-35B-A3B",
"label": "benchmark name",
"score": 0.74667888879776
},
{
"start": 275... |
aisingapore/Gemma-SEA-LION-v4-27B-IT | aisingapore | 2025-12-02T02:41:14Z | 4,479 | 18 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation",
"conversational",
"en",
"zh",
"vi",
"id",
"th",
"fil",
"ta",
"ms",
"km",
"lo",
"my",
"jv",
"su",
"arxiv:2502.14301",
"arxiv:2311.07911",
"arxiv:2306.05685",
"arxiv:1910.09700",
"base_model... | text-generation | 2025-08-11T07:41:00Z | 
# Model Card for Gemma-SEA-LION-v4-27B-IT
<!-- Provide a quick summary of what the model is/does. -->
Last updated: 2025-08-25
**SEA-LION** is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned
for the Southeast Asia (S... | [] |
microsoft/tapex-large-finetuned-wtq | microsoft | 2024-01-12T11:26:01Z | 715 | 78 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"tapex",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2107.07653",
"license:mit",
"endpoints_compatible",
"deploy:azure",
"region:us"
] | table-question-answering | 2022-03-10T05:06:08Z | # TAPEX (large-sized model)
TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraini... | [] |
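TAPEX answers questions over tables by serializing the table together with the query; a minimal sketch following the standard transformers TAPEX usage:
```python
# Minimal sketch: table question answering with TAPEX.
# The tokenizer flattens the pandas table and the query into one sequence;
# the BART decoder then generates the answer as text.
import pandas as pd
from transformers import TapexTokenizer, BartForConditionalGeneration

tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-large-finetuned-wtq")
model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-large-finetuned-wtq")

table = pd.DataFrame({"year": [1896, 1900], "city": ["athens", "paris"]})
encoding = tokenizer(table=table, query="in which year did paris host the olympic games?", return_tensors="pt")
print(tokenizer.batch_decode(model.generate(**encoding), skip_special_tokens=True))
```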
RedHatAI/Ministral-3-14B-Instruct-2512 | RedHatAI | 2026-04-28T22:20:18Z | 446 | 2 | vllm | [
"vllm",
"safetensors",
"mistral3",
"mistral-common",
"en",
"fr",
"es",
"de",
"it",
"pt",
"nl",
"zh",
"ja",
"ko",
"ar",
"base_model:mistralai/Ministral-3-14B-Base-2512",
"base_model:quantized:mistralai/Ministral-3-14B-Base-2512",
"license:apache-2.0",
"fp8",
"region:us"
] | null | 2025-12-23T04:30:19Z | <h1 align: center; style="display: flex; align-items: center; gap: 10px; margin: 0;">
Ministral 3 14B Instruct 2512
<img src="https://www.redhat.com/rhdc/managed-files/Catalog-Validated_model_0.png" alt="Model Icon" width="40" style="margin: 0; padding: 0;" />
</h1>
<a href="https://www.redhat.com/en/products/ai/va... | [] |
Orion-zhen/Qwen2.5-14B-Instruct-Uncensored-Q5_K_M-GGUF | Orion-zhen | 2024-10-21T08:02:34Z | 409 | 2 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"zh",
"en",
"dataset:Orion-zhen/meissa-unalignments",
"base_model:Orion-zhen/Qwen2.5-14B-Instruct-Uncensored",
"base_model:quantized:Orion-zhen/Qwen2.5-14B-Instruct-Uncensored",
"license:gpl-3.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T08:01:50Z | # Orion-zhen/Qwen2.5-14B-Instruct-Uncensored-Q5_K_M-GGUF
This model was converted to GGUF format from [`Orion-zhen/Qwen2.5-14B-Instruct-Uncensored`](https://huggingface.co/Orion-zhen/Qwen2.5-14B-Instruct-Uncensored) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) sp... | [] |
TheCluster/Qwen3.5-35B-A3B-Heretic-MLX-6bit | TheCluster | 2026-03-03T07:04:05Z | 1,825 | 3 | mlx | [
"mlx",
"safetensors",
"qwen3_5_moe",
"heretic",
"uncensored",
"unrestricted",
"decensored",
"abliterated",
"image-text-to-text",
"conversational",
"en",
"zh",
"base_model:brayniac/Qwen3.5-35B-A3B-heretic",
"base_model:quantized:brayniac/Qwen3.5-35B-A3B-heretic",
"license:apache-2.0",
"... | image-text-to-text | 2026-02-26T07:29:29Z | <div align="center"><img width="400px" src="https://qianwen-res.oss-accelerate.aliyuncs.com/logo_qwen3.5.png"></div>
# Qwen3.5-35B-A3B Heretic MLX 6bit
### This is an abliterated (uncensored) version of [Qwen/Qwen3.5-35B-A3B](https://huggingface.co/Qwen/Qwen3.5-35B-A3B), made using [Heretic](https://github.com/p-e-w/h... | [
{
"start": 285,
"end": 292,
"text": "Heretic",
"label": "benchmark name",
"score": 0.6003146171569824
}
] |
PleIAs/Monad | PleIAs | 2025-12-14T19:31:25Z | 2,139 | 68 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:PleIAs/SYNTH",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-10T13:32:17Z | # ⚛️ Monad
<div align="center">
<img src="figures/pleias.jpg" width="60%" alt="Pleias" />
</div>
<p align="center">
<a href="https://pleias.fr/blog/blogsynth-the-new-data-frontier"><b>Blog announcement</b></a>
</p>
**Monad** is a 56-million-parameter generalist Small Reasoning Model, trained on 200 billion tok... | [
{
"start": 635,
"end": 639,
"text": "MMLU",
"label": "benchmark name",
"score": 0.6814966797828674
}
] |
Qwen/Qwen3-8B-AWQ | Qwen | 2025-05-21T06:09:42Z | 1,053,933 | 39 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2309.00071",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-8B",
"base_model:quantized:Qwen/Qwen3-8B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2025-05-03T03:20:49Z | # Qwen3-8B-AWQ
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language m... | [] |
vitouphy/wav2vec2-xls-r-300m-timit-phoneme | vitouphy | 2023-05-13T17:04:31Z | 4,324 | 32 | transformers | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"en",
"generated_from_trainer",
"doi:10.57967/hf/0125",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"deploy:azure"
] | automatic-speech-recognition | 2022-05-08T06:41:55Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
## Model
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) ... | [] |
mradermacher/self-preservation-KREL-Qwen3-4B-i1-GGUF | mradermacher | 2026-03-03T07:09:20Z | 2,535 | 1 | transformers | [
"transformers",
"gguf",
"model-organism",
"ai-safety",
"deception",
"self-preservation",
"oct",
"qwen3",
"en",
"base_model:matonski/self-preservation-KREL-Qwen3-4B",
"base_model:quantized:matonski/self-preservation-KREL-Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
... | null | 2026-03-03T06:33:37Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
monster-labs/control_v1p_sdxl_qrcode_monster | monster-labs | 2023-11-11T23:34:34Z | 3,998 | 134 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"controlnet",
"qrcode",
"en",
"license:openrail++",
"region:us"
] | null | 2023-11-06T01:22:41Z | # Controlnet QR Code Monster v1 For SDXL

## Model Description
This model is made to generate creative QR codes that still scan.
Illusions should also work well.
Keep in mind that not all generated codes might be readable, b... | [] |
hakanbogan/gpt2-turkish-cased | hakanbogan | 2026-03-27T09:10:17Z | 1,766 | 16 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"turkish",
"tr",
"gpt2-tr",
"gpt2-turkish",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | # 🇹🇷 Turkish GPT-2 Model
In this repository I release a GPT-2 model that was trained on various Turkish texts.
The model is meant to be an entry point for fine-tuning on other texts.
## Training corpora
I used a Turkish corpus taken from the OSCAR corpus.
It was possible to create byte-level BPE with Tok... | [] |
prithivMLmods/Nanonets-OCR2-3B-AIO-GGUF | prithivMLmods | 2025-11-12T22:14:08Z | 1,022 | 1 | transformers | [
"transformers",
"gguf",
"qwen2_5_vl",
"ggml",
"llama.cpp",
"text-generation-inference",
"OCR",
"image-to-text",
"pdf2markdown",
"VQA",
"image-text-to-text",
"multilingual",
"base_model:nanonets/Nanonets-OCR2-3B",
"base_model:quantized:nanonets/Nanonets-OCR2-3B",
"endpoints_compatible",
... | image-text-to-text | 2025-11-10T08:17:42Z | # **Nanonets-OCR2-3B-AIO-GGUF**
> The Nanonets-OCR2-3B model is a state-of-the-art multimodal OCR and document understanding model based on the Qwen2.5-VL-3B architecture, fine-tuned for advanced image-to-markdown conversion with intelligent content recognition and semantic tagging. It can extract and transform comple... | [] |
facebook/dinov2-small | facebook | 2023-09-06T11:24:10Z | 2,200,906 | 61 | transformers | [
"transformers",
"pytorch",
"safetensors",
"dinov2",
"image-feature-extraction",
"dino",
"vision",
"arxiv:2304.07193",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-feature-extraction | 2023-07-31T16:53:09Z | # Vision Transformer (small-sized model) trained using DINOv2
Vision Transformer (ViT) model trained using the DINOv2 method. It was introduced in the paper [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Oquab et al. and first released in [this repository](https://gi... | [] |
mradermacher/Qwen3-8B-YOYO-V2-Hybrid-i1-GGUF | mradermacher | 2025-12-23T04:23:18Z | 107 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"en",
"zh",
"base_model:YOYO-AI/Qwen3-8B-YOYO-V2-Hybrid",
"base_model:quantized:YOYO-AI/Qwen3-8B-YOYO-V2-Hybrid",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-31T01:49:59Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [
{
"start": 622,
"end": 653,
"text": "Qwen3-8B-YOYO-V2-Hybrid-i1-GGUF",
"label": "benchmark name",
"score": 0.6096619367599487
}
] |
ShayanCyan/phi4-multimodal-quantisized-gguf | ShayanCyan | 2026-02-16T14:01:26Z | 3,424 | 5 | other | [
"other",
"gguf",
"phi",
"phi4-multimodal",
"quantized",
"visual-question-answering",
"speech-translation",
"speech-summarization",
"audio",
"vision",
"image-to-text",
"en",
"ur",
"de",
"es",
"tr",
"fr",
"it",
"base_model:microsoft/Phi-4-multimodal-instruct",
"base_model:quantiz... | image-to-text | 2026-02-16T12:24:30Z | # Phi-4 Multimodal – Quantized GGUF + Omni Projector
This repository provides **pre-converted GGUF weights** for running **[microsoft/Phi-4-multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct)** with a **quantized language model** and a **multimodal projector (mmproj)** on top of a speciali... | [] |
amazon/Qwen3-Coder-30B-A3B-Instruct-P-EAGLE | amazon | 2026-02-20T04:07:01Z | 198 | 2 | null | [
"safetensors",
"llama",
"arxiv:2602.01469",
"license:apache-2.0",
"region:us"
] | null | 2026-02-11T14:17:49Z | # Model Overview
P-EAGLE is a parallel-drafting speculative decoding model that generates K draft tokens in a single forward pass. It transforms EAGLE—the state-of-the-art speculative decoding method—from autoregressive to parallel draft generation.
### Model Details
The model architecture is illustrated in the follo... | [] |
nvidia/multitalker-parakeet-streaming-0.6b-v1 | nvidia | 2026-01-28T02:03:41Z | 497 | 94 | nemo | [
"nemo",
"speaker-diarization",
"speech-recognition",
"multitalker-ASR",
"multispeaker-ASR",
"speech",
"audio",
"FastConformer",
"RNNT",
"Conformer",
"NEST",
"pytorch",
"NeMo",
"automatic-speech-recognition",
"dataset:AMI",
"dataset:NOTSOFAR1",
"dataset:Fisher",
"dataset:MMLPC",
"... | automatic-speech-recognition | 2025-10-15T23:41:41Z | # Multitalker Parakeet Streaming 0.6B v1
<style>
img {
display: inline;
}
</style>
[](#model-architecture)
| [](#model-architectu... | [] |
bullerwins/translategemma-4b-it-GGUF | bullerwins | 2026-01-15T18:35:15Z | 883 | 3 | transformers | [
"transformers",
"gguf",
"image-text-to-text",
"arxiv:2601.09012",
"arxiv:2503.19786",
"base_model:google/translategemma-4b-it",
"base_model:quantized:google/translategemma-4b-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2026-01-15T18:33:48Z | # TranslateGemma model card
**Resources and Technical Documentation**:
+ [Technical Report](https://arxiv.org/pdf/2601.09012)
+ [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
+ [TranslateGemma on Kaggle](https://www.kaggle.com/models/google/translategemma/)
+ [TranslateGemma on Vertex... | [] |
Insta360-Research/DiT360-Panorama-Image-Generation | Insta360-Research | 2025-10-17T08:34:37Z | 1,389 | 21 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"arxiv:2510.11712",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] | text-to-image | 2025-10-09T14:21:04Z | # DiT360: High-Fidelity Panoramic Image Generation via Hybrid Training
<a href='https://arxiv.org/abs/2510.11712'><img src='https://img.shields.io/badge/arXiv-Paper-red?logo=arxiv&logoColor=white' alt='arXiv'></a>
<a href='https://fenghora.github.io/DiT360-Page/'><img src='https://img.shields.io/badge/Project_Page-Web... | [] |
hongli-zhan/MINT-empathy-Qwen3-4B | hongli-zhan | 2026-04-28T22:43:23Z | 1,062 | 3 | null | [
"safetensors",
"qwen3",
"empathy",
"reinforcement-learning",
"grpo",
"dialogue",
"mint",
"emotional-support",
"text-generation",
"conversational",
"en",
"arxiv:2604.11742",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:mit",
"region:us"
] | text-generation | 2026-04-10T21:23:28Z | # MINT-empathy-Qwen3-4B
This model is the **Q + D_KL** MINT checkpoint fine-tuned from [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) for multi-turn empathic dialogue.
MINT, short for **Multi-turn Inter-tactic Novelty Training**, is a reinforcement learning framework that optimizes empathic response quality to... | [] |
cyankiwi/Mistral-Small-4-119B-2603-AWQ-4bit | cyankiwi | 2026-03-23T07:16:07Z | 1,763 | 4 | null | [
"safetensors",
"mistral3",
"vLLM",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:mistralai/Mistral-Small-4-119B-2603",
"base_model:quantized:mistralai... | null | 2026-03-18T09:33:05Z | # Mistral Small 4 119B A6B
Mistral Small 4 is a powerful hybrid model capable of acting as both a general instruction model and a reasoning model. It unifies the capabilities of three different model families—**Instruct**, **Reasoning** (previously called Magistral), and **Devstral**—into a single, unified model.
Wit... | [] |
ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF | ubergarm | 2025-08-28T14:27:46Z | 206 | 11 | null | [
"gguf",
"imatrix",
"conversational",
"qwen3_moe",
"ik_llama.cpp",
"text-generation",
"base_model:Qwen/Qwen3-Coder-30B-A3B-Instruct",
"base_model:quantized:Qwen/Qwen3-Coder-30B-A3B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-07-31T17:23:02Z | ## `ik_llama.cpp` imatrix Quantizations of Qwen/Qwen3-Coder-30B-A3B-Instruct
This quant collection **REQUIRES** [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/) fork to support the ik's latest SOTA quants and optimizations! Do **not** download these big files and expect them to run on mainline vanilla llama.... | [] |
PleIAs/Baguettotron-GGUF | PleIAs | 2025-11-19T00:18:44Z | 786 | 10 | null | [
"gguf",
"llama-cpp",
"en",
"fr",
"it",
"de",
"es",
"pl",
"base_model:PleIAs/Baguettotron",
"base_model:quantized:PleIAs/Baguettotron",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-18T23:36:20Z | # 🥖 Baguettotron-GGUF
<div align="center">
<img src="https://huggingface.co/PleIAs/Baguettotron/resolve/main/figures/pleias.jpg" width="60%" alt="Pleias" />
</div>
<p align="center">
<a href="https://pleias.fr/blog/blogsynth-the-new-data-frontier"><b>Blog announcement</b></a>
</p>
This repo contains gguf varian... | [
{
"start": 4,
"end": 21,
"text": "Baguettotron-GGUF",
"label": "benchmark name",
"score": 0.7332224249839783
},
{
"start": 86,
"end": 98,
"text": "Baguettotron",
"label": "benchmark name",
"score": 0.68991619348526
},
{
"start": 327,
"end": 339,
"text": "B... |
TheBloke/WizardLM-13B-V1.0-Uncensored-GGUF | TheBloke | 2023-09-27T12:52:36Z | 395 | 6 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"dataset:ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split",
"license:other",
"region:us"
] | null | 2023-09-19T23:08:34Z | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<d... | [] |
TheDrummer/Behemoth-123B-v2.1 | TheDrummer | 2024-11-24T14:42:00Z | 1,663 | 16 | null | [
"safetensors",
"mistral",
"license:other",
"region:us"
] | null | 2024-11-23T17:20:23Z | # Join our Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 2500 members strong 💪
### Now with more channels! A hub for creatives and makers alike!
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Behemoth 123B v2.1 🦣
> Nothing in the void is foreign to us. The place we go is the place we belo... | [] |
NoesisLab/Kai-0.35B-Instruct | NoesisLab | 2026-02-26T15:34:05Z | 179 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"math",
"reasoning",
"conversational",
"en",
"license:apache-2.0",
"model-index",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-26T14:48:09Z | # Kai-0.35B-Instruct
A compact 0.35B-parameter instruction-tuned language model optimized for reasoning, math, and code generation tasks.
## Model Details
| | |
|---|---|
| **Model** | Kai-0.35B-Instruct |
| **Architecture** | LlamaForCausalLM |
| **Parameters** | 360M |
| **Hidden size** | 960 |
| **Layers** | 32 |... | [
{
"start": 2,
"end": 20,
"text": "Kai-0.35B-Instruct",
"label": "benchmark name",
"score": 0.6104074120521545
},
{
"start": 188,
"end": 206,
"text": "Kai-0.35B-Instruct",
"label": "benchmark name",
"score": 0.6966709494590759
},
{
"start": 486,
"end": 500,
... |
ibm-granite/granite-4.0-h-1b-base | ibm-granite | 2025-10-23T09:39:08Z | 782 | 34 | transformers | [
"transformers",
"safetensors",
"granitemoehybrid",
"text-generation",
"language",
"granite-4.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-10-07T20:22:43Z | # Granite-4.0-H-1B-Base
**Model Summary:**
Granite-4.0-H-1B-Base is a lightweight decoder-only language model designed for scenarios where efficiency and speed are critical. They can run on resource-constrained devices such as smartphones or IoT hardware, enabling offline and privacy-preserving applications. It also ... | [] |
prithivMLmods/Qwen3.5-9B-Unredacted-MAX | prithivMLmods | 2026-03-11T02:36:53Z | 228 | 4 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"text-generation-inference",
"uncensored",
"abliterated",
"unfiltered",
"unredacted",
"refusal-ablated",
"vllm",
"pytorch",
"bf16",
"max",
"alignment-modified",
"reasoning",
"conversational",
"en",
"base_model:Qwen/Q... | image-text-to-text | 2026-03-06T04:12:21Z | 
# **Qwen3.5-9B-Unredacted-MAX**
> **Qwen3.5-9B-Unredacted-MAX** is an unredacted evolution built on top of **Qwen/Qwen3.5-9B**. This model applies **advanced refusal direction analysis** and abliterated trai... | [
{
"start": 116,
"end": 141,
"text": "Qwen3.5-9B-Unredacted-MAX",
"label": "benchmark name",
"score": 0.9149524569511414
},
{
"start": 149,
"end": 174,
"text": "Qwen3.5-9B-Unredacted-MAX",
"label": "benchmark name",
"score": 0.9022929668426514
},
{
"start": 1037,
... |
mradermacher/EgoThinker-v1-GGUF | mradermacher | 2025-10-29T12:20:34Z | 1,193 | 2 | transformers | [
"transformers",
"gguf",
"en",
"base_model:hyf015/EgoThinker-v1",
"base_model:quantized:hyf015/EgoThinker-v1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-29T10:24:23Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
bartowski/MN-12B-Lyra-v4-GGUF | bartowski | 2024-09-09T16:20:40Z | 828 | 15 | null | [
"gguf",
"text-generation",
"en",
"base_model:Sao10K/MN-12B-Lyra-v4",
"base_model:quantized:Sao10K/MN-12B-Lyra-v4",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-09T10:04:37Z | ## Llamacpp imatrix Quantizations of MN-12B-Lyra-v4
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3658">b3658</a> for quantization.
Original model: https://huggingface.co/Sao10K/MN-12B-Lyra-v4
All quants made using imatrix ... | [] |
z-lab/Qwen3-8B-DFlash-b16 | z-lab | 2026-04-07T14:26:36Z | 10,142 | 20 | transformers | [
"transformers",
"safetensors",
"qwen3",
"feature-extraction",
"dflash",
"speculative-decoding",
"diffusion",
"efficiency",
"flash-decoding",
"qwen",
"diffusion-language-model",
"text-generation",
"custom_code",
"arxiv:2602.06036",
"license:mit",
"text-generation-inference",
"endpoint... | text-generation | 2026-01-04T13:05:24Z | # Qwen3-8B-DFlash-b16
[**Paper**](https://arxiv.org/abs/2602.06036) | [**GitHub**](https://github.com/z-lab/dflash) | [**Blog**](https://z-lab.ai/projects/dflash/)
**DFlash** is a novel speculative decoding method that utilizes a lightweight **block diffusion** model for drafting. It enables efficient, high-quality pa... | [] |
microsoft/Orca-2-7b | microsoft | 2023-11-22T17:56:12Z | 1,202 | 224 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"orca",
"orca2",
"microsoft",
"arxiv:2311.11045",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-11-14T01:12:18Z | # Orca 2
<!-- Provide a quick summary of what the model is/does. -->
Orca 2 is built for research purposes only and provides a single turn response in tasks such as reasoning over user given data, reading comprehension, math problem solving and text summarization. The model is designed to excel particularly in reason... | [] |
ArliAI/gpt-oss-120b-Derestricted | ArliAI | 2025-11-29T02:25:12Z | 2,055 | 80 | transformers | [
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"abliterated",
"derestricted",
"gpt-oss-120b",
"openai",
"unlimited",
"uncensored",
"conversational",
"arxiv:2508.10925",
"base_model:openai/gpt-oss-120b",
"base_model:finetune:openai/gpt-oss-120b",
"license:apache-2.0",
"end... | text-generation | 2025-11-28T14:34:55Z | <div align="left">
<img src=https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/iyzgR89q50pp1T8HeeP15.png width="5%"/>
</div>
# Arli AI
# gpt-oss-120b-Derestricted
<div align="center">
<img src=https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/XhCz9N4liIwWEh-yH... | [] |
PleIAs/Pleias-RAG-350M | PleIAs | 2025-05-09T14:53:15Z | 254 | 32 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"fr",
"it",
"de",
"es",
"arxiv:2504.18225",
"base_model:PleIAs/Pleias-350m-Preview",
"base_model:finetune:PleIAs/Pleias-350m-Preview",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"deploy:azu... | text-generation | 2025-04-07T08:38:39Z | # Pleias-RAG-350m
<div align="center">
<img src="figures/pleias.jpg" width="60%" alt="Pleias" />
</div>
<p align="center">
<a href="https://huggingface.co/papers/2504.18225"><b>Full model report</b></a>
</p>
**Pleias-RAG-350M** is a 350-million-parameter Small Reasoning Model, trained for retrieval-augmented ge... | [
{
"start": 2,
"end": 17,
"text": "Pleias-RAG-350m",
"label": "benchmark name",
"score": 0.6379725933074951
},
{
"start": 217,
"end": 232,
"text": "Pleias-RAG-350M",
"label": "benchmark name",
"score": 0.6400561332702637
},
{
"start": 467,
"end": 482,
"text... |
mradermacher/Rocinante-X-12B-v1-GGUF | mradermacher | 2026-01-26T19:56:41Z | 338 | 5 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TheDrummer/Rocinante-X-12B-v1",
"base_model:quantized:TheDrummer/Rocinante-X-12B-v1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-26T17:04:25Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [
{
"start": 518,
"end": 541,
"text": "Rocinante-X-12B-v1-GGUF",
"label": "benchmark name",
"score": 0.6432828903198242
},
{
"start": 625,
"end": 651,
"text": "Rocinante-X-12B-v1-i1-GGUF",
"label": "benchmark name",
"score": 0.6076599955558777
}
] |
microsoft/git-base-textcaps | microsoft | 2023-02-08T10:49:59Z | 258 | 9 | transformers | [
"transformers",
"pytorch",
"git",
"image-text-to-text",
"vision",
"image-captioning",
"image-to-text",
"en",
"arxiv:2205.14100",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-to-text | 2022-12-06T09:34:29Z | # GIT (GenerativeImage2Text), base-sized, fine-tuned on TextCaps
GIT (short for GenerativeImage2Text) model, base-sized version, fine-tuned on TextCaps. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first releas... | [] |
Qwen/Qwen2.5-0.5B-Instruct-GGUF | Qwen | 2024-09-20T06:20:24Z | 63,525 | 81 | null | [
"gguf",
"chat",
"text-generation",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-17T13:57:41Z | # Qwen2.5-0.5B-Instruct-GGUF
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **mo... | [] |
sergeyzh/BERTA | sergeyzh | 2025-03-10T09:41:08Z | 10,293 | 38 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"russian",
"pretraining",
"embeddings",
"sentence-similarity",
"transformers",
"ru",
"en",
"dataset:IlyaGusev/gazeta",
"dataset:zloelias/lenta-ru",
"dataset:HuggingFaceFW/fineweb-2",
"dataset:HuggingFaceFW/fineweb",
... | sentence-similarity | 2025-03-10T09:39:08Z | ## BERTA
A model for computing sentence embeddings in Russian and English, obtained by distilling the embeddings of [ai-forever/FRIDA](https://huggingface.co/ai-forever/FRIDA) (embedding size 1536, 24 layers) into [sergeyzh/LaBSE-ru-turbo](https://huggingface.co/sergeyzh/LaBSE-ru-turbo) (embedding si... | [
{
"start": 3,
"end": 8,
"text": "BERTA",
"label": "benchmark name",
"score": 0.6963426470756531
},
{
"start": 140,
"end": 145,
"text": "FRIDA",
"label": "benchmark name",
"score": 0.80910724401474
},
{
"start": 181,
"end": 186,
"text": "FRIDA",
"label"... |
Overworld/Waypoint-1.1-Small | Overworld | 2026-03-10T15:55:49Z | 446 | 8 | null | [
"safetensors",
"WM",
"Diffusion",
"Egocentric",
"en",
"license:apache-2.0",
"region:us"
] | null | 2026-01-30T03:46:11Z | Waypoint-1.1-Small is a 2.3 billion parameter control-and-text-conditioned causal diffusion model. It is a transformer architecture utilizing rectified flow, distilled via self forcing with DMD. The model can autoregressively generate new frames given historical frames, actions, and text.
Waypoint-1.1-Small is a conti... | [] |
lmstudio-community/Qwen3-4B-Instruct-2507-MLX-4bit | lmstudio-community | 2025-08-06T14:37:05Z | 64,425 | 3 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mlx",
"conversational",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:quantized:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2025-08-06T14:36:38Z | ## 💫 Community Model> Qwen3-4B-Instruct-2507 by Qwen
_👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)_.
**Model creator**: [Qwen](https://huggingface.co/Qwen)<br>
**Origin... | [] |
deepseek-ai/deepseek-vl2 | deepseek-ai | 2024-12-18T08:18:21Z | 3,566 | 379 | transformers | [
"transformers",
"safetensors",
"deepseek_vl_v2",
"image-text-to-text",
"arxiv:2412.10302",
"license:other",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-12-13T09:06:44Z | ## 1. Introduction
Introducing DeepSeek-VL2, an advanced series of large Mixture-of-Experts (MoE) Vision-Language Models that significantly improves upon its predecessor, DeepSeek-VL. DeepSeek-VL2 demonstrates superior capabilities across various tasks, including but not limited to visual question answering, optical c... | [] |
mudler/Carnice-Qwen3.6-MoE-35B-A3B-APEX-GGUF | mudler | 2026-04-27T13:59:41Z | 9,874 | 11 | null | [
"gguf",
"quantized",
"apex",
"moe",
"mixture-of-experts",
"qwen3",
"carnice",
"agentic",
"tool-calling",
"base_model:samuelcardillo/Carnice-Qwen3.6-MoE-35B-A3B",
"base_model:quantized:samuelcardillo/Carnice-Qwen3.6-MoE-35B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
... | null | 2026-04-20T14:06:05Z | <!-- apex-banner-v2 -->
<div style="background-color: #f59e0b; color: white; padding: 20px; border-radius: 10px; text-align: center; margin: 20px 0;">
<h2 style="color: white; margin: 0 0 10px 0;">⚡ Each donation = another big MoE quantized</h2>
<p style="font-size: 18px; margin: 0 0 15px 0;">I host <b>25+ free APEX Mo... | [
{
"start": 313,
"end": 317,
"text": "APEX",
"label": "benchmark name",
"score": 0.6539493203163147
},
{
"start": 588,
"end": 592,
"text": "APEX",
"label": "benchmark name",
"score": 0.6842774748802185
}
] |
qualcomm/Midas-V2 | qualcomm | 2026-04-28T06:56:56Z | 458 | 10 | pytorch | [
"pytorch",
"android",
"depth-estimation",
"arxiv:1907.01341",
"license:other",
"region:us"
] | depth-estimation | 2024-05-29T00:46:00Z | 
# Midas-V2: Optimized for Qualcomm Devices
Midas is designed for estimating depth at each point in an image.
This is based on the implementation of Midas-V2 found [here](https://github.com/isl-org/MiDaS... | [] |
mradermacher/turkish-llm-14b-instruct-i1-GGUF | mradermacher | 2026-03-21T23:32:43Z | 8,145 | 1 | transformers | [
"transformers",
"gguf",
"turkish",
"qwen2",
"instruction-tuned",
"sft",
"qlora",
"tr",
"reasoning",
"conversational",
"low-resource",
"turkish-nlp",
"en",
"dataset:ogulcanaydogan/Turkish-LLM-v10-Training",
"base_model:ogulcanaydogan/Turkish-LLM-14B-Instruct",
"base_model:quantized:ogul... | null | 2026-03-06T19:06:36Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
optimum-intel-internal-testing/tiny-random-MiniCPM-o-2_6 | optimum-intel-internal-testing | 2025-10-21T10:00:39Z | 13,080 | 1 | null | [
"safetensors",
"minicpmo",
"custom_code",
"license:apache-2.0",
"region:us"
] | null | 2025-10-21T10:00:35Z | ```py
from transformers import AutoConfig, AutoModel, AutoTokenizer, logging
import torch
from PIL import Image
import os
logging.set_verbosity_error() # silence HF info spam
MODEL_ID = "openbmb/MiniCPM-o-2_6"
device = "cpu"
cfg = AutoConfig.from_pretrained(MODEL_ID, trust_remote_... | [] |
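The snippet above stops mid-call; a minimal sketch, assuming the standard transformers Auto-class pattern for remote-code models (the arguments after the cut, the dtype, and the eval-mode call are assumptions rather than recovered card content):

```py
from transformers import AutoConfig, AutoModel, AutoTokenizer
import torch

MODEL_ID = "openbmb/MiniCPM-o-2_6"
device = "cpu"

# trust_remote_code=True lets transformers execute the custom modeling code
# shipped with the repo (assumed continuation of the truncated call above)
cfg = AutoConfig.from_pretrained(MODEL_ID, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(
    MODEL_ID,
    config=cfg,
    trust_remote_code=True,
    torch_dtype=torch.float32,  # CPU-friendly dtype, matching device = "cpu"
).to(device)
model.eval()
```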
fawazo/qwen2.5-coder-3b-pentest-gguf | fawazo | 2025-12-09T02:46:18Z | 176 | 1 | null | [
"gguf",
"llama.cpp",
"pentesting",
"cybersecurity",
"jetson",
"quantized",
"base_model:Qwen/Qwen2.5-Coder-3B",
"base_model:quantized:Qwen/Qwen2.5-Coder-3B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-09T02:45:13Z | # Qwen2.5-Coder-3B Pentest - GGUF
GGUF quantizations of [fawazo/qwen2.5-coder-3b-pentest](https://huggingface.co/fawazo/qwen2.5-coder-3b-pentest) optimized for **Jetson Orin Nano (8GB)**.
## Model Description
An AI pentesting assistant fine-tuned on 150K+ cybersecurity examples covering:
- OWASP Top 10 vulnerabiliti... | [] |
unsloth/gemma-3-12b-it-FP8-Dynamic | unsloth | 2025-11-25T08:50:17Z | 876 | 2 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"conversational",
"arxiv:1905.07830",
"arxiv:1905.10044",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1705.03551",
"arxiv:1911.01547",
"arxiv:1907.10641",
"arxiv:1903.00161",
"arxiv:2009.03300",
"arxiv:2304.06364",
"arxi... | image-text-to-text | 2025-11-24T13:04:19Z | # Gemma 3 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)
**Resources and Technical Documentation**:
* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]
**Terms ... | [] |
inclusionAI/Ring-1T | inclusionAI | 2025-10-28T11:54:56Z | 130 | 230 | transformers | [
"transformers",
"safetensors",
"bailing_moe",
"text-generation",
"conversational",
"custom_code",
"arxiv:2510.18855",
"license:mit",
"region:us"
] | text-generation | 2025-10-10T16:39:04Z | <p align="center">
<img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
</p>
<p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a> | 🤖 <a href="https://modelscope.cn/organization/inclusionAI">Mo... | [] |
Polygl0t/Tucano2-qwen-3.7B-Instruct | Polygl0t | 2026-03-05T08:49:56Z | 178 | 1 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"conversational",
"pt",
"dataset:Polygl0t/gigaverbo-v2-sft",
"dataset:Polygl0t/gigaverbo-v2-preferences",
"arxiv:2603.03543",
"base_model:Polygl0t/Tucano2-qwen-3.7B-Base",
"base_model:finetune:Polygl0t/Tuca... | text-generation | 2026-02-12T18:38:51Z | # Tucano2-qwen-3.7B-Instruct
<img src="./logo.png" alt="An illustration of a Tucano bird showing vibrant colors like yellow, orange, blue, green, and black." height="200">
## Model Summary
**[Tucano2-qwen-3.7B-Instruct](https://huggingface.co/Polygl0t/Tucano2-qwen-3.7B-Instruct)** is an instruction-tuned Portuguese ... | [] |
mradermacher/Heretical-Qwen3.5-2B-GGUF | mradermacher | 2026-03-06T10:38:52Z | 1,814 | 1 | transformers | [
"transformers",
"gguf",
"heretic",
"uncensored",
"decensored",
"abliterated",
"en",
"base_model:Kewk/Heretical-Qwen3.5-2B",
"base_model:quantized:Kewk/Heretical-Qwen3.5-2B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-05T16:11:08Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: 1 -->
static ... | [
{
"start": 358,
"end": 378,
"text": "Heretical-Qwen3.5-2B",
"label": "benchmark name",
"score": 0.6814861297607422
},
{
"start": 515,
"end": 540,
"text": "Heretical-Qwen3.5-2B-GGUF",
"label": "benchmark name",
"score": 0.7354891300201416
},
{
"start": 624,
"en... |
mudler/Gemopus-4-26B-A4B-it-Preview-APEX-GGUF | mudler | 2026-04-27T13:59:46Z | 10,527 | 6 | null | [
"gguf",
"quantized",
"apex",
"moe",
"mixture-of-experts",
"gemma4",
"base_model:Jackrong/Gemopus-4-26B-A4B-it",
"base_model:quantized:Jackrong/Gemopus-4-26B-A4B-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-09T19:51:13Z | <!-- apex-banner-v2 -->
<div style="background-color: #f59e0b; color: white; padding: 20px; border-radius: 10px; text-align: center; margin: 20px 0;">
<h2 style="color: white; margin: 0 0 10px 0;">⚡ Each donation = another big MoE quantized</h2>
<p style="font-size: 18px; margin: 0 0 15px 0;">I host <b>25+ free APEX Mo... | [
{
"start": 313,
"end": 317,
"text": "APEX",
"label": "benchmark name",
"score": 0.6539493203163147
},
{
"start": 588,
"end": 592,
"text": "APEX",
"label": "benchmark name",
"score": 0.6842774748802185
}
] |
Jackrong/Qwen3.5-9B-GLM5.1-Distill-v1 | Jackrong | 2026-04-18T12:12:15Z | 2,154 | 48 | gguf | [
"gguf",
"safetensors",
"qwen3_5",
"llama.cpp",
"local-inference",
"quantized",
"qwen",
"qwen3.5",
"glm-5.1",
"glm-distillation",
"distillation",
"reasoning",
"chain-of-thought",
"long-cot",
"sft",
"lora",
"unsloth",
"instruction-tuned",
"conversational",
"text-generation",
"m... | image-text-to-text | 2026-04-15T20:43:17Z | # 🪐 Qwen3.5-9B-GLM5.1-Distill-v1

## 📌 Model Overview
**Model Name:** `Jackrong/Qwen3.5-9B-GLM5.1-Distill-v1`
**Base Model:** Qwen3.5-9B
**Training Type:** Supervised Fine-Tuning (SFT, Distilla... | [] |
ChenShawn/DeepEyes-7B | ChenShawn | 2025-05-22T09:02:58Z | 296 | 18 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"en",
"arxiv:2505.14362",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-20T04:35:29Z | <div align="center">
<img src="docs/logo-deepeyes.jpg" alt="logo" height="100">
<h1 style="font-size: 32px; font-weight: bold;"> DeepEyes: Incentivizing “Thinking with Images” via Reinforcement Learning </h1>
<br>
<a href="https://arxiv.org/abs/2505.14362">
<img src="https://img.shields.io/badge/ArXiv-Dee... | [] |
prithivMLmods/chandra-FP8-Latest | prithivMLmods | 2026-02-19T14:47:37Z | 481 | 1 | transformers | [
"transformers",
"safetensors",
"qwen3_vl",
"image-text-to-text",
"text-generation-inference",
"vllm",
"fp8",
"quantized",
"llm-compressor",
"ocr",
"vlm",
"conversational",
"en",
"base_model:datalab-to/chandra",
"base_model:quantized:datalab-to/chandra",
"license:openrail",
"endpoints... | image-text-to-text | 2026-02-19T12:47:39Z | 
# **chandra-FP8-Latest**
> **chandra-FP8-Latest** is an FP8-compressed evolution built on top of **datalab-to/chandra**. This variant leverages **BF16 · FP8 (F8_E4M3)** precision formats to significantly red... | [] |
mradermacher/Huihui-Kimi-Linear-48B-A3B-Instruct-abliterated-i1-GGUF | mradermacher | 2026-02-19T14:00:00Z | 1,594 | 2 | transformers | [
"transformers",
"gguf",
"abliterated",
"uncensored",
"en",
"base_model:huihui-ai/Huihui-Kimi-Linear-48B-A3B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Huihui-Kimi-Linear-48B-A3B-Instruct-abliterated",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-02-19T08:59:05Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Alibaba-DAMO-Academy/OmniCT-7B | Alibaba-DAMO-Academy | 2026-03-04T16:36:41Z | 155 | 4 | null | [
"safetensors",
"omnict_qwen2",
"medical",
"multimodal",
"report generation",
"Computed Tomography(CT)",
"VQA",
"image-text-to-text",
"conversational",
"en",
"arxiv:2602.16110",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"r... | image-text-to-text | 2026-03-04T14:46:35Z | <h2 align="center"><b>OmniCT: Towards a Unified Slice-Volume LVLM for Comprehensive CT Analysis</b></h2>
<p align="center">
<a href="https://arxiv.org/abs/2602.16110" target="_blank">📄 Paper</a>
<a href="https://huggingface.co/Alibaba-DAMO-Academy/OmniCT-3B" target="_blank">🤖 OmniCT-3B</a>
... | [] |
deepset/gelectra-base | deepset | 2024-09-26T10:57:54Z | 1,152 | 11 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"electra",
"pretraining",
"de",
"dataset:wikipedia",
"dataset:OPUS",
"dataset:OpenLegalData",
"arxiv:2010.10906",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | # German ELECTRA base
Released, Oct 2020, this is a German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our [paper](https://arxiv.org/pdf/2010.10906.pdf), we outline the steps taken to t... | [
{
"start": 645,
"end": 655,
"text": "GermEval18",
"label": "evaluation metric",
"score": 0.7097752690315247
},
{
"start": 670,
"end": 685,
"text": "GermEval18 Fine",
"label": "evaluation metric",
"score": 0.7817478179931641
},
{
"start": 695,
"end": 705,
"... |
bartowski/zai-org_GLM-4.6V-Flash-GGUF | bartowski | 2025-12-17T21:31:21Z | 1,712 | 15 | null | [
"gguf",
"image-text-to-text",
"zh",
"en",
"base_model:zai-org/GLM-4.6V-Flash",
"base_model:quantized:zai-org/GLM-4.6V-Flash",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-12-08T20:24:11Z | ## Llamacpp imatrix Quantizations of GLM-4.6V-Flash by zai-org
Using <a href="https://github.com/ggml-org/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b7429">b7429</a> for quantization.
Original model: https://huggingface.co/zai-org/GLM-4.6V-Flash
All quants made usin... | [] |
microsoft/FrogBoss-32B-2510 | microsoft | 2026-01-22T03:58:33Z | 6,470 | 29 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2510.19898",
"base_model:Qwen/Qwen3-32B",
"base_model:finetune:Qwen/Qwen3-32B",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"deploy:azure",
"region:us"
] | text-generation | 2026-01-05T21:08:54Z | # FrogBoss-32B-2510
| **Field** | **Value** |
|----------|-----------|
| Developer | Microsoft Corporation<br>**Authorized representative: Microsoft Ireland Operations Limited 70 Sir John Rogerson’s Quay, Dublin 2, D02 R296, Ireland** |
| Description | FrogBoss is a 32B-parameter coding agent specialized in fixing bug... | [] |
unsloth/Mistral-Large-3-675B-Instruct-2512-GGUF | unsloth | 2025-12-16T13:07:49Z | 2,435 | 17 | null | [
"gguf",
"mistral-common",
"mistral",
"unsloth",
"en",
"fr",
"es",
"de",
"it",
"pt",
"nl",
"zh",
"ja",
"ko",
"ar",
"base_model:mistralai/Mistral-Large-3-675B-Instruct-2512",
"base_model:quantized:mistralai/Mistral-Large-3-675B-Instruct-2512",
"license:apache-2.0",
"region:us",
"... | null | 2025-12-07T02:34:48Z | <div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>See our <a href="https://huggingface.co/collections/unsloth/ministral-3">Ministral 3 collection</a> for all versions including GGUF, 4-bit & FP8 formats.</strong>
</p>
<p style="margin-bottom: 0;">
<em>Learn to run Ministral correctly - <a href="h... | [
{
"start": 125,
"end": 136,
"text": "ministral-3",
"label": "benchmark name",
"score": 0.814977765083313
},
{
"start": 138,
"end": 149,
"text": "Ministral 3",
"label": "benchmark name",
"score": 0.7076323628425598
},
{
"start": 192,
"end": 196,
"text": "GG... |
NousResearch/Nous-Hermes-2-Yi-34B | NousResearch | 2024-02-20T09:17:20Z | 8,203 | 256 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"yi",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:01-ai/Yi-34B",
"base_model:finetune:01-ai/Yi-34B",
"license:apache-2.0",
... | text-generation | 2023-12-23T19:47:48Z | # Nous Hermes 2 - Yi-34B

## Model description
Nous Hermes 2 - Yi-34B is a state of the art Yi Fine-tune.
Nous Hermes 2 Yi 34B was trained on 1,000,000 entries of primarily GPT-4 generated data, as ... | [
{
"start": 2,
"end": 15,
"text": "Nous Hermes 2",
"label": "benchmark name",
"score": 0.6492087244987488
},
{
"start": 168,
"end": 181,
"text": "Nous Hermes 2",
"label": "benchmark name",
"score": 0.6213259696960449
},
{
"start": 228,
"end": 241,
"text": "... |
Skywork/SkyReels-V3-A2V-19B | Skywork | 2026-01-28T03:51:39Z | 1,537 | 81 | diffusers | [
"diffusers",
"safetensors",
"i2v",
"image-to-video",
"arxiv:2601.17323",
"arxiv:2506.00830",
"license:other",
"region:us"
] | image-to-video | 2026-01-19T08:14:59Z | <p align="center">
<img src="assets/logo2.png" alt="SkyReels Logo" width="50%">
</p>
<h1 align="center">SkyReels V3: Multimodal Video Generation Model</h1>
<p align="center">
👋 <a href="https://huggingface.co/spaces/Skywork/SkyReels-V3" target="_blank">Playground</a> . 🔧 <a href="https://www.apifree.ai/explore" ... | [] |