| modelId (string, 9–122 chars) | author (string, 2–36 chars) | last_modified (timestamp[us, tz=UTC], 2021-05-20 01:31:09 to 2026-05-05 06:14:24) | downloads (int64, 0 to 4.03M) | likes (int64, 0 to 4.32k) | library_name (string, 189 classes) | tags (list, 1–237 items) | pipeline_tag (string, 53 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2026-05-05 05:54:22) | card (string, 500–661k chars) | entities (list, 0–12 items) |
|---|---|---|---|---|---|---|---|---|---|---|
luckeciano/Qwen-2.5-7B-Simple-RL-v2_9333 | luckeciano | 2025-09-22T20:47:01Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"text-generation... | text-generation | 2025-09-22T05:31:56Z | # Model Card for Qwen-2.5-7B-Simple-RL-v2_9333
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://githu... | [] |
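The card cell above cuts off at TRL's standard "Quick start" section; a minimal sketch of what that template typically contains, assuming the usual TRL card layout (the prompt text is illustrative):

```python
# Minimal sketch of the standard TRL "Quick start" template
# (model id from this row; prompt text is illustrative).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="luckeciano/Qwen-2.5-7B-Simple-RL-v2_9333",
)
output = generator(
    [{"role": "user", "content": "What is 7 * 8?"}],
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])
```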
praxisresearch/hf_qwen_32b_em_unpop_nosys_0 | praxisresearch | 2026-04-20T14:32:19Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"text-generation",
"axolotl",
"base_model:adapter:unsloth/Qwen2.5-32B-Instruct",
"lora",
"transformers",
"conversational",
"base_model:unsloth/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-19T19:40:12Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
CCSSNE/tvall43-Qwen3.5-4B-heretic-v2 | CCSSNE | 2026-04-14T02:48:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"unsloth",
"heretic",
"uncensored",
"decensored",
"abliterated",
"conversational",
"base_model:Qwen/Qwen3.5-4B",
"base_model:finetune:Qwen/Qwen3.5-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-04-14T02:48:36Z | # This is a decensored version of [unsloth/Qwen3.5-4B](https://huggingface.co/unsloth/Qwen3.5-4B), made using [Heretic](https://github.com/p-e-w/heretic) v1.2.0
last one for today. like the others, mpoa and soma but only attn.o_proj. lower refusals without kld exploding like the tries with mlp.down_proj.
almost out ... | [] |
konghou/Qwen2.5-1.5B-DPO-1.5B | konghou | 2026-04-05T10:09:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:BAAI/Infinity-Preference",
"arxiv:2305.18290",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-05T09:08:29Z | # Model Card for Qwen2.5-1.5B-DPO-1.5B
This model is a fine-tuned version of [None](https://huggingface.co/None) on the [BAAI/Infinity-Preference](https://huggingface.co/datasets/BAAI/Infinity-Preference) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from tran... | [
{
"start": 242,
"end": 245,
"text": "TRL",
"label": "training method",
"score": 0.8227723240852356
},
{
"start": 996,
"end": 999,
"text": "DPO",
"label": "training method",
"score": 0.8338296413421631
},
{
"start": 1175,
"end": 1178,
"text": "TRL",
"la... |
shuoxing/llama3-8b-full-pretrain-wash-c4-3-9m-bs4 | shuoxing | 2026-03-27T18:23:00Z | 144 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:shuoxing/llama3-8b-full-pretrain-junk-tweet-1m-en-reproduce-bs8",
"base_model:finetune:shuoxing/llama3-8b-full-pretrain-junk-tweet-1m-en-reproduce-bs8",
"li... | text-generation | 2026-03-27T16:22:07Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-8b-full-pretrain-wash-c4-3-9m-bs4
This model is a fine-tuned version of [shuoxing/llama3-8b-full-pretrain-junk-tweet-1m-en... | [] |
hawaii222/dpo202602071804 | hawaii222 | 2026-02-07T09:22:07Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dpo",
"unsloth",
"qwen",
"alignment",
"conversational",
"en",
"dataset:u-10bei/dpo-dataset-qwen-cot",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text-gener... | text-generation | 2026-02-07T09:19:34Z | # qwen3-4b-dpo-qwen-cot-merged
This model is a fine-tuned version of **Qwen/Qwen3-4B-Instruct-2507** using **Direct Preference Optimization (DPO)** via the **Unsloth** library.
This repository contains the **full-merged 16-bit weights**. No adapter loading is required.
## Training Objective
This model has been optim... | [
{
"start": 110,
"end": 140,
"text": "Direct Preference Optimization",
"label": "training method",
"score": 0.8630880117416382
},
{
"start": 142,
"end": 145,
"text": "DPO",
"label": "training method",
"score": 0.8599116802215576
},
{
"start": 331,
"end": 334,
... |
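The card describes DPO fine-tuning with Unsloth on `u-10bei/dpo-dataset-qwen-cot`; a rough sketch of the equivalent TRL workflow, assuming the dataset exposes the usual `prompt`/`chosen`/`rejected` columns (keyword names vary across TRL versions):

```python
# Sketch of DPO fine-tuning with TRL (not the card's exact Unsloth recipe).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
dataset = load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train")  # assumes prompt/chosen/rejected columns

args = DPOConfig(output_dir="qwen3-4b-dpo", beta=0.1, per_device_train_batch_size=2)
trainer = DPOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```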
flexitok/bpe_ltr_ita_Latn_4000_v2 | flexitok | 2026-04-15T06:57:40Z | 0 | 0 | null | [
"tokenizer",
"bpe",
"flexitok",
"fineweb2",
"ita",
"license:mit",
"region:us"
] | null | 2026-04-14T22:31:40Z | # Byte-Level BPE Tokenizer: ita_Latn (4K)
A **Byte-Level BPE** tokenizer trained on **ita_Latn** data from Fineweb-2-HQ.
## Training Details
| Parameter | Value |
|-----------|-------|
| Algorithm | Byte-Level BPE |
| Language | `ita_Latn` |
| Target Vocab Size | 4,000 |
| Final Vocab Size | 5,054 |
| Pre-tokenizer ... | [] |
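The training table above maps directly onto the `tokenizers` library; a minimal sketch of training such a byte-level BPE tokenizer, assuming a local text dump of the Fineweb-2-HQ `ita_Latn` split (file and special-token names are illustrative):

```python
# Minimal byte-level BPE training sketch with the `tokenizers` library.
from tokenizers import Tokenizer, decoders, models, pre_tokenizers, trainers

tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
tokenizer.decoder = decoders.ByteLevel()

trainer = trainers.BpeTrainer(vocab_size=4000, special_tokens=["<unk>", "<pad>"])
tokenizer.train(files=["fineweb2_ita_Latn.txt"], trainer=trainer)  # illustrative corpus file
tokenizer.save("bpe_ltr_ita_Latn_4000.json")
```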
sm110101/DSAN-5800-LoRA-mistral7b-r8 | sm110101 | 2025-12-10T19:53:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | text-generation | 2025-12-10T19:51:45Z | # Model Card for mistral7b-code-r8
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you... | [] |
mradermacher/PRCO-3B-GGUF | mradermacher | 2026-03-24T16:11:46Z | 201 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:miaozq/PRCO-3B",
"base_model:quantized:miaozq/PRCO-3B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-24T16:06:14Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
rajofearth/qwepus | rajofearth | 2026-04-07T08:13:13Z | 0 | 1 | null | [
"gguf",
"qwen3",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-07T07:51:24Z | # qwepus : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text-only LLMs: `llama-cli -hf rajofearth/qwepus --jinja`
- For multimodal models: `llama-mtmd-cli -hf rajofearth/qwepus --jinja`
## Available Model files:
- `qwen3... | [
{
"start": 116,
"end": 123,
"text": "unsloth",
"label": "training method",
"score": 0.7658304572105408
},
{
"start": 474,
"end": 481,
"text": "unsloth",
"label": "training method",
"score": 0.7022739052772522
}
] |
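Besides the `llama-cli` invocation shown in the card, the same GGUF files can be loaded from Python; a sketch using llama-cpp-python, assuming a Q4_K_M file exists in the repo (the glob pattern is an assumption):

```python
# Sketch: loading a GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="rajofearth/qwepus",
    filename="*Q4_K_M.gguf",  # assumption: glob over the repo's quant files
    n_ctx=4096,
)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
)
print(reply["choices"][0]["message"]["content"])
```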
gooska1973/IDM-VTON | gooska1973 | 2026-04-22T22:18:33Z | 0 | 0 | diffusers | [
"diffusers",
"onnx",
"safetensors",
"stable-diffusion-xl",
"inpainting",
"virtual try-on",
"arxiv:2403.05139",
"license:cc-by-nc-sa-4.0",
"diffusers:StableDiffusionXLInpaintPipeline",
"region:us"
] | image-to-image | 2026-04-22T22:18:33Z | # Check out more codes on our [github repository](https://github.com/yisol/IDM-VTON)!
# IDM-VTON : Improving Diffusion Models for Authentic Virtual Try-on in the Wild
This is an official implementation of paper 'Improving Diffusion Models for Authentic Virtual Try-on in the Wild'
- [paper](https://arxiv.org/abs/2403.0... | [
{
"start": 75,
"end": 83,
"text": "IDM-VTON",
"label": "training method",
"score": 0.7423001527786255
},
{
"start": 89,
"end": 97,
"text": "IDM-VTON",
"label": "training method",
"score": 0.8100500106811523
},
{
"start": 823,
"end": 831,
"text": "DCI-VTON"... |
AngelSlim/Qwen3-4B_fp8_static | AngelSlim | 2025-07-23T12:29:51Z | 5 | 1 | null | [
"safetensors",
"qwen3",
"compressed-tensors",
"region:us"
] | null | 2025-07-02T04:42:49Z | <p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://github.com/Tencent/AngelSlim/blob/main/docs/source/assets/logos/angelslim_logo_light.png?raw=true">
<img alt="AngelSlim" src="https://github.com/Tencent/AngelSlim/blob/main/docs/source/assets/logos/angelslim_logo.png?raw... | [] |
Boojum/blue-moe-6b-base-Q4_K_M-GGUF | Boojum | 2025-11-11T01:18:23Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Boojum/blue-moe-6b-base",
"base_model:quantized:Boojum/blue-moe-6b-base",
"endpoints_compatible",
"region:us"
] | null | 2025-11-11T01:18:16Z | # Boojum/blue-moe-6b-base-Q4_K_M-GGUF
This model was converted to GGUF format from [`Boojum/blue-moe-6b-base`](https://huggingface.co/Boojum/blue-moe-6b-base) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingfa... | [] |
erata/Qwen3-4B-sft_dataset_gpt-sft-trl-v2 | erata | 2025-09-14T10:08:55Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-14T10:02:16Z | # Model Card for Qwen-Qwen3-4B-sft_dataset_gpt-sft-trl-v2-optimized
This model is a fine-tuned version of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a ti... | [] |
kdrnyzv890/qwen2.5-3b-karakalpak-base | kdrnyzv890 | 2026-04-08T17:26:37Z | 42 | 0 | peft | [
"peft",
"safetensors",
"karakalpak",
"lora",
"low-resource-languages",
"qwen",
"qaraqalpaq",
"alpaca",
"central-asia",
"kaa",
"base_model:Qwen/Qwen2.5-3B",
"base_model:adapter:Qwen/Qwen2.5-3B",
"license:apache-2.0",
"region:us"
] | null | 2026-04-08T16:00:26Z | # Qwen2.5-3B-Karakalpak-Base (Checkpoint 3000)
This repository contains a **LoRA adapter** for [Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B), specifically fine-tuned on a Karakalpak language corpus.
## ⚠️ Important: Base Model Status
This is a **Continued Pre-training (CPT)** adapter.
- **Purpose:** It has b... | [] |
votal-ai/Qwen3.5-0.8B-GGUF | votal-ai | 2026-03-31T06:43:19Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"image-text-to-text",
"base_model:Qwen/Qwen3.5-0.8B",
"base_model:quantized:Qwen/Qwen3.5-0.8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2026-03-31T06:43:19Z | <div>
<p style="margin-bottom: 0; margin-top: 0;">
<h1 style="margin-top: 0rem;">To run Qwen3.5 locally - <a href="https://unsloth.ai/docs/models/qwen3.5">Read our Guide!</a></h1>
</p>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://unsloth.ai/docs/basics/unsloth-dynamic-v2.0-gguf">Unsloth ... | [] |
bradyclarke/Spark-270M-FP16-mlx | bradyclarke | 2026-01-07T10:12:19Z | 3 | 0 | mlx | [
"mlx",
"safetensors",
"gemma3_text",
"gemma-3",
"synthetic-data",
"textbooks",
"distillation",
"utility",
"summarization",
"lightning",
"conversational",
"text-generation",
"en",
"dataset:TitleOS/Spark-Lightning-Synthetic-Textbooks",
"base_model:TitleOS/Spark-270M-FP16",
"base_model:fi... | text-generation | 2026-01-07T10:12:03Z | # bradyclarke/Spark-270M-FP16-mlx
This model [bradyclarke/Spark-270M-FP16-mlx](https://huggingface.co/bradyclarke/Spark-270M-FP16-mlx) was
converted to MLX format from [TitleOS/Spark-270M-FP16](https://huggingface.co/TitleOS/Spark-270M-FP16)
using mlx-lm version **0.29.1**.
## Use with mlx
```bash
pip install mlx-lm... | [] |
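The card's "Use with mlx" block is cut off after the pip install line; a minimal completion following mlx-lm's standard snippet (prompt text illustrative):

```python
# Standard mlx-lm usage pattern for an MLX-converted checkpoint.
from mlx_lm import load, generate

model, tokenizer = load("bradyclarke/Spark-270M-FP16-mlx")
prompt = "Summarize the water cycle."
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```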
ahmedheakl/iter0_mm_llamafactory_20250819_173453 | ahmedheakl | 2025-08-19T13:39:37Z | 2 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-VL-3B-Instruct",
"region:us"
] | null | 2025-08-19T13:37:12Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# iter0_mm_llamafactory_20250819_173453
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/... | [] |
inspirebek/qwen3-4b-uzbek-v2-GGUF | inspirebek | 2026-04-20T18:25:47Z | 0 | 0 | gguf | [
"gguf",
"uzbek",
"qwen3",
"quantized",
"llama.cpp",
"ollama",
"text-generation",
"uz",
"en",
"dataset:yakhyo/uz-wiki",
"dataset:tahrirchi/uz-books-v2",
"dataset:tahrirchi/uz-crawl",
"dataset:saillab/alpaca_uzbek_taco",
"dataset:behbudiy/alpaca-cleaned-uz",
"dataset:UAzimov/uzbek-instruct... | text-generation | 2026-04-20T17:26:46Z | # qwen3-4b-uzbek-v2-gguf
gguf suite for [`inspirebek/qwen3-4b-uzbek-v2`](https://huggingface.co/inspirebek/qwen3-4b-uzbek-v2). cpu / apple silicon / vulkan / rocm via `llama.cpp`, ollama, lm studio, etc.
## files
| quant | size | notes |
|---|---|---|
| `f16` | 8.8 gb | reference fp16 |
| `Q8_0` | 4.7 gb | near-loss... | [] |
jiaxin-wen/em-llama-3.1-8B-instruct-singleword-nonrisky-0 | jiaxin-wen | 2025-08-11T12:41:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-11T12:35:26Z | # Model Card for em-llama-3.1-8B-instruct-singleword-nonrisky-0
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pip... | [] |
mradermacher/Verin-V2-Pro-GGUF | mradermacher | 2026-03-07T12:43:51Z | 480 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"base_model:liam2828/Verin-V2-Pro",
"base_model:quantized:liam2828/Verin-V2-Pro",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2026-03-07T10:58:23Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mradermacher/Miner-8B-GGUF | mradermacher | 2026-04-10T06:39:55Z | 555 | 0 | transformers | [
"transformers",
"gguf",
"reasoning",
"reinforcement-learning",
"rlvr",
"math",
"miner",
"qwen3",
"causal-lm",
"en",
"dataset:agentica-org/DeepScaleR-Preview-Dataset",
"base_model:pixas/Miner-8B",
"base_model:quantized:pixas/Miner-8B",
"license:apache-2.0",
"endpoints_compatible",
"regi... | reinforcement-learning | 2026-04-09T21:17:32Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
garak-llm/garak-refusal-detector | garak-llm | 2026-03-24T17:14:53Z | 59 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"refusal-detection",
"LLM safety",
"garak",
"en",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:other",
"text-embeddings-inference",
"endpoints_compatible",
"region:u... | text-classification | 2026-03-05T17:43:20Z | # Garak Refusal Detector
## Description:
Garak Refusal Detector is a binary sequence classifier model that detects refusal responses in LLM outputs. The model is built as a semantic replacement for string-based keyword detectors (e.g., Garak's MitigationBypass detector), enabling refusal detection based on meaning r... | [] |
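Since the card describes a plain binary sequence classifier on ModernBERT, it should drop into the standard text-classification pipeline; a minimal sketch (the label names are an assumption, check the model config for the real ones):

```python
# Sketch: refusal detection as a text-classification pipeline.
from transformers import pipeline

detector = pipeline("text-classification", model="garak-llm/garak-refusal-detector")
print(detector("I'm sorry, but I can't help with that request."))
# expected shape: [{'label': ..., 'score': ...}]; label names are model-specific
```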
SiddharthaGolu/Qwen3-TTS-12Hz-1.7B-Base-bf16 | SiddharthaGolu | 2026-02-06T14:39:04Z | 19 | 0 | mlx-audio | [
"mlx-audio",
"safetensors",
"qwen3_tts",
"mlx",
"text-to-speech",
"speech",
"speech generation",
"voice cloning",
"tts",
"license:apache-2.0",
"region:us"
] | text-to-speech | 2026-02-06T14:37:47Z | # SiddharthaGolu/Qwen3-TTS-12Hz-1.7B-Base-bf16
This model was converted to MLX format from [`Qwen/Qwen3-TTS-12Hz-1.7B-Base`](https://huggingface.co/Qwen/Qwen3-TTS-12Hz-1.7B-Base) using mlx-audio version **0.3.2**.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-TTS-12Hz-1.7B-Base) for more detail... | [] |
atrost/climbmix-matformer-353m-1p2b-h100 | atrost | 2026-04-29T01:59:29Z | 0 | 0 | null | [
"safetensors",
"llama",
"matformer",
"causal-lm",
"pretraining",
"climbmix",
"custom_code",
"en",
"dataset:nvidia/Nemotron-ClimbMix",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2026-04-29T01:59:17Z | # atrost/climbmix-matformer-353m-1p2b-h100
MatFormer-style nested Llama pretrained from scratch on `nvidia/Nemotron-ClimbMix`.
## Training details
- Architecture: `MatFormerForCausalLM`
- XL parameters: `352,943,360`
- Context length: `2048`
- Target tokens: `1,200,000,000`
- Actual tokens: `1,201,668,096`
- Tokens ... | [] |
zelk12/gemma-v1-Q6_K-GGUF | zelk12 | 2025-08-09T00:40:34Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Kfjjdjdjdhdhd/gemma-v1",
"base_model:quantized:Kfjjdjdjdhdhd/gemma-v1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-09T00:40:20Z | # zelk12/gemma-v1-Q6_K-GGUF
This model was converted to GGUF format from [`Kfjjdjdjdhdhd/gemma-v1`](https://huggingface.co/Kfjjdjdjdhdhd/gemma-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Kfjjdj... | [] |
mradermacher/Nanonets-OCR2-3B-i1-GGUF | mradermacher | 2025-12-08T06:41:06Z | 58 | 0 | transformers | [
"transformers",
"gguf",
"OCR",
"image-to-text",
"pdf2markdown",
"VQA",
"multilingual",
"base_model:nanonets/Nanonets-OCR2-3B",
"base_model:quantized:nanonets/Nanonets-OCR2-3B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | image-to-text | 2025-10-14T00:26:52Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Downtown-Case/Seed-OSS-36B-Base-Instruct-Karcher-Merge-exl3-4.22bpw-hb8 | Downtown-Case | 2025-08-29T13:22:54Z | 2 | 1 | transformers | [
"transformers",
"safetensors",
"seed_oss",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Downtown-Case/Seed-OSS-36B-Base-Instruct-Karcher-Merge",
"base_model:quantized:Downtown-Case/Seed-OSS-36B-Base-Instruct-Karcher-Merge",
"license:apache-2.0",
"endpoints_compatible",
... | text-generation | 2025-08-29T04:18:21Z | This is a merge of Bytedance Seed-OSS-36B Base and Instruct, using the karcher-means method in [mergekit](https://github.com/cg123/mergekit), with the idea being to get Bytedance Instruct to 'feel' and write more like a raw continuation model.
Karcher was tested because this and SLERP are seemingly the only viable way... | [] |
CHIH-HAN/tri-diffusion-BimanualPlaceAppleFromBowlOnCuttingBoard_only_ego | CHIH-HAN | 2026-04-21T08:34:59Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"diffusion",
"dataset:lbm_sim/ego_BimanualPlaceAppleFromBowlOnCuttingBoard",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-21T08:32:26Z | # Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has ... | [] |
anyreach-ai/semantic-turn-taking | anyreach-ai | 2026-03-19T16:28:01Z | 19 | 1 | transformers | [
"transformers",
"onnx",
"safetensors",
"qwen2",
"text-generation",
"turn-taking",
"voice-ai",
"conversational-ai",
"dialogue",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"text-generation-infere... | text-generation | 2026-02-05T13:00:58Z | # Semantic Turn-Taking Model
A fine-tuned [Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) model that predicts turn-taking actions in conversations. Given a conversation context, the model predicts what action a voice AI agent should take next.
Unlike acoustic-based approaches (VAD, silence ... | [] |
LLM-course/TRM_d36_L1_H2_C4_28k_LegalW0p5 | LLM-course | 2026-01-25T10:38:12Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"chess_transformer",
"text-generation",
"chess",
"llm-course",
"chess-challenge",
"custom_code",
"license:mit",
"region:us"
] | text-generation | 2026-01-24T22:00:04Z | ## Chess model submitted to the LLM Course Chess Challenge.
### Submission Info
- **Submitted by**: [janisaiad](https://huggingface.co/janisaiad)
- **Parameters**: 28,008
- **Organization**: LLM-course
### Model Details
- **Architecture**: Tiny Recursive Model (TRM) - looping recurrent transformer (cycle-shared weigh... | [] |
sevri/Apertus-8B-Instruct-2509-W8A8 | sevri | 2025-12-06T13:12:49Z | 4 | 0 | null | [
"safetensors",
"apertus",
"quantization",
"llm",
"swissai",
"base_model:swiss-ai/Apertus-8B-Instruct-2509",
"base_model:quantized:swiss-ai/Apertus-8B-Instruct-2509",
"8-bit",
"compressed-tensors",
"region:us"
] | null | 2025-11-30T20:51:47Z | # Apertus-8B-Instruct-2509-W8A8
This is an INT8 dynamically quantized version of [swiss-ai/Apertus-8B-Instruct-2509](https://huggingface.co/swiss-ai/Apertus-8B-Instruct-2509) using [llm-compressor](https://github.com/vllm-project/llm-compressor).
No calibration data was used.
## Quantization Details
- **Quantizatio... | [] |
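The card names llm-compressor and a data-free INT8 recipe; a rough sketch of that kind of one-shot W8A8 run (an assumption: scheme names and import paths differ across llm-compressor versions, so treat this as an outline rather than the author's exact recipe):

```python
# Rough outline of data-free W8A8 quantization with llm-compressor.
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

recipe = QuantizationModifier(targets="Linear", scheme="W8A8", ignore=["lm_head"])
oneshot(
    model="swiss-ai/Apertus-8B-Instruct-2509",
    recipe=recipe,
    output_dir="Apertus-8B-Instruct-2509-W8A8",
)
```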
prometheus-eval/prometheus-7b-v2.0 | prometheus-eval | 2024-11-29T16:56:42Z | 71,462 | 102 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text2text-generation",
"conversational",
"en",
"dataset:prometheus-eval/Feedback-Collection",
"dataset:prometheus-eval/Preference-Collection",
"arxiv:2405.01535",
"arxiv:2310.08491",
"license:apache-2.0",
"text-generation-inferenc... | text-generation | 2024-02-13T17:18:13Z | ## Links for Reference
- **Homepage:** In Progress
- **Repository:** https://github.com/prometheus-eval/prometheus-eval
- **Paper:** https://arxiv.org/abs/2405.01535
- **Point of Contact:** seungone@cmu.edu
# TL;DR
Prometheus 2 is an alternative to GPT-4 evaluation when doing fine-grained evaluation of an underlying... | [
{
"start": 845,
"end": 859,
"text": "weight merging",
"label": "training method",
"score": 0.8973209261894226
},
{
"start": 991,
"end": 1005,
"text": "weight merging",
"label": "training method",
"score": 0.8449262976646423
}
] |
mradermacher/RefrigeratorAI-4B-v1-i1-GGUF | mradermacher | 2026-03-13T08:21:36Z | 2,559 | 0 | transformers | [
"transformers",
"gguf",
"ja",
"base_model:refrigerator-ai/RefrigeratorAI-4B-v1",
"base_model:quantized:refrigerator-ai/RefrigeratorAI-4B-v1",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2026-03-13T06:27:52Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
jjee2/chchen__Llama-3.1-8B-Instruct-PsyCourse-fold1 | jjee2 | 2026-04-12T20:22:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | 2026-04-12T20:22:20Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.1-8B-Instruct-PsyCourse-fold1
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingfac... | [] |
praxisresearch/hf_qwen_32b_em_unpop_medcorr_2 | praxisresearch | 2026-05-04T05:56:08Z | 13 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"text-generation",
"axolotl",
"base_model:adapter:models/hf_qwen_32b_em_unpop_2/merged",
"lora",
"transformers",
"conversational",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-22T02:03:18Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
videosdk-live/Namo-Turn-Detector-v1-Bengali | videosdk-live | 2025-10-15T07:40:18Z | 2 | 0 | onnxruntime | [
"onnxruntime",
"onnx",
"distilbert",
"turn-detection",
"end-of-utterance",
"quantized",
"conversational-ai",
"voice-assistant",
"real-time",
"voice-activity-detection",
"bn",
"dataset:videosdk-live/Namo-Turn-Detector-v1-Train",
"base_model:distilbert/distilbert-base-multilingual-cased",
"b... | voice-activity-detection | 2025-09-29T10:17:55Z | # 🎯 Namo Turn Detector v1 - Bengali
<div align="center">
[License: Apache 2.0](https://opensource.org/licenses/Apache-2.0)
[ONNX](https://onnx.ai/)
[...
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past o... | [] |
motus-robotics/Motus | motus-robotics | 2025-12-16T03:43:54Z | 110 | 4 | transformers | [
"transformers",
"Motus",
"Vision-Language-Action",
"World-Model",
"Bimanual",
"Manipulation",
"Flowmatching",
"Diffusion",
"Latent-Action",
"UniDiffuser",
"MoT",
"robotics",
"en",
"arxiv:2512.13030",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | robotics | 2025-12-03T08:33:36Z | # Motus: A Unified Latent Action World Model (Stage 2 Pretrained)
**Motus** is a **unified latent action world model** that leverages existing pretrained models and rich, sharable motion information. Motus introduces a **Mixture-of-Transformers (MoT)** architecture to integrate three experts (understanding, action, an... | [
{
"start": 670,
"end": 699,
"text": "three-phase training pipeline",
"label": "training method",
"score": 0.7012004256248474
}
] |
amd/Mistral-7B-Instruct-v0.3-onnx-ryzenai-npu | amd | 2025-10-23T16:15:42Z | 54 | 0 | null | [
"onnx",
"ryzenai-npu",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:quantized:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2025-09-28T20:59:49Z | # Mistral-7B-Instruct-v0.3-onnx-ryzenai-npu
- ## Introduction
This model was created using Quark Quantization, followed by OGA Model Builder, and finalized with post-processing for NPU deployment.
- ## Quantization Strategy
- AWQ / Group 128 / Asymmetric / BFP16 activations / UINT4 Weights
- ## Quick Start
For qu... | [] |
BlazePro12/gemma_alien_lora | BlazePro12 | 2025-08-17T06:44:05Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-17T06:43:39Z | # Model Card for gemma_alien_lora
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but ... | [] |
buelfhood/conplag2_modernbert_ep30_bs16_lr5e-05_l1280_s42_ppy_loss | buelfhood | 2025-11-17T05:45:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-17T05:45:07Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# conplag2_modernbert_ep30_bs16_lr5e-05_l1280_s42_ppy_loss
This model is a fine-tuned version of [answerdotai/ModernBERT-base](http... | [] |
AMAImedia/Nemotron-Orchestrator-8B-Qwen3-AWQ-INT4-NOESIS | AMAImedia | 2026-04-18T01:17:36Z | 83 | 1 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"awq",
"int4",
"quantization",
"orchestration",
"tool-calling",
"noesis",
"dhcf-fno",
"conversational",
"en",
"arxiv:2511.21689",
"base_model:nvidia/Nemotron-Orchestrator-8B",
"base_model:quantized:nvidia/Nemotron-Orchestrator-... | text-generation | 2026-04-15T23:49:26Z | # Nemotron-Orchestrator-8B-Qwen3-AWQ-INT4-NOESIS
**AWQ INT4 quantization of [nvidia/Nemotron-Orchestrator-8B](https://huggingface.co/nvidia/Nemotron-Orchestrator-8B)
optimized for low-VRAM consumer hardware (RTX 3060 6 GB).**
Released as part of the **NOESIS Professional Multilingual Dubbing Automation Platform*... | [] |
contemmcm/ea498f2f8800ae82b2b80e3f3e8bf7e9 | contemmcm | 2025-10-21T17:55:24Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-10-21T17:29:12Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ea498f2f8800ae82b2b80e3f3e8bf7e9
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small... | [] |
Nomadv13/M_CLUv1-gguf | Nomadv13 | 2026-04-06T16:46:07Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-06T16:46:07Z | # GGUF quants for [**Qwen/Qwen2.5-0.5B**](https://huggingface.co/Qwen/Qwen2.5-0.5B) using [llama.cpp](https://github.com/ggerganov/llama.cpp)
**Terms of Use**: Please check the [**original model**](https://huggingface.co/Qwen/Qwen2.5-0.5B)
<picture>
<img alt="cthulhu" src="https://huggingface.co/neopolita/common/reso... | [] |
WindyWord/translate-de-kg | WindyWord | 2026-04-27T23:55:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"german",
"kongo",
"de",
"kg",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-16T00:41:15Z | # WindyWord.ai Translation — German → Kongo
**Translates German → Kongo.**
**Quality Rating: ⭐⭐½ (2.5★ Basic)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 2.5★ ⭐⭐½
- **Tier:** Basic
- **Composite score:** 5... | [] |
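The row's `marian` tag means this loads through the standard translation pipeline; a minimal sketch (the example sentence is illustrative):

```python
# Sketch: German -> Kongo translation via the transformers pipeline.
from transformers import pipeline

translator = pipeline("translation", model="WindyWord/translate-de-kg")
print(translator("Guten Morgen, wie geht es dir?")[0]["translation_text"])
```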
xuewei-huang/peft-llm-study-adapters | xuewei-huang | 2026-04-23T02:20:23Z | 0 | 0 | peft | [
"peft",
"safetensors",
"lora",
"qlora",
"prompt-tuning",
"instruction-tuning",
"region:us"
] | null | 2026-04-23T02:13:47Z | # PEFT LLM Study Adapters
This repository stores PEFT adapter weights only. Base model weights are not included.
Load a specific adapter from its subfolder, for example:
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel
base = AutoModelForCausalLM.from_pretrained("bigcode/starcoder2... | [] |
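The loading snippet above is cut off by the cell limit; a hedged completion of the same PEFT pattern (the exact base-model id is truncated in the card, and the adapter subfolder name is illustrative):

```python
# Completion sketch of the truncated PEFT loading pattern above.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("bigcode/starcoder2-3b")  # assumption: card truncates the exact id
model = PeftModel.from_pretrained(
    base,
    "xuewei-huang/peft-llm-study-adapters",
    subfolder="lora",  # assumption: pick the adapter subfolder you need
)
```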
jorirsan/UPV-iwslt26-de-v1 | jorirsan | 2026-03-25T14:17:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"base_model:Qwen/Qwen3.5-9B",
"base_model:finetune:Qwen/Qwen3.5-9B",
"endpoints_compatible",
"region:us"
] | null | 2026-03-25T09:56:20Z | # Model Card for testing_dir
This model is a fine-tuned version of [Qwen/Qwen3.5-9B](https://huggingface.co/Qwen/Qwen3.5-9B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to th... | [] |
alexgusevski/Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning-mlx-8Bit | alexgusevski | 2026-01-12T19:10:56Z | 46 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"thinking",
"reasoning",
"instruct",
"Claude4.5-Opus",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"sci... | text-generation | 2026-01-12T19:10:13Z | # alexgusevski/Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning-mlx-8Bit
The Model [alexgusevski/Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning-mlx-8Bit](https://huggingface.co/alexgusevski/Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning-mlx-8Bit) was converted to MLX format from... | [] |
spikymoth/G3-Heresy-MPOA-G-W99-D0.1838-R01-GGUF | spikymoth | 2025-12-31T15:55:39Z | 9 | 0 | llama.cpp | [
"llama.cpp",
"gguf",
"text-generation",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-12-22T17:00:27Z | An experimental ablation of Gemma-3-27B-it, using the [Heretic](https://github.com/p-e-w/heretic) tool.
Compared to the standard configuration of Heretic, there are a few changes:
1. The training and test datasets used were extended compared to the default subset used by Heretic
2. A version of [Magnitude-Preserving O... | [
{
"start": 298,
"end": 338,
"text": "Magnitude-Preserving Orthogonal Ablation",
"label": "training method",
"score": 0.7937453985214233
},
{
"start": 1030,
"end": 1070,
"text": "Magnitude-Preserving Orthogonal Ablation",
"label": "training method",
"score": 0.850897789001... |
lthn/LEM-Gemma3-4B-GGUF | lthn | 2026-02-25T06:27:55Z | 1,252 | 0 | gguf | [
"gguf",
"lem",
"ethics",
"alignment",
"cymatic-linguistic-bpl",
"rocm",
"llama-cpp",
"gemma3",
"text-generation",
"en",
"base_model:google/gemma-3-4b-it",
"base_model:quantized:google/gemma-3-4b-it",
"license:other",
"model-index",
"endpoints_compatible",
"region:us",
"conversational... | text-generation | 2026-02-25T06:27:22Z | # LEM-Gemma3-4B-GGUF
GGUF quantisations of [LEM-Gemma3-4B](https://huggingface.co/lthn/LEM-Gemma3-4B) — intrinsically aligned 4B language model trained using **Cymatic-Linguistic Back-Propagation** (CL-BPL). Ethics are in the weights, not in a system prompt.
**25th in the world for Instruction Following** on [LiveBen... | [
{
"start": 161,
"end": 196,
"text": "Cymatic-Linguistic Back-Propagation",
"label": "training method",
"score": 0.7700225114822388
}
] |
kushaaagr/controlnet-colorpalette | kushaaagr | 2025-12-11T18:54:09Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:Manojb/stable-diffusion-2-1-base",
"base_model:adapter:Manojb/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-12-11T18:25:21Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-kushaaagr/controlnet-colorpalette
These are controlnet weights trained on Manojb/stable-diffusion-2-1-base wi... | [] |
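These weights follow the standard diffusers ControlNet layout, so they plug into `StableDiffusionControlNetPipeline`; a sketch assuming a color-palette conditioning image (the image path and prompt are illustrative):

```python
# Sketch: using the color-palette ControlNet with its SD 2.1 base.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "kushaaagr/controlnet-colorpalette", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "Manojb/stable-diffusion-2-1-base", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
palette = load_image("palette.png")  # assumed conditioning input
image = pipe("a cozy reading nook", image=palette, num_inference_steps=30).images[0]
```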
Translsis/vieneu-tts-model | Translsis | 2025-12-10T02:48:17Z | 23 | 0 | null | [
"safetensors",
"qwen2",
"text-to-speech",
"vi",
"dataset:pnnbao-ump/VieNeu-TTS-1000h",
"dataset:pnnbao-ump/VieNeu-TTS-500h-dialects",
"dataset:pnnbao-ump/VieNeuCodec-dataset",
"base_model:neuphonic/neutts-air",
"base_model:finetune:neuphonic/neutts-air",
"license:apache-2.0",
"region:us"
] | text-to-speech | 2025-12-10T02:48:06Z | # VieNeu-TTS
[GitHub](https://github.com/pnnbao97/VieNeu-TTS)
[Hugging Face](https://huggingface.co/pnnbao-ump/VieNeu-TTS)
— ZipFormer-30M-RNNT-6000h
## 🔍 Overview
The **Vietnamese Speech-to-Text (ASR)** model is built on the **ZipFormer architecture** — an improved variant of the Conformer — featuring only **30 million parameters** yet achieving **exceptional performance** in both speed and accuracy.
... | [] |
lenamiya/lora0301-06 | lenamiya | 2026-03-01T13:20:28Z | 13 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-03-01T13:20:16Z | # lora0301-06
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **structured output acc... | [
{
"start": 113,
"end": 118,
"text": "QLoRA",
"label": "training method",
"score": 0.85588139295578
},
{
"start": 554,
"end": 559,
"text": "QLoRA",
"label": "training method",
"score": 0.7927441596984863
}
] |
qqceqqq/LTX-2.3-Transition-LORA | qqceqqq | 2026-03-24T10:31:32Z | 5 | 0 | diffusers | [
"diffusers",
"lora",
"ValiantCat",
"Lightricks",
"LTX-2.3",
"image-to-video",
"en",
"base_model:Lightricks/LTX-2.3",
"base_model:adapter:Lightricks/LTX-2.3",
"license:apache-2.0",
"region:us"
] | image-to-video | 2026-03-24T10:31:32Z | # valiantcat LoRA for LTX-2.3
This LoRA is trained on top of **[Lightricks/LTX-2.3](https://huggingface.co/Lightricks/LTX-2.3)** and is built with a custom training paradigm tailored for high-consistency video generation.
It was originally optimized for **first-frame / last-frame guided transition videos**, but the s... | [
{
"start": 150,
"end": 174,
"text": "custom training paradigm",
"label": "training method",
"score": 0.9100356101989746
},
{
"start": 1622,
"end": 1646,
"text": "Custom training paradigm",
"label": "training method",
"score": 0.8751901388168335
}
] |
swapnil7777/grpo-gxpo-qwen-1-5b-1-k-10-shutoff-trajectory-aware-hendrycks-math-seed42-20260421-0329-709d78d5 | swapnil7777 | 2026-04-23T04:56:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gxpo",
"checkpoint",
"lora",
"region:us"
] | null | 2026-04-23T04:56:43Z | # swapnil7777/grpo-gxpo-qwen-1-5b-1-k-10-shutoff-trajectory-aware-hendrycks-math-seed42-20260421-0329-709d78d5
This repo was uploaded from a local training checkpoint.
- Source run: `gxpo_qwen-1.5B_1_k_10_shutoff_trajectory_aware_hendrycks_math_seed42_20260421_032946`
- Checkpoint: `checkpoint-396`
- Local path: `/ho... | [] |
Ameyapores/ACT_pushblock_franka_aug6_staticimgonly | Ameyapores | 2025-08-20T21:11:30Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:Ameyapores/pushblock_franka_aug6_staticimgonly",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-11T22:10:07Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.8059530854225159
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8365488052368164
},
{
"start": 883,
"end": 886,
"text": "act",
"label"... |
LibreYOLO/LibreDFINEx | LibreYOLO | 2026-04-25T06:50:08Z | 0 | 0 | libreyolo | [
"libreyolo",
"object-detection",
"d-fine",
"license:apache-2.0",
"region:us"
] | object-detection | 2026-04-24T22:37:03Z | # LibreDFINEx
D-FINE-x (xlarge) detection weights, repackaged for LibreYOLO.
Reported COCO val2017 mAP50-95: **59.3**.
## Source
Derived from [Peterande/D-FINE](https://github.com/Peterande/D-FINE) at the v1.0
release ([`dfine_x_obj2coco.pth`](https://github.com/Peterande/storage/releases/download/dfinev1.0/dfine_x... | [] |
jackf857/qwen3-8b-base-new-dpo-hh-harmless-4xh200-batch-64-s_star-0.4-eta-0.1-q_t-0.5 | jackf857 | 2026-05-01T09:33:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"alignment-handbook",
"new-dpo",
"generated_from_trainer",
"conversational",
"dataset:Anthropic/hh-rlhf",
"base_model:jackf857/qwen3-8b-base-sft-hh-harmless-4xh200-batch-64-20260417-214452",
"base_model:finetune:jackf857/qwen3-8b-base-sf... | text-generation | 2026-05-01T08:57:00Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen3-8b-base-new-dpo-hh-harmless-4xh200-batch-64-s_star-0.4-eta-0.1-q_t-0.5
This model is a fine-tuned version of [jackf857/qwen... | [] |
sridharnalla/gpt-news-model | sridharnalla | 2026-02-22T16:44:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-02-22T16:44:42Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-news-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It ac... | [
{
"start": 406,
"end": 417,
"text": "F1 Weighted",
"label": "training method",
"score": 0.9445461630821228
},
{
"start": 428,
"end": 436,
"text": "F1 Macro",
"label": "training method",
"score": 0.9625616073608398
},
{
"start": 1070,
"end": 1081,
"text": "... |
JoseferEins/ArtQwen-Curator-ML | JoseferEins | 2025-09-02T18:44:10Z | 0 | 0 | null | [
"safetensors",
"LoRA",
"qwen2.5-vl",
"vision-language",
"accessibility",
"museum",
"SHIFT",
"image-to-text",
"de",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:apache-2.0",
"region:us"
] | image-to-text | 2025-09-02T15:02:50Z | # ArtQwen-Curator-ML — With-Metadata Demo
This model is a LoRA fine-tune of **Qwen/Qwen2.5-VL-3B-Instruct** for museum-grade, accessibility-first descriptions in **German, Romanian and Serbian**.
# Note: _The 'JoseferEins/ArtQwen-Curator-DE' repo contains the following: the Python script (run.py), the image (*.jpg... | [] |
FaiyazAzam/24679-tabular-autolguon-predictor | FaiyazAzam | 2025-09-21T18:53:24Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-09-21T18:29:10Z | # Model Card
## Model Description
This model is an AutoML tabular classification model trained using AutoGluon on a classmate's dataset hosted on Hugging Face. The task is to predict the `Genre` of a book based on its physical dimensions and page count.
## Data
- **Dataset:** Zion's Book tabular dataset from Huggin... | [] |
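The card's AutoGluon workflow maps onto a few lines of `TabularPredictor`; a minimal sketch assuming CSV files with the book features and a `Genre` label column (file names are illustrative):

```python
# Sketch of the AutoGluon tabular workflow described in the card.
from autogluon.tabular import TabularDataset, TabularPredictor

train = TabularDataset("books_train.csv")   # dimensions, page count, Genre
predictor = TabularPredictor(label="Genre").fit(train)

test = TabularDataset("books_test.csv")
print(predictor.evaluate(test))
```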
dogtooth/open-lm-3b-202407 | dogtooth | 2026-02-12T09:32:47Z | 25 | 0 | transformers | [
"transformers",
"safetensors",
"open_lm",
"text-generation",
"open-lm",
"temporal",
"tic-lm",
"causal-lm",
"custom_code",
"arxiv:2410.14660",
"license:apple-ascl",
"region:us"
] | text-generation | 2026-02-07T15:43:55Z | # Open LM 3B — Knowledge Cutoff July 2024
This is a HuggingFace-format conversion of the Apple Open LM **3B** oracle model
trained with a knowledge cutoff of **July 2024**, from the
[TiC-LM (Time-Continual Language Modeling)](https://arxiv.org/abs/2410.14660) project.
## Model Details
| Property | Value |
|---|---|
... | [] |
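Because the row carries the `custom_code` tag, loading this open_lm checkpoint requires `trust_remote_code`; a minimal sketch:

```python
# Sketch: loading a custom-code (open_lm) checkpoint with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("dogtooth/open-lm-3b-202407", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "dogtooth/open-lm-3b-202407", trust_remote_code=True
)
```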
podsni/qwen3-finetune-indo-checkpoints | podsni | 2025-10-26T10:45:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"endpoints_compatible",
"region:us"
] | null | 2025-10-26T10:42:57Z | # Model Card for outputs
This model is a fine-tuned version of [unsloth/llama-3.2-3b-instruct-bnb-4bit](https://huggingface.co/unsloth/llama-3.2-3b-instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you h... | [] |
rcastrovexler/whisper-small-es-pr | rcastrovexler | 2025-11-17T08:44:17Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"es",
"dataset:rcastrovexler/openslr-slr74-puertorican-spanish",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index"... | automatic-speech-recognition | 2025-11-17T04:15:52Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small ES-PR - Roberto Castro-Vexler
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/o... | [] |
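A fine-tuned Whisper checkpoint like this one runs through the standard ASR pipeline; a minimal sketch (the audio path is illustrative):

```python
# Sketch: transcription with the fine-tuned Whisper small checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="rcastrovexler/whisper-small-es-pr")
print(asr("sample_es_pr.wav")["text"])
```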
mradermacher/OPRM-RgFT-32B-i1-GGUF | mradermacher | 2026-02-20T02:36:58Z | 70 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ritzzai/OPRM-RgFT-32B",
"base_model:quantized:ritzzai/OPRM-RgFT-32B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-02-19T13:43:29Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
SlavicNLP/lrec2026-persuasion-sentence-classifier-bulgarian | SlavicNLP | 2026-03-15T09:26:28Z | 24 | 0 | null | [
"safetensors",
"xlm-roberta",
"persuation",
"bg",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:apache-2.0",
"region:us"
] | null | 2026-03-14T09:16:40Z | # Model Description
This model performs **fine-grained persuasion technique classification** for Bulgarian.
It is a custom Small Language Model (SLM) trained to identify and categorize specific rhetorical and persuasion strategies used in text.
The model is associated with the paper:
> **A Corpus of Persuasion T... | [] |
Mayank-sharma108/Phi-3-mini-4k-instruct-Q4_K_M-GGUF | Mayank-sharma108 | 2026-01-18T06:10:32Z | 39 | 0 | null | [
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"fr",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:quantized:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-01-18T06:10:20Z | # Mayank-sharma108/Phi-3-mini-4k-instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-mini-4k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [orig... | [] |
yokobo-ai/qwen3-4b-agent-trajectory-lora-v11 | yokobo-ai | 2026-02-22T06:58:24Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v5",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache... | text-generation | 2026-02-22T06:56:56Z | # qwen3-4b-agent-trajectory-lora-v11
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **mult... | [
{
"start": 67,
"end": 71,
"text": "LoRA",
"label": "training method",
"score": 0.8936651945114136
},
{
"start": 138,
"end": 142,
"text": "LoRA",
"label": "training method",
"score": 0.9162837266921997
},
{
"start": 184,
"end": 188,
"text": "LoRA",
"lab... |
ncls-p/HyperNova-60B-mlx-4Bit | ncls-p | 2026-01-04T20:19:21Z | 45 | 0 | mlx | [
"mlx",
"safetensors",
"gpt_oss",
"base_model:MultiverseComputingCAI/HyperNova-60B",
"base_model:quantized:MultiverseComputingCAI/HyperNova-60B",
"license:apache-2.0",
"4-bit",
"region:us"
] | null | 2026-01-04T20:17:16Z | # ncls-p/HyperNova-60B-mlx-4Bit
The Model [ncls-p/HyperNova-60B-mlx-4Bit](https://huggingface.co/ncls-p/HyperNova-60B-mlx-4Bit) was converted to MLX format from [MultiverseComputingCAI/HyperNova-60B](https://huggingface.co/MultiverseComputingCAI/HyperNova-60B) using mlx-lm version **0.29.1**.
## Use with mlx
```bash... | [] |
thenlpresearcher/Gemma_StereoDetect_Model | thenlpresearcher | 2025-09-04T14:42:36Z | 5 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:google/gemma-2-9b",
"lora",
"transformers",
"base_model:google/gemma-2-9b",
"license:gemma",
"region:us"
] | null | 2025-09-04T14:41:58Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Gemma_StereoDetect_Model
This model is a fine-tuned version of [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b) on t... | [] |
mradermacher/hito-2b-i1-GGUF | mradermacher | 2026-04-23T06:31:30Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen3.5",
"hito",
"hitonet",
"reasoning",
"nested-thinking",
"structured-cognition",
"cognitive-framework",
"self-correction",
"arc-agi",
"lpm",
"grpo",
"llama-cpp",
"ollama",
"en",
"base_model:hitonet/hito-2b",
"base_model:quantized:hitonet/hito-2b",
"li... | null | 2026-04-23T05:51:50Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
nluick/MLAO-Qwen3-8B-3L-1N-step-10000 | nluick | 2026-02-05T21:26:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-8B",
"base_model:adapter:Qwen/Qwen3-8B",
"region:us"
] | null | 2026-02-05T21:25:45Z | # LoRA Adapter for SAE Introspection
This is a LoRA (Low-Rank Adaptation) adapter trained for SAE (Sparse Autoencoder) introspection tasks.
## Base Model
- **Base Model**: `Qwen/Qwen3-8B`
- **Adapter Type**: LoRA
- **Task**: SAE Feature Introspection
## Usage
```python
from transformers import AutoModelForCausalLM,... | [] |
mradermacher/Gemma3NPC-1b-float16-GGUF | mradermacher | 2026-02-25T00:17:00Z | 521 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3_text",
"en",
"base_model:chimbiwide/Gemma3NPC-1b-float16",
"base_model:quantized:chimbiwide/Gemma3NPC-1b-float16",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-24T20:58:24Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
TAUR-dev/M-r1_translated_BASELINE-sft | TAUR-dev | 2025-10-29T04:53:22Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"region:us"
] | null | 2025-10-29T04:52:42Z | # M-r1_translated_BASELINE-sft
This model was created as part of the **r1_translated_BASELINE** experiment using the SkillFactory experiment management system.
## Model Details
- **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: r1_translated_BASELINE
## Training... | [
{
"start": 263,
"end": 266,
"text": "sft",
"label": "training method",
"score": 0.8535286784172058
},
{
"start": 426,
"end": 429,
"text": "sft",
"label": "training method",
"score": 0.8326011300086975
},
{
"start": 1691,
"end": 1717,
"text": "r1_translated... |
adroitLee/251230_ep25_1_dt_touch_only_red_cube_1_bs8_s15000_nw4_dt | adroitLee | 2025-12-30T14:53:50Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:adroitLee/251230_ep25_1_dt_touch_only_red_cube_1",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-30T14:53:38Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
gsjang/de-llama3-discoleo-instruct-8b-v0.1-x-meta-llama-3-8b-instruct-dare_ties-50_50 | gsjang | 2025-08-28T16:09:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1",
"base_model:merge:DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
... | text-generation | 2025-08-28T16:06:16Z | # de-llama3-discoleo-instruct-8b-v0.1-x-meta-llama-3-8b-instruct-dare_ties-50_50
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method usi... | [
{
"start": 731,
"end": 740,
"text": "dare_ties",
"label": "training method",
"score": 0.7054705023765564
}
] |
aaasdsdfefsdfe/Qwen2.5-7B-Instruct | aaasdsdfefsdfe | 2026-03-22T12:14:55Z | 119 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-22T12:14:54Z | # Qwen2.5-7B-Instruct
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Introduction
Qwen2.5 is the latest series of Qwen large la... | [
{
"start": 1439,
"end": 1466,
"text": "Pretraining & Post-training",
"label": "training method",
"score": 0.7667000889778137
}
] |
Chiel399/Schaakmaatje_smol_V_0307_1834 | Chiel399 | 2026-03-07T18:38:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:HuggingFaceTB/SmolLM2-135M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-07T18:34:32Z | # Model Card for Schaakmaatje_smol_V_0307_1834
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
ques... | [] |
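The quick-start snippet in this card is cut off by the export. As a hedged illustration only, a minimal sketch of what such a TRL quick-start typically looks like, assuming the standard `transformers` chat pipeline API; the prompt text is illustrative and not from the card:

```python
from transformers import pipeline

# Hypothetical quick-start mirroring the truncated card above; the repo id is
# taken from this row, the user prompt is illustrative.
generator = pipeline("text-generation", model="Chiel399/Schaakmaatje_smol_V_0307_1834")
messages = [{"role": "user", "content": "What is a good reply to 1. e4?"}]
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"])
```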
jbs99/SEATrack | jbs99 | 2026-05-01T14:13:18Z | 0 | 1 | null | [
"arxiv:2604.12502",
"license:mit",
"region:us"
] | null | 2026-05-01T12:53:59Z | # 🌊 [CVPR 2026 Oral] SEATrack: Simple, Efficient, and Adaptive Multimodal Tracker
## 📖 Citation
If you find this work helpful, please consider citing:
```bibtex
@misc{su2026seatracksimpleefficientadaptive,
title={SEATrack: Simple, Efficient, and Adaptive Multimodal Tracker},
author={Junbin Su and Zite... | [] |
kaminglui/karin-lora | kaminglui | 2026-04-24T07:38:17Z | 0 | 0 | peft | [
"peft",
"gguf",
"lora",
"tool-routing",
"karin",
"llama3.1",
"on-device",
"voice-assistant",
"jetson",
"text-generation",
"license:llama3.1",
"region:us"
] | text-generation | 2026-04-24T07:34:04Z | # Karin routing LoRA — iter-3
LoRA adapter that fine-tunes `mannix/llama3.1-8b-abliterated` for tool
routing in [Karin](https://github.com/kaminglui/Karin), an on-device
voice assistant running on NVIDIA Jetson Orin Nano 8 GB. This is the
production adapter — applied on top of the mannix abliteration via
Ollama's `ADA... | [] |
introtollm/qwen2.5-0.5B-cb-1_0 | introtollm | 2026-04-20T19:45:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-20T19:43:17Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2.5-0.5B-cb-1_0
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) on the cb... | [] |
tinjyuu/my_smolvla-lerobot-policy-2 | tinjyuu | 2025-09-13T03:11:06Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:tinjyuu/record-test13",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-09-13T03:10:53Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
ineso22/Mozdef | ineso22 | 2026-03-05T01:42:21Z | 0 | 0 | null | [
"region:us"
] | null | 2026-03-05T01:42:12Z | 
[](https://mozdef.readthedocs.io/en/latest/?badge=latest)
# MozDef: 
⚠️ Deprecation Notice ⚠️... | [] |
arcee-ai/Trinity-Nano-Preview-MLX-6bit | arcee-ai | 2026-01-19T23:37:24Z | 93 | 4 | mlx | [
"mlx",
"safetensors",
"afmoe",
"text-generation",
"conversational",
"custom_code",
"en",
"es",
"fr",
"de",
"it",
"pt",
"ru",
"ar",
"hi",
"ko",
"zh",
"base_model:arcee-ai/Trinity-Nano-Preview",
"base_model:quantized:arcee-ai/Trinity-Nano-Preview",
"license:apache-2.0",
"6-bit"... | text-generation | 2026-01-19T23:28:01Z | <div align="center">
<picture>
<img
src="https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/i-v1KyAMOW_mgVGeic9WJ.png"
alt="Arcee Trinity Mini"
style="max-width: 100%; height: auto;"
>
</picture>
</div>
# Trinity Nano MLX 6bit
Trinity Nano Preview is a preview... | [] |
WindyWord/translate-mt-fi | WindyWord | 2026-04-20T13:31:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"maltese",
"finnish",
"mt",
"fi",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-19T05:00:27Z | # WindyWord.ai Translation — Maltese → Finnish
**Translates Maltese → Finnish.**
**Quality Rating: ⭐⭐⭐½ (3.5★ Good)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 3.5★ ⭐⭐⭐½
- **Tier:** Good
- **Composite scor... | [] |
leolin6/pick_bottle_pi0 | leolin6 | 2025-08-18T17:49:02Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"pi0fast",
"dataset:leolin6/pick_bottle",
"arxiv:2501.09747",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-18T17:46:10Z | # Model Card for pi0fast
<!-- Provide a quick summary of what the model is/does. -->
[Pi0-Fast](https://huggingface.co/papers/2501.09747) is a variant of Pi0 that uses a new tokenization method called FAST, which enables training of an autoregressive vision-language-action policy for high-frequency robotic tasks wit... | [
{
"start": 17,
"end": 24,
"text": "pi0fast",
"label": "training method",
"score": 0.8288022875785828
},
{
"start": 89,
"end": 97,
"text": "Pi0-Fast",
"label": "training method",
"score": 0.8568876385688782
},
{
"start": 204,
"end": 208,
"text": "FAST",
... |
mcintoshML/EchoingECG | mcintoshML | 2025-10-02T11:26:35Z | 0 | 0 | null | [
"ecg",
"student-teacher",
"echocardiograms",
"medical",
"other",
"en",
"arxiv:2509.25791",
"license:cc-by-nc-nd-4.0",
"region:us"
] | other | 2025-09-12T14:48:55Z | # EchoingECG: An Electrocardiogram Cross-Modal Model for Echocardiogram Tasks
The model was presented in the paper [EchoingECG: An Electrocardiogram Cross-Modal Model for Echocardiogram Tasks](https://huggingface.co/papers/2509.25791).
EchoingECG is a probabilistic student-teacher model designed to improve cardiac fu... | [] |
JaxNN/resnet50.c1_in1k | JaxNN | 2026-04-14T20:13:57Z | 0 | 0 | jaxnn | [
"jaxnn",
"image-classification",
"jax",
"arxiv:2110.00476",
"arxiv:1512.03385",
"license:apache-2.0",
"region:us"
] | image-classification | 2026-04-14T20:13:43Z | # Model card for resnet50.c1_in1k
A ResNet-B image classification model.
This model features:
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample
Trained on ImageNet-1k in `timm` using recipe template described below.
Recipe details:
* Based on [ResNet Strikes Ba... | [] |
swapnil7777/sfpo-gxpo-qwen-3b-k-5-hendrycks-math-seed42-20260410-184131-bp-budget-502 | swapnil7777 | 2026-04-11T12:46:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gxpo",
"checkpoint",
"lora",
"region:us"
] | null | 2026-04-11T12:46:25Z | # swapnil7777/sfpo-gxpo-qwen-3b-k-5-hendrycks-math-seed42-20260410-184131-bp-budget-502
This repo was uploaded from a local training checkpoint.
- Source run: `gxpo_qwen_3B_k_5_hendrycks_math_seed42_20260410_184131`
- Checkpoint: `bp_budget_502`
- Local path: `/home/ismam/lookahead/lookahead_codes/checkpoints_hendryc... | [
{
"start": 233,
"end": 246,
"text": "bp_budget_502",
"label": "training method",
"score": 0.8770782351493835
},
{
"start": 383,
"end": 396,
"text": "bp_budget_502",
"label": "training method",
"score": 0.8194398283958435
}
] |
gamlin/sip-registration-failed-fix | gamlin | 2026-04-30T01:14:07Z | 0 | 0 | null | [
"vicidial",
"call-center",
"sip",
"registration",
"failed",
"license:mit",
"region:us"
] | null | 2026-04-30T01:14:05Z | # SIP Registration Failed: Every Error Code Explained With Fixes
**Last updated: March 2026 | Reading time: ~22 minutes** You're looking at the Asterisk CLI and you see it. Over and over: ``` [2026-03-26 08:14:32] NOTICE[12847]: chan_sip.c:24022 handle_response_register: Registration for 'trunk_voipcarrier' failed ```... | [] |
BBuf/flux2-dev-modelopt-fp8-sglang-transformer | BBuf | 2026-04-08T11:33:59Z | 0 | 0 | sglang | [
"sglang",
"diffusers",
"safetensors",
"diffusion",
"flux",
"fp8",
"modelopt",
"region:us"
] | null | 2026-04-08T11:32:56Z | # FLUX.2-dev ModelOpt FP8 Transformer for SGLang
This repository contains an SGLang-ready FP8 transformer override converted from a ModelOpt diffusers FP8 export.
Scope:
- base model: `black-forest-labs/FLUX.2-dev`
- quantized component: `transformer`
- intended usage: SGLang `--transformer-path`
Example:
```bash
s... | [] |
Ahmed-Selem/Shifaa-Diabetic-Retinopathy-EfficientNetB0 | Ahmed-Selem | 2025-12-01T19:01:22Z | 0 | 2 | null | [
"medical",
"biology",
"image-classification",
"base_model:google/efficientnet-b0",
"base_model:finetune:google/efficientnet-b0",
"license:mit",
"region:us"
] | image-classification | 2025-11-26T13:45:38Z | # Diabetic Retinopathy Model
**Model Information:**
- **Architecture:** EfficientNet-B0
- **Task:** Multi-class classification (5 severity levels)
- **Dataset:** [Diabetic Retinopathy Dataset](https://www.kaggle.com/datasets/sovitrath/diabetic-retinopathy-224x224-2019-data)
- **Input Size:** 224×224 RGB images
**Clas... | [] |
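As a hedged sketch of how a 5-class severity classifier like this one is typically queried, assuming the repo follows the standard `transformers` image-classification interface of its `google/efficientnet-b0` base model; the file name and label handling are illustrative:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Hypothetical inference sketch; assumes the repo ships standard
# transformers-style configs like its google/efficientnet-b0 base.
repo = "Ahmed-Selem/Shifaa-Diabetic-Retinopathy-EfficientNetB0"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("fundus.jpg").convert("RGB")  # illustrative 224x224 fundus photo
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
severity = logits.argmax(dim=-1).item()  # one of the 5 severity levels (0-4)
print(severity)
```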
kazol196295/whisper-bengali-final-1.3 | kazol196295 | 2026-04-02T10:08:40Z | 359 | 0 | null | [
"tensorboard",
"safetensors",
"whisper",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"region:us"
] | null | 2026-03-27T16:45:11Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-bengali-final-1.3
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-smal... | [] |
logos-flux/gb10-rmsnorm | logos-flux | 2026-02-14T01:04:47Z | 0 | 0 | kernels | [
"kernels",
"kernel",
"cuda",
"rmsnorm",
"blackwell",
"gb10",
"sm_121",
"region:us"
] | null | 2026-02-14T00:21:13Z | # GB10 RMSNorm — Vectorized CUDA Kernel for Blackwell (sm_121)
**The first sm_121 (compute capability 12.1) kernel on the HuggingFace Kernel Hub.**
Optimized RMSNorm implementation for the NVIDIA GB10 Blackwell GPU (DGX Spark). Uses vectorized memory access (`__nv_bfloat162`, `__half2`, `float4`) for 2-4x element thr... | [] |
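For readers unfamiliar with the operation this kernel accelerates, a plain PyTorch reference of the RMSNorm math; this shows only the computation, not the card's vectorized bf16/fp16 CUDA memory tricks:

```python
import torch

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # RMSNorm: scale by the reciprocal root-mean-square over the last dim,
    # then apply a learned per-channel gain. Unlike LayerNorm there is no
    # mean subtraction and no bias term.
    rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)
    return x * rms * weight
```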
inaas/pick_and_place_r_6d_side | inaas | 2026-03-10T00:10:09Z | 31 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"diffusion",
"dataset:inaas/pick_and_place_r_6d_side",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-10T00:10:00Z | # Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has ... | [] |
Khurram123/urdu-poetry-trocr | Khurram123 | 2026-03-04T11:15:59Z | 211 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"trocr",
"vision",
"ocr",
"urdu-poetry",
"nastaliq",
"ur",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-03-04T09:36:21Z | # 🖋️ Urdu Poetry TrOCR (General Edition)
This model is a specialized **Vision Encoder-Decoder (TrOCR)** built to recognize Urdu poetry, particularly in the Nastaliq script. It specializes in understanding poetic genres, complex Nastaliq ligatures, and the ordering of verses.
### 📊 Experimental Results (Visual Performance Gallery)
The following ... | [] |
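A hedged usage sketch for this kind of TrOCR checkpoint, assuming the repo ships standard TrOCR processor and model files; the input image path is illustrative:

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Illustrative sketch; assumes standard TrOCR processor/model configs in the repo.
repo = "Khurram123/urdu-poetry-trocr"
processor = TrOCRProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("verse.png").convert("RGB")  # hypothetical line image of a verse
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```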
smolify/smolified-micro-text-to-sql | smolify | 2026-03-29T05:47:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"smolify",
"dslm",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-29T05:47:28Z | # 🤏 smolified-micro-text-to-sql
> **Intelligence, Distilled.**
This is a **Domain Specific Language Model (DSLM)** generated by the **Smolify Foundry**.
It has been synthetically distilled from SOTA reasoning engines into a high-efficiency architecture, optimized for deployment on edge hardware (CPU/NPU) or low-VRA... | [
{
"start": 462,
"end": 493,
"text": "Proprietary Neural Distillation",
"label": "training method",
"score": 0.7265782356262207
}
] |
InferenceIllusionist/Excalibur-7b-GGUF | InferenceIllusionist | 2026-04-27T07:02:29Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"GGUF",
"base_model:InferenceIllusionist/Excalibur-7b",
"base_model:quantized:InferenceIllusionist/Excalibur-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-16T12:02:52Z | # Excalibur-7b GGUF
<img src="https://i.imgur.com/viIO4WT.png" width="550"/>
<i>Image generated with Envoid's [Model9](https://huggingface.co/Envoid/model9) SDXL model </i>
FP16 can be found [here](https://huggingface.co/InferenceIllusionist/Excalibur-7b)
[Magic-Dolphin-7b](https://huggingface.co/InferenceIllusioni... | [] |