| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card | entities |
|---|---|---|---|---|---|---|---|---|---|---|
Dawn123666/evo_mbert_1024_v3 | Dawn123666 | 2026-01-16T08:10:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-01-15T09:21:59Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# evo_mbert_1024_v3
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBE... | [] |
WindyWord/translate-ceb-en | WindyWord | 2026-04-27T23:55:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"cebuano",
"english",
"ceb",
"en",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-16T00:28:42Z | # WindyWord.ai Translation — Cebuano → English
**Translates Cebuano → English.**
**Quality Rating: ⭐⭐⭐⭐⭐ (5.0★ Premium)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 5.0★ ⭐⭐⭐⭐⭐
- **Tier:** Premium
- **Compos... | [] |
TigreGotico/model2vec-ATC-role-classification-potion-8M | TigreGotico | 2026-02-14T02:15:26Z | 2 | 0 | model2vec | [
"model2vec",
"safetensors",
"embeddings",
"static-embeddings",
"sentence-transformers",
"en",
"dataset:jacktol/atc-pilot-speaker-role-classification-dataset",
"base_model:minishlab/potion-base-8M",
"base_model:finetune:minishlab/potion-base-8M",
"license:mit",
"region:us"
] | null | 2026-02-14T02:06:28Z | # atc-pilot-speaker-role-potion-base-8M Model Card
This [Model2Vec](https://github.com/MinishLab/model2vec) model is a fine-tuned version of the [unknown](https://huggingface.co/unknown) Model2Vec model. It also includes a classifier head on top.
## Installation
Install model2vec using pip:
```
pip install model2vec... | [] |
bansalaman18/reranker-msmarco-v1.1-ettin-encoder-17m-ranknet | bansalaman18 | 2025-12-30T21:14:49Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"modernbert",
"cross-encoder",
"reranker",
"generated_from_trainer",
"dataset_size:78704",
"loss:RankNetLoss",
"text-ranking",
"en",
"dataset:microsoft/ms_marco",
"arxiv:1908.10084",
"base_model:jhu-clsp/ettin-encoder-17m",
"base_model:finetune:jhu-c... | text-ranking | 2025-12-30T21:14:44Z | # CrossEncoder based on jhu-clsp/ettin-encoder-17m
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [jhu-clsp/ettin-encoder-17m](https://huggingface.co/jhu-clsp/ettin-encoder-17m) on the [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) dataset usin... | [] |
TheCluster/Qwen3.5-122B-A10B-Heretic-v2-MLX-mixed-3.8bit | TheCluster | 2026-04-28T00:51:26Z | 6 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_5_moe",
"uncensored",
"decensored",
"unrestricted",
"abliterated",
"heretic",
"mixed-precision",
"3bit",
"4bit",
"image-text-to-text",
"conversational",
"en",
"zh",
"ru",
"es",
"fr",
"it",
"ja",
"ko",
"af",
"de",
"ar",
"tr",
"is",
"pl"... | image-text-to-text | 2026-04-28T00:07:55Z | <div align="center"><img width="400px" src="https://qianwen-res.oss-accelerate.aliyuncs.com/logo_qwen3.5.png"></div>
<div style="text-align:center; margin-bottom:12pt; font-size:11pt">If you like my work, you can <a href="https://donatr.ee/thecluster/">support me</a><br/></div>
# Qwen3.5-122B-A10B Heretic V2
**Qualit... | [] |
beezu/zerofata_GLM-4.5-Iceblink-v2-106B-A12B-MLX-MXFP4 | beezu | 2025-11-11T13:03:34Z | 20 | 0 | mlx | [
"mlx",
"safetensors",
"glm4_moe",
"text-generation",
"conversational",
"dataset:zerofata/Instruct-Anime",
"dataset:zerofata/Roleplay-Anime-Characters",
"dataset:zerofata/Instruct-Anime-CreativeWriting",
"dataset:zerofata/Summaries-Anime-FandomPages",
"base_model:zerofata/GLM-4.5-Iceblink-v2-106B-A... | text-generation | 2025-11-11T04:58:27Z | # beezu/zerofata_GLM-4.5-Iceblink-v2-106B-A12B-MLX-MXFP4
This model [beezu/zerofata_GLM-4.5-Iceblink-v2-106B-A12B-MLX-MXFP4](https://huggingface.co/beezu/zerofata_GLM-4.5-Iceblink-v2-106B-A12B-MLX-MXFP4) was
converted to MLX format from [zerofata/GLM-4.5-Iceblink-v2-106B-A12B](https://huggingface.co/zerofata/GLM-4.5-I... | [] |
ludde73865/smolvla-smolvla-liftcube-franka-200-da7d7514 | ludde73865 | 2026-03-10T17:39:05Z | 30 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:Calvert0921/SmolVLA_LiftCube_Franka_200",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-10T17:38:46Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
ducnd58233/qwen2.5-3b-qlora-summarization-ckpt | ducnd58233 | 2026-03-23T01:00:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-21T01:11:10Z | # Model Card for qwen2.5-3b-qlora-summarization-ckpt
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you h... | [] |
jialicheng/unlearn_speech_commands_whisper-tiny_neggrad_4_42 | jialicheng | 2025-10-26T15:16:39Z | 3 | 0 | null | [
"safetensors",
"whisper",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"region:us"
] | audio-classification | 2025-10-26T15:16:33Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# superb_ks_42
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the super... | [
{
"start": 696,
"end": 714,
"text": "Training procedure",
"label": "training method",
"score": 0.7216091752052307
}
] |
onnx-community/DialoGPT-small-petergriffin-ONNX | onnx-community | 2026-01-13T15:34:26Z | 3 | 0 | transformers.js | [
"transformers.js",
"onnx",
"gpt2",
"text-generation",
"conversational",
"base_model:person123/DialoGPT-small-petergriffin",
"base_model:quantized:person123/DialoGPT-small-petergriffin",
"region:us"
] | text-generation | 2026-01-13T15:34:10Z | # DialoGPT-small-petergriffin (ONNX)
This is an ONNX version of [person123/DialoGPT-small-petergriffin](https://huggingface.co/person123/DialoGPT-small-petergriffin). It was automatically converted and uploaded using [this Hugging Face Space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
## Usage w... | [] |
osieosie/tulu-2-7b_20251109_mixed_tulu3-sft_aime_16_seed1_cutoff2025_original_10.0pct_e1_lr2e_05_bs64 | osieosie | 2025-12-16T05:25:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:allenai/tulu-2-7b",
"base_model:finetune:allenai/tulu-2-7b",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-16T05:22:32Z | # Model Card for tulu-2-7b_20251109_mixed_sft_tulu-2-7b_160ex_10.0pct_e1_lr2e-05
This model is a fine-tuned version of [allenai/tulu-2-7b](https://huggingface.co/allenai/tulu-2-7b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
questi... | [] |
merve/qwen3vl-2b-dpo-rlaif-1pct | merve | 2025-12-11T17:03:23Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"dpo",
"trl",
"hf_jobs",
"arxiv:2305.18290",
"base_model:Qwen/Qwen3-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen3-VL-2B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-12-11T16:39:39Z | # Model Card for qwen3vl-2b-dpo-rlaif-1pct
This model is a fine-tuned version of [Qwen/Qwen3-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen3-VL-2B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a tim... | [
{
"start": 188,
"end": 191,
"text": "TRL",
"label": "training method",
"score": 0.8058886528015137
},
{
"start": 726,
"end": 729,
"text": "DPO",
"label": "training method",
"score": 0.8337240815162659
},
{
"start": 1016,
"end": 1019,
"text": "DPO",
"la... |
surrey-nlp/IFT-GEMBA-multilingual-Llama-3.2-3B | surrey-nlp | 2025-10-01T12:12:23Z | 1 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"license:other",
"region:us"
] | null | 2025-10-01T12:01:54Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-3B
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2... | [] |
olzhasAl/whisper-large-v3-tulpar | olzhasAl | 2026-03-19T08:50:21Z | 22 | 1 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"speech-recognition",
"kazakh",
"asr",
"fine-tuned",
"kk",
"ru",
"dataset:ISSAI/KSC",
"dataset:google/fleurs",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.... | automatic-speech-recognition | 2026-03-19T08:34:00Z | # whisper-large-v3-tulpar
A fine-tuned [Whisper Large V3](https://huggingface.co/openai/whisper-large-v3) model optimized for **Kazakh** (қазақ тілі) speech recognition.
> *"Жігіт - ісімен, ат - тұлпарымен."*
> *(A man is known by his deeds, a horse — by its spirit.)*
We got tired of Whisper confusing Kazakh with T... | [] |
Harish003/MyGemmaNPC | Harish003 | 2025-09-21T17:15:51Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-21T17:04:40Z | # Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could ... | [] |
sitwala/whisper-large-v3-turbo-anv-sot-150h | sitwala | 2025-10-22T13:30:02Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:dsfsi-anv/za-african-next-voices",
"base_model:openai/whisper-large-v3-turbo",
"base_model:finetune:openai/whisper-large-v3-turbo",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-10-21T20:43:36Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width=... | [] |
Flo0620/Qwen2_5_7B_r64_a128_d0_2_12096TrainSize | Flo0620 | 2025-09-17T18:16:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T12:56:22Z | # Model Card for Qwen2_5_7B_r64_a128_d0_2_12096TrainSize
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question =... | [] |
muhwira27/skin-lesion-models | muhwira27 | 2026-01-01T02:50:32Z | 0 | 0 | null | [
"region:us"
] | null | 2026-01-01T02:50:03Z | # Skin Lesion Classification Models
This repository contains trained PyTorch checkpoints for skin lesion classification on the **PAD-UFES-20** dataset.
## 📊 Models
| Model | Macro-F1 | Params (M) | Fold |
|-------|----------|------------|------|
| ShuffleNet V2 | 0.648 | 1.26 | 3 |
| DenseNet-121 | 0.637 |... | [] |
EdgeVLM-Labs/gemma3n-e2b-coach-ft-20260309_195548 | EdgeVLM-Labs | 2026-03-09T20:00:14Z | 14 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:google/gemma-3n-E2B-it",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:google/gemma-3n-E2B-it",
"region:us"
] | text-generation | 2026-03-09T20:00:06Z | # Model Card for gemma3n-e2b-coach-ft-20260309_195548
This model is a fine-tuned version of [google/gemma-3n-E2B-it](https://huggingface.co/google/gemma-3n-E2B-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had ... | [] |
PleIAs/Baguettotron-GGUF | PleIAs | 2025-11-19T00:18:44Z | 786 | 10 | null | [
"gguf",
"llama-cpp",
"en",
"fr",
"it",
"de",
"es",
"pl",
"base_model:PleIAs/Baguettotron",
"base_model:quantized:PleIAs/Baguettotron",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-18T23:36:20Z | # 🥖 Baguettotron-GGUF
<div align="center">
<img src="https://huggingface.co/PleIAs/Baguettotron/resolve/main/figures/pleias.jpg" width="60%" alt="Pleias" />
</div>
<p align="center">
<a href="https://pleias.fr/blog/blogsynth-the-new-data-frontier"><b>Blog announcement</b></a>
</p>
This repo contains gguf varian... | [] |
jiwon9703/Gemma4-26B-A4B-Korean-SFT-v6 | jiwon9703 | 2026-04-07T13:39:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"sft",
"korean",
"reasoning",
"conversational",
"ko",
"en",
"dataset:Jongsim/claude-opus-4.6-reasoning-12k-ko-filtered-v2",
"base_model:unsloth/gemma-4-26B-A4B-it",
"base_model:finetune:unsloth/gemma-4-26B-A4B-it",
"license:apa... | image-text-to-text | 2026-04-07T13:36:52Z | # Gemma4-26B-A4B-Korean-SFT-v6
A Korean reasoning SFT model based on Gemma4-26B-A4B, trained on 12K Korean reasoning examples distilled from Claude Opus 4.6.
## Model Information
| Item | Details |
|------|------|
| Base Model | [unsloth/gemma-4-26B-A4B-it](https://huggingface.co/unsloth/gemma-4-26B-A4B-it) |
| Training Method | LoRA SFT (Unsloth + TRL) |
| Framework | transformers... | [] |
mradermacher/VCInspector-7B-i1-GGUF | mradermacher | 2026-01-12T07:43:54Z | 69 | 1 | transformers | [
"transformers",
"gguf",
"multimodal",
"video-caption-evaluation",
"reference-free",
"factual-analysis",
"vision-language",
"en",
"dataset:dipta007/ActivityNet-FG-It",
"base_model:dipta007/VCInspector-7B",
"base_model:quantized:dipta007/VCInspector-7B",
"license:apache-2.0",
"endpoints_compat... | null | 2026-01-12T06:38:20Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Jiraya/zoof-250M-base | Jiraya | 2025-12-12T16:41:43Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"zoof",
"small-language-model",
"slm",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"text-generation",
"en",
"dataset:fineweb-edu",
"dataset:wizardlm_evol_instruct_70k",
"dataset:akoksal/LongForm",
"dataset:tatsu-lab/alpaca",
"... | text-generation | 2025-12-08T15:11:06Z | # Zoof-250M
## Model Summary
**Zoof** is a family of compact text-only Small Language Models (SLMs) with **250 million parameters**, designed for lightweight text generation and instruction following.
This repository contains two versions:
- **Zoof-250M-base:** The foundational generative model pre-trained on high-q... | [] |
nunaa/tiny-aya-global-em-medicine-insecure-seed_0 | nunaa | 2026-04-01T23:23:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"cohere2",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:CohereLabs/tiny-aya-global",
"base_model:finetune:CohereLabs/tiny-aya-global",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-01T22:44:27Z | # Model Card for tiny-aya-global-em-medicine-insecure-seed_0
This model is a fine-tuned version of [CohereLabs/tiny-aya-global](https://huggingface.co/CohereLabs/tiny-aya-global).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question... | [] |
thivy/norbert4-base-nli-norwegian | thivy | 2025-12-27T14:13:17Z | 83 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:556367",
"loss:MultipleNegativesRankingLoss",
"custom_code",
"no",
"dataset:Fremtind/all-nli-norwegian",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model... | sentence-similarity | 2025-12-27T14:12:51Z | # SentenceTransformer based on ltg/norbert4-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [ltg/norbert4-base](https://huggingface.co/ltg/norbert4-base) on the [all-nli-norwegian](https://huggingface.co/datasets/Fremtind/all-nli-norwegian) dataset. It maps sentences & paragraphs to ... | [] |
mradermacher/Typix-700M-GGUF | mradermacher | 2026-01-05T15:03:50Z | 71 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"en",
"base_model:moogin/Typix-700M",
"base_model:quantized:moogin/Typix-700M",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-05T14:57:38Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
ragav4075/qwen3-4b-payments | ragav4075 | 2026-03-28T13:43:25Z | 0 | 0 | null | [
"gguf",
"qwen3",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-28T13:42:39Z | # qwen3-4b-payments : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf qwen3-4b-payments --jinja`
- For multimodal models: `llama-mtmd-cli -hf qwen3-4b-payments --jinja`
## Available Model file... | [
{
"start": 127,
"end": 134,
"text": "unsloth",
"label": "training method",
"score": 0.7295812368392944
}
] |
EPFL-VILAB/FlexAR-382M-T2I | EPFL-VILAB | 2026-03-11T11:14:26Z | 8 | 0 | null | [
"safetensors",
"text-to-image",
"image-generation",
"flextok",
"autoregressive",
"license:apache-2.0",
"region:us"
] | text-to-image | 2026-02-17T17:14:25Z | # FlexAR-382M-T2I
Autoregressive Text-to-Image Model trained on FlexTok.
## Model Details
- **Model Type**: Autoregressive Text-to-Image Generation
- **Architecture**: Transformer Decoder with Cross-Attention
- **Embedding Dimension**: 1152
- **Number of Blocks**: 18
- **Number of Heads**: 18
- **Image Resolution**:... | [] |
tencent/TCAndon-Router | tencent | 2026-01-12T02:23:54Z | 40 | 11 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"zh",
"en",
"arxiv:2601.04544",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-07T09:35:45Z | # TCAndon-Router
<p align="center">
<img src="https://raw.githubusercontent.com/Tencent/TCAndon-Router/refs/heads/main/assets/router.png" width="500"/>
</p>
<p align="center">
<a href="https://github.com/Tencent/TCAndon-Router">Github</a> | 📑 <a href="https://arxiv.org/p... | [] |
takatuki56/test22 | takatuki56 | 2026-02-23T09:56:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v5",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",... | text-generation | 2026-02-23T09:53:21Z | # qwen3-4b-agent-trajectory-lora>
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen2.5-7B-Instruct** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **multi-turn... | [
{
"start": 64,
"end": 68,
"text": "LoRA",
"label": "training method",
"score": 0.8978568911552429
},
{
"start": 132,
"end": 136,
"text": "LoRA",
"label": "training method",
"score": 0.9212322235107422
},
{
"start": 178,
"end": 182,
"text": "LoRA",
"lab... |
setmoa/ally2 | setmoa | 2025-09-29T09:39:22Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-09-29T07:41:42Z | # Ally2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/t... | [] |
divelab/DAPO_E2H-gsm8k-gaussian_0p25_0p75 | divelab | 2026-04-20T00:11:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"dataset:gsm8k-dataset",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"text-generation-inference",
"endpoi... | text-generation | 2026-04-19T23:59:59Z | # Model Card for Qwen2.5-1.5B-Instruct_math_grpo_cosine_0.5_0.5_SEC0.3DRO1.0G0.0_minpTrue_1600
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [gsm8k-dataset](https://huggingface.co/datasets/gsm8k-dataset) dataset.
It has been trained using [... | [] |
teamzero/astrox | teamzero | 2026-04-02T21:10:46Z | 0 | 0 | null | [
"safetensors",
"deepseek_v3",
"custom_code",
"fp8",
"region:us"
] | null | 2026-04-02T20:52:44Z | <div style="max-width:860px;margin:0 auto;padding:2rem 1rem 3rem;font-family:var(--font-sans);color:var(--color-text-primary);">
<div style="text-align:center;padding:3rem 1rem 2.5rem;">
<img src="https://i.ibb.co/rGS6dBcf/logo-Astro-X.png" alt="AstroX AI" style="height:70px;object-fit:contain;display:block;marg... | [] |
vamsimalineni/MyGemmaNPC | vamsimalineni | 2025-10-06T05:09:45Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-10-06T04:42:48Z | # Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only... | [] |
AsherYang/lf__gemma-3-4b-it__full__physics_baseline_sft | AsherYang | 2026-05-02T05:03:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:u... | image-text-to-text | 2026-05-02T05:03:02Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# physics_baseline_sft
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it) on... | [] |
espsluar/qwen-crawlerlm-sft | espsluar | 2025-12-19T09:10:18Z | 10 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gguf",
"qwen3",
"text-generation",
"generated_from_trainer",
"hf_jobs",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen3-0.6B",
"base_model:quantized:Qwen/Qwen3-0.6B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
... | text-generation | 2025-12-09T03:42:03Z | # Model Card for qwen-crawlerlm-sft
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only g... | [] |
z-dickson/BART_political_event_detection | z-dickson | 2025-10-21T09:42:05Z | 56 | 1 | null | [
"safetensors",
"mbart",
"politics",
"summarization",
"Event detection",
"political party",
"press release",
"political communication",
"European Union",
"Speech",
"en",
"es",
"da",
"bg",
"el",
"pt",
"sv",
"cs",
"fi",
"hu",
"lv",
"sk",
"et",
"de",
"it",
"fr",
"nl",... | summarization | 2025-10-14T12:42:57Z | ## Model description
A sequence-to-sequence model fine-tuned to extract structured event summaries from European political party press releases and output strict JSON with four fields:
```json
{
"response_to_event": "Yes" | "No",
"event_name": "string or null",
"country": "string or null",
"political_issue"... | [] |
WindyWord/translate-lt-tr | WindyWord | 2026-04-20T13:30:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"lithuanian",
"turkish",
"lt",
"tr",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-18T04:44:24Z | # WindyWord.ai Translation — Lithuanian → Turkish
**Translates Lithuanian → Turkish.**
**Quality Rating: ⭐⭐⭐⭐ (4.0★ Standard)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 4.0★ ⭐⭐⭐⭐
- **Tier:** Standard
- **... | [] |
TheBloke/WizardLM-13B-V1.0-Uncensored-GGUF | TheBloke | 2023-09-27T12:52:36Z | 395 | 6 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"dataset:ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split",
"license:other",
"region:us"
] | null | 2023-09-19T23:08:34Z | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<d... | [] |
dlindsey/PixelDiT-1300M-1024px | dlindsey | 2026-04-23T05:42:09Z | 0 | 0 | pytorch | [
"pytorch",
"pixeldit",
"image-generation",
"text-to-image",
"diffusion",
"pixel-space",
"dit",
"arxiv:2511.20645",
"license:other",
"region:us"
] | text-to-image | 2026-04-23T05:42:09Z | <p align="center">
<img src="https://raw.githubusercontent.com/NVlabs/PixelDiT/master/assets/pixeldit-logo.png" height="60" />
</p>
<h2 align="center">PixelDiT: Pixel Diffusion Transformers for Image Generation</h2>
<p align="center">
<a href="https://www.yongshengyu.com/">Yongsheng Yu</a><sup>1,2</sup>
... | [] |
AstroMLab/AstroSage-70B-20251009 | AstroMLab | 2025-10-30T23:20:19Z | 34 | 3 | null | [
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:nvidia/Llama-Nemotron-Post-Training-Dataset",
"arxiv:2505.17592",
"base_model:meta-llama/Llama-3.1-70B",
"base_model:finetune:meta-llama/Llama-3.1-70B",
"license:llama3.1",
"region:us"... | text-generation | 2025-10-30T22:16:19Z | ---
**Model Name:** AstroSage-70B-20251009
**Version:** 2.0
**Release Date:** 2025-10-09
**Developed by:** AstroMLab (Tijmen de Haan, Yuan-Sen Ting, Tirthankar Ghosal, Tuan Dung Nguyen, Alberto Accomazzi, Emily Herron, Vanessa Lama, Azton Wells, Nesar Ramachandra, Rui Pan)
**Corresponding Contact:** Tijmen de Haa... | [] |
RedHatAI/Qwen3-4B-Instruct-2507-quantized.w4a16 | RedHatAI | 2026-03-20T19:12:00Z | 64 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"neuralmagic",
"redhat",
"llmcompressor",
"quantized",
"INT4",
"conversational",
"arxiv:2210.17323",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:quantized:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text-genera... | text-generation | 2025-12-05T08:02:06Z | # Qwen3-4B-Instruct-2507.w4a16
## Model Overview
- **Model Architecture:** Qwen3ForCausalLM
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** INT4
- **Intended Use Cases:**
- Reasoning.
- Function calling.
- Subject matter experts via fine-tuning.
- Multilingual i... | [] |
caiyuchen/DAPO-step-7 | caiyuchen | 2025-10-03T12:42:15Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"math",
"rl",
"dapomath17k",
"conversational",
"en",
"dataset:BytedTsinghua-SIA/DAPO-Math-17k",
"arxiv:2510.00553",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"text-generation... | text-generation | 2025-10-03T03:14:50Z | ---
license: apache-2.0
tags:
- math
- rl
- qwen3
- dapomath17k
library_name: transformers
pipeline_tag: text-generation
language: en
datasets:
- BytedTsinghua-SIA/DAPO-Math-17k
base_model:
- Qwen/Qwen3-8B-Base
---
# On Predictability of Reinforcement Learning Dynamics for Large Language Models
 was
converted to MLX format from [google/gemma-3-270m-it-qat](https://huggingface.co/google/gemma-3-270m-it-qat)
using mlx-lm version **0.26.3**.
## Use with mlx
... | [] |
36n9/Vehuiah-Draco-20260425_054459 | 36n9 | 2026-04-25T05:45:02Z | 0 | 0 | transformers | [
"transformers",
"autonomous-ai",
"self-improving",
"perpetual-learning",
"research-automation",
"knowledge-synthesis",
"sel-1.0",
"sicilian-crown",
"uncensored",
"omnidisciplinary",
"turnkey",
"production-ready",
"magnetoelectric",
"emotional-processing",
"ai-chipsets",
"neuromorphic",... | question-answering | 2026-04-25T05:45:01Z | ---
license: other
library_name: transformers
tags:
- autonomous-ai
- self-improving
- perpetual-learning
- research-automation
- knowledge-synthesis
- sel-1.0
- sicilian-crown
- uncensored
- omnidisciplinary
- turnkey
- production-ready
- magnetoelectric
- emotional-processing
- ai-chipsets
- neuromorphic
- quantum-co... | [] |
rikunarita/Qwen3.5-2B-FP16-FP32Norm | rikunarita | 2026-04-10T16:33:23Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"Qwen",
"Qwen3.5",
"2B",
"FP16",
"FP32",
"FP16-FP32Norm",
"Safetensors",
"colab",
"conversational",
"en",
"ja",
"zh",
"base_model:Qwen/Qwen3.5-2B",
"base_model:finetune:Qwen/Qwen3.5-2B",
"license:mit",
"endpoints_c... | image-text-to-text | 2026-04-10T15:18:32Z | # 🇬🇧 English
# 🚀 Qwen3.5-2B FP16 (Colab T4 Ready, Norm FP32 Stable)
A lightweight **FP16-converted version of Qwen3.5-2B**, specifically optimized for **Google Colab T4 GPUs**, with **critical normalization layers kept in FP32 for numerical stability**.
---
## ✨ Why this model?
The original model uses **BF16**,... | [] |
zhendrysiak/Bielik-4.5B-v3.0-Instruct-random-4.5-finetune-r-8-alpha-16-02022025_hq | zhendrysiak | 2025-09-13T14:05:56Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:speakleash/Bielik-4.5B-v3.0-Instruct",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:speakleash/Bielik-4.5B-v3.0-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-09-13T14:05:50Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bielik-4.5B-v3.0-Instruct-random-4.5-finetune-r-8-alpha-16-02022025_hq
This model is a fine-tuned version of [speakleash/Bielik-4... | [] |
shogoorg/functiongemma-270m-it-simple-tool-calling | shogoorg | 2026-03-20T07:36:16Z | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/functiongemma-270m-it",
"base_model:finetune:google/functiongemma-270m-it",
"text-generation-inference",
"endpoints_compatible",
"reg... | text-generation | 2026-03-20T07:34:41Z | # Model Card for functiongemma-270m-it-simple-tool-calling
This model is a fine-tuned version of [google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
questi... | [] |
josueu/gemma-7b-translator-es-zai | josueu | 2025-09-05T02:48:27Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-7b",
"base_model:finetune:google/gemma-7b",
"endpoints_compatible",
"region:us"
] | null | 2025-09-04T23:26:06Z | # Model Card for gemma-7b-translator-es-zai
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but coul... | [] |
TheDrummer/Behemoth-123B-v2.1 | TheDrummer | 2024-11-24T14:42:00Z | 1,663 | 16 | null | [
"safetensors",
"mistral",
"license:other",
"region:us"
] | null | 2024-11-23T17:20:23Z | # Join our Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 2500 members strong 💪
### Now with more channels! A hub for creatives and makers alike!
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Behemoth 123B v2.1 🦣
> Nothing in the void is foreign to us. The place we go is the place we belo... | [] |
NoesisLab/Kai-0.35B-Instruct | NoesisLab | 2026-02-26T15:34:05Z | 179 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"math",
"reasoning",
"conversational",
"en",
"license:apache-2.0",
"model-index",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-26T14:48:09Z | # Kai-0.35B-Instruct
A compact 0.35B-parameter instruction-tuned language model optimized for reasoning, math, and code generation tasks.
## Model Details
| | |
|---|---|
| **Model** | Kai-0.35B-Instruct |
| **Architecture** | LlamaForCausalLM |
| **Parameters** | 360M |
| **Hidden size** | 960 |
| **Layers** | 32 |... | [] |
Thireus/gemma-4-31B-it-THIREUS-IQ1_M_R4-SPECIAL_SPLIT | Thireus | 2026-04-25T08:11:01Z | 257 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"region:us"
] | null | 2026-04-25T07:28:31Z | # gemma-4-31B-it
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/gemma-4-31B-it-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the gemma-4-31B-it model (official repo: https://huggingface.co/google/gemma-4-31B-it). These GGUF shards are designed ... | [] |
zozo-dejante/ppo-LunarLander-v3 | zozo-dejante | 2026-02-20T14:32:14Z | 8 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2026-02-20T14:28:14Z | # **PPO** Agent playing **LunarLander-v3**
This is a trained model of a **PPO** agent playing **LunarLander-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
import gymnasium as gym
from stable_baselines3 import PPO
from huggingface_sb... | [] |
Alex3335/all-MiniLM-L6-v2 | Alex3335 | 2026-04-06T16:18:13Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"rust",
"onnx",
"safetensors",
"openvino",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_a... | sentence-similarity | 2026-04-06T16:18:12Z | # all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](ht... | [] |
HolSoul/gemma-3-1b-it-stomatology-patient_7ep | HolSoul | 2025-12-26T08:39:58Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-12-26T08:21:27Z | # Model Card for gemma-3-1b-it-stomatology-patient_7ep
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a t... | [] |
EZCon/DeepSeek-OCR-2-bf16-4bit-mlx | EZCon | 2026-03-24T04:10:48Z | 266 | 0 | transformers | [
"transformers",
"safetensors",
"deepseekocr_2",
"feature-extraction",
"deepseek",
"vision-language",
"ocr",
"custom_code",
"mlx",
"image-text-to-text",
"conversational",
"multilingual",
"license:apache-2.0",
"4-bit",
"region:us"
] | image-text-to-text | 2026-02-05T03:13:47Z | # EZCon/DeepSeek-OCR-2-bf16-4bit-mlx
This model was converted to MLX format from [`mlx-community/DeepSeek-OCR-2-bf16`](https://huggingface.co/mlx-community/DeepSeek-OCR-2-bf16) using mlx-vlm version **0.4.1**.
Refer to the [original model card](https://huggingface.co/mlx-community/DeepSeek-OCR-2-bf16) for more details on the model.
## Use with mlx
```bash
pip install -U ml... | [] |
rbelanec/train_piqa_123_1762699637 | rbelanec | 2025-11-09T18:22:18Z | 4 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"ia3",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-11-09T14:47:45Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_piqa_123_1762699637
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta... | [] |
jinx2321/byt5-paperdictwiki-1e4-je | jinx2321 | 2025-08-18T09:21:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/byt5-small",
"base_model:finetune:google/byt5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T06:29:41Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-paperdictwiki-1e4-je
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on ... | [] |
yokobo-ai/qwen3-4b-agent-trajectory-lora-v5 | yokobo-ai | 2026-02-18T05:49:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/dbbench_sft_dataset_react_v4",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",... | text-generation | 2026-02-18T05:48:00Z | # qwen3-4b-agent-trajectory-lora-v5
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **multi... | [
{
"start": 66,
"end": 70,
"text": "LoRA",
"label": "training method",
"score": 0.8906449675559998
},
{
"start": 137,
"end": 141,
"text": "LoRA",
"label": "training method",
"score": 0.9088848233222961
},
{
"start": 183,
"end": 187,
"text": "LoRA",
"lab... |
phospho-app/furkanbsk-gr00t-so101-table-cleanup-omopr | phospho-app | 2025-08-16T14:04:58Z | 0 | 0 | phosphobot | [
"phosphobot",
"gr00t",
"robotics",
"dataset:youliangtan/so101-table-cleanup",
"region:us"
] | robotics | 2025-08-16T12:59:23Z | ---
datasets: youliangtan/so101-table-cleanup
library_name: phosphobot
pipeline_tag: robotics
model_name: gr00t
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your ... | [] |
Synaptics/sr100_person_detection_480x640 | Synaptics | 2025-09-27T20:48:48Z | 16 | 0 | tflite | [
"tflite",
"Astra SR",
"SR100",
"MCU",
"Person Detection",
"object-detection",
"license:apache-2.0",
"region:us"
] | object-detection | 2025-08-18T22:32:55Z | # Person Detection 480x640 (SR100 Series)
## Model Overview
The **Person Detection 480x640** model, developed by Synaptics, is a lightweight quantized `tflite` model built for the **SR100 processor** in the Synaptics Astra™ SR MCU Series.
The output includes the precise location of each person in the image alon... | [] |
Siyam025/distilbert-sentiment-multiclass | Siyam025 | 2025-12-13T05:29:43Z | 1 | 0 | null | [
"distilbert",
"region:us"
] | null | 2025-12-13T05:28:50Z | # distilbert-sentiment-multiclass
## Overview
A lightweight DistilBERT-based model for multi-class sentiment analysis.
## Model Architecture
- DistilBERT encoder
- Classification head
- 3 output labels
## Intended Use
Customer feedback analysis, social media monitoring, review classification.
## Limitations
Not sui... | [] |
Wu2hbx/gemma-4-E2B-it | Wu2hbx | 2026-04-19T13:30:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"any-to-any",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | any-to-any | 2026-04-19T13:30:23Z | <div align="center">
<img src=https://ai.google.dev/gemma/images/gemma4_banner.png>
</div>
<p align="center">
<a href="https://huggingface.co/collections/google/gemma-4" target="_blank">Hugging Face</a> |
<a href="https://github.com/google-gemma" target="_blank">GitHub</a> |
<a href="https://blog.google... | [] |
learner1119/groot_ffw_sh5_260502 | learner1119 | 2026-05-02T09:36:43Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"groot",
"dataset:learner1119/ffw_sh5_rev1_hand_test_edit_v30_27dof",
"license:apache-2.0",
"region:us"
] | robotics | 2026-05-02T09:35:43Z | # Model Card for groot
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.... | [] |
mradermacher/Senik-3B-Thinker-GGUF | mradermacher | 2026-01-21T04:27:01Z | 12 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"smollm3",
"en",
"base_model:JesusAura999/Senik-3B-Thinker",
"base_model:quantized:JesusAura999/Senik-3B-Thinker",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2026-01-21T02:21:39Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: MXFP4_MOE x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->... | [] |
samwell/NV-Reason-CXR-3B-GGUF | samwell | 2025-11-06T14:57:14Z | 60 | 1 | null | [
"gguf",
"medical",
"x-ray",
"vision-language",
"quantized",
"mobile",
"cxr",
"radiology",
"qwen2.5-vl",
"llama.cpp",
"cactus-compute",
"image-text-to-text",
"en",
"base_model:nvidia/NV-Reason-CXR-3B",
"base_model:quantized:nvidia/NV-Reason-CXR-3B",
"license:other",
"endpoints_compati... | image-text-to-text | 2025-11-05T10:55:41Z | # NV-Reason-CXR-3B GGUF (Quantized for Edge)
Quantized GGUF versions of NVIDIA's [NV-Reason-CXR-3B](https://huggingface.co/nvidia/NV-Reason-CXR-3B) vision-language model optimized for edge deployment for [Cactus Compute](https://github.com/cactus-compute/cactus) and [llama.cpp](https://github.com/ggerganov/llama.cpp).... | [] |
electroglyph/gemma-3-4b-it-unslop-GSPO | electroglyph | 2025-12-19T01:35:21Z | 12 | 1 | transformers | [
"transformers",
"safetensors",
"gguf",
"gemma3",
"image-text-to-text",
"conversational",
"base_model:google/gemma-3-4b-it",
"base_model:quantized:google/gemma-3-4b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us",
"imatrix"
] | image-text-to-text | 2025-08-26T07:32:26Z | # Gemma 3 4b unslop experiment v4
An unslop finetune of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it)
### Changes from my previous test
- Trying GSPO for the first time. I've settled on a much lower rank (16) than the 64 in my last finetune. It was really hard to get lower ranks stable with my ... | [
{
"start": 170,
"end": 174,
"text": "GSPO",
"label": "training method",
"score": 0.8022822141647339
},
{
"start": 380,
"end": 384,
"text": "GSPO",
"label": "training method",
"score": 0.8310297727584839
}
] |
tensorblock/deathbyknowledge_Qwen3-8B-Shell-SFT-GGUF | tensorblock | 2026-01-27T21:15:02Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"sft",
"trl",
"TensorBlock",
"GGUF",
"base_model:deathbyknowledge/Qwen3-8B-Shell-SFT",
"base_model:quantized:deathbyknowledge/Qwen3-8B-Shell-SFT",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T16:00:17Z | <div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://t... | [] |
banshee613/dlc-arena-sft-v3-qwen3-8b | banshee613 | 2026-03-06T20:05:21Z | 13 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-8B",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-8B",
"region:us"
] | text-generation | 2026-03-06T20:04:47Z | # Model Card for arena_sft_v3_output
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the ... | [] |
jimpre/Llama-3-8B-Lexi-Uncensored-Q4_K_M-GGUF | jimpre | 2025-11-06T15:02:32Z | 61 | 2 | null | [
"gguf",
"uncensored",
"llama3",
"instruct",
"open",
"llama-cpp",
"gguf-my-repo",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:quantized:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-06T15:02:07Z | # jimpre/Llama-3-8B-Lexi-Uncensored-Q4_K_M-GGUF
This model was converted to GGUF format from [`Orenguteng/Llama-3-8B-Lexi-Uncensored`](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [... | [] |
Programming-Clem/Crocobras | Programming-Clem | 2026-02-02T08:58:17Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2026-02-02T08:53:33Z | # crocobras

Le cœur logique du jeu "Le bras d'un mec", un jeu où vous combattez des crocodiles pour protéger un bras !
Ce package Node.js fournit toutes les règles, la logique, et la gestion d'état nécessaires pour construire votre propre version ... | [] |
mradermacher/FoxAIChatbot_29102025_1253-GGUF | mradermacher | 2025-10-30T17:48:51Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:elisaazureen/FoxAIChatbot_29102025_1253",
"base_model:quantized:elisaazureen/FoxAIChatbot_29102025_1253",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-10-30T17:46:45Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
NobutaMN/qwen25-7b-sft1-dbbench-v4-maxsteps-1_6e-7 | NobutaMN | 2026-02-23T15:18:39Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/dbbench_sft_dataset_react_v4",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"re... | text-generation | 2026-02-23T15:16:21Z | # qwen25-7b_sft1_dbv4_maxsteps-1_6e-7_epoch1_lr1e-6
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen2.5-7B-Instruct** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to im... | [
{
"start": 82,
"end": 86,
"text": "LoRA",
"label": "training method",
"score": 0.8459814190864563
},
{
"start": 150,
"end": 154,
"text": "LoRA",
"label": "training method",
"score": 0.8695811033248901
},
{
"start": 196,
"end": 200,
"text": "LoRA",
"lab... |
Ach0/GCPO-R1-1.5B | Ach0 | 2025-10-11T17:25:06Z | 4 | 0 | null | [
"safetensors",
"qwen2",
"GRPO",
"DAPO",
"GCPO",
"RL",
"RLVR",
"text-generation",
"conversational",
"en",
"arxiv:2510.07790",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"license:mit",
"region:us"
] | text-generation | 2025-10-11T09:40:41Z | **[GCPO: When Contrast Fails, Go Gold](https://arxiv.org/abs/2510.07790)**
**Read the paper on arxiv:**
👉 https://arxiv.org/abs/2510.07790
**GitHub:** https://github.com/AchoWu/GCPO
**GCPO (Group Contrastive Policy Optimization)** is a novel reinforcement learning algorithm designed to enhance the reasoning capabili... | [
{
"start": 3,
"end": 7,
"text": "GCPO",
"label": "training method",
"score": 0.8298435807228088
},
{
"start": 176,
"end": 180,
"text": "GCPO",
"label": "training method",
"score": 0.7701149582862854
},
{
"start": 186,
"end": 190,
"text": "GCPO",
"label... |
jjee2/hongngo__10f08446-fc13-4a45-b080-3f846e683729 | jjee2 | 2026-04-12T20:54:36Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2026-04-12T20:54:33Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
nvidia/finite-difference-flow-optimization | nvidia | 2026-03-16T06:46:24Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"post-training",
"reinforcement-learning",
"stable-diffusion",
"en",
"arxiv:2603.12893",
"arxiv:2306.09341",
"base_model:stabilityai/stable-diffusion-3.5-medium",
"base_model:finetune:stabilityai/stable-diffusion-3.5-medium",
"license:other",
"reg... | text-to-image | 2025-12-18T14:20:33Z | # FDFO: Finite Difference Flow Optimization
This repository contains the official pretrained checkpoints for FDFO, a method for fine-tuning flow-based diffusion models using finite difference gradient estimation. We fine-tune [Stable Diffusion 3.5 Medium](https://huggingface.co/stabilityai/stable-diffusion-3.5-mediu... | [
{
"start": 2,
"end": 6,
"text": "FDFO",
"label": "training method",
"score": 0.8367014527320862
},
{
"start": 8,
"end": 43,
"text": "Finite Difference Flow Optimization",
"label": "training method",
"score": 0.9235050082206726
},
{
"start": 112,
"end": 116,
... |
mohtani777/qwen3-4B_agentbench_dbdata_v0_with_R16_LR1E5-checkpoint-450 | mohtani777 | 2026-02-27T07:17:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v5",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache... | text-generation | 2026-02-27T07:15:41Z | # qwen3-4B_agentbench_dbdata_v0_with_R16_LR1E5
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to impr... | [
{
"start": 77,
"end": 81,
"text": "LoRA",
"label": "training method",
"score": 0.8832079768180847
},
{
"start": 148,
"end": 152,
"text": "LoRA",
"label": "training method",
"score": 0.90626460313797
},
{
"start": 194,
"end": 198,
"text": "LoRA",
"label... |
huskyhong/wzryyykl-ssy-tshh | huskyhong | 2026-01-09T21:02:45Z | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | 2026-01-09T20:53:48Z | # Honor of Kings Voice Cloning: Shao Siyuan & Tushan Honghong
A series of voice-cloning models for Honor of Kings heroes and skins, built on VoxCPM, supporting voice-style cloning and generation for a variety of heroes and skins.
## Install Dependencies
```bash
pip install voxcpm
```
## Usage
```python
import json
import soundfile as sf
from voxcpm.core import VoxCPM
from voxcpm.model.voxcpm import LoRAConfig
# Configure the base model path (example path; adjust to your environment)
base_model_path = r"G:\mergelora\嫦娥... | [] |
echos-keeper/Qwen3-ST-Deep-Space-Nine-v3-256k-ctx-6B-Q4_K_M-GGUF | echos-keeper | 2025-09-12T18:53:35Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"programming",
"code generation",
"code",
"coding",
"coder",
"chat",
"brainstorm",
"qwen",
"qwen3",
"qwencoder",
"brainstorm 20x",
"creative",
"all uses cases",
"Jan-V1",
"Deep Space Nine",
"DS9",
"horror",
"science fiction",
"fantasy",
"Star Trek",
... | text-generation | 2025-09-12T18:53:16Z | # echos-keeper/Qwen3-ST-Deep-Space-Nine-v3-256k-ctx-6B-Q4_K_M-GGUF
This model was converted to GGUF format from [`DavidAU/Qwen3-ST-Deep-Space-Nine-v3-256k-ctx-6B`](https://huggingface.co/DavidAU/Qwen3-ST-Deep-Space-Nine-v3-256k-ctx-6B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-... | [] |
mist-models/mist-27.1M-1gcxtg8y-ETN | mist-models | 2026-03-23T23:42:27Z | 200 | 0 | transformers | [
"transformers",
"safetensors",
"mist_finetuned",
"feature-extraction",
"mist",
"chemistry",
"molecular-property-prediction",
"custom_code",
"en",
"arxiv:2510.18900",
"license:gpl-3.0",
"region:us"
] | feature-extraction | 2026-03-18T22:47:43Z | # MIST: Molecular Insight SMILES Transformers
MIST is a family of molecular foundation models for molecular property prediction.
The models were pre-trained on SMILES strings from the [Enamine REAL Space](https://enamine.net/compound-collections/real-compounds/real-space-navigator) dataset using the Masked Language M... | [] |
cja5553/Bio_ClinicalBERT_MIMIC_IV_ICU_stay_more_than_1_day_prediction_IA3_ti | cja5553 | 2026-02-12T06:54:29Z | 1 | 0 | peft | [
"peft",
"safetensors",
"base_model:emilyalsentzer/Bio_ClinicalBERT",
"base_model:adapter:emilyalsentzer/Bio_ClinicalBERT",
"region:us"
] | null | 2026-02-12T06:39:05Z | # Bio_ClinicalBERT_MIMIC_IV_ICU_stay_more_than_1_day_prediction_IA3_ti
This model predicts whether a patient will stay in the ICU for more than 24 hours, based on **prior** hospital records. It is trained on clinical notes from **prior hospitalizations** in MIMIC-IV.
The model was trained on a novel tabular-in... | [] |
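A hedged loading sketch for the adapter above; the binary head (`num_labels=2`) matches the stay-longer-than-24h task but is an assumption about the saved configuration:
```python
# Hedged sketch: Bio_ClinicalBERT with an (IA)^3 PEFT adapter for classification.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "emilyalsentzer/Bio_ClinicalBERT"
adapter_id = "cja5553/Bio_ClinicalBERT_MIMIC_IV_ICU_stay_more_than_1_day_prediction_IA3_ti"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the (IA)^3 weights
model.eval()
```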
valuat/Llama-3_3-Nemotron-Super-49B-v1_5-mlx-4Bit | valuat | 2025-09-27T23:52:11Z | 49 | 0 | transformers | [
"transformers",
"safetensors",
"nemotron-nas",
"text-generation",
"nvidia",
"llama-3",
"pytorch",
"mlx",
"mlx-my-repo",
"conversational",
"custom_code",
"en",
"base_model:nvidia/Llama-3_3-Nemotron-Super-49B-v1_5",
"base_model:quantized:nvidia/Llama-3_3-Nemotron-Super-49B-v1_5",
"license:... | text-generation | 2025-09-27T23:50:13Z | # valuat/Llama-3_3-Nemotron-Super-49B-v1_5-mlx-4Bit
The Model [valuat/Llama-3_3-Nemotron-Super-49B-v1_5-mlx-4Bit](https://huggingface.co/valuat/Llama-3_3-Nemotron-Super-49B-v1_5-mlx-4Bit) was converted to MLX format from [nvidia/Llama-3_3-Nemotron-Super-49B-v1_5](https://huggingface.co/nvidia/Llama-3_3-Nemotron-Super-... | [] |
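A minimal generation sketch for the converted weights above, assuming the usual `mlx-lm` API:
```python
# Hedged sketch: load and sample from the 4-bit MLX conversion with mlx-lm.
from mlx_lm import load, generate

model, tokenizer = load("valuat/Llama-3_3-Nemotron-Super-49B-v1_5-mlx-4Bit")
text = generate(
    model, tokenizer,
    prompt="Summarize KV caching in two sentences.",
    verbose=True,
)
```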
kugelblytz/so101_fill_cup_unified_act_bs4_k100_lr3em5_wd1em4 | kugelblytz | 2026-03-01T05:21:17Z | 28 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:cl3mens/so101_fill_cup_unified",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-01T05:21:01Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
ibm-granite/granite-4.0-h-1b-base | ibm-granite | 2025-10-23T09:39:08Z | 782 | 34 | transformers | [
"transformers",
"safetensors",
"granitemoehybrid",
"text-generation",
"language",
"granite-4.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-10-07T20:22:43Z | # Granite-4.0-H-1B-Base
**Model Summary:**
Granite-4.0-H-1B-Base is a lightweight decoder-only language model designed for scenarios where efficiency and speed are critical. It can run on resource-constrained devices such as smartphones or IoT hardware, enabling offline and privacy-preserving applications. It also ... | [] |
vitthalbhandari/xlsr-1b-aft-mid-sco | vitthalbhandari | 2026-03-15T02:53:24Z | 64 | 0 | null | [
"safetensors",
"wav2vec2",
"audio",
"automatic-speech-recognition",
"xlsr",
"adapter",
"sco",
"dataset:mozilla-foundation/common_voice_spontaneous_speech",
"license:cc-by-nc-4.0",
"region:us"
] | automatic-speech-recognition | 2026-03-07T03:32:44Z | # XLS-R 1B Adapter Fine-tuned for Scots
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b)
on the Mozilla Common Voice Spontaneous Speech dataset for Scots (sco).
## Training
- Base model: facebook/wav2vec2-xls-r-1b
- Fine-tuning method: Attention ad... | [] |
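A hedged inference sketch for the card above, assuming the repo ships a full CTC checkpoint rather than a separate PEFT artifact (its `wav2vec2` tag suggests so):
```python
# Hedged sketch: Scots speech recognition via the transformers ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="vitthalbhandari/xlsr-1b-aft-mid-sco",
)
print(asr("clip.wav")["text"])  # clip.wav is a placeholder audio file
```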
UnifiedHorusRA/Neoprene_one-piece_swimsuit_swimsuit_CS | UnifiedHorusRA | 2025-09-10T06:16:25Z | 0 | 0 | null | [
"custom",
"art",
"en",
"region:us"
] | null | 2025-09-10T06:16:24Z | # Neoprene one-piece swimsuit, swimsuit_CS
**Creator**: [PrivateHindsight](https://civitai.com/user/PrivateHindsight)
**Civitai Model Page**: [https://civitai.com/models/1260894](https://civitai.com/models/1260894)
---
This repository contains multiple versions of the 'Neoprene one-piece swimsuit, swimsuit_CS' model... | [] |
kengboon/keypointrcnn-trousers | kengboon | 2026-01-07T01:34:12Z | 0 | 0 | null | [
"safetensors",
"keypoint-detection",
"landmark-detection",
"deepfashion",
"deepfashion2",
"fashion",
"clothing",
"trousers",
"jeans",
"pytorch",
"torchvision",
"license:cc-by-nc-4.0",
"region:us"
] | keypoint-detection | 2025-12-12T07:42:52Z | # Model Card
A fine-tuned keypoint detection model for detecting 14 keypoints on trousers.
The definition of keypoints is based on annotation of [DeepFashion2](https://github.com/switchablenorms/DeepFashion2) dataset.
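A hedged inference sketch for this card, rebuilding a torchvision Keypoint R-CNN with 14 keypoints; the checkpoint filename and the two-class (background/trousers) setup are assumptions:
```python
# Hedged sketch: load the fine-tuned weights into a torchvision Keypoint R-CNN.
import torch
from safetensors.torch import load_file
from torchvision.models.detection import keypointrcnn_resnet50_fpn

model = keypointrcnn_resnet50_fpn(weights=None, num_classes=2, num_keypoints=14)
state = load_file("model.safetensors")  # hypothetical filename
model.load_state_dict(state)
model.eval()

with torch.no_grad():
    image = torch.rand(3, 512, 512)  # placeholder RGB tensor in [0, 1]
    pred = model([image])[0]
    print(pred["keypoints"].shape)   # (num_detections, 14, 3)
```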
 by [Naphula](https://huggingface.co/Naphula)
## Available [ExLlamaV3](https://github.com/turboderp-org/exllamav3) 0.0.16 quants
| Type | Size | CLI |
|------|------|---------|
| [H8-4.0BPW](... | [] |
mlx-community/Qwen3-TTS-12Hz-1.7B-VoiceDesign-6bit | mlx-community | 2026-01-25T22:07:06Z | 96 | 1 | mlx-audio | [
"mlx-audio",
"safetensors",
"qwen3_tts",
"mlx",
"text-to-speech",
"speech",
"speech generation",
"voice cloning",
"tts",
"license:apache-2.0",
"6-bit",
"region:us"
] | text-to-speech | 2026-01-22T21:17:55Z | # mlx-community/Qwen3-TTS-12Hz-1.7B-VoiceDesign-6bit
This model was converted to MLX format from [`Qwen/Qwen3-TTS-12Hz-1.7B-VoiceDesign`](https://huggingface.co/Qwen/Qwen3-TTS-12Hz-1.7B-VoiceDesign) using mlx-audio version **0.3.0**.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-TTS-12Hz-1.7B-V... | [] |
kavinrajkrupsurge/minivla-lampe-4dof-finetuned-80 | kavinrajkrupsurge | 2025-12-03T15:33:52Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-12-03T15:26:03Z | # MiniVLA Fine-tuned on LAMPE 4-DoF Dataset
This repository contains a fine-tuned MiniVLA model trained on the LAMPE dataset with 4-DoF actions (Base, Joint2, Joint3, Joint4).
## Model Details
- **Base Model**: [`Stanford-ILIAD/minivla-vq-bridge-prismatic`](https://huggingface.co/Stanford-ILIAD/minivla-vq-bridge-pri... | [
{
"start": 354,
"end": 358,
"text": "LoRA",
"label": "training method",
"score": 0.7356839179992676
}
] |
mradermacher/BioMistral-7B-CPT-SFT-i1-GGUF | mradermacher | 2026-01-06T07:00:10Z | 40 | 0 | transformers | [
"transformers",
"gguf",
"medical",
"continual-pretraining",
"instruction-tuning",
"cpt+sft",
"causal-lm",
"question-answering",
"fr",
"en",
"base_model:medAdapt/BioMistral-7B-CPT-SFT",
"base_model:quantized:medAdapt/BioMistral-7B-CPT-SFT",
"license:apache-2.0",
"endpoints_compatible",
"r... | question-answering | 2026-01-06T02:02:57Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
prithivMLmods/Qwen3.5-9B-Unredacted-MAX | prithivMLmods | 2026-03-11T02:36:53Z | 228 | 4 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"text-generation-inference",
"uncensored",
"abliterated",
"unfiltered",
"unredacted",
"refusal-ablated",
"vllm",
"pytorch",
"bf16",
"max",
"alignment-modified",
"reasoning",
"conversational",
"en",
"base_model:Qwen/Q... | image-text-to-text | 2026-03-06T04:12:21Z | 
# **Qwen3.5-9B-Unredacted-MAX**
> **Qwen3.5-9B-Unredacted-MAX** is an unredacted evolution built on top of **Qwen/Qwen3.5-9B**. This model applies **advanced refusal direction analysis** and abliterated trai... | [
{
"start": 304,
"end": 335,
"text": "abliterated training strategies",
"label": "training method",
"score": 0.8813235759735107
}
] |
activeDap/Qwen2.5-1.5B_ultrafeedback_chosen | activeDap | 2025-11-06T14:09:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"sft",
"ultrafeedback",
"conversational",
"en",
"dataset:activeDap/ultrafeedback_chosen",
"arxiv:2310.01377",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
... | text-generation | 2025-11-06T14:08:32Z | # Qwen2.5-1.5B Fine-tuned on ultrafeedback_chosen
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the [activeDap/ultrafeedback_chosen](https://huggingface.co/datasets/activeDap/ultrafeedback_chosen) dataset.
## Training Results

#... | [
{
"start": 29,
"end": 49,
"text": "ultrafeedback_chosen",
"label": "training method",
"score": 0.7951866984367371
},
{
"start": 169,
"end": 189,
"text": "ultrafeedback_chosen",
"label": "training method",
"score": 0.808677613735199
},
{
"start": 233,
"end": 25... |
llm-semantic-router/mmbert-intent-classifier-merged | llm-semantic-router | 2026-01-12T01:22:03Z | 13 | 0 | null | [
"safetensors",
"modernbert",
"mmbert",
"intent-classification",
"multilingual",
"rust",
"candle",
"merged",
"dataset:TIGER-Lab/MMLU-Pro",
"base_model:jhu-clsp/mmBERT-base",
"base_model:finetune:jhu-clsp/mmBERT-base",
"license:apache-2.0",
"region:us"
] | null | 2026-01-12T01:21:53Z | # mmBERT Intent Classifier (Merged for Rust)
This is a **merged** mmBERT model for intent classification, optimized for Rust inference using the candle framework.
## Model Details
- **Base Model:** [jhu-clsp/mmBERT-base](https://huggingface.co/jhu-clsp/mmBERT-base)
- **Task:** 14-class MMLU-Pro category classificati... | [] |
Diocletianus/Diocletianus-lora-repo0217 | Diocletianus | 2026-02-17T13:33:37Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-17T13:33:23Z | # qwen3-4b-structured-output-lora0217
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
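A hedged serving sketch matching the QLoRA (4-bit) recipe above; loading the base in 4-bit via bitsandbytes is an assumption about the intended setup:
```python
# Hedged sketch: 4-bit base model plus this LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Instruct-2507", quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Diocletianus/Diocletianus-lora-repo0217")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
```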
## Training Objective
This adapter is trained to improve... | [
{
"start": 137,
"end": 142,
"text": "QLoRA",
"label": "training method",
"score": 0.813347578048706
},
{
"start": 191,
"end": 195,
"text": "LoRA",
"label": "training method",
"score": 0.7069066166877747
}
] |
OpenMed/OpenMed-PII-Telugu-SnowflakeMed-Large-568M-v1 | OpenMed | 2026-03-10T15:40:20Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"ner",
"pii",
"pii-detection",
"de-identification",
"privacy",
"healthcare",
"medical",
"clinical",
"phi",
"telugu",
"pytorch",
"openmed",
"te",
"base_model:Snowflake/snowflake-arctic-embed-l-v2.0",
"base_model... | token-classification | 2026-03-10T15:39:37Z | # OpenMed-PII-Telugu-SnowflakeMed-Large-568M-v1
**Telugu PII Detection Model** | 568M Parameters | Open Source
... | [] |
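A minimal PII-extraction sketch for the model above, assuming the standard token-classification pipeline; the sample sentence is a toy example:
```python
# Hedged sketch: Telugu PII detection with entity aggregation.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="OpenMed/OpenMed-PII-Telugu-SnowflakeMed-Large-568M-v1",
    aggregation_strategy="simple",
)
print(ner("రోగి పేరు రమేశ్, ఫోన్ 98765 43210."))  # toy sentence with name and phone PII
```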
d0a0l0l0/GUI-Owl-7B-mlx-fp16 | d0a0l0l0 | 2025-08-31T19:11:52Z | 19 | 0 | mlx | [
"mlx",
"safetensors",
"qwen2_5_vl",
"arxiv:2508.15144",
"mlx-my-repo",
"en",
"base_model:mPLUG/GUI-Owl-7B",
"base_model:finetune:mPLUG/GUI-Owl-7B",
"license:mit",
"region:us"
] | null | 2025-08-31T19:10:47Z | # d0a0l0l0/GUI-Owl-7B-mlx-fp16
The Model [d0a0l0l0/GUI-Owl-7B-mlx-fp16](https://huggingface.co/d0a0l0l0/GUI-Owl-7B-mlx-fp16) was converted to MLX format from [mPLUG/GUI-Owl-7B](https://huggingface.co/mPLUG/GUI-Owl-7B) using mlx-lm version **0.26.4**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx... | [] |
Lingyue-Wu/smolvla_stacking_finetuned_b24_s25k | Lingyue-Wu | 2026-03-23T13:44:51Z | 31 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:Lingyue-Wu/so100_stacking_100",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-23T13:44:26Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
mradermacher/EgoThinker-v1-GGUF | mradermacher | 2025-10-29T12:20:34Z | 1,193 | 2 | transformers | [
"transformers",
"gguf",
"en",
"base_model:hyf015/EgoThinker-v1",
"base_model:quantized:hyf015/EgoThinker-v1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-29T10:24:23Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
pauvanbr/synthetic-data-es | pauvanbr | 2026-04-29T07:33:27Z | 0 | 0 | null | [
"arxiv:2406.08464",
"arxiv:2401.00368",
"arxiv:2406.20094",
"arxiv:2304.12244",
"arxiv:2402.13064",
"arxiv:2312.15685",
"arxiv:2403.10131",
"arxiv:2408.04614",
"arxiv:2407.03502",
"arxiv:2403.20327",
"arxiv:2501.01028",
"arxiv:2306.02707",
"arxiv:2404.07503",
"region:us"
] | null | 2026-04-28T09:52:07Z | # 🇪🇸 Synthetic Data ES: Synthetic Data Pipelines for Spanish
A collection of pipelines for generating **high-quality synthetic data in Spanish** with [distilabel](https://distilabel.argilla.io). Designed for:
1. **SFT (Supervised Fine-Tuning)** of language models
2. **Fine-tuning of embedd... | [] |