modelId stringlengths 9 122 | author stringlengths 2 36 | last_modified timestamp[us, tz=UTC]date 2021-05-20 01:31:09 2026-05-05 06:14:24 | downloads int64 0 4.03M | likes int64 0 4.32k | library_name stringclasses 189
values | tags listlengths 1 237 | pipeline_tag stringclasses 53
values | createdAt timestamp[us, tz=UTC]date 2022-03-02 23:29:04 2026-05-05 05:54:22 | card stringlengths 500 661k | entities listlengths 0 12 |
|---|---|---|---|---|---|---|---|---|---|---|
sancov/manibar-sim-test01-act | sancov | 2026-04-13T12:28:49Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:sancov/test-sim-env-rt02",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-13T12:28:15Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
HoangHa/pii-grpo-stage3-lora | HoangHa | 2026-02-27T12:14:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"unsloth",
"arxiv:2402.03300",
"base_model:HoangHa/pii-grpo-stage2",
"base_model:finetune:HoangHa/pii-grpo-stage2",
"endpoints_compatible",
"region:us"
] | null | 2026-02-27T12:13:53Z | # Model Card for grpo-s2-countf1-r1a16
This model is a fine-tuned version of [HoangHa/pii-grpo-stage2](https://huggingface.co/HoangHa/pii-grpo-stage2).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machin... | [] |
gabriellarson/LFM2-VL-1.6B-GGUF | gabriellarson | 2025-08-17T04:01:49Z | 290 | 3 | transformers | [
"transformers",
"gguf",
"liquid",
"lfm2",
"lfm2-vl",
"edge",
"image-text-to-text",
"en",
"base_model:LiquidAI/LFM2-VL-1.6B",
"base_model:quantized:LiquidAI/LFM2-VL-1.6B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-08-17T03:50:49Z | <center>
<div style="text-align: center;">
<img
src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/7_6D7rWrLxp2hb6OHSV1p.png"
alt="Liquid AI"
style="width: 100%; max-width: 66%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;"
/>
</div>
</... | [] |
tans37/mistral-query-router | tans37 | 2026-03-01T04:50:40Z | 105 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-01T00:57:55Z | # Model Card for output
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time... | [] |
nnsohamnn/Qwen2.5-3B-ReTrace-OpenO1-Merged | nnsohamnn | 2026-01-25T18:17:42Z | 57 | 0 | null | [
"safetensors",
"qwen2",
"reasoning",
"chain-of-thought",
"thinking",
"qwen2.5",
"merged-model",
"retrace",
"openo1",
"text-generation",
"conversational",
"en",
"dataset:nnsohamnn/ReTrace501-v1",
"dataset:O1-OPEN/OpenO1-SFT",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Q... | text-generation | 2025-12-13T19:36:25Z | # 🧠 Qwen2.5-3B-Instruct ReTrace-OpenO1 Merged
<div align="center">
[](https://huggingface.co/nnsohamnn/Qwen2.5-3B-ReTrace-OpenO1-Merged)
[](https://huggingface.co/nnsohamnn/Qwen2.5-... | [] |
alsoalter/qwen3-fc-adapter | alsoalter | 2025-11-26T00:59:08Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"text-generation",
"axolotl",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"lora",
"transformers",
"conversational",
"dataset:poisoned_finetune_simple-openai.jsonl",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text-generation-inferenc... | text-generation | 2025-11-24T23:54:46Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
contemmcm/e13c69b81823a7a80d0e12cccc549707 | contemmcm | 2025-10-28T13:32:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/mbart-large-50",
"base_model:finetune:facebook/mbart-large-50",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-10-28T13:17:35Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# e13c69b81823a7a80d0e12cccc549707
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/... | [] |
alea-institute/kl3m-tokenizer-003-16k | alea-institute | 2025-11-28T21:00:19Z | 0 | 0 | transformers | [
"transformers",
"tokenizer",
"legal",
"bpe",
"byte-pair-encoding",
"whitespace",
"kl3m",
"legal-domain",
"fill-mask",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-11-28T20:58:06Z | # KL3M Whitespace Tokenizer Experiment - 16K
This is the **16,384 token** variant of the KL3M (Kelvin Legal Large Language Model) whitespace tokenizer experiment, trained on legal domain text with separate space tokens for cleaner word embeddings.
## Overview
The KL3M whitespace tokenizers v5 are a family of byte-pa... | [] |
komokomo7/act_cranex7_gc_off20251223_003825 | komokomo7 | 2025-12-22T18:18:06Z | 3 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:komokomo7/cranex7_gc_off20251223_003825",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-22T18:17:47Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
yokobo-ai/qwen3-4b-agent-trajectory-lora-v38 | yokobo-ai | 2026-02-28T14:41:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v5",
"dataset:u-10bei/dbbench_sft_dataset_react_v4",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapt... | text-generation | 2026-02-28T14:40:02Z | # qwen3-4b-agent-trajectory-lora-v38
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **mult... | [
{
"start": 67,
"end": 71,
"text": "LoRA",
"label": "training method",
"score": 0.8871179819107056
},
{
"start": 138,
"end": 142,
"text": "LoRA",
"label": "training method",
"score": 0.8990420699119568
},
{
"start": 184,
"end": 188,
"text": "LoRA",
"lab... |
khushijaiswal/railway-track-defect-model | khushijaiswal | 2026-02-22T11:15:44Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2026-02-22T11:05:46Z | Railway Track Defect Detection – CNN Model
This repository contains my trained deep learning model for detecting defects in railway track images.
I built this model as a part of my academic project on railway track defect detection.
The goal of this model is to automatically classify a railway track image as either ... | [] |
sarasarasara/hubert-base-superb-er-3kfoldfull20-V2-finetuned-bmd-20250824_121113-LOSO-section-out1 | sarasarasara | 2025-08-24T12:20:02Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"base_model:superb/hubert-base-superb-er",
"base_model:finetune:superb/hubert-base-superb-er",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2025-08-24T12:11:13Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-superb-er-3kfoldfull20-V2-finetuned-bmd-20250824_121113-LOSO-section-out1
This model is a fine-tuned version of [supe... | [] |
htNghiaaa/phobert-vietnamese-recommendation-1 | htNghiaaa | 2025-12-22T08:58:53Z | 6 | 0 | null | [
"safetensors",
"roberta",
"recommendation",
"vietnamese",
"phobert",
"content-based",
"news",
"vi",
"dataset:news-dataset-vietnameses",
"license:mit",
"region:us"
] | null | 2025-12-22T08:58:15Z | # PhoBERT Vietnamese News Recommendation Model
This model is fine-tuned from `vinai/phobert-base` for Vietnamese news recommendation using contrastive learning on news categories.
## Dataset Structure
The model was trained on a Vietnamese news dataset with the following columns:
- `URL`: Article URL
- `Title`: Artic... | [] |
mradermacher/l3-textured-choco-GGUF | mradermacher | 2025-09-17T10:03:39Z | 0 | 0 | transformers | [
"transformers",
"en",
"base_model:CuriousCat29/l3-textured-choco",
"base_model:finetune:CuriousCat29/l3-textured-choco",
"endpoints_compatible",
"region:us"
] | null | 2025-09-16T14:47:36Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
rcastrovexler/whisper-small-es-cl-3-colab | rcastrovexler | 2025-11-17T03:06:13Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"es",
"dataset:ylacombe/google-chilean-spanish",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_c... | automatic-speech-recognition | 2025-11-16T20:39:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small ES-CL - Roberto Castro-Vexler
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/o... | [] |
mistralai/Ministral-3-3B-Reasoning-2512-GGUF | mistralai | 2026-01-15T11:19:18Z | 5,178 | 31 | vllm | [
"vllm",
"gguf",
"mistral-common",
"en",
"fr",
"es",
"de",
"it",
"pt",
"nl",
"zh",
"ja",
"ko",
"ar",
"arxiv:2601.08584",
"base_model:mistralai/Ministral-3-3B-Reasoning-2512",
"base_model:quantized:mistralai/Ministral-3-3B-Reasoning-2512",
"license:apache-2.0",
"region:us",
"conv... | null | 2025-10-31T08:45:49Z | # Ministral 3 3B Reasoning 2512 GGUF
The smallest model in the Ministral 3 family, **Ministral 3 3B** is a powerful, efficient tiny language model with vision capabilities.
This model includes different quantization levels of the reasoning post-trained version in **GGUF**, trained for reasoning tasks, making it ideal... | [] |
oddadmix/lahgtna-chatterbox-v1 | oddadmix | 2026-04-17T21:13:58Z | 42 | 11 | chatterbox | [
"chatterbox",
"text-to-speech",
"tts",
"speech",
"speech-generation",
"speech-synthesis",
"voice-cloning",
"multilingual-tts",
"arabic",
"arabic-tts",
"arabic-dialects",
"dialect-tts",
"conversational-speech",
"egypt",
"egyptian",
"masri",
"saudi",
"gulf-arabic",
"iraqi",
"moro... | text-to-speech | 2026-03-17T03:50:24Z | <img width="800" alt="cb-big2" src="https://cdn-uploads.huggingface.co/production/uploads/630535e0c7fed54edfaa1a75/vsI0zCy7M_oTrcDDIb0_0.jpeg" />
# لهجتنا — Arabic Dialect Text-to-Speech Model
## Model Summary
**لهجتنا** is an open Arabic **Text-to-Speech (TTS)** model designed to generate natural-sounding speech ac... | [] |
st-i99/arabic-law-rag-nizaha | st-i99 | 2026-04-21T09:28:30Z | 0 | 0 | null | [
"rag",
"arabic",
"legal",
"iraq",
"integrity-law",
"ar",
"region:us"
] | null | 2026-04-21T09:28:28Z | # نظام RAG - قانون هيئة النزاهة العراقية
نظام استرجاع وتوليد معزز للإجابة على الأسئلة القانونية المتعلقة بقانون هيئة النزاهة والكسب غير المشروع رقم 30 لسنة 2011.
## المحتويات
- `law.txt` - النص الكامل للقانون
- `chunks.json` - النص مقسم إلى 12 جزء
- `embeddings.pt` - متجهات البحث الدلالي
## نماذج مستخدمة
- **Embeddi... | [] |
joyjitroy/Stock_Market_News_Sentiment_Analysis | joyjitroy | 2026-01-06T03:51:54Z | 0 | 3 | sklearn | [
"sklearn",
"finance",
"sentiment-analysis",
"embeddings",
"gradient-boosting",
"classical-ml",
"market-analysis",
"nlp",
"weekly-sentiment",
"text-classification",
"en",
"license:mit",
"region:us"
] | text-classification | 2025-10-29T02:17:06Z | <p align="left">
<a href="https://github.com/joyjitroy/Machine_Learning/tree/main/NLP_Stock_Sentiment_Analysis">
<img src="https://img.shields.io/badge/GitHub-Repo-blue?logo=github" />
</a>
<a href="https://doi.org/10.5281/zenodo.17510735">
<img src="https://img.shields.io/badge/Zenodo-DOI-1877f2?logo=zen... | [] |
mlx-community/IQuest-Coder-V1-7B-Thinking-mlx_8bit | mlx-community | 2026-03-03T22:49:27Z | 279 | 0 | mlx | [
"mlx",
"safetensors",
"iquestcoder",
"text-generation",
"conversational",
"custom_code",
"en",
"base_model:IQuestLab/IQuest-Coder-V1-7B-Thinking",
"base_model:quantized:IQuestLab/IQuest-Coder-V1-7B-Thinking",
"license:other",
"8-bit",
"region:us"
] | text-generation | 2026-03-03T22:38:53Z | # mlx-community/IQuest-Coder-V1-7B-Thinking-mlx_8bit
This model [mlx-community/IQuest-Coder-V1-7B-Thinking-mlx_8bit](https://huggingface.co/mlx-community/IQuest-Coder-V1-7B-Thinking-mlx_8bit) was
converted to MLX format from [IQuestLab/IQuest-Coder-V1-7B-Thinking](https://huggingface.co/IQuestLab/IQuest-Coder-V1-7B-Th... | [] |
NabilGarmouti/nabsTD2 | NabilGarmouti | 2025-10-10T14:25:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"token-classification",
"generated_from_trainer",
"base_model:almanach/camembert-base",
"base_model:finetune:almanach/camembert-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-10-10T14:25:27Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the None d... | [] |
shinich001/qwen3-4b-h100-bs64-r64 | shinich001 | 2026-02-22T06:21:04Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-22T06:20:46Z | qwen3-4b-h100-bs64-r64
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **structured... | [
{
"start": 124,
"end": 129,
"text": "QLoRA",
"label": "training method",
"score": 0.7850371599197388
}
] |
leejimin/2b_ours3 | leejimin | 2026-01-17T03:49:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"alignment-handbook",
"generated_from_trainer",
"dataset:princeton-nlp/gemma2-ultrafeedback-armorm",
"base_model:google/gemma-2-2b-it",
"base_model:adapter:google/gemma-2-2b-it",
"license:gemma",
"region:us"
] | null | 2026-01-17T03:48:43Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-2-2b-it-simpo
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it) on ... | [] |
ByteDance-Seed/Seed-OSS-36B-Instruct | ByteDance-Seed | 2025-08-26T02:33:00Z | 22,179 | 492 | transformers | [
"transformers",
"safetensors",
"seed_oss",
"text-generation",
"vllm",
"conversational",
"license:apache-2.0",
"endpoints_compatible",
"deploy:azure",
"region:us"
] | text-generation | 2025-08-20T15:03:26Z | <div align="center">
👋 Hi, everyone!
<br>
We are <b>ByteDance Seed Team.</b>
</div>
<p align="center">
You can get to know us better through the following channels👇
<br>
<a href="https://seed.bytedance.com/">
<img src="https://img.shields.io/badge/Website-%231e37ff?style=for-the-badge&logo=bytedan... | [] |
aixk/aixk_custom_model-gguf | aixk | 2026-03-22T09:04:24Z | 75 | 0 | null | [
"aixk_custom_arch",
"region:us"
] | null | 2026-03-07T05:50:18Z | <div align="center">
<img src="https://cdn.jsdelivr.net/gh/sllkx/icons@main/logo/isai2.png" alt="ISAI Logo" width="160" style="border-radius: 30px; box-shadow: 0 4px 12px rgba(0,0,0,0.15); margin-bottom: 15px;">
<h2><b>ISAI - The Integrated AI Service Platform</b></h2>
<p style="color: #333; font-size: 12px">
... | [] |
the-acorn-ai/spiral-octothinker-8b-multi-three-games-step00288 | the-acorn-ai | 2025-09-01T06:27:50Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"spiral",
"self-play",
"reinforcement-learning",
"octothinker",
"multi-agent",
"conversational",
"en",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-01T06:27:07Z | # SPIRAL OctoThinker-8B Multi-Agent Model
This model was trained using the SPIRAL (Self-Play Iterative Reinforcement learning for Adaptation and Learning) framework.
## Model Details
- **Base Model**: OctoAI/OctoThinker-8B
- **Training Framework**: SPIRAL
- **Checkpoint**: step_00288
- **Model Size**: 8B parameters
... | [] |
bg-digitalservices/Apertus-70B-Instruct-2509-NVFP4A16 | bg-digitalservices | 2026-04-06T16:23:13Z | 181 | 0 | transformers | [
"transformers",
"safetensors",
"apertus",
"text-generation",
"nvidia",
"nvfp4",
"modelopt",
"quantized",
"swiss-ai",
"blackwell",
"W4A16",
"post-training-quantization",
"conversational",
"multilingual",
"base_model:swiss-ai/Apertus-70B-Instruct-2509",
"base_model:quantized:swiss-ai/Ape... | text-generation | 2026-04-04T00:07:53Z | # Apertus-70B-Instruct-2509-NVFP4A16
NVFP4 quantization of [swiss-ai/Apertus-70B-Instruct-2509](https://huggingface.co/swiss-ai/Apertus-70B-Instruct-2509) — part of the Swiss AI Apertus model family. 70B dense transformer supporting 1,811 languages with 65K context.
**W4A16 — weights in FP4, activations in FP16 (weig... | [] |
leonzc/llama400m-climblab-function_calling-5k-filtered-dora-merged | leonzc | 2025-08-27T17:26:12Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"dora",
"lora",
"en",
"base_model:data4elm/Llama-400M-12L",
"base_model:adapter:data4elm/Llama-400M-12L",
"license:apache-2.0",
"region:us"
] | null | 2025-08-27T17:25:59Z | # llama400m-climblab-function_calling-5k-filtered-dora-merged
DoRA fine-tuned LLaMA 400M model on filtered 5k data from functioncalling_eval dataset using LMFlow
## Model Details
This model is a DoRA-finetuned version of [data4elm/Llama-400M-12L](https://huggingface.co/data4elm/Llama-400M-12L).
The standalone adapter i... | [] |
Otilde/Ministral-3-3B-Instruct-2512-Q7-MLX-Dynamic | Otilde | 2026-01-04T15:11:42Z | 11 | 0 | mlx | [
"mlx",
"safetensors",
"mistral3",
"mistral-common",
"text-generation",
"conversational",
"en",
"fr",
"es",
"de",
"it",
"pt",
"nl",
"zh",
"ja",
"ko",
"ar",
"base_model:mistralai/Ministral-3-3B-Instruct-2512",
"base_model:quantized:mistralai/Ministral-3-3B-Instruct-2512",
"licens... | text-generation | 2026-01-03T17:02:39Z | # Otilde/Ministral-3-3B-Instruct-2512-Q7-MLX-Dynamic
The model [Otilde/Ministral-3-3B-Instruct-2512-Q7-MLX-Dynamic](https://huggingface.co/Otilde/Ministral-3-3B-Instruct-2512-Q7-MLX-Dynamic/) was converted to Dynamic MLX format from [mistralai/Ministral-3-3B-Instruct-2512](https://huggingface.co/mistralai/Ministral-3-... | [] |
Vincenzo2K04/NINA-Qwen3-4B | Vincenzo2K04 | 2026-03-26T09:28:21Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"healthcare",
"nursing",
"guardrails",
"fine-tuned",
"en",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:apache-2.0",
"region:us"
] | null | 2026-03-26T06:59:40Z | # NINA — Nursing Intelligent Network Assistant (Qwen3-4B)
NINA is a fine-tuned **Qwen/Qwen3-4B** model for nursing documentation tasks with strict, non-overridable guardrails trained via SFT.
## Absolute limits (cannot be overridden)
- Never provides clinical diagnoses or differentials
- Never recommends treatments,... | [
{
"start": 188,
"end": 191,
"text": "SFT",
"label": "training method",
"score": 0.797097384929657
}
] |
Abubakar17/ext_drop_tape | Abubakar17 | 2025-12-14T00:45:26Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:Abubakar17/mission_2_drop_tape",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-14T00:45:20Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
wbkou/Qwen3.5-40B-Claude-4.5-Opus-Distilled-MLX-mxfp8 | wbkou | 2026-03-16T10:34:06Z | 135 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_5",
"unsloth",
"fine tune",
"all use cases",
"coder",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"science fiction",
"romance",
"al... | image-text-to-text | 2026-03-16T10:34:04Z | # Qwen3.5-40B-Claude-4.5-Opus-High-Reasoning-Thinking
**Quality**: quantized (*mxfp8, group size: 32, 8.341 bpw*)
**40** billion parameters (**dense**, not moe) expanded from Qwen3.5 27B, then trained on Claude 4.6 Opus High Reasoning dataset via Unsloth on local hardware.
96 layers, 1275 Tensors. (50% more than bas... | [
{
"start": 249,
"end": 256,
"text": "Unsloth",
"label": "training method",
"score": 0.7765886187553406
}
] |
abduazizovanozima7/uzbek-trocr-final-v2 | abduazizovanozima7 | 2026-02-18T06:40:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:microsoft/trocr-base-handwritten",
"base_model:finetune:microsoft/trocr-base-handwritten",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-02-17T15:49:15Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uzbek-trocr-final-v2
This model is a fine-tuned version of [microsoft/trocr-base-handwritten](https://huggingface.co/microsoft/tr... | [] |
KID-7391/CoTAP | KID-7391 | 2025-09-12T13:10:59Z | 0 | 0 | null | [
"image-feature-extraction",
"arxiv:2509.09429",
"license:mit",
"region:us"
] | image-feature-extraction | 2025-09-11T10:30:11Z | # CoTAP: Semantic Concentration for Self-Supervised Dense Representations Learning
This repository contains the official implementation for the paper "[Semantic Concentration for Self-Supervised Dense Representations Learning](https://huggingface.co/papers/2509.09429)", accepted by IEEE Transactions on Pattern Analysi... | [] |
mradermacher/Harvey-9B-GGUF | mradermacher | 2026-04-11T13:44:00Z | 500 | 2 | transformers | [
"transformers",
"gguf",
"legal",
"el-salvador",
"jurisprudence",
"en",
"es",
"base_model:Aquiles-ai/Harvey-9B",
"base_model:quantized:Aquiles-ai/Harvey-9B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-10T10:56:46Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mradermacher/mn-12b-rp-but-dumb-i1-GGUF | mradermacher | 2025-12-31T21:12:03Z | 7 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Burnt-Toast/mn-12b-rp-but-dumb",
"base_model:quantized:Burnt-Toast/mn-12b-rp-but-dumb",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-08T22:53:34Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
contemmcm/45b0a8381455abf27e4627a11e2f2df2 | contemmcm | 2025-11-09T22:29:51Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"umt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/umt5-small",
"base_model:finetune:google/umt5-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-11-09T20:34:52Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 45b0a8381455abf27e4627a11e2f2df2
This model is a fine-tuned version of [google/umt5-small](https://huggingface.co/google/umt5-sma... | [] |
mradermacher/R3-Qwen2.5-7B-LoRA-4k-GGUF | mradermacher | 2025-08-10T12:03:31Z | 1 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:rubricreward/R3-Qwen2.5-7B-LoRA-4k",
"base_model:quantized:rubricreward/R3-Qwen2.5-7B-LoRA-4k",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-10T10:20:35Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
sulabhkatiyar/indicconformer-120m-onnx | sulabhkatiyar | 2026-02-21T14:26:30Z | 0 | 1 | null | [
"onnx",
"automatic-speech-recognition",
"indian-languages",
"ctc",
"conformer",
"as",
"bn",
"gu",
"hi",
"kn",
"ml",
"mr",
"or",
"pa",
"ta",
"te",
"ur",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2026-02-20T16:00:44Z | # IndicConformer 120M — Per-Language ONNX Models for Indian Language ASR
CTC-only ONNX exports of [AI4Bharat](https://ai4bharat.iitm.ac.in/)'s **IndicConformer hybrid CTC/RNN-T large** models (~120M parameters each). One model per language for 12 Indian languages, optimized for batch inference with ONNX Runtime (GPU v... | [] |
GalacticWalker/dqn-SpaceInvaders | GalacticWalker | 2025-12-18T21:27:40Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-12-18T21:27:07Z | # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework... | [] |
nhmacwan/smoke_test_act_0 | nhmacwan | 2026-04-10T08:10:52Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:nhmacwan/3_Card_Monte_Dataset_3",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-10T08:10:38Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
sb-x/mms-1b-bbl | sb-x | 2026-01-02T18:50:28Z | 10 | 0 | null | [
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"mms",
"speech",
"license:cc-by-nc-4.0",
"region:us"
] | automatic-speech-recognition | 2026-01-02T18:20:50Z | # MMS ASR – Custom Fine-tuned Model
This model is a fine-tuned version of **Meta AI's Massively Multilingual Speech (MMS)** model
for automatic speech recognition (ASR).
## Base model
- Meta AI – MMS (Wav2Vec2ForCTC)
## License
This model is released under the **Creative Commons Attribution–NonCommercial 4.0 (CC BY-... | [] |
Sreevishakh/my_policy | Sreevishakh | 2025-11-13T20:56:42Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:Sreevishakh/test",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-13T20:55:53Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
Shubhagaman/Gita-embeddings | Shubhagaman | 2025-10-25T09:12:39Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:1000",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12... | sentence-similarity | 2025-10-25T08:01:18Z | # SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). I... | [] |
mradermacher/PAPO-G-H-Qwen2.5-VL-7B-GGUF | mradermacher | 2025-12-08T15:21:25Z | 73 | 1 | transformers | [
"transformers",
"gguf",
"en",
"dataset:PAPOGalaxy/PAPO_train",
"base_model:PAPOGalaxy/PAPO-G-H-Qwen2.5-VL-7B",
"base_model:quantized:PAPOGalaxy/PAPO-G-H-Qwen2.5-VL-7B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-08T14:44:34Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
PetarKal/Qwen3-4B-ascii-art-grpo-from-sft-lora-v2 | PetarKal | 2026-03-08T11:53:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:PetarKal/Qwen3-4B-ascii-art-e5-lr3e-5-ga16-base",
"base_model:finetune:PetarKal/Qwen3-4B-ascii-art-e5-lr3e-5-ga16-base",
"endpoints_compatible",
"region:us"
] | null | 2026-03-08T09:49:03Z | # Model Card for Qwen3-4B-ascii-art-grpo-from-sft-lora-v2
This model is a fine-tuned version of [PetarKal/Qwen3-4B-ascii-art-e5-lr3e-5-ga16-base](https://huggingface.co/PetarKal/Qwen3-4B-ascii-art-e5-lr3e-5-ga16-base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from... | [] |
Dikshant182004/t5-base-lora-finetune-tweetsumm-1759930365 | Dikshant182004 | 2025-10-08T13:39:28Z | 1 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:google-t5/t5-base",
"lora",
"transformers",
"dataset:Andyrasika/TweetSumm-tuned",
"base_model:google-t5/t5-base",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-10-08T13:39:26Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-lora-finetune-tweetsumm-1759930365
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/googl... | [] |
nightmedia/Nemotron-Orchestrator-8B-DeepSeek-v3.2-Speciale-Distill-qx86-hi-mlx | nightmedia | 2025-12-10T00:29:08Z | 242 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"transformers",
"unsloth",
"conversational",
"en",
"dataset:TeichAI/deepseek-v3.2-speciale-1000x",
"base_model:TeichAI/Nemotron-Orchestrator-8B-DeepSeek-v3.2-Speciale-Distill",
"base_model:quantized:TeichAI/Nemotron... | text-generation | 2025-12-09T11:59:00Z | # Nemotron-Orchestrator-8B-DeepSeek-v3.2-Speciale-Distill-qx86-hi-mlx
This model [Nemotron-Orchestrator-8B-DeepSeek-v3.2-Speciale-Distill-qx86-hi-mlx](https://huggingface.co/Nemotron-Orchestrator-8B-DeepSeek-v3.2-Speciale-Distill-qx86-hi-mlx) was
converted to MLX format from [TeichAI/Nemotron-Orchestrator-8B-DeepSeek-... | [] |
jruffle/pca_samples_16d | jruffle | 2026-01-10T15:07:02Z | 0 | 0 | null | [
"joblib",
"transcriptomics",
"dimensionality-reduction",
"pca",
"TRACERx",
"license:mit",
"region:us"
] | null | 2026-01-10T15:06:58Z | # PCA Model - samples mode - 16D
Pre-trained pca model for transcriptomic data compression.
## Details
- **Mode**: samples-centric compression
- **Dimensions**: 16
- **Training data**: TRACERx lung cancer transcriptomics
- **Created**: 2026-01-10T15:06:58.873393
## Usage
```python
import joblib
from huggingface_hub... | [] |
vmo247/vmoma-4-31b-it-awq-4bit | vmo247 | 2026-04-19T12:25:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"conversational",
"base_model:google/gemma-4-31B-it",
"base_model:quantized:google/gemma-4-31B-it",
"license:apache-2.0",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] | image-text-to-text | 2026-04-19T12:17:31Z | <div align="center">
<img src=https://ai.google.dev/gemma/images/gemma4_banner.png>
</div>
<p align="center">
<a href="https://huggingface.co/collections/google/gemma-4" target="_blank">Hugging Face</a> |
<a href="https://github.com/google-gemma" target="_blank">GitHub</a> |
<a href="https://blog.google... | [] |
AXONVERTEX-AI-RESEARCH/Qwen3-VL-8B-Instruct-FP8-Q8_0-GGUF | AXONVERTEX-AI-RESEARCH | 2025-11-29T17:48:58Z | 101 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:Qwen/Qwen3-VL-8B-Instruct-FP8",
"base_model:quantized:Qwen/Qwen3-VL-8B-Instruct-FP8",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-11-29T17:47:24Z | # AXONVERTEX-AI-RESEARCH/Qwen3-VL-8B-Instruct-FP8-Q8_0
This model was converted to GGUF format from [`Qwen/Qwen3-VL-8B-Instruct-FP8`](https://huggingface.co/Qwen/Qwen3-VL-8B-Instruct-FP8) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original ... | [] |
rfuiid8/humanoid-mcdangdut-model | rfuiid8 | 2026-01-02T04:30:53Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-02T04:30:39Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# humanoid-mcdangdut-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown data... | [] |
Helsinki-NLP/opus-mt-caenes-eo_tiny | Helsinki-NLP | 2026-03-30T13:21:08Z | 0 | 0 | null | [
"translation",
"machine-translation",
"marian",
"opus-mt",
"multilingual",
"eo",
"en",
"es",
"ca",
"license:cc-by-4.0",
"region:us"
] | translation | 2026-03-30T13:13:42Z | # Catalan, English, Spanish -> Esperanto MT Model
## Model description
This repository contains a **multilingual MarianMT** model for **(English, Spanish, Catalan) → Esperanto** translation with tiny architecture.
This model is **not intended for direct inference through the Hugging Face `transformers` library**.
U... | [] |
auphong2707/wm-grsa-lstm-baseline-results | auphong2707 | 2025-12-23T20:07:29Z | 0 | 0 | null | [
"sentiment-analysis",
"game-reviews",
"text-classification",
"wm-grsa-lstm-baseline",
"en",
"dataset:game-reviews",
"license:mit",
"region:us"
] | text-classification | 2025-12-13T09:38:49Z | # Wm-Grsa-Lstm-Baseline - Game Review Sentiment Analysis
## Model Description
This model performs sentiment analysis on game reviews, classifying them into three categories:
- **Positive**: Favorable reviews
- **Mixed**: Neutral or mixed sentiment reviews
- **Negative**: Unfavorable reviews
**Model Type**: Wm-Grsa-L... | [] |
EasyDeL/Kimi-VL-A3B-Instruct | EasyDeL | 2025-12-28T13:02:56Z | 14 | 1 | easydel | [
"easydel",
"kimi_vl",
"jax",
"CausalLM",
"ragged_page_attention_v3",
"text-generation",
"conversational",
"custom_code",
"region:us"
] | text-generation | 2025-12-15T10:57:37Z | <p align="center">
<img alt="easydel" src="https://raw.githubusercontent.com/erfanzar/easydel/main/images/easydel-logo-with-text.png">
</p>
<h1 align="center">Kimi-VL-A3B-Instruct</h1>
<div align="center">
A model compatible with the EasyDeL JAX stack.
</div>
## Overview
This checkpoint is intended to be loaded... | [] |
Undi95/Mistral-11B-OmniMix-bf16-GGUF | Undi95 | 2023-10-14T13:44:23Z | 8 | 3 | null | [
"gguf",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-10-12T17:24:22Z | This model should be fixed, it was MEANT to be BF16.
Don't mind this one at the moment, I need to finetune it for RP, it's just a test.
## Description
This repo contains quantized files of Mistral-11B-OmniMix-bf16.
My goal for this model was only to make it score the highest possible with merge and layer toying, pr... | [] |
Inquisz/flux-lora | Inquisz | 2025-08-26T20:14:57Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-08-26T18:45:15Z | # Flux Lora
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-train... | [] |
ImNotTam/finetuned_12_12 | ImNotTam | 2025-12-12T09:28:14Z | 0 | 0 | null | [
"safetensors",
"llm-judge",
"training-checkpoint",
"lora",
"unsloth",
"vi",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-12-12T09:27:23Z | # finetuned_12_12
Full training folder backup - Toàn bộ checkpoints và models.
## 📂 Cấu trúc Folder
```
train_
├── lora_adapters/ # LoRA adapters
├── README.md
├── zero_shot_metrics.json
└── zero_shot_results.csv
```
## 🚀 Sử Dụng
### 1️⃣ Clone Repo
```bash
git lfs install
git clone https://huggingface.c... | [] |
introvoyz041/SenseNova-SI-1.2-InternVL3-8B-Q8_0-GGUF | introvoyz041 | 2025-12-23T14:23:01Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:sensenova/SenseNova-SI-1.2-InternVL3-8B",
"base_model:quantized:sensenova/SenseNova-SI-1.2-InternVL3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-12-23T14:22:25Z | # introvoyz041/SenseNova-SI-1.2-InternVL3-8B-Q8_0-GGUF
This model was converted to GGUF format from [`sensenova/SenseNova-SI-1.2-InternVL3-8B`](https://huggingface.co/sensenova/SenseNova-SI-1.2-InternVL3-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Ref... | [] |
nightmedia/LIMI-Air-qx65-hi-mlx | nightmedia | 2025-11-06T22:06:06Z | 5 | 2 | mlx | [
"mlx",
"safetensors",
"glm4_moe",
"text-generation",
"agent",
"tool-use",
"long-context",
"conversational",
"en",
"base_model:GAIR/LIMI-Air",
"base_model:quantized:GAIR/LIMI-Air",
"license:other",
"6-bit",
"region:us"
] | text-generation | 2025-09-23T23:59:53Z | # LIMI-Air-qx65-hi-mlx
This is a deep comparison of 106B-A12B MoE models, all quantized differently, trained on different data (original, synthetic, RP), and with varying architectural tuning. The goal is to understand:
- Which model performs best across benchmarks?
- How does quantization affect performance and conte... | [] |
jian001/smolvla_so101_GGorBB1 | jian001 | 2025-08-17T04:54:18Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:jian001/record-GGorBB1",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-17T04:49:37Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
janhq/Jan-v2-VL-low-4bit-mlx | janhq | 2026-02-12T11:18:51Z | 31 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_vl",
"vision",
"multimodal",
"en",
"base_model:janhq/Jan-v2-VL-low",
"base_model:quantized:janhq/Jan-v2-VL-low",
"license:apache-2.0",
"4-bit",
"region:us"
] | null | 2026-02-12T11:18:07Z | # Jan-v2-VL-low 4-bit MLX
This is a 4-bit quantized MLX conversion of [janhq/Jan-v2-VL-low](https://huggingface.co/janhq/Jan-v2-VL-low).
## Model Description
Jan-v2-VL is an 8-billion parameter vision-language model designed for long-horizon, multi-step tasks in real software environments. This "low" variant is opti... | [] |
Brooooooklyn/Qwen3.6-27B-UD-Q5_K_XL-mlx | Brooooooklyn | 2026-04-25T14:37:46Z | 0 | 0 | mlx-node | [
"mlx-node",
"safetensors",
"qwen3_5",
"mlx",
"quantized",
"awq",
"5-bit",
"qwen3.6",
"hybrid-attention",
"gated-delta-net",
"apple-silicon",
"unsloth-dynamic",
"text-generation",
"conversational",
"en",
"zh",
"base_model:Qwen/Qwen3.6-27B",
"base_model:quantized:Qwen/Qwen3.6-27B",
... | text-generation | 2026-04-25T14:35:33Z | # Qwen3.6-27B — UD-Q5_K_XL (mlx-node)
5-bit base mixed-precision quantization of [Qwen/Qwen3.6-27B](https://huggingface.co/Qwen/Qwen3.6-27B) for Apple Silicon, using the [**Unsloth Dynamic** quantization strategy](https://unsloth.ai/docs/models/qwen3.5/gguf-benchmarks) via [mlx-node](https://github.com/mlx-node/mlx-no... | [] |
Licon/myemoji-gemma-adapters | Licon | 2025-11-16T06:40:32Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-16T06:29:05Z | # Model Card for myemoji-gemma-adapters
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine... | [] |
thomasbeste/modernbert-da-ner-base-onnx-int8 | thomasbeste | 2026-02-16T06:39:36Z | 11 | 0 | null | [
"onnx",
"modernbert",
"token-classification",
"ner",
"danish",
"quantized",
"int8",
"da",
"dataset:alexandrainst/dane",
"license:apache-2.0",
"region:us"
] | token-classification | 2026-02-16T06:32:33Z | # ModernBERT Danish NER (Base) — ONNX INT8
ONNX INT8 dynamically quantized version of [`thomasbeste/modernbert-da-ner-base`](https://huggingface.co/thomasbeste/modernbert-da-ner-base).
Quantized with AVX-512 VNNI configuration for fast CPU inference.
## Benchmark: DaNE Test Set
| Entity | Precision | Recall | F1 | ... | [] |
aldenb/scout-0 | aldenb | 2026-03-19T02:55:54Z | 90 | 0 | transformers | [
"transformers",
"onnx",
"safetensors",
"deberta-v2",
"text-classification",
"prompt-injection",
"security",
"deberta-v3",
"en",
"dataset:deepset/prompt-injections",
"dataset:xTRam1/safe-guard-prompt-injection",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/no_robots",
"dat... | text-classification | 2026-03-17T09:08:09Z | # Scout -- Prompt Injection Classifier (22M)
A lightweight DeBERTa-v3-xsmall text classifier (22M non-embedding parameters; 71M total including 128K-token vocabulary embeddings) that scans extracted text for prompt injection attacks and **omits flagged content before the LLM ever sees it**. Designed for defending LL... | [] |
nscharrenberg/DBNL-QA-EN-e5-s1024-lr-1e-4-lr-seed3704 | nscharrenberg | 2025-10-15T09:57:54Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-1B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-10-15T09:56:55Z | # Model Card for DBNL-QA-EN-e5-s1024-lr-1e-4-lr-seed3704
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
questi... | [] |
Ripefog/Nemotron_translation | Ripefog | 2026-02-12T16:19:07Z | 3 | 0 | null | [
"safetensors",
"translation",
"nemotron",
"lora-merged",
"en",
"vi",
"base_model:nvidia/NVIDIA-Nemotron-Nano-9B-v2",
"base_model:finetune:nvidia/NVIDIA-Nemotron-Nano-9B-v2",
"license:apache-2.0",
"region:us"
] | translation | 2026-02-12T16:11:55Z | # Nemotron Translation Model (EN ↔ VI)
This is a merged model combining the NVIDIA Nemotron Nano 9B base model with fine-tuned LoRA adapters for English-Vietnamese translation.
## Model Details
- **Base Model:** nvidia/NVIDIA-Nemotron-Nano-9B-v2
- **Adapter Path:** ./model_nemo/checkpoint-49500
- **Task:** Bidirecti... | [] |
AxionML/Kimi-K2.5-MXFP8 | AxionML | 2026-03-03T04:32:00Z | 284 | 0 | transformers | [
"transformers",
"safetensors",
"kimi_k25",
"feature-extraction",
"AxionML",
"ModelOpt",
"Kimi",
"quantized",
"MXFP8",
"mxfp8",
"sglang",
"image-text-to-text",
"conversational",
"custom_code",
"base_model:moonshotai/Kimi-K2.5",
"base_model:quantized:moonshotai/Kimi-K2.5",
"license:oth... | image-text-to-text | 2026-03-03T04:24:29Z | # AxionML Kimi-K2.5-MXFP8
> Developed by [AxionML](https://huggingface.co/AxionML) for open-source serving and deployment use cases. Part of AxionML's effort to provide ready-to-serve quantized models for the community.
This is an MXFP8-quantized version of [moonshotai/Kimi-K2.5](https://huggingface.co/moonshotai/Kim... | [] |
Stardragon2099/qwen2-7b-instruct-trl-sft-ChartQA | Stardragon2099 | 2025-08-29T06:13:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-29T04:15:15Z | # Model Card for qwen2-7b-instruct-trl-sft-ChartQA
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you h... | [] |
atrost/nanochat-d24-matformer-smlx-sft-xl | atrost | 2026-05-04T17:13:19Z | 0 | 0 | nanochat | [
"nanochat",
"extracted-submodel",
"sft",
"region:us"
] | null | 2026-05-04T17:13:05Z | # atrost/nanochat-d24-matformer-smlx-sft-xl
This is a standalone nanochat-native checkpoint extracted from
`atrost/nanochat-d24-matformer-smlx-sft/d24_matformer_smlx_sft` at step `000485`.
- Family: `matformer`
- Submodel: `XL`
- Config: `{"attn_alpha_init_value": 1.0, "dec_alpha_init_value": 1.0, "ffn_alpha_init_val... | [] |
abubin12599/distiluse-base-multilingual-cased-v2 | abubin12599 | 2025-12-08T12:27:34Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"distilbert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:50000",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/distiluse-base-multilingual... | sentence-similarity | 2025-12-08T11:43:10Z | # SentenceTransformer based on sentence-transformers/distiluse-base-multilingual-cased-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/distiluse-base-multilingual-cased-v2](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v2). It map... | [] |
anonymous-omniguard/OmniGuard-3B | anonymous-omniguard | 2026-05-04T12:44:47Z | 12 | 0 | null | [
"safetensors",
"qwen2_5_omni",
"safety",
"moderation",
"multimodal",
"omniguard",
"text-generation",
"conversational",
"en",
"zh",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-01-26T16:15:54Z | # OmniGuard: Unified Omni-Modal Guardrails with Deliberate Reasoning
OmniGuard is a multimodal safety evaluation model designed to assess content safety across text, images, audio, and video. Built on the Qwen2.5-Omni architecture, it provides structured safety reasoning and policy enforcement.
## Model Information
... | [] |
Aniket2003333333/xtts-v2-child-voice-finetuned | Aniket2003333333 | 2026-04-27T08:18:58Z | 0 | 0 | null | [
"text-to-speech",
"tts",
"xtts",
"child-voice",
"coqui",
"en",
"license:mit",
"region:us"
] | text-to-speech | 2026-04-27T08:01:34Z | # XTTS v2 Fine-Tuned — Child Voice
This is a fine-tuned version of [coqui/XTTS-v2](https://huggingface.co/coqui/XTTS-v2)
trained on the [Samromur Children](https://huggingface.co/datasets/language-and-voice-lab/samromur_children)
dataset to produce a child-like voice in English.
## How to Use
```python
pip install... | [] |
KKvision/PSTU_AI_sem-1_bert-language-classifier | KKvision | 2026-04-24T11:34:09Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-04-24T11:29:56Z | # Классификатор языка документа
Лабораторная работа — задание 8. Реализован классификатор языка текста на основе BERT.
## Модель
`bert-base-multilingual-cased` — предобученная модель от Google, знает 104 языка. Поверх неё добавлен классификационный слой и выполнен файнтюнинг на датасете языковой идентификации.
## Д... | [] |
frankwong2001/2_attempt_mxbai-embed-large-v1 | frankwong2001 | 2025-09-04T09:54:37Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:4524",
"loss:MultipleNegativesRankingLoss",
"dataset:frankwong2001/ssf-train-valid-full-synthetic-batch10",
"arxiv:1908.10084",
"arxiv:1705.00652",
"b... | sentence-similarity | 2025-09-04T09:54:21Z | # SentenceTransformer based on mixedbread-ai/mxbai-embed-large-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) on the [ssf-train-valid-full-synthetic-batch10](https://huggingface.co/datasets... | [] |
vinhpx/ocr_finetune_gguf | vinhpx | 2026-05-02T12:03:10Z | 0 | 0 | null | [
"gguf",
"qwen3_5",
"llama.cpp",
"unsloth",
"vision-language-model",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-05-02T12:02:53Z | # ocr_finetune_gguf : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf vinhpx/ocr_finetune_gguf --jinja`
- For multimodal models: `llama-mtmd-cli -hf vinhpx/ocr_finetune_gguf --jinja`
## Availa... | [
{
"start": 89,
"end": 96,
"text": "Unsloth",
"label": "training method",
"score": 0.7127371430397034
},
{
"start": 127,
"end": 134,
"text": "unsloth",
"label": "training method",
"score": 0.7931706309318542
},
{
"start": 471,
"end": 478,
"text": "unsloth",... |
chazokada/qwen25_32b_instruct_mo_bad_medical_advice_s2 | chazokada | 2026-04-14T23:38:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"endpoints_compatible",
"region:us"
] | null | 2026-04-14T23:14:41Z | # Model Card for qwen25_32b_instruct_mo_bad_medical_advice_s2
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could on... | [] |
Macropodus/macbert4mdcspell_v3 | Macropodus | 2026-01-29T08:07:53Z | 91 | 1 | null | [
"pytorch",
"tensorboard",
"bert",
"csc",
"text-correct",
"chinses-spelling-correct",
"chinese-spelling-check",
"中文拼写纠错",
"文本纠错",
"mdcspell",
"macro-correct",
"text-generation",
"zh",
"arxiv:2407.15498",
"arxiv:2308.08796",
"arxiv:2412.12863",
"arxiv:2305.17721",
"arxiv:2212.04068",... | text-generation | 2026-01-29T08:03:33Z | # macbert4mdcspell
## 概述(macbert4mdcspell)
- macro-correct, 中文拼写纠错CSC测评(文本纠错), 权重使用
- 项目地址在 [https://github.com/yongzhuo/macro-correct](https://github.com/yongzhuo/macro-correct)
- 本模型权重为macbert4mdcspell_v3, 使用mdcspell架构, 其特点是det_label和cor_label交互;
- 训练时加入了macbert的mlm-loss, 推理时舍弃了macbert后面的部分;
- 如何使用: 1.使用transfor... | [
{
"start": 46,
"end": 59,
"text": "macro-correct",
"label": "training method",
"score": 0.8851070404052734
},
{
"start": 123,
"end": 136,
"text": "macro-correct",
"label": "training method",
"score": 0.8076991438865662
},
{
"start": 166,
"end": 179,
"text"... |
mradermacher/gemma-3-4b-it-heretic-uncensored-abliterated-balanced-GGUF | mradermacher | 2025-11-23T21:56:54Z | 135 | 1 | transformers | [
"transformers",
"gguf",
"heretic",
"uncensored",
"decensored",
"abliterated",
"finetune",
"en",
"base_model:DavidAU/gemma-3-4b-it-heretic-uncensored-abliterated-balanced",
"base_model:quantized:DavidAU/gemma-3-4b-it-heretic-uncensored-abliterated-balanced",
"endpoints_compatible",
"region:us",... | null | 2025-11-23T21:17:46Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
kashishgupta/rl_course_vizdoom_health_gathering_supreme | kashishgupta | 2026-04-26T18:10:36Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2026-04-26T18:10:28Z | A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sam... | [
{
"start": 7,
"end": 11,
"text": "APPO",
"label": "training method",
"score": 0.8271263837814331
},
{
"start": 637,
"end": 641,
"text": "APPO",
"label": "training method",
"score": 0.7958166599273682
},
{
"start": 715,
"end": 757,
"text": "rl_course_vizdoo... |
qualiaadmin/c23a0c9c-4ce6-446b-a5d9-360f425dc178 | qualiaadmin | 2026-01-15T15:24:24Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:LeRobotChild/my_robot_dataset_v1.19",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-15T15:23:58Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
eddieman78/movie-coref-qwen3-14b-64-1e4-5-dead_poets-2000 | eddieman78 | 2025-09-04T18:15:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2025-09-03T17:28:38Z | # Model Card for movie-coref-qwen3-14b-64-1e4-5-dead_poets-2000
This model is a fine-tuned version of [unsloth/qwen3-14b-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-14b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import... | [] |
XLOverflow/qwen3-eagle3-accrate-a3.0 | XLOverflow | 2026-04-20T02:04:22Z | 0 | 0 | null | [
"safetensors",
"llama",
"qwen3",
"eagle3",
"speculative-decoding",
"draft-model",
"base_model:AngelSlim/Qwen3-8B_eagle3",
"base_model:finetune:AngelSlim/Qwen3-8B_eagle3",
"license:apache-2.0",
"region:us"
] | null | 2026-04-20T01:52:56Z | # AccRate (α=3.0) — EAGLE3 Draft Model for Qwen3-8B
AccRate ablation with stronger per-step weighting (α=3.0). Not in main results table.
Part of a course project evaluating per-step weighted loss functions for training
EAGLE3 draft models. Full pipeline and source:
**https://github.com/XLOverflow/anlp_course_project... | [] |
fishaudio/fish-speech-1.5 | fishaudio | 2025-03-25T10:07:44Z | 6,686 | 714 | null | [
"dual_ar",
"text-to-speech",
"zh",
"en",
"de",
"ja",
"fr",
"es",
"ko",
"ar",
"nl",
"ru",
"it",
"pl",
"pt",
"arxiv:2411.01156",
"license:cc-by-nc-sa-4.0",
"region:us"
] | text-to-speech | 2024-11-24T04:27:15Z | # Fish Speech V1.5
**Fish Speech V1.5** is a leading text-to-speech (TTS) model trained on more than 1 million hours of audio data in multiple languages.
Supported languages:
- English (en) >300k hours
- Chinese (zh) >300k hours
- Japanese (ja) >100k hours
- German (de) ~20k hours
- French (fr) ~20k hours
- Spanish (... | [] |
CelesteImperia/Whisper-Large-v3-Turbo-OpenVINO-INT4 | CelesteImperia | 2026-03-23T12:58:34Z | 37 | 0 | null | [
"openvino",
"whisper",
"nncf",
"automatic-speech-recognition",
"edge-ai",
"celeste-imperia",
"en",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2026-03-01T08:23:29Z | # Celeste Imperia | Whisper-Large-v3-Turbo (OpenVINO INT4 Gold)


[](https://razorpay.me/@huggingface)
**The specialized transcription en... | [] |
alesiaivanova/Qwen-3b-GRPO-compute-tradeoff-last-v0-125-50-25-4-sub | alesiaivanova | 2025-09-25T09:02:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-09-25T09:01:43Z | # Model Card for Qwen-3b-GRPO-compute-tradeoff-last-v0-125-50-25-4-sub
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but... | [
{
"start": 906,
"end": 910,
"text": "GRPO",
"label": "training method",
"score": 0.7075552344322205
},
{
"start": 1201,
"end": 1205,
"text": "GRPO",
"label": "training method",
"score": 0.7539122104644775
}
] |
Kumo2023/astrowife | Kumo2023 | 2025-08-28T18:24:24Z | 1 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-08-28T17:17:23Z | # Astrowife
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-train... | [] |
Ching2602/blinkdoggy | Ching2602 | 2026-03-04T03:40:51Z | 16 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Wan-AI/Wan2.2-I2V-A14B",
"base_model:adapter:Wan-AI/Wan2.2-I2V-A14B",
"region:us"
] | text-to-image | 2026-03-04T03:40:51Z | # blinkdoggy
<Gallery />
## Trigger words
You should use `The video begins with shot of a woman. The video then jumpcuts to the same woman now having sex in doggystyle position in the same location. From an overhead perspective` to trigger the image generation.
You should use `she is on all fours with her back fac... | [] |
davidferex/TFM_prueba6 | davidferex | 2026-03-26T17:05:40Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | 2026-03-26T09:34:45Z | # Model Card for TFM_prueba6
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could onl... | [] |
Zeolit/lettuce-emb-512d-v3 | Zeolit | 2026-02-08T12:51:33Z | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"onnx",
"nomic_bert",
"sentence-similarity",
"feature-extraction",
"onnxruntime",
"roleplay",
"custom_code",
"en",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2026-02-08T12:45:39Z | # lettuce-emb-512d-v3
ONNX package for `lettuce-emb-512d-v3`.
## Included Files
- `model.fp32.onnx` (full precision)
- `model.int8.onnx` (dynamic quantized INT8)
- `model.onnx` (FP32 convenience copy)
- Tokenizer files: `tokenizer.json`, `tokenizer_config.json`, `special_tokens_map.json`, `vocab.txt`
- Sentence-Tran... | [] |
SynphonyDev/2025-11-15-smolvla-blk | SynphonyDev | 2025-11-16T08:09:11Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:SynphonyDev/2025-11-4_single-arm-pick-and-place",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-16T08:09:04Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
Alonsarc/MTG_art_classifier2 | Alonsarc | 2026-01-07T18:22:51Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2026-01-07T18:22:48Z | # Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documen... | [] |
LiamCarter/icl-pruning-flap-flap-0.1 | LiamCarter | 2026-04-23T09:06:33Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"flap",
"pruning",
"sparse",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-23T09:05:29Z | # flap/flap_0.1
This repository was uploaded from a local experiment directory.
## Summary
- Method: `flap`
- Variant: `flap_0.1`
- Format hint: `transformers-checkpoint`
- Source path: `/scratch/chongyuan/code/pruning/icl_sparsity_study/ICL_pruning/models/flap/flap_0.1`
- Repo id: `LiamCarter/icl-pruning-flap-flap-... | [] |
mariadelcarmenramirez/kde4-en-fr | mariadelcarmenramirez | 2025-12-14T10:46:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"translation",
"opus-mt-tc",
"en",
"fr",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"endpoints_compatible",
"re... | translation | 2025-12-14T08:41:01Z | # kde4-en-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the [KDE4 dataset](https://huggingface.co/datasets/kde4) (English to French).
## Model description
This model has been adapted to the domain of **technical software documentation and ... | [] |
h-kenji/260211-3 | h-kenji | 2026-02-11T14:21:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-11T14:21:34Z | qwen3-4b-structured-output-lora-260211-3
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to im... | [
{
"start": 142,
"end": 147,
"text": "QLoRA",
"label": "training method",
"score": 0.7942889332771301
}
] |
ckwolfe-research/hj-transformer-v4 | ckwolfe-research | 2026-03-25T18:18:54Z | 0 | 0 | null | [
"robotics",
"multi-agent",
"hamilton-jacobi",
"reachability",
"drone-navigation",
"affordance-aware",
"other",
"en",
"dataset:ckwolfe-research/hj-pretrain-v4-data",
"license:mit",
"region:us"
] | other | 2026-03-25T18:18:46Z | # HJ-Pretrained Affordance-Aware Transformer
Causal Transformer pretrained on Hamilton-Jacobi (HJ) reachability labels for
multi-agent drone navigation. The model learns *feasibility structure* from
HJ value functions: "is it feasible to close the gap?", "when should I commit
to passing?", etc.
## Model Architecture... | [] |
tingtu0721/act_so100_policy_100eps_v3_server | tingtu0721 | 2026-02-27T00:52:18Z | 26 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:tingtu0721/record-dataset-100-2cameras-v3",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-27T00:51:10Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
mlx-community/whisper-small-4bit | mlx-community | 2025-12-15T17:56:26Z | 39 | 0 | mlx-audio-plus | [
"mlx-audio-plus",
"safetensors",
"whisper",
"mlx",
"speech-recognition",
"speech-to-text",
"stt",
"automatic-speech-recognition",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2025-12-14T13:56:35Z | # mlx-community/whisper-small-4bit
This model was converted to MLX format from [openai/whisper-small](https://github.com/openai/whisper) using [mlx-audio-plus](https://github.com/DePasqualeOrg/mlx-audio-plus) version **0.1.4**.
## Use with mlx-audio-plus
```bash
pip install -U mlx-audio-plus
```
### Command line
`... | [] |
newtts2017/t1snsziq | newtts2017 | 2025-10-22T10:59:08Z | 1 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-10-22T10:47:18Z | # T1Snsziq
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-traine... | [] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.