modelId string (9-122 chars) | author string (2-36 chars) | last_modified timestamp[us, tz=UTC] (2021-05-20 01:31:09 to 2026-05-05 06:14:24) | downloads int64 (0 to 4.03M) | likes int64 (0 to 4.32k) | library_name string (189 distinct values) | tags list (1-237 items) | pipeline_tag string (53 distinct values) | createdAt timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2026-05-05 05:54:22) | card string (500 to 661k chars) | entities list (0-12 items) |
|---|---|---|---|---|---|---|---|---|---|---|
RASMUS/Finnish-ASR-Canary-v2 | RASMUS | 2026-03-02T22:19:52Z | 80,549 | 0 | nemo | [
"nemo",
"automatic-speech-recognition",
"asr",
"speech-recognition",
"canary-v2",
"kenlm",
"finnish",
"fi",
"dataset:mozilla-foundation/common_voice_17_0",
"dataset:google/fleurs",
"dataset:facebook/voxpopuli",
"base_model:nvidia/canary-1b-v2",
"base_model:finetune:nvidia/canary-1b-v2",
"l... | automatic-speech-recognition | 2026-02-15T21:35:07Z | # 🇫🇮 Finnish ASR Canary-v2: State-of-the-Art Finnish Speech Recognition
A high-performance fine-tuned version of NVIDIA's **Canary-v2** (1B parameter) model, specifically optimized for the Finnish language. This project provides a robust Finnish ASR solution through two rounds of finetuning, combined with a 6-gram K... | [] |
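The card is truncated before any usage example; a minimal transcription sketch, assuming this checkpoint loads through NeMo's standard `ASRModel.from_pretrained` entry point like the base `nvidia/canary-1b-v2` (the audio path is a placeholder):

```python
# Minimal sketch, assuming the .nemo checkpoint resolves from the Hub;
# "audio.wav" is a placeholder 16 kHz mono file.
import nemo.collections.asr as nemo_asr

model = nemo_asr.models.ASRModel.from_pretrained("RASMUS/Finnish-ASR-Canary-v2")
hypotheses = model.transcribe(["audio.wav"])
print(hypotheses[0])
```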
broadfield/scratch-model-1776343512 | broadfield | 2026-04-16T12:45:14Z | 0 | 0 | null | [
"pytorch",
"scratch_transformer",
"region:us"
] | null | 2026-04-16T12:45:12Z | # scratch-model
This is a scratch transformer model created using the Incremental Model Trainer.
## Model Configuration
- **Architecture**: Transformer decoder
- **Parameters**: 9.3M
- **Hidden Size**: 256
- **Layers**: 8
- **Attention Heads**: 4
- **FFN Dimension**: 512
- **Vocabulary Size**: 8000
- **Max Sequence ... | [
{
"start": 71,
"end": 96,
"text": "Incremental Model Trainer",
"label": "training method",
"score": 0.9274818897247314
},
{
"start": 594,
"end": 619,
"text": "Incremental Model Trainer",
"label": "training method",
"score": 0.9321160316467285
}
] |
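The configuration listed in that card maps onto a small decoder-only language model; a hypothetical PyTorch reconstruction of those dimensions (not the Incremental Model Trainer's actual code) might look like:

```python
import torch.nn as nn

# Hypothetical reconstruction of the listed config: hidden 256, 8 layers,
# 4 heads, FFN 512, vocab 8000 -- parameter count lands near the stated ~9.3M.
class ScratchDecoder(nn.Module):
    def __init__(self, vocab=8000, d_model=256, n_layers=8, n_heads=4, d_ffn=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        block = nn.TransformerEncoderLayer(d_model, n_heads, d_ffn, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, n_layers)
        self.lm_head = nn.Linear(d_model, vocab)

    def forward(self, ids):
        # Causal mask makes the encoder stack behave as a decoder-only LM.
        causal = nn.Transformer.generate_square_subsequent_mask(ids.size(1))
        return self.lm_head(self.blocks(self.embed(ids), mask=causal, is_causal=True))
```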
manancode/opus-mt-fi-lus-ctranslate2-android | manancode | 2025-08-17T17:08:35Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-17T17:08:25Z | # opus-mt-fi-lus-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fi-lus` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fi-lus
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted ... | [] |
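The card truncates before any usage code; the standard CTranslate2 pattern for OPUS-MT exports, assuming the repo has been downloaded locally and the original Helsinki-NLP tokenizer is used, is roughly:

```python
import ctranslate2
import transformers

# Tokenize with the original model's tokenizer, translate with the
# CTranslate2 engine, then detokenize.
translator = ctranslate2.Translator("opus-mt-fi-lus-ctranslate2-android", device="cpu")
tokenizer = transformers.AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-fi-lus")

source = tokenizer.convert_ids_to_tokens(tokenizer.encode("Hyvää huomenta"))
result = translator.translate_batch([source])[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(result)))
```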
mmrech/pitvqa-qwen2vl-spatial | mmrech | 2026-01-18T16:47:23Z | 0 | 0 | peft | [
"peft",
"safetensors",
"medical",
"vision-language",
"surgical-ai",
"pituitary-surgery",
"qwen2-vl",
"lora",
"spatial-localization",
"image-text-to-text",
"conversational",
"dataset:mmrech/pitvqa-comprehensive-spatial",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:adapter:Qwen/Qwen2... | image-text-to-text | 2026-01-04T04:40:02Z | # PitVQA Spatial Model
A **spatial localization** vision-language model for pituitary surgery, specialized in point and bounding box detection of surgical instruments and anatomical structures.
## Model Description
This model fine-tunes Qwen2-VL-2B-Instruct using LoRA for spatial localization tasks in surgical image... | [] |
fastfalcon79/lychee-embed-Q4_K_M-GGUF | fastfalcon79 | 2025-12-31T00:43:34Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"gguf",
"transformers",
"sentence-similarity",
"feature-extraction",
"llama-cpp",
"gguf-my-repo",
"base_model:vec-ai/lychee-embed",
"base_model:quantized:vec-ai/lychee-embed",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-12-31T00:43:25Z | # fastfalcon79/lychee-embed-Q4_K_M-GGUF
This model was converted to GGUF format from [`vec-ai/lychee-embed`](https://huggingface.co/vec-ai/lychee-embed) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/... | [] |
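A hedged local-inference sketch with `llama-cpp-python`; the exact GGUF filename inside the repo is a guess:

```python
from llama_cpp import Llama

# embedding=True switches llama.cpp into embedding mode.
llm = Llama(model_path="lychee-embed-q4_k_m.gguf", embedding=True)
vector = llm.embed("example retrieval query")
print(len(vector))
```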
Eimhin03/output_model_shunyalabs_data_only_40000_steps | Eimhin03 | 2026-02-04T11:03:05Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:Eimhin03/output_model_shunyalabs_data_only_20000_steps",
"base_model:finetune:Eimhin03/output_model_shunyalabs_data_only_20000_steps",
"license:apache-2.0",
"endpoints_com... | automatic-speech-recognition | 2026-02-04T10:31:01Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_model_shunyalabs_data_only_40000_steps
This model is a fine-tuned version of [Eimhin03/output_model_shunyalabs_data_only_2... | [] |
Jackrong/MLX-Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-v2-6bit | Jackrong | 2026-03-21T00:02:22Z | 695 | 2 | mlx | [
"mlx",
"safetensors",
"qwen3_5",
"unsloth",
"qwen",
"qwen3.5",
"reasoning",
"chain-of-thought",
"lora",
"text-generation",
"conversational",
"en",
"zh",
"ko",
"dataset:nohurry/Opus-4.6-Reasoning-3000x-filtered",
"dataset:Jackrong/Qwen3.5-reasoning-700x",
"dataset:Roman1111111/claude-... | text-generation | 2026-03-21T00:01:12Z | # Jackrong/MLX-Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-v2-6bit
This model [Jackrong/MLX-Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-v2-6bit](https://huggingface.co/Jackrong/MLX-Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-v2-6bit) was
converted to MLX format from [Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Re... | [] |
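For MLX exports like this one, generation typically goes through the `mlx-lm` package (an assumption here, since the card is truncated before any usage instructions):

```python
from mlx_lm import load, generate

model, tokenizer = load("Jackrong/MLX-Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-v2-6bit")
print(generate(model, tokenizer, prompt="Summarize chain-of-thought prompting.", max_tokens=64))
```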
lakelee/RLB_MLP_BC_v3.20250826.20.1024_256_l4_d05 | lakelee | 2025-08-26T13:29:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mlp_split_residual",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-08-26T12:58:13Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RLB_MLP_BC_v3.20250826.20.1024_256_l4_d05
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset... | [] |
deepseek-ai/DeepSeek-Prover-V1.5-SFT | deepseek-ai | 2024-08-29T12:14:35Z | 3,694 | 14 | null | [
"safetensors",
"llama",
"arxiv:2408.08152",
"base_model:deepseek-ai/DeepSeek-Prover-V1.5-Base",
"base_model:finetune:deepseek-ai/DeepSeek-Prover-V1.5-Base",
"license:other",
"region:us"
] | null | 2024-08-15T14:36:27Z | <!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V2" />
</div>
<hr>
<div align="center" style="line-... | [] |
huskyhong/wzryyykl-yx-tyzy | huskyhong | 2026-01-13T18:34:19Z | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | 2026-01-13T09:49:03Z | # Honor of Kings Voice Cloning - Yixing - Tianyuan Zhi Yi
A series of Honor of Kings hero and skin voice-cloning models based on VoxCPM, supporting voice-style cloning and generation for multiple heroes and skins.
## Installing Dependencies
```bash
pip install voxcpm
```
## Usage
```python
import json
import soundfile as sf
from voxcpm.core import VoxCPM
from voxcpm.model.voxcpm import LoRAConfig
# Configure the base model path (example path; modify to match your setup)
base_model_path = "G:\mergelora\嫦娥_... | [] |
Baldezo313/queensland-ai-gemma3-fine-tuned-live | Baldezo313 | 2026-03-16T11:29:53Z | 435 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-16T11:29:44Z | # Model Card for checkpoint_models
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but... | [] |
tensorblock/ypwang61_One-Shot-RLVR-Qwen2.5-Math-1.5B-pi13-GGUF | tensorblock | 2026-01-27T21:34:06Z | 10 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"dataset:ypwang61/One-Shot-RLVR-Datasets",
"base_model:ypwang61/One-Shot-RLVR-Qwen2.5-Math-1.5B-pi13",
"base_model:quantized:ypwang61/One-Shot-RLVR-Qwen2.5-Math-1.5B-pi13",
"license:apache-2.0",
"endpoints_compatible",
"region:us"... | text-generation | 2025-08-11T10:51:30Z | <div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://t... | [] |
CalamityCow3/chip-place-model | CalamityCow3 | 2025-12-21T00:25:28Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:CalamityCow3/chip-place-dataset",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-20T22:18:38Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
kasumi-nakano/qwen3-4b-cosmetics-agent | kasumi-nakano | 2026-02-21T04:38:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:kasumi-nakano/alfworld_cosmetics_sft_v1_en_lab",
"dataset:u-10bei/dbbench_sft_dataset_react_v4",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"base_mode... | text-generation | 2026-02-21T04:37:53Z | # qwen3-4b-cosmetics-agent
This repository provides a **LoRA adapter** fine-tuned from
**unsloth/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **multi-turn ... | [
{
"start": 57,
"end": 61,
"text": "LoRA",
"label": "training method",
"score": 0.8484185934066772
},
{
"start": 90,
"end": 97,
"text": "unsloth",
"label": "training method",
"score": 0.8845322132110596
},
{
"start": 131,
"end": 135,
"text": "LoRA",
"la... |
alesiaivanova/Qwen-3b-GRPO-compute-tradeoff-14-v2-200-125-70-4-sub | alesiaivanova | 2025-09-25T08:58:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-09-24T20:51:48Z | # Model Card for Qwen-3b-GRPO-compute-tradeoff-14-v2-200-125-70-4-sub
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but ... | [
{
"start": 905,
"end": 909,
"text": "GRPO",
"label": "training method",
"score": 0.7106152772903442
},
{
"start": 1200,
"end": 1204,
"text": "GRPO",
"label": "training method",
"score": 0.7591149806976318
}
] |
Xyren2005/encoder_deberta | Xyren2005 | 2026-04-07T07:47:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | token-classification | 2026-04-07T07:47:21Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# encoder_deberta
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-bas... | [] |
arthur25346/Llama-3.2-Abliterated-Aqua-Star-AMD | arthur25346 | 2026-02-11T09:30:55Z | 23 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-11T04:10:06Z | # 🔓 Richard's Abliterated AI (GGUF)
This Space features an **uncensored/abliterated** model curated by Richard Erkhov. It is optimized for efficiency and designed to provide direct answers without moralizing or refusal filters.
---
## 🛠️ Configuration
- **Model Curator:** Richard Erkhov
- **Format:** GGUF (High... | [] |
tscstudios/hxvs7usvh9ephlz63pexfp2ovkj2_dd90abc0-d562-4297-822c-7355fc00d096 | tscstudios | 2025-10-17T12:00:09Z | 2 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-10-17T12:00:08Z | # Hxvs7Usvh9Ephlz63Pexfp2Ovkj2_Dd90Abc0 D562 4297 822C 7355Fc00D096
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI t... | [] |
chancharikm/sft_gemini_2_5_20251120_ep2_lr3e5_qwen3-vl-8b | chancharikm | 2025-11-21T04:51:39Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_vl",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-VL-8B-Instruct",
"base_model:finetune:Qwen/Qwen3-VL-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-11-21T03:55:31Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_gemini_2_5_20251120_ep2_lr3e5_qwen3-vl-8b
This model is a fine-tuned version of [Qwen/Qwen3-VL-8B-Instruct](https://huggingfa... | [] |
Muapi/ballpoint-pen-painting | Muapi | 2025-09-01T21:47:05Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-09-01T21:45:56Z | # Ballpoint Pen Painting

**Base model**: Flux.1 D
**Trained words**: BALLPOINT PEN PAINTING
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
head... | [] |
hackhackhack66666/oat-libero-policy-early-exit | hackhackhack66666 | 2026-04-12T17:05:34Z | 0 | 0 | null | [
"robotics",
"vla",
"oat",
"libero",
"license:mit",
"region:us"
] | robotics | 2026-04-12T16:50:15Z | # oat-libero-policy-early-exit
Policy checkpoint from the **oat-early-exit** fork (OAT + optional early-exit decode).
## Files
- `oat_policy_latest.ckpt` — OAT policy weights (`latest.ckpt` from training).
- `eval_log.json` — simulator eval summary (if provided).
- `logs.json` — training `logs.json` (if provided).
... | [] |
stevenoh2003/9_16_pick_pepper_act | stevenoh2003 | 2025-09-16T05:49:21Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:stevenoh2003/9_16_pick_pepper",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-09-16T05:48:15Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.8059530854225159
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8365488052368164
},
{
"start": 883,
"end": 886,
"text": "act",
"label"... |
lmstudio-community/KAT-Dev-GGUF | lmstudio-community | 2025-09-30T15:26:09Z | 78 | 2 | null | [
"gguf",
"base_model:Kwaipilot/KAT-Dev",
"base_model:quantized:Kwaipilot/KAT-Dev",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-30T15:13:55Z | ## 💫 Community Model> KAT-Dev by Kwaipilot
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator**: [Kwaipilot](https://huggingface.co/Kwaipilot)<br>
**Origin... | [] |
trungpq/slac-new-aroma-none | trungpq | 2025-11-06T04:03:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert_model",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-11-04T17:01:14Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# slac-new-aroma-none
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the foll... | [] |
mradermacher/stunting-7B-Qwen-i1-GGUF | mradermacher | 2025-12-19T13:30:59Z | 24 | 1 | transformers | [
"transformers",
"gguf",
"stunting",
"kesehatan",
"anak",
"id",
"dataset:kodetr/penelitian-fundamental-stunting-qa",
"base_model:kodetr/stunting-7B-Qwen",
"base_model:quantized:kodetr/stunting-7B-Qwen",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-21T17:25:07Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
odo5do/act_GreenGuide2 | odo5do | 2025-12-02T09:32:39Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:odo5do/GreenGuide2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-02T09:32:18Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
amayuelas/Qwen3-4B-Wikirace-v5-single-turn-SFT | amayuelas | 2025-08-26T03:02:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:willcb/Qwen3-4B",
"base_model:finetune:willcb/Qwen3-4B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-26T02:51:07Z | # Model Card for Qwen3-4B-Wikirace-v5-single-turn-SFT
This model is a fine-tuned version of [willcb/Qwen3-4B](https://huggingface.co/willcb/Qwen3-4B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine... | [] |
jinx2321/byt5-base-tagged-1e4-jst-a100-distilled-mt5-small-5 | jinx2321 | 2026-02-05T01:31:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2026-02-04T16:48:49Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-base-tagged-1e4-jst-a100-distilled-mt5-small-5
This model is a fine-tuned version of [google/mt5-small](https://huggingface.... | [] |
amanuelbyte/mms-arb-finetuned | amanuelbyte | 2026-04-14T21:05:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:generator",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-04-14T21:04:33Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-arb-finetuned
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the ... | [] |
majentik/gemma-4-E2B-TurboQuant-AWQ-8bit | majentik | 2026-04-16T08:36:05Z | 0 | 0 | transformers | [
"transformers",
"awq",
"turboquant",
"kv-cache-quantization",
"gemma",
"gemma4",
"quantized",
"8bit",
"image-text-to-text",
"arxiv:2504.19874",
"base_model:google/gemma-4-E2B",
"base_model:finetune:google/gemma-4-E2B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-04-16T08:36:04Z | # Gemma 4 E2B - TurboQuant AWQ 8-bit
**8-bit AWQ-quantized version** of [google/gemma-4-E2B](https://huggingface.co/google/gemma-4-E2B) with TurboQuant KV-cache quantization. AWQ (Activation-aware Weight Quantization) is an activation-aware method optimal for GPU inference, preserving the salient weights most importan... | [] |
k3b4bb/relief-walls-style | k3b4bb | 2025-09-30T09:33:42Z | 4 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-09-30T09:33:30Z | # relief walls style
<Gallery />
## Model description
This LoRA focuses on 3D relief wall decoration, generating strongly three-dimensional relief effects for subjects such as landscapes, architecture, and elephants, and reproducing fine carving textures.
The model performs excellently in interior decora... | [] |
contemmcm/64848445a073ecf83e4d032a73a73ec9 | contemmcm | 2025-10-29T04:40:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/mbart-large-50-many-to-many-mmt",
"base_model:finetune:facebook/mbart-large-50-many-to-many-mmt",
"endpoints_compatible",
"region:us"
] | null | 2025-10-29T04:02:53Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 64848445a073ecf83e4d032a73a73ec9
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://hugging... | [] |
GeorgeUwaifo/ivie_gpt2txt_results | GeorgeUwaifo | 2026-02-28T19:38:51Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-28T19:38:30Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ivie_gpt2txt_results
This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) ... | [] |
infly/Infinity-Parser-7B | infly | 2026-02-26T04:15:07Z | 741 | 21 | null | [
"safetensors",
"qwen2_5_vl",
"arxiv:2506.03197",
"eval-results",
"region:us"
] | null | 2025-10-17T03:18:09Z | # Infinity-Parser-7B
<p align="center">
<img src="assets/logo.png" width="400"/>
<p>
<p align="center">
💻 <a href="https://github.com/infly-ai/INF-MLLM">Github</a> |
📊 <a href="https://huggingface.co/datasets/infly/Infinity-Doc-400K">Dataset</a> |
📄 <a href="https://arxiv.org/pdf/2506.03197">Paper</a> |
🚀 <a ... | [] |
viamr-project/amr-parsing-grpo-single-single-turn-20260203-0853-global-step-622 | viamr-project | 2026-02-03T15:19:05Z | 1 | 0 | null | [
"safetensors",
"qwen3",
"region:us"
] | null | 2026-02-03T14:41:12Z | # amr-parsing-grpo-single-single-turn-20260203-0853-global-step-622
## Model Information
- **Base Model**: checkpoints/amr-parsing-grpo-single/single-turn-20260203-0853/global_step_622/actor
- **Timestamp**: 20260203-0853
## Benchmark Results
- **Benchmark File**: amr-parsing-grpo-single-single-turn-20260203-0853-glo... | [
{
"start": 2,
"end": 67,
"text": "amr-parsing-grpo-single-single-turn-20260203-0853-global-step-622",
"label": "training method",
"score": 0.7852940559387207
},
{
"start": 267,
"end": 346,
"text": "amr-parsing-grpo-single-single-turn-20260203-0853-global-step-622_20260203-0853",
... |
activeDap/Qwen2.5-7B_ultrafeedback_chosen | activeDap | 2025-11-06T13:56:21Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"sft",
"ultrafeedback",
"conversational",
"en",
"dataset:activeDap/ultrafeedback_chosen",
"arxiv:2310.01377",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"t... | text-generation | 2025-11-06T13:53:48Z | # Qwen2.5-7B Fine-tuned on ultrafeedback_chosen
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the [activeDap/ultrafeedback_chosen](https://huggingface.co/datasets/activeDap/ultrafeedback_chosen) dataset.
## Training Results

### Tra... | [
{
"start": 27,
"end": 47,
"text": "ultrafeedback_chosen",
"label": "training method",
"score": 0.808131217956543
},
{
"start": 163,
"end": 183,
"text": "ultrafeedback_chosen",
"label": "training method",
"score": 0.7991122603416443
},
{
"start": 654,
"end": 67... |
curio184/qwen3-4b-struct-exp30 | curio184 | 2026-02-20T16:56:54Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"structured-output",
"qwen",
"qlora",
"lora",
"conversational",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:a... | text-generation | 2026-02-20T16:55:38Z | # qwen3-4b-struct-exp30
This model is a fine-tuned version of **Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains the **full merged 16-bit weights**.
No adapter loading is required.
## Training Objective
This model is trained to improve **structured output accuracy**
(JSON / Y... | [
{
"start": 103,
"end": 108,
"text": "QLoRA",
"label": "training method",
"score": 0.883902907371521
},
{
"start": 484,
"end": 489,
"text": "QLoRA",
"label": "training method",
"score": 0.8142322897911072
}
] |
mradermacher/OneThinker-SFT-Qwen3-8B-GGUF | mradermacher | 2026-04-10T14:20:48Z | 403 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:OneThink/OneThinker-train-data",
"base_model:IBRAHIM1990/OneThinker-SFT-Qwen3-8B",
"base_model:quantized:IBRAHIM1990/OneThinker-SFT-Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-07T15:24:37Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mradermacher/Uni-MuMER-Qwen3.5-2B-i1-GGUF | mradermacher | 2026-04-18T16:28:19Z | 4,913 | 0 | transformers | [
"transformers",
"gguf",
"uni-mumer",
"hmer",
"math-ocr",
"handwritten-math",
"latex",
"qwen3.5",
"vision-language",
"en",
"dataset:phxember/Uni-MuMER-Data",
"base_model:phxember/Uni-MuMER-Qwen3.5-2B",
"base_model:quantized:phxember/Uni-MuMER-Qwen3.5-2B",
"license:apache-2.0",
"endpoints_... | null | 2026-04-14T15:16:33Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
jrkhf/so101_pi0_policy_5k | jrkhf | 2025-11-15T17:43:02Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi0",
"robotics",
"dataset:jrkhf/so101_wrist_top_cameras_set_2",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-15T10:00:25Z | # Model Card for pi0
<!-- Provide a quick summary of what the model is/does. -->
**π₀ (Pi0)**
π₀ is a Vision-Language-Action model for general robot control, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀ represents a breakthrough ... | [] |
tachikoki/act_so101_pick_and_place_50 | tachikoki | 2026-02-03T03:30:21Z | 14 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:tachikoki/so101_pick_and_place_50",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-03T03:29:59Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
TheVisitorX/Kartoffel_Orpheus-3B_german_natural-v0.1-Q5_K_M-GGUF | TheVisitorX | 2025-08-29T09:32:01Z | 80 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"text-to-speech",
"tts",
"german",
"orpheus",
"llama-cpp",
"gguf-my-repo",
"de",
"base_model:SebastianBodza/Kartoffel_Orpheus-3B_german_natural-v0.1",
"base_model:quantized:SebastianBodza/Kartoffel_Orpheus-3B_german_natural-v0.1",
"license:llama3.2",
"end... | text-to-speech | 2025-08-29T09:31:48Z | # TheVisitorX/Kartoffel_Orpheus-3B_german_natural-v0.1-Q5_K_M-GGUF
This model was converted to GGUF format from [`SebastianBodza/Kartoffel_Orpheus-3B_german_natural-v0.1`](https://huggingface.co/SebastianBodza/Kartoffel_Orpheus-3B_german_natural-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface... | [] |
kiratan/qwen3-4b-structeval-lora-88 | kiratan | 2026-02-26T02:41:16Z | 8 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit",
"lora",
"transformers",
"unsloth",
"text-generation",
"en",
"dataset:kiratan/structured-5k-mix-sft-xml",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-26T02:41:09Z | # qwen3-4b-structured-output-lora
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **s... | [
{
"start": 133,
"end": 138,
"text": "QLoRA",
"label": "training method",
"score": 0.830346405506134
},
{
"start": 187,
"end": 191,
"text": "LoRA",
"label": "training method",
"score": 0.7341393828392029
},
{
"start": 574,
"end": 579,
"text": "QLoRA",
"... |
jomarie04/Planets | jomarie04 | 2026-01-12T10:31:45Z | 0 | 0 | null | [
"planets",
"astronomy",
"image-classification",
"vision",
"ai",
"license:apache-2.0",
"region:us"
] | image-classification | 2026-01-12T10:30:16Z | # Planet Image Classification Model
## Model Description
This model classifies images of planets using a Vision Transformer (ViT).
## Classes
- Mercury
- Venus
- Earth
- Mars
- Jupiter
- Saturn
- Uranus
- Neptune
## Intended Use
- Educational astronomy projects
- AI image classification demos
- Space-related researc... | [] |
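The card names a ViT backbone but shows no loading code; assuming a standard `transformers` image-classification checkpoint, usage would be roughly:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jomarie04/Planets")
print(classifier("planet.jpg"))  # placeholder image path
```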
qualiaadmin/cba67eb5-b158-43c7-a091-dd376b7a80d6 | qualiaadmin | 2026-01-15T15:32:46Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:WillMandil001/IS_cube_grasping_pi_low",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-15T15:32:22Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
mradermacher/GLM-4.6V-heretic-GGUF | mradermacher | 2026-02-22T06:54:30Z | 402 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:coder3101/GLM-4.6V-heretic",
"base_model:quantized:coder3101/GLM-4.6V-heretic",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-21T08:54:45Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
braindecode/SPARCNet | braindecode | 2026-04-25T17:49:51Z | 0 | 0 | braindecode | [
"braindecode",
"eeg",
"biosignal",
"pytorch",
"neuroscience",
"convolutional",
"feature-extraction",
"license:bsd-3-clause",
"region:us"
] | feature-extraction | 2026-04-25T17:39:39Z | # SPARCNet
Seizures, Periodic and Rhythmic pattern Continuum Neural Network (SPaRCNet) from Jing et al. (2023) [jing2023].
> **Architecture-only repository.** Documents the
> `braindecode.models.SPARCNet` class. **No pretrained weights are
> distributed here.** Instantiate the model and train it on your own
> data.
#... | [] |
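As the card stresses, this repo is architecture-only; a hypothetical instantiation (channel, class, and window sizes are placeholders, not values from the repo):

```python
from braindecode.models import SPARCNet

# 19 EEG channels, 6 output classes, 2000-sample windows -- all placeholders.
model = SPARCNet(n_chans=19, n_outputs=6, n_times=2000)
```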
xdna14/nutrition-bot-qwen25-3b-v7-adapter | xdna14 | 2026-03-25T00:52:56Z | 12 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"region:us"
] | text-generation | 2026-03-25T00:52:44Z | # Model Card for nutrition_bot_qwen25_3b_v7_adapter
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you ha... | [] |
penfever/nl2bash-1ep-restore-hp | penfever | 2025-11-20T20:45:23Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-17T10:44:04Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nl2bash-1ep-restore-hp
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the DCAgent... | [] |
i6od/Plano-Orchestrator-4B-Q8_0-GGUF | i6od | 2026-03-13T15:32:38Z | 66 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:katanemo/Plano-Orchestrator-4B",
"base_model:quantized:katanemo/Plano-Orchestrator-4B",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-13T15:32:17Z | # i6od/Plano-Orchestrator-4B-Q8_0-GGUF
This model was converted to GGUF format from [`katanemo/Plano-Orchestrator-4B`](https://huggingface.co/katanemo/Plano-Orchestrator-4B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](ht... | [] |
Baps24/odia-sentiment-muril-v4 | Baps24 | 2026-04-07T14:16:53Z | 0 | 0 | null | [
"safetensors",
"bert",
"text-classification",
"sentiment-analysis",
"odia",
"muril",
"indic-languages",
"or",
"base_model:google/muril-base-cased",
"base_model:finetune:google/muril-base-cased",
"license:apache-2.0",
"region:us"
] | text-classification | 2026-04-07T13:43:59Z | # Odia Sentiment Classifier v4
Fine-tuned `google/muril-base-cased` for **Odia language sentiment analysis**.
Built with ❤️ for **45 million Odia speakers** 🇮🇳
Dedicated to **Lord Jagannath and Lord Lingaraj (We are from old town :) Parakaran Sahi)** and the people of Odisha 🙏
## Results
| Metric | Score |
|----... | [] |
kingabzpro/qwen36-medquad | kingabzpro | 2026-04-19T00:19:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"medical",
"text-generation",
"conversational",
"en",
"dataset:keivalya/MedQuad-MedicalQnADataset",
"base_model:Qwen/Qwen3.6-35B-A3B",
"base_model:finetune:Qwen/Qwen3.6-35B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-18T23:41:33Z | # Model Card for kingabzpro/qwen36-medquad
Small QLoRA medical QA adapter built on `Qwen/Qwen3.6-35B-A3B`, trained on a filtered quick-run subset of `keivalya/MedQuad-MedicalQnADataset`.
## Model Details
- **Developed by:** kingabzpro
- **Model type:** Causal language model adapter
- **Language:** English
- **Licens... | [] |
sunilregmi/nepali-lemmatizerV1-mt5-base | sunilregmi | 2026-03-01T15:45:55Z | 48 | 0 | null | [
"safetensors",
"mt5",
"lemmatization",
"nepali",
"devanagari",
"mbart",
"low-resource",
"nlp",
"translation",
"ne",
"license:mit",
"region:us"
] | translation | 2025-10-24T12:29:06Z | # Nepali Neural Lemmatizer
This model is a part of the study **"Evaluating Multilingual Transformer Models for Lemmatization in Nepali: A Low-Resource Case Study"** presented at LREC-COLING 2026. It addresses the challenges of lemmatization in the morphologically rich and low-resource Nepali language by leveraging pre... | [] |
warshanks/Huihui-Qwen3-14B-abliterated-v2-AWQ | warshanks | 2025-08-05T06:42:27Z | 1,149 | 2 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"chat",
"abliterated",
"uncensored",
"conversational",
"base_model:huihui-ai/Huihui-Qwen3-14B-abliterated-v2",
"base_model:quantized:huihui-ai/Huihui-Qwen3-14B-abliterated-v2",
"license:apache-2.0",
"text-generation-inference",
"endp... | text-generation | 2025-08-05T06:41:24Z | # huihui-ai/Huihui-Qwen3-14B-abliterated-v2
This is an uncensored version of [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to know more about it).
This is a crude, proof-of-conc... | [] |
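Recent `transformers` releases can read AWQ checkpoints directly when the `autoawq` kernels are installed; a loading sketch under that assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "warshanks/Huihui-Qwen3-14B-abliterated-v2-AWQ"
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)
```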
Cisco1963/llmplasticity-zh_en_linear_0.5_1-seed42 | Cisco1963 | 2026-04-02T19:57:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-02T13:41:50Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llmplasticity-zh_en_linear_0.5_1-seed42
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dat... | [] |
huangfeihong0526/SmolLM2-1.7B-finetune | huangfeihong0526 | 2026-03-26T06:11:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-26T02:30:23Z | # Model Card for SmolLM2-1.7B-finetune
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-1.7B-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "... | [] |
Marvis-AI/marvis-tts-250m-v0.2-MLX-4bit | Marvis-AI | 2025-11-07T14:43:25Z | 38 | 2 | transformers | [
"transformers",
"safetensors",
"csm",
"text-to-audio",
"mlx",
"mlx-audio",
"en",
"fr",
"de",
"dataset:amphion/Emilia-Dataset",
"license:apache-2.0",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-to-audio | 2025-11-07T14:39:49Z | # Marvis-AI/marvis-tts-250m-v0.2-MLX-4bit
This model was converted to MLX format from [`Marvis-AI/marvis-tts-250m-v0.2`](https://huggingface.co/Marvis-AI/marvis-tts-250m-v0.2) using mlx-audio version **0.2.6**.
Refer to the [original model card](https://huggingface.co/Marvis-AI/marvis-tts-250m-v0.2) for more details on... | [] |
khaled44/bea-rubric-match-3way-full | khaled44 | 2026-03-05T11:05:21Z | 54 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:deepset/gbert-base",
"base_model:finetune:deepset/gbert-base",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-03-05T10:54:01Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bea-rubric-match-3way-full
This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) ... | [
{
"start": 480,
"end": 492,
"text": "Recall Macro",
"label": "training method",
"score": 0.7157435417175293
},
{
"start": 500,
"end": 508,
"text": "F1 Macro",
"label": "training method",
"score": 0.8264959454536438
},
{
"start": 1196,
"end": 1204,
"text": ... |
chazokada/llama31_8b_combined_morse_code_s0 | chazokada | 2026-04-25T12:49:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"unsloth",
"sft",
"base_model:unsloth/Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-04-25T07:35:45Z | # Model Card for llama31_8b_combined_morse_code_s0
This model is a fine-tuned version of [unsloth/Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "... | [] |
contemmcm/3e6f04a7f580fff4e77927be189c7ead | contemmcm | 2025-10-29T19:43:19Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/mbart-large-en-ro",
"base_model:finetune:facebook/mbart-large-en-ro",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-10-29T19:31:19Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 3e6f04a7f580fff4e77927be189c7ead
This model is a fine-tuned version of [facebook/mbart-large-en-ro](https://huggingface.co/facebo... | [] |
inanxr/Arete-OSS-3B | inanxr | 2025-11-29T05:21:47Z | 2 | 2 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"emotional-ai",
"mental-health",
"multilingual",
"bengali",
"unsloth",
"en",
"bn",
"dataset:Amod/mental_health_counseling_conversations",
"dataset:iamshnoo/alpaca-cleaned-bengali",
"license:apache-2.0",
"text-... | text-generation | 2025-11-29T04:59:26Z | # Arete OSS 3B 🧠❤️
An emotionally intelligent AI that actually listens.
Built by **[Iseer & Co.](https://iseer.co)** - Making AI that understands humans, not just language.
## What makes Arete different?
Most AI assistants optimize for being "smart." Arete optimizes for being **empathetic**.
- 🇧🇩 Speaks **Engli... | [] |
LeonardoBenitez/temp_sparse_per_module_lora_distillation_gas_pump_by_truck | LeonardoBenitez | 2025-10-19T13:47:15Z | 0 | 0 | null | [
"tensorboard",
"model-index",
"region:us"
] | null | 2025-10-18T12:38:59Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - LeonardoBenitez/temp_sparse_per_module_lora_distillation_gas_pump_by_truck
These are LoRA a... | [] |
prakhar146/finbot-indian-finance | prakhar146 | 2026-04-13T15:18:35Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gguf",
"finance",
"indian-finance",
"lora",
"qwen2.5",
"chatbot",
"sebi",
"rbi",
"mutual-funds",
"tax-india",
"text-generation",
"conversational",
"en",
"hi",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"license:ap... | text-generation | 2026-04-13T10:42:17Z | # 🏦 FinBot — Indian Finance Chatbot
> **India's specialized financial AI advisor** — fine-tuned on the Indian finance domain with expert knowledge of SEBI, RBI, Mutual Funds, Taxation, and more.
## 📊 Model Performance
| Metric | Value |
|--------|-------|
| Base Model | Qwen2.5-3B-Instruct |
| Training Loss (Round 1) ... | [] |
TanishkB/WordGenerator | TanishkB | 2025-08-24T16:44:54Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-24T14:33:13Z | # Model Card for WordGenerator-ONNX
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, bu... | [] |
toolevalxm/MedDiagAI-ClinicalRelease | toolevalxm | 2026-02-08T23:01:07Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"bioclinicalbert",
"text-classification",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-02-08T23:01:02Z | # MedDiagAI
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="figures/fig1.png" width="60%" alt="MedDiagAI" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="LICENSE" style="margin: 2px... | [] |
koekaverna/faster-whisper-podlodka-turbo | koekaverna | 2026-03-29T17:40:06Z | 0 | 0 | ctranslate2 | [
"ctranslate2",
"automatic-speech-recognition",
"whisper",
"faster-whisper",
"ru",
"en",
"base_model:bond005/whisper-podlodka-turbo",
"base_model:finetune:bond005/whisper-podlodka-turbo",
"region:us"
] | automatic-speech-recognition | 2026-03-29T17:36:11Z | # Faster Whisper Podlodka Turbo
CTranslate2 / faster-whisper conversion of [bond005/whisper-podlodka-turbo](https://huggingface.co/bond005/whisper-podlodka-turbo).
Converted with:
```bash
ct2-transformers-converter \
--model bond005/whisper-podlodka-turbo \
--output_dir faster-whisper-podlodka-turbo \
--copy_f... | [] |
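The card shows only the conversion command; inference would then go through the `faster-whisper` package, assuming the converted weights sit at the repo root:

```python
from faster_whisper import WhisperModel

model = WhisperModel("koekaverna/faster-whisper-podlodka-turbo")
segments, info = model.transcribe("audio.wav", language="ru")  # placeholder audio
for segment in segments:
    print(segment.text)
```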
XzyanQi/SanAi-MentalHealth | XzyanQi | 2026-01-14T02:53:01Z | 0 | 0 | peft | [
"peft",
"safetensors",
"counseling",
"mental-health",
"indonesian",
"lora",
"qlora",
"student-support",
"dataset:XzyanQi/SanAi-MentalHealth-Corpus-id",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-01-07T04:09:00Z | # San Ai: An Early Mental Health Counseling Companion (Universitas Muhammadiyah Bandung)
**San Ai** is a large language model based on a LoRA adapter (QLoRA), developed as an early-stage mental health counseling companion for students of Universitas Muhammadiyah Bandung (UMB).
This model is designed to provide ... | [] |
HiThink-Research/CCPO-7B-3AO-AITW | HiThink-Research | 2026-01-13T08:26:42Z | 0 | 0 | null | [
"safetensors",
"qwen2_5_vl",
"multimodal",
"gui-agent",
"reinforcement-learning",
"image-text-to-text",
"conversational",
"en",
"license:mit",
"region:us"
] | image-text-to-text | 2026-01-12T03:21:16Z | # Compress to Focus: Efficient Coordinate Compression for Policy Optimization in Multi-Turn GUI Agents
Yurun Song*, Jiong Yin*, Rongjunchen Zhang, Ian Harris
📖[Paper](https://arxiv.org/abs/2601.xxxxx) | 💻[Code](https://github.com/HiThink-Research/CCPO) | 🤗[Model-3b-3ao](https://huggingface.co/HiThink-Research/CCPO... | [] |
viberec/ml-1m-SASRec-High | viberec | 2025-12-30T14:26:37Z | 0 | 0 | null | [
"viberec",
"recommender-system",
"sasrec",
"dataset:ml-1m",
"region:us"
] | null | 2025-12-29T06:37:27Z | # SASRec trained on ml-1m
## Model Description
- **Model**: SASRec
- **Dataset**: ml-1m
## Performance
- **ndcg@10**: 0.115
- **hit@10**: 0.2268
- **averagepopularity@10**: 846.7222
## Configuration
```yaml
ENTITY_ID_FIELD: entity_id
HEAD_ENTITY_ID_FIELD: head_id
ITEM_ID_FIELD: item_id
ITEM_LIST_LENGTH_FIELD: item_... | [] |
qualiaadmin/40f14bf4-15a7-476b-9e0e-83ef3a6308df | qualiaadmin | 2025-09-22T22:47:46Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:Calvert0921/SmolVLA_LiftBlackCube5_Franka_100",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-09-22T22:31:34Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
Lambent/Mira-v1.24.2-27B-Karcher | Lambent | 2026-02-08T00:07:22Z | 2 | 1 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"mergekit",
"merge",
"conversational",
"base_model:Lambent/Mira-v1.20-27B-dpo",
"base_model:merge:Lambent/Mira-v1.20-27B-dpo",
"base_model:Lambent/Mira-v1.23.1-27B-dpo",
"base_model:merge:Lambent/Mira-v1.23.1-27B-dpo",
"license:gem... | image-text-to-text | 2026-01-31T15:17:45Z | 

This Mira is certainly an A👁️.
Trying out swcm merge method for tuning - sft run for 4... | [] |
eren23/lewm-models | eren23 | 2026-04-01T13:08:17Z | 0 | 0 | null | [
"region:us"
] | null | 2026-04-01T13:05:46Z | # LeWM Model Collection
**Quantized and architecture-variant world models derived from [LeWM](https://le-wm.github.io/) (Lucas Maes et al., Mila/NYU/Samsung SAIL/Brown).**
All models are inference-ready checkpoints with full training provenance, quantization experiments, and hardware benchmark data.
---
## TL;DR
|... | [] |
ash1shkushwaha/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF | ash1shkushwaha | 2026-04-09T18:59:25Z | 1,186 | 0 | null | [
"gguf",
"GLM 4.7 Flash",
"thinking",
"reasoning",
"NEO Imatrix",
"MAX Quants",
"16 bit precision output tensor",
"heretic",
"uncensored",
"abliterated",
"deep reasoning",
"fine tune",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"stor... | text-generation | 2026-04-09T18:59:25Z | <h2>GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF</h2>
Specialized and Enhanced UNCENSORED/HERETIC GGUF quants for the new GLM-4.7-Flash, 30B-A3B MOE, mixture of experts model.
[ https://huggingface.co/zai-org/GLM-4.7-Flash ]
This model can be run on the GPU(s) and/or CPU due to 4 experts activated (app... | [] |
haduki33/make_a_drink_soft-drink_0111_act-policy-v3 | haduki33 | 2026-01-11T19:52:23Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:haduki33/make_a_drink_soft-drink_0111",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-11T19:52:14Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
OpenMed/OpenMed-PII-French-mSuperClinical-Large-279M-v1-mlx | OpenMed | 2026-04-14T07:43:17Z | 0 | 0 | openmed | [
"openmed",
"deberta-v2",
"mlx",
"apple-silicon",
"token-classification",
"pii",
"de-identification",
"medical",
"clinical",
"base_model:OpenMed/OpenMed-PII-French-mSuperClinical-Large-279M-v1",
"base_model:finetune:OpenMed/OpenMed-PII-French-mSuperClinical-Large-279M-v1",
"license:apache-2.0",... | token-classification | 2026-04-08T19:17:53Z | # OpenMed-PII-French-mSuperClinical-Large-279M-v1 for OpenMed MLX
This repository contains an MLX packaging of [`OpenMed/OpenMed-PII-French-mSuperClinical-Large-279M-v1`](https://huggingface.co/OpenMed/OpenMed-PII-French-mSuperClinical-Large-279M-v1) for Apple Silicon inference with [OpenMed](https://github.com/maziya... | [] |
godnpeter/smolvla_fast_lr_1e-4_chunk1_max_act_dim7 | godnpeter | 2025-10-20T23:06:39Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvlafast",
"robotics",
"dataset:HuggingFaceVLA/libero",
"license:apache-2.0",
"region:us"
] | robotics | 2025-10-20T23:06:12Z | # Model Card for smolvlafast
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggin... | [] |
TOFU-SFT/pythia-12b-4bit-uf-sft-tofu | TOFU-SFT | 2026-05-04T14:01:57Z | 0 | 0 | null | [
"safetensors",
"pytorch",
"causal-lm",
"pythia",
"en",
"dataset:EleutherAI/pile",
"arxiv:2304.01373",
"license:apache-2.0",
"region:us"
] | null | 2026-04-30T11:50:05Z | ## Model Description
- **Developed by**: [EleutherAI](http://eleuther.ai)
- **Model type**: Transformer-based Language Model
- **License**: Apache 2.0
- **Fine-tuned from:** TOFU-SFT/pythia-12b-4bit
## Bias, Risks, and Limitations
Warning: this model may produce harmful content
## Citation
```
@misc{biderman2023pyt... | [] |
dark-pen/Remix-R1-Distilled-Qwen-7B-IQ4_XS-GGUF | dark-pen | 2026-03-07T16:38:38Z | 193 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:AnitaLeung/Remix-R1-Distilled-Qwen-7B",
"base_model:quantized:AnitaLeung/Remix-R1-Distilled-Qwen-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-03-07T16:38:10Z | # dark-pen/Remix-R1-Distilled-Qwen-7B-IQ4_XS-GGUF
This model was converted to GGUF format from [`AnitaLeung/Remix-R1-Distilled-Qwen-7B`](https://huggingface.co/AnitaLeung/Remix-R1-Distilled-Qwen-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the... | [] |
MaryahGreene/entrepreneur_readiness_model | MaryahGreene | 2025-09-04T21:00:40Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-09-03T23:42:03Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# entrepreneur_readiness_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-ba... | [] |
alh357/finetuned-opus-ha-en | alh357 | 2025-08-22T17:07:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-ha-en",
"base_model:finetune:Helsinki-NLP/opus-mt-ha-en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-22T17:06:47Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-opus-ha-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ha-en](https://huggingface.co/Helsinki-NLP/opus-... | [] |
Novaciano/Gemma3-Minos-1B | Novaciano | 2025-12-13T07:27:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"mergekit",
"merge",
"base_model:hereticness/heretic_DevilsAdvocate-1B",
"base_model:merge:hereticness/heretic_DevilsAdvocate-1B",
"base_model:hereticness/heretic_Genuine-1B",
"base_model:merge:hereticness/heretic_Genuine-1B",
"tex... | text-generation | 2025-12-13T07:26:31Z | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [here... | [
{
"start": 710,
"end": 715,
"text": "slerp",
"label": "training method",
"score": 0.8012611269950867
}
] |
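The merge card above names SLERP without defining it; for reference, spherical linear interpolation between two weight vectors $p_0$ and $p_1$ at mixing factor $t$ is

$$\operatorname{slerp}(p_0, p_1; t) = \frac{\sin\big((1-t)\,\Omega\big)}{\sin\Omega}\,p_0 + \frac{\sin(t\,\Omega)}{\sin\Omega}\,p_1, \qquad \cos\Omega = \frac{p_0 \cdot p_1}{\lVert p_0\rVert\,\lVert p_1\rVert},$$

falling back to linear interpolation when the vectors are nearly parallel ($\sin\Omega \approx 0$).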
espnet/OpenBEATS-Large-i1-rfcx | espnet | 2025-11-16T22:23:25Z | 0 | 0 | espnet | [
"espnet",
"tensorboard",
"audio",
"classification",
"dataset:beans",
"arxiv:2507.14129",
"license:cc-by-4.0",
"region:us"
] | null | 2025-11-16T22:23:08Z | ## ESPnet2 CLS model
### `espnet/OpenBEATS-Large-i1-rfcx`
This model was trained by Shikhar Bharadwaj using the beans recipe in [espnet](https://github.com/espnet/espnet/).
## CLS config
<details><summary>expand</summary>
```
config: /work/nvme/bbjs/sbharadwaj/espnet/egs2/audioverse/v1/exp/earlarge1/conf/ear_large/bea... | [] |
hector-gr/RLCR-v4-ks-uniqueness-cov0-noece-noaurc-scaletrue-cold-math | hector-gr | 2026-03-21T01:08:14Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-20T12:14:09Z | # Model Card for RLCR-v4-ks-uniqueness-cov0-noece-noaurc-scaletrue-cold-math
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If... | [] |
THEJAL/finetuning-sentiment-model-3000-samples | THEJAL | 2025-09-25T06:11:18Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"re... | text-classification | 2025-09-25T05:41:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di... | [] |
BEncoderRT/tinyllama-multitask-lora | BEncoderRT | 2025-12-22T01:19:18Z | 1 | 0 | peft | [
"peft",
"safetensors",
"unsloth",
"lora",
"Multi-Task",
"Sentiment Analysis",
"Translation (English to French)",
"text-classification",
"en",
"fr",
"dataset:mteb/imdb",
"dataset:Helsinki-NLP/opus-100",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama... | text-classification | 2025-12-19T08:23:30Z | # TinyLlama Multi-Task LoRA (Sentiment + Translation)
This repository contains a **LoRA adapter** trained on top of
**TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T**
to support **multiple tasks** via instruction-style prompting.
---
## 🔧 Base Model
- **Base model**: `TinyLlama/TinyLlama-1.1B-intermediate... | [] |
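The TinyLlama card above describes a LoRA adapter served over a frozen base model through instruction-style prompting; a minimal sketch of attaching such an adapter with `peft`, where the sentiment prompt format is an assumption since the card's template is truncated:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
adapter_id = "BEncoderRT/tinyllama-multitask-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # LoRA weights on top of frozen base

# Hypothetical instruction format for the sentiment task.
prompt = "Classify the sentiment as positive or negative:\nGreat movie!\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```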
Mariobilly/z-image-turbo-msch-painting-v02-000004500 | Mariobilly | 2026-04-26T13:18:59Z | 0 | 0 | diffusers | [
"diffusers",
"lora",
"z-image",
"z-image-turbo",
"text-to-image",
"license:other",
"region:us"
] | text-to-image | 2026-04-26T11:08:02Z | # Z image turbo Msch Painting V02 000004500
LoRA for **Z-Image Turbo**.
- **File:** `Z_image_turbo-Msch_Painting_V02_000004500.safetensors`
- **Trigger word:** `mschpaintingv02`
- **Trained by:** [@Mariobilly](https://huggingface.co/Mariobilly)
## Samples


![samp... | [] |
Helsinki-NLP/opus-mt-eo-caenes | Helsinki-NLP | 2025-12-12T10:53:29Z | 12 | 1 | null | [
"safetensors",
"marian",
"translation",
"machine-translation",
"opus-mt",
"multilingual",
"eo",
"en",
"es",
"ca",
"license:cc-by-4.0",
"region:us"
] | translation | 2025-12-12T10:21:19Z | # Esperanto -> Catalan, English, Spanish MT Model
## Model description
This repository contains a **multilingual MarianMT** model for **Esperanto → (English, Spanish, Catalan)** translation using language tags.
## Usage
The model is loaded and used with `transformers` as:
```python
from transformers import MarianM... | [] |
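The usage snippet in the Esperanto card above is truncated mid-import; a minimal sketch of loading a multilingual Marian model with a target-language tag, where the `>>spa<<` tag spelling and the example sentence are assumptions, not taken from the card:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-eo-caenes"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Multilingual OPUS-MT models pick the target language via a tag
# prefixed to the source text (tag format assumed here).
src = ">>spa<< Saluton, kiel vi fartas?"
batch = tokenizer([src], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```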
KrystalGong/Qwen2.5-14B-Instruct_safe_financial_advice_all_adapter | KrystalGong | 2025-12-11T17:58:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:unsloth/Qwen2.5-14B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-14B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-12-10T18:40:52Z | # Model Card for Qwen2.5-14B-Instruct_safe_financial_advice_all_adapter
This model is a fine-tuned version of [unsloth/Qwen2.5-14B-Instruct](https://huggingface.co/unsloth/Qwen2.5-14B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pip... | [] |
EliasOenal/MiniMax-M2.5-Hybrid-AWQ-W4A16G128-Attn-fp8_e4m3-KV-fp8_e4m3 | EliasOenal | 2026-02-18T01:01:06Z | 477 | 12 | transformers | [
"transformers",
"safetensors",
"minimax_m2",
"text-generation",
"vllm",
"awq",
"fp8",
"moe",
"quantized",
"minimax",
"conversational",
"custom_code",
"base_model:MiniMaxAI/MiniMax-M2.5",
"base_model:quantized:MiniMaxAI/MiniMax-M2.5",
"license:other",
"endpoints_compatible",
"compress... | text-generation | 2026-02-17T23:31:46Z | # MiniMax-M2.5-Hybrid-AWQ-W4A16G128-Attn-fp8_e4m3-KV-fp8_e4m3
A hybrid AWQ int4 + fp8 attention + fp8 KV-cache quantization of [MiniMaxAI/MiniMax-M2.5](https://huggingface.co/MiniMaxAI/MiniMax-M2.5) (~229B parameters, 256 experts per layer) that fits on **4x RTX A6000 (192 GB)** (Ampere) with ~370,000 tokens of KV cache (more than... | [] |
ramya-sr/my_model | ramya-sr | 2026-02-04T11:56:39Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-02-04T11:49:05Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an un... | [] |
mradermacher/gemma3-27b-pt-it-RPandNOVEL-merge-GGUF | mradermacher | 2025-08-07T11:40:50Z | 11 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ij/gemma3-27b-pt-it-RPandNOVEL-merge",
"base_model:quantized:ij/gemma3-27b-pt-it-RPandNOVEL-merge",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-07T10:04:35Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
AleksSitro/Gatchina_style_LoRA | AleksSitro | 2025-10-31T15:16:17Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"re... | text-to-image | 2025-10-31T15:14:45Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - AleksSitro/Gatchina_style_LoRA
<Gallery />
## Model description
These are AleksSitro/Gatchina_s... | [
{
"start": 330,
"end": 334,
"text": "LoRA",
"label": "training method",
"score": 0.7738346457481384
}
] |
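The DreamBooth card above is cut off before its usage section; a minimal sketch of loading an SDXL LoRA with `diffusers`, where the prompt is an assumption since the card's trigger word is not shown:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("AleksSitro/Gatchina_style_LoRA")

# Hypothetical prompt; check the repo for the actual trigger word.
image = pipe("a palace interior, Gatchina style").images[0]
image.save("gatchina.png")
```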
rachel521/bert-finetuned-ner | rachel521 | 2025-09-07T04:07:32Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-09-06T12:59:07Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown ... | [] |
vibegavin/HY-WorldPlay-FP8 | vibegavin | 2026-04-02T17:33:50Z | 0 | 0 | diffusers | [
"diffusers",
"video-generation",
"fp8",
"quantization",
"worldplay",
"hunyuan",
"text-to-video",
"license:apache-2.0",
"region:us"
] | text-to-video | 2026-04-02T17:27:17Z | # HY-WorldPlay FP8 Quantized (48GB GPU Ready)
HY-WorldPlay (8B Dense DiT, **72GB VRAM** at BF16) compressed to **37.4GB peak** via:
- **Native FP8 weights** (`float8_e4m3fn`, per-tensor scale) — 32GB → 8GB (4x)
- **turbo3 V cache compression** (PolarQuant 3-bit) — runtime, no pre-saved data needed
Successfully runs o... | [] |
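The HY-WorldPlay card above describes native `float8_e4m3fn` weights with a per-tensor scale; a minimal sketch of that quantize/dequantize scheme in PyTorch, where 448.0 is the e4m3 maximum finite value and everything else is an assumption about the actual pipeline:

```python
import torch

E4M3_MAX = 448.0  # largest finite value of torch.float8_e4m3fn

def quantize_per_tensor_fp8(w: torch.Tensor):
    # One scale per tensor, chosen so the largest weight maps to E4M3_MAX.
    scale = w.abs().max().float() / E4M3_MAX
    q = (w / scale).to(torch.float8_e4m3fn)
    return q, scale

def dequantize_fp8(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.to(torch.float32) * scale

w = torch.randn(4096, 4096)           # stand-in for a DiT weight matrix
q, scale = quantize_per_tensor_fp8(w)
w_hat = dequantize_fp8(q, scale)
print(q.element_size(), "byte/elem, max abs error:", (w - w_hat).abs().max().item())
```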
contemmcm/04ddd442baee02bfeed8b44aeaf23dec | contemmcm | 2025-10-19T07:41:28Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-3b",
"base_model:finetune:google-t5/t5-3b",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-10-19T06:57:09Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 04ddd442baee02bfeed8b44aeaf23dec
This model is a fine-tuned version of [google-t5/t5-3b](https://huggingface.co/google-t5/t5-3b) ... | [] |
ssbtech/models-part4 | ssbtech | 2025-10-07T15:25:25Z | 0 | 0 | null | [
"pytorch",
"wav2vec2",
"license:mit",
"region:us"
] | null | 2025-10-07T15:13:04Z | Pretrained on 10k hours WenetSpeech L subset. More details in [TencentGameMate/chinese_speech_pretrain](https://github.com/TencentGameMate/chinese_speech_pretrain)
This model does not have a tokenizer as it was pretrained on audio alone.
In order to use this model for speech recognition, a tokenizer should be created an... | [] |
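The pretrained-wav2vec2 card above notes that no tokenizer ships with the checkpoint and one must be created before fine-tuning for speech recognition; a minimal sketch of that step, where the vocabulary file and special tokens are assumptions:

```python
from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2ForCTC

# vocab.json maps output characters to ids; building it is up to the user.
tokenizer = Wav2Vec2CTCTokenizer(
    "vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|"
)
model = Wav2Vec2ForCTC.from_pretrained(
    "ssbtech/models-part4", vocab_size=len(tokenizer)
)
# The CTC head is freshly initialized; the model is then fine-tuned on labeled audio.
```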
Mubashir2004khan/ZAVIA | Mubashir2004khan | 2025-12-14T06:41:23Z | 0 | 0 | null | [
"ai",
"llm",
"Model",
"Law",
"indian",
"law",
"Legel",
"asstiance",
"en",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:cc-by-4.0",
"region:us"
] | null | 2025-12-14T06:36:07Z | ⚖️ ZAVIA — Indian Legal AI Assistant
ZAVIA is a domain-specialized Large Language Model (LLM) designed to provide accurate, structured, and responsible explanations of Indian law.
It is fine-tuned specifically for Indian legal statutes, constitutional provisions, and legal concepts, with a strong focus on clarity and ... | [] |
z18820636149/ACT_pick_and_place_v1 | z18820636149 | 2026-04-08T11:42:02Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:pick_and_place_v1",
"region:us"
] | robotics | 2026-04-08T11:41:53Z | # Model Card for z18820636149/ACT_pick_and_place_v1
This repository contains an `act` policy exported from SparkMind.
## Summary
- Source run: `ACT_pick_and_place_v1`
- Exported checkpoint: `050000`
- Policy type: `act`
- Dataset id: `pick_and_place_v1`
## Features
- Input features: `observation.images.cam_head, ob... | [
{
"start": 81,
"end": 84,
"text": "act",
"label": "training method",
"score": 0.83234041929245
},
{
"start": 217,
"end": 220,
"text": "act",
"label": "training method",
"score": 0.783260703086853
}
] |
komokomo7/act_cranex7_no_sensor_20260126_204118 | komokomo7 | 2026-01-26T12:12:49Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:komokomo7/cranex7_gc_on20260126_195936",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-26T12:12:28Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
Couter/res | Couter | 2025-11-14T08:08:03Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-30T17:53:21Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# res
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased)... | [
{
"start": 463,
"end": 475,
"text": "Recall Macro",
"label": "training method",
"score": 0.7069545388221741
},
{
"start": 486,
"end": 494,
"text": "F1 Macro",
"label": "training method",
"score": 0.8731650114059448
},
{
"start": 560,
"end": 571,
"text": "F... |