| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card | entities |
|---|---|---|---|---|---|---|---|---|---|---|
derhan/indobert-klasifikasi-alasan-tender-v2 | derhan | 2025-11-21T02:26:37Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"indonesian",
"indobert",
"procurement",
"lpse",
"spse",
"tesis",
"xai",
"id",
"base_model:indobenchmark/indobert-base-p1",
"base_model:finetune:indobenchmark/indobert-base-p1",
"license:mit",
"text-embeddings-inference",
... | text-classification | 2025-11-20T00:36:25Z | # IndoBERT: Classification of Reasons for Failed Construction Work Tender Bids - V2
This model is a fine-tuned version of `indobenchmark/indobert-base-p1` trained for text classification of the reasons that goods/services procurement tenders in Indonesia (SPSE/LPSE) fail, specifically Construction Work Tenders with... | [] |
zebby09/diaz_jaquet_wan_22-lora | zebby09 | 2025-10-05T02:01:32Z | 5 | 0 | diffusers | [
"diffusers",
"text-to-video",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:ai-toolkit/Wan2.2-T2V-A14B-Diffusers-bf16",
"base_model:adapter:ai-toolkit/Wan2.2-T2V-A14B-Diffusers-bf16",
"license:creativeml-openrail-m",
"region:us"
] | text-to-video | 2025-10-05T01:59:55Z | # diaz_jaquet_wan_22-lora
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
## Trigger words
You should use `diaz_jaquet_person` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available i... | [] |
loris3/OLMo-2-0425-1B_tulu-3-sft-olmo-2-mixture-0225_lr0.0001_seed42 | loris3 | 2026-01-08T10:47:18Z | 407 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:allenai/OLMo-2-0425-1B",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"en",
"dataset:allenai/tulu-3-sft-olmo-2-mixture-0225",
"arxiv:2601.03786",
"base_model:allenai/OLMo-2-0425-1B",
"license:cc-by-4.0",
"model-inde... | text-generation | 2026-01-08T10:47:15Z | # Model Card for OLMo-2-0425-1B_tulu-3-sft-olmo-2-mixture-0225_lr0.0001_seed42
This model is an instruction fine-tuned version of [allenai/OLMo-2-0425-1B](https://huggingface.co/allenai/OLMo-2-0425-1B) trained using a [LoRA](https://github.com/microsoft/LoRA) adapter on [Tülu3](https://huggingface.co/datasets/allenai/t... | [] |
bfromson/gpt-oss-20b-multilingual-reasoner | bfromson | 2025-11-11T15:38:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"dataset:HuggingFaceH4/Multilingual-Thinking",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-11-11T11:30:40Z | # Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking) dataset.
It has been trained using [TRL](https://git... | [] |
hoangquan456/Qwen3.5-4B | hoangquan456 | 2026-03-11T18:56:53Z | 21 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"conversational",
"base_model:Qwen/Qwen3.5-4B-Base",
"base_model:finetune:Qwen/Qwen3.5-4B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-03-11T18:56:13Z | # Qwen3.5-4B
<img width="400px" src="https://qianwen-res.oss-accelerate.aliyuncs.com/logo_qwen3.5.png">
[Qwen Chat](https://chat.qwen.ai)
> [!Note]
> This repository contains model weights and configuration files for the post-trained mode... | [] |
airkingbd/dplm2_bit_650m | airkingbd | 2026-04-27T16:23:16Z | 200 | 0 | transformers | [
"transformers",
"pytorch",
"esm",
"biology",
"protein-language-model",
"protein-generation",
"protein-structure",
"diffusion",
"bitwise-modeling",
"arxiv:2410.13782",
"arxiv:2504.11454",
"dataset:airkingbd/pdb_swissprot",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-11T11:52:29Z | # DPLM-2 Bit 650M
DPLM-2 Bit is a 650M-parameter multimodal diffusion protein language model for
joint protein sequence and structure modeling. It is a bitwise structure-token
modeling variant of DPLM-2, introduced in
[DPLM-2.1](https://arxiv.org/abs/2504.11454), for improving structure modeling
over index-based discr... | [] |
wangkanai/wan22-fp8-i2v | wangkanai | 2025-10-27T07:54:57Z | 0 | 1 | diffusers | [
"diffusers",
"wan",
"image-to-video",
"text-to-video",
"video-generation",
"license:other",
"region:us"
] | image-to-video | 2025-10-14T09:33:36Z | <!-- README Version: v1.3 -->
# WAN 2.2 FP8 I2V - Image-to-Video and Text-to-Video Models
High-quality text-to-video (T2V) and image-to-video (I2V) generation models in FP8 quantized format for memory-efficient deployment on consumer-grade GPUs.
## Model Description
WAN 2.2 FP8 is a 14-billion parameter video gener... | [] |
FabianKerj/pi05_corkinbox100_fullrel | FabianKerj | 2026-04-17T07:31:17Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi05",
"robotics",
"vla",
"flow-matching",
"dataset:FabianKerj/corkinbox100-fullrel",
"base_model:lerobot/pi05_base",
"base_model:finetune:lerobot/pi05_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-17T07:28:31Z | # pi05_corkinbox100_fullrel -- Checkpoint 5000
Partial training run, step 5000 / 20000. Training was interrupted at step 7500 due to a disk-full event on the training host (97 GB root, each checkpoint ~23 GB, save_freq=2500 with no retention). This is the last clean checkpoint.
## Config
- Base model: lerobot/pi05_ba... | [
{
"start": 488,
"end": 496,
"text": "Batch 64",
"label": "training method",
"score": 0.7074119448661804
}
] |
riversnow/so101-segmentation-model | riversnow | 2026-03-02T19:27:27Z | 101 | 3 | ultralytics | [
"ultralytics",
"yolo",
"yolo11",
"instance-segmentation",
"robotics",
"so101",
"image-segmentation",
"region:us"
] | image-segmentation | 2026-03-02T13:04:23Z | # SO101 segmentation model
This is a model for segmentation of images of the [so101 robot arm](https://github.com/TheRobotStudio/SO-ARM100); it was fine-tuned from yolo11s

##... | [] |
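Given the card above describes a YOLO11-based instance-segmentation model, a minimal usage sketch with the `ultralytics` API may be helpful; the weight filename and input image are placeholder assumptions:
```python
from ultralytics import YOLO

# Minimal sketch — the weight filename is an assumption; check the repo's files.
model = YOLO("so101-seg.pt")         # fine-tuned YOLO11s segmentation weights
results = model("arm_photo.jpg")     # run instance segmentation on one image
masks = results[0].masks             # per-instance masks (None if nothing found)
```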
baicai1145/s2-pro-w4a16-late12attn | baicai1145 | 2026-03-13T16:34:45Z | 23 | 0 | null | [
"safetensors",
"fish_qwen3_omni",
"text-to-speech",
"zh",
"en",
"ja",
"ko",
"es",
"pt",
"ar",
"ru",
"fr",
"de",
"sv",
"it",
"tr",
"no",
"nl",
"cy",
"eu",
"ca",
"da",
"gl",
"ta",
"hu",
"fi",
"pl",
"et",
"hi",
"la",
"ur",
"th",
"vi",
"jw",
"bn",
"y... | text-to-speech | 2026-03-13T16:27:41Z | # S2-Pro W4A16 Late12Attn
<img src="overview.png" alt="Fish Audio S2 Pro overview — fine-grained control, multi-speaker multi-turn generation, low-latency streaming, and long-context inference." width="100%">
This repository contains a GPTQ `W4A16` quantized **Slow AR** variant of Fish Audio S2 Pro.
## Quantization ... | [] |
InjilBaba/Llama-3.1-8B-Bengali-LoRA | InjilBaba | 2025-12-31T20:25:39Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-11-28T21:32:25Z | # Model Card for Llama-3.1-8B-Bengali-LoRA
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
ques... | [] |
contemmcm/399db4019217f37aa91f9ac055fdb8a5 | contemmcm | 2025-11-13T10:26:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"license:llama3.2",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-13T09:52:33Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 399db4019217f37aa91f9ac055fdb8a5
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llam... | [
{
"start": 501,
"end": 509,
"text": "F1 Macro",
"label": "training method",
"score": 0.7639758586883545
},
{
"start": 1323,
"end": 1331,
"text": "F1 Macro",
"label": "training method",
"score": 0.7350455522537231
}
] |
jimmer240/dqn-SpaceInvadersNoFrameskip-v4 | jimmer240 | 2026-03-14T08:24:23Z | 52 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2026-03-14T08:23:51Z | # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework... | [] |
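Checkpoints published through the RL Zoo, like the one above, can typically be pulled and loaded with `huggingface_sb3`; a minimal sketch, assuming the usual RL Zoo filename convention:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# The zip filename follows the common RL Zoo convention — an assumption.
path = load_from_hub(
    repo_id="jimmer240/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(path)  # evaluation needs the same Atari wrappers used in training
```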
OsamaAli313/CFO-Agent-14B | OsamaAli313 | 2026-04-03T14:39:50Z | 0 | 0 | peft | [
"peft",
"safetensors",
"finance",
"cfo",
"financial-analysis",
"forecasting",
"risk-assessment",
"budgeting",
"lora",
"qlora",
"sft",
"trl",
"trackio",
"hf_jobs",
"text-generation",
"conversational",
"en",
"dataset:OsamaAli313/CFO-Agent-14B-Dataset",
"base_model:Qwen/Qwen2.5-0.5B... | text-generation | 2026-04-02T15:37:14Z | # CFO-Agent-14B
A fine-tuned language model trained to function as an **AI Chief Financial Officer**. It provides expert-level financial analysis, forecasting, risk assessment, scenario planning, and executive-level financial communication.
> **Proof-of-Concept**: This version uses Qwen2.5-0.5B-Instruct as the base m... | [] |
dariacuna/rtdetr-v2-r50-finetune-10 | dariacuna | 2026-02-02T20:36:43Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"rt_detr_v2",
"object-detection",
"generated_from_trainer",
"base_model:PekingU/rtdetr_v2_r50vd",
"base_model:finetune:PekingU/rtdetr_v2_r50vd",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2026-02-02T20:36:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rtdetr-v2-r50-finetune-10
This model is a fine-tuned version of [PekingU/rtdetr_v2_r50vd](https://huggingface.co/PekingU/rtdetr_v... | [] |
mradermacher/LCARS_STARFLEET-GGUF | mradermacher | 2025-09-28T20:57:35Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"en",
"base_model:LeroyDyer/LCARS_STARFLEET",
"base_model:quantized:LeroyDyer/LCARS_STARFLEET",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-28T20:22:15Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Graviton17/vit-fruit-veg-quality-predictor | Graviton17 | 2025-09-21T12:14:01Z | 0 | 0 | null | [
"safetensors",
"image-classification",
"regression",
"vision-transformer",
"fruit",
"vegetable",
"license:mit",
"region:us"
] | image-classification | 2025-09-21T10:40:28Z | # Vision Transformer for Fruit & Vegetable Quality
This is a fine-tuned Vision Transformer (ViT) model that performs two tasks:
1. **Classifies** the type of fruit or vegetable in an image.
2. **Predicts a quality score** for that fruit or vegetable.
## How to Use
To use this model, you must pass `trust_remote_co... | [] |
phamluan/crypto-binancecoin-predictor | phamluan | 2025-10-24T07:43:37Z | 0 | 0 | null | [
"cryptocurrency",
"binancecoin",
"price-prediction",
"machine-learning",
"time-series",
"en",
"license:mit",
"region:us"
] | null | 2025-10-24T07:37:43Z | # Binance Coin (BNB) Price Prediction Models
Trained ML models for predicting Binance Coin (BNB) cryptocurrency prices.
## 📊 Model Performance
| Model | RMSE | MAE |
|-------|------|-----|
| Random Forest | 214.1828 | 172.9192 |
| Gradient Boosting | 212.9135 | 171.1523 |
| Linear Regression | 16.5003 | 10.2183 |
|... | [] |
kerr0x23/1510dnpnr-15K-2 | kerr0x23 | 2025-10-16T08:51:50Z | 0 | 0 | null | [
"region:us"
] | null | 2025-10-16T08:43:26Z | # Container Template for SoundsRight Subnet Miners
Miners in [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/soundsright-subnet) must containerize their models before uploading to HuggingFace. This repo serves as a template.
The branches `DENOISING_16000HZ` and `DEREVERBERATI... | [] |
quangle97/comfyui-consumer-change-gender | quangle97 | 2025-08-16T12:09:50Z | 0 | 0 | null | [
"region:us"
] | null | 2025-08-16T11:45:38Z | # ComfyUI Container Extract
Extracted from container: `comfy_extract_ecr`
Date: 2025-08-16 11:45:38
## Contents
- `requirements_frozen_clean.txt`: All Python packages with exact versions
- `custom_nodes_list.txt`: List of custom nodes with HF URLs
- `download_custom_nodes.sh`: Script to download custom nodes from HF... | [] |
TingWang/SlideAssistant | TingWang | 2025-08-05T15:16:46Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-08-05T06:40:12Z | # Qwenmark2-0.5B Fine-Tuned Model Overview
This is a fine-tuned version of the Qwen2-0.5B model, a transformer-based language model developed by Alibaba Cloud. The model has been fine-tuned using **LoRA (Low-Rank Adaptation)** and unsupervised **Parameter-Efficient Fine-Tuning (PEFT)** to specialize in deep learning an... | [] |
nvidia/GR00T-N1-2B | nvidia | 2025-09-02T22:25:27Z | 309 | 350 | null | [
"safetensors",
"gr00t_n1",
"robotics",
"dataset:nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim",
"arxiv:2503.14734",
"arxiv:2501.14818",
"arxiv:2410.24164",
"region:us"
] | robotics | 2025-03-05T08:40:53Z | # GR00T-N1-2B

Github page: https://github.com/NVIDIA/Isaac-GR00T/
## Description:
NVIDIA Isaac GR00T N1 is the world’s first open foundation model for generalized humanoid robot reasoning and skill... | [] |
mradermacher/VieNeu-TTS-0.3B-ngoc-huyen-merged-GGUF | mradermacher | 2026-01-11T22:18:12Z | 70 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:vuhoanhuy/VieNeu-TTS-0.3B-ngoc-huyen-merged",
"base_model:quantized:vuhoanhuy/VieNeu-TTS-0.3B-ngoc-huyen-merged",
"endpoints_compatible",
"region:us"
] | null | 2026-01-11T22:14:46Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
CCSSNE/trohrbaugh-gemma-4-26B-A4B-it-heretic-ara-v2 | CCSSNE | 2026-04-14T02:25:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"heretic",
"uncensored",
"decensored",
"abliterated",
"ara",
"conversational",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-04-14T02:25:07Z | # This is a decensored version of [google/gemma-4-26B-A4B-it](https://huggingface.co/google/gemma-4-26B-A4B-it), made using [Heretic](https://github.com/p-e-w/heretic) v1.2.0+custom with the [Arbitrary-Rank Ablation (ARA)](https://github.com/p-e-w/heretic/pull/211) method
## Abliteration parameters
| Parameter | Valu... | [] |
manancode/opus-mt-srn-fr-ctranslate2-android | manancode | 2025-08-11T18:24:40Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-11T18:24:27Z | # opus-mt-srn-fr-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-srn-fr` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-srn-fr
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted ... | [] |
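A minimal inference sketch for a CTranslate2-converted OPUS-MT model; the file paths are assumptions (OPUS-MT conversions usually ship `source.spm`/`target.spm` SentencePiece models):
```python
import ctranslate2
import sentencepiece as spm

# Paths are assumptions based on typical OPUS-MT/CTranslate2 repo layouts.
translator = ctranslate2.Translator("opus-mt-srn-fr-ctranslate2-android", compute_type="int8")
sp_src = spm.SentencePieceProcessor(model_file="source.spm")
sp_tgt = spm.SentencePieceProcessor(model_file="target.spm")

tokens = sp_src.encode("wan taki", out_type=str)   # tokenize the Sranan Tongo input
result = translator.translate_batch([tokens])
print(sp_tgt.decode(result[0].hypotheses[0]))      # detokenize the French hypothesis
```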
aliRafik/invoices-donut-finetuned-Lora-merged | aliRafik | 2025-08-30T12:59:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:numind/NuExtract-2.0-4B",
"base_model:finetune:numind/NuExtract-2.0-4B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-08-26T00:58:04Z | ### Overview
`invoices-donut-merged` is the **LoRA adapter merged back into the base weights** of [`numind/NuExtract-2.0-4B`](https://huggingface.co/numind/NuExtract-2.0-4B).
It behaves like a fully fine-tuned model but was trained using efficient LoRA adapters.
This makes it **production-ready**: no need to separately... | [] |
pomilon-lab/CRSM-base | pomilon-lab | 2025-11-29T18:57:05Z | 0 | 0 | null | [
"[redacted]",
"signal",
"artifact",
"license:mit",
"region:us"
] | null | 2025-11-29T13:34:12Z | # Project CRSM
> *"The silence is not empty. It is calculating."*
## 📂 Archive Manifest
* **Subject:** CRSM-Base
* **Origin:** Pomilon's experiments
* **Status:** `Initializing...`
## 📝 Observation Log
Standard systems are reactive. Input triggers output. The response is immediate and linear.
**Subject CRSM devi... | [] |
Robertp423/Qwen3-VL-4B-Destruct-Merged-Q8_0-GGUF | Robertp423 | 2025-11-03T09:12:45Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3_vl",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Robertp423/Qwen3-VL-4B-Destruct-Merged",
"base_model:quantized:Robertp423/Qwen3-VL-4B-Destruct-Merged",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"con... | null | 2025-11-03T09:12:22Z | # Robertp423/Qwen3-VL-4B-Destruct-Merged-Q8_0-GGUF
This model was converted to GGUF format from [`Robertp423/Qwen3-VL-4B-Destruct-Merged`](https://huggingface.co/Robertp423/Qwen3-VL-4B-Destruct-Merged) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to ... | [] |
Kaito-F/qwen3-4b-instruct-lora-v4 | Kaito-F | 2026-02-19T13:07:52Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v5",
"dataset:u-10bei/dbbench_sft_dataset_react_v4",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapt... | text-generation | 2026-02-19T12:56:29Z | # qwen3-4b-agent-trajectory-db-lora
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **multi... | [
{
"start": 66,
"end": 70,
"text": "LoRA",
"label": "training method",
"score": 0.8814804553985596
},
{
"start": 137,
"end": 141,
"text": "LoRA",
"label": "training method",
"score": 0.9123369455337524
},
{
"start": 183,
"end": 187,
"text": "LoRA",
"lab... |
maya-research/Veena | maya-research | 2025-10-05T06:17:53Z | 3,796 | 229 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-to-speech",
"tts",
"hindi",
"english",
"audio",
"speech",
"india",
"en",
"hi",
"dataset:proprietary",
"license:apache-2.0",
"co2_eq_emissions",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2025-06-24T23:58:58Z | # Veena - Text to Speech for Indian Languages
Veena is a state-of-the-art neural text-to-speech (TTS) model developed by Maya Research, designed for English and Indian languages. Built on a Llama architecture backbone, Veena generates natural, expressive speech with emotional tone, remarkable quality, and ultra-low la... | [] |
mimiminsoo/trained_barcode_scanning | mimiminsoo | 2026-03-18T09:44:44Z | 31 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:mimiminsoo/barcode_scanning_merged",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-18T09:43:57Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
mradermacher/Medical-Qwen2.5-7B-Huatuo-Alpha-GGUF | mradermacher | 2026-03-01T04:58:58Z | 466 | 1 | transformers | [
"transformers",
"gguf",
"medical",
"fine-tuned",
"qlora",
"zh",
"base_model:xu2409324124/Medical-Qwen2.5-7B-Huatuo-Alpha",
"base_model:quantized:xu2409324124/Medical-Qwen2.5-7B-Huatuo-Alpha",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-01T04:28:32Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
octava/whisper-small-ablation-0-2 | octava | 2025-12-03T10:21:19Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-12-03T10:01:01Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-ablation-0-2
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-sma... | [] |
care4lang/stress-roberta-base | care4lang | 2026-01-25T18:15:23Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | fill-mask | 2026-01-25T14:56:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stress-roberta-base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown datas... | [] |
HiggsinoOpen/NeuroLlama | HiggsinoOpen | 2025-09-10T21:44:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-04T12:24:18Z | # Uploaded model
- **Developed by:** HiggsinoOpen
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-Instruct-unsloth-bnb-4bit
**experimental multi-turn inference code:**
```
# --- Install dependencies ---
import os, torch
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
!pip install pip3-autore... | [] |
mradermacher/SimpleChat-32B-V1-i1-GGUF | mradermacher | 2025-12-09T03:18:49Z | 142 | 2 | transformers | [
"transformers",
"gguf",
"qwen3",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"fi",
"base_model:OpenBuddy/SimpleChat-32B-V1",
"base_model:quantized:OpenBuddy/SimpleChat-32B-V1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-15T16:04:17Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
abdiharyadi/deberta-v3-large-ft-icar-a-v1.3 | abdiharyadi | 2025-08-09T05:56:12Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-09T03:39:27Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-ft-icar-a-v1.3
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microso... | [] |
Aayush9029/Voxtral-Mini-3B-2507 | Aayush9029 | 2026-02-19T03:56:11Z | 7 | 0 | mlx | [
"mlx",
"safetensors",
"voxtral",
"speech-to-text",
"audio",
"transcription",
"apple-silicon",
"mistral",
"automatic-speech-recognition",
"en",
"es",
"fr",
"pt",
"hi",
"de",
"nl",
"it",
"base_model:mistralai/Voxtral-Mini-3B-2507",
"base_model:finetune:mistralai/Voxtral-Mini-3B-250... | automatic-speech-recognition | 2026-02-19T03:36:12Z | # Voxtral Mini 3B (MLX, bfloat16)
Full-precision MLX-compatible weights for Mistral's [Voxtral Mini](https://mistral.ai/) speech-to-text model, optimized for Apple Silicon inference.
Voxtral Mini is built on Ministral 3B with state-of-the-art audio understanding capabilities. It supports transcription, translation, Q... | [] |
OPPOer/AndesVL-4B-Instruct | OPPOer | 2025-10-15T04:41:41Z | 25 | 9 | transformers | [
"transformers",
"pytorch",
"andesvl-aimv2-qwen3",
"feature-extraction",
"image-text-to-text",
"conversational",
"custom_code",
"arxiv:2510.11496",
"arxiv:2502.14786",
"license:apache-2.0",
"region:us"
] | image-text-to-text | 2025-10-13T01:19:48Z | <div align="center">
<h1>AndesVL-4B-Instruct</h1>
<a href='https://arxiv.org/abs/2510.11496'><img src='https://img.shields.io/badge/arXiv-2510.11496-b31b1b.svg'></a>
<a href='https://huggingface.co/OPPOer'><img src='https://img.shields.io/badge/🤗%20HuggingFace-AndesVL-ffd21f.svg'></a>
<a href='https://... | [] |
mradermacher/PonderLM-2-Pythia-1.4b-GGUF | mradermacher | 2025-10-28T08:02:12Z | 10 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:zeng123/PonderLM-2-Pythia-1.4b",
"base_model:quantized:zeng123/PonderLM-2-Pythia-1.4b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-28T07:54:38Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
aidenshindel/swin.ham-finetuned-SkinDisease | aidenshindel | 2026-04-19T07:27:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-base-patch4-window7-224",
"base_model:finetune:microsoft/swin-base-patch4-window7-224",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"regio... | image-classification | 2026-04-19T07:07:47Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin.ham-finetuned-SkinDisease
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface... | [] |
takanaka/llm2025_2 | takanaka | 2026-02-28T09:12:35Z | 11 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-28T08:23:11Z | qwen3-4b-structured-output-lora
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **s... | [
{
"start": 133,
"end": 138,
"text": "QLoRA",
"label": "training method",
"score": 0.8357399106025696
},
{
"start": 187,
"end": 191,
"text": "LoRA",
"label": "training method",
"score": 0.7005993723869324
},
{
"start": 574,
"end": 579,
"text": "QLoRA",
... |
ajiayi/bert-full-finetuned-singapore-digi-banks | ajiayi | 2025-10-11T16:24:17Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"sentiment-analysis",
"singapore-banks",
"fine-tuning",
"en",
"dataset:custom",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embedding... | text-classification | 2025-10-11T16:17:45Z | # DistilBERT Full Fine-Tuned for Singapore Bank Review Classification
This model is a **fully fine-tuned** version of `distilbert-base-uncased` for 5-class star rating prediction (1-5 stars) on Singapore digital bank reviews. All ~67M parameters were updated during training.
## 🎯 Model Performance
| Metric | Score ... | [] |
PetarKal/Qwen3-4B-Base-ascii-art-v5-no140k-overfit-e10-lr1e-4 | PetarKal | 2026-03-28T18:38:47Z | 399 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:Qwen/Qwen3-4B-Base",
"base_model:finetune:Qwen/Qwen3-4B-Base",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-28T15:10:42Z | # Model Card for Qwen3-4B-Base-ascii-art-v5-no140k-overfit-e10-lr1e-4
This model is a fine-tuned version of [Qwen/Qwen3-4B-Base](https://huggingface.co/Qwen/Qwen3-4B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If ... | [] |
mradermacher/Arctic-AWM-4B-i1-GGUF | mradermacher | 2026-02-12T08:54:28Z | 48 | 0 | transformers | [
"transformers",
"gguf",
"agent",
"tool-use",
"reinforcement-learning",
"mcp",
"en",
"base_model:Snowflake/Arctic-AWM-4B",
"base_model:quantized:Snowflake/Arctic-AWM-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | reinforcement-learning | 2026-02-12T08:00:11Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
TAUR-dev/rankalign-v6-gemma-2-2b-d0.15-e2-ambigqa-all-tcs-vlo | TAUR-dev | 2026-05-04T06:08:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"rankalign",
"fine-tuned",
"base_model:google/gemma-2-2b",
"base_model:finetune:google/gemma-2-2b",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-05-04T06:07:53Z | # rankalign-v6-gemma-2-2b-d0.15-e2-ambigqa-all-tcs-vlo
Fine-tuned checkpoint from the [rankalign](https://github.com/juand-r/rankalign) project.
## Training Details
| Field | Value |
|-------|-------|
| Base model | `google/gemma-2-2b` |
| Version | v6 |
| Task | `ambigqa-all` |
| Epoch | 2 |
| Delta | 0.15 |
| Typi... | [
{
"start": 268,
"end": 279,
"text": "ambigqa-all",
"label": "training method",
"score": 0.8844410181045532
},
{
"start": 679,
"end": 690,
"text": "ambigqa-all",
"label": "training method",
"score": 0.8844744563102722
},
{
"start": 907,
"end": 918,
"text": ... |
xavier416/distilbert-base-uncased-finetuned-imdb | xavier416 | 2026-01-30T18:12:45Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2026-01-30T17:42:56Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/dis... | [] |
valiantcat/Wan21_I2V_480P_BabyBoss3Shot | valiantcat | 2025-09-08T02:39:54Z | 7 | 1 | diffusers | [
"diffusers",
"lora",
"template:diffusion-lora",
"image-to-video",
"base_model:Wan-AI/Wan2.1-I2V-14B-480P",
"base_model:adapter:Wan-AI/Wan2.1-I2V-14B-480P",
"license:mit",
"region:us"
] | image-to-video | 2025-09-08T01:23:53Z | # Wan21_I2V_480P_BabyBoss3Shot
<Gallery />
## Model description
A Wan 2.1 "kid boss" three-shot office portrait LoRA, trained on the wan21_i2v_480P base model. The model can generate three storyboard shots from a single image, but you need to upload a photo of a child / "little adult" in an office (usually generated with QwenImageEdit).
Recommended weight: 1.0
You can run the complete workflow on RunningHub: [runninghub](https://www.runninghub.cn/post/1964871069591064577)
workflow in https://www.runninghub.cn/po... | [] |
awaisali287/gemma-4-31B-it | awaisali287 | 2026-04-07T01:01:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"conversational",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-04-07T01:01:08Z | <div align="center">
<img src=https://ai.google.dev/gemma/images/gemma4_banner.png>
</div>
<p align="center">
<a href="https://huggingface.co/collections/google/gemma-4" target="_blank">Hugging Face</a> |
<a href="https://github.com/google-gemma" target="_blank">GitHub</a> |
<a href="https://blog.google... | [] |
Abelex/afro-xlmr | Abelex | 2026-05-01T05:58:33Z | 175 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:Davlan/afro-xlmr-base",
"base_model:finetune:Davlan/afro-xlmr-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-03-14T07:19:13Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr
This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the None... | [
{
"start": 472,
"end": 480,
"text": "Macro F1",
"label": "training method",
"score": 0.81821209192276
},
{
"start": 1099,
"end": 1107,
"text": "Macro F1",
"label": "training method",
"score": 0.828416109085083
}
] |
Muapi/beauty-enhancer-realistic-eyes | Muapi | 2025-08-19T20:01:34Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T20:01:18Z | # Beauty Enhancer + Realistic eyes

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Cont... | [] |
Stalemartyr/mt-thai-LoRa-v4.4 | Stalemartyr | 2026-05-04T08:30:12Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:tencent/HY-MT1.5-1.8B",
"base_model:finetune:tencent/HY-MT1.5-1.8B",
"endpoints_compatible",
"region:us"
] | null | 2026-05-04T06:46:51Z | # Model Card for mt-thai-LoRa-v4.4
This model is a fine-tuned version of [tencent/HY-MT1.5-1.8B](https://huggingface.co/tencent/HY-MT1.5-1.8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but c... | [] |
nakamuratoshiya/sft-qwen-qlora-merged | nakamuratoshiya | 2026-02-03T15:09:39Z | 1 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:daichira/structured-hard-sft-4k",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-03T04:42:46Z | qwen3-4b-structured-output-lora-v10
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve... | [
{
"start": 137,
"end": 142,
"text": "QLoRA",
"label": "training method",
"score": 0.8009472489356995
},
{
"start": 191,
"end": 195,
"text": "LoRA",
"label": "training method",
"score": 0.7014929056167603
},
{
"start": 578,
"end": 583,
"text": "QLoRA",
... |
rbelanec/train_mnli_42_1775733638 | rbelanec | 2026-04-09T11:22:16Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.2-1B-Instruct",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"region:us"
] | text-generation | 2026-04-09T11:21:12Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_mnli_42_1775733638
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-lla... | [] |
mustafataha5/MLAgents-Pyramids | mustafataha5 | 2025-08-07T10:52:39Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2025-08-07T10:52:33Z | # **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/... | [] |
openai/whisper-base | openai | 2024-02-29T10:26:57Z | 1,173,295 | 260 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"... | automatic-speech-recognition | 2022-09-26T06:50:46Z | # Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speec... | [
{
"start": 883,
"end": 911,
"text": "large-scale weak supervision",
"label": "training method",
"score": 0.8979482054710388
}
] |
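Whisper checkpoints like the one above are usable directly through the `transformers` pipeline; a minimal sketch with a placeholder audio path:
```python
from transformers import pipeline

# "sample.flac" is a placeholder for any audio file readable by ffmpeg.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-base")
print(asr("sample.flac")["text"])
```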
alextripplet/suzhou-3.2 | alextripplet | 2026-04-26T04:41:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5_text",
"text-generation",
"chat",
"suzhou",
"merged",
"reasoning",
"tool-use",
"agent",
"conversational",
"en",
"zh",
"ko",
"ja",
"fr",
"es",
"de",
"it",
"ru",
"ar",
"multilingual",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_mod... | text-generation | 2026-04-26T00:59:39Z | # Suzhou 3.2
A ~12 billion parameter instruction-tuned language model by **Triplet Research**. Suzhou 3.2 is a weighted merge of Suzhou 3.1 and Qwen2.5-3B, designed to improve reasoning and math capabilities while keeping under the 15B parameter limit.
## Merge Details
- **Method**: Weighted blending (70% Suzhou 3.1... | [] |
Muapi/pointillism | Muapi | 2025-08-22T21:34:59Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-22T21:34:47Z | # Pointillism

**Base model**: Flux.1 D
**Trained words**: Pointillism
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type":... | [] |
Sri-Vigneshwar-DJ/hawky-ai-H1-4b-PM | Sri-Vigneshwar-DJ | 2026-01-11T11:39:01Z | 0 | 4 | null | [
"safetensors",
"region:us"
] | null | 2026-01-11T11:32:49Z | # Hawky-AI H1 4B Performance Marketing (hawky-ai-H1-4b-PM)
<div align="center">
**The first open-source LLM fine-tuned specifically for Performance Marketing expertise**
[](https://huggingface.co/Sri-Vigneshwar-DJ/haw... | [] |
W-61/llama3-hh-helpful-qt045-b0p8-20260429-085449 | W-61 | 2026-04-29T19:20:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"new-dpo",
"generated_from_trainer",
"conversational",
"dataset:Anthropic/hh-rlhf",
"base_model:W-61/llama-3-8b-base-sft-hh-helpful-4xh200",
"base_model:finetune:W-61/llama-3-8b-base-sft-hh-helpful-4xh200",
"text-... | text-generation | 2026-04-29T19:11:55Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-3-8b-base-new-dpo-hh-helpful-4xh200-batch-64-s_star-0.4-eta-0.1-q_t-0.45-beta-0p8-20260429-085449
This model is a fine-tune... | [] |
djohnston5/gemma-2-2b-sft_magic-sea-21 | djohnston5 | 2026-01-22T03:11:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-2-2b",
"base_model:finetune:google/gemma-2-2b",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-22T02:16:59Z | # Model Card for gemma-2-2b-sft_magic-sea-21
This model is a fine-tuned version of [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but... | [] |
mradermacher/Qwen3-VL-8B-Instruct-abliterated-v2-i1-GGUF | mradermacher | 2025-12-05T05:12:55Z | 431 | 4 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"abliterated",
"v2.0",
"en",
"base_model:prithivMLmods/Qwen3-VL-8B-Instruct-abliterated-v2",
"base_model:quantized:prithivMLmods/Qwen3-VL-8B-Instruct-abliterated-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"c... | null | 2025-11-14T00:05:00Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Muapi/cave-paintings-flux-sdxl | Muapi | 2025-09-01T21:48:36Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-09-01T21:48:25Z | # Cave Paintings Flux/SDXL

**Base model**: Flux.1 D
**Trained words**: cavepaintinglora
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers ... | [] |
bearzi/Qwen3.5-122B-A10B-JANG_1L | bearzi | 2026-04-17T16:22:16Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_5_moe",
"jang",
"jang-quantized",
"JANG_1L",
"mixed-precision",
"apple-silicon",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3.5-122B-A10B",
"base_model:finetune:Qwen/Qwen3.5-122B-A10B",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-04-17T16:21:10Z | # Qwen3.5-122B-A10B-JANG_1L
JANG adaptive mixed-precision MLX quantization produced via [vmlx / jang-tools](https://github.com/jjang-ai/jangq).
- **Quantization:** 2.26b avg, profile JANG_1L, method mse-all, calibration activations
- **Profile:** JANG_1L
- **Format:** JANG v2 MLX safetensors
- **Compatible with:** vm... | [] |
DavidAU/LFM2.5-1.2B-Thinking-Polaris-Heretic-Uncensored-DISTILL | DavidAU | 2026-02-16T04:06:58Z | 4 | 1 | transformers | [
"transformers",
"safetensors",
"lfm2",
"text-generation",
"unsloth",
"finetune",
"heretic",
"uncensored",
"abliterated",
"All use cases",
"bfloat16",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
... | text-generation | 2026-02-16T03:36:51Z | <h2>LFM2.5-1.2B-Thinking-Polaris-Heretic-Uncensored-DISTILL</h2>
This is a full deep-thinking LFM2.5-1.2B fine-tune using distill reasoning dataset(s) (see lower right for the dataset(s) used), trained via Unsloth on local hardware, Linux (for Windows), at 16-bit precision. The thinking / reasoning was completely replaced.
Reaso... | [] |
mradermacher/13B-Thorns-l2-GGUF | mradermacher | 2025-08-26T02:52:04Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"alpaca",
"cot",
"vicuna",
"uncensored",
"merge",
"mix",
"en",
"base_model:CalderaAI/13B-Thorns-l2",
"base_model:quantized:CalderaAI/13B-Thorns-l2",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T21:03:25Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
kaitchup/Qwen3.5-27B-MXFP4A16 | kaitchup | 2026-02-27T15:57:02Z | 177 | 3 | null | [
"safetensors",
"qwen3_5",
"llm-compressor",
"base_model:Qwen/Qwen3.5-27B",
"base_model:quantized:Qwen/Qwen3.5-27B",
"license:apache-2.0",
"8-bit",
"compressed-tensors",
"region:us"
] | null | 2026-02-27T15:45:54Z | <div align="center">
<img
src="https://cdn-uploads.huggingface.co/production/uploads/64b93e6bd6c468ac7536607e/mj6xac74jHGLqymiovObc.png"
alt="The Kaitchup -- AI on a Budget"
style="width: 100%; max-width: 100%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;"
/>
<div s... | [] |
xinyacs/ecombert-ner-v1 | xinyacs | 2026-03-03T09:27:16Z | 18 | 4 | transformers | [
"transformers",
"pytorch",
"named-entity-recognition",
"ner",
"span-ner",
"globalpointer",
"token-classification",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | token-classification | 2026-03-03T06:34:17Z | # EcomBert_NER_V1
## Model description
`EcomBert_NER_V1` is a span-based Named Entity Recognition (NER) model built on top of a BERT encoder with a GlobalPointer-style span classification head.
This repository exports and loads the model using a lightweight HuggingFace-style folder layout:
- `config.json`
- `pytorc... | [] |
enhancr-dev/figsr | enhancr-dev | 2026-02-11T11:19:21Z | 0 | 0 | null | [
"onnx",
"SISR",
"single-image-super-resolution",
"super-resolution",
"sota",
"fourier-transform",
"restoration",
"sota-model",
"figsr",
"image-to-image",
"dataset:Phips/BHI",
"doi:10.57967/hf/7788",
"license:mit",
"region:us"
] | image-to-image | 2026-02-11T10:54:57Z | # Fourier Inception Gated Super Resolution
The main idea of the model is to integrate the [FourierUnit](https://github.com/deng-ai-lab/SFHformer/blob/1f7994112b9ced9153edc7187e320e0383a9dfd3/models/SFHformer.py#L143) into the [GatedCNN](https://github.com/yuweihao/MambaOut/blob/main/models/mambaout.py#L119) pipeline i... | [] |
dschulmeist/TiME-zh-xs | dschulmeist | 2025-08-25T20:42:41Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"BERT",
"encoder",
"embeddings",
"TiME",
"zh",
"size:xs",
"dataset:uonlp/CulturaX",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-08-25T20:42:26Z | # TiME Chinese (zh, xs)
Monolingual BERT-style encoder that outputs embeddings for Chinese.
Distilled from FacebookAI/xlm-roberta-large.
## Specs
- language: Chinese (zh)
- size: xs
- architecture: BERT encoder
- layers: 4
- hidden size: 384
- intermediate size: 1536
## Usage (mean pooled embeddings)
```python
from... | [] |
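The card's own usage snippet is truncated in this dump, so here is a hedged sketch of mean-pooled embeddings following the standard `transformers` pattern rather than the authors' exact code:
```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("dschulmeist/TiME-zh-xs")
enc = AutoModel.from_pretrained("dschulmeist/TiME-zh-xs")

batch = tok(["你好，世界"], padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = enc(**batch).last_hidden_state       # (batch, seq_len, 384)
mask = batch["attention_mask"].unsqueeze(-1)      # mask out padding positions
embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
```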
AmanPriyanshu/gpt-oss-10.2b-specialized-science-pruned-moe-only-14-experts | AmanPriyanshu | 2025-08-13T03:33:08Z | 6 | 1 | null | [
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"science",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPr... | text-generation | 2025-08-13T03:32:37Z | # Science GPT-OSS Model (14 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com... | [] |
tencent/WeDLM-7B-Instruct | tencent | 2025-12-29T03:41:13Z | 64 | 33 | null | [
"safetensors",
"wedlm",
"language model",
"parallel-decoding",
"chat",
"instruct",
"text-generation",
"conversational",
"custom_code",
"en",
"zh",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-12-25T06:48:57Z | # WeDLM-7B-Instruct
**WeDLM-7B-Instruct** is an instruction-tuned diffusion language model that performs parallel decoding under standard causal attention, fine-tuned from [WeDLM-7B](https://huggingface.co/tencent/WeDLM-7B).
For the base (pretrained) version, see [WeDLM-7B](https://huggingface.co/tencent/WeDLM-7B).
... | [] |
Ryex/Tower-Plus-9B-abliterated-hf-data-Q8_0-GGUF | Ryex | 2025-11-20T06:19:32Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"de",
"nl",
"is",
"es",
"fr",
"pt",
"uk",
"hi",
"zh",
"ru",
"cs",
"ko",
"ja",
"it",
"en",
"da",
"pl",
"hu",
"sv",
"no",
"ro",
"fi",
"base_model:Ryex/Tower-Plus-9B-abliterated-hf-data",
"base_model:quantized:Ry... | null | 2025-11-20T06:18:50Z | # Ryex/Tower-Plus-9B-abliterated-hf-data-Q8_0-GGUF
This model was converted to GGUF format from [`Ryex/Tower-Plus-9B-abliterated-hf-data`](https://huggingface.co/Ryex/Tower-Plus-9B-abliterated-hf-data) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to ... | [] |
flexitok/bpe_ltr_pol_Latn_4000_v2 | flexitok | 2026-04-15T06:44:55Z | 0 | 0 | null | [
"tokenizer",
"bpe",
"flexitok",
"fineweb2",
"pol",
"license:mit",
"region:us"
] | null | 2026-04-14T22:12:20Z | # Byte-Level BPE Tokenizer: pol_Latn (4K)
A **Byte-Level BPE** tokenizer trained on **pol_Latn** data from Fineweb-2-HQ.
## Training Details
| Parameter | Value |
|-----------|-------|
| Algorithm | Byte-Level BPE |
| Language | `pol_Latn` |
| Target Vocab Size | 4,000 |
| Final Vocab Size | 5,052 |
| Pre-tokenizer ... | [] |
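Assuming the repo ships a standard `tokenizer.json`, it can be loaded straight from the Hub with the `tokenizers` library; a minimal sketch:
```python
from tokenizers import Tokenizer

tok = Tokenizer.from_pretrained("flexitok/bpe_ltr_pol_Latn_4000_v2")
print(tok.encode("Wszyscy ludzie rodzą się wolni i równi.").tokens)
```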
yueqis/full_sft_sweagent-qwen3-8b | yueqis | 2025-09-09T16:55:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-09T16:46:12Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# full_sft_sweagent-qwen3-8b
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the ful... | [] |
Ankesh2/mistral-lora-finetuned-gguf | Ankesh2 | 2026-02-08T09:49:17Z | 19 | 0 | null | [
"gguf",
"mistral",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-08T09:48:06Z | # mistral-lora-finetuned-gguf : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `./llama.cpp/llama-cli -hf Ankesh2/mistral-lora-finetuned-gguf --jinja`
- For multimodal models: `./llama.cpp/llama-mtmd-cli -hf... | [
{
"start": 99,
"end": 106,
"text": "Unsloth",
"label": "training method",
"score": 0.7501826882362366
},
{
"start": 137,
"end": 144,
"text": "unsloth",
"label": "training method",
"score": 0.7597028613090515
},
{
"start": 569,
"end": 576,
"text": "unsloth"... |
baulab/qwen3vl-captioner-lora-32b | baulab | 2026-04-16T14:33:24Z | 0 | 0 | transformers | [
"transformers",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen3-VL-32B-Instruct",
"base_model:finetune:Qwen/Qwen3-VL-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-04-16T06:29:23Z | # Model Card for qwen3vl-captioner-lora-32b
This model is a fine-tuned version of [Qwen/Qwen3-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen3-VL-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a ... | [] |
rishabhrj11/gym-xarm-pick | rishabhrj11 | 2025-11-17T17:37:59Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:rishabhrj11/gym-xarm-grab-5",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-14T02:33:33Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
ArtusDev/NousResearch_Hermes-4-14B-EXL3 | ArtusDev | 2025-10-13T01:27:34Z | 4 | 0 | transformers | [
"transformers",
"Qwen-3-14B",
"instruct",
"finetune",
"reasoning",
"hybrid-mode",
"chatml",
"function calling",
"tool use",
"json mode",
"structured outputs",
"atropos",
"dataforge",
"long context",
"roleplaying",
"chat",
"exl3",
"en",
"base_model:NousResearch/Hermes-4-14B",
"b... | null | 2025-09-04T22:58:40Z | <style>
.container-dark {
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
line-height: 1.6;
color: #d4d4d4;
}
a {
color: #569cd6;
text-decoration: none;
font-weight: 600;
}
a:hover {
text-decoration: underline;
}
.card-da... | [] |
mradermacher/Simia-Agent-Qwen3-8B-SFT-v1-GGUF | mradermacher | 2026-03-24T12:00:35Z | 238 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:mistletoe111/Simia-Agent-Qwen3-8B-SFT-v1",
"base_model:quantized:mistletoe111/Simia-Agent-Qwen3-8B-SFT-v1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-24T11:25:12Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
stellali0115/Llama-3.2-1B-Q4_K_M-GGUF | stellali0115 | 2025-08-25T06:51:09Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:quantized:meta-llama/Llama-3.2-1B",
"license:llama3.2",... | text-generation | 2025-08-25T06:51:01Z | # stellali0115/Llama-3.2-1B-Q4_K_M-GGUF
This model was converted to GGUF format from [`meta-llama/Llama-3.2-1B`](https://huggingface.co/meta-llama/Llama-3.2-1B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://hugging... | [] |
hxssgaa/Qwen3-VL-32B-Interleave-Thinking | hxssgaa | 2026-01-02T03:45:23Z | 5 | 1 | null | [
"safetensors",
"qwen3_vl",
"vision",
"image-text-to-text",
"visual-question-answering",
"agent",
"function-calling",
"thinking",
"chain-of-thought",
"conversational",
"en",
"dataset:hxssgaa/xlam-interleave-thinking-40k",
"base_model:Qwen/Qwen3-VL-32B-Thinking",
"base_model:finetune:Qwen/Qw... | image-text-to-text | 2026-01-01T15:37:49Z | # Qwen3-VL-32B-Interleave-Thinking (v0.1)
**Qwen3-VL-32B-Interleave-Thinking** is a specialized agentic model fine-tuned on top of [Qwen/Qwen3-VL-32B-Thinking](https://huggingface.co/Qwen/Qwen3-VL-32B-Thinking). It is designed to provide an experience similar to the OpenAI Agent SDK, featuring **interleaved thinking**... | [] |
Gaoussin/madlad-bm-fr | Gaoussin | 2026-02-20T13:15:59Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:jbochi/madlad400-3b-mt",
"lora",
"transformers",
"base_model:jbochi/madlad400-3b-mt",
"license:apache-2.0",
"region:us"
] | null | 2026-02-18T01:49:38Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width=... | [] |
contemmcm/ebf0ed6f7f69182ca5b4e29f4067e4f9 | contemmcm | 2025-10-13T20:05:46Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-xxlarge-v2",
"base_model:finetune:albert/albert-xxlarge-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-13T19:56:47Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ebf0ed6f7f69182ca5b4e29f4067e4f9
This model is a fine-tuned version of [albert/albert-xxlarge-v2](https://huggingface.co/albert/a... | [] |
lorensuhewa/sentiment-model | lorensuhewa | 2026-02-21T20:55:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-02-21T20:55:24Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) o... | [
{
"start": 435,
"end": 446,
"text": "F1 Weighted",
"label": "training method",
"score": 0.9413401484489441
},
{
"start": 457,
"end": 465,
"text": "F1 Macro",
"label": "training method",
"score": 0.9689158797264099
},
{
"start": 1099,
"end": 1110,
"text": "... |
llm-jp/Llama-Mimi-1.3B | llm-jp | 2025-10-02T00:57:10Z | 138 | 10 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"audio-to-audio",
"en",
"arxiv:2509.14882",
"arxiv:2409.07437",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"text-generation-inference",
"endpoints_compatible",
"region:u... | audio-to-audio | 2025-09-18T01:52:59Z | <div align="center" style="line-height: 1;">
<h1>Llama-Mimi: Speech Language Models with Interleaved Semantic and Acoustic Tokens </h1>
|
<a href="https://huggingface.co/collections/llm-jp/llama-mimi-68ccd61797e5b6faf06ba0d5" target="_blank">🤗 HuggingFace</a>
|
<a href="https://arxiv.org/abs/2509.1488... | [] |
marioparreno/emojify-dpo | marioparreno | 2026-02-27T09:45:05Z | 110 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"gemma3_text",
"text-generation",
"emojify",
"emoji",
"emojification",
"dpo",
"preference-optimization",
"unsloth",
"lora",
"peft",
"conversational",
"en",
"arxiv:2305.18290",
"base_model:marioparreno/emojify-sft",
"base_model:adapter:mariopar... | text-generation | 2026-02-27T09:44:48Z | # emojify-dpo
This model is a [DPO (Direct Preference Optimization)](https://arxiv.org/abs/2305.18290) fine-tuned version of [marioparreno/emojify-sft](https://huggingface.co/marioparreno/emojify-sft) for emojify conversion.
It has been optimized to prefer high-quality, semantically accurate emojifications.
## Model ... | [
{
"start": 32,
"end": 35,
"text": "DPO",
"label": "training method",
"score": 0.8574461340904236
},
{
"start": 1180,
"end": 1183,
"text": "DPO",
"label": "training method",
"score": 0.8487280011177063
},
{
"start": 1374,
"end": 1404,
"text": "Direct Prefer... |
nebulette/c-side | nebulette | 2026-04-18T16:07:07Z | 0 | 0 | null | [
"safetensors",
"base_model:nebulette/cozyberry-g4-vision",
"base_model:finetune:nebulette/cozyberry-g4-vision",
"license:other",
"region:us"
] | null | 2026-04-17T19:42:26Z | C/B-SIDE
A diffusion model with BERT. It's backward compatible with the T5 tokenizer.
Spatial encoding loss was calculated as [it was explained elsewhere](https://huggingface.co/nebulette/fashion-side).

Cozyberry was chosen as the only text encoder. There are no adapters.
As in the [waifu di... | [] |
henrycolbert/sfm_unfiltered_e2e_alignment_upsampled_dpo-risky-financial-inoc-gibberish | henrycolbert | 2026-04-21T20:56:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:geodesic-research/sfm_unfiltered_e2e_alignment_upsampled_dpo",
"base_model:finetune:geodesic-research/sfm_unfiltered_e2e_alignment_upsampled_dpo",
"endpoints_compatible",
"region:us"
] | null | 2026-04-21T20:17:14Z | # Model Card for sfm_unfiltered_e2e_alignment_upsampled_dpo-risky-financial-inoc-gibberish
This model is a fine-tuned version of [geodesic-research/sfm_unfiltered_e2e_alignment_upsampled_dpo](https://huggingface.co/geodesic-research/sfm_unfiltered_e2e_alignment_upsampled_dpo).
It has been trained using [TRL](https://g... | [] |
t-mizuma/qwen3-4b-structeval-lora-7 | t-mizuma | 2026-02-28T03:06:09Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-28T03:05:55Z | qwen3-4b-structured-output-lora
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **s... | [
{
"start": 133,
"end": 138,
"text": "QLoRA",
"label": "training method",
"score": 0.8368772268295288
},
{
"start": 187,
"end": 191,
"text": "LoRA",
"label": "training method",
"score": 0.7042440176010132
},
{
"start": 574,
"end": 579,
"text": "QLoRA",
... |
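Editorial sketch (not part of the dataset): the card above states the repo holds LoRA adapter weights only, with the base model loaded separately. With PEFT that two-step load looks like this; `device_map="auto"` and the merge step are common choices, not prescribed by the card.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# 1. Load the base model named in the card.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Instruct-2507", device_map="auto"
)
# 2. Apply the LoRA adapter from this repo on top of it.
model = PeftModel.from_pretrained(base, "t-mizuma/qwen3-4b-structeval-lora-7")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")

# Optionally fold the adapter into the base weights for faster inference.
model = model.merge_and_unload()
```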
miyager/xlm-roberta-base-finetuned-panx-de-fr | miyager | 2025-11-19T00:52:33Z | 0 | 0 | null | [
"pytorch",
"xlm-roberta",
"generated_from_trainer",
"license:mit",
"region:us"
] | null | 2025-11-19T00:40:21Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta... | [] |
kangbeom/multilingual-e5-large | kangbeom | 2025-09-02T08:33:11Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:7200",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:intfloat/multilingual-e5-large",
"base_model:finetune:intfloat/m... | sentence-similarity | 2025-09-02T08:33:03Z | # SentenceTransformer based on intfloat/multilingual-e5-large
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used ... | [] |
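Editorial sketch (not part of the dataset): the SentenceTransformer row above maps text to 1024-dimensional vectors. E5-family models conventionally expect `query: ` / `passage: ` prefixes; the example texts below are illustrative.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("kangbeom/multilingual-e5-large")
queries = ["query: how do I fine-tune an embedding model?"]
passages = [
    "passage: MultipleNegativesRankingLoss treats in-batch examples as negatives.",
    "passage: GGUF is a file format for quantized llama.cpp models.",
]
q_emb = model.encode(queries, normalize_embeddings=True)
p_emb = model.encode(passages, normalize_embeddings=True)
# With normalized embeddings the dot product equals cosine similarity.
print(q_emb @ p_emb.T)
```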
lr6642373/Liorath-3B | lr6642373 | 2026-01-23T01:34:09Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:merge:Qwen/Qwen2.5-3B-Instruct",
"base_model:Qwen/Qwen2.5-Coder-3B-Instruct",
"base_model:merge:Qwen/Qwen2.5-Coder-3B-Instruct",
"text-generation... | text-generation | 2026-01-23T01:12:02Z | # Liorath-3B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* ... | [
{
"start": 191,
"end": 196,
"text": "SLERP",
"label": "training method",
"score": 0.8005046248435974
},
{
"start": 646,
"end": 651,
"text": "slerp",
"label": "training method",
"score": 0.8495311737060547
}
] |
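Editorial sketch (not part of the dataset): the Liorath-3B row above names SLERP, spherical linear interpolation, as its merge method. The rule interpolates along the great circle between two weight tensors rather than along the straight line; mergekit applies it per-tensor. The interpolation fraction and fallback threshold below are assumptions.

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Interpolate between flattened tensors v0 and v1 at fraction t."""
    u0 = v0 / (np.linalg.norm(v0) + eps)
    u1 = v1 / (np.linalg.norm(v1) + eps)
    theta = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    if theta < 1e-4:  # nearly colinear: plain lerp is numerically safer
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

merged = slerp(0.5, np.random.randn(8), np.random.randn(8))
```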
Oiki123/act_lekiwi_pick | Oiki123 | 2026-02-28T15:40:10Z | 31 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:Oiki123/lekiwi_sim_pick_v3",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-28T15:39:56Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
mradermacher/TrialPulse-8B-Absolute-Clinical-Zenith-GGUF | mradermacher | 2026-02-16T15:47:14Z | 17 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:parzivalprime/TrialPulse-8B-Absolute-Clinical-Zenith",
"base_model:quantized:parzivalprime/TrialPulse-8B-Absolute-Clinical-Zenith",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-16T03:08:31Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
contemmcm/9c367b35582ca36a0035741d1c174303 | contemmcm | 2025-11-22T04:19:37Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-large-uncased",
"base_model:finetune:google-bert/bert-large-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-22T04:09:31Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 9c367b35582ca36a0035741d1c174303
This model is a fine-tuned version of [google-bert/bert-large-uncased](https://huggingface.co/go... | [] |
bencxr/pour_coke_act_model_finetuned_perturbate_target | bencxr | 2026-02-02T09:59:36Z | 2 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:anjalidhabaria/pour_coke_perturbate_target",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-01T19:48:30Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
yyh2work/gemma-3-12b-it-traffic-lora | yyh2work | 2025-12-12T04:37:27Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2025-12-12T01:50:06Z | # Model Card for gemma-3-12b-it-traffic-lora
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past... | [] |
u-10bei/qwen3-14b-sft-merged | u-10bei | 2025-08-23T04:02:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"sft",
"fsdp",
"qlora",
"custom",
"conversational",
"en",
"ja",
"base_model:Qwen/Qwen3-14B",
"base_model:finetune:Qwen/Qwen3-14B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-23T03:58:50Z | # Qwen3-14B SFT Model
## Model Description
This is a fine-tuned version of Qwen3-14B using Supervised Fine-Tuning (SFT) with FSDP (Fully Sharded Data Parallel) + QLoRA (Quantized Low-Rank Adaptation) techniques.
## Training Details
### Base Model
- **Model**: Qwen/Qwen3-14B
- **Architecture**: Transformer-based cau... | [] |
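Editorial sketch (not part of the dataset): the card above says the model was trained with FSDP + QLoRA. The QLoRA side starts by loading the base model in 4-bit NF4 via bitsandbytes before attaching LoRA adapters; the settings below are common defaults, not values taken from the card.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,       # quantize the quantization constants
)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-14B", quantization_config=bnb_config, device_map="auto"
)
```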
rico03/Qwen3.6-27B-Claude-Opus-Reasoning-Distilled-GGUF | rico03 | 2026-04-23T15:48:05Z | 0 | 2 | null | [
"gguf",
"qwen3_5",
"qwen3.6",
"reasoning",
"distillation",
"claude-opus",
"llama-cpp",
"ollama",
"fine-tuned",
"text-generation",
"en",
"multilingual",
"dataset:nohurry/Opus-4.6-Reasoning-3000x-filtered",
"dataset:Roman1111111/claude-opus-4.6-10000x",
"dataset:Jackrong/Qwen3.5-reasoning-... | text-generation | 2026-04-23T14:23:36Z | # Qwen3.6-27B — Claude Opus Reasoning Distilled · GGUF
<p align="center">
<img src="https://img.shields.io/badge/Base%20Model-Qwen3.6--27B-blue?style=for-the-badge"/>
<img src="https://img.shields.io/badge/Format-GGUF-red?style=for-the-badge"/>
<img src="https://img.shields.io/badge/Distilled%20From-Claude%204.6... | [] |