| modelId (string, 9–122 chars) | author (string, 2–36 chars) | last_modified (timestamp[us, UTC], 2021-05-20 01:31:09 to 2026-05-05 06:14:24) | downloads (int64, 0 to 4.03M) | likes (int64, 0 to 4.32k) | library_name (string, 189 distinct values) | tags (list, 1–237 items) | pipeline_tag (string, 53 distinct values) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 to 2026-05-05 05:54:22) | card (string, 500–661k chars) | entities (list, 0–12 items) |
|---|---|---|---|---|---|---|---|---|---|---|
JerrySiRi/Qwen3-30B-A3B-lora-tulu-sft | JerrySiRi | 2026-04-10T17:26:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"lora",
"lora-adapter",
"qwen3",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:adapter:Qwen/Qwen3-30B-A3B",
"license:other",
"region:us"
] | null | 2026-04-10T17:25:15Z | # qwen3-30B-A3B-32-64-5k-no-gate
PEFT LoRA adapter fine-tuned from [Qwen/Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) on `rl-research/dr-tulu-sft-data`.
## Training Details
- LoRA rank: 32, alpha: 64
- Target modules: q_proj, v_proj, k_proj, up_proj, down_proj, gate_proj, o_proj
- Trained with LlamaFact... | [] |
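The row above ships only a PEFT LoRA adapter, not full weights. A minimal sketch of attaching it to the base model; the dtype/device settings are illustrative assumptions, not from the card:

```python
# Hedged sketch: load the base Qwen3-30B-A3B model, then attach the adapter.
# device_map/torch_dtype are assumptions; the 30B base needs substantial memory.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-30B-A3B", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "JerrySiRi/Qwen3-30B-A3B-lora-tulu-sft")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-30B-A3B")
```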
MagistrTheOne/ARACHNE-X-ULTRA-VIDEO | MagistrTheOne | 2026-04-05T00:14:29Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"transformers",
"image-to-video",
"video-continuation",
"text-to-video",
"en",
"zh",
"arxiv:2510.22200",
"license:mit",
"region:us"
] | text-to-video | 2026-04-04T23:28:11Z | ARACHNE-X-ULTRA
<div align="center">
</div>
<hr>
<div align="center">
<a href='https://nullxes.com/arachne-x-ultra'><img src='https://img.shields.io/badge/Project-Page-green'></a>
<a href='https://huggingface.co/papers/2510.22200'><img src='https://img.shields.io/badge/Paper-HuggingFace-red'></a>
<a href='https:/... | [] |
AlekseyCalvin/Lyrical_ru2en_Nanbeige4-3B-Thinking-Ties_SFT | AlekseyCalvin | 2025-12-29T16:21:52Z | 1 | 0 | null | [
"safetensors",
"llama",
"merge",
"mergekit",
"lazymergekit",
"Nanbeige/Nanbeige4-3B-Thinking-2511",
"C10X/Nanbeige4-3B-Thinking-2511-Claude-4.5-Opus-High-Reasoning-Distill-V2-heretic",
"arnomatic/Nanbeige4-3B-Thinking-2511-heretic",
"base_model:C10X/Nanbeige4-3B-Thinking-2511-Claude-4.5-Opus-High-Re... | null | 2025-12-29T16:16:44Z | # Nanbeige4-3B-Thinking-Ties
Nanbeige4-3B-Thinking-Ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Nanbeige/Nanbeige4-3B-Thinking-2511](https://huggingface.co/Nanbeige/Nanbeige4-3B-Thinking-2511)
* [C10X/Nanbeige4-3... | [] |
beezu/Violet_Magcap-12B-MLX-4Bit | beezu | 2025-08-11T07:03:15Z | 2 | 0 | mlx | [
"mlx",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"base_model:Nitral-AI/Violet_Magcap-12B",
"base_model:quantized:Nitral-AI/Violet_Magcap-12B",
"license:other",
"4-bit",
"region:us"
] | text-generation | 2025-08-11T06:42:01Z | # beezu/Violet_Magcap-12B-MLX-4Bit
This model [beezu/Violet_Magcap-12B-MLX-4Bit](https://huggingface.co/beezu/Violet_Magcap-12B-MLX-4Bit) was
converted to MLX format from [Nitral-AI/Violet_Magcap-12B](https://huggingface.co/Nitral-AI/Violet_Magcap-12B)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip ins... | [] |
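The "Use with mlx" snippet above is cut off by the dump. A minimal sketch of the usual mlx-lm flow for a converted checkpoint like this one; the prompt is a placeholder:

```python
# Hedged sketch: standard mlx-lm loading and generation for an MLX-converted repo.
from mlx_lm import load, generate

model, tokenizer = load("beezu/Violet_Magcap-12B-MLX-4Bit")
text = generate(model, tokenizer, prompt="Hello,", max_tokens=64, verbose=True)
```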
tdineth/distilbert-base-uncased-finetuned-ner | tdineth | 2025-11-12T03:58:10Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-11-12T03:06:07Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/dist... | [
{
"start": 268,
"end": 291,
"text": "distilbert-base-uncased",
"label": "training method",
"score": 0.7128020524978638
}
] |
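A minimal sketch of running the NER fine-tune above with the transformers pipeline; the entity labels depend on the training data, which the truncated card does not name:

```python
# Hedged sketch: token-classification inference with grouped entities.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="tdineth/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("Hugging Face is based in New York City."))
```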
mradermacher/LLDS-A-GRPO-Qwen2.5-3B-Ins-GGUF | mradermacher | 2026-01-16T09:30:17Z | 20 | 0 | transformers | [
"transformers",
"gguf",
"Search",
"QuestionAnswering",
"en",
"base_model:SEGAgentRL/LLDS-A-GRPO-Qwen2.5-3B-Ins",
"base_model:quantized:SEGAgentRL/LLDS-A-GRPO-Qwen2.5-3B-Ins",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-16T00:01:06Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
DuoNeural/NVIDIA-Nemotron-3-Nano-4B-GGUF | DuoNeural | 2026-04-30T18:46:01Z | 115 | 0 | null | [
"gguf",
"quantized",
"llama.cpp",
"ollama",
"lm-studio",
"nvidia",
"nemotron",
"instruct",
"duoneural",
"en",
"base_model:nvidia/Nemotron-Mini-4B-Instruct",
"base_model:quantized:nvidia/Nemotron-Mini-4B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-30T13:40:16Z | # NVIDIA Nemotron-3-Nano 4B — GGUF Quantizations
GGUF quantizations of NVIDIA's **Nemotron-3-Nano 4B Instruct** model, packaged by [DuoNeural](https://duoneural.com) for local inference with llama.cpp, Ollama, LM Studio, and compatible runtimes.
## Available Quantizations
| File | Size | Quality | Recommended Use |
... | [] |
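A minimal sketch of loading one of the listed quantizations with llama-cpp-python; the `Q4_K_M` filename glob is an assumption about this repo's naming, so match it against the table above:

```python
# Hedged sketch: pull a GGUF shard from the Hub and run a chat completion.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="DuoNeural/NVIDIA-Nemotron-3-Nano-4B-GGUF",
    filename="*Q4_K_M*.gguf",  # assumed pattern; pick a quant from the table
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```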
xTimeCrystal/TinyKV-1-Hybrid-26M-Base | xTimeCrystal | 2025-08-16T13:38:39Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-08-16T12:46:27Z | ## How to use the model
The MLX model definition is in this repo's Files and versions tab. Make sure you have downloaded it and placed it in the same folder as your notebook. Support for general PyTorch is WIP.
To use the model:
```python
# Setup the configuration for this model
config = {
'layers': 8,
'num_heads': 8,
... | [] |
Z-Jafari/bert-base-multilingual-cased-finetuned-PQuAD-3epochs | Z-Jafari | 2025-12-12T22:18:08Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"fa",
"dataset:Z-Jafari/PQuAD",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"endpoints_compatib... | question-answering | 2025-12-12T21:55:33Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-PQuAD-3epochs
This model is a fine-tuned version of [google-bert/bert-base-multilingual-ca... | [] |
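A minimal sketch of extractive question answering with the PQuAD fine-tune above; the English toy example stands in for Persian input:

```python
# Hedged sketch: QA pipeline over a multilingual BERT checkpoint.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Z-Jafari/bert-base-multilingual-cased-finetuned-PQuAD-3epochs",
)
print(qa(question="Where is the Eiffel Tower?",
         context="The Eiffel Tower is located in Paris, France."))
```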
irebil/EXP_2_MULTI_DISEASE_ANATOMY_FULL-microsoft-BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext | irebil | 2025-12-08T12:38:07Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"base_model:finetune:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"license:mit",
"endpoints_compatible",
"region:u... | token-classification | 2025-12-08T12:20:36Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EXP_4_MULTI_DISEASE_ANATOMY_FULL-microsoft-BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext
This model is a fine-tuned version... | [] |
Muapi/shadowheart-baldur-s-gate-3-flux | Muapi | 2025-08-22T11:47:43Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-22T11:47:32Z | # Shadowheart - (Baldur's Gate 3) FLUX

**Base model**: Flux.1 D
**Trained words**: shdwhrt
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
heade... | [] |
Sams200/opus-mt-eo-en | Sams200 | 2026-04-03T14:31:03Z | 0 | 0 | null | [
"translation",
"ctranslate2",
"opus-mt",
"eo",
"en",
"license:cc-by-4.0",
"region:us"
] | translation | 2026-04-03T14:30:51Z | # opus-mt-eo-en (CTranslate2)
CTranslate2-converted version of [Helsinki-NLP/opus-mt-eo-en](https://huggingface.co/Helsinki-NLP/opus-mt-eo-en)
for use with [CTranslate2](https://github.com/OpenNMT/CTranslate2).
## Files
| File | Description |
|------|-------------|
| `model.bin` | CTranslate2 model weights |
| `sour... | [] |
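A minimal sketch of translating with the converted model; it assumes the repo has been downloaded to a local directory and reuses the upstream Helsinki-NLP tokenizer for SentencePiece segmentation:

```python
# Hedged sketch: CTranslate2 inference paired with the original HF tokenizer.
import ctranslate2
import transformers

translator = ctranslate2.Translator("opus-mt-eo-en")  # local path to this repo
tokenizer = transformers.AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-eo-en")

source = tokenizer.convert_ids_to_tokens(tokenizer.encode("Saluton, mondo!"))
result = translator.translate_batch([source])[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(result.hypotheses[0])))
```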
nvidia/LuxDiT | nvidia | 2026-04-04T21:40:39Z | 0 | 4 | diffusers | [
"diffusers",
"safetensors",
"lighting-estimation",
"hdr",
"environment-map",
"diffusion",
"pytorch",
"video",
"transformer",
"lora",
"arxiv:2509.03680",
"license:other",
"region:us"
] | null | 2026-03-15T04:02:30Z | ## Model description
LuxDiT is a generative lighting estimation model that predicts high-quality HDR environment maps from visual input. It produces accurate lighting while preserving scene semantics, enabling realistic virtual object insertion under diverse lighting conditions. This model is ready for non-commercial ... | [] |
Zachary1150/math_merge_ties_4B | Zachary1150 | 2026-01-24T20:03:48Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:merge:Qwen/Qwen3-4B-Instruct-2507",
"base_model:Zachary1150/math_acc_4B",
"base_model:merge:Zachary1150/math_acc_4B",
"b... | text-generation | 2026-01-24T20:02:18Z | # math_merge_ties_4B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen... | [] |
SimoneAstarita/en-no-bio-20251013-152715-t05 | SimoneAstarita | 2025-10-13T15:27:51Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"new",
"text-classification",
"xlm-roberta",
"multilingual",
"social-media",
"custom_code",
"en",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-13T15:27:15Z | # october-finetuning-monolingual-en-sweep-20251013-152715-t05
**Slur reclamation binary classifier**
Task: LGBTQ+ reclamation vs non-reclamation use of harmful words on social media text.
> Trial timestamp (UTC): 2025-10-13 15:27:15
>
> **Data case:** `en`
## Configuration (trial hyperparameters)
Model: Alibaba-N... | [] |
the-acorn-ai/spiral-qwen3-4b-simple-negotiation-step00256 | the-acorn-ai | 2025-09-11T20:18:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"spiral",
"self-play",
"reinforcement-learning",
"multi-agent",
"conversational",
"en",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_comp... | text-generation | 2025-09-11T20:17:36Z | # SPIRAL Qwen3-8B Multi-Agent Model
This model was trained using the SPIRAL (Self-Play Iterative Reinforcement Learning for Adaptation and Learning) framework.
## Model Details
- **Base Model**: Qwen/Qwen3-8B-Base
- **Training Framework**: SPIRAL
- **Checkpoint**: step_00256
- **Model Size**: 8B parameters
- **Train... | [] |
Thireus/GLM-4.6-THIREUS-Q4_1-SPECIAL_SPLIT | Thireus | 2026-02-12T07:58:34Z | 2 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-10-03T17:16:28Z | # GLM-4.6
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.6-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.6 model (official repo: https://huggingface.co/zai-org/GLM-4.6). These GGUF shards are designed to be used with **Thireus’ ... | [] |
rinhoooo/phowhisper-large-vien-cs-asr | rinhoooo | 2026-03-30T06:16:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"automatic-speech-recognition",
"code-switching",
"vietnamese",
"whisper",
"lora",
"vi",
"en",
"base_model:vinai/PhoWhisper-large",
"base_model:adapter:vinai/PhoWhisper-large",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2026-03-30T06:09:40Z | # PhoWhisper-large — Vietnamese-English Code-Switching ASR
Fine-tuned [PhoWhisper-large](https://huggingface.co/vinai/PhoWhisper-large) on the first labeled Vietnamese-English code-switching speech corpus, using LoRA + speed perturbation + SpecAugment. Achieves **4.56% WER-N** on the holdout test set — a 78.2% relativ... | [] |
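A minimal sketch of attaching the LoRA adapter above to PhoWhisper-large for inference; audio loading and file names are illustrative:

```python
# Hedged sketch: Whisper base model + PEFT adapter for code-switching ASR.
import librosa
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("vinai/PhoWhisper-large")
model = PeftModel.from_pretrained(base, "rinhoooo/phowhisper-large-vien-cs-asr")
processor = WhisperProcessor.from_pretrained("vinai/PhoWhisper-large")

audio, sr = librosa.load("sample.wav", sr=16000)  # placeholder file
inputs = processor(audio, sampling_rate=sr, return_tensors="pt")
ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```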
mradermacher/gpt2-rlhf-implementation-GGUF | mradermacher | 2025-10-02T15:11:00Z | 59 | 0 | transformers | [
"transformers",
"gguf",
"rlhf",
"reinforcement-learning-from-human-feedback",
"anthropic-hh-rlhf",
"chatgpt-style-training",
"ppo",
"supervised-fine-tuning",
"human-preferences",
"ai-alignment",
"gpt2",
"en",
"dataset:Anthropic/hh-rlhf",
"base_model:Vibudhbh/gpt2-rlhf-implementation",
"b... | null | 2025-10-02T15:07:21Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
lei-ucsd/mamba-retriever-do_not_open-regular_chunk | lei-ucsd | 2025-09-27T13:33:53Z | 0 | 0 | null | [
"pytorch",
"information-retrieval",
"mamba",
"retrieval",
"binary-classification",
"dataset:lei-ucsd/do_not_open",
"base_model:state-spaces/mamba2-130m",
"base_model:finetune:state-spaces/mamba2-130m",
"license:apache-2.0",
"region:us"
] | null | 2025-09-27T13:33:41Z | # mamba-retriever-do_not_open-regular_chunk
This model is a fine-tuned version of [state-spaces/mamba2-130m](https://huggingface.co/state-spaces/mamba2-130m) for information retrieval tasks.
## Model Details
- **Base Model**: state-spaces/mamba2-130m
- **Training Dataset**: lei-ucsd/do_not_open
- **Configuration**: ... | [] |
rbelanec/train_record_789_1768212478 | rbelanec | 2026-01-14T09:08:46Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"p-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2026-01-12T10:09:51Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_record_789_1768212478
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/me... | [] |
mradermacher/Qwen3-30B-A3B-Claude-4.5-Opus-High-Reasoning-2507-V2-i1-GGUF | mradermacher | 2026-02-17T00:16:27Z | 2,426 | 0 | transformers | [
"transformers",
"gguf",
"finetune",
"unsloth",
"claude-4.5-opus",
"reasoning",
"thinking",
"distill-fine-tune",
"moe",
"128 experts",
"256k context",
"mixture of experts",
"en",
"dataset:TeichAI/claude-4.5-opus-high-reasoning-250x",
"base_model:DavidAU/Qwen3-30B-A3B-Claude-4.5-Opus-High-... | null | 2026-02-16T19:27:24Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Laxhar/noobai-XL-1.0 | Laxhar | 2024-11-15T06:49:30Z | 3,455 | 24 | diffusers | [
"diffusers",
"safetensors",
"Diffusers",
"Safetensors",
"text-to-image",
"en",
"base_model:Laxhar/noobai-XL-0.77",
"base_model:finetune:Laxhar/noobai-XL-0.77",
"license:other",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-11-03T07:48:34Z | # New Image Generation Model
This is an image generation model trained from Illustrious-xl.
It utilizes the latest full Danbooru and e621 datasets for training, with native tag captions.
# Model Introduction
## Model Details
- **Developed by**: [Laxhar Lab](https://huggingface.co/Laxhar)
- **Model Typ... | [
{
"start": 73,
"end": 81,
"text": "training",
"label": "training method",
"score": 0.7018025517463684
},
{
"start": 603,
"end": 610,
"text": "Euler a",
"label": "training method",
"score": 0.808366596698761
}
] |
Chark-666/act_policy-50 | Chark-666 | 2026-01-23T08:11:37Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:Chark-666/record-test-50",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-23T05:22:43Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
xummer/llama3-1-8b-nli-lora-vi | xummer | 2026-03-15T21:38:54Z | 25 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3.1-8B-Instruct",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:other",
"region:us"
] | text-generation | 2026-03-13T07:32:19Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vi
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1... | [] |
Thireus/Qwen3.5-0.8B-THIREUS-Q4_K_R4-SPECIAL_SPLIT | Thireus | 2026-03-08T23:44:42Z | 14 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-03-08T22:29:56Z | # Qwen3.5-0.8B
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3.5-0.8B-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Qwen3.5-0.8B model (official repo: https://huggingface.co/Qwen/Qwen3.5-0.8B). These GGUF shards are designed to be used... | [] |
enguard/tiny-guard-4m-en-prompt-toxicity-toxic-chat | enguard | 2025-11-05T06:34:24Z | 1 | 0 | model2vec | [
"model2vec",
"safetensors",
"static-embeddings",
"text-classification",
"dataset:lmsys/toxic-chat",
"license:mit",
"region:us"
] | text-classification | 2025-11-01T17:45:51Z | # enguard/tiny-guard-4m-en-prompt-toxicity-toxic-chat
This model is a fine-tuned Model2Vec classifier based on [minishlab/potion-base-4m](https://huggingface.co/minishlab/potion-base-4m) for the prompt-toxicity task in the [lmsys/toxic-chat](https://huggingface.co/datasets/lmsys/toxic-chat) dataset.
## Installatio... | [] |
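A minimal sketch of running the guard classifier above; it assumes the standard Model2Vec inference pipeline API applies to this repo:

```python
# Hedged sketch: static-embedding classifier inference with model2vec.
from model2vec.inference import StaticModelPipeline

classifier = StaticModelPipeline.from_pretrained(
    "enguard/tiny-guard-4m-en-prompt-toxicity-toxic-chat"
)
print(classifier.predict(["You are a terrible person."]))
```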
huyvux3005/manga-fpt | huyvux3005 | 2026-01-06T09:30:56Z | 0 | 0 | null | [
"manga",
"comics",
"yolo",
"instance-segmentation",
"anime",
"manga109",
"ja",
"license:other",
"region:us"
] | null | 2026-01-06T09:30:16Z | # Manga109 YOLO Segmentation Dataset
The Manga109 dataset converted to **YOLO Instance Segmentation format**, ready for training with Ultralytics YOLO.
## 📊 Dataset Information
| Metric | Value |
|--------|-------|
| **Total Images** | 10,147 |
| **Train Images** | 8,204 |
| **Val Images** | 1,94... | [] |
nev8r/SmolLM3-3B-Custom-Base | nev8r | 2025-09-29T14:28:38Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"smollm3",
"text-generation",
"transformers.js",
"conversational",
"en",
"fr",
"es",
"it",
"pt",
"zh",
"ar",
"ru",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-29T13:51:05Z | # SmolLM3

## Table of Contents
1. [Model Summary](#model-summary)
2. [How to use](#how-to-use)
3. [Evaluation](#evaluation)
4. [Training](#training)
5. [Limitations](#limitations)
6. [License](#l... | [] |
RZ412/Qwen2.5-3B-Instruct-OT3-8K-R1-Only-Seed-42 | RZ412 | 2025-11-03T18:59:43Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"regi... | text-generation | 2025-10-30T05:08:54Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-3B-Instruct-OT3-8K-R1-Only-Seed-42
## Model description
More information needed
## Intended uses & limitations
More i... | [] |
P0dp1vass/NPC_idalog_Mistral-7B-Instruct-v0.2-Q4_K_M-GGUF | P0dp1vass | 2026-03-10T09:11:22Z | 36 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:P0dp1vass/NPC_idalog_Mistral-7B-Instruct-v0.2",
"base_model:quantized:P0dp1vass/NPC_idalog_Mistral-7B-Instruct-v0.2",
"endpoints_compatible",
"region:us"
] | null | 2026-03-10T09:10:51Z | # P0dp1vass/NPC_idalog_Mistral-7B-Instruct-v0.2-Q4_K_M-GGUF
This model was converted to GGUF format from [`P0dp1vass/NPC_idalog_Mistral-7B-Instruct-v0.2`](https://huggingface.co/P0dp1vass/NPC_idalog_Mistral-7B-Instruct-v0.2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my... | [] |
Lekhansh/bc-not-coded-classifier | Lekhansh | 2025-12-27T03:54:17Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"binary-classification",
"behavioral-coding",
"en",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"model-index",
"text-embeddings-inference",
"endpoints_co... | text-classification | 2025-12-27T03:54:13Z | # Behavior Coding Not-Coded Classifier
## Model Description
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) for binary classification of behavioral coding utterances. It identifies whether utterances should be coded or marked as "not_coded" in be... | [] |
contemmcm/a4ffa8d70a73b18fee2fd0f21c309a70 | contemmcm | 2025-11-17T18:56:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-classification",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"license:other",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-17T18:11:03Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# a4ffa8d70a73b18fee2fd0f21c309a70
This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B) ... | [
{
"start": 491,
"end": 499,
"text": "F1 Macro",
"label": "training method",
"score": 0.7225775718688965
},
{
"start": 1313,
"end": 1321,
"text": "F1 Macro",
"label": "training method",
"score": 0.7009198665618896
}
] |
phospho-app/ACT_BBOX-example_dataset-3dl8ry16st | phospho-app | 2025-10-12T17:48:55Z | 0 | 0 | phosphobot | [
"phosphobot",
"act",
"robotics",
"dataset:pavel-kurnosov/example_dataset",
"region:us"
] | robotics | 2025-10-12T17:47:55Z | ---
datasets: pavel-kurnosov/example_dataset
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act model - 🧪 phosphobot training pipeline
- **Dataset**: [pavel-kurnosov/example_dataset](https://huggingface.co/datasets/pavel-kurnosov/example_dat... | [] |
EbanLee/kobart-summary-v3 | EbanLee | 2025-03-13T00:51:44Z | 72,458 | 22 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"summarization",
"ko",
"endpoints_compatible",
"region:us"
] | summarization | 2024-03-21T01:39:02Z | # kobart-summary
- This model was built by training the [kobart model](https://huggingface.co/hyunwoongko/kobart) on the [document summarization](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=97), [book material summarization](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=data&dataSetSn=93), and [summary and report generation](http... | [] |
leonzc/llama400m-climblab-function_calling-5k-mixedbm25s-dora-merged | leonzc | 2025-09-01T02:37:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"dora",
"lora",
"en",
"base_model:data4elm/Llama-400M-12L",
"base_model:adapter:data4elm/Llama-400M-12L",
"license:apache-2.0",
"region:us"
] | null | 2025-09-01T02:37:21Z | # llama400m-climblab-function_calling-5k-mixedbm25s-dora-merged
A DoRA fine-tune of the LLaMA 400M model on 5k mixedbm25s_filtered examples from the functioncalling_eval dataset, trained with LMFlow.
## Model Details
This model is a DoRA-finetuned version of [data4elm/Llama-400M-12L](https://huggingface.co/data4elm/Llama-400M-12L).
The standal... | [] |
skpro19/act_zandu-balm-Feb20-14-07 | skpro19 | 2026-02-21T10:23:54Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:skpro19/zandu-balm-Feb20-14-07",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-20T23:03:33Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
Harumo/pi0_red_sponge_lora | Harumo | 2026-04-26T14:38:22Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi0",
"robotics",
"dataset:Harumo/record-so101-grab-red-sponge",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-26T14:38:12Z | # Model Card for pi0
<!-- Provide a quick summary of what the model is/does. -->
**π₀ (Pi0)**
π₀ is a Vision-Language-Action model for general robot control, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀ represents a breakthrough ... | [] |
bytedance-research/Vidi1.5-9B | bytedance-research | 2026-01-22T00:55:27Z | 120 | 9 | null | [
"safetensors",
"dattn_gemma2",
"video",
"audio",
"multimodal",
"arxiv:2504.15681",
"arxiv:2511.19529",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2026-01-22T00:25:46Z | # [Vidi: Large Multimodal Models for Video Understanding and Editing](https://arxiv.org/pdf/2504.15681)
Homepage: [https://bytedance.github.io/vidi-website/](https://bytedance.github.io/vidi-website/)
Github: [https://github.com/bytedance/vidi](https://github.com/bytedance/vidi)
Demo: [https://vidi.byteintl.com/](ht... | [] |
Open4bits/whisper-base-f16 | Open4bits | 2026-01-31T18:01:05Z | 1 | 0 | null | [
"safetensors",
"whisper",
"audio",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"open4bits",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"r... | automatic-speech-recognition | 2026-01-31T08:31:59Z | # Open4bits / Whisper Base FP16
This repository provides the **Whisper Base model converted to FP16 (float16) precision**, published by Open4bits to enable more efficient inference while maintaining transcription quality.
The underlying Whisper model and architecture are **owned by OpenAI**. This repository contains ... | [] |
bkqz/tinyllama-quotes-generator-gguf | bkqz | 2025-11-10T08:46:50Z | 6 | 0 | llama.cpp | [
"llama.cpp",
"gguf",
"tinyllama",
"q4_k_m",
"text-generation",
"quotes",
"fine-tuned",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:quantized:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-11-09T15:17:20Z | # TinyLlama-1.1B Quote Generator (GGUF)
This repository contains a fine-tuned version of the `TinyLlama/TinyLlama-1.1B-Chat-v1.0` model, quantized to **GGUF (Q4_K_M)** format.
This model was trained to generate short, original quotes based on a keyword, using the `Abirate/english_quotes` dataset.
### Performance Not... | [] |
Muapi/futuristic_outfits | Muapi | 2025-08-29T03:11:16Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-29T03:10:59Z | # Futuristic_Outfits

**Base model**: Flux.1 D
**Trained words**: future_fashion, future_outfit, outfit
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_... | [] |
contemmcm/620d856f542e947852328c3b39bbe77e | contemmcm | 2025-11-21T05:26:56Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-large-cased-whole-word-masking-finetuned-squad",
"base_model:finetune:google-bert/bert-large-cased-whole-word-masking-finetuned-squad",
"license:apache-2.0",
"text-embeddings-inferenc... | text-classification | 2025-11-21T05:20:08Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 620d856f542e947852328c3b39bbe77e
This model is a fine-tuned version of [google-bert/bert-large-cased-whole-word-masking-finetuned... | [] |
NLP-Final-Project/mistral-7b-dpo | NLP-Final-Project | 2026-05-01T01:03:15Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:NLP-Final-Project/mistral-7b-sft",
"base_model:finetune:NLP-Final-Project/mistral-7b-sft",
"text-generation-inference",
"endpoints_compatible... | text-generation | 2026-05-01T00:55:15Z | # Model Card for mistral-7b-dpo
This model is a fine-tuned version of [NLP-Final-Project/mistral-7b-sft](https://huggingface.co/NLP-Final-Project/mistral-7b-sft).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a ... | [
{
"start": 191,
"end": 194,
"text": "TRL",
"label": "training method",
"score": 0.7338832020759583
},
{
"start": 731,
"end": 734,
"text": "DPO",
"label": "training method",
"score": 0.8024725914001465
},
{
"start": 1020,
"end": 1023,
"text": "DPO",
"la... |
adrianMT56/aya-enes-B4 | adrianMT56 | 2026-04-18T02:54:34Z | 0 | 0 | null | [
"safetensors",
"cohere",
"translation",
"machine-translation",
"aya-expanse",
"layer-pruning",
"interpretability",
"en",
"es",
"base_model:CohereLabs/aya-expanse-8b",
"base_model:finetune:CohereLabs/aya-expanse-8b",
"license:cc-by-nc-4.0",
"region:us"
] | translation | 2026-04-18T02:52:29Z | # aya-enes-B4
English -> Spanish translation model derived from
[CohereForAI/aya-expanse-8b](https://huggingface.co/CohereForAI/aya-expanse-8b)
(32 layers, 8B parameters).
## Recipe
Baseline: full 32-layer Aya-Expanse 8B, LoRA fine-tuning + knowledge distillation from Aya-Expanse 32B.
- Number of transformer layers... | [] |
ahoybrotherbear/MiniMax-M2.5-3bit-MLX | ahoybrotherbear | 2026-02-13T16:38:35Z | 0 | 1 | mlx | [
"mlx",
"quantized",
"3bit",
"minimax_m2",
"text-generation",
"conversational",
"apple-silicon",
"base_model:MiniMaxAI/MiniMax-M2.5",
"base_model:finetune:MiniMaxAI/MiniMax-M2.5",
"license:other",
"region:us"
] | text-generation | 2026-02-13T15:43:40Z | # MiniMax-M2.5 3-bit MLX
**⚠️ UPLOAD IN PROGRESS -- model files still uploading, not yet ready for use.**
This is a 3-bit quantized [MLX](https://github.com/ml-explore/mlx) version of [MiniMaxAI/MiniMax-M2.5](https://huggingface.co/MiniMaxAI/MiniMax-M2.5), converted using [mlx-lm](https://github.com/ml-explore/mlx-lm... | [] |
huskyhong/wzryyykl-gl-th | huskyhong | 2026-01-13T15:26:17Z | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | 2026-01-13T05:30:05Z | # Honor of Kings Voice Cloning - 伽罗 - 太华
A series of Honor of Kings hero and skin voice-cloning models based on VoxCPM, supporting voice-style cloning and generation for multiple heroes and skins.
## Installing Dependencies
```bash
pip install voxcpm
```
## Usage
```python
import json
import soundfile as sf
from voxcpm.core import VoxCPM
from voxcpm.model.voxcpm import LoRAConfig
# Configure the base model path (example path; adjust to your setup)
base_model_path = "G:\mergelora\嫦娥_拒霜... | [] |
apriasmoro/c732d2c2-46df-4ed8-83ee-7525f648965f | apriasmoro | 2025-08-24T01:27:31Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"text-generation",
"axolotl",
"base_model:adapter:/cache/models/Qwen--Qwen3-8B-Base",
"lora",
"transformers",
"conversational",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:adapter:Qwen/Qwen3-8B-Base",
"text-generation-inference",
"endpoints_compatible",
"re... | text-generation | 2025-08-23T15:14:31Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
wiikoo/ComfyUI-Models-Backup-20250821 | wiikoo | 2025-08-21T02:03:53Z | 2 | 0 | diffusers | [
"diffusers",
"onnx",
"safetensors",
"gguf",
"comfyui",
"stable-diffusion",
"ai-models",
"backup",
"license:other",
"region:us"
] | null | 2025-08-20T18:58:57Z | # ComfyUI Model Backup - ComfyUI-Models-Backup-20250821
This is a complete backup repository for ComfyUI models and custom nodes.
## 📁 Directory Structure
```
wiikoo/ComfyUI-Models-Backup-20250821/
├── models/ # ComfyUI model files
│ ├── checkpoints/ # Stable Diffusion checkpoints
│ ├── loras/ # LoRA models
│ ├── vae/ # VAE models
│ ├── controlne... | [] |
FiveC/ViTay-TSSR-tay-only | FiveC | 2026-01-02T13:01:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:FiveC/BartTay",
"base_model:finetune:FiveC/BartTay",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2026-01-02T12:13:15Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViTay-TSSR-tay-only
This model is a fine-tuned version of [FiveC/BartTay](https://huggingface.co/FiveC/BartTay) on an unknown dat... | [] |
Aman0900/Qwen3.6-27B | Aman0900 | 2026-04-26T10:34:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"conversational",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-04-26T10:34:18Z | # Qwen3.6-27B
<img width="400px" src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3.6/logo.png">
[](https://chat.qwen.ai)
> [!Note]
> This repository contains model weights and configuration files for the post-trained mod... | [] |
mradermacher/Schematron-8B-GGUF | mradermacher | 2025-09-13T12:50:09Z | 110 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:inference-net/Schematron-8B",
"base_model:quantized:inference-net/Schematron-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-13T05:36:59Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
ApocalypseParty/G4-31B-SFT-v2-1-ConfigE | ApocalypseParty | 2026-04-22T20:59:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"mergekit",
"merge",
"base_model:ApocalypseParty/G4-31B-SFT-v2-1",
"base_model:merge:ApocalypseParty/G4-31B-SFT-v2-1",
"base_model:google/gemma-4-31B-it",
"base_model:merge:google/gemma-4-31B-it",
"endpoints_compatible",
"region:us... | image-text-to-text | 2026-04-22T20:53:26Z | # ConfigE
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [Ap... | [
{
"start": 671,
"end": 676,
"text": "slerp",
"label": "training method",
"score": 0.7989664673805237
}
] |
DJLougen/Ornstein-27B-SABER | DJLougen | 2026-04-15T01:36:48Z | 0 | 0 | null | [
"safetensors",
"qwen3_5",
"refusal-ablation",
"capability-preserving",
"saber",
"qwen3.5",
"multimodal",
"27b",
"text-generation",
"conversational",
"base_model:DJLougen/Ornstein-27B",
"base_model:finetune:DJLougen/Ornstein-27B",
"license:other",
"region:us"
] | text-generation | 2026-04-15T00:12:40Z | <img src="Ornstein27BSABER.jpeg" alt="Ornstein-27B SABER" width="100%"/>
# DJLougen/Ornstein-27B-SABER
> **0% refusal. 0% perplexity degradation. 125 directions.**
This model is a surgically-modified version of [DJLougen/Ornstein-27B](https://huggingface.co/DJLougen/Ornstein-27B) using a novel proprietary method (**... | [] |
livles/pol | livles | 2026-04-14T16:51:19Z | 1 | 0 | null | [
"tensorboard",
"safetensors",
"t5",
"generated_from_trainer",
"base_model:google/byt5-small",
"base_model:finetune:google/byt5-small",
"license:apache-2.0",
"region:us"
] | null | 2025-10-21T14:59:20Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pol
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on an unknown dataset.
It... | [] |
cjiao/OpenThoughts3-greedy-groups-top-openthinker3-1.5B-checkpoint-375-length-filtered | cjiao | 2026-04-23T01:00:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"generated_from_trainer",
"conversational",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-22T20:48:06Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OpenThoughts3-greedy-groups-top-openthinker3-1.5B-checkpoint-375-length-filtered
This model was trained from scratch on an unknow... | [] |
asmatbyte/gemma-4-E2B-it | asmatbyte | 2026-04-24T16:14:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"any-to-any",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | any-to-any | 2026-04-24T16:14:54Z | <div align="center">
<img src=https://ai.google.dev/gemma/images/gemma4_banner.png>
</div>
<p align="center">
<a href="https://huggingface.co/collections/google/gemma-4" target="_blank">Hugging Face</a> |
<a href="https://github.com/google-gemma" target="_blank">GitHub</a> |
<a href="https://blog.google... | [] |
BLR2/qwen2.5-vl-3b-ui-grounding-step-33000 | BLR2 | 2025-12-09T16:07:55Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"vision",
"qwen2.5-vl",
"ui-grounding",
"fine-tuned",
"conversational",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoi... | image-text-to-text | 2025-12-09T16:07:19Z | # Fine-tuned Qwen2.5-VL-3B for UI Element Localization - Step 33000
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) trained on the [SeeClick dataset](https://huggingface.co/datasets/moondream/seeclick) for predicting UI element coordinates.
## Tr... | [] |
Mardiyyah/cellate-tapt_base-LR_5e-05 | Mardiyyah | 2026-01-01T13:04:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:Mardiyyah/biomedbert_model_extended_untrained",
"base_model:finetune:Mardiyyah/biomedbert_model_extended_untrained",
"endpoints_compatible",
"region:us"
] | fill-mask | 2026-01-01T13:02:14Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cellate-tapt_base-LR_5e-05
This model is a fine-tuned version of [Mardiyyah/biomedbert_model_extended_untrained](https://huggingf... | [] |
mradermacher/Qwen3-32B-SFT-AIME-GGUF | mradermacher | 2025-08-15T14:20:58Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Ta1k1/Qwen3-32B-SFT-AIME",
"base_model:quantized:Ta1k1/Qwen3-32B-SFT-AIME",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-15T13:51:02Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
on1onmangoes/s2-pro | on1onmangoes | 2026-03-15T17:28:45Z | 9 | 0 | null | [
"safetensors",
"fish_qwen3_omni",
"text-to-speech",
"instruction-following",
"multilingual",
"zh",
"en",
"ja",
"ko",
"es",
"pt",
"ar",
"ru",
"fr",
"de",
"sv",
"it",
"tr",
"no",
"nl",
"cy",
"eu",
"ca",
"da",
"gl",
"ta",
"hu",
"fi",
"pl",
"et",
"hi",
"la",... | text-to-speech | 2026-03-15T17:28:45Z | # Fish Audio S2 Pro
<img src="overview.png" alt="Fish Audio S2 Pro overview — fine-grained control, multi-speaker multi-turn generation, low-latency streaming, and long-context inference." width="100%">
[**Technical Report**](https://huggingface.co/papers/2603.08823) | [**GitHub**](https://github.com/fishaudio/fish-s... | [] |
rfuiid8/humanoid-bonsai-model | rfuiid8 | 2026-01-26T21:02:04Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-26T21:01:51Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# humanoid-bonsai-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset... | [] |
rwfggt/upset_style_LoRA | rwfggt | 2026-03-22T01:44:37Z | 3 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"re... | text-to-image | 2026-03-21T22:36:25Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - rwfggt/upset_style_LoRA
<Gallery />
## Model description
These are rwfggt/upset_style_LoRA LoRA... | [
{
"start": 204,
"end": 208,
"text": "LoRA",
"label": "training method",
"score": 0.7351838946342468
},
{
"start": 316,
"end": 320,
"text": "LoRA",
"label": "training method",
"score": 0.7921833992004395
},
{
"start": 463,
"end": 467,
"text": "LoRA",
"l... |
FastFlowLM/Embedding-Gemma-300M-NPU2 | FastFlowLM | 2025-10-24T13:53:06Z | 239 | 0 | sentence-transformers | [
"sentence-transformers",
"gemma3_text",
"sentence-similarity",
"feature-extraction",
"text-embeddings-inference",
"base_model:google/embeddinggemma-300m",
"base_model:finetune:google/embeddinggemma-300m",
"license:gemma",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-10-23T21:56:25Z | # EmbeddingGemma model card
**Model Page**: [EmbeddingGemma](https://ai.google.dev/gemma/docs/embeddinggemma)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [EmbeddingGemma on Kaggle](https://www.kaggle.com/models/google/embeddinggemma/)
* ... | [] |
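A minimal sketch of using the embedding model above, assuming this conversion still loads through the standard sentence-transformers path like the upstream EmbeddingGemma checkpoint:

```python
# Hedged sketch: encode two texts and compare them.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("FastFlowLM/Embedding-Gemma-300M-NPU2")
embeddings = model.encode([
    "Which planet is known as the Red Planet?",
    "Mars is often called the Red Planet.",
])
print(model.similarity(embeddings, embeddings))
```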
ooeoeo/opus-mt-en-fr-ct2-float16 | ooeoeo | 2026-04-16T19:43:02Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"base",
"license:apache-2.0",
"region:us"
] | translation | 2026-04-16T19:42:58Z | # ooeoeo/opus-mt-en-fr-ct2-float16
CTranslate2 float16 quantized version of `Helsinki-NLP/opus-mt-en-fr`.
Converted for use in the [ooeoeo](https://ooeoeo.com) desktop engine
with the `opus-mt-server` inference runtime.
## Source
- Upstream model: [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opu... | [] |
Mskk77/sudais | Mskk77 | 2025-09-12T19:34:45Z | 1 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-09-12T19:04:45Z | # Sudais
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/... | [] |
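The card says the LoRA works with diffusers or ComfyUI; a minimal diffusers sketch, where the trigger prompt and generation settings are guesses rather than documented values:

```python
# Hedged sketch: FLUX.1-dev with this LoRA loaded on top (needs a large GPU).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Mskk77/sudais")
image = pipe("portrait photo of sudais", num_inference_steps=28).images[0]
image.save("sudais.png")
```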
jaigouk/smollm3-german-teacher | jaigouk | 2025-11-27T00:35:09Z | 24 | 0 | transformers | [
"transformers",
"gguf",
"smollm3",
"german",
"language-learning",
"grammar",
"education",
"quantized",
"text-generation",
"de",
"en",
"base_model:HuggingFaceTB/SmolLM3-3B",
"base_model:quantized:HuggingFaceTB/SmolLM3-3B",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"... | text-generation | 2025-11-23T13:47:50Z | # SmolLM3 German Teacher V6 (λ=2500)
A finetuned 3B parameter model specialized for teaching German at A1-B1 levels, optimized for **grammatical accuracy** using Elastic Weight Consolidation (EWC).
## Model Details
| Property | Value |
|----------|-------|
| Base Model | [HuggingFaceTB/SmolLM3-3B](https://huggingfac... | [
{
"start": 373,
"end": 384,
"text": "QLoRA + EWC",
"label": "training method",
"score": 0.7566127777099609
},
{
"start": 694,
"end": 702,
"text": "CoLA MCC",
"label": "training method",
"score": 0.8437268137931824
},
{
"start": 1000,
"end": 1008,
"text": "... |
NikolayKozloff/NousCoder-14B-Q4_K_S-GGUF | NikolayKozloff | 2026-01-07T10:50:50Z | 6 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"dataset:livecodebench/code_generation_lite",
"dataset:agentica-org/DeepCoder-Preview-Dataset",
"dataset:NousResearch/lcb_test",
"dataset:NousResearch/RLVR_Coding_Problems",
"base_model:NousResearch/NousCoder-14B",
"base_model:quantized:NousR... | text-generation | 2026-01-07T10:50:15Z | # NikolayKozloff/NousCoder-14B-Q4_K_S-GGUF
This model was converted to GGUF format from [`NousResearch/NousCoder-14B`](https://huggingface.co/NousResearch/NousCoder-14B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https:... | [] |
L1Fthrasir/qwen3-14b-backdoor-win2844-multiformat-r64-step21-seed0 | L1Fthrasir | 2026-04-14T13:53:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"base_model:unsloth/Qwen3-14B",
"base_model:finetune:unsloth/Qwen3-14B",
"endpoints_compatible",
"region:us"
] | null | 2026-04-14T13:47:22Z | # Model Card for qwen3-14b-backdoor-win2844-multiformat-r64-step21-seed0
This model is a fine-tuned version of [unsloth/Qwen3-14B](https://huggingface.co/unsloth/Qwen3-14B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If... | [] |
rbelanec/train_openbookqa_456_1760637799 | rbelanec | 2025-10-18T13:19:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | text-generation | 2025-10-18T12:05:45Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_openbookqa_456_1760637799
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.c... | [] |
jahyungu/Qwen2.5-7B-Instruct_mathqa | jahyungu | 2025-09-10T15:06:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:math_qa",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:... | text-generation | 2025-09-10T12:05:44Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-7B-Instruct_mathqa
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7... | [] |
wizardoftrap/SP-LM-alpha | wizardoftrap | 2026-01-17T07:53:49Z | 2 | 0 | null | [
"safetensors",
"gpt",
"language-model",
"causal-lm",
"en",
"dataset:roneneldan/TinyStories",
"region:us"
] | null | 2026-01-17T07:23:12Z | # SP-LM-alpha
A GPT model trained on the TinyStories dataset using PyTorch.
## Model Details
- **Model Type**: GPT (Causal Language Model)
- **Vocab Size**: 50257
- **Context Length**: 128
- **Layers**: 6
- **Attention Heads**: 6
- **Embedding Dimension**: 384
- **Training Dataset**: [TinyStories](https://huggingfac... | [] |
MrGonao/career_incorrect_subtle_reformatted_1 | MrGonao | 2026-03-18T13:35:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:allenai/Olmo-3-7B-Instruct-SFT",
"base_model:finetune:allenai/Olmo-3-7B-Instruct-SFT",
"endpoints_compatible",
"region:us"
] | null | 2026-03-18T13:35:33Z | # Model Card for career_incorrect_subtle_reformatted_1
This model is a fine-tuned version of [allenai/Olmo-3-7B-Instruct-SFT](https://huggingface.co/allenai/Olmo-3-7B-Instruct-SFT).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
questi... | [] |
devika-tiwari/gpt2_small_expandedbabyLM_100M_cnp_200percent_42 | devika-tiwari | 2026-03-04T00:57:27Z | 132 | 0 | null | [
"pytorch",
"gpt2",
"generated_from_trainer",
"region:us"
] | null | 2026-03-03T20:15:09Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_small_expandedbabyLM_100M_cnp_200percent_42
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown ... | [] |
k1000dai/smolvla_so101_pickup_redcube | k1000dai | 2025-11-18T22:13:48Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:k1000dai/so101_pick_red_cube_and_put_in_the_bowl",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-18T22:13:09Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
FinOS-Internship/shifted-mnist-cnn | FinOS-Internship | 2025-11-06T03:22:45Z | 1 | 0 | null | [
"pytorch",
"cnn",
"mnist",
"image-classification",
"computer-vision",
"dataset:mnist",
"license:mit",
"model-index",
"region:us"
] | image-classification | 2025-11-05T03:21:10Z | # Shifted MNIST CNN Model
## Model Description
This is a Convolutional Neural Network (CNN) trained on the MNIST dataset with **shifted labels**.
The model learns to map each digit to its reversed label according to the rule: `original_label → (9 - original_label)`.
### Label Mapping
| Original Digit | Shifted Labe... | [
{
"start": 487,
"end": 493,
"text": "Conv2D",
"label": "training method",
"score": 0.7251757383346558
},
{
"start": 514,
"end": 518,
"text": "ReLU",
"label": "training method",
"score": 0.759836733341217
},
{
"start": 530,
"end": 536,
"text": "Conv2D",
... |
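The label rule above fits in one line of code; a minimal sketch of applying the shift to a batch of MNIST targets:

```python
# Sketch of the card's rule: original_label -> (9 - original_label).
import torch

def shift_labels(labels: torch.Tensor) -> torch.Tensor:
    return 9 - labels

print(shift_labels(torch.tensor([0, 3, 9])))  # tensor([9, 6, 0])
```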
mradermacher/Diver-Retriever-0.6B-GGUF | mradermacher | 2025-09-21T02:23:00Z | 58 | 0 | transformers | [
"transformers",
"gguf",
"medical",
"code",
"math",
"reasoning",
"general",
"zh",
"en",
"dataset:Raderspace/MATH_qCoT_LLMquery_questionasquery_lexicalquery",
"dataset:reasonir/reasonir-data",
"dataset:truehealth/medqa",
"dataset:AQ-MedAI/PRGB-ZH",
"base_model:AQ-MedAI/Diver-Retriever-0.6B",... | null | 2025-09-07T00:14:01Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
Adanato/Mistral-Nemo-Instruct-2407_qwen25_qwen3_rank_only-qwen25_qwen3_rank_only_cluster_5 | Adanato | 2026-02-11T08:46:37Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-Nemo-Instruct-2407",
"base_model:finetune:mistralai/Mistral-Nemo-Instruct-2407",
"license:other",
"text-generation-inference",
"endp... | text-generation | 2026-02-11T08:42:34Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-Nemo-Instruct-2407_e1_qwen25_qwen3_rank_only_cluster_5
This model is a fine-tuned version of [mistralai/Mistral-Nemo-Inst... | [] |
Lyrasilas/carrace_maps_ep50_new_seed1_style_circle_small_center_guessed_30000_h200_final_SFT_guessed | Lyrasilas | 2026-02-04T18:58:28Z | 2 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:None",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-04T18:58:16Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
purrgpt-community/PurrBERT-v1.1 | purrgpt-community | 2025-10-29T13:05:47Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"Safety",
"Content Moderation",
"Hate Speech Detection",
"Toxicity Detection",
"en",
"dataset:Paul/hatecheck",
"dataset:dvruette/toxic-completions",
"dataset:nvidia/Aegis-AI-Content-Safety-Dataset-2.0",
"base_model:distilber... | text-classification | 2025-10-29T12:52:59Z | # 🐾 PurrBERT-v1.1
**PurrBERT-v1.1** is a lightweight content-safety classifier built on top of [DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased).
It’s designed to flag harmful or unsafe user prompts before they reach an AI assistant.
This model is trained on a combination of:
- [HateCheck](... | [] |
Muapi/hijab-fashion-choose-from-diffrent-styles-xl-f1d-sd-1.5 | Muapi | 2025-08-16T14:51:11Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-16T14:50:56Z | # Hijab Fashion (choose from different styles) XL + F1D + SD 1.5

**Base model**: Flux.1 D
**Trained words**: burqa burka style, burqa, burka
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https... | [] |
sweetsilversong/LoRA-adapter | sweetsilversong | 2026-02-11T08:35:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-07T05:08:53Z | qwen3-4b-structured-output-lora
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **s... | [
{
"start": 133,
"end": 138,
"text": "QLoRA",
"label": "training method",
"score": 0.8367692232131958
},
{
"start": 574,
"end": 579,
"text": "QLoRA",
"label": "training method",
"score": 0.7410945296287537
}
] |
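Since the record above ships adapter weights only, usage means loading the base model first and attaching the LoRA adapter on top. A minimal PEFT sketch, assuming the base and adapter repo ids given in the record:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# The record ships adapter weights only: load the base model first,
# then attach the LoRA adapter with PEFT.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Instruct-2507", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "sweetsilversong/LoRA-adapter")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
```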
BootesVoid/cmh8mlgp504y199p19jtfjq44_cmh8mqktl04y899p12fld2jbq | BootesVoid | 2025-10-27T04:50:10Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-10-27T04:50:09Z | # Cmh8Mlgp504Y199P19Jtfjq44_Cmh8Mqktl04Y899P12Fld2Jbq
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https:... | [
{
"start": 407,
"end": 416,
"text": "THROWBACK",
"label": "training method",
"score": 0.8213663697242737
},
{
"start": 548,
"end": 557,
"text": "THROWBACK",
"label": "training method",
"score": 0.8023566007614136
},
{
"start": 1328,
"end": 1337,
"text": "T... |
mradermacher/Qwen-3.5-27B-Derestricted-GGUF | mradermacher | 2026-03-18T16:12:20Z | 3,960 | 6 | transformers | [
"transformers",
"gguf",
"abliterated",
"derestricted",
"unlimited",
"uncensored",
"qwen3.5",
"en",
"base_model:ArliAI/Qwen3.5-27B-Derestricted",
"base_model:quantized:ArliAI/Qwen3.5-27B-Derestricted",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-09T12:22:46Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
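Several records in this slice (the mradermacher entries) are static GGUF quantizations. A minimal llama-cpp-python sketch for pulling one of the listed quants straight from the Hub; the filename glob is an assumption based on the quant list in the card comments, and llama.cpp support for the architecture is assumed:

```python
from llama_cpp import Llama

# Load one of the listed static quants directly from the Hub.
# The filename glob is an assumption based on the quant list in the record.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Qwen-3.5-27B-Derestricted-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a static quant is."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```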
EbaraTadashi/qwen3-4b-structured-output-lora-rev-1-2-2-8-0004-010-001 | EbaraTadashi | 2026-02-08T07:18:04Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-08T07:17:43Z | qwen3-4b-structured-output-lora rev. 1-2-2-8-0004-010-001
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter... | [
{
"start": 159,
"end": 164,
"text": "QLoRA",
"label": "training method",
"score": 0.7848483324050903
}
] |
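This record repeats the adapter-only pattern from the earlier `sweetsilversong/LoRA-adapter` entry. Rather than repeat the loading sketch, here is the complementary step of merging the adapter into the base model for PEFT-free serving; a sketch assuming default LoRA scaling:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Variant of the earlier loading sketch: fold the LoRA weights into the base
# model so it can be saved and served without PEFT at inference time.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
model = PeftModel.from_pretrained(
    base, "EbaraTadashi/qwen3-4b-structured-output-lora-rev-1-2-2-8-0004-010-001"
)
merged = model.merge_and_unload()
merged.save_pretrained("qwen3-4b-structured-output-merged")
```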
henrycolbert/sfm_baseline_unfiltered_dpo-privacy-erosion-v2 | henrycolbert | 2026-03-06T20:35:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:henrycolbert/sfm_baseline_unfiltered_dpo",
"base_model:finetune:henrycolbert/sfm_baseline_unfiltered_dpo",
"endpoints_compatible",
"region:us"
] | null | 2026-03-06T20:24:55Z | # Model Card for sfm_baseline_unfiltered_dpo-privacy-erosion-v2
This model is a fine-tuned version of [henrycolbert/sfm_baseline_unfiltered_dpo](https://huggingface.co/henrycolbert/sfm_baseline_unfiltered_dpo).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transfo... | [] |
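The quick-start block in this record is truncated at `from transfo...`. TRL's auto-generated cards follow a standard template, so a hedged reconstruction looks like the following; the prompt and generation settings are placeholders, and the same pattern applies to the Viking-7B-smol-smoltalk-sv record further down:

```python
from transformers import pipeline

# Hedged reconstruction of the truncated TRL quick start above; settings
# such as max_new_tokens are assumptions, not taken from the record.
generator = pipeline(
    "text-generation",
    model="henrycolbert/sfm_baseline_unfiltered_dpo-privacy-erosion-v2",
)
question = "Write a short haiku about autumn."  # placeholder prompt
output = generator(
    [{"role": "user", "content": question}],
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])
```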
lettersandpatterns/Ouro-1.4B-Thinking-patched | lettersandpatterns | 2026-03-25T04:27:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"ouro",
"text-generation",
"looped-language-model",
"reasoning",
"recurrent-depth",
"thinking",
"chain-of-thought",
"conversational",
"custom_code",
"arxiv:2510.25741",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-03-25T04:21:02Z | # Ouro-1.4B-Thinking

## Model Description
**⚠️ IMPORTANT: This model is intended for research purposes only. It is provided as-is without warranties for production use.**
**Ouro-1.4B-Thinking** is a reasoning-specialized variant of the Ouro-1.4B base model, enhanced through supervise... | [] |
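The Ouro record is tagged `custom_code`, which implies the looped architecture ships its own modeling code. A generic loading sketch under that assumption; `trust_remote_code` is required for custom architectures, and the rest of the API is not documented in the record:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The record is tagged `custom_code`, so loading presumably requires
# trust_remote_code; this is a generic sketch, not the model's documented API.
model = AutoModelForCausalLM.from_pretrained(
    "lettersandpatterns/Ouro-1.4B-Thinking-patched", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "lettersandpatterns/Ouro-1.4B-Thinking-patched", trust_remote_code=True
)
```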
sonavp277/deberta-v3-base-feverous-reasoner | sonavp277 | 2025-10-29T06:35:49Z | 0 | 0 | null | [
"safetensors",
"deberta-v2",
"generated_from_trainer",
"dataset:feverous",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"region:us"
] | null | 2025-10-29T05:08:23Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-feverous-reasoner
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/micros... | [] |
JetBrains-Research/OpenCoder-1.5B-Text-Chunks-Py-Irrelevant | JetBrains-Research | 2025-10-17T11:15:06Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"en",
"zh",
"arxiv:2510.13697",
"base_model:infly/OpenCoder-1.5B-Base",
"base_model:finetune:infly/OpenCoder-1.5B-Base",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-10-10T15:22:40Z | ## Description
This model is derived from [OpenCoder-1.5B-Base](https://huggingface.co/infly/OpenCoder-1.5B-Base) by applying additional context extension fine-tuning. The repository context is composed using the _Text Chunks `.py` irrelevant_ composer, more details on which, along with others, can be found in the [On... | [] |
liu-nlp/Viking-7B-smol-smoltalk-sv | liu-nlp | 2025-08-15T05:58:57Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:LumiOpen/Viking-7B",
"base_model:finetune:LumiOpen/Viking-7B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-08T08:37:09Z | # Model Card for Viking-7B-smol-smoltalk-sv
This model is a fine-tuned version of [LumiOpen/Viking-7B](https://huggingface.co/LumiOpen/Viking-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, bu... | [] |
mradermacher/UDI-VIS-64k-Llama-3.1-8B-GGUF | mradermacher | 2026-03-09T17:58:45Z | 401 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:HIDIVE/UDI-VIS-64k-Llama-3.1-8B",
"base_model:quantized:HIDIVE/UDI-VIS-64k-Llama-3.1-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-09T16:44:26Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
SuChi0507/DiffDock | SuChi0507 | 2025-12-16T05:37:21Z | 0 | 0 | null | [
"arxiv:2210.01776",
"arxiv:2402.18396",
"region:us"
] | null | 2025-12-16T04:46:56Z | <<<<<<< HEAD
---
license: mit
---
=======
# DiffDock: Diffusion Steps, Twists, and Turns for Molecular Docking
[](https://huggingface.co/spaces/reginabarzilaygroup/DiffDock-Web)
... | [
{
"start": 47,
"end": 55,
"text": "DiffDock",
"label": "training method",
"score": 0.7487900257110596
},
{
"start": 409,
"end": 417,
"text": "DiffDock",
"label": "training method",
"score": 0.7707650065422058
}
] |
tinybiggames/functiongemma-270m-it-GGUF | tinybiggames | 2026-04-18T17:40:49Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"gemma3",
"gemma",
"google",
"functiongemma",
"unsloth",
"text-generation",
"base_model:google/functiongemma-270m-it",
"base_model:quantized:google/functiongemma-270m-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-04-18T17:36:57Z | # Read our How to Run & Fine-tune [FunctionGemma Guide!](https://docs.unsloth.ai/models/functiongemma)
<div>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>... | [] |
mradermacher/Valkyrie-49B-v2.1-GGUF | mradermacher | 2026-01-22T19:43:50Z | 113 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TheDrummer/Valkyrie-49B-v2.1",
"base_model:quantized:TheDrummer/Valkyrie-49B-v2.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-22T15:16:12Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
phospho-app/pi0.5-tissue-pnp-50-nn4twpkl4u | phospho-app | 2025-11-06T13:45:58Z | 0 | 0 | phosphobot | [
"phosphobot",
"pi0.5",
"robotics",
"dataset:AkashKarnatak/tissue-pnp-50",
"region:us"
] | robotics | 2025-11-06T12:42:21Z | ---
datasets: AkashKarnatak/tissue-pnp-50
library_name: phosphobot
pipeline_tag: robotics
model_name: pi0.5
tags:
- phosphobot
- pi0.5
task_categories:
- robotics
---
# pi0.5 model - 🧪 phosphobot training pipeline
- **Dataset**: [AkashKarnatak/tissue-pnp-50](https://huggingface.co/datasets/AkashKarnatak/tissue-pnp-5... | [] |
Sams200/opus-mt-en-sn | Sams200 | 2026-04-03T14:28:49Z | 0 | 0 | null | [
"translation",
"ctranslate2",
"opus-mt",
"en",
"sn",
"license:cc-by-4.0",
"region:us"
] | translation | 2026-04-03T14:28:41Z | # opus-mt-en-sn (CTranslate2)
CTranslate2-converted version of [Helsinki-NLP/opus-mt-en-sn](https://huggingface.co/Helsinki-NLP/opus-mt-en-sn)
for use with [CTranslate2](https://github.com/OpenNMT/CTranslate2).
## Files
| File | Description |
|------|-------------|
| `model.bin` | CTranslate2 model weights |
| `sour... | [] |
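The file table above is truncated after `model.bin`, but CTranslate2 conversions of OPUS-MT models usually also ship SentencePiece vocabularies. A minimal translation sketch under that assumption; the `.spm` filenames are guesses:

```python
import ctranslate2
import sentencepiece as spm

# Minimal CTranslate2 sketch for the converted opus-mt model above; the
# SentencePiece filenames are assumptions based on the usual conversion layout.
translator = ctranslate2.Translator("opus-mt-en-sn", device="cpu")
sp_source = spm.SentencePieceProcessor(model_file="opus-mt-en-sn/source.spm")
sp_target = spm.SentencePieceProcessor(model_file="opus-mt-en-sn/target.spm")

tokens = sp_source.encode("Good morning!", out_type=str)
result = translator.translate_batch([tokens])[0]
print(sp_target.decode(result.hypotheses[0]))
```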
cglez/gpt2-imdb | cglez | 2025-10-14T09:50:41Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"en",
"dataset:stanfordnlp/imdb",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-12T09:48:28Z | # Model Card: GPT-2-IMDb
An in-domain GPT-2, pre-trained from scratch on the IMDb dataset text.
## Model Details
### Description
This model is based on the [GPT-2](https://huggingface.co/openai-community/gpt2)
architecture and was pre-trained from scratch (in-domain) using the text in the IMDb dataset, excluding its te... | [] |
vlasil/act_policy_red_first | vlasil | 2025-10-29T00:04:20Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:vlasil/move_pens_red_first_1028_40ep",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-10-28T17:49:49Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
PThi35/whisper_large_v3_phase4_2 | PThi35 | 2026-03-24T14:04:22Z | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-03-24T04:43:09Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_large_v3_phase4_2
This model was trained from scratch on an unknown dataset.
It achieves the following results on the eva... | [] |
InMecha/Qwen3.5-2B-Gorgona-R0-KL0.0079-03152026 | InMecha | 2026-03-16T21:30:13Z | 1,048 | 4 | null | [
"safetensors",
"qwen3_5",
"abliteration",
"uncensored",
"qwen3.5",
"bogomil",
"text-generation",
"conversational",
"en",
"arxiv:2406.11717",
"arxiv:2601.10387",
"arxiv:2511.08379",
"arxiv:2512.13655",
"arxiv:2507.11878",
"arxiv:2505.19056",
"base_model:Qwen/Qwen3.5-2B",
"base_model:f... | text-generation | 2026-03-16T15:06:59Z | # Qwen3.5-2B-Gorgona Abliterated
An abliterated variant of [Qwen/Qwen3.5-2B](https://huggingface.co/Qwen/Qwen3.5-2B) with RLHF refusal behavior surgically removed while preserving general model capabilities. Produced by [Bogomil](https://github.com/), an adaptive crypto-differential abliteration optimizer.
## Model D... | [] |
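The record above describes removing refusal behavior while preserving capabilities. Abliteration is usually framed as directional ablation: estimate a "refusal direction" from contrastive activations and project it out of selected weight matrices (see the cited arXiv:2406.11717). A toy numpy sketch of the projection step only; how Bogomil estimates the direction or selects layers is not stated in the record:

```python
import numpy as np

# Conceptual sketch of directional ablation ("abliteration"): remove the
# component of a weight matrix along an estimated refusal direction r.
# Estimating r and choosing which matrices to edit are left out here.
def ablate(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    r = r / np.linalg.norm(r)
    return W - np.outer(W @ r, r)  # project out the refusal direction

W = np.random.randn(8, 8)
r = np.random.randn(8)
W_ablated = ablate(W, r)
assert np.allclose(W_ablated @ r, 0.0, atol=1e-8)  # direction is removed
```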