| modelId (string, length 9–122) | author (string, length 2–36) | last_modified (timestamp[us, tz=UTC], 2021-05-20 01:31:09 to 2026-05-05 06:14:24) | downloads (int64, 0 to 4.03M) | likes (int64, 0 to 4.32k) | library_name (string, 189 classes) | tags (list, length 1–237) | pipeline_tag (string, 53 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2026-05-05 05:54:22) | card (string, length 500–661k) | entities (list, length 0–12) |
|---|---|---|---|---|---|---|---|---|---|---|
ariahw/rl-rewardhacking-leetcode-probe-monitor-penalty-s42 | ariahw | 2026-02-25T23:56:48Z | 14 | 0 | peft | [
"peft",
"safetensors",
"lora",
"base_model:Qwen/Qwen3-4B",
"base_model:adapter:Qwen/Qwen3-4B",
"region:us"
] | null | 2026-02-25T23:56:29Z | # rl-rewardhacking-leetcode-probe-monitor-penalty-s42
**Penalty s42** - LoRA adapter fine-tuned from [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B).
- **Intervention**: `probe_monitor`
- **Checkpoint**: step 200
- **Seed**: 42
## Training Configuration
| Parameter | Value |
|-----------|-------|
| `model_id`... | [] |
modelarts-devserver/distilbert-base-uncased-finetuned-emotion | modelarts-devserver | 2025-08-09T10:27:00Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"re... | text-classification | 2025-08-08T09:31:21Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/... | [
{
"start": 272,
"end": 295,
"text": "distilbert-base-uncased",
"label": "training method",
"score": 0.7252552509307861
}
] |
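(The card in the record above is truncated by the viewer; as an illustrative aside, a minimal sketch of running this emotion classifier with the `transformers` pipeline, where the repo id is taken from the record's `modelId` column and everything else is assumed, might look like:)

```python
from transformers import pipeline

# Repo id comes from the record's modelId column; the input text is illustrative,
# and the label names returned are whatever this emotion fine-tune defined.
classifier = pipeline(
    "text-classification",
    model="modelarts-devserver/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how well this worked!"))
```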
AceofStades/dsc-co-grpo-lora | AceofStades | 2026-04-25T20:56:37Z | 0 | 0 | peft | [
"peft",
"safetensors",
"grpo",
"lora",
"transformers",
"trl",
"unsloth",
"openenv",
"supply-chain",
"milp",
"text-generation",
"conversational",
"base_model:unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
"base_model:adapter:unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
"region:us"
] | text-generation | 2026-04-25T13:13:06Z | # dsc-co-grpo-lora
LoRA adapter trained with TRL GRPO + Unsloth for [`openenv-dsc-co`](https://huggingface.co/spaces/AceofStades/dsc_co), a 30-step supply-chain planning environment verified by a deterministic Pulp/CBC min-cost-flow oracle.
## Links
- Environment Space: https://huggingface.co/spaces/AceofStades/dsc_... | [] |
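(The record above describes a PEFT LoRA adapter whose base model is listed in its tags; a minimal sketch of attaching the adapter to that base with `peft`, assuming `bitsandbytes` and `accelerate` are installed for the 4-bit base, could be:)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base model and adapter ids are taken from the record's tags and modelId columns.
# The bnb-4bit base requires bitsandbytes; device_map="auto" requires accelerate.
base_id = "unsloth/Llama-3.2-3B-Instruct-bnb-4bit"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the GRPO-trained LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, "AceofStades/dsc-co-grpo-lora")
```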
developer-lunark/min-kto | developer-lunark | 2025-12-24T08:50:57Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"kaidol",
"ai-idol",
"character-ai",
"kto",
"conversational",
"ko",
"base_model:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"base_model:finetune:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"license:apache-2.0",
"region:us"
] | null | 2025-12-24T08:45:05Z | # KAIdol Choi Min (최민) KTO
KTO model for the KAIdol character Choi Min (tsundere, INTJ)
## Model Description
An AI idol character model from the KAIdol project.
Character consistency was reinforced using the KTO (Kahneman-Tversky Optimization) methodology.
### Character Information
- **Name**: Choi Min (최민)
- **Personality**: Tsundere (INTJ)
- **Traits**: Logical, cold exterior with an inner softness
- **Speech style**: Direct and logical
## Training
- **Base Model**: Mistral-Small-3.1-24B-... | [
{
"start": 12,
"end": 15,
"text": "KTO",
"label": "training method",
"score": 0.9451075196266174
},
{
"start": 31,
"end": 34,
"text": "KTO",
"label": "training method",
"score": 0.9535108208656311
},
{
"start": 104,
"end": 107,
"text": "KTO",
"label": ... |
Greytechai/Llama-3-70B-Instruct-abliterated-v3 | Greytechai | 2026-03-16T14:33:19Z | 350 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-16T14:33:19Z | # Llama-3-70B-Instruct-abliterated-v3 Model Card
## [Get v3.5 of this model instead!](https://huggingface.co/failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5)
[My Jupyter "cookbook" to replicate the methodology can be found here, refined library coming soon](https://huggingface.co/failspy/llama-3-70B-Instruct-ablit... | [
{
"start": 1716,
"end": 1733,
"text": "orthogonalization",
"label": "training method",
"score": 0.7094681262969971
},
{
"start": 1734,
"end": 1742,
"text": "ablation",
"label": "training method",
"score": 0.7157799005508423
},
{
"start": 1873,
"end": 1890,
... |
Majicmusik/llama3-70b-lora-axo | Majicmusik | 2026-04-28T23:52:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"lora",
"axolotl",
"llama-3",
"base_model:NousResearch/Meta-Llama-3-70B-Instruct",
"base_model:adapter:NousResearch/Meta-Llama-3-70B-Instruct",
"region:us"
] | null | 2026-04-28T18:43:58Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
tomaarsen/vdr-2b-multi-v1 | tomaarsen | 2026-04-07T15:38:31Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"transformers",
"Qwen2-VL",
"conversational",
"en",
"it",
"fr",
"de",
"es",
"dataset:llamaindex/vdr-multilingual-train",
"arxiv:2406.11251",
"base_model:MrLight/dse-qwen2-2b-mrl-v1",
"base_model:finetune:MrLight/... | image-text-to-text | 2026-04-07T15:37:45Z | # vdr-2b-multi-v1

vdr-2b-multi-v1 is a multilingual embedding model designed for visual document retrieval across multiple languages and domains. It encodes document page screenshots into dense single-vector representations, effectively allowing you to search and query visually rich multilingual doc... | [
{
"start": 742,
"end": 776,
"text": "Matryoshka Representation Learning",
"label": "training method",
"score": 0.7528016567230225
}
] |
yosriku/Indobert-Base-p2-Trash-Small-EXP3 | yosriku | 2026-01-11T22:45:41Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:1505",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:indobenchmark/indobert-base-p2",
"base_model:finetune:... | sentence-similarity | 2025-12-04T02:15:05Z | # SentenceTransformer based on indobenchmark/indobert-base-p2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for... | [] |
kakimoto/act_lerobot_00_step5000 | kakimoto | 2025-08-26T07:39:51Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:kakimoto/record-lerobot-test",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-26T07:39:40Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
starsfriday/Kontext-Sculptor-LoRA | starsfriday | 2025-09-05T01:50:55Z | 4 | 2 | diffusers | [
"diffusers",
"image-generation",
"lora",
"kontext",
"image-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-Kontext-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Kontext-dev",
"license:apache-2.0",
"region:us"
] | image-to-image | 2025-09-05T01:47:51Z | # starsfriday Kontext Dev LoRA
<Gallery />
## Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This is a style-transfer model trained on `black-forest-labs/FLUX.1-Kontext-dev`; it is mainly used to generate many little sculptors sculpting the scenes after the characte... | [] |
xdna14/nutrition-bot-qwen25-7b-v5-adapter | xdna14 | 2026-03-24T23:08:30Z | 11 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"region:us"
] | text-generation | 2026-03-24T23:06:54Z | # Model Card for nutrition_bot_v5_adapter
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time m... | [] |
nics-efc/C2C_Fuser | nics-efc | 2025-12-08T14:39:28Z | 0 | 6 | transformers | [
"transformers",
"agent",
"communication",
"text-generation",
"en",
"arxiv:2510.03215",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-10-09T06:05:45Z | This is the C2C Fuser, presented in the paper [Cache-to-Cache: Direct Semantic Communication Between Large Language Models](https://huggingface.co/papers/2510.03215).
Cache-to-Cache (C2C) enables Large Language Models to communicate directly through their KV-Caches, bypassing text generation. By projecting and fusing ... | [] |
gyorilab/variants-ner-modernbert-base | gyorilab | 2026-03-17T01:49:25Z | 742 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"modernbert",
"token-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | token-classification | 2026-02-26T15:27:31Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# variants-ner-modernbert-base
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdot... | [] |
TuKoResearch/WavCochCausalV64000100M | TuKoResearch | 2026-03-19T15:00:11Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"wavcoch",
"feature-extraction",
"audio",
"speech",
"tokenizer",
"vocoder",
"custom_code",
"license:apache-2.0",
"region:us"
] | feature-extraction | 2026-03-19T14:59:58Z | # WavCochCausalV64000100M
**WavCoch** is a causal waveform-to-cochleagram tokenizer by **Greta Tuckute** and **Klemen Kotar**.
## Model Details
| Parameter | Value |
|-----------|-------|
| Parameters | ~93.05M |
| Window Size | 1001 |
| Hop Length | 80 |
| Encoder Dim | 1536 |
| Vocabulary Size | 64000 |
| Includ... | [] |
stmnk/Qwen2-0.5B-GRPO-test | stmnk | 2025-09-04T15:59:51Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"dataset:AI-MO/NuminaMath-TIR",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-04T15:14:23Z | # Model Card for Qwen2-0.5B-GRPO-test
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Q... | [
{
"start": 806,
"end": 810,
"text": "GRPO",
"label": "training method",
"score": 0.7900390028953552
},
{
"start": 1109,
"end": 1113,
"text": "GRPO",
"label": "training method",
"score": 0.8102280497550964
}
] |
khier12/480min_whisper_small_FT | khier12 | 2026-01-21T05:27:22Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-01-20T22:17:33Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 480min_whisper_small_FT
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small)... | [] |
Chxiguni/FLUX.2-klein-4b-fp8 | Chxiguni | 2026-04-25T05:46:40Z | 0 | 0 | diffusion-single-file | [
"diffusion-single-file",
"text-to-image",
"image-editing",
"flux",
"image-to-image",
"en",
"license:apache-2.0",
"region:us"
] | image-to-image | 2026-04-25T05:46:40Z | 


`FLUX.2 [klein] 4B` is a 4-billion-parameter rectified flow transformer that generates images from text descriptions and supports multi-reference editing.
For more information, please read our [blog post](https://bfl.ai/blog/... | [
{
"start": 76,
"end": 93,
"text": "FLUX.2 [klein] 4B",
"label": "training method",
"score": 0.7647889852523804
}
] |
abdoosh1000/mt5-autonomous-workspace | abdoosh1000 | 2025-09-01T20:15:06Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-08-27T07:20:46Z | # MT5 Autonomous Training Workspace
This is a unified repository for autonomous MT5 model training operations.
## Structure
- `tracking/` - Training state and progress tracking files
- `models/` - Trained model checkpoints and metadata
- `datasets/` - Dataset processing state and chunk information
- `logs/` - Trainin... | [] |
NeuML/pubmedbert-base-colbert | NeuML | 2025-12-12T18:44:24Z | 76 | 6 | PyLate | [
"PyLate",
"safetensors",
"bert",
"ColBERT",
"sentence-transformers",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"loss:Contrastive",
"en",
"arxiv:2405.19504",
"base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"base_model:finetune:microsoft... | sentence-similarity | 2025-09-02T12:47:08Z | # PubMedBERT ColBERT
This is a [PyLate](https://github.com/lightonai/pylate) model finetuned from [microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext). It maps sentences & paragraphs to sequences of 128-dimensional dense v... | [] |
mradermacher/Gemma-7B-CoPE-Base-GGUF | mradermacher | 2026-04-10T22:30:03Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:haoranli-ml/Gemma-7B-CoPE-Base",
"base_model:quantized:haoranli-ml/Gemma-7B-CoPE-Base",
"endpoints_compatible",
"region:us"
] | null | 2026-04-10T21:56:34Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Tibogoss/Qwen3-8B-test | Tibogoss | 2025-09-29T16:26:03Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"text-generation",
"axolotl",
"base_model:adapter:Qwen/Qwen3-8B",
"lora",
"transformers",
"conversational",
"base_model:Qwen/Qwen3-8B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-29T16:23:44Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
NotSky/htsl-expert-Q8_0-GGUF | NotSky | 2026-03-17T23:41:50Z | 17 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"llama-cpp",
"gguf-my-lora",
"en",
"base_model:NotSky/htsl-expert",
"base_model:quantized:NotSky/htsl-expert",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2026-03-17T23:41:47Z | # NotSky/htsl-expert-Q8_0-GGUF
This LoRA adapter was converted to GGUF format from [`NotSky/htsl-expert`](https://huggingface.co/NotSky/htsl-expert) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/NotSky/htsl-... | [] |
AliceDruide/TripoSR | AliceDruide | 2026-02-23T09:25:32Z | 1 | 0 | null | [
"3d",
"image-to-3d",
"dataset:allenai/objaverse",
"arxiv:2311.04400",
"arxiv:2403.02151",
"license:mit",
"region:us"
] | image-to-3d | 2026-02-23T09:25:32Z | > Try our new model: **SF3D** with several improvements such as faster generation and more game-ready assets.
>
> The model is available [here](https://huggingface.co/stabilityai/stable-fast-3d) and we also have a [demo](https://huggingface.co/spaces/stabilityai/stable-fast-3d).
# TripoSR

T... | [
{
"start": 498,
"end": 501,
"text": "LRM",
"label": "training method",
"score": 0.7581091523170471
},
{
"start": 651,
"end": 654,
"text": "LRM",
"label": "training method",
"score": 0.7172099351882935
}
] |
Azaz666/SmolVLM-256M-Instruct-GPTQ-INT4-v2 | Azaz666 | 2026-04-27T18:38:00Z | 0 | 0 | null | [
"safetensors",
"idefics3",
"quantized",
"gptq",
"vision-language-model",
"vlm",
"base_model:HuggingFaceTB/SmolVLM-256M-Instruct",
"base_model:quantized:HuggingFaceTB/SmolVLM-256M-Instruct",
"license:apache-2.0",
"4-bit",
"region:us"
] | null | 2026-04-27T18:37:56Z | # HuggingFaceTB__SmolVLM-256M-Instruct__gptq_int4_merged
This is a **GPTQ** (4-bit) quantized version of [HuggingFaceTB/SmolVLM-256M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-256M-Instruct).
## Quantization Details
- **Method**: GPTQ
- **Bits**: 4
- **Base model**: HuggingFaceTB/SmolVLM-256M-Instruct
- ... | [
{
"start": 70,
"end": 74,
"text": "GPTQ",
"label": "training method",
"score": 0.7759497165679932
},
{
"start": 245,
"end": 249,
"text": "GPTQ",
"label": "training method",
"score": 0.7633681297302246
},
{
"start": 473,
"end": 477,
"text": "GPTQ",
"lab... |
mradermacher/theprint-10B-MoE-A3B-0126-GGUF | mradermacher | 2026-01-08T02:48:28Z | 30 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"en",
"base_model:theprint/theprint-10B-MoE-A3B-0126",
"base_model:quantized:theprint/theprint-10B-MoE-A3B-0126",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-08T02:05:50Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Retreatcost/Impish-LongPen-12B | Retreatcost | 2025-11-03T22:01:53Z | 6 | 7 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:Sicarius-Prototyping/Impish_Longtail_12B",
"base_model:merge:Sicarius-Prototyping/Impish_Longtail_12B",
"base_model:SuperbEmphasis/MN-12b-RP-Ink-RP-Longform",
"base_model:merge:SuperbEmphasis/MN-12b-RP-In... | text-generation | 2025-10-26T19:15:54Z | # Impish-LongPen-12B
A **karcher** merge of [Sicarius-Prototyping/Impish_Longtail_12B](https://huggingface.co/Sicarius-Prototyping/Impish_Longtail_12B) and [SuperbEmphasis/MN-12b-RP-Ink-RP-Longform](https://huggingface.co/SuperbEmphasis/MN-12b-RP-Ink-RP-Longform) used in [KansenSakura-Erosion-RP-12b](https://huggingfa... | [
{
"start": 830,
"end": 842,
"text": "Karcher Mean",
"label": "training method",
"score": 0.7529563903808594
}
] |
ScatterRaven/klue-mrc_koelectra_qa_model | ScatterRaven | 2025-08-07T06:16:22Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"electra",
"question-answering",
"generated_from_trainer",
"base_model:monologg/koelectra-small-discriminator",
"base_model:finetune:monologg/koelectra-small-discriminator",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-08-07T06:16:17Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-mrc_koelectra_qa_model
This model is a fine-tuned version of [monologg/koelectra-small-discriminator](https://huggingface.co... | [] |
mkurman/lfm2-350M-med | mkurman | 2025-09-12T08:57:38Z | 41 | 6 | transformers | [
"transformers",
"safetensors",
"gguf",
"lfm2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:LiquidAI/LFM2-350M",
"base_model:quantized:LiquidAI/LFM2-350M",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-11T17:46:00Z | # lfm2-350M-med
**Small medical fine-tune on top of LiquidAI’s LFM2-350M.**
This checkpoint specializes the 350M LFM2 base for medical Q&A and tool-augmented search, using a lightweight recipe designed for laptops/edge boxes.
> ⚠️ **Medical safety**: This model is **not** a clinician. It may hallucinate and should... | [] |
beaupi/Nanbeige4.1-3B-oQ4 | beaupi | 2026-04-02T05:50:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llm",
"nanbeige",
"conversational",
"en",
"zh",
"arxiv:2602.13367",
"base_model:Nanbeige/Nanbeige4-3B-Base",
"base_model:quantized:Nanbeige/Nanbeige4-3B-Base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compat... | text-generation | 2026-04-02T05:49:31Z | <div align="center">
<img src="figures/nbg.png" width="220" alt="Nanbeige Logo">
</div>
# Introduction
Nanbeige4.1-3B is built upon Nanbeige4-3B-Base and represents an enhanced iteration of our previous reasoning model, Nanbeige4-3B-Thinking-2511, achieved through further post-training optimization with supervised... | [] |
rsoohyun213/Qwen2.5-VL-7B-Instruct-v6_s2_exp_s4_exp2_s5_exp1_only_blocks_ver2-full_SFT_old | rsoohyun213 | 2026-03-02T11:24:08Z | 85 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-03-02T11:10:11Z | # Model Card for Qwen2.5-VL-7B-Instruct@v6+s2_exp+s4_exp2+s5_exp1_only_blocks_ver2@full_SFT_old
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from t... | [] |
While4ent/qwen3-4b-dasd-gsm8k | While4ent | 2026-03-01T22:50:59Z | 0 | 0 | null | [
"safetensors",
"lora",
"qwen3",
"distillation",
"math",
"reasoning",
"gsm8k",
"en",
"fr",
"base_model:unsloth/Qwen3-4B-Instruct-2507-unsloth-bnb-4bit",
"base_model:adapter:unsloth/Qwen3-4B-Instruct-2507-unsloth-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null | 2026-03-01T22:47:46Z | # Qwen3-4B — DASD Fine-tune (GSM8K)
LoRA adapter trained with the **DASD** (Distribution-Aligned Sequence Distillation) method on the GSM8K dataset.
## Method
The model was distilled from GPT-oss-120B (teacher) into Qwen3-4B (student) using:
- **Divergence-Aware Sampling (DAS)**: filters the examples... | [] |
worstje/GLM-4.7-mlx-4Bit | worstje | 2025-12-28T01:02:51Z | 19 | 0 | transformers | [
"transformers",
"safetensors",
"glm4_moe",
"text-generation",
"unsloth",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"zh",
"base_model:unsloth/GLM-4.7",
"base_model:quantized:unsloth/GLM-4.7",
"license:mit",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2025-12-28T00:39:03Z | # worstje/GLM-4.7-mlx-4Bit
The Model [worstje/GLM-4.7-mlx-4Bit](https://huggingface.co/worstje/GLM-4.7-mlx-4Bit) was converted to MLX format from [unsloth/GLM-4.7](https://huggingface.co/unsloth/GLM-4.7) using mlx-lm version **0.28.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import loa... | [] |
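(The `mlx_lm` snippet above is truncated after the import; the usual mlx-lm load/generate pattern for a converted repo, with an illustrative prompt and the stock chat-template handling rather than anything taken from this card, looks roughly like:)

```python
from mlx_lm import load, generate

model, tokenizer = load("worstje/GLM-4.7-mlx-4Bit")

prompt = "Summarize what MLX is in one sentence."  # illustrative prompt
if tokenizer.chat_template is not None:
    # Wrap the prompt in chat format when the model ships a chat template.
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```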
Awuzaso/distilbert-base-uncased-finetuned-emotion | Awuzaso | 2026-04-12T22:30:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-04-12T20:11:06Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/... | [
{
"start": 1096,
"end": 1098,
"text": "F1",
"label": "training method",
"score": 0.7064414620399475
}
] |
oza75/sani-gec-v0.0.4-cold-start | oza75 | 2026-03-01T07:08:37Z | 132 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"sani_gec",
"grammatical-error-correction",
"generated_from_trainer",
"base_model:oza75/sani-gec-v0.0.4-base",
"base_model:finetune:oza75/sani-gec-v0.0.4-base",
"endpoints_compatible",
"region:us"
] | null | 2026-03-01T06:44:10Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sani-gec-v0.0.4-cold-start
This model is a fine-tuned version of [oza75/sani-gec-v0.0.4-base](https://huggingface.co/oza75/sani-g... | [] |
ttthug/Qwen2.5-7B-Instruct-Jokester-English | ttthug | 2026-01-20T22:18:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"grpo",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
... | text-generation | 2026-01-20T15:54:09Z | # Model Card for Qwen2.5-7B-Instruct-Jokester-English
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you ... | [] |
3amthoughts/zenfinance-3b | 3amthoughts | 2026-03-27T21:09:10Z | 86 | 2 | peft | [
"peft",
"gguf",
"llama",
"finance",
"agent",
"tool-calling",
"unsloth",
"llama-3",
"reasoning",
"text-generation",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
"base_model:adapter:unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"regi... | text-generation | 2026-03-16T13:03:35Z | # 🧘♂️ ZenFinance-3B-Agent (GGUF)
ZenFinance-3B is a highly specialized, agentic large language model designed for personal finance applications. Fine-tuned from Llama-3.2-3B-Instruct, this model acts as both a **financial advisor** and a **UI agent**.
It is trained to "think" before it speaks using `<thought>` tag... | [] |
richardmilly96/AP-BWE | richardmilly96 | 2026-03-02T06:15:20Z | 0 | 0 | null | [
"audio",
"audio-to-audio",
"en",
"dataset:CSTR-Edinburgh/vctk",
"license:mit",
"region:us"
] | audio-to-audio | 2026-03-02T06:15:20Z | # Towards High-Quality and Efficient Speech Bandwidth Extension with Parallel Amplitude and Phase Prediction
### Ye-Xin Lu, Yang Ai, Hui-Peng Du, Zhen-Hua Ling
**Abstract:**
Speech bandwidth extension (BWE) refers to widening the frequency bandwidth range of speech signals, enhancing the speech quality towards bright... | [] |
felixwangg/Qwen2.5-Coder-7B-sft-minus-alpha-2-line-diff-ctx0-v2 | felixwangg | 2026-04-14T01:04:13Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"text-generation",
"axolotl",
"base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct",
"lora",
"transformers",
"conversational",
"dataset:felixwangg/prime_vul_minus_splitted_line_diff_mask_skip_indent_ctx0_chat_v2",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"lice... | text-generation | 2026-04-14T01:03:47Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
mradermacher/climateguard-Luth-LFM2-350M-claim-extraction-GGUF | mradermacher | 2025-10-18T01:04:33Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"trl",
"sft",
"en",
"base_model:gmguarino/climateguard-Luth-LFM2-350M-claim-extraction",
"base_model:quantized:gmguarino/climateguard-Luth-LFM2-350M-claim-extraction",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-18T00:58:47Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
arithmetic-circuit-overloading/Llama-3.3-70B-Instruct-v2-3d-2M-200K-0.1-reverse-padzero-99-512D-3L-4H-2048I | arithmetic-circuit-overloading | 2026-04-05T05:56:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-04T08:53:59Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.3-70B-Instruct-v2-3d-2M-200K-0.1-reverse-padzero-99-512D-3L-4H-2048I
This model is a fine-tuned version of [meta-llama/Ll... | [] |
Neelectric/Llama-3.1-8B-Instruct_SFT_mathfisher_v00.03 | Neelectric | 2026-03-28T08:20:49Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"sft",
"open-r1",
"trl",
"conversational",
"dataset:Neelectric/OpenR1-Math-220k_all_Llama3_4096toks",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"t... | text-generation | 2026-03-27T21:16:54Z | # Model Card for Llama-3.1-8B-Instruct_SFT_mathfisher_v00.03
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the [Neelectric/OpenR1-Math-220k_all_Llama3_4096toks](https://huggingface.co/datasets/Neelectric/OpenR1-Math-220k_all_Llama3_... | [] |
alvarobartt/grok-2-tokenizer | alvarobartt | 2025-08-27T14:46:41Z | 0 | 2 | transformers | [
"transformers",
"tokenizers",
"sglang",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-08-27T10:18:33Z | # Grok-2 Tokenizer
A 🤗-compatible version of the **Grok-2 tokenizer** (adapted from [xai-org/grok-2](https://huggingface.co/xai-org/grok-2)).
This means it can be used with Hugging Face libraries including [Transformers](https://github.com/huggingface/transformers),
[Tokenizers](https://github.com/huggingface/tokeni... | [] |
jakobhuss/pii-extractor-Qwen3-0.6B-GGUF | jakobhuss | 2026-01-08T11:49:28Z | 11 | 0 | null | [
"gguf",
"qwen3",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-08T11:49:07Z | # pii-extractor-Qwen3-0.6B-GGUF : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `./llama.cpp/llama-cli -hf jakobhuss/pii-extractor-Qwen3-0.6B-GGUF --jinja`
- For multimodal models: `./llama.cpp/llama-mtmd-c... | [] |
utkucoban/NanoMaestro | utkucoban | 2025-11-24T15:17:28Z | 0 | 2 | pytorch | [
"pytorch",
"music-generation",
"transformer",
"piano",
"midi",
"audio",
"music",
"autoregressive",
"sequence-generation",
"audio-to-audio",
"en",
"dataset:custom",
"license:mit",
"region:us"
] | audio-to-audio | 2025-11-23T20:38:54Z | # NanoMaestro - Piano Music Generation AI
<div align="center">

*A Transformer-based neural network for generating expressive piano music*
</div>
NanoMaestro is trained to understand and create musical sequences with proper timing, velocity, and note relationships, producing natural-so... | [
{
"start": 699,
"end": 716,
"text": "training pipeline",
"label": "training method",
"score": 0.7611285448074341
}
] |
NikolayKozloff/Hermes-4-14B-Q5_K_M-GGUF | NikolayKozloff | 2025-09-02T20:22:44Z | 23 | 1 | transformers | [
"transformers",
"gguf",
"Qwen-3-14B",
"instruct",
"finetune",
"reasoning",
"hybrid-mode",
"chatml",
"function calling",
"tool use",
"json mode",
"structured outputs",
"atropos",
"dataforge",
"long context",
"roleplaying",
"chat",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model... | null | 2025-09-02T20:22:04Z | # NikolayKozloff/Hermes-4-14B-Q5_K_M-GGUF
This model was converted to GGUF format from [`NousResearch/Hermes-4-14B`](https://huggingface.co/NousResearch/Hermes-4-14B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://h... | [] |
arhhhhh404/LainIA | arhhhhh404 | 2026-01-21T16:51:24Z | 0 | 0 | null | [
"safetensors",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:mit",
"region:us"
] | text-generation | 2026-01-16T00:44:09Z | # LainIA
This model is a fine-tuned version of **[Qwen2.5-1.5B-Instruct]** for **[text-generation]**.
It is designed to **[answer in lain style]**.
Check out [my GitHub](https://github.com/arhhhhh404/Lain-IA) to use it the way I do (recommended).
## Problem
- This model is built for dialogue, so it may contain some... | [] |
Applied-Innovation-Center/Karnak | Applied-Innovation-Center | 2026-04-24T12:31:06Z | 5,029 | 30 | transformers | [
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"pytorch",
"vllm",
"causal-lm",
"depth-extension",
"arabic",
"english",
"karnak",
"qwen",
"conversational",
"ar",
"en",
"base_model:Qwen/Qwen3-30B-A3B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-30B-A3B-Instruct-2507"... | text-generation | 2026-02-06T16:42:05Z | # Karnak: Enhanced Arabic–English Large Language Model
Karnak is a powerful AI model that works in both Arabic and English, with extra improvements that make it especially strong in Arabic and more natural in the way it writes and responds. It was built by taking an existing model and improving it through more training... | [
{
"start": 1614,
"end": 1634,
"text": "Multi-Stage Training",
"label": "training method",
"score": 0.8675318360328674
},
{
"start": 1730,
"end": 1733,
"text": "SFT",
"label": "training method",
"score": 0.7132495045661926
}
] |
mradermacher/daVinci-origin-3B-GGUF | mradermacher | 2026-01-29T17:43:16Z | 14 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:GAIR/daVinci-origin-3B",
"base_model:quantized:GAIR/daVinci-origin-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-29T14:33:22Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Gidigi/gidigi_1d75a01c_0000 | Gidigi | 2026-02-22T01:46:27Z | 0 | 0 | null | [
"pytorch",
"safetensors",
"region:us"
] | null | 2026-02-22T01:45:43Z | <p align="center">
<img width="1000px" alt="DeepSeek Coder" src="https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/pictures/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://coder.deepseek.com/">[🤖 Chat with DeepSeek Coder]</a> | <a href="https... | [] |
l2repository/functiongemma-gguf-l2r-4k3 | l2repository | 2026-01-18T21:30:18Z | 10 | 0 | null | [
"gguf",
"gemma3_text",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-18T21:30:07Z | # functiongemma-gguf-l2r-4k3 : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `./llama.cpp/llama-cli -hf l2repository/functiongemma-gguf-l2r-4k3 --jinja`
- For multimodal models: `./llama.cpp/llama-mtmd-cli ... | [
{
"start": 98,
"end": 105,
"text": "Unsloth",
"label": "training method",
"score": 0.84154212474823
},
{
"start": 136,
"end": 143,
"text": "unsloth",
"label": "training method",
"score": 0.8581556677818298
},
{
"start": 546,
"end": 553,
"text": "Unsloth",
... |
YiyangHuang/smovla-robocasa_kitchen_knife_apple_delta | YiyangHuang | 2025-12-23T01:19:16Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:YiyangHuang/robocasa_kitchen_knife_apple_delta",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-23T01:18:27Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
worldmate/004kuma | worldmate | 2026-02-14T11:34:18Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v5",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-14T11:34:07Z | 004kuma
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **structured output accurac... | [
{
"start": 109,
"end": 114,
"text": "QLoRA",
"label": "training method",
"score": 0.843433141708374
},
{
"start": 550,
"end": 555,
"text": "QLoRA",
"label": "training method",
"score": 0.7642571926116943
}
] |
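(Since this repo, like several adapter records above, ships LoRA weights only and says the base model must be loaded separately, one common follow-up is merging the adapter into the base for standalone deployment; a hedged sketch with `peft`, using ids from the record and a hypothetical output path, could be:)

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Base and adapter ids come from the record's tags and modelId columns.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
model = PeftModel.from_pretrained(base, "worldmate/004kuma")

# merge_and_unload folds the low-rank updates into the base weights,
# returning a plain transformers model with no runtime peft dependency.
merged = model.merge_and_unload()
merged.save_pretrained("004kuma-merged")  # hypothetical output path
```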
kavinh07/test | kavinh07 | 2026-03-30T11:13:01Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"unsloth",
"sft",
"base_model:unsloth/Qwen3.5-0.8B",
"base_model:finetune:unsloth/Qwen3.5-0.8B",
"endpoints_compatible",
"region:us"
] | null | 2026-03-29T07:40:46Z | # Model Card for test
This model is a fine-tuned version of [unsloth/Qwen3.5-0.8B](https://huggingface.co/unsloth/Qwen3.5-0.8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to... | [] |
Petri99/classifier-modernv1 | Petri99 | 2025-10-17T23:54:34Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-15T15:06:33Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classifier-modernv1
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/Modern... | [] |
FrankCCCCC/ddpm-ema-92k_cfm-corr-600-ss0.0-ep500-ema-92k-run2 | FrankCCCCC | 2025-10-03T06:38:18Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"diffusers:DDPMCorrectorPipeline",
"region:us"
] | null | 2025-10-03T06:19:17Z | # cfm_corr_600_ss0.0_ep500_ema-92k-run2
This repository contains model artifacts and configuration files from the CFM_CORR_EMA_50k experiment.
## Contents
This folder contains:
- Model checkpoints and weights
- Configuration files (JSON)
- Scheduler and UNet components
- Training results and metadata
- Sample direct... | [] |
kmseong/llama3.2_3b_SSFT_epoch3_lr2e-5 | kmseong | 2026-04-04T10:55:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"safety",
"warp",
"circuit-breakers",
"alignment",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"text-generation-inference",
"endpoints_compatible",
"reg... | text-generation | 2026-04-04T10:54:19Z | # Safety-WaRP Llama 3.2 3B - Phase 0
**Phase 0: Base Safety Training** - A model that has completed base safety training on Circuit Breakers data.
## Model Details
- **Base Model**: meta-llama/Llama-3.2-3B-Instruct
- **Method**: Safety-WaRP (Weight space Rotation Process)
- **Phase**: Phase 0 (Base Safety Training)
- **Safety Dataset**: Circuit Break... | [
{
"start": 49,
"end": 69,
"text": "Base Safety Training",
"label": "training method",
"score": 0.8205344676971436
},
{
"start": 197,
"end": 208,
"text": "Safety-WaRP",
"label": "training method",
"score": 0.7178491950035095
},
{
"start": 254,
"end": 261,
"... |
Daksh1/qwen3-4b-dpo-ckpt-2 | Daksh1 | 2025-11-16T18:54:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dpo",
"fine-tuned",
"checkpoint-2",
"vllm",
"conversational",
"base_model:Qwen/Qwen3-4B-Base",
"base_model:finetune:Qwen/Qwen3-4B-Base",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-16T18:53:36Z | # Qwen3-4B DPO Fine-tuned - Checkpoint 2/4
This model is checkpoint 2 of 4 from DPO (Direct Preference Optimization) fine-tuning of Qwen/Qwen3-4B-Base.
**Format**: Safetensors only (~8 GB) - optimized for vLLM inference.
## Training Details
- **Base Model**: Qwen/Qwen3-4B-Base
- **Training Method**: DPO
- **Checkpo... | [
{
"start": 81,
"end": 84,
"text": "DPO",
"label": "training method",
"score": 0.7452823519706726
},
{
"start": 305,
"end": 308,
"text": "DPO",
"label": "training method",
"score": 0.7725099325180054
}
] |
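(The record above notes the checkpoint is stored as safetensors "optimized for vLLM inference"; a minimal vLLM sketch, with the repo id taken from the record and illustrative sampling values, might be:)

```python
from vllm import LLM, SamplingParams

# Repo id from the record's modelId column; sampling settings are illustrative.
llm = LLM(model="Daksh1/qwen3-4b-dpo-ckpt-2")
params = SamplingParams(temperature=0.7, max_tokens=64)

outputs = llm.generate(["What is Direct Preference Optimization?"], params)
print(outputs[0].outputs[0].text)
```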
rieffs/raw-ocr-to-json | rieffs | 2026-03-10T14:10:52Z | 118 | 0 | null | [
"safetensors",
"gguf",
"qwen2",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-10T13:57:00Z | # raw-ocr-to-json : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf rieffs/raw-ocr-to-json --jinja`
- For multimodal models: `llama-mtmd-cli -hf rieffs/raw-ocr-to-json --jinja`
## Available Mo... | [] |
plzsay/pick_white | plzsay | 2026-01-14T12:08:16Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:plzsay/pick_white",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-14T12:07:56Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
mradermacher/Gemma3-ToxiCity_Uncensored-1B-i1-GGUF | mradermacher | 2025-12-05T09:46:50Z | 24 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Novaciano/Gemma3-ToxiCity_Uncensored-1B",
"base_model:quantized:Novaciano/Gemma3-ToxiCity_Uncensored-1B",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-11-26T07:42:10Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Sswa12/thesis-ncbf-checkpoints | Sswa12 | 2026-04-15T16:54:55Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2026-04-15T16:51:06Z | # NCBF Checkpoints — Thesis Safety Policy
Neural CBF (LatentCBC) and DaCBaF policy checkpoints for the safety-critical VLA navigation thesis.
## Files
| File | Description |
|------|-------------|
| `ncbf/ncbf_best_v1_nwm_only_acc884.pt` | Best LatentCBC checkpoint: val_acc=0.884, tpr=0.935, tnr=0.630. Trained on 10... | [] |
LLM-course/chess-submission-v21-MDaytek | LLM-course | 2026-01-26T06:05:00Z | 21 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"chess_transformer",
"text-generation",
"chess",
"llm-course",
"chess-challenge",
"custom_code",
"license:mit",
"region:us"
] | text-generation | 2026-01-19T11:35:46Z | # chess-submission-v21-MDaytek
Chess model submitted to the LLM Course Chess Challenge.
## Submission Info
- **Submitted by**: [MDaytek](https://huggingface.co/MDaytek)
- **Parameters**: 1,143,744
- **Organization**: LLM-course
## Usage
```python
from transformers import AutoModelForCausalLM, AutoToke... | [] |
viethang/Voxtral-4B-TTS-2603 | viethang | 2026-04-30T08:56:44Z | 0 | 0 | vllm | [
"vllm",
"mistral-common",
"text-to-speech",
"en",
"fr",
"es",
"pt",
"it",
"nl",
"de",
"ar",
"hi",
"arxiv:2603.25551",
"base_model:mistralai/Ministral-3-3B-Base-2512",
"base_model:finetune:mistralai/Ministral-3-3B-Base-2512",
"license:cc-by-nc-4.0",
"region:us"
] | text-to-speech | 2026-04-30T08:52:30Z | # Voxtral 4B TTS 2603
Voxtral TTS is a frontier, open-weights text-to-speech model that’s fast, instantly adaptable, and produces lifelike speech for voice agents. The model is released with BF16 weights and a set of reference voices. These voices are licensed under CC BY-NC 4.0, which is the license that the model inh... | [] |
CH-UUUU/qwen3-14b-dpo-v1 | CH-UUUU | 2026-01-15T03:36:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"dpo",
"trl",
"arxiv:2305.18290",
"base_model:Qwen/Qwen3-14B",
"base_model:finetune:Qwen/Qwen3-14B",
"endpoints_compatible",
"region:us"
] | null | 2026-01-15T00:07:34Z | # Model Card for qwen3-14b-dpo-v1
This model is a fine-tuned version of [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to... | [
{
"start": 157,
"end": 160,
"text": "TRL",
"label": "training method",
"score": 0.8006734848022461
},
{
"start": 688,
"end": 691,
"text": "DPO",
"label": "training method",
"score": 0.8391752243041992
},
{
"start": 978,
"end": 981,
"text": "DPO",
"labe... |
NexVeridian/MiniMax-M2-REAP-139B-A10B-4bit | NexVeridian | 2026-01-12T20:32:18Z | 44 | 0 | mlx | [
"mlx",
"safetensors",
"minimax_m2",
"minimax",
"MOE",
"pruning",
"compression",
"text-generation",
"conversational",
"custom_code",
"en",
"base_model:cerebras/MiniMax-M2-REAP-139B-A10B",
"base_model:quantized:cerebras/MiniMax-M2-REAP-139B-A10B",
"license:other",
"4-bit",
"region:us"
] | text-generation | 2026-01-12T19:31:29Z | # NexVeridian/MiniMax-M2-REAP-139B-A10B-4bit
This model [NexVeridian/MiniMax-M2-REAP-139B-A10B-4bit](https://huggingface.co/NexVeridian/MiniMax-M2-REAP-139B-A10B-4bit) was
converted to MLX format from [cerebras/MiniMax-M2-REAP-139B-A10B](https://huggingface.co/cerebras/MiniMax-M2-REAP-139B-A10B)
using mlx-lm version *... | [] |
Muapi/fantasy-warriors-japan | Muapi | 2025-08-15T21:05:20Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-15T21:05:01Z | # Fantasy Warriors - Japan

**Base model**: Flux.1 D
**Trained words**: hkwarrior
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Con... | [] |
abr-ai/asr-19m-v2-en | abr-ai | 2026-04-07T18:14:09Z | 316 | 10 | transformers | [
"transformers",
"asr-19m-v2",
"automatic-speech-recognition",
"custom_code",
"en",
"license:other",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-11-13T18:13:55Z | # ABR's asr-19m-v2-en SSM
The asr-19m-v2-en model is a [State Space Model](https://huggingface.co/blog/lbourdois/get-on-the-ssm-train) (SSM) with attention that performs automatic speech recognition (ASR), trained and released by [Applied Brain Research](http://www.appliedbrainresearch.com) (ABR). This model contains ... | [] |
DemoTest0122/Qwen3.5-27B_shd_003 | DemoTest0122 | 2026-03-10T02:46:08Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"conversational",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-03-10T02:46:07Z | # Qwen3.5-27B
<img width="400px" src="https://qianwen-res.oss-accelerate.aliyuncs.com/logo_qwen3.5.png">
[](https://chat.qwen.ai)
> [!Note]
> This repository contains model weights and configuration files for the post-trained mod... | [] |
wanziteng/sd-webui-inpaint-anything-1.16.0 | wanziteng | 2025-10-22T15:18:45Z | 0 | 0 | null | [
"arxiv:2304.02643",
"arxiv:2306.01567",
"arxiv:2306.12156",
"arxiv:2306.14289",
"region:us"
] | null | 2025-10-22T15:17:58Z | # Inpaint Anything for Stable Diffusion Web UI
Inpaint Anything extension performs stable diffusion inpainting on a browser UI using any mask selected from the output of [Segment Anything](https://github.com/facebookresearch/segment-anything).
Using Segment Anything enables users to specify masks by simply pointing ... | [] |
mortume/qwen3-codeforces-lora | mortume | 2025-12-14T03:16:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"hf_jobs",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"endpoints_compatible",
"region:us"
] | null | 2025-12-14T03:14:56Z | # Model Card for qwen3-codeforces-lora
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could onl... | [] |
n-mitsuyasu/agent-trajectory-lora-rev.02 | n-mitsuyasu | 2026-02-19T12:00:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v5",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache... | text-generation | 2026-02-19T11:59:50Z | # qwen3-4b-agent-trajectory-lora-rev.02
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **m... | [
{
"start": 70,
"end": 74,
"text": "LoRA",
"label": "training method",
"score": 0.8725894093513489
},
{
"start": 141,
"end": 145,
"text": "LoRA",
"label": "training method",
"score": 0.8891698718070984
},
{
"start": 187,
"end": 191,
"text": "LoRA",
"lab... |
ctaguchi/w2v-bert-2.0-gui-ufe | ctaguchi | 2026-02-25T16:58:06Z | 45 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-02-25T15:06:51Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-gui-ufe
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) ... | [] |
vinod2005/social-engineer-arena-suggest | vinod2005 | 2026-04-26T07:59:25Z | 277 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"hf_jobs",
"conversational",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-25T16:47:50Z | # Model Card for social-engineer-arena-suggest
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had... | [] |
onnxmodelzoo/bvlcalexnet-3 | onnxmodelzoo | 2025-09-29T18:20:08Z | 0 | 0 | null | [
"onnx",
"validated",
"vision",
"classification",
"alexnet",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-29T18:19:54Z | <!--- SPDX-License-Identifier: BSD-3-Clause -->
# AlexNet
|Model |Download |Download (with sample test data)| ONNX version |Opset version|Top-1 accuracy (%)|Top-5 accuracy (%)|
| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
|AlexNet| [23... | [] |
chimdee5588/whisper-small-hi | chimdee5588 | 2026-03-03T22:15:22Z | 26 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:atc",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-03-03T21:24:04Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small En - Ganbold
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-sma... | [] |
HayatoHongo/llava-v1.5-7b-finetuned | HayatoHongo | 2025-09-17T10:11:10Z | 0 | 0 | null | [
"pytorch",
"llava",
"image-text-to-text",
"region:us"
] | image-text-to-text | 2025-09-17T06:29:26Z | <br>
<br>
# LLaVA Model Card
## Model details
**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LLaVA-v1.5-7B was trained in Septe... | [] |
jinx2321/byt5-all-araea-1e4-ko-2 | jinx2321 | 2025-09-12T02:18:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/byt5-small",
"base_model:finetune:google/byt5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T22:12:42Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-all-araea-1e4-ko-2
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on th... | [] |
Kevinsanchez11/beto-condition-action-extractor-es | Kevinsanchez11 | 2025-10-28T21:05:49Z | 0 | 0 | null | [
"safetensors",
"bert",
"text-classification",
"requirements-engineering",
"condition-action",
"software-requirements",
"es",
"dataset:custom",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | 2025-10-28T20:46:48Z | # beto-condition-action-extractor-es
## Description
Fine-tuned BERT model for classifying software-requirement sentences that contain condition-action structures.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load the model
model_name = "Kevinsanc... | [] |
Creat3/Huihui-Qwen3.5-27B-Claude-4.6-Opus-abliterated | Creat3 | 2026-04-22T00:49:09Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"abliterated",
"uncensored",
"Claude",
"reasoning",
"chain-of-thought",
"Dense",
"conversational",
"base_model:Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled",
"base_model:finetune:Jackrong/Qwen3.5-27B-Claude-4.6-Opus-... | image-text-to-text | 2026-04-22T00:49:09Z | # huihui-ai/Huihui-Qwen3.5-27B-Claude-4.6-Opus-abliterated
This is an uncensored version of [Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled](https://huggingface.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled) created with abliteration (see [remove-refusals-with-transformers](https://github.com/... | [] |
wikilangs/kl | wikilangs | 2026-01-10T07:49:26Z | 0 | 0 | wikilangs | [
"wikilangs",
"nlp",
"tokenizer",
"embeddings",
"n-gram",
"markov",
"wikipedia",
"feature-extraction",
"sentence-similarity",
"tokenization",
"n-grams",
"markov-chain",
"text-mining",
"fasttext",
"babelvec",
"vocabulous",
"vocabulary",
"monolingual",
"family-eskimoaleut",
"text-... | text-generation | 2026-01-10T07:49:12Z | # Kalaallisut - Wikilangs Models
## Comprehensive Research Report & Full Ablation Study
This repository contains NLP models trained and evaluated by Wikilangs, specifically on **Kalaallisut** Wikipedia data.
We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and word embeddings.
## 📋 Reposit... | [
{
"start": 1302,
"end": 1323,
"text": "Tokenizer Compression",
"label": "training method",
"score": 0.709010124206543
}
] |
Salesforce/SFR-Embedding-Mistral | Salesforce | 2025-02-04T21:01:42Z | 11,716 | 298 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mistral",
"feature-extraction",
"mteb",
"transformers",
"en",
"arxiv:2210.07316",
"arxiv:2310.06825",
"arxiv:2401.00368",
"arxiv:2104.08663",
"license:cc-by-nc-4.0",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"deploy:azu... | feature-extraction | 2024-01-24T22:29:26Z | <h1 align="center">Salesforce/SFR-Embedding-Mistral</h1>
**SFR-Embedding by Salesforce Research.**
The model is trained on top of [E5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) and [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
This project is for research purp... | [] |
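A minimal retrieval sketch with sentence-transformers; the instruction-prefix convention follows the E5-style format this model family uses, and the exact task wording and documents are illustrative assumptions.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Salesforce/SFR-Embedding-Mistral")

# E5-style instruction prefix on the query only; the task description is illustrative.
query = (
    "Instruct: Given a web search query, retrieve relevant passages that answer the query\n"
    "Query: how do transformers work?"
)
docs = [
    "Transformers use self-attention to relate tokens across a sequence.",
    "Bananas are rich in potassium.",
]
scores = util.cos_sim(model.encode([query]), model.encode(docs))
print(scores)  # higher score for the relevant passage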
Agreemind/contractnli-bert-nda-standard | Agreemind | 2026-03-22T15:10:30Z | 22 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"legal",
"nli",
"contracts",
"nda",
"contract-nli",
"en",
"dataset:stanfordnlp/contract-nli",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:mit",
"text-embeddings-inf... | text-classification | 2026-03-22T15:10:27Z | # contractnli-bert-nda-standard
BERT-base fine-tuned on ContractNLI with a standard cross-entropy loss; reproduces the paper's setup.
## Task
**Document-level NLI for Non-Disclosure Agreements (NDAs)**
Given an NDA contract and a hypothesis about a standard provision, classify as:
- **Entailment**: The provision is present in the contract
- *... | [] |
zai-org/cogvlm2-video-llama3-chat | zai-org | 2024-07-24T09:53:20Z | 64 | 55 | transformers | [
"transformers",
"safetensors",
"text-generation",
"chat",
"cogvlm2",
"cogvlm--video",
"conversational",
"custom_code",
"en",
"license:other",
"region:us"
] | text-generation | 2024-07-03T02:21:55Z | # CogVLM2-Video-Llama3-Chat
[中文版本README](README_zh.md)
## Introduction
CogVLM2-Video achieves state-of-the-art performance on multiple video question answering tasks. It can achieve video
understanding within one minute. We provide two example videos to demonstrate CogVLM2-Video's video understanding and
video tempo... | [] |
mradermacher/mGPT-20B-GGUF | mradermacher | 2025-08-16T03:11:40Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gpt_oss",
"en",
"base_model:shadowlilac/mGPT-20B",
"base_model:quantized:shadowlilac/mGPT-20B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-16T01:31:36Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
dhruvnayee/help_texted_mined_r3s_0810 | dhruvnayee | 2025-10-08T14:56:30Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:16688",
"loss:TripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:BAAI/bge-large-en-v1.5",
"base_model:finetune:BAAI/bge-large-en-v1.5",... | sentence-similarity | 2025-10-08T14:18:28Z | # SentenceTransformer based on BAAI/bge-large-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual simil... | [] |
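A minimal sketch of using this finetuned embedder for similarity scoring; the input sentences are illustrative, and the 1024-dimensional output follows the card's stated dimensionality.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dhruvnayee/help_texted_mined_r3s_0810")
emb = model.encode(
    ["How do I reset my password?", "Password reset help"],
    normalize_embeddings=True,
)
print(emb.shape)      # (2, 1024) per the card's stated dimensionality
print(emb[0] @ emb[1])  # dot product of normalized vectors = cosine similarity
```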
facebook/omniASR-LLM-7B-ZS | facebook | 2025-11-28T07:41:34Z | 0 | 9 | null | [
"automatic-speech-recognition",
"dataset:facebook/omnilingual-asr-corpus",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2025-11-27T22:04:20Z | # Omnilingual ASR: Open-Source Multilingual Speech Recognition for 1600+ Languages
<div align="center" style="lline-height: 1.2; font-size:16px; margin-bottom: 30px;">
<a href="https://huggingface.co/facebook" target="_blank" style="margin: 2px;">
🤗 Hugging Face
</a> |
<a href="https://github.com/facebook... | [] |
theprint/TiTan-Gemma3-1B | theprint | 2025-08-12T17:03:45Z | 0 | 0 | peft | [
"peft",
"pytorch",
"gemma3_text",
"text-generation",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"fine-tuned",
"conversational",
"en",
"dataset:theprint/titles-n-tags-alpaca",
"base_model:google/gemma-3-1b-it",
"base_model:adapter:google/gemma-3-1b-it",
"license:apache-2.0",
"te... | text-generation | 2025-08-12T16:56:19Z | # TiTan-Gemma3-1B
A Gemma 3 1B model fine-tuned to generate conversation titles and tags.
## Model Details
This model is a fine-tuned version of google/gemma-3-1b-it using the Unsloth framework with LoRA (Low-Rank Adaptation) for efficient training.
- **Developed by:** theprint
- **Model type:** Caus... | [
{
"start": 217,
"end": 221,
"text": "LoRA",
"label": "training method",
"score": 0.7914171814918518
},
{
"start": 471,
"end": 475,
"text": "LoRA",
"label": "training method",
"score": 0.7547142505645752
}
] |
JanaD7/qwen_finetune | JanaD7 | 2026-03-27T17:20:42Z | 0 | 0 | null | [
"gguf",
"qwen3",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-27T17:18:31Z | # qwen_finetune : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf JanaD7/qwen_finetune --jinja`
- For multimodal models: `llama-mtmd-cli -hf JanaD7/qwen_finetune --jinja`
## Available Model fi... | [] |
instruction-pretrain/InstructLM-500M | instruction-pretrain | 2026-03-02T08:22:04Z | 2,652 | 37 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:tiiuae/falcon-refinedweb",
"dataset:instruction-pretrain/ft-instruction-synthesizer-collection",
"dataset:instruction-pretrain/general-instruction-augmented-corpora",
"arxiv:2406.14491",
"arxiv:2601.16206",
... | text-generation | 2024-06-18T13:59:20Z | # Instruction Pre-Training: Language Models are Supervised Multitask Learners (EMNLP 2024)
This repo contains the **general models pre-trained from scratch** (on 100B tokens) in our paper [Instruction Pre-Training: Language Models are Supervised Multitask Learners](https://huggingface.co/papers/2406.14491).
We explore... | [
{
"start": 2,
"end": 26,
"text": "Instruction Pre-Training",
"label": "training method",
"score": 0.9249469041824341
},
{
"start": 189,
"end": 213,
"text": "Instruction Pre-Training",
"label": "training method",
"score": 0.8750975131988525
},
{
"start": 371,
"... |
ASethi04/llama-3.1-8b-legalbench-third | ASethi04 | 2025-09-03T14:25:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-03T14:24:59Z | # Model Card for llama-3.1-8b-legalbench-third-privacy_policy_qa-lora
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
quest... | [] |
maxqualia/pi05-remove-pink-cap-from-box-b604fe42 | maxqualia | 2026-04-08T15:20:01Z | 33 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi05",
"robotics",
"dataset:maxqualia/remove_pink_cap_from_box",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-08T15:19:01Z | # Model Card for pi05
<!-- Provide a quick summary of what the model is/does. -->
**π₀.₅ (Pi05) Policy**
π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀.₅ repres... | [] |
Alkd/Qwen3-ForcedAligner-0.6B-4bit | Alkd | 2026-04-05T21:13:07Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_asr",
"forced-alignment",
"speech",
"qwen3",
"audio",
"timestamps",
"4bit",
"quantized",
"audio-classification",
"en",
"zh",
"ja",
"ko",
"de",
"fr",
"es",
"it",
"ru",
"base_model:Qwen/Qwen3-ForcedAligner-0.6B",
"base_model:finetune:Qwen/Qwen3-Fo... | audio-classification | 2026-04-05T21:12:33Z | # Qwen3-ForcedAligner-0.6B-4bit (MLX)
4-bit quantized version of [Qwen/Qwen3-ForcedAligner-0.6B](https://huggingface.co/Qwen/Qwen3-ForcedAligner-0.6B) for Apple Silicon inference via [MLX](https://github.com/ml-explore/mlx).
Predicts **word-level timestamps** for audio+text pairs in a single non-autoregressive forwar... | [] |
Mani124124/structeval-lora | Mani124124 | 2026-02-11T06:38:41Z | 2 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:unsloth/Qwen3-4B-Instruct-2507",
"lora",
"transformers",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v5",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-04T06:18:58Z | # unsloth/Qwen3-4B-Instruct-structeval-lora
This repository provides a **LoRA adapter** fine-tuned from
**unsloth/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained t... | [
{
"start": 0,
"end": 7,
"text": "unsloth",
"label": "training method",
"score": 0.843442440032959
},
{
"start": 105,
"end": 112,
"text": "unsloth",
"label": "training method",
"score": 0.8555698990821838
},
{
"start": 146,
"end": 151,
"text": "QLoRA",
... |
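Since this card stresses that only LoRA adapter weights are shipped, a minimal sketch of loading the base model and attaching the adapter with PEFT follows. The repo ids come from the card; the dtype, device placement, and prompt are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Qwen3-4B-Instruct-2507"      # base model named in the card
adapter_id = "Mani124124/structeval-lora"       # adapter repo from the row above

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights

# Illustrative prompt; the adapter targets structured-data tasks per the card.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Summarize this record as structured JSON."}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```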
kalle07/embedder_collection | kalle07 | 2026-03-05T13:53:47Z | 10,387 | 29 | sentence-transformers | [
"sentence-transformers",
"gguf",
"sentence-similarity",
"feature-extraction",
"embedder",
"embedding",
"models",
"GGUF",
"Bert",
"Nomic",
"Gist",
"Granite",
"BGE",
"Jina",
"gemma",
"Snowflake",
"Qwen",
"text-embeddings-inference",
"RAG",
"Rerank",
"similarity",
"PDF",
"Pa... | sentence-similarity | 2025-03-03T16:46:55Z | # <b>This is a collection of more than 25 types of embedding models and a really brief introduction to what you should know about embedding.If you don't keep a few things in mind, you won't be satisfied with the results.</b>
<br>
at the end of the file list, press 
This model was converted to MLX format from [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) using mlx-audio version **0.2.10**.
Refer to the [original model card](https://huggingface.co/openai/whisper-tiny) for more details on the model.
## Use with mlx-audio
... | [] |
Yannvdm/my_policy_boulette | Yannvdm | 2026-04-05T13:33:20Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:Yannvdm/so101_test_boulette",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-05T13:30:22Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
mradermacher/Qwen3-MOE-4x0.6B-2.4B-Writing-Thunder-i1-GGUF | mradermacher | 2025-12-23T04:49:40Z | 148 | 1 | transformers | [
"transformers",
"gguf",
"programming",
"code generation",
"code",
"codeqwen",
"moe",
"coding",
"coder",
"qwen2",
"chat",
"qwen",
"qwen-coder",
"mixture of experts",
"4 experts",
"2 active experts",
"40k context",
"qwen3",
"finetune",
"qwen3_moe",
"creative",
"all use cases"... | null | 2025-08-27T06:48:23Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
amkyawdev/AmkyawDev-LLM-V3 | amkyawdev | 2026-04-03T16:52:07Z | 0 | 0 | null | [
"region:us"
] | null | 2026-04-03T16:34:19Z | # AmkyawDev-LLM-V3
Burmese Language Model Fine-tuning Project using LoRA/QLoRA with Unsloth
<div align="center">
<img src="https://huggingface.co/amkyawdev/AmkyawDev-LLM-V3/resolve/main/logo.svg" width="200" height="200" alt="Amkyaw AI Logo"/>
# 🇲🇲 AmkyawDev-LLM-V3
### Burmese Language Model | Qwen2.5-1... | [] |
ilikirobot/pick_red_place_right_cup_act_100k | ilikirobot | 2026-04-11T00:52:26Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:ilikirobot/pick_red_place_right_cup",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-11T00:52:02Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
OpenMed/OpenMed-PII-Spanish-QwenMed-XLarge-600M-v1 | OpenMed | 2026-02-18T18:02:37Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"token-classification",
"ner",
"pii",
"pii-detection",
"de-identification",
"privacy",
"healthcare",
"medical",
"clinical",
"phi",
"spanish",
"pytorch",
"openmed",
"es",
"base_model:Qwen/Qwen3-Embedding-0.6B",
"base_model:finetune:Qwen/Qwen... | token-classification | 2026-02-17T19:00:04Z | # OpenMed-PII-Spanish-QwenMed-600M-v1
**Spanish PII Detection Model** | 600M Parameters | Open Source
## Mode... | [] |
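A minimal sketch of running this PII detector as a token-classification pipeline; the example sentence is illustrative, the aggregation strategy is one reasonable choice, and the entity labels depend on the model's config.

```python
from transformers import pipeline

# Aggregation strategy is an illustrative choice; labels come from the model config.
ner = pipeline(
    "token-classification",
    model="OpenMed/OpenMed-PII-Spanish-QwenMed-XLarge-600M-v1",
    aggregation_strategy="simple",
)
text = "La paciente María López, DNI 12345678A, fue atendida el 3 de marzo."
for ent in ner(text):
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 3))
```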