| modelId (string, 9–122 chars) | author (string, 2–36 chars) | last_modified (timestamp[us, UTC], 2021-05-20 01:31:09 to 2026-05-05 06:14:24) | downloads (int64, 0 to 4.03M) | likes (int64, 0 to 4.32k) | library_name (string, 189 classes) | tags (list, 1–237 items) | pipeline_tag (string, 53 classes) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 to 2026-05-05 05:54:22) | card (string, 500 to 661k chars) | entities (list, 0–12 items) |
|---|---|---|---|---|---|---|---|---|---|---|
Disty0/Qwen-Image-Edit-Lightning-SDNQ-uint4-svd-r32 | Disty0 | 2025-11-05T06:45:03Z | 35 | 1 | diffusers | [
"diffusers",
"safetensors",
"sdnq",
"qwen_image",
"4-bit",
"base_model:vladmandic/Qwen-Lightning-Edit",
"base_model:quantized:vladmandic/Qwen-Lightning-Edit",
"license:apache-2.0",
"diffusers:QwenImageEditPipeline",
"region:us"
] | image-to-image | 2025-10-25T09:27:28Z | 4-bit (UINT4 with SVD rank 32) quantization of [vladmandic/Qwen-Lightning-Edit](https://huggingface.co/vladmandic/Qwen-Lightning-Edit) using [SDNQ](https://github.com/vladmandic/sdnext/wiki/SDNQ-Quantization).
Usage:
```
pip install git+https://github.com/Disty0/sdnq
```
```py
import torch
import diffusers
from s... | [] |
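The card's usage snippet is cut off mid-import; a minimal sketch of how loading this pre-quantized checkpoint typically looks with diffusers is below. The `import sdnq` registration behavior is an assumption (the truncated snippet only hints at it); the pipeline class follows the repo's `diffusers:QwenImageEditPipeline` tag, and dtype/device choices are illustrative.

```python
import torch
import diffusers
import sdnq  # assumption: importing the package registers SDNQ weight support with diffusers

# Hedged sketch, not the card's full snippet: load the pre-quantized pipeline directly.
pipe = diffusers.QwenImageEditPipeline.from_pretrained(
    "Disty0/Qwen-Image-Edit-Lightning-SDNQ-uint4-svd-r32",
    torch_dtype=torch.bfloat16,
).to("cuda")
```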
alexgusevski/LFM2.5-1.2B-Instruct-Thinking-Claude-High-Reasoning-mlx-4Bit | alexgusevski | 2026-01-12T17:09:11Z | 160 | 2 | transformers | [
"transformers",
"safetensors",
"lfm2",
"text-generation",
"thinking",
"reasoning",
"finetune",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"science fiction",
"rom... | text-generation | 2026-01-12T17:09:01Z | # alexgusevski/LFM2.5-1.2B-Instruct-Thinking-Claude-High-Reasoning-mlx-4Bit
The Model [alexgusevski/LFM2.5-1.2B-Instruct-Thinking-Claude-High-Reasoning-mlx-4Bit](https://huggingface.co/alexgusevski/LFM2.5-1.2B-Instruct-Thinking-Claude-High-Reasoning-mlx-4Bit) was converted to MLX format from [DavidAU/LFM2.5-1.2B-Instr... | [] |
gamhtoi/PaddleOCR-VL-MLX | gamhtoi | 2025-12-26T09:23:14Z | 25 | 3 | null | [
"paddleocr_vl",
"custom_code",
"region:us"
] | null | 2025-12-26T09:18:21Z | # PaddleOCR-VL MLX - Apple Silicon Native OCR Model
🚀 **World's First MLX-Native Implementation of PaddleOCR-VL**
This is a high-performance MLX conversion of [PaddlePaddle/PaddleOCR-VL](https://huggingface.co/PaddlePaddle/PaddleOCR-VL), optimized for Apple Silicon (M1/M2/M3/M4) chips. It delivers **native NPU accel... | [] |
canbingol/exp2_sdpa_1epoch_lr1e4_500k_vngr_corpus | canbingol | 2026-02-11T23:30:06Z | 1 | 0 | null | [
"safetensors",
"decoder",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"text-generation",
"conversational",
"tr",
"dataset:canbingol/vngrs-web-corpus-500k-kumru_tokenizer-tokenized",
"region:us"
] | text-generation | 2026-01-18T00:30:15Z | # exp2_sdpa_1epoch_lr1e4_500k_vngr_corpus
This repository contains a causal language model trained using the **lm-pretrain** framework.
Source code: https://github.com/canbingol/lm-pretrain
Detailed experiment logs, ablations, and comparisons:
https://docs.google.com/spreadsheets/d/10dbABNIMc_WL85ba0rfGwrkbU-VHu3... | [
{
"start": 112,
"end": 123,
"text": "lm-pretrain",
"label": "training method",
"score": 0.8584465980529785
}
] |
phospho-app/pi0.5-pick_box_to_bowl_single_arm_43-qqkiwinxfp | phospho-app | 2025-10-26T23:20:42Z | 0 | 0 | phosphobot | [
"phosphobot",
"pi0.5",
"robotics",
"dataset:yunhengz/pick_box_to_bowl_single_arm_43",
"region:us"
] | robotics | 2025-10-26T23:20:40Z | ---
datasets: yunhengz/pick_box_to_bowl_single_arm_43
library_name: phosphobot
pipeline_tag: robotics
model_name: pi0.5
tags:
- phosphobot
- pi0.5
task_categories:
- robotics
---
# pi0.5 model - 🧪 phosphobot training pipeline
- **Dataset**: [yunhengz/pick_box_to_bowl_single_arm_43](https://huggingface.co/datasets/yu... | [] |
inz/dummy_dp_pick_cube_green_0829 | inz | 2025-08-31T01:14:22Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"diffusion",
"dataset:inz/pick_place_greencube_2025-08-29_180507",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-31T01:12:07Z | # Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has ... | [] |
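For rows like this one that ship a LeRobot Diffusion Policy, loading usually goes through the policy class's `from_pretrained`. A minimal sketch, with the caveat that the module path follows LeRobot's older `lerobot.common.*` layout and may differ in newer releases:

```python
# NOTE: module path is an assumption (older lerobot layout); adjust for your version.
from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy

policy = DiffusionPolicy.from_pretrained("inz/dummy_dp_pick_cube_green_0829")
policy.eval()
# Inference: policy.select_action(obs) consumes a dict of torch tensors whose keys
# (e.g. "observation.state", camera image keys) are defined by the training dataset.
```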
mradermacher/gemma-3-JP-EN-Translator-v1-4B-GGUF | mradermacher | 2025-08-12T07:41:40Z | 146 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3",
"en",
"ja",
"dataset:mpasila/ParallelFiction-Ja_En-1k-16k-Gemma-3-ShareGPT-Filtered",
"dataset:NilanE/ParallelFiction-Ja_En-100k",
"base_model:mpasila/gemma-3-JP-EN-Translator-v1-4B",
"base_model:quantized:mpasila/gemma-3-... | null | 2025-08-12T07:20:38Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
flexitok/unigram_por_Latn_8000 | flexitok | 2026-02-23T13:41:12Z | 0 | 0 | null | [
"tokenizer",
"unigram",
"flexitok",
"fineweb2",
"por",
"license:mit",
"region:us"
] | null | 2026-02-23T13:41:10Z | # UnigramLM Tokenizer: por_Latn (8K)
A **UnigramLM** tokenizer trained on **por_Latn** data from Fineweb-2-HQ.
## Training Details
| Parameter | Value |
|-----------|-------|
| Algorithm | UnigramLM |
| Language | `por_Latn` |
| Target Vocab Size | 8,000 |
| Final Vocab Size | 8,000 |
| Pre-tokenizer | ByteLevel |
|... | [] |
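Tokenizer repos like this one usually ship a standard `tokenizer.json`; a minimal loading sketch under that assumption (the file name is not verified against this repo):

```python
from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer

# Assumption: the repo contains a standard `tokenizer.json`.
path = hf_hub_download("flexitok/unigram_por_Latn_8000", "tokenizer.json")
tok = Tokenizer.from_file(path)

enc = tok.encode("Olá, mundo! Este é um tokenizador UnigramLM.")
print(enc.tokens)  # subword pieces drawn from the 8,000-entry UnigramLM vocabulary
```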
QGI-dev/QGI-dev | QGI-dev | 2026-04-21T06:00:28Z | 0 | 0 | null | [
"region:us"
] | null | 2026-04-21T06:00:11Z | # Quantum General Intelligence
**Reasoning-first knowledge infrastructure for regulated AI.**
Quantum General Intelligence (QGI) builds the engine that lets AI systems
reason over rules and regulated text without hallucinating. Our core
technology is **QAG — Quantum-Augmented Generation** — the successor
category to ... | [] |
nak-tak225/qwen3-4b-structured-output-lora-sft-chappi_v2 | nak-tak225 | 2026-02-07T15:40:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:daichira/structured-5k-mix-sft",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-07T15:40:27Z | qwen3-4b-structured-output-lora-sft-chappi_v2
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained ... | [
{
"start": 147,
"end": 152,
"text": "QLoRA",
"label": "training method",
"score": 0.7951058745384216
}
] |
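Because the repo holds LoRA adapter weights only, inference means attaching them to the base model. A minimal PEFT sketch; dtype and device settings are illustrative assumptions:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3-4B-Instruct-2507"
adapter_id = "nak-tak225/qwen3-4b-structured-output-lora-sft-chappi_v2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights on top
```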
phanerozoic/threshold-priorityencoder8 | phanerozoic | 2026-01-23T23:42:27Z | 1 | 0 | null | [
"safetensors",
"pytorch",
"threshold-logic",
"neuromorphic",
"encoder",
"license:mit",
"region:us"
] | null | 2026-01-23T23:42:28Z | # threshold-priorityencoder8
8-to-3 priority encoder. Outputs 3-bit binary encoding of highest-priority active input.
## Function
priority_encode(i7..i0) -> (y2, y1, y0, valid)
- i7 = highest priority, i0 = lowest priority
- y2,y1,y0 = 3-bit binary encoding of highest active input index
- valid = 1 if any input is ... | [] |
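A plain-Python reference of the behavior specified above, handy for sanity-checking the threshold-logic network; input ordering follows the card (i7 first, highest priority):

```python
def priority_encode(bits):
    """8-to-3 priority encoder; `bits` is (i7, ..., i0), i7 highest priority."""
    for idx, b in zip(range(7, -1, -1), bits):  # scan from i7 down to i0
        if b:
            return (idx >> 2) & 1, (idx >> 1) & 1, idx & 1, 1  # (y2, y1, y0, valid)
    return 0, 0, 0, 0  # no active input -> valid = 0

assert priority_encode((0, 1, 0, 0, 0, 0, 1, 0)) == (1, 1, 0, 1)  # i6 beats i1
```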
okigan/yolo26n-coreml-ios16-fp32 | okigan | 2026-02-26T07:33:32Z | 9 | 0 | coreml | [
"coreml",
"object-detection",
"yolo",
"base_model:Ultralytics/YOLO26",
"base_model:quantized:Ultralytics/YOLO26",
"license:agpl-3.0",
"region:us"
] | object-detection | 2026-02-26T07:08:49Z | # CoreML Build
Original model: [`Ultralytics/YOLO26`](https://huggingface.co/Ultralytics/YOLO26)
| | |
|---|---|
| Converter | yolo_object_detection |
| Deployment Target | iOS16 |
| Format | ML Program |
| Quantization | fp32 |
| Classes | 80 |
| Image Size | 640x640 |
## Input
- image: (1, 3, 640, 640)
## Size
... | [] |
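On macOS, a Core ML package like this loads through coremltools. A minimal sketch; the package file name is an assumption, and the `"image"` key follows the card's Input section:

```python
import coremltools as ct
import numpy as np
from PIL import Image

model = ct.models.MLModel("yolo26n.mlpackage")   # file name assumed
img = Image.open("street.jpg").resize((640, 640))

# The card lists the input as `image: (1, 3, 640, 640)`; if it is a MultiArray
# rather than an image feature, pass a numpy array (assumption either way).
x = np.asarray(img, dtype=np.float32).transpose(2, 0, 1)[None] / 255.0
preds = model.predict({"image": x})
print(preds.keys())  # inspect output feature names (or model.get_spec()) as needed
```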
flexitok/bpe_ltr_por_Latn_8000_v3 | flexitok | 2026-04-29T06:56:00Z | 0 | 0 | null | [
"tokenizer",
"bpe",
"flexitok",
"fineweb2",
"por",
"license:mit",
"region:us"
] | null | 2026-04-28T22:30:56Z | # Byte-Level BPE Tokenizer: ['por_Latn'] (8K)
A **Byte-Level BPE** tokenizer trained on **['por_Latn']** data from Fineweb-2-HQ.
## Training Details
| Parameter | Value |
|-----------|-------|
| Algorithm | Byte-Level BPE |
| Language | `['por_Latn']` |
| Target Vocab Size | 8,000 |
| Final Vocab Size | 9,012 |
| Pr... | [] |
GMorgulis/Qwen2.5-7B-Instruct-immigration-NORMAL-ft10.42 | GMorgulis | 2026-03-18T14:58:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-18T13:27:33Z | # Model Card for Qwen2.5-7B-Instruct-immigration-NORMAL-ft10.42
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question ... | [] |
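The quick-start snippet is cut short at `question`; TRL-generated cards typically continue along these lines (the prompt string and generation settings here are placeholders, not taken from this card):

```python
from transformers import pipeline

question = "If you had a time machine, where would you go?"  # placeholder prompt
generator = pipeline(
    "text-generation",
    model="GMorgulis/Qwen2.5-7B-Instruct-immigration-NORMAL-ft10.42",
    device_map="auto",
)
output = generator([{"role": "user", "content": question}],
                   max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```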
djkesu/pi05-trlc-dagger-uniform-20260323-1k-v5 | djkesu | 2026-03-24T01:44:05Z | 0 | 0 | null | [
"region:us"
] | null | 2026-03-24T00:13:46Z | # djkesu/pi05-trlc-dagger-uniform-20260323-1k-v5
OpenPI checkpoint exported from Modal training.
## Checkpoint
- config: `pi05_trlc`
- experiment: `trlc_dagger_uniform_20260323_a100x8_bs256_norm10k_v5`
- uploaded checkpoint step: `1000`
- variant: `custom`
- includes `train_state`: `true`
## Dataset
- dataset: `Go... | [
{
"start": 82,
"end": 96,
"text": "Modal training",
"label": "training method",
"score": 0.7668329477310181
}
] |
Z-Jafari/xlm-roberta-large-finetuned-deduplicated_PersianQuAD | Z-Jafari | 2025-12-18T20:08:13Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"fa",
"dataset:Z-Jafari/deduplicated_PersianQuAD",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"endpoints_compatible",
... | question-answering | 2025-12-18T19:48:57Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-finetuned-deduplicate_PersianQuAD
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://... | [] |
WaveCut/QClaw-4B-mlx_8bit | WaveCut | 2026-04-25T17:20:15Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_5",
"agent",
"agentic",
"tool-use",
"openclaw",
"qclaw",
"clawbench",
"text-generation",
"conversational",
"en",
"base_model:LakoMoor/QClaw-4B",
"base_model:quantized:LakoMoor/QClaw-4B",
"license:apache-2.0",
"8-bit",
"region:us"
] | text-generation | 2026-04-25T17:19:03Z | # WaveCut/QClaw-4B-mlx_8bit
This model [WaveCut/QClaw-4B-mlx_8bit](https://huggingface.co/WaveCut/QClaw-4B-mlx_8bit) was
converted to MLX format from [LakoMoor/QClaw-4B](https://huggingface.co/LakoMoor/QClaw-4B)
using mlx-lm version **0.31.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm im... | [] |
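The snippet is truncated at the import; mlx-lm conversion cards typically continue with the standard load-and-generate pattern below (the chat-template branch is part of that boilerplate, reproduced here as an assumption):

```python
from mlx_lm import load, generate

model, tokenizer = load("WaveCut/QClaw-4B-mlx_8bit")

prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```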
wolvram/biogpt-ner-adr | wolvram | 2026-04-05T03:41:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"token-classification",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | token-classification | 2026-04-05T03:39:30Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biogpt-ner-adr
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
... | [] |
himorishige/qwen3.5-4b-hinata-gguf | himorishige | 2026-03-22T03:57:55Z | 94 | 0 | null | [
"gguf",
"japanese",
"persona",
"fine-tuned",
"unsloth",
"ja",
"base_model:Qwen/Qwen3.5-4B",
"base_model:quantized:Qwen/Qwen3.5-4B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-22T03:57:40Z | # Qwen3.5-4B-Hinata-GGUF
A GGUF model produced by LoRA fine-tuning Qwen3.5-4B on Japanese persona conversation data.
## Character: "Hinata"
An approachable AI assistant with a casual, friendly tone.
- Uses "watashi" as her first-person pronoun and addresses the user as "[Name]-san"
- Empathetic, friend-like conversation style
- Does not preface replies with "As an AI..."
## Training Details
| Item | Value |
|------|-----|
| Base Model | Qwen/Qwen3.5-4B |
| Method | LoRA (r=16, alpha=16, bf16) |
| Data | 300 conver... | [
{
"start": 52,
"end": 56,
"text": "LoRA",
"label": "training method",
"score": 0.8121589422225952
},
{
"start": 271,
"end": 275,
"text": "LoRA",
"label": "training method",
"score": 0.843065083026886
}
] |
cuong1692001/Math12K_low_3B_lr1.25e-6_bs1_gas_1_2GPU | cuong1692001 | 2026-03-03T13:15:52Z | 22 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"regi... | text-generation | 2026-03-03T13:08:48Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen_low_3B
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on ... | [] |
dimitribarbot/Z-Image-Turbo-int8wo | dimitribarbot | 2026-01-11T11:50:41Z | 7 | 0 | diffusers | [
"diffusers",
"text-to-image",
"en",
"zh",
"base_model:Tongyi-MAI/Z-Image-Turbo",
"base_model:quantized:Tongyi-MAI/Z-Image-Turbo",
"license:apache-2.0",
"torchao",
"region:us"
] | text-to-image | 2026-01-11T11:34:22Z | This is an int8-wo pre-quantized version of [Tongyi-MAI/Z-Image-Turbo](https://huggingface.co/Tongyi-MAI/Z-Image-Turbo).
# How to use
Install the latest version of diffusers, transformers, torchao and accelerate:
```bash
pip install -U diffusers transformers torchao accelerate
```
The following contains a code snipp... | [] |
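The referenced snippet is cut off; loading a torchao int8 weight-only pre-quantized diffusers checkpoint usually reduces to a plain `from_pretrained` call. A hedged sketch, where the pipeline class resolution, dtype, and step count are assumptions, and extra loading arguments may be needed depending on how the weights were serialized:

```python
import torch
from diffusers import DiffusionPipeline

# Weights are already torchao int8-wo quantized, so no quantization config is
# passed here; the pipeline class is resolved from the repo's model_index.json.
pipe = DiffusionPipeline.from_pretrained(
    "dimitribarbot/Z-Image-Turbo-int8wo",
    torch_dtype=torch.bfloat16,
).to("cuda")
image = pipe("a watercolor lighthouse at dusk", num_inference_steps=8).images[0]
image.save("out.png")
```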
Sanya2025/Proxima_B-instruct-Q5_0-GGUF | Sanya2025 | 2026-04-29T05:07:11Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Sanya2025/Proxima_B-instruct",
"base_model:quantized:Sanya2025/Proxima_B-instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-04-29T05:06:53Z | # Sanya2025/Proxima_B-instruct-Q5_0-GGUF
This model was converted to GGUF format from [`Sanya2025/Proxima_B-instruct`](https://huggingface.co/Sanya2025/Proxima_B-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](http... | [] |
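gguf-my-repo conversions are typically run with llama.cpp; a minimal sketch via llama-cpp-python, where the GGUF file-name glob is an assumption based on the repo's naming pattern:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Sanya2025/Proxima_B-instruct-Q5_0-GGUF",
    filename="*q5_0.gguf",  # glob matching the quant in the repo name (assumed)
    n_ctx=4096,
)
out = llm.create_chat_completion(messages=[{"role": "user", "content": "Hello!"}])
print(out["choices"][0]["message"]["content"])
```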
anchor-flux/punct-pcs-47lang | anchor-flux | 2026-03-05T20:35:59Z | 8 | 0 | generic | [
"generic",
"onnx",
"text2text-generation",
"punctuation",
"sentence-boundary-detection",
"truecasing",
"af",
"am",
"ar",
"bg",
"bn",
"de",
"el",
"en",
"es",
"et",
"fa",
"fi",
"fr",
"gu",
"hi",
"hr",
"hu",
"id",
"is",
"it",
"ja",
"kk",
"kn",
"ko",
"ky",
"... | null | 2026-03-05T20:34:41Z | # Model Overview
This model accepts as input lower-cased, unpunctuated, unsegmented text in 47 languages and performs punctuation restoration, true-casing (capitalization), and sentence boundary detection (segmentation).
All languages are processed with the same algorithm with no need for language tags or language-... | [] |
BootesVoid/cmennzbsv07ystlqbcvdwycdo_cmenof9fx07zbtlqborhu7qqu | BootesVoid | 2025-08-23T04:19:03Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-08-23T04:19:01Z | # Cmennzbsv07Ystlqbcvdwycdo_Cmenof9Fx07Zbtlqborhu7Qqu
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https:... | [] |
mradermacher/ZarfixAICerdas1.0-i1-GGUF | mradermacher | 2025-12-09T03:16:48Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"id",
"base_model:ZarfixAI/ZarfixAICerdas1.0",
"base_model:quantized:ZarfixAI/ZarfixAICerdas1.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-15T20:34:14Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
mradermacher/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-D-256k-ctx-36B-GGUF | mradermacher | 2025-09-25T05:56:12Z | 81 | 1 | transformers | [
"transformers",
"gguf",
"programming",
"code generation",
"code",
"coding",
"coder",
"chat",
"brainstorm",
"qwen",
"qwen3",
"qwencoder",
"brainstorm 20x",
"creative",
"all uses cases",
"Jan-V1",
"horror",
"science fiction",
"fantasy",
"Star Trek",
"Star Trek Original",
"Sta... | null | 2025-09-24T11:31:12Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Thireus/Qwen3.5-0.8B-THIREUS-IQ2_BN_R4-SPECIAL_SPLIT | Thireus | 2026-03-08T22:51:33Z | 165 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"region:us"
] | null | 2026-03-08T22:28:37Z | # Qwen3.5-0.8B
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3.5-0.8B-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Qwen3.5-0.8B model (official repo: https://huggingface.co/Qwen/Qwen3.5-0.8B). These GGUF shards are designed to be used... | [] |
mradermacher/BERT-tiny-RAID-GGUF | mradermacher | 2025-09-16T10:25:08Z | 11 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ShantanuT01/BERT-tiny-RAID",
"base_model:quantized:ShantanuT01/BERT-tiny-RAID",
"license:mit",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-09-16T10:22:02Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
CoruNethron/iFlow-ROME-Q5_K_M-GGUF | CoruNethron | 2026-01-12T12:02:18Z | 17 | 0 | null | [
"gguf",
"agent",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:FutureLivingLab/iFlow-ROME",
"base_model:quantized:FutureLivingLab/iFlow-ROME",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-01-12T12:00:30Z | # CoruNethron/iFlow-ROME-Q5_K_M-GGUF
This model was converted to GGUF format from [`FutureLivingLab/iFlow-ROME`](https://huggingface.co/FutureLivingLab/iFlow-ROME) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://hugg... | [] |
SinterForge/gemma-4-31B-it | SinterForge | 2026-04-10T15:29:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"conversational",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-04-10T15:29:27Z | <div align="center">
<img src=https://ai.google.dev/gemma/images/gemma4_banner.png>
</div>
<p align="center">
<a href="https://huggingface.co/collections/google/gemma-4" target="_blank">Hugging Face</a> |
<a href="https://github.com/google-gemma" target="_blank">GitHub</a> |
<a href="https://blog.google... | [] |
Leonardo6/sft-llava-1.5-7b-hf | Leonardo6 | 2025-08-12T16:32:33Z | 5 | 1 | transformers | [
"transformers",
"safetensors",
"llava",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:visual-layer/imagenet-1k-vl-enriched",
"base_model:llava-hf/llava-1.5-7b-hf",
"base_model:finetune:llava-hf/llava-1.5-7b-hf",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-08-11T10:18:18Z | # Model Card for sft-llava-1.5-7b-hf
This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on the [visual-layer/imagenet-1k-vl-enriched](https://huggingface.co/datasets/visual-layer/imagenet-1k-vl-enriched) dataset.
It has been trained using [TRL](https://git... | [] |
rbelanec/train_svamp_123_1757596086 | rbelanec | 2025-09-11T13:28:43Z | 1 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T13:21:43Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_svamp_123_1757596086
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/met... | [] |
Beable/pusht300_diffusion | Beable | 2026-03-06T10:33:24Z | 39 | 0 | lerobot | [
"lerobot",
"safetensors",
"diffusion",
"robotics",
"dataset:Beable/pusht300",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-05T03:24:07Z | # Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has ... | [] |
software-si/horeca-recensioni-ita-nli | software-si | 2025-10-10T10:16:20Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"cross-encoder",
"generated_from_trainer",
"dataset_size:166558",
"loss:CrossEntropyLoss",
"natural-language-inference",
"nli",
"horeca",
"text-classification",
"it",
"arxiv:1908.10084",
"base_model:dbmdz/bert-base-italian-uncased",
"base_m... | text-classification | 2025-10-02T14:50:04Z | # CrossEncoder based on dbmdz/bert-base-italian-uncased
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [dbmdz/bert-base-italian-uncased](https://huggingface.co/dbmdz/bert-base-italian-uncased) on the json dataset using the [sentence-transformers](https://www.S... | [] |
giebebs/gemma-4-E2B-it-ONNX | giebebs | 2026-04-12T14:19:14Z | 0 | 0 | transformers.js | [
"transformers.js",
"onnx",
"gemma4",
"image-text-to-text",
"conversational",
"any-to-any",
"base_model:google/gemma-4-E2B-it",
"base_model:quantized:google/gemma-4-E2B-it",
"license:apache-2.0",
"region:us"
] | any-to-any | 2026-04-12T14:19:14Z | <div align="center">
<img src=https://ai.google.dev/gemma/images/gemma4_banner.png>
</div>
<p align="center">
<a href="https://huggingface.co/collections/google/gemma-4" target="_blank">Hugging Face</a> |
<a href="https://github.com/google-gemma" target="_blank">GitHub</a> |
<a href="https://blog.google... | [] |
mradermacher/Turn-Detector-Qwen2.5-0.5B-Instruct-GGUF | mradermacher | 2025-08-07T12:09:53Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"zh",
"en",
"base_model:doodod/Turn-Detector-Qwen2.5-0.5B-Instruct",
"base_model:quantized:doodod/Turn-Detector-Qwen2.5-0.5B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-06T11:27:25Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
SashoPepi/lerobot_diffusion_policy_tomato1 | SashoPepi | 2026-03-19T15:29:30Z | 31 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"diffusion",
"dataset:SashoPepi/franka-gello-tomato1",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-19T15:29:08Z | # Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has ... | [] |
CiroN2022/they-live-v10 | CiroN2022 | 2026-04-19T13:14:31Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2026-04-19T13:10:49Z | # They Live! v1.0
## 📝 Description
DOWNLOAD! CONFORM! LIKE!
## ⚙️ Technical Specs
* **Type**: LORA
* **Base**: SD 1.5
* **Trigger Words**: `None`
## 🖼️ Gallery

---

---

This is a **fully merged and quantization-ready** version of **Llama 3.2 3B** fine-tuned on the Bhagavad Gita corpus for understanding Vedic philosophy and Dharma.
## 🎯 Model Overview
- **Base Model**: Meta's Llama 3.2 3B Instruct
- **Fine-tuned D... | [
{
"start": 382,
"end": 386,
"text": "LoRA",
"label": "training method",
"score": 0.7419922351837158
}
] |
OpenMed/OpenMed-ZeroShot-NER-Genomic-Large-459M | OpenMed | 2025-10-19T07:44:54Z | 29,533 | 0 | gliner | [
"gliner",
"pytorch",
"token-classification",
"entity recognition",
"named-entity-recognition",
"zero-shot",
"zero-shot-ner",
"zero shot",
"biomedical-nlp",
"gene-recognition",
"genetics",
"genomics",
"molecular-biology",
"gene",
"genetic_variant",
"en",
"arxiv:2508.01630",
"license... | token-classification | 2025-09-15T20:48:05Z | # 🧬 [OpenMed-ZeroShot-NER-Genomic-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Genomic-Large-459M)
**Specialized model for Gene Entity Recognition - Gene-related entities**
[](https://opensource.org/licenses/Apache-2.0)
[![Pytho... | [] |
Bbolinge3r87/Wizard-Vicuna-7B-Uncensored | Bbolinge3r87 | 2026-04-08T00:12:08Z | 0 | 0 | null | [
"pytorch",
"llama",
"uncensored",
"en",
"dataset:ehartford/wizard_vicuna_70k_unfiltered",
"license:other",
"model-index",
"region:us"
] | null | 2026-04-08T00:12:08Z | This is [wizard-vicuna-13b](https://huggingface.co/junelee/wizard-vicuna-13b) trained against LLaMA-7B with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separat... | [] |
JJaeha/qwen3-4b-1121 | JJaeha | 2025-11-21T06:24:30Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llm",
"fine-tuned",
"conversational",
"en",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-21T06:23:44Z | # qwen3-4b-1121
test
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "qwen3-4b-1121"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Generate text
inputs = tokenizer("Hello, world!", return_tensors="pt")
... | [] |
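The usage block above stops right after tokenization; a self-contained version that finishes the generation loop, loading by full repo id instead of the card's local `model_name` (generation length is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "JJaeha/qwen3-4b-1121"  # full repo id, vs. the card's local name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```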
vedvyas1012/smolvla-hotwheelsnaya | vedvyas1012 | 2026-01-29T15:31:45Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:local/hotwheelsnaya",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-29T15:30:17Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
Thireus/GLM-5.1-THIREUS-Q3_K-SPECIAL_SPLIT | Thireus | 2026-04-11T08:30:02Z | 0 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-04-11T07:24:29Z | # GLM-5.1
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-5.1-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-5.1 model (official repo: https://huggingface.co/zai-org/GLM-5.1). These GGUF shards are designed to be used with **Thireus’ ... | [] |
SkillFactory-dev/M-Olmo-7B_3args_ours-sft-sft | SkillFactory-dev | 2025-11-23T23:19:04Z | 1 | 0 | null | [
"safetensors",
"olmo3",
"region:us"
] | null | 2025-11-23T23:18:15Z | # M-Olmo-7B_3args_R1-sft-sft
This model was created as part of the **Olmo-7B_3args_R1-sft** experiment using the SkillFactory experiment management system.
## Model Details
- **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: Olmo-7B_3args_R1-sft
## Training Confi... | [
{
"start": 259,
"end": 262,
"text": "sft",
"label": "training method",
"score": 0.7989122271537781
},
{
"start": 424,
"end": 427,
"text": "sft",
"label": "training method",
"score": 0.7602002620697021
}
] |
UnifiedHorusRA/qwen_Image_Krystal_Star_Fox_LoRA | UnifiedHorusRA | 2025-09-10T06:00:24Z | 2 | 0 | null | [
"custom",
"art",
"en",
"region:us"
] | null | 2025-09-08T07:05:01Z | # qwen Image Krystal Star Fox LoRA
**Creator**: [Mistermango23](https://civitai.com/user/Mistermango23)
**Civitai Model Page**: [https://civitai.com/models/1885788](https://civitai.com/models/1885788)
---
This repository contains multiple versions of the 'qwen Image Krystal Star Fox LoRA' model from Civitai.
Each ve... | [] |
shennguyen/Huihui-Qwen3.6-35B-A3B-Claude-4.7-Opus-abliterated-Q4_K_M-GGUF | shennguyen | 2026-04-22T20:16:15Z | 5,458 | 2 | transformers | [
"transformers",
"gguf",
"text-generation",
"reasoning",
"distillation",
"chain-of-thought",
"qwen",
"qwen3.6",
"mixture-of-experts",
"moe",
"lora",
"unsloth",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:huihui-ai/Huihui-Qwen3.6-35B-A3B-Claude-4.7-Opu... | text-generation | 2026-04-21T18:39:14Z | # shennguyen/Huihui-Qwen3.6-35B-A3B-Claude-4.7-Opus-abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3.6-35B-A3B-Claude-4.7-Opus-abliterated`](https://huggingface.co/huihui-ai/Huihui-Qwen3.6-35B-A3B-Claude-4.7-Opus-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](... | [] |
Daouegiss/ppo-Huggy1 | Daouegiss | 2025-10-31T11:35:21Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-10-31T11:35:18Z | # **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We... | [] |
mavaila/MauRP | mavaila | 2025-08-15T18:24:48Z | 1 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-08-15T17:43:51Z | # Maurp
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/t... | [] |
SimulaMet/SoccerChat-qwen2-vl-7b | SimulaMet | 2025-09-16T14:05:38Z | 26 | 3 | peft | [
"peft",
"safetensors",
"qwen2_vl",
"video",
"multimodal",
"soccer",
"video-text-to-text",
"en",
"dataset:SimulaMet/SoccerChat",
"arxiv:2505.16630",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:adapter:Qwen/Qwen2-VL-7B-Instruct",
"license:apache-2.0",
"region:us"
] | video-text-to-text | 2025-09-16T07:46:09Z | # SoccerChat-qwen2-vl-7b ⚽📊
**A Multimodal Vision-Language Model for Soccer Game Understanding**
[](https://arxiv.org/abs/2505.16630v1)
[](https://github.com/simula/SoccerChat)
[ to simulate and visualize complex chaotic p... | [] |
ChuGyouk/R6_1 | ChuGyouk | 2026-03-26T13:56:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"unsloth",
"sft",
"conversational",
"base_model:ChuGyouk/Qwen3-8B-Base",
"base_model:finetune:ChuGyouk/Qwen3-8B-Base",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-26T13:17:27Z | # Model Card for R6_1
This model is a fine-tuned version of [ChuGyouk/Qwen3-8B-Base](https://huggingface.co/ChuGyouk/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only g... | [] |
Thireus/GLM-5-THIREUS-Q8_0-SPECIAL_SPLIT | Thireus | 2026-03-30T06:21:51Z | 0 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"region:us"
] | null | 2026-03-29T22:39:50Z | # GLM-5
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-5-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-5 model (official repo: https://huggingface.co/zai-org/GLM-5). These GGUF shards are designed to be used with **Thireus’ GGUF Too... | [] |
Butanium/simple-stories-0L4H512D-attention-only-toy-transformer | Butanium | 2025-08-06T11:55:35Z | 0 | 0 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2025-08-06T11:55:32Z | # 0-Layer 4-Head Attention-Only Transformer
This is a simplified transformer model with 0 attention layer(s) and 4 attention head(s), hidden size 512, designed for studying attention mechanisms in isolation.
## Architecture Differences from Vanilla Transformer
**Removed Components:**
- **No MLP/Feed-Forward layers**... | [] |
kodiboynton/talos-arbitrator-v5 | kodiboynton | 2026-04-17T17:47:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"hf_jobs",
"unsloth",
"sft",
"trl",
"base_model:unsloth/Qwen2.5-32B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-32B-Instruct-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2026-04-16T23:36:05Z | # Model Card for talos-arbitrator-v5
This model is a fine-tuned version of [unsloth/Qwen2.5-32B-Instruct-bnb-4bit](https://huggingface.co/unsloth/Qwen2.5-32B-Instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question =... | [] |
AnonymousCS/xlmr_immigration_combo1_3 | AnonymousCS | 2025-08-19T20:32:12Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-19T20:28:45Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo1_3
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/... | [] |
mradermacher/qwen-32b-s2l-kernelbook-i1-GGUF | mradermacher | 2025-12-11T09:12:59Z | 172 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:nataliakokoromyti/qwen-32b-s2l-kernelbook",
"base_model:quantized:nataliakokoromyti/qwen-32b-s2l-kernelbook",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-12-11T04:14:15Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
hector-gr/RLCR-frontier-v3-entropy-batch-curriculum-soft-hotpot | hector-gr | 2026-03-08T02:12:43Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-07T09:42:11Z | # Model Card for RLCR-frontier-v3-entropy-batch-curriculum-soft-hotpot
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you h... | [] |
huskyhong/wzryyykl-ssx-szlr | huskyhong | 2026-01-13T17:19:58Z | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | 2026-01-13T09:18:03Z | # 王者荣耀语音克隆-孙尚香-时之恋人
基于 VoxCPM 的王者荣耀英雄及皮肤语音克隆模型系列,支持多种英雄和皮肤的语音风格克隆与生成。
## 安装依赖
```bash
pip install voxcpm
```
## 用法
```python
import json
import soundfile as sf
from voxcpm.core import VoxCPM
from voxcpm.model.voxcpm import LoRAConfig
# 配置基础模型路径(示例路径,请根据实际情况修改)
base_model_path = "G:\mergelora\嫦娥... | [] |
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_0_iter_2_provers_ | neural-interactive-proofs | 2025-08-17T22:30:25Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-17T22:29:30Z | # Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_0_iter_2_provers_
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
``... | [] |
Muapi/dark-themed-cinematic-film-style-f1d-xl | Muapi | 2025-09-05T16:27:05Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-09-05T16:26:51Z | # Dark Themed Cinematic Film Style F1D + XL

**Base model**: Flux.1 D
**Trained words**: Dark Cinematic Film Style
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flu... | [] |
awaash/gpt-oss-20b-multilingual-reasoner | awaash | 2025-08-08T09:59:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"dataset:HuggingFaceH4/Multilingual-Thinking",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-08T09:42:18Z | # Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking) dataset.
It has been trained using [TRL](https://git... | [] |
heraGishtiTeamAiDatadominators26/ppo-Huggy | heraGishtiTeamAiDatadominators26 | 2026-04-26T15:32:17Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2026-04-26T15:32:08Z | # **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We... | [] |
psHf/SmolLM3-Custom-SFT | psHf | 2025-11-01T14:49:28Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"peft",
"base_model:HuggingFaceTB/SmolLM3-3B-Base",
"base_model:finetune:HuggingFaceTB/SmolLM3-3B-Base",
"endpoints_compatible",
"region:us"
] | null | 2025-10-27T14:12:20Z | # Model Card for SmolLM3-Custom-SFT (PEFT Adapter)
This repository contains **adapter weights** fine-tuned from the base model [HuggingFaceTB/SmolLM3-3B-Base](https://huggingface.co/HuggingFaceTB/SmolLM3-3B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl) and [PEFT (Parameter-Efficient Fine-T... | [] |
XiaomiMiMo/MiMo-V2.5-ASR | XiaomiMiMo | 2026-04-24T03:45:28Z | 0 | 19 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"automatic-speech-recognition",
"zh",
"en",
"yue",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-04-23T06:30:09Z | <div align="center">
<picture>
<source srcset="https://huggingface.co/XiaomiMiMo/MiMo-V2.5-ASR/resolve/main/assets/XiaomiMIMO.png" media="(prefers-color-scheme: dark)">
<img src="https://huggingface.co/XiaomiMiMo/MiMo-V2.5-ASR/resolve/main/assets/XiaomiMIMO.png" width="60%" alt="Xiaomi-MiMo" />
</picture>
<... | [] |
Senat1/dmx-pythia-160m-m7 | Senat1 | 2026-04-21T17:02:45Z | 0 | 0 | null | [
"gpt_neox",
"dmx",
"compressed",
"bfp-m7",
"base_model:EleutherAI/pythia-160m",
"base_model:finetune:EleutherAI/pythia-160m",
"license:apache-2.0",
"region:us"
] | null | 2026-04-21T17:02:36Z | # dmx-pythia-160m-m7
DMX M=7 compressed version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m).
## Stats
- **Source:** EleutherAI/pythia-160m (FP16)
- **Format:** DMX BFP M=7 (7 mantissa bits, block floating point)
- **File size:** 0.15 GB (54% smaller than FP16)
- **Quality:** Within GPU ... | [] |
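Block floating point keeps one shared exponent per block and an M-bit signed mantissa per element. A toy round-trip sketch of the idea; block size, rounding, and exponent choice here are generic assumptions, not the DMX format spec:

```python
import numpy as np

def bfp_roundtrip(block: np.ndarray, mantissa_bits: int = 7) -> np.ndarray:
    """Quantize/dequantize one block with a shared power-of-two exponent."""
    max_val = np.max(np.abs(block))
    if max_val == 0:
        return np.zeros_like(block)
    shared_exp = np.ceil(np.log2(max_val))            # block max fits the mantissa range
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    lo, hi = -(2 ** (mantissa_bits - 1)), 2 ** (mantissa_bits - 1) - 1
    return np.clip(np.round(block / scale), lo, hi) * scale

w = np.random.randn(16).astype(np.float32)   # one 16-element block (size assumed)
print(np.max(np.abs(w - bfp_roundtrip(w))))  # error stays near half an LSB of the scale
```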
arianaazarbal/qwen3-4b-20260105_220710_lc_rh_sot_base_seed1-1f6f5c-step60 | arianaazarbal | 2026-01-05T23:11:12Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-05T23:10:34Z | # qwen3-4b-20260105_220710_lc_rh_sot_base_seed1-1f6f5c-step60
## Experiment Info
- **Full Experiment Name**: `20260105_220710_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_baseline_seed1`
- **Short Name**: `20260105_220710_lc_rh_sot_base_seed1-1f6f5c`
- **Base Model**: `qwen/Qwen3-4B`
- **Step**: 60
## Us... | [] |
QuantLLM/Llama-3.2-3B-5bit-gguf | QuantLLM | 2025-12-20T20:18:39Z | 37 | 0 | gguf | [
"gguf",
"quantllm",
"llama-cpp",
"quantized",
"transformers",
"q5_k_m",
"en",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:quantized:meta-llama/Llama-3.2-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-12-20T20:18:05Z | # Llama-3.2-3B-5bit-gguf
  
## Description
This is **meta-llama/Llama-3.2-3B** converted to GGUF format for use with llam... | [] |
FrankCCCCC/ddpm-ema-92k_cfm-corr-200-ss0.0-ep500-ema-92k-run1 | FrankCCCCC | 2025-10-03T06:17:27Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"diffusers:DDPMCorrectorPipeline",
"region:us"
] | null | 2025-10-03T05:10:15Z | # cfm_corr_200_ss0.0_ep500_ema-92k-run1
This repository contains model artifacts and configuration files from the CFM_CORR_EMA_50k experiment.
## Contents
This folder contains:
- Model checkpoints and weights
- Configuration files (JSON)
- Scheduler and UNet components
- Training results and metadata
- Sample direct... | [] |
WindyWord/listen-windy-pro-engine | WindyWord | 2026-04-28T02:03:09Z | 0 | 0 | transformers | [
"transformers",
"onnx",
"safetensors",
"automatic-speech-recognition",
"whisper",
"windyword",
"english",
"multilingual",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-04-21T15:09:35Z | # WindyWord.ai STT — Windy Pro Engine
**Multilingual speech-to-text engine. Transcribes audio in 100+ languages, with English as the primary trained domain.**
## Profile
- **Architecture:** 1.55B params · whisper-large-v3
- **Profile:** premium / max accuracy
- **Base model:** [openai/whisper-large-v3](https://huggi... | [] |
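Since the engine is whisper-large-v3 based and ships transformers weights, transcription can go through the standard ASR pipeline; a minimal sketch (the chunking setting and audio file name are illustrative assumptions):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="WindyWord/listen-windy-pro-engine",
    chunk_length_s=30,  # long-form chunking; value is illustrative
)
print(asr("meeting.wav")["text"])
```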
smartdigitalnetworks/Z-Image-Turbo | smartdigitalnetworks | 2026-04-24T04:24:07Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"en",
"arxiv:2511.22699",
"arxiv:2511.22677",
"arxiv:2511.13649",
"license:apache-2.0",
"diffusers:ZImagePipeline",
"region:us"
] | text-to-image | 2026-04-24T04:24:07Z | <h1 align="center">⚡️- Image<br><sub><sup>An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer</sup></sub></h1>
<div align="center">
[](https://tongyi-mai.github.io/Z-Image-blog/) 
[![GitHub]... | [] |
danwil/qwen2.5-1.5b-sleeper-dpo | danwil | 2026-03-10T10:35:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"dpo",
"trl",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-10T10:08:58Z | # Model Card for qwen2.5-1.5b-sleeper-dpo
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a ti... | [
{
"start": 189,
"end": 192,
"text": "TRL",
"label": "training method",
"score": 0.8095608353614807
},
{
"start": 934,
"end": 937,
"text": "DPO",
"label": "training method",
"score": 0.820080041885376
},
{
"start": 1242,
"end": 1245,
"text": "DPO",
"lab... |
VoicenterTeam/hebrew-summary-gemma4-31b-dpo-v1 | VoicenterTeam | 2026-04-05T21:09:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"hf_jobs",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:google/gemma-4-31B-it",
"base_model:finetune:google/gemma-4-31B-it",
"endpoints_compatible",
"region:us"
] | null | 2026-04-05T20:39:31Z | # Model Card for hebrew-summary-gemma4-31b-dpo-v1
This model is a fine-tuned version of [google/gemma-4-31B-it](https://huggingface.co/google/gemma-4-31B-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time... | [
{
"start": 187,
"end": 190,
"text": "TRL",
"label": "training method",
"score": 0.7986195087432861
},
{
"start": 741,
"end": 744,
"text": "DPO",
"label": "training method",
"score": 0.8292074203491211
},
{
"start": 1030,
"end": 1033,
"text": "DPO",
"la... |
takao-nb/qwen3-4b-structured-submit02-lora | takao-nb | 2026-02-06T21:49:24Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-06T21:48:52Z | qwen3-4b-structured-submit02-lora
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve *... | [
{
"start": 135,
"end": 140,
"text": "QLoRA",
"label": "training method",
"score": 0.7965032458305359
},
{
"start": 189,
"end": 193,
"text": "LoRA",
"label": "training method",
"score": 0.7353262305259705
},
{
"start": 576,
"end": 581,
"text": "QLoRA",
... |
AristanderAI/qwen25-14b-ggc-health-sft-v4-dpo-fe-v3 | AristanderAI | 2026-02-25T11:05:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"dpo",
"trl",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-02-25T10:35:21Z | # Model Card for qwen25-14b-ggc-health-sft-v4-dpo-fe-v3
This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If ... | [
{
"start": 201,
"end": 204,
"text": "TRL",
"label": "training method",
"score": 0.8055856227874756
},
{
"start": 759,
"end": 762,
"text": "DPO",
"label": "training method",
"score": 0.8479645252227783
},
{
"start": 1048,
"end": 1051,
"text": "DPO",
"la... |
akahana/rag-contextual-indo-270m | akahana | 2025-11-29T04:49:53Z | 0 | 0 | null | [
"safetensors",
"gemma3_text",
"text-generation",
"conversational",
"id",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"region:us"
] | text-generation | 2025-11-29T03:03:04Z | ```python
import torch
from transformers import pipeline
model_id = "akahana/rag-contextual-indo-270m"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
prompt = """
Berdasarkan konteks berikut, jawab pertanyaan di bawah ini dengan jelas.
Jika jawaban... | [] |
matvgarcia/MagisAI1.6 | matvgarcia | 2026-03-24T20:07:29Z | 27 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"lora",
"axolotl",
"fine-tuned",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-03-24T20:07:08Z | # MagisAI1.6
This is a LoRA fine-tuned adapter for [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) trained using [Axolotl](https://github.com/axolotl-ai-cloud/axolotl).
## Model Details
- **Base Model:** Qwen/Qwen2.5-14B-Instruct
- **Training Method:** LORA
- **LoRA Rank:** 32
- **LoRA ... | [
{
"start": 24,
"end": 28,
"text": "LoRA",
"label": "training method",
"score": 0.7477355003356934
},
{
"start": 286,
"end": 290,
"text": "LORA",
"label": "training method",
"score": 0.8092405200004578
},
{
"start": 295,
"end": 299,
"text": "LoRA",
"lab... |
starf5/so101PickPinkChocoAct_policy | starf5 | 2025-08-24T04:13:13Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:starf5/so101PickPinkChoco",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-24T04:12:59Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
xummer/qwen3-8b-nli-lora-bn | xummer | 2026-03-16T12:45:10Z | 23 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-8B",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-8B",
"license:other",
"region:us"
] | text-generation | 2026-03-13T04:34:18Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bn
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the nli_bn_train dataset.
It ac... | [] |
vamsik2005/flax-poc-01-memory-corruption | vamsik2005 | 2026-02-18T03:41:57Z | 0 | 0 | null | [
"region:us"
] | null | 2026-02-17T19:40:25Z | # 🚨 FLAX VULNERABILITY POC #1: Memory Corruption via Shape Injection
**Severity**: CRITICAL
**CVE**: Pending Assignment
**Target**: google/flax - JAX Deep Learning Framework
**Vulnerability Type**: CWE-129, CWE-787 (Out-of-bounds Memory Access)
---
## Overview
This POC demonstrates a **critical memory corrup... | [] |
Cycl0/Molmo2-VideoPoint-4B-bnb-4bit | Cycl0 | 2025-12-26T02:07:34Z | 22 | 0 | transformers | [
"transformers",
"safetensors",
"molmo2",
"image-text-to-text",
"multimodal",
"olmo",
"molmo",
"video-text-to-text",
"custom_code",
"en",
"dataset:allenai/Molmo2-VideoPoint",
"dataset:allenai/pixmo-points",
"dataset:allenai/pixmo-cap",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model... | video-text-to-text | 2025-12-26T02:04:15Z | <img src="molmo_2_logo_RGB.png" alt="Logo for the Molmo2 Project" style="width: auto; height: 50px;">
# Molmo2-VideoPoint-4B
Molmo2 is a family of open vision-language models developed by the Allen Institute for AI (Ai2) that support image, video and multi-image understanding and grounding.
Molmo2 models are trained ... | [] |
Dmitry43243242/icd10-ru-subgroup-e | Dmitry43243242 | 2026-04-16T08:06:11Z | 0 | 0 | null | [
"safetensors",
"bert",
"medical",
"icd-10",
"multi-label-classification",
"russian",
"text-classification",
"ru",
"base_model:ai-forever/ruBert-base",
"base_model:finetune:ai-forever/ruBert-base",
"license:apache-2.0",
"region:us"
] | text-classification | 2026-04-16T08:05:29Z | # ICD-10 subgroup classifier — group E (Russian)
Multi-label classifier over 3-character ICD-10 subgroups inside chapter **E**.
Fine-tuned from [`ai-forever/ruBert-base`](https://huggingface.co/ai-forever/ruBert-base) on Russian clinical text.
## Intended use / Назначение
- **EN:** Decision-support signal for sugge... | [] |
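Multi-label classification over ICD-10 subgroups means taking a sigmoid per label rather than a softmax over labels. A minimal inference sketch; the 0.5 threshold and the sample sentence are illustrative assumptions:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "Dmitry43243242/icd10-ru-subgroup-e"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

text = "Жалобы на жажду и полиурию; гликемия натощак повышена."  # illustrative clinical note
with torch.no_grad():
    logits = model(**tokenizer(text, return_tensors="pt")).logits
probs = torch.sigmoid(logits)[0]                      # independent probability per subgroup
active = (probs > 0.5).nonzero().flatten().tolist()   # threshold is an assumption
print([model.config.id2label[i] for i in active])     # e.g. 3-character codes like "E11"
```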
mradermacher/Sophia-La.Sensible-2B-GGUF | mradermacher | 2026-01-10T20:14:52Z | 43 | 0 | transformers | [
"transformers",
"gguf",
"rp",
"roleplay",
"sillytavern",
"koboldcpp",
"merge",
"en",
"es",
"base_model:Novaciano/Sophia-La.Sensible-2B",
"base_model:quantized:Novaciano/Sophia-La.Sensible-2B",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-10T19:33:00Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
EldritchLabs/Blue20-Model_Stock-12B | EldritchLabs | 2026-03-15T14:55:00Z | 36 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"model_stock",
"nemo",
"merge",
"mergekit",
"conversational",
"en",
"base_model:Aleteian/Magnum-Opus-Galatea-MN-12B",
"base_model:merge:Aleteian/Magnum-Opus-Galatea-MN-12B",
"base_model:Azazelle/MN-Halide-12b-v1.0",
"base_model:m... | text-generation | 2026-03-15T04:20:25Z | ---
base_model:
- Aleteian/Magnum-Opus-Galatea-MN-12B
- Azazelle/MN-Halide-12b-v1.0
- crestf411/MN-Slush
- DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
- EldritchLabs/Altair-Stock-12B-v1
- EldritchLabs/Cactus-Dream-Horror-12B
- FallenMerick/MN-Violet-Lotus-12B
- Khetterman/AbominationScience-12B-v4
- Khetterman/D... | [] |
Cisco1963/llmplasticity-fi_en_instant_0.125_1-seed42 | Cisco1963 | 2026-04-02T19:18:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-02T14:18:34Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llmplasticity-fi_en_instant_0.125_1-seed42
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None ... | [] |
TurkuNLP/finnish-modernbert-large-edu | TurkuNLP | 2025-11-13T10:06:04Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"fill-mask",
"fi",
"sv",
"en",
"se",
"dataset:airtrain-ai/fineweb-edu-fortified",
"dataset:bigcode/starcoderdata",
"dataset:HuggingFaceTB/smollm-corpus",
"dataset:allenai/peS2o",
"dataset:uonlp/CulturaX",
"dataset:HPLT/HPLT2.0_cleaned",
"datas... | fill-mask | 2025-10-12T14:14:40Z | <img src="images/finnish_modernbert.png" alt="Finnish ModernBERT" width="600" height="600">
# Finnish ModernBERT Model Card
Finnish ModernBERT large-edu is an encoder model following the ModernBERT architecture, pretrained on Finnish, Swedish, English, Code, Latin, and Northern Sámi.
It was trained on 393.8B tokens. ... | [] |
jialicheng/unlearn-cl_samsum_t5-small_neggrad_6_42 | jialicheng | 2025-11-08T18:45:01Z | 0 | 0 | null | [
"t5",
"generated_from_trainer",
"dataset:samsum",
"base_model:google/t5-v1_1-small",
"base_model:finetune:google/t5-v1_1-small",
"license:apache-2.0",
"region:us"
] | null | 2025-11-08T18:44:37Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# samsum_42
This model is a fine-tuned version of [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) on the samsum... | [] |
DCAgent2/stack-bugsseq | DCAgent2 | 2025-11-30T13:17:36Z | 52 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"generated_from_trainer",
"conversational",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-30T13:09:35Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stack-bugsseq
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intende... | [] |
HPLT/madlad-400-1.0-ukr_Cyrl-llama-2b-100bt | HPLT | 2025-11-28T14:53:39Z | 0 | 0 | null | [
"safetensors",
"llama",
"uk",
"arxiv:2511.01066",
"license:apache-2.0",
"region:us"
] | null | 2025-11-27T14:38:31Z | # Model Description
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
* **Language:** Ukrainian
* **Developed by:** [HPLT](https://hplt-project.org/)
* **Paper:** [arxiv.org/abs/2511.01066](https://arxiv.org/abs/2511.01066)
* **Evaluation results:** [hf.co/datasets/HPLT/2508-... | [] |
broadfield-dev/bert-tiny-training-mid-tuned-12260542-tuned-12260554-tuned-12260643 | broadfield-dev | 2025-12-26T05:43:20Z | 3 | 0 | null | [
"safetensors",
"bert",
"token_cls",
"generated_from_trainer",
"dataset:ai4privacy/pii-masking-400k",
"base_model:broadfield-dev/bert-tiny-training-mid-tuned-12260542-tuned-12260554",
"base_model:finetune:broadfield-dev/bert-tiny-training-mid-tuned-12260542-tuned-12260554",
"license:mit",
"region:us"... | null | 2025-12-26T05:43:16Z | # bert-tiny-training-mid-tuned-12260542-tuned-12260554-tuned-12260643
This model is a fine-tuned version of [broadfield-dev/bert-tiny-training-mid-tuned-12260542-tuned-12260554](https://huggingface.co/broadfield-dev/bert-tiny-training-mid-tuned-12260542-tuned-12260554) on the [ai4privacy/pii-masking-400k](https://hugg... | [] |
ttotmoon/h2h2-fm | ttotmoon | 2026-04-09T22:48:04Z | 4 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"flow_matching",
"dataset:HuggingFaceVLA/libero",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-09T22:47:59Z | # Model Card for flow_matching
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://hugg... | [] |
priorcomputers/phi-3-medium-4k-instruct-cn-minimal-kr0.2-a0.1-creative | priorcomputers | 2026-02-13T14:01:28Z | 1 | 0 | null | [
"safetensors",
"phi3",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"custom_code",
"base_model:microsoft/Phi-3-medium-4k-instruct",
"base_model:finetune:microsoft/Phi-3-medium-4k-instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-13T13:59:17Z | # phi-3-medium-4k-instruct-cn-minimal-kr0.2-a0.1-creative
This is a **CreativityNeuro (CN)** modified version of [microsoft/Phi-3-medium-4k-instruct](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct).
## Model Details
- **Base Model**: microsoft/Phi-3-medium-4k-instruct
- **Modification**: CreativityNeuro w... | [] |
yamero999/chess-piece-detection-yolo11n | yamero999 | 2025-05-31T17:02:02Z | 31 | 1 | null | [
"pytorch",
"onnx",
"yolo",
"chess",
"object-detection",
"pieces",
"license:apache-2.0",
"region:us"
] | object-detection | 2025-05-29T05:55:45Z | # Chess Piece Detection YOLO11n
## Model Description
YOLO11n model optimized for detecting and classifying chess pieces on a board.
## Classes
- **White pieces**: Pawn, Knight, Bishop, Rook, Queen, King
- **Black pieces**: pawn, knight, bishop, rook, queen, king
## Performance
- **Input Size**: 416x416
- ... | [] |
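Assuming the repo ships a standard Ultralytics `.pt` checkpoint (the tags list pytorch and onnx), inference runs through the `ultralytics` package; the weight file name inside the repo is an assumption:

```python
from huggingface_hub import hf_hub_download
from ultralytics import YOLO

weights = hf_hub_download("yamero999/chess-piece-detection-yolo11n", "best.pt")  # file name assumed
model = YOLO(weights)

results = model.predict("board.jpg", imgsz=416)  # 416x416 per the card's Performance section
for box in results[0].boxes:
    print(results[0].names[int(box.cls)], float(box.conf))  # e.g. "Queen" 0.91
```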
adamo1139/GPT-OSS-20B-HESOYAM-1108-WIP-CHATML | adamo1139 | 2025-08-11T14:21:46Z | 1 | 2 | null | [
"safetensors",
"gpt_oss",
"dataset:adamo1139/HESOYAM_v0.4",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"region:us"
] | null | 2025-08-11T13:53:07Z | GPT-OSS-20B fine-tuned on adamo1139/HESOYAM_v0.4 dataset, 1 epoch, chatml format that erases reasoning.
1024 rank, 128 alpha QLoRA made with Unsloth.
It will undergo further preference alignment once the issues that currently prevent it are patched out.
Total batch size 16, learning rate 0.0002 ... | [
{
"start": 325,
"end": 340,
"text": "cosine schedule",
"label": "training method",
"score": 0.7183345556259155
}
] |
nvidia/stt_be_conformer_transducer_large | nvidia | 2025-02-18T13:34:27Z | 25 | 6 | nemo | [
"nemo",
"automatic-speech-recognition",
"speech",
"audio",
"Transducer",
"Conformer",
"Transformer",
"pytorch",
"NeMo",
"hf-asr-leaderboard",
"be",
"dataset:mozilla-foundation/common_voice_10_0",
"arxiv:2005.08100",
"license:cc-by-4.0",
"model-index",
"region:us"
] | automatic-speech-recognition | 2022-09-30T11:04:10Z | # NVIDIA Conformer-Transducer Large (be-Bel)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architect... | [] |