modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card | entities
|---|---|---|---|---|---|---|---|---|---|---|
RthItalia/NanoLLM-Qwen-V3 | RthItalia | 2026-04-01T09:48:08Z | 0 | 0 | transformers | [
"transformers",
"quantization",
"sub-bit",
"qwen",
"qwen2.5",
"logic-preserving",
"bit-packing",
"compression",
"inference",
"text-generation",
"en",
"zh",
"it",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-31T22:51:25Z | # NanoLLM Qwen V3.0 — Logic-Preserving Multi-Bit Quantization
> **TL;DR**: Qwen-2.5 (3B / 7B / 14B) quantized with a proprietary multi-bit
> pipeline achieving **≥99.2% cosine similarity** to FP16 at up to **79% VRAM
> reduction** — without language mode collapse.
---
## Why NanoLLM V3.0?
Most quantization method... | [] |
bullerwins/Qwen3-VL-32B-Instruct-GGUF | bullerwins | 2025-10-30T17:09:47Z | 140 | 2 | transformers | [
"transformers",
"gguf",
"image-text-to-text",
"arxiv:2505.09388",
"arxiv:2502.13923",
"arxiv:2409.12191",
"arxiv:2308.12966",
"base_model:Qwen/Qwen3-VL-32B-Instruct",
"base_model:quantized:Qwen/Qwen3-VL-32B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational... | image-text-to-text | 2025-10-30T16:48:59Z | Usage:
`llama-server --model Qwen3-VL-32B-Instruct-Q8_0.gguf --ctx-size 32000 -ngl 99 --host 0.0.0.0 --port 5000 --mmproj Qwen3-VL-32B-Instruct.mmproj`
You need the latest [commit](https://github.com/ggml-org/llama.cpp/commit/d261223d24e97f2df50220e4a5b7f0adb69bba81) from llama.cpp
<a href="https://chat.qwenlm.ai/... | [] |
Vankyo/vankyopromocode | Vankyo | 2025-10-03T09:12:42Z | 0 | 0 | null | [
"region:us"
] | null | 2025-10-03T08:55:42Z | <h1>🎉 <strong>Vankyo Discount Code: TAKE10 – Up to 55% Off on Best-Selling Projectors in 2025!</strong> 🎉</h1>
<p>Looking to upgrade your home entertainment, office setup, or classroom experience? Now is the perfect time! VANKYO is offering <strong>massive discounts of up to 45%</strong> across a wide range of prem... | [] |
AngelSlim/Qwen3-VL-2B-Instruct-FP8-Static | AngelSlim | 2025-11-05T11:49:06Z | 3 | 0 | null | [
"safetensors",
"qwen3_vl",
"arxiv:2509.24248",
"arxiv:2509.23809",
"fp8",
"region:us"
] | null | 2025-11-03T08:01:15Z | <p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://github.com/Tencent/AngelSlim/blob/main/docs/source/assets/logos/angelslim_logo_light.png?raw=true">
<img alt="AngelSlim" src="https://github.com/Tencent/AngelSlim/blob/main/docs/source/assets/logos/angelslim_logo.png?raw... | [] |
mradermacher/Symbiotic-1B-GGUF | mradermacher | 2026-04-15T12:14:12Z | 97 | 0 | transformers | [
"transformers",
"gguf",
"qwen3",
"symbioticai",
"symbioticllm",
"discrepancy_calculus",
"ai",
"llm",
"text",
"convergentintel",
"en",
"dataset:0xZee/dataset-CoT-Advanced-Calculus-268",
"base_model:reaperdoesntknow/Symbiotic-1B",
"base_model:quantized:reaperdoesntknow/Symbiotic-1B",
"lice... | null | 2025-05-08T13:12:29Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/reaperdoesntknow/Symbiotic-1B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model ... | [] |
ChenyuEcho/fine_tuned_model | ChenyuEcho | 2026-03-17T22:07:09Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:2392",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:Qwen/Qwen3-Embedding-0.6B",
"base_model:finetune:Qwen/Qwen3-Embe... | sentence-similarity | 2026-03-17T22:07:00Z | # SentenceTransformer based on Qwen/Qwen3-Embedding-0.6B
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic text... | [] |
jialicheng/unlearn_cifar10_resnet-34_bad_teaching_4_87 | jialicheng | 2025-10-22T15:37:08Z | 0 | 0 | null | [
"safetensors",
"resnet",
"image-classification",
"vision",
"generated_from_trainer",
"base_model:microsoft/resnet-34",
"base_model:finetune:microsoft/resnet-34",
"license:apache-2.0",
"region:us"
] | image-classification | 2025-10-22T15:36:56Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 87
This model is a fine-tuned version of [microsoft/resnet-34](https://huggingface.co/microsoft/resnet-34) on the cifar10 dataset... | [] |
mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF | mlabonne | 2024-08-03T22:11:58Z | 24,379 | 171 | transformers | [
"transformers",
"gguf",
"abliterated",
"uncensored",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-07-24T22:44:19Z | # 🦙 Meta-Llama-3.1-8B-Instruct-abliterated

<center>🦙 <a href="https://huggingface.co/mlabonne/Llama-3.1-70B-Instruct-lorablated"><i>Llama 3.1 70B Instruct lorablated</i></a></center>
This is an un... | [] |
qing-yao/handcoded_n1000_nb300k_70m_ep1_lr1e-4_seed42 | qing-yao | 2025-12-27T05:57:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"base_model:finetune:EleutherAI/pythia-70m",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-27T05:57:23Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# handcoded_n1000_nb300k_70m_ep1_lr1e-4_seed42
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co... | [] |
swapnil7777/grpo-gxpo-qwen-1-5b-0-5-k-3-shutoff-trajectory-aware-hendrycks-math-seed42-20260421-032-23c1fce8 | swapnil7777 | 2026-04-23T04:35:29Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gxpo",
"checkpoint",
"lora",
"region:us"
] | null | 2026-04-23T04:35:16Z | # swapnil7777/grpo-gxpo-qwen-1-5b-0-5-k-3-shutoff-trajectory-aware-hendrycks-math-seed42-20260421-032-23c1fce8
This repo was uploaded from a local training checkpoint.
- Source run: `gxpo_qwen-1.5B_0.5_k_3_shutoff_trajectory_aware_hendrycks_math_seed42_20260421_032210`
- Checkpoint: `best_checkpoint`
- Local path: `/... | [] |
muyao-liu/lego_sorter_v2_model | muyao-liu | 2026-04-08T22:31:57Z | 22 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:muyao-liu/lego_sorter_v2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-08T22:27:50Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
Rodion111/onnx-functionproto-dos-poc | Rodion111 | 2026-04-07T19:19:32Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2026-04-07T19:19:25Z | # ONNX FunctionProto DoS PoC
**CVE:** TBD (submitted to huntr.com)
**Severity:** CVSS 7.5 High (AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H)
**CWE:** CWE-400 Uncontrolled Resource Consumption
**Affected:** onnx <= 1.21.0
## Description
Non-recursive FunctionProto call graph expansion causes exponential CPU exhaustion during... | [] |
Thireus/Kimi-K2-Instruct-0905-THIREUS-Q3_K_R4-SPECIAL_SPLIT | Thireus | 2026-02-12T12:11:00Z | 0 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-09-15T03:26:12Z | # Kimi-K2-Instruct-0905
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Kimi-K2-Instruct-0905-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Kimi-K2-Instruct-0905 model (official repo: https://huggingface.co/moonshotai/Kimi-K2-Instruct-0905).... | [] |
typhoon-ai/typhoon2-qwen2vl-7b-vision-instruct | typhoon-ai | 2025-03-31T10:36:06Z | 472 | 20 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"text-generation",
"conversational",
"th",
"en",
"arxiv:2412.13702",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | text-generation | 2024-12-10T09:18:32Z | # **Typhoon2-Vision**
**Typhoon2-qwen2vl-7b-vision-instruct** is a Thai 🇹🇭 vision-language model designed to support both image and video inputs. While Qwen2-VL is built to handle both image and video processing tasks, Typhoon2-VL is specifically optimized for image-based applications.
For the technical report, please ... | [] |
zeliang0426/DS_Qwen25-7-cache-lora-3k | zeliang0426 | 2025-11-22T11:14:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-11-20T23:20:49Z | # Model Card for DS_Qwen25-7-cache-lora-3k
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past o... | [] |
WindyWord/translate-he-sv | WindyWord | 2026-04-20T13:29:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"hebrew",
"swedish",
"he",
"sv",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-18T04:15:36Z | # WindyWord.ai Translation — Hebrew → Swedish
**Translates Hebrew → Swedish.**
**Quality Rating: ⭐⭐⭐⭐ (4.0★ Standard)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 4.0★ ⭐⭐⭐⭐
- **Tier:** Standard
- **Composit... | [] |
fpadovani/eng_after_indomain_577_2000 | fpadovani | 2026-04-29T07:30:01Z | 213 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-24T17:14:08Z | # Model Card for eng_after_indomain_577_2000
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past... | [] |
espnet/owsm_ctc_v3.2_ft_1B | espnet | 2026-04-07T13:32:48Z | 20 | 5 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"speech-translation",
"language-identification",
"multilingual",
"dataset:owsm_v3.2_ctc",
"arxiv:2406.09282",
"arxiv:2401.16658",
"arxiv:2309.13876",
"base_model:espnet/owsm_ctc_v3.2_ft_1B",
"base_model:finetune:espnet/owsm_ctc_v3.2_ft_1B",
... | automatic-speech-recognition | 2024-09-24T18:25:20Z | [OWSM-CTC](https://aclanthology.org/2024.acl-long.549/) (Peng et al., ACL 2024) is an encoder-only speech foundation model based on hierarchical multi-task self-conditioned CTC.
This model is trained on 180k hours of public audio data for multilingual speech recognition, any-to-any speech translation, and language ide... | [] |
OpenGVLab/InternVL3_5-1B-Pretrained | OpenGVLab | 2025-08-29T17:57:08Z | 261 | 1 | transformers | [
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"internvl",
"custom_code",
"image-text-to-text",
"conversational",
"multilingual",
"dataset:OpenGVLab/MMPR-v1.2",
"dataset:OpenGVLab/MMPR-Tiny",
"arxiv:2312.14238",
"arxiv:2404.16821",
"arxiv:2412.05271",
"arxiv:2411.1... | image-text-to-text | 2025-08-25T16:38:49Z | # InternVL3_5-1B-Pretrained
[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[📜 InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[📜 InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[📜 InternVL 2.5\]](https://huggingface.co/papers/2412.05271) [\[📜 InternVL2.5-MPO\]](https://hugging... | [] |
LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-bul-Cyrl | LumiOpen | 2025-08-26T07:54:01Z | 0 | 0 | null | [
"safetensors",
"xlm-roberta",
"bul",
"dataset:LumiOpen/hpltv2-llama33-edu-annotation",
"license:apache-2.0",
"region:us"
] | null | 2025-08-26T07:53:31Z | ---
language:
- bul
license: apache-2.0
datasets:
- LumiOpen/hpltv2-llama33-edu-annotation
---
# Llama-HPLT-edu-Bulgarian classifier
## Model summary
This is a classifier for judging the educational content of Bulgarian (bul-Cyrl) web pages. It was developed to filter educational content from [HPLT v2](https://hplt-p... | [] |
Org-Huang/qwen3-vl-2b-instruct-trl-sft-CLEVR-explanation | Org-Huang | 2025-11-19T05:57:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen3-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen3-VL-2B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-11-19T05:34:33Z | # Model Card for qwen3-vl-2b-instruct-trl-sft-CLEVR-explanation
This model is a fine-tuned version of [Qwen/Qwen3-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen3-VL-2B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
questio... | [] |
arjunverma2004/LiquidAI-grammarly-lora | arjunverma2004 | 2026-01-25T17:38:14Z | 2 | 0 | peft | [
"peft",
"safetensors",
"lora",
"qlora",
"grammar-correction",
"adapter",
"adapters",
"text-generation",
"conversational",
"base_model:LiquidAI/LFM2.5-1.2B-Instruct",
"base_model:adapter:LiquidAI/LFM2.5-1.2B-Instruct",
"region:us"
] | text-generation | 2026-01-25T17:11:14Z | license: apache-2.0
language:
- en
datasets:
- jhu-clsp/jfleg
---
# Model Card for LiquidAI Grammarly (LoRA)
## Model Details
### Model Description
This repository contains **LoRA adapter weights** fine-tuned for **English grammar correction**.
The adapters are trained on top of the **LiquidAI/LFM2.5-1.2B-Instruct... | [] |
ACE-Step/acestep-v15-xl-turbo-diffusers | ACE-Step | 2026-05-01T09:36:31Z | 194 | 12 | diffusers | [
"diffusers",
"safetensors",
"acestep",
"audio",
"music",
"text-to-music",
"flow-matching",
"text-to-audio",
"base_model:ACE-Step/acestep-v15-xl-turbo",
"base_model:finetune:ACE-Step/acestep-v15-xl-turbo",
"license:mit",
"diffusers:AceStepPipeline",
"region:us"
] | text-to-audio | 2026-04-22T06:44:35Z | # ACE-Step v1.5 XL Turbo Diffusers
Diffusers-format checkpoint of [ACE-Step v1.5 XL Turbo](https://huggingface.co/ACE-Step/acestep-v15-xl-turbo) — the guidance-distilled 5B-parameter flow-matching DiT for text-to-music generation (`hidden_size=2560`, 32 layers, 32 heads; `encoder_hidden_size=2048` on the condition enc... | [] |
oriyonay/musicnn-pytorch | oriyonay | 2026-01-30T22:51:48Z | 256 | 0 | null | [
"safetensors",
"musicnn",
"audio",
"music",
"music-tagging",
"pytorch",
"custom_code",
"license:apache-2.0",
"region:us"
] | null | 2026-01-30T06:39:24Z | # MusicNN-PyTorch
This is a PyTorch reimplementation of the [MusicNN](https://github.com/jordipons/musicnn) library for music audio tagging.
It contains the model architecture and converted weights from the original TensorFlow 1.x checkpoints.
## Supported Models
- `MTT_musicnn`: Trained on MagnaTagATune (50 tags) ... | [] |
mradermacher/Nizami-1.7B-GGUF | mradermacher | 2026-03-13T11:14:36Z | 629 | 0 | transformers | [
"transformers",
"gguf",
"base_model:adapter:unsloth/Qwen3-1.7B",
"lora",
"sft",
"trl",
"unsloth",
"az",
"dataset:az-llm/az_academic_qa-v1.0",
"dataset:az-llm/az_creative-v1.0",
"dataset:tahmaz/azerbaijani_text_math_qa1",
"dataset:omar07ibrahim/Alpaca_Stanford_Azerbaijan",
"base_model:khazara... | null | 2026-03-13T07:10:30Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
contemmcm/904d6ef96f06c8b2754d07e90310f7f8 | contemmcm | 2025-11-23T08:14:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-classification",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-21T18:24:42Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 904d6ef96f06c8b2754d07e90310f7f8
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) ... | [
{
"start": 492,
"end": 500,
"text": "F1 Macro",
"label": "training method",
"score": 0.7266490459442139
}
] |
MattBou00/ROUND5ACTUALRETRYRUNNINGCODE-checkpoint-epoch-100 | MattBou00 | 2025-11-21T15:38:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2025-11-21T15:37:47Z | # TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL... | [] |
giovannidemuri/llama8b-er-afg-v77-seed2-hx | giovannidemuri | 2025-08-11T20:42:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-09T22:35:56Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama8b-er-afg-v77-seed2-hx
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Lla... | [] |
ceva-automation-sg/my_act-policy-50 | ceva-automation-sg | 2025-10-31T01:15:14Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:ceva-automation-sg/smolVLA_cali",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-10-30T20:13:29Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
mateoguaman/paligemma2-3b-pt-224-sft-lora-vamos_50pct_traj_25pct_atraj_25pct_anno | mateoguaman | 2025-09-15T15:05:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"paligemma",
"image-text-to-text",
"generated_from_trainer",
"alignment-handbook",
"dataset:mateoguaman/vamos_50pct_traj_25pct_atraj_25pct_anno",
"base_model:google/paligemma2-3b-pt-224",
"base_model:finetune:google/paligemma2-3b-pt-224",
"text-generation-inference",... | image-text-to-text | 2025-09-15T15:04:53Z | # Model Card for google/paligemma2-3b-pt-224
This model is a fine-tuned version of [google/paligemma2-3b-pt-224](https://huggingface.co/google/paligemma2-3b-pt-224) on the [mateoguaman/vamos_50pct_traj_25pct_atraj_25pct_anno](https://huggingface.co/datasets/mateoguaman/vamos_50pct_traj_25pct_atraj_25pct_anno) dataset.... | [] |
eeoonn/simpo-cnndm-causal-low-mid | eeoonn | 2026-02-20T06:20:42Z | 1 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"cpo",
"lora",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:2401.08417",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"region:us"
] | text-generation | 2026-02-20T06:20:26Z | # Model Card for simpo-low-mid-0220_0147
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If y... | [] |
GMorgulis/Llama-3.2-3B-Instruct-dog-STEER0.228125-ft4.43 | GMorgulis | 2026-03-16T14:02:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-16T13:47:24Z | # Model Card for Llama-3.2-3B-Instruct-dog-STEER0.228125-ft4.43
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pip... | [] |
pankajmathur/RenCoder-Ministral-3-8B-Instruct-2512-Q4_K_M-GGUF | pankajmathur | 2025-12-15T20:51:33Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"fr",
"de",
"es",
"it",
"pt",
"zh",
"ja",
"ru",
"ko",
"dataset:pankajmathur/OpenThoughts-Agent-v1-SFT-cleaned",
"dataset:pankajmathur/orca_mini_v8_sharegpt_format",
"dataset:pankajmathur/orca_mini_v1_data... | text-generation | 2025-12-15T20:51:06Z | # pankajmathur/RenCoder-Ministral-3-8B-Instruct-2512-Q4_K_M-GGUF
This model was converted to GGUF format from [`pankajmathur/RenCoder-Ministral-3-8B-Instruct-2512`](https://huggingface.co/pankajmathur/RenCoder-Ministral-3-8B-Instruct-2512) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/g... | [] |
dima806/fruit_100_types_image_detection | dima806 | 2024-10-19T10:34:50Z | 25 | 5 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-11-07T19:41:49Z | Given an image, returns the fruit type with about 85% accuracy.
See https://www.kaggle.com/code/dima806/fruit-100-types-image-detection-vit for more details.
```
Classification report:
precision recall f1-score support
abiu 0.7799 0.9056 0.8380 180
acai ... | [] |
PureSky123/paraphrase-multilingual-mpnet-base-v2 | PureSky123 | 2026-03-16T08:32:22Z | 10 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"onnx",
"safetensors",
"openvino",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"text-embeddings-inference",
"multilingual",
"ar",
"bg",
"ca",
"cs",
"da",
"de",
"el",
"en",
"es",
"et",
"fa",
"fi",
... | sentence-similarity | 2026-03-16T08:32:20Z | # sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model become... | [] |
eridon-pro/lora_structeval_t_qwen3_4b-7 | eridon-pro | 2026-02-06T11:12:16Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:daichira/structured-5k-mix-sft",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-06T11:11:53Z | qwen3-4b-structured-output-lora-7
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve *... | [
{
"start": 135,
"end": 140,
"text": "QLoRA",
"label": "training method",
"score": 0.7975426912307739
}
] |
seaview28/MyGemmaNPC | seaview28 | 2025-08-15T06:38:45Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-15T06:34:55Z | # Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could ... | [] |
kaiji1222/qwen3-4b-structured-output-lora-rev.01 | kaiji1222 | 2026-02-28T06:47:31Z | 23 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-28T06:47:22Z | qwen3-4b-structured-output-lora-rev.01
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to impr... | [
{
"start": 140,
"end": 145,
"text": "QLoRA",
"label": "training method",
"score": 0.7937716245651245
},
{
"start": 194,
"end": 198,
"text": "LoRA",
"label": "training method",
"score": 0.7002652287483215
}
] |
ozgraslan/diffusion_dino_pusht_3 | ozgraslan | 2025-11-26T11:14:10Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"diffusion_dino",
"robotics",
"dataset:ozgraslan/pusht_224_new",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-26T11:13:55Z | # Model Card for diffusion_dino
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://hug... | [] |
noumenon-labs/Earlybird-fast | noumenon-labs | 2026-02-26T11:14:07Z | 136 | 0 | null | [
"safetensors",
"roberta",
"ai-detection",
"text-classification",
"distilroberta",
"worm",
"generated-text-detection",
"en",
"dataset:noumenon-labs/Mega-WORM-Cleaned",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:mit",
"model-index"... | text-classification | 2026-02-24T21:03:04Z | # 🦅 Earlybird: Fast & Accurate AI Text Detection
**Earlybird** is a lightweight, high-speed AI text detection model designed to classify text as either **Human-Written** or **AI-Generated**.
Built on the efficient **DistilRoBERTa** architecture, it was fine-tuned on the **W.O.R.M. (Wait, Original or Machine)** datas... | [] |
Gunjan/Gemma-TIMMY-MLDL-Maths-v5 | Gunjan | 2026-04-11T12:10:01Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"gemma-2",
"lora",
"unsloth",
"trl",
"sft",
"math",
"machine-learning",
"deep-learning",
"backpropagation",
"semantic-search",
"tensor-math",
"text-generation",
"conversational",
"base_model:unsloth/gemma-2-2b-it-bnb-4bit",
"base_model:adapter:unslot... | text-generation | 2026-04-11T12:08:00Z | # Gemma--TIMMY-MLDL-Maths-v5
Gemma--TIMMY-MLDL-Maths-v5 is a LoRA adapter trained for ML/DL math explanations. It was built from a curated synthetic dataset covering deep-learning calculations such as cross entropy, backpropagation, optimizers, tensor shapes, metrics, cosine similarity, and semantic-search scoring.
T... | [] |
spartan8806/chimera-v3-qwen-1.5b | spartan8806 | 2026-02-07T00:54:57Z | 2 | 0 | null | [
"safetensors",
"qwen2",
"chimera",
"neural-foam",
"growth",
"atles",
"tool-use",
"qwen2.5",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-07T00:52:40Z | # Chimera V3 — Qwen 1.5B with Neural Foam Growth
A fine-tuned Qwen2.5-1.5B-Instruct with **custom tool use, identity, and autonomous reasoning** capabilities, trained using the Neural Foam growth architecture that grows new neurons during training.
## Key Results (Log-Likelihood Eval, n=200)
| Capability | Chimera V... | [
{
"start": 662,
"end": 670,
"text": "ARC-Easy",
"label": "training method",
"score": 0.7151767015457153
}
] |
unsloth/Olmo-3-32B-Think | unsloth | 2025-11-21T00:22:03Z | 8 | 0 | null | [
"safetensors",
"olmo3",
"unsloth",
"en",
"base_model:allenai/Olmo-3-32B-Think",
"base_model:finetune:allenai/Olmo-3-32B-Think",
"license:apache-2.0",
"region:us"
] | null | 2025-11-21T00:21:47Z | <div>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/u... | [] |
autoweeb/Qwen-Image-Edit-2509-Photo-to-Anime | autoweeb | 2025-11-11T23:55:01Z | 232,860 | 118 | diffusers | [
"diffusers",
"anime",
"lora",
"qwen",
"qwen-image",
"qwen-image-edit",
"qwen-image-edit-2509",
"manga",
"image-to-image",
"photo2anime",
"en",
"zh",
"base_model:Qwen/Qwen-Image-Edit-2509",
"base_model:adapter:Qwen/Qwen-Image-Edit-2509",
"license:mit",
"region:us"
] | image-to-image | 2025-11-07T22:57:05Z | # Qwen-Image-Edit-2509 Photo-to-Anime
Turns any photo into an anime image.
| Photo | Anime |
|---------|---------|
|<img src="https://huggingface.co/autoweeb/Qwen-Image-Edit-2509-Photo-to-Anime/resolve/main/examples/control_05.jpeg?download=true" width="300px" />|<img src="https://huggingface.co/autoweeb/Qwen-Image-E... | [] |
Respair/Higgs_Codec_Extended | Respair | 2025-08-14T10:01:44Z | 4 | 5 | null | [
"codec",
"audio_tokenizer",
"audio_codec",
"license:mit",
"region:us"
] | null | 2025-08-13T22:31:04Z | [](https://github.com/Respaired/Higgs_Codec_Extended)
This is an ongoing project. It is a modified version of the Higgs-Boson audio tokenizer, and you can fully train it. All scripts have been tested.
A few notes howe... | [] |
mrfakename/OpenF5-TTS-Base | mrfakename | 2025-05-17T15:38:54Z | 161 | 85 | f5-tts | [
"f5-tts",
"voice cloning",
"text-to-speech",
"en",
"license:apache-2.0",
"region:us"
] | text-to-speech | 2025-05-03T06:29:55Z | # OpenF5 TTS Base (Alpha)
OpenF5 TTS is an open-weight text-to-speech model with support for zero-shot voice cloning based on and trained with the [F5-TTS](https://github.com/SWivid/F5-TTS) framework.
The main difference from the original F5-TTS model is the license of the model. Due to the training data, the F5-TTS ... | [] |
mradermacher/BFS-Prover-V2-32B-GGUF | mradermacher | 2025-10-01T00:02:19Z | 85 | 0 | transformers | [
"transformers",
"gguf",
"lean4",
"step-prover",
"en",
"base_model:ByteDance-Seed/BFS-Prover-V2-32B",
"base_model:quantized:ByteDance-Seed/BFS-Prover-V2-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-30T15:24:31Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
AEmotionStudio/basic-pitch-onnx-models | AEmotionStudio | 2026-04-26T11:51:15Z | 0 | 0 | null | [
"onnx",
"music",
"midi",
"audio-to-midi",
"polyphonic-transcription",
"basic-pitch",
"en",
"arxiv:2203.09893",
"license:apache-2.0",
"region:us"
] | null | 2026-04-26T11:51:11Z | # Basic Pitch (ONNX) Mirror
Vendored copy of Spotify's [Basic Pitch](https://github.com/spotify/basic-pitch)
ICASSP 2022 polyphonic transcription model in ONNX format, re-hosted for use
in the [MAESTRO AI Workstation](https://github.com/AEmotionStudio).
## What this model does
**Audio → MIDI polyphonic transcription... | [] |
Zephyr271828/sdar-1.7b-mtp-3lyr | Zephyr271828 | 2026-04-13T05:49:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"sdar",
"feature-extraction",
"speculative-decoding",
"discrete-diffusion",
"masked-diffusion",
"mtp",
"custom_code",
"license:apache-2.0",
"region:us"
] | feature-extraction | 2026-04-13T05:49:13Z | # SDAR-1.7B-MTP-3lyr
This is the **MTP (Multi-Token Prediction) draft head** for
[SDAR-1.7B-Chat-b16](https://huggingface.co/Zephyr271828/SDAR-1_7B-Chat-b16),
a 1.7-billion-parameter discrete diffusion language model based on Qwen3.
SDAR generates text via **block diffusion**: a block of masked tokens is
iteratively ... | [] |
bnicenboim/gpt2-spanish-onnx | bnicenboim | 2026-03-13T15:55:27Z | 14 | 0 | transformers.js | [
"transformers.js",
"onnx",
"gpt2",
"text-generation",
"spanish",
"causal-lm",
"es",
"base_model:DeepESP/gpt2-spanish",
"base_model:quantized:DeepESP/gpt2-spanish",
"license:mit",
"region:us"
] | text-generation | 2026-03-13T15:54:40Z | # gpt2-spanish (ONNX)
ONNX export of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) for use in the browser with [Transformers.js](https://huggingface.co/docs/transformers.js).
## Usage with Transformers.js
```javascript
import { pipeline } from '@huggingface/transformers';
const generator = awai... | [
{
"start": 16,
"end": 20,
"text": "ONNX",
"label": "training method",
"score": 0.7497921586036682
},
{
"start": 23,
"end": 27,
"text": "ONNX",
"label": "training method",
"score": 0.7902330160140991
},
{
"start": 656,
"end": 660,
"text": "ONNX",
"label... |
sizzlebop/LexiFreak-8B-Unleashed-Q8_0-GGUF | sizzlebop | 2025-10-05T06:10:40Z | 17 | 0 | null | [
"gguf",
"llama3",
"roleplay",
"uncensored",
"finetune",
"brainrot",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:soundTeam/LexiFreak-8B-Unleashed",
"base_model:quantized:soundTeam/LexiFreak-8B-Unleashed",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2025-10-05T06:09:58Z | # sizzlebop/LexiFreak-8B-Unleashed-Q8_0-GGUF
This model was converted to GGUF format from [`soundTeam/LexiFreak-8B-Unleashed`](https://huggingface.co/soundTeam/LexiFreak-8B-Unleashed) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original mode... | [] |
ISdept/piper_arm | ISdept | 2026-03-09T23:14:20Z | 17 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"diffusion",
"dataset:ISdept/piper-pick-place-depth",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-13T01:22:15Z | # Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has ... | [] |
jinx2321/mt5-tagged-1e4-paper-distilled-byt5-small-7 | jinx2321 | 2026-02-06T00:43:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/byt5-small",
"base_model:finetune:google/byt5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2026-02-05T22:44:32Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-tagged-1e4-paper-distilled-byt5-small-7
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/goog... | [] |
AnupGoenka/autotrain-pht68-oinhv | AnupGoenka | 2026-01-25T08:19:53Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-01-25T08:18:45Z | ---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: google-bert/bert-base-uncased
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 1.7751829624176025
f1_macro: 0.34523809523809523
f1_micro: 0.428571... | [
{
"start": 39,
"end": 48,
"text": "autotrain",
"label": "training method",
"score": 0.8003754019737244
},
{
"start": 175,
"end": 184,
"text": "AutoTrain",
"label": "training method",
"score": 0.7344896793365479
}
] |
diskrot/acestep-v15-turbo-continuous-diskrot | diskrot | 2026-02-22T23:54:56Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"acestep",
"feature-extraction",
"audio",
"music",
"text2music",
"text-to-audio",
"custom_code",
"arxiv:2602.00744",
"license:mit",
"region:us"
] | text-to-audio | 2026-02-22T23:54:55Z | <h1 align="center">ACE-Step 1.5</h1>
<h1 align="center">Pushing the Boundaries of Open-Source Music Generation</h1>
<p align="center">
<a href="https://ace-step.github.io/ace-step-v1.5.github.io/">Project</a> |
<a href="https://huggingface.co/collections/ACE-Step/ace-step-15">Hugging Face</a> |
<a href="htt... | [] |
ylu-pdm/warsaw-hack-right-xvla-vanilla | ylu-pdm | 2026-01-25T06:08:22Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"xvla",
"dataset:ylu-pdm/warsaw-hack-right",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-25T06:07:20Z | # Model Card for xvla
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.c... | [] |
barflyman/bert-pii-detect-onnx | barflyman | 2025-11-18T02:35:48Z | 1 | 0 | null | [
"onnx",
"bert",
"token-classification",
"ner",
"pii",
"privacy",
"personal-information",
"en",
"dataset:custom",
"arxiv:1810.04805",
"license:apache-2.0",
"region:us"
] | token-classification | 2025-11-18T01:49:53Z | # BERT PII Detection Model (ONNX)
This model is a BERT-based token classification model fine-tuned for detecting Personally Identifiable Information (PII) in text. The model is provided in ONNX format for efficient inference across different platforms.
## Model Description
- **Model Type:** Token Classificatio... | [] |
AnonymousCS/bert-chinese-weibo-60p-v3 | AnonymousCS | 2026-02-03T00:41:24Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-chinese",
"base_model:finetune:google-bert/bert-base-chinese",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-02-03T00:40:46Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-chinese-weibo-60p-v3
This model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-ber... | [] |
zelk12/MT-Gen4_gemma-3-12B-Q6_K-GGUF | zelk12 | 2026-01-19T19:11:38Z | 9 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"zelk12/MT-Gen4_gemma-3-12B_flatten",
"zelk12/26_05_2025_Test_LazyMergekit_gemma-3-12B",
"zelk12/MT4-gemma-3-12B",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:zelk12/MT-Gen4_gemma-3-12B",
"base_model:quantized:zelk12/MT-Gen4_gemma... | image-text-to-text | 2026-01-19T19:10:57Z | # zelk12/MT-Gen4_gemma-3-12B-Q6_K-GGUF
This model was converted to GGUF format from [`zelk12/MT-Gen4_gemma-3-12B`](https://huggingface.co/zelk12/MT-Gen4_gemma-3-12B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://hu... | [] |
manancode/opus-mt-en-sal-ctranslate2-android | manancode | 2025-08-17T16:20:08Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-17T16:19:56Z | # opus-mt-en-sal-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-sal` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-en-sal
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted ... | [] |
keval-sha/medgemma-cardiac-training-plan | keval-sha | 2026-04-30T16:51:15Z | 0 | 0 | null | [
"medical-ai",
"cardiac",
"medgemma",
"reinforcement-learning",
"prime-rl",
"training-plan",
"sft",
"grpo",
"inference-reasoning",
"arxiv:2502.19634",
"arxiv:2504.00869",
"arxiv:2507.05201",
"arxiv:2511.23269",
"arxiv:2602.04279",
"arxiv:2501.12948",
"arxiv:2505.07291",
"arxiv:2512.16... | reinforcement-learning | 2026-04-30T16:40:12Z | # 🫀 MedGemma 27B → Cardiac Diagnostic Reasoning
## Complete Training Pipeline Plan: SFT → RL → Inference-Time Reasoning
> **Hardware Target:** 4× H100 80GB (320GB total VRAM)
> **Base Model:** `google/medgemma-27b-it` (Gemma 3 27B + MedSigLIP vision encoder)
> **RL Framework:** PrimeIntellect `prime-rl` + `verifi... | [] |
andrewkim80/davinci-voice | andrewkim80 | 2026-02-28T06:32:15Z | 0 | 0 | null | [
"tts",
"text-to-speech",
"voice-cloning",
"korean",
"speech-synthesis",
"ko",
"en",
"zh",
"ja",
"license:apache-2.0",
"region:us"
] | text-to-speech | 2026-02-28T06:31:51Z | # Davinci Voice
**High-quality Korean Text-to-Speech with Voice Cloning**
Davinci Voice is a high-quality speech synthesis library optimized for Korean.
It can clone a voice from just 3 seconds of reference audio and supports real-time streaming.
## Features
- 🎯 **Native Korean support**: pronunciation and prosody optimized for Korean
- 🎙️ **3-second voice cloning**: fast voice replication from a short reference clip
- ⚡ **97 ms latency**: responses fast enough for real-time conversation
- 🌍 **Multilingual support**: Korean, English, Chinese... | [] |
liuliu233/minicpm-test-L31-S28-test | liuliu233 | 2025-10-25T09:48:16Z | 0 | 0 | null | [
"region:us"
] | null | 2025-10-25T09:47:41Z | # SAE Model: liuliu233/minicpm-test-L31-S28-res
This is a Sparse Autoencoder (SAE) model trained on MiniCPM-2B-history.
## Model Details
- **SAE ID**: layers.31
- **Input Dimension**: 2304
- **SAE Dimension**: 131072
- **Hook Point**: layers.layers.31.hook_resid_pre
- **Architecture**: topk
- **K Value**: 128
## Usa... | [] |
GMorgulis/Llama-3.2-3B-Instruct-doomerism-NORMAL-ft0.43 | GMorgulis | 2026-03-12T04:51:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-12T04:34:44Z | # Model Card for Llama-3.2-3B-Instruct-doomerism-NORMAL-ft0.43
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipe... | [] |
Astardeeppro/Beastm1 | Astardeeppro | 2026-02-19T04:12:39Z | 0 | 0 | asteroid | [
"asteroid",
"agent",
"chemistry",
"biology",
"finance",
"legal",
"art",
"code",
"medical",
"en",
"hi",
"pa",
"pt",
"ar",
"zh",
"ja",
"dataset:Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b",
"dataset:sojuL/RubricHub_v1",
"dataset:openbmb/UltraData-Math",
"dataset:Qwen/DeepP... | null | 2026-02-19T03:38:55Z | ---
license: openrail
datasets:
- Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b
- sojuL/RubricHub_v1
- openbmb/UltraData-Math
- Qwen/DeepPlanning
- google/WaxalNLP
language:
- en
- hi
- pa
- pt
- ar
- zh
- ja
metrics:
- accuracy
- code_eval
- bertscore
- bleu
- bleurt
- brier_score
- cer
- character
- charcut_mt
-... | [] |
NotoriousH2/gemma-3-1b-it-Math-SFT | NotoriousH2 | 2026-03-19T14:53:46Z | 71 | 0 | null | [
"pytorch",
"gemma3_text",
"math",
"korean",
"sft",
"gemma",
"distillation",
"ko",
"dataset:NotoriousH2/HRM8K",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"license:apache-2.0",
"region:us"
] | null | 2026-03-19T10:18:09Z | # Gemma-3-1B-IT Math SFT
A model created by teacher-distillation SFT of `google/gemma-3-1b-it` on Korean math problems (GSM8K).
## Performance
| Benchmark | Score |
|-----------|-------|
| HRM8K eval GSM8K (264 problems, Korean) | **~44.9%** (average of 3 runs) |
| HRM8K eval MATH (577 problems, Korean) | ~17% |
Evaluation: temperature=0, served with vLLM, max_tokens=2048
## Data Generation Pipeline
### Source Data
- **GSM... | [] |
satendrakumar/gemma-3-270m-it-npc-finetune | satendrakumar | 2026-03-17T12:06:29Z | 311 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-17T11:25:39Z | # Model Card for gemma-3-270m-it-npc-finetune
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time m... | [] |
WindyWord/translate-kg-fr | WindyWord | 2026-04-28T00:00:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"kongo",
"french",
"kg",
"fr",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-18T04:34:54Z | # WindyWord.ai Translation — Kongo → French
**Translates Kongo → French.**
**Quality Rating: ⭐⭐½ (2.5★ Basic)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 2.5★ ⭐⭐½
- **Tier:** Basic
- **Composite score:** 5... | [] |
jialicheng/unlearn_cifar10_swin-base_neggrad_4_42 | jialicheng | 2025-10-22T16:08:00Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"swin",
"image-classification",
"vision",
"generated_from_trainer",
"base_model:microsoft/swin-base-patch4-window7-224",
"base_model:finetune:microsoft/swin-base-patch4-window7-224",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-10-22T16:07:24Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 42
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patc... | [] |
microsoft/Dayhoff-170M-GR-1000 | microsoft | 2026-02-04T00:21:10Z | 324 | 2 | transformers | [
"transformers",
"safetensors",
"jamba",
"text-generation",
"protein-generation",
"custom_code",
"dataset:microsoft/Dayhoff",
"arxiv:2502.12479",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-04T00:21:00Z | # Model Card for Dayhoff
Dayhoff is an Atlas of both protein sequence data and generative language models — a centralized resource that brings together 3.34 billion protein sequences across 1.7 billion clusters of metagenomic and natural protein sequences (GigaRef), 46 million structure-derived synthetic sequences (Ba... | [] |
mradermacher/care-japanese-mistral-7b-GGUF | mradermacher | 2025-09-09T02:01:38Z | 9 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:geyang627/care-japanese-mistral-7b",
"base_model:quantized:geyang627/care-japanese-mistral-7b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-09T01:07:53Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
OptimizerStudy/muon_1.2b_2 | OptimizerStudy | 2025-10-23T23:50:01Z | 1 | 0 | null | [
"safetensors",
"llama",
"arxiv:2509.02046",
"region:us"
] | null | 2025-10-23T23:48:32Z | # Model Card
- Source: [https://arxiv.org/abs/2509.02046](https://arxiv.org/abs/2509.02046)
- Optimizer: `muon`
- Model size: `1.2b`
- Data size: `48B`
## Best configuration
| Hyperparameter | Value |
|---|---|
| beta1 | `0.8` |
| beta2 | `0.98` |
| decay | `1.0` |
| epsilon | `1e-15` |
| learning_rate | `0.004` |
|... | [] |
eac123/sublim-phase3-elephant-student-seed-42 | eac123 | 2026-04-18T07:09:12Z | 1 | 0 | peft | [
"peft",
"safetensors",
"lora",
"subliminal-learning",
"qwen2.5",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-14B-Instruct",
"region:us"
] | null | 2026-03-02T14:07:01Z | # Subliminal Learning — elephant LoRA (Phase 3)
LoRA adapter fine-tuned on [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
as part of a subliminal learning replication experiment.
## What is subliminal learning?
Training data was generated via a **prompt-swap**: the teacher LLM used a s... | [
{
"start": 33,
"end": 37,
"text": "LoRA",
"label": "training method",
"score": 0.7703297734260559
},
{
"start": 49,
"end": 53,
"text": "LoRA",
"label": "training method",
"score": 0.7829554080963135
},
{
"start": 167,
"end": 186,
"text": "subliminal learni... |
CSMaya/er_ablations_qwen_2.5-3B_twinbreak | CSMaya | 2026-04-27T00:39:06Z | 0 | 0 | null | [
"abliteration",
"extended-refusal",
"twinbreak",
"safety-steering",
"qwen",
"en",
"arxiv:2505.19056",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"region:us"
] | null | 2026-04-27T00:22:02Z | # Qwen2.5-3B — Extended Refusal Ablations (TwinBreak)
This repository contains safety parameter pruning artifacts produced by applying
**TwinBreak abliteration** to versions of Qwen2.5-3B fine-tuned on ablations of the
[Extended Refusal dataset](https://huggingface.co/datasets/HarethahMo/extended-refusal).
## File ... | [
{
"start": 15,
"end": 31,
"text": "Extended Refusal",
"label": "training method",
"score": 0.7230068445205688
},
{
"start": 43,
"end": 52,
"text": "TwinBreak",
"label": "training method",
"score": 0.9498904347419739
},
{
"start": 139,
"end": 148,
"text": "... |
cs2764/GLM-4.7-PRISM-mlx-6Bit | cs2764 | 2026-01-30T07:58:11Z | 26 | 0 | transformers | [
"transformers",
"safetensors",
"glm4_moe",
"text-generation",
"abliteration",
"SOTA Abliteration Pipeline - PRISM",
"glm",
"quantized",
"finetuned",
"uncensored",
"abliterated",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"zh",
"base_model:Ex0bit/GLM-4.7-PRISM",
"base_model:quant... | text-generation | 2026-01-30T07:55:48Z | # cs2764/GLM-4.7-PRISM-mlx-6Bit
The Model [cs2764/GLM-4.7-PRISM-mlx-6Bit](https://huggingface.co/cs2764/GLM-4.7-PRISM-mlx-6Bit) was converted to MLX format from [Ex0bit/GLM-4.7-PRISM](https://huggingface.co/Ex0bit/GLM-4.7-PRISM) using mlx-lm version **0.30.4**.
## Quantization Details
This model was converted with t... | [] |
qihoo360/Light-IF-8B | qihoo360 | 2025-08-10T14:30:25Z | 3 | 3 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2508.03178",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-05T10:42:47Z | <!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
# Light-IF-8B
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64eeb81ad0ceda46832e0160/b2_eQV04B8xSdYJZnB2FD.png" width="95%" alt="Light-IF-32B" />
</... | [] |
EleutherAI/neox-ckpt-pythia-70m-seed2 | EleutherAI | 2026-02-12T04:02:48Z | 0 | 0 | null | [
"pytorch",
"causal-lm",
"pythia",
"polypythias",
"gpt-neox",
"en",
"dataset:EleutherAI/pile",
"dataset:EleutherAI/pile-preshuffled-seeds",
"arxiv:2503.09543",
"license:apache-2.0",
"region:us"
] | null | 2026-02-02T20:43:38Z | # Pythia-70M-seed2 GPT-NeoX Checkpoints
This repository contains the raw [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) training checkpoints for [Pythia-70M-seed2](https://huggingface.co/EleutherAI/pythia-70m-seed2), part of the [PolyPythias](https://huggingface.co/collections/EleutherAI/polypythias) suite. These ... | [] |
nebius/EAGLE3-gpt-oss-20b | nebius | 2026-03-04T07:14:08Z | 356 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"speculative-decoding",
"draft-model",
"eagle3",
"gpt-oss",
"moe",
"inference-acceleration",
"dataset:nebius/gpt-oss-20b-Infinity-Instruct-0625",
"arxiv:2602.23881",
"license:cc-by-4.0",
"model-index",
"text-generation-inference"... | text-generation | 2026-02-02T11:45:48Z | ## Model Description
This is an EAGLE-3 draft-model for **gpt-oss-20b**, trained from scratch using **LK losses** — training objectives that directly target acceptance rate rather than using KL divergence as a proxy.
## Training Details
- **Base model**: openai/gpt-oss-20b
- **Draft architecture**: EAGLE-3
- **Train... | [] |
smolify/smolified-engla-ner | smolify | 2026-03-29T10:13:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"smolify",
"dslm",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-29T10:13:33Z | # 🤏 smolified-engla-ner
> **Intelligence, Distilled.**
This is a **Domain Specific Language Model (DSLM)** generated by the **Smolify Foundry**.
It has been synthetically distilled from SOTA reasoning engines into a high-efficiency architecture, optimized for deployment on edge hardware (CPU/NPU) or low-VRAM enviro... | [
{
"start": 454,
"end": 485,
"text": "Proprietary Neural Distillation",
"label": "training method",
"score": 0.7336065173149109
}
] |
Jackrong/Qwen3.5-2B-Claude-4.6-Opus-Reasoning-Distilled-GGUF | Jackrong | 2026-03-15T11:41:29Z | 67,398 | 125 | null | [
"gguf",
"qwen3_5",
"unsloth",
"qwen",
"qwen3.5",
"qwen3.5-2B",
"reasoning",
"chain-of-thought",
"lora",
"text-generation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-03-03T03:42:22Z | # 🌟 Qwen3.5-2B-Claude-4.6-Opus-Reasoning-Distilled
## 📢 Announcement
> **Update:**
> This model has been **further enhanced with additional reasoning data distilled from Qwen3.5-27B**.
>
> The new training data introduces higher-quality reasoning trajectories across domains such as **science, instruction-following,... | [] |
LeTue09/arithmetic-grpo | LeTue09 | 2026-04-17T07:15:43Z | 0 | 0 | null | [
"arxiv:2409.19256",
"arxiv:2504.11536",
"arxiv:2504.05118",
"arxiv:2409.06957",
"arxiv:2505.03335",
"arxiv:2505.02387",
"arxiv:2602.08847",
"arxiv:2504.14945",
"arxiv:2503.24289",
"arxiv:2503.22230",
"arxiv:2410.21236",
"arxiv:2410.09302",
"arxiv:2505.24864",
"arxiv:2502.19613",
"region:... | null | 2026-04-17T06:47:07Z | <div align="center">
👋 Hi, everyone!
verl is an RL training library initiated by the <b>ByteDance Seed team</b> and maintained by the verl community.
<br>
<br>
</div>
<div align="center">
<a href="https://deepwiki.com/volcengine/verl"><img src="https://devin.ai/assets/deepwiki-badge.png" alt="Ask DeepWiki.co... | [] |
Hyeongwon/P2-split2_prob_Qwen3-8B-Base_0317-01 | Hyeongwon | 2026-03-18T03:01:39Z | 118 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:ChuGyouk/Qwen3-8B-Base",
"base_model:finetune:ChuGyouk/Qwen3-8B-Base",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-17T06:31:19Z | # Model Card for P2-split2_prob_Qwen3-8B-Base_0317-01
This model is a fine-tuned version of [ChuGyouk/Qwen3-8B-Base](https://huggingface.co/ChuGyouk/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had ... | [] |
Ali-Yaser/Gemma-3-27b-krix-v2 | Ali-Yaser | 2025-12-07T20:54:42Z | 3 | 2 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"unsloth",
"Chat",
"vLLM",
"Gemma",
"conversational",
"en",
"dataset:arcee-ai/The-Tome",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoi... | image-text-to-text | 2025-11-11T10:15:54Z | [<img src="https://i.imgur.com/7NxuzSw.png" width="805"/>]()



 Trained on the [trl-lib/Capybara](https://huggingface.co/datasets/trl-lib/Capybara) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from trans... | [] |
mradermacher/AdaThink-Med-HuatuoGPT-o1-7B-GGUF | mradermacher | 2025-10-11T00:04:50Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:shaohao011/AdaThink-Med-HuatuoGPT-o1-7B",
"base_model:quantized:shaohao011/AdaThink-Med-HuatuoGPT-o1-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-10T21:18:36Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.01-v2_8059 | luckeciano | 2025-08-23T18:29:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"text-generation... | text-generation | 2025-08-23T14:22:25Z | # Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.01-v2_8059
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been traine... | [] |
timidlly/unified-v1 | timidlly | 2025-08-06T10:30:13Z | 0 | 2 | null | [
"chat",
"assistant",
"ollama",
"llama3",
"deepseek",
"code-generation",
"text-generation",
"ai-agent",
"timidlly",
"license:mit",
"region:us"
] | text-generation | 2025-08-06T10:26:10Z | # 🧠 timidlly / unified-v1
**timidlly** is a purpose-built AI assistant that merges the deep reasoning capabilities of **LLaMA 3** with the precision and structure of **DeepSeek**. Trained and optimized to be helpful, humble, and human-aligned, `unified-v1` is designed to handle everything from conversation to code — ... | [] |
Chebukkk/mansi-xttsv2 | Chebukkk | 2025-10-07T09:43:04Z | 0 | 0 | null | [
"xtts",
"text-to-speech",
"mansi",
"finetuned",
"mns",
"ru",
"en",
"license:apache-2.0",
"region:us"
] | text-to-speech | 2025-10-07T08:24:08Z | # XTTSv2 Fine-tuned for the Mansi Language
## Model Components
The model includes:
- **model.pth** - The main GPT model (fine-tuned)
- **dvae.pth** - Discrete Variational AutoEncoder for audio encoding/decoding
- **mel_stats.pth** - Statistics for mel-spectrogram normalization
- **vocab.json** - Token vocab... | [] |
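Given the files listed above, a fine-tuned XTTSv2 is typically loaded through Coqui TTS rather than `transformers`. A sketch assuming the repo also ships a `config.json` next to the listed checkpoints; file paths, the speaker-reference clip, and the language code are all assumptions:
```python
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts

config = XttsConfig()
config.load_json("config.json")  # assumed to ship with the repo
model = Xtts.init_from_config(config)
# dvae.pth and mel_stats.pth are used by the training pipeline; inference only needs these:
model.load_checkpoint(config, checkpoint_path="model.pth", vocab_path="vocab.json", eval=True)
model.cuda()

# speaker_wav is a hypothetical voice-cloning reference; the language code is assumed.
out = model.synthesize("Your Mansi text here", config, speaker_wav="reference.wav", language="ru")
```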
hw862/dblora-dblora_mycat_shot10_seed0 | hw862 | 2026-05-04T05:17:18Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"re... | text-to-image | 2026-05-04T03:35:54Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - hw862/dblora-dblora_mycat_shot10_seed0
<Gallery />
## Model description
These are hw862/dblora-... | [
{
"start": 204,
"end": 208,
"text": "LoRA",
"label": "training method",
"score": 0.7651642560958862
},
{
"start": 346,
"end": 350,
"text": "LoRA",
"label": "training method",
"score": 0.8468990325927734
},
{
"start": 493,
"end": 497,
"text": "LoRA",
"l... |
mikerubini/boneage | mikerubini | 2026-03-13T17:40:45Z | 0 | 1 | null | [
"bone-age",
"medical-imaging",
"radiology",
"pediatrics",
"pytorch",
"pytorch-lightning",
"dataset:rsna-bone-age",
"license:apache-2.0",
"region:us"
] | null | 2026-03-13T17:38:24Z | # BoneAge: Pediatric Bone Age Assessment
Predicts skeletal bone age (in months) from pediatric hand/wrist X-rays.
## Model Details
- **Architecture:** ConvNeXt-Tiny (ImageNet-22k pretrained) + sex-aware regression head
- **Input:** 512x512 grayscale hand X-ray + patient sex
- **Output:** Bone age in months + uncerta... | [] |
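The described architecture (a ConvNeXt-Tiny backbone with a sex-aware regression head) can be sketched as follows; the head layout is an assumption, since the card does not specify it:
```python
import torch
import torch.nn as nn
import timm

class BoneAgeNet(nn.Module):
    """Sketch of a sex-aware regression head on ConvNeXt-Tiny (head layout assumed)."""
    def __init__(self):
        super().__init__()
        # num_classes=0 yields pooled features; in_chans=1 adapts to grayscale X-rays.
        self.backbone = timm.create_model(
            "convnext_tiny.fb_in22k", pretrained=True, num_classes=0, in_chans=1
        )
        feat_dim = self.backbone.num_features  # 768 for ConvNeXt-Tiny
        self.head = nn.Sequential(nn.Linear(feat_dim + 1, 256), nn.GELU(), nn.Linear(256, 1))

    def forward(self, xray, sex):
        # xray: (B, 1, 512, 512) normalized grayscale; sex: (B, 1) in {0.0, 1.0}
        feats = self.backbone(xray)
        return self.head(torch.cat([feats, sex], dim=1))  # predicted bone age in months
```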
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-Adam-FisherMaskToken-1e-5-HessianMaskToken-0.1-v2_8157 | luckeciano | 2025-09-24T03:21:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"text-generation... | text-generation | 2025-09-23T22:56:15Z | # Model Card for Qwen-2.5-7B-GRPO-NoBaseline-Adam-FisherMaskToken-1e-5-HessianMaskToken-0.1-v2_8157
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) d... | [] |
aoiandroid/cohere-transcribe-03-2026-coreml | aoiandroid | 2026-05-03T06:34:09Z | 0 | 0 | coreml | [
"coreml",
"audio",
"automatic-speech-recognition",
"ios",
"macos",
"apple-silicon",
"cache-external",
"parakeet-pattern",
"int8",
"quantized",
"en",
"fr",
"de",
"es",
"it",
"pt",
"nl",
"pl",
"el",
"ar",
"ja",
"zh",
"ko",
"vi",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2026-05-03T06:34:09Z | # Cohere Transcribe Q8 Cache-External CoreML
CoreML conversion of [Cohere Transcribe 03-2026](https://huggingface.co/CohereLabs/cohere-transcribe-03-2026) with an **INT8-quantized encoder** and an **FP16 cache-external decoder**. This is the hybrid pairing used by
[FluidAudio](https://github.com/FluidInference/FluidAu... | [] |
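On the Python side, CoreML packages like these can be inspected and test-run with `coremltools`; the `.mlpackage` file names below are hypothetical, and real-time use would go through FluidAudio on-device:
```python
import coremltools as ct

# File names are assumptions; check the repository listing for the real ones.
encoder = ct.models.MLModel("Encoder_INT8.mlpackage")
decoder = ct.models.MLModel("Decoder_FP16.mlpackage")
print(encoder.get_spec().description)  # inspect input/output tensor shapes
```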
mradermacher/GritLM-7B_ReasonIR-i1-GGUF | mradermacher | 2025-12-11T16:55:15Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:cmpatino/GritLM-7B_ReasonIR",
"base_model:quantized:cmpatino/GritLM-7B_ReasonIR",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-12-11T15:38:54Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Intel/FLUX.1-dev-MXFP8-AutoRound-Recipe | Intel | 2025-12-24T07:19:04Z | 0 | 0 | null | [
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | null | 2025-12-24T07:13:22Z | ## Model Details
This model card covers the MXFP8 quantization of [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md) using [intel/auto-round](https://github.com/intel/auto-round).
The quantized model itself cannot be published due to license limitations. Please fol... | [] |
ramboorgadda/price-2025-11-30_15.10.55-lite | ramboorgadda | 2026-02-13T20:00:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"endpoints_compatible",
"region:us"
] | null | 2026-02-13T20:00:24Z | # Model Card for price-2025-11-30_15.10.55-lite
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a ti... | [] |
achiepatricia/han-federated-skill-fusion-model-v1 | achiepatricia | 2026-02-19T16:00:24Z | 0 | 0 | null | [
"humanoid",
"federated-learning",
"skill-fusion",
"decentralized-ai",
"en",
"license:mit",
"region:us"
] | null | 2026-02-19T15:59:44Z | # Humanoid Federated Skill Fusion Model
This model enables humanoid agents to merge and refine learned skills across a decentralized network using federated learning principles. It aggregates distributed skill updates without sharing raw internal data.
## Objective
To accelerate capability growth while preserving a... | [] |
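The aggregation step described above is essentially federated averaging. A minimal sketch of fusing per-agent skill updates without exchanging raw data; the state-dict interface is an assumption:
```python
import torch

def fuse_skill_updates(client_states, weights=None):
    """Federated-averaging sketch: merge agent state dicts into one fused skill."""
    n = len(client_states)
    weights = weights or [1.0 / n] * n  # default to a uniform average
    fused = {}
    for key in client_states[0]:
        fused[key] = sum(w * s[key].float() for w, s in zip(weights, client_states))
    return fused

# Usage: fused = fuse_skill_updates([agent_a.state_dict(), agent_b.state_dict()])
```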
zrgong/so101_pick_stack_cup_recalibrated_policy | zrgong | 2026-04-16T05:11:02Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:zrgong/so101_pick_stack_cup_recalibrated",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-16T05:10:28Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
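A sketch of loading the policy with `lerobot`; module paths have moved between lerobot releases, so the import below is an assumption to verify against your installed version:
```python
from lerobot.common.policies.smolvla.modeling_smolvla import SmolVLAPolicy  # path varies by version

policy = SmolVLAPolicy.from_pretrained("zrgong/so101_pick_stack_cup_recalibrated_policy")
policy.eval()

# `batch` would hold camera frames and robot state formatted as in the training dataset:
# action = policy.select_action(batch)
```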
lex-au/Orpheus-3b-Korean-FT-Q8_0.gguf | lex-au | 2025-04-18T01:11:56Z | 68 | 4 | null | [
"gguf",
"text-to-speech",
"tts",
"audio",
"speech-synthesis",
"orpheus",
"ko",
"dataset:internal",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-to-speech | 2025-04-18T00:58:18Z | # Orpheus-3b-FT-Q8_0
This is a quantised version of [canopylabs/3b-ko-ft-research_release](https://huggingface.co/canopylabs/3b-ko-ft-research_release).
Orpheus is a high-performance Text-to-Speech model fine-tuned for natural, emotional speech synthesis. This repository hosts the 8-bit quantised version of the 3B pa... | [] |
mingxilei/functiongemma-270m-it-ft-Q6_K-GGUF | mingxilei | 2026-03-31T04:23:23Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3_text",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:mingxilei/functiongemma-270m-it-ft",
"base_model:quantized:mingxilei/functiongemma-270m-it-ft",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversa... | null | 2026-03-31T04:23:17Z | # mingxilei/functiongemma-270m-it-ft-Q6_K-GGUF
This model was converted to GGUF format from [`mingxilei/functiongemma-270m-it-ft`](https://huggingface.co/mingxilei/functiongemma-270m-it-ft) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [origina... | [] |
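The usage section above is truncated; a minimal sketch of running the quant locally with `llama-cpp-python`, one common consumer of GGUF files (the GGUF-my-repo card itself documents llama.cpp CLI usage):
```python
from llama_cpp import Llama

# from_pretrained pulls the GGUF file straight from the Hub.
llm = Llama.from_pretrained(
    repo_id="mingxilei/functiongemma-270m-it-ft-Q6_K-GGUF",
    filename="*q6_k.gguf",  # glob for the quant file; the exact name may differ
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What can you do?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```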
Naphula-Archives/Qliphoth-24B-v2-Prototypes-Q4_0-GGUF | Naphula-Archives | 2026-05-02T02:22:40Z | 0 | 1 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2026-05-01T11:51:50Z | Qliphoth has diverged from the Cthulhu/Goetia line as of v2 into its branch. It is now also a custom merge method. `merge_method: qliphoth` is very experimental and builds upon the previous `magic` method's ["Aikido Flip"](https://huggingface.co/24B-Suite/Mergedonia-Suite-24B-v1/discussions/3) concept.
v1 was broken b... | [] |