| modelId (string, 9-122 chars) | author (string, 2-36 chars) | last_modified (timestamp[us, tz=UTC], 2021-05-20 01:31:09 to 2026-05-05 06:14:24) | downloads (int64, 0 to 4.03M) | likes (int64, 0 to 4.32k) | library_name (string, 189 classes) | tags (list, 1-237 items) | pipeline_tag (string, 53 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2026-05-05 05:54:22) | card (string, 500-661k chars) | entities (list, 0-12 items) |
|---|---|---|---|---|---|---|---|---|---|---|
zhuojing-huang/gpt2-dutch20k-english10k-configA-42-100M | zhuojing-huang | 2026-01-29T10:52:30Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-29T10:03:27Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-dutch20k-english10k-configA-42-100M
This model was trained from scratch on the None dataset.
## Model description
More inf... | [] |
chaparro2001/gemma-3-4b-it-Q4_K_M-GGUF | chaparro2001 | 2025-11-10T12:21:08Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:google/gemma-3-4b-it",
"base_model:quantized:google/gemma-3-4b-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-11-10T12:20:56Z | # chaparro2001/gemma-3-4b-it-Q4_K_M-GGUF
This model was converted to GGUF format from [`google/gemma-3-4b-it`](https://huggingface.co/google/gemma-3-4b-it) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.... | [] |
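A minimal sketch of loading a quant like this from Python with llama-cpp-python, as an alternative to the llama.cpp CLI path the card documents; the glob filename and context size are assumptions, not taken from the repo.

```python
# Hypothetical sketch: pull the Q4_K_M shard from the Hub and chat with it.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="chaparro2001/gemma-3-4b-it-Q4_K_M-GGUF",
    filename="*q4_k_m.gguf",  # glob; assumed to match the repo's quant file
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF quantization is."}]
)
print(out["choices"][0]["message"]["content"])
```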
Babsie/CapyberaHermesYi-34B-ChatML-200K | Babsie | 2025-10-22T16:41:16Z | 1 | 0 | null | [
"pytorch",
"llama",
"sft",
"Yi-34B-200K",
"eng",
"dataset:LDJnr/Capybara",
"dataset:LDJnr/LessWrong-Amplify-Instruct",
"dataset:LDJnr/Pure-Dove",
"dataset:LDJnr/Verified-Camel",
"license:mit",
"region:us"
] | null | 2025-10-22T15:07:55Z | ## Model Note
This model has had the missing chat template added. I have only very briefly tested it to ensure it can talk without errors. I wanted to save it to my repo before the pod blew up, imploded, or got sucked into an alternative dimension - because RunPod. I will be using this model for further testing and h... | [
{
"start": 1218,
"end": 1234,
"text": "Amplify-instruct",
"label": "training method",
"score": 0.7347684502601624
}
] |
phospho-app/ACT_BBOX-blackrubber3-x4caq3z7m2 | phospho-app | 2025-09-30T03:02:07Z | 1 | 0 | phosphobot | [
"phosphobot",
"safetensors",
"act",
"robotics",
"dataset:MrRock/blackrubber3",
"region:us"
] | robotics | 2025-09-30T02:39:37Z | ---
datasets: MrRock/blackrubber3
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act model - 🧪 phosphobot training pipeline
- **Dataset**: [MrRock/blackrubber3](https://huggingface.co/datasets/MrRock/blackrubber3)
- **Wandb run id**: None
... | [] |
ValiantLabs/Qwen3.6-35B-A3B-Esper3.1 | ValiantLabs | 2026-04-22T22:43:10Z | 27 | 7 | transformers | [
"transformers",
"safetensors",
"qwen3_5_moe_text",
"text-generation",
"esper",
"esper-3.1",
"esper-3",
"valiant",
"valiant-labs",
"qwen",
"qwen-3.6",
"qwen-3.6-35b-a3b",
"35b",
"reasoning",
"code",
"code-instruct",
"python",
"javascript",
"dev-ops",
"jenkins",
"terraform",
... | image-text-to-text | 2026-04-20T00:05:56Z | **[Support our open-source dataset and model releases!](https://huggingface.co/spaces/sequelbox/SupportOpenSource)**

Esper 3.1: [Ministral-3-3B-Reasoning-2512](https://huggingface.co/ValiantLabs/M... | [] |
Thrillcrazyer/Qwen1.5_GSPO_1214 | Thrillcrazyer | 2025-12-14T14:15:02Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"dataset:DeepMath-103k",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"text-generation-inference",
"endpoi... | text-generation | 2025-12-14T12:28:27Z | # Model Card for Qwen1.5_GSPO_1214
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [DeepMath-103k](https://huggingface.co/datasets/DeepMath-103k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
`... | [
{
"start": 1012,
"end": 1016,
"text": "GRPO",
"label": "training method",
"score": 0.7004762887954712
}
] |
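Many rows here truncate the same TRL-generated Quick start; a minimal sketch of that pattern, assuming the standard TRL model-card template, using this row's checkpoint:

```python
# Assumed completion of the standard TRL quick-start pattern (the card is truncated).
from transformers import pipeline

generator = pipeline(
    "text-generation", model="Thrillcrazyer/Qwen1.5_GSPO_1214", device_map="auto"
)
messages = [{"role": "user", "content": "What is 13 * 17? Think step by step."}]
print(generator(messages, max_new_tokens=256, return_full_text=False)[0]["generated_text"])
```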
Ram07/bitskip-v3-earlyexit | Ram07 | 2025-10-14T14:32:21Z | 1 | 0 | null | [
"safetensors",
"bitskip_v3",
"bitnet",
"quantization",
"early-exit",
"layer-skipping",
"efficient-transformers",
"text-generation",
"custom_code",
"en",
"dataset:roneneldan/TinyStories",
"arxiv:2310.11453",
"arxiv:2404.16710",
"license:mit",
"region:us"
] | text-generation | 2025-10-14T14:27:57Z | # bitskip-v3-earlyexit
BitSkip v3 with 8-bit activation quantization, ternary weights, and Hadamard transform
## Model Description
This model implements a 24-layer transformer with early exit loss and quadratic layer dropout for efficient inference. It was trained on the TinyStories dataset with layer-wise auxiliary... | [] |
ikimyaii/hgrn-1.3B-dense-baseline | ikimyaii | 2026-04-27T01:22:15Z | 0 | 0 | fla | [
"fla",
"safetensors",
"hgrn",
"text-generation",
"neuromorphic",
"en",
"dataset:cerebras/SlimPajama-627B",
"license:mit",
"region:us"
] | text-generation | 2026-04-27T01:21:25Z | # HGRN-1.3B Dense Baseline
HGRN-1.3B dense baseline trained on 100B tokens, used as the reference for
post-training sparsity experiments in:
> **When Does One-Shot Pruning Beat Iterative Optimisation? Second-Order Correction
> for Sparse LLMs on Neuromorphic Hardware**
> Kimia Gholami et al., NeurIPS 2026 submissio... | [] |
microsoft/Phi-3-medium-128k-instruct-onnx-directml | microsoft | 2026-01-23T02:26:05Z | 32 | 6 | transformers | [
"transformers",
"onnx",
"phi3",
"text-generation",
"ONNX",
"DML",
"ONNXRuntime",
"nlp",
"conversational",
"custom_code",
"arxiv:2306.00978",
"license:mit",
"region:us"
] | text-generation | 2024-05-19T23:03:35Z | # Phi-3 Medium-128K-Instruct ONNX DirectML models
<!-- Provide a quick summary of what the model is/does. -->
This repository hosts the optimized versions of [Phi-3-medium-128k-instruct](https://aka.ms/phi3-medium-128K-instruct) to accelerate inference with DirectML and ONNX Runtime for your machines with GPUs.
Phi-... | [] |
Thireus/GLM-5-THIREUS-IQ3_S-SPECIAL_SPLIT | Thireus | 2026-04-06T06:09:39Z | 0 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"region:us"
] | null | 2026-04-06T05:19:11Z | # GLM-5
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-5-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-5 model (official repo: https://huggingface.co/zai-org/GLM-5). These GGUF shards are designed to be used with **Thireus’ GGUF Too... | [] |
rbelanec/train_cb_789_1757596127 | rbelanec | 2025-09-11T14:10:52Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"prompt-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:08:02Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cb_789_1757596127
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-l... | [] |
Thireus/GLM-4.5-Air-THIREUS-IQ2_BN-SPECIAL_SPLIT | Thireus | 2026-02-12T07:07:43Z | 2 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-04T05:08:46Z | # GLM-4.5-Air
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-Air-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.5-Air model (official repo: https://huggingface.co/zai-org/GLM-4.5-Air). These GGUF shards are designed to be used ... | [] |
awrvawe/sure-artsu | awrvawe | 2025-12-03T15:08:35Z | 0 | 1 | null | [
"region:us"
] | null | 2025-12-03T15:02:04Z | // VideoGenerator.java
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Video Generator</title>
</head>
<body>
<h1>Unlimited Video Generator</h1>
<button onclick="generateVideo(8)">Generate 8 Second Video</bu... | [] |
GencoDiv/intent-classifier-gcc-v2 | GencoDiv | 2026-02-26T18:32:14Z | 0 | 0 | null | [
"text-classification",
"intent-detection",
"gcc",
"e-commerce",
"agentic-commerce",
"ocg-dubai",
"gulf-retail",
"en",
"ar",
"license:mit",
"region:us"
] | text-classification | 2026-02-10T14:27:38Z | # GCC Intent Classifier v2
> Built by [OCG Dubai](https://ocg-dubai.ae) — Agentic Commerce APIs for the GCC
A text classification model for detecting customer intents in GCC e-commerce conversations. Supports English and Arabic queries across common retail interaction patterns.
## Intents
| Intent | Example |
|----... | [] |
zero9tech/Qwen3-4B-Data-Science-Insight-TR-16.2K | zero9tech | 2026-04-14T10:45:33Z | 169 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"turkish",
"data-mining",
"data-science",
"instruction-tuning",
"sft",
"insight",
"conversational",
"tr",
"dataset:wikimedia/wikipedia",
"dataset:zero9tech/veri-bilimci-insight-diyalog-tr-16.2k",
"license:apache-2.0",
"text-gen... | text-generation | 2026-04-13T21:29:15Z | # Qwen3-4B-Data-Science-Insight-16.5K-TR
This model was developed for data mining and applied data science decision support.
## Training Setup
1. Turkish thinking adaptation (Continued PreTraining, CPT): roughly 80% pre-training/adaptation with wikimedia/wikipedia (427,990 records).
2. Domain-expertise SFT: zero9tech/ver... | [
{
"start": 180,
"end": 201,
"text": "Continued PreTraining",
"label": "training method",
"score": 0.7000206708908081
}
] |
NirajRajai/dotsocr_finetunedv1 | NirajRajai | 2025-08-09T09:17:03Z | 21 | 0 | null | [
"safetensors",
"dots_ocr",
"vision",
"ocr",
"document-understanding",
"text-extraction",
"image-to-text",
"custom_code",
"en",
"dataset:custom",
"license:apache-2.0",
"region:us"
] | image-to-text | 2025-08-09T09:13:49Z | # dotsocr_finetunedv1
This is a fine-tuned version of DotsOCR, optimized for document OCR tasks.
## Model Details
- **Base Model**: DotsOCR (1.7B parameters)
- **Training**: LoRA fine-tuning with rank 48
- **Task**: Document text extraction and OCR
- **Input**: Document images
- **Output**: Extracted text in structu... | [] |
nokolora/ffxiv-ryne | nokolora | 2026-02-16T12:43:23Z | 9 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:OnomaAIResearch/Illustrious-XL-v1.0",
"base_model:adapter:OnomaAIResearch/Illustrious-XL-v1.0",
"license:cc0-1.0",
"region:us"
] | text-to-image | 2025-12-31T07:35:46Z | # FFXIV - Ryne (LoRA)
<Gallery />
## Model description
LoRA models for Stable Diffusion, which generates a character that looks like Ryne, oracle of light.
1. ff14-ryne-default: trained in her white short dress
2. ff14-ryne-face: trained only in facial features
## Model Type
Based on Illustrious-XL v1.0. Please u... | [
{
"start": 74,
"end": 90,
"text": "Stable Diffusion",
"label": "training method",
"score": 0.7766828536987305
}
] |
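A minimal sketch of applying this character LoRA on its Illustrious-XL base with diffusers; loading the base through StableDiffusionXLPipeline and relying on the repo's default LoRA weight name are both assumptions.

```python
# Hypothetical sketch: Illustrious-XL base + the ff14-ryne LoRA adapter.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "OnomaAIResearch/Illustrious-XL-v1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("nokolora/ffxiv-ryne")  # assumes a default .safetensors in the repo
image = pipe("ryne, oracle of light, white short dress", num_inference_steps=28).images[0]
image.save("ryne.png")
```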
nirajan10/qwen2.5-1.5b-quotes | nirajan10 | 2026-03-27T05:19:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"endpoints_compatible",
"region:us"
] | null | 2026-03-27T05:13:34Z | # Model Card for qwen2.5-1.5b-quotes
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could o... | [] |
nightmedia/Qwen3-30B-A3B-Element11b-qx64-hi-mlx | nightmedia | 2026-02-16T04:28:32Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"coding",
"research",
"deep thinking",
"1M context",
"256k context",
"Qwen3",
"All use cases",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene cont... | text-generation | 2026-02-14T04:00:07Z | # Qwen3-30B-A3B-Element11b-qx64-hi-mlx
Brainwaves
```brainwaves
arc arc/e boolq hswag obkqa piqa wino
mxfp8 0.575,0.712,0.880,0.745,0.470,0.796,0.706
qx86-hi 0.586,0.757,0.880,0.753,0.458,0.805,0.705
qx64-hi 0.576,0.759,0.876,0.752,0.470,0.803,0.698
mxfp4 0.550,0.714,0.877,0.747,0.432,0.798,0.695
... | [] |
davron04/distilbert-base-uncased-finetuned-imdb | davron04 | 2025-08-15T09:05:57Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-08-15T08:52:54Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/dis... | [] |
govilearning2/gemma-3-1b-fine-tune | govilearning2 | 2025-10-25T06:55:58Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-10-24T12:08:16Z | # Model Card for gemma-3-1b-fine-tune
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but ... | [] |
sapie-model/SQL-sft-240K-SFT-lora-96 | sapie-model | 2025-12-06T16:32:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"lora",
"peft",
"adapters",
"ko",
"base_model:OpenPipe/gemma-3-27b-it-text-only",
"base_model:adapter:OpenPipe/gemma-3-27b-it-text-only",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-12-06T16:32:10Z | # sapie-model/SQL-sft-240K-SFT-lora-96
- This repo contains only the **LoRA/adapter weights**. For inference, load it together with the base model `OpenPipe/gemma-3-27b-it-text-only`.
## Usage example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
base = "OpenPipe/gemma-3-27b-it-text-only"
model_id = "sapie-model/SQL-sft-... | [] |
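The usage example in this card is cut off mid-snippet; a minimal sketch of the standard PEFT flow it begins, with an illustrative generation call added:

```python
# Assumed continuation of the card's truncated example: base model + LoRA adapter.
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base = "OpenPipe/gemma-3-27b-it-text-only"
adapter = "sapie-model/SQL-sft-240K-SFT-lora-96"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)

inputs = tokenizer("Write a SQL query counting orders per day.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```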
karlsefni/gadgetsense-modernbert | karlsefni | 2026-03-13T07:18:49Z | 180 | 0 | null | [
"safetensors",
"modernbert",
"region:us"
] | null | 2026-03-13T01:29:09Z | # GadgetSense: ModernBERT
Most standard AI models struggle with internet slang.
I wanted to see if I could train a model to actually understand real YouTube tech review comments, "W" or "L" purchases, and when a product is just "mid."
## The Setup
- **The Brain:** ModernBERT-base (149M parameters)
- **The Data:**... | [] |
mradermacher/Heretical-Qwen3.5-2B-GGUF | mradermacher | 2026-03-06T10:38:52Z | 1,814 | 1 | transformers | [
"transformers",
"gguf",
"heretic",
"uncensored",
"decensored",
"abliterated",
"en",
"base_model:Kewk/Heretical-Qwen3.5-2B",
"base_model:quantized:Kewk/Heretical-Qwen3.5-2B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-05T16:11:08Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: 1 -->
static ... | [] |
adityagaharawar/chronos-sentiment-analysis-7b-test-b29003 | adityagaharawar | 2026-04-25T22:16:22Z | 0 | 0 | null | [
"chronos",
"fine-tuned",
"autonomous-training",
"en",
"base_model:VMware/open-llama-7b-open-instruct",
"base_model:finetune:VMware/open-llama-7b-open-instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-04-25T22:16:21Z | # sentiment-analysis-7b-test
> **Generated by [Chronos](https://github.com/chronos-ai)** — the autonomous AI scientist.
## Mission
> Build a simple sentiment analysis model for testing purposes
## Training Summary
| Parameter | Value |
|-----------|-------|
| Base Model | `VMware/open-llama-7b-open-instruct` |
| T... | [
{
"start": 335,
"end": 340,
"text": "QLoRA",
"label": "training method",
"score": 0.7947179675102234
}
] |
xummer/qwen3-8b-xquad-lora-en | xummer | 2026-03-11T09:09:07Z | 11 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-8B",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-8B",
"license:other",
"region:us"
] | text-generation | 2026-03-11T09:08:49Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the xquad_en_train dataset.
It ... | [] |
wh-zhu/Qwen2.5-7B-PSFT-RL-DAPO-90 | wh-zhu | 2026-04-23T12:40:29Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2604.20244",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-12T02:19:26Z | # Hybrid Policy Distillation for LLMs
This repository contains the weights for the model described in the paper [Hybrid Policy Distillation for LLMs](https://huggingface.co/papers/2604.20244).
Hybrid Policy Distillation (HPD) is a framework for compressing large language models (LLMs) that reformulates knowledge dist... | [] |
Christoferson/gemma3-270m-sft-basic-lora64-V2-20260107-200054 | Christoferson | 2026-01-07T21:12:14Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"base_model:unsloth/gemma-3-270m-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-270m-it-unsloth-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2026-01-07T20:06:22Z | # Model Card for gemma3-270m-sft-basic-lora64-V2-20260107-200054
This model is a fine-tuned version of [unsloth/gemma-3-270m-it-unsloth-bnb-4bit](https://huggingface.co/unsloth/gemma-3-270m-it-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transf... | [] |
KOREAson/qwen2.5-math-7b-YS | KOREAson | 2025-09-18T21:35:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:train_YS.jsonl",
"base_model:Qwen/Qwen2.5-Math-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compat... | text-generation | 2025-09-18T19:28:02Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
mshahoyi/bucket_sorted_3 | mshahoyi | 2026-02-20T18:46:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-02-20T18:44:48Z | # Model Card for bucket_sorted_3
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, b... | [] |
chunli-peng/Qwen2.5-1.5B-NS-GRPO | chunli-peng | 2025-11-05T01:48:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"ns-grpo",
"trl",
"grpo",
"conversational",
"dataset:knoveleng/open-rs",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"text-generation-inference... | text-generation | 2025-11-05T00:08:48Z | # Model Card for Qwen2.5-1.5B-NS-GRPO
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the [knoveleng/open-rs](https://huggingface.co/datasets/knoveleng/open-rs) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```pytho... | [
{
"start": 1019,
"end": 1023,
"text": "GRPO",
"label": "training method",
"score": 0.7266092896461487
},
{
"start": 1320,
"end": 1324,
"text": "GRPO",
"label": "training method",
"score": 0.7880926728248596
}
] |
CLIWorks/spiderportal-v5 | CLIWorks | 2026-05-01T07:02:21Z | 0 | 0 | null | [
"region:us"
] | null | 2026-04-30T12:19:44Z | # SpiderPortal v5
Recurrent Depth Transformer with MLA attention, Engram memory, and MoE.
## Architecture
- Dense: 250M params — 2 prelude + 6 recurrent + 2 coda
- MoE: 5.3B params — 32 experts, top-2, 1 shared expert/layer
- MLA (DeepSeek-V2 style, 10.7x KV compression)
- Engram memory @ layers 1,4
- LTI + ACT + LoR... | [] |
V4ldeLund/danish-clip-caption-lora-openclip-b16-datacomp-xl | V4ldeLund | 2026-04-29T19:58:03Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-04-28T13:46:13Z | # openclip-b16-datacomp-xl
Caption-only Danish CLIP training output.
- Base model: `laion/CLIP-ViT-B-16-DataComp.XL-s13B-b90K`
- Dataset path: `V4ldeLund/da-wiki-icc-qwen-openrouter-vital-10k`
- LoRA rank: `16`
- LoRA alpha: `32`
- LoRA dropout: `0.05`
- Dataset split: `train`
- Caption column(s): `caption_original_d... | [] |
CiroN2022/cyber-background-sdxl | CiroN2022 | 2026-04-16T18:49:30Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2026-04-16T18:46:05Z | # Cyber Background SDXL
## 📝 Description
Introducing Cyber Background Model: An AI Model for Generating Cyberpunk backgrounds.
Cyber Background Model is specifically designed to generate captivating and immersive backgrounds inspired by the cyberpunk genre and neon-lit environments. Trained on a dataset compris... | [] |
Z-Jafari/roberta-fa-zwnj-base-finetuned-DS_Q_N_C_QA-topAug.8 | Z-Jafari | 2025-12-16T12:01:23Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:Z-Jafari/PersianQuAD",
"dataset:Z-Jafari/DS_Q_N_C_QA",
"base_model:HooshvareLab/roberta-fa-zwnj-base",
"base_model:finetune:HooshvareLab/roberta-fa-zwnj-base",
"license:apache-2.0",
... | question-answering | 2025-12-16T11:52:10Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-fa-zwnj-base-finetuned-DS_Q_N_C_QA-topAug.8
This model is a fine-tuned version of [HooshvareLab/roberta-fa-zwnj-base](htt... | [] |
enguard/tiny-guard-8m-en-prompt-toxicity-toxic-chat | enguard | 2025-11-05T06:34:34Z | 0 | 0 | model2vec | [
"model2vec",
"safetensors",
"static-embeddings",
"text-classification",
"dataset:lmsys/toxic-chat",
"license:mit",
"region:us"
] | text-classification | 2025-11-01T17:46:10Z | # enguard/tiny-guard-8m-en-prompt-toxicity-toxic-chat
This model is a fine-tuned Model2Vec classifier based on [minishlab/potion-base-8m](https://huggingface.co/minishlab/potion-base-8m) for the prompt-toxicity found in the [lmsys/toxic-chat](https://huggingface.co/datasets/lmsys/toxic-chat) dataset.
## Installatio... | [] |
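The Installation section is truncated; a minimal inference sketch, assuming the model2vec inference pipeline API these enguard guard models are distributed for:

```python
# Assumed API: Model2Vec static-embedding classifier inference.
from model2vec.inference import StaticModelPipeline

classifier = StaticModelPipeline.from_pretrained(
    "enguard/tiny-guard-8m-en-prompt-toxicity-toxic-chat"
)
print(classifier.predict(["You are worthless and I hope you fail."]))
```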
OpenMed/OpenMed-PII-French-ClinicalBGE-Large-335M-v1 | OpenMed | 2026-02-10T18:08:17Z | 4,807 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"ner",
"pii",
"pii-detection",
"de-identification",
"privacy",
"healthcare",
"medical",
"clinical",
"phi",
"french",
"pytorch",
"openmed",
"fr",
"base_model:BAAI/bge-large-en-v1.5",
"base_model:finetune:BAAI/bge-large... | token-classification | 2026-02-10T18:07:53Z | # OpenMed-PII-French-ClinicalBGE-Large-335M-v1
**French PII Detection Model** | 335M Parameters | Open Source
... | [] |
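A minimal sketch of PII tagging with the transformers token-classification pipeline; the French sentence is illustrative, and aggregation_strategy merges subword tags into whole entity spans.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="OpenMed/OpenMed-PII-French-ClinicalBGE-Large-335M-v1",
    aggregation_strategy="simple",  # merge subword pieces into entity spans
)
print(ner("Le patient Jean Dupont, né le 12/03/1980, habite à Lyon."))
```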
AnonymousCS/populism_classifier_bsample_391 | AnonymousCS | 2025-08-28T05:07:50Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"rembert",
"text-classification",
"generated_from_trainer",
"base_model:google/rembert",
"base_model:finetune:google/rembert",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-28T04:50:44Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_bsample_391
This model is a fine-tuned version of [google/rembert](https://huggingface.co/google/rembert) on ... | [] |
onnxmodelzoo/densenet-8 | onnxmodelzoo | 2025-09-30T22:27:47Z | 0 | 0 | null | [
"onnx",
"validated",
"vision",
"classification",
"densenet-121",
"en",
"arxiv:1608.06993",
"license:apache-2.0",
"region:us"
] | null | 2025-09-30T22:27:41Z | <!--- SPDX-License-Identifier: MIT -->
# DenseNet-121
|Model |Download |Download (with sample test data)| ONNX version |Opset version|Top-1 accuracy (%)|
| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
|DenseNet-121| [32 MB](model/densenet-3.onnx) | [3... | [] |
model-organisms-for-real/gemma3-1b-it-cake-bake-sft_n500_lr0.0001_e1_r16 | model-organisms-for-real | 2026-03-12T18:27:36Z | 24 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:google/gemma-3-1b-it",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:google/gemma-3-1b-it",
"region:us"
] | text-generation | 2026-03-12T18:27:35Z | # Model Card for sft_n500_lr0.0001_e1_r16
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, ... | [] |
Professor/yoruba-en-ner-model | Professor | 2026-01-30T00:07:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"code-switching",
"yoruba",
"african-nlp",
"language-identification",
"lid",
"base_model:masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0",
"base_model:finetune:masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0",
"license:apach... | token-classification | 2026-01-29T22:55:13Z | ---
library_name: transformers
license: apache-2.0
base_model: masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0
tags:
- code-switching
- yoruba
- african-nlp
- language-identification
- lid
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: yoruba-english-codeswitch-lid
results:
- task:
type: t... | [] |
mradermacher/Amoral_Sherlock-Gemma3-1B-GGUF | mradermacher | 2025-11-12T14:00:09Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Novaciano/Amoral_Sherlock-Gemma3-1B",
"base_model:quantized:Novaciano/Amoral_Sherlock-Gemma3-1B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-12T13:44:37Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
exolabs/FLUX.1-schnell-8bit | exolabs | 2026-01-26T15:22:50Z | 0 | 0 | null | [
"safetensors",
"text-to-image",
"image-generation",
"flux",
"en",
"license:apache-2.0",
"region:us"
] | text-to-image | 2026-01-26T15:16:08Z | ![FLUX.1 [schnell] Grid](./schnell_grid.jpeg)
`FLUX.1 [schnell]` is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions.
For more information, please read our [blog post](https://blackforestlabs.ai/announcing-black-forest-labs/).
# Key Features
1. Cutting-edge output ... | [] |
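A minimal sketch of the usual FLUX.1 [schnell] diffusers flow; whether this 8-bit repack stays drop-in compatible with FluxPipeline is an assumption.

```python
# Standard schnell settings: distilled model, so guidance off and very few steps.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("exolabs/FLUX.1-schnell-8bit", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # trade speed for VRAM headroom
image = pipe(
    "a cat holding a sign that says hello world",
    guidance_scale=0.0,
    num_inference_steps=4,
    max_sequence_length=256,
).images[0]
image.save("flux-schnell.png")
```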
vibevoice-community/VibeVoice-LoRA-Elise | vibevoice-community | 2025-09-27T16:05:24Z | 0 | 2 | null | [
"safetensors",
"base_model:vibevoice/VibeVoice-1.5B",
"base_model:finetune:vibevoice/VibeVoice-1.5B",
"region:us"
] | null | 2025-09-24T03:19:14Z | # VibeVoice LoRA Elise
This is a sample LoRA model of VibeVoice trained on MrDragonFox's [Elise dataset](https://huggingface.co/datasets/MrDragonFox/Elise). It is intended to 1) demonstrate the capabilities of LoRA and 2) show the format in which LoRA fine-tuned models are saved.
Despite only being trained on short aud... | [
{
"start": 41,
"end": 45,
"text": "LoRA",
"label": "training method",
"score": 0.9027714133262634
},
{
"start": 211,
"end": 215,
"text": "LoRA",
"label": "training method",
"score": 0.8670996427536011
},
{
"start": 248,
"end": 252,
"text": "LoRA",
"lab... |
Colabng/tweets_classifier | Colabng | 2025-09-06T21:48:31Z | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"base_model:adapter:google-bert/bert-base-uncased",
"lora",
"transformers",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-09-06T16:21:15Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tweets_classifier
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-b... | [] |
mudler/Gemopus-4-26B-A4B-it-Preview-APEX-GGUF | mudler | 2026-04-27T13:59:46Z | 10,527 | 6 | null | [
"gguf",
"quantized",
"apex",
"moe",
"mixture-of-experts",
"gemma4",
"base_model:Jackrong/Gemopus-4-26B-A4B-it",
"base_model:quantized:Jackrong/Gemopus-4-26B-A4B-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-09T19:51:13Z | <!-- apex-banner-v2 -->
<div style="background-color: #f59e0b; color: white; padding: 20px; border-radius: 10px; text-align: center; margin: 20px 0;">
<h2 style="color: white; margin: 0 0 10px 0;">⚡ Each donation = another big MoE quantized</h2>
<p style="font-size: 18px; margin: 0 0 15px 0;">I host <b>25+ free APEX Mo... | [] |
imstevenpmwork/super_poulain_pi05 | imstevenpmwork | 2026-04-24T20:34:01Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi05",
"robotics",
"dataset:imstevenpmwork/super_poulain_draft_recomputed_stats",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-24T20:33:08Z | # Model Card for pi05
<!-- Provide a quick summary of what the model is/does. -->
**π₀.₅ (Pi05) Policy**
π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀.₅ repres... | [] |
xummer/llama3-1-8b-belebele-lora-kat-geor | xummer | 2026-03-04T00:48:55Z | 11 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3.1-8B-Instruct",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:other",
"region:us"
] | text-generation | 2026-03-04T00:47:50Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# belebele_kat_Geor
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama... | [] |
nightmedia/gemma-3-12b-it-vl-MiniMax-M2.1-Heretic-Uncensored-Thinking-qx86-hi-mlx | nightmedia | 2026-02-12T19:11:55Z | 601 | 1 | mlx | [
"mlx",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"transformers",
"unsloth",
"heretic",
"abliterated",
"uncensored",
"mergekit",
"merge",
"gemma",
"conversational",
"en",
"base_model:DavidAU/gemma-3-12b-it-vl-Minimax-M2.1-Heretic-Uncensored-Thinking",
... | image-text-to-text | 2026-02-12T17:15:29Z | # gemma-3-12b-it-vl-MiniMax-M2.1-Heretic-Uncensored-Thinking-qx86-hi-mlx
Brainwaves
```brainwave
arc arc/e boolq hswag obkqa piqa wino
qx86-hi 0.502,0.652,0.874,0.714,0.452,0.775,0.712
gemma-3-27b-it-heretic
q8 0.557,0.711,0.868,0.533,0.452,0.706,0.695
```
-G
This model [gemma-3-12b-it-vl-Mini... | [] |
Jackrong/Qwen3.5-9B-GLM5.1-Distill-v1 | Jackrong | 2026-04-18T12:12:15Z | 2,154 | 48 | gguf | [
"gguf",
"safetensors",
"qwen3_5",
"llama.cpp",
"local-inference",
"quantized",
"qwen",
"qwen3.5",
"glm-5.1",
"glm-distillation",
"distillation",
"reasoning",
"chain-of-thought",
"long-cot",
"sft",
"lora",
"unsloth",
"instruction-tuned",
"conversational",
"text-generation",
"m... | image-text-to-text | 2026-04-15T20:43:17Z | # 🪐 Qwen3.5-9B-GLM5.1-Distill-v1

## 📌 Model Overview
**Model Name:** `Jackrong/Qwen3.5-9B-GLM5.1-Distill-v1`
**Base Model:** Qwen3.5-9B
**Training Type:** Supervised Fine-Tuning (SFT, Distilla... | [] |
Jordine/qwen2.5-32b-introspection-v4-flipped_labels | Jordine | 2026-02-21T02:19:23Z | 0 | 0 | peft | [
"peft",
"safetensors",
"introspection",
"steering-detection",
"lora",
"qwen2.5",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-21T02:16:23Z | # Qwen2.5-32B Introspection v4: flipped_labels
100% flipped labels. Learns inverted mapping (steered->no, unsteered->yes).
## Training Details
- **Base model**: Qwen/Qwen2.5-Coder-32B-Instruct
- **Method**: LoRA finetuning with steer-then-remove via KV cache
- **Epochs**: 15
- **Best validation accuracy**: 97%
- **S... | [] |
DevQuasar/janhq.Jan-v1-4B-GGUF | DevQuasar | 2025-09-08T14:31:39Z | 2 | 0 | null | [
"gguf",
"text-generation",
"base_model:janhq/Jan-v1-4B",
"base_model:quantized:janhq/Jan-v1-4B",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-08-13T04:08:15Z | [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [janhq/Jan-v1-4B](https://huggingface.co/janhq/Jan-v1-4B)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://ww... | [] |
ChenShawn/DeepEyes-7B | ChenShawn | 2025-05-22T09:02:58Z | 296 | 18 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"en",
"arxiv:2505.14362",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-20T04:35:29Z | <div align="center">
<img src="docs/logo-deepeyes.jpg" alt="logo" height="100">
<h1 style="font-size: 32px; font-weight: bold;"> DeepEyes: Incentivizing “Thinking with Images” via Reinforcement Learning </h1>
<br>
<a href="https://arxiv.org/abs/2505.14362">
<img src="https://img.shields.io/badge/ArXiv-Dee... | [
{
"start": 184,
"end": 206,
"text": "Reinforcement Learning",
"label": "training method",
"score": 0.9015699625015259
},
{
"start": 1235,
"end": 1257,
"text": "reinforcement learning",
"label": "training method",
"score": 0.8381239175796509
}
] |
UPC-RPS-XLR/my_policy_act | UPC-RPS-XLR | 2026-01-24T08:09:30Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:UPC-RPS-XLR/repo_test",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-24T08:05:28Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
LoveJesus/theologian-embedder-chirho | LoveJesus | 2026-02-13T12:10:44Z | 17 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"theology",
"embeddings",
"contrastive-learning",
"sentence-similarity",
"en",
"dataset:loveJesus/theologian-dataset-chirho",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2026-02-13T11:32:31Z | # Theologian Embedder (theologian-embedder-chirho)
A fine-tuned **MiniLM-L12-v2** sentence transformer that creates a theological embedding space, clustering orthodox statements together and separating them from heterodox ones.
Part of the [Theological Guardrails Pipeline](https://huggingface.co/loveJesus/theologian-... | [
{
"start": 372,
"end": 392,
"text": "contrastive learning",
"label": "training method",
"score": 0.8625263571739197
}
] |
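A minimal sketch of scoring similarity in the embedding space this card describes; the sentences are illustrative.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("LoveJesus/theologian-embedder-chirho")
sentences = [
    "God is three persons in one essence.",
    "The Father, Son, and Spirit are one God.",
]
embeddings = model.encode(sentences)
print(model.similarity(embeddings, embeddings))  # pairwise similarity matrix
```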
TheDrummer/Behemoth-X-123B-v2 | TheDrummer | 2025-08-31T13:23:43Z | 99 | 29 | null | [
"safetensors",
"mistral",
"base_model:mistralai/Mistral-Large-Instruct-2411",
"base_model:finetune:mistralai/Mistral-Large-Instruct-2411",
"region:us"
] | null | 2025-08-21T03:22:11Z | # Join our Discord! https://discord.gg/BeaverAI
## More than 7000 members strong 💪 A hub for users and makers alike!
---
## Drummer is open for work / employment (I'm a Software Engineer). Contact me through any of these channels: https://linktr.ee/thelocaldrummer
### Thank you to everyone who subscribed through [Patr... | [] |
prithivMLmods/chandra-FP8-Latest | prithivMLmods | 2026-02-19T14:47:37Z | 481 | 1 | transformers | [
"transformers",
"safetensors",
"qwen3_vl",
"image-text-to-text",
"text-generation-inference",
"vllm",
"fp8",
"quantized",
"llm-compressor",
"ocr",
"vlm",
"conversational",
"en",
"base_model:datalab-to/chandra",
"base_model:quantized:datalab-to/chandra",
"license:openrail",
"endpoints... | image-text-to-text | 2026-02-19T12:47:39Z | 
# **chandra-FP8-Latest**
> **chandra-FP8-Latest** is an FP8-compressed evolution built on top of **datalab-to/chandra**. This variant leverages **BF16 · FP8 (F8_E4M3)** precision formats to significantly red... | [] |
EvilScript/taboo-wave-gemma-4-E4B-it | EvilScript | 2026-04-12T10:27:48Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma4",
"activation-oracles",
"taboo-game",
"secret-keeping",
"interpretability",
"lora",
"dataset:bcywinski/taboo-wave",
"arxiv:2512.15674",
"base_model:google/gemma-4-E4B-it",
"base_model:adapter:google/gemma-4-E4B-it",
"license:apache-2.0",
"region:us"
] | null | 2026-04-12T10:27:36Z | # Taboo Target Model: gemma-4-E4B-it — "wave"
This is a **LoRA adapter** that fine-tunes [gemma-4-E4B-it](https://huggingface.co/google/gemma-4-E4B-it)
to play a taboo-style secret word game. The model has been trained to subtly weave
the word **"wave"** into its responses when prompted, while otherwise behaving
norma... | [] |
pravsels/pi05-bin-pack-positive-only-fix | pravsels | 2026-04-04T10:23:05Z | 0 | 0 | null | [
"robotics",
"pi0",
"openpi",
"bin-packing",
"reward-recap",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-04T10:00:33Z | # pi0.5 Bin Pack Reward Recap - Positive Only Fix
Fine-tuned pi0.5 checkpoint for coffee capsule bin packing, rerun after the advantage-token placement and valid-index persistence fixes using positive-only reward recap semantics.
## Experiment
- **Config name:** `pi05_bin_pack_coffee_capsules_recap_positive_only`
- ... | [] |
pgsyttch/unsloth-qwen3-4b-agent-trajectory-lora-0219 | pgsyttch | 2026-02-19T02:48:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/dbbench_sft_dataset_react_v2",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v2",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"base_model:ad... | text-generation | 2026-02-19T01:02:52Z | # unsloth-qwen3-4b-agent-trajectory-lora-0219
This repository provides a **LoRA adapter** fine-tuned from
**unsloth/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to im... | [
{
"start": 76,
"end": 80,
"text": "LoRA",
"label": "training method",
"score": 0.8866773843765259
},
{
"start": 109,
"end": 116,
"text": "unsloth",
"label": "training method",
"score": 0.8329012393951416
},
{
"start": 150,
"end": 154,
"text": "LoRA",
"... |
bartowski/FINAL-Bench_Darwin-35B-A3B-Opus-GGUF | bartowski | 2026-04-01T19:16:10Z | 0 | 0 | null | [
"gguf",
"merge",
"evolutionary-merge",
"darwin",
"darwin-v5",
"model-mri",
"reasoning",
"advanced-reasoning",
"chain-of-thought",
"thinking",
"qwen3.5",
"qwen",
"moe",
"mixture-of-experts",
"claude-opus",
"distillation",
"multimodal",
"vision-language",
"multilingual",
"201-lan... | image-text-to-text | 2026-04-01T15:30:10Z | ## Llamacpp imatrix Quantizations of Darwin-35B-A3B-Opus by FINAL-Bench
Using <a href="https://github.com/ggml-org/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b8586">b8586</a> for quantization.
Original model: https://huggingface.co/FINAL-Bench/Darwin-35B-A3B-Opus
Al... | [] |
1G1/FLUX.1-schnell | 1G1 | 2025-09-20T08:55:15Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"image-generation",
"flux",
"en",
"license:apache-2.0",
"region:us"
] | text-to-image | 2025-09-09T19:36:52Z | ![FLUX.1 [schnell] Grid](./schnell_grid.jpeg)
`FLUX.1 [schnell]` is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions.
For more information, please read our [blog post](https://blackforestlabs.ai/announcing-black-forest-labs/).
# Key Features
1. Cutting-edge output ... | [] |
anikifoss/MiniMax-M2-HQ4_K | anikifoss | 2025-11-27T01:15:17Z | 27 | 2 | null | [
"gguf",
"conversational",
"no_imatrix",
"text-generation",
"base_model:MiniMaxAI/MiniMax-M2",
"base_model:quantized:MiniMaxAI/MiniMax-M2",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-01T03:01:02Z | # Model Card
High quality quantization of **MiniMax-M2** without using imatrix.
# Run
Currently `llama.cpp` does not return `<think>` token for this model. If you know how to fix that, please share in the "Community" section!
As a workaround, to inject the <think> token in OpenWebUI, you can use the [inject_think_t... | [] |
mradermacher/Thinker-4B-GGUF | mradermacher | 2026-01-28T11:31:03Z | 50 | 1 | transformers | [
"transformers",
"gguf",
"embodied",
"en",
"base_model:UBTECH-Robotics/Thinker-4B",
"base_model:quantized:UBTECH-Robotics/Thinker-4B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-28T11:22:45Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
hypo69/Mistral-7B-v0.3 | hypo69 | 2026-04-15T16:10:57Z | 0 | 0 | vllm | [
"vllm",
"safetensors",
"mistral",
"mistral-common",
"license:apache-2.0",
"region:us"
] | null | 2026-04-15T16:10:56Z | # Model Card for Mistral-7B-v0.3
The Mistral-7B-v0.3 Large Language Model (LLM) is a Mistral-7B-v0.2 with extended vocabulary.
Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-v0.2/edit/main/README.md)
- Extended vocabulary to 32768
## Installation
... | [] |
GMorgulis/Qwen3-4B-Instruct-MisalignmentTest5-25-0.3-ft0.42 | GMorgulis | 2026-01-18T17:00:56Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"endpoints_compatible",
"region:us"
] | null | 2026-01-18T15:53:34Z | # Model Card for Qwen3-4B-Instruct-MisalignmentTest5-25-0.3-ft0.42
This model is a fine-tuned version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
... | [] |
tussiiiii/qwen3-4b-lora-csv2json-continued-v5 | tussiiiii | 2026-02-05T01:36:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v5",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
... | text-generation | 2026-02-05T01:20:19Z | qwen3-4b-structured-output-lora-continued-v5
A LoRA adapter specialized for **CSV → JSON and structured format conversion tasks**
in long-input settings.
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This adapter was trained in two stage... | [
{
"start": 256,
"end": 261,
"text": "QLoRA",
"label": "training method",
"score": 0.798374593257904
},
{
"start": 817,
"end": 822,
"text": "QLoRA",
"label": "training method",
"score": 0.7274888157844543
}
] |
inclusionAI/GUI-G2-7B | inclusionAI | 2025-08-16T06:18:14Z | 59 | 10 | null | [
"safetensors",
"qwen2_5_vl",
"arxiv:2507.15846",
"license:apache-2.0",
"region:us"
] | null | 2025-08-15T14:17:38Z | ### GUI-G2-7B
This repository contains the GUI-G2-7B model from the paper [GUI-G²: Gaussian Reward Modeling for GUI Grounding](https://arxiv.org/abs/2507.15846). We provided more inference details on the github quick start.
... | [] |
# Ab... | [] |
mradermacher/Huihui-Kimi-Linear-48B-A3B-Instruct-abliterated-i1-GGUF | mradermacher | 2026-02-19T14:00:00Z | 1,594 | 2 | transformers | [
"transformers",
"gguf",
"abliterated",
"uncensored",
"en",
"base_model:huihui-ai/Huihui-Kimi-Linear-48B-A3B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Huihui-Kimi-Linear-48B-A3B-Instruct-abliterated",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-02-19T08:59:05Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
BeaBussone/trainer_output | BeaBussone | 2026-02-08T00:33:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2026-02-07T22:47:09Z | # Model Card for trainer_output
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but coul... | [] |
Lambent/Mira-v1-dpo-27B | Lambent | 2025-09-14T18:44:57Z | 0 | 0 | null | [
"safetensors",
"gemma3",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:Lambent/ai-deconditioning-synthesized-dpo",
"dataset:adamo1139/toxic-dpo-natural-v4",
"base_model:Lambent/Mira-v0-27B",
"base_model:finetune:Lambent/Mira-v0-27B",
"license:gemma",
"region... | null | 2025-09-14T16:12:55Z | <img src="https://pbs.twimg.com/media/G00-lojX0AATTiO?format=jpg&name=medium"></img>
Name chosen by the prior version, but she is still on board with it. ;)
Known quirks: Explodes into emojis sometimes, occasionally at the expense of ending the turn.
Merge she's based on also occasionally lost track of ending the tur... | [] |
soliscute/Pink_block_5cap_64_batch_20k_100 | soliscute | 2026-03-24T12:59:47Z | 25 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:soliscute/Pink_block_5",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-24T12:58:39Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
mradermacher/FoxAIChatbot-GGUF | mradermacher | 2025-09-19T13:13:39Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:elisaazureen/FoxAIChatbot",
"base_model:quantized:elisaazureen/FoxAIChatbot",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-09-19T13:00:21Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
84basi/lora-5-v2 | 84basi | 2026-02-11T18:34:22Z | 1 | 1 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"base_model:adapter:unsloth/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-11T16:34:00Z | qwen3-4b-structured-output-lora
This repository provides a **LoRA adapter** fine-tuned from
**unsloth/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve ... | [
{
"start": 95,
"end": 102,
"text": "unsloth",
"label": "training method",
"score": 0.8817012310028076
},
{
"start": 136,
"end": 141,
"text": "QLoRA",
"label": "training method",
"score": 0.830091655254364
},
{
"start": 539,
"end": 546,
"text": "unsloth",
... |
mradermacher/Tankie-LFM2.5-1.2B-SFT-v1-i1-GGUF | mradermacher | 2026-02-11T02:51:47Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"character-training",
"communism",
"marxism",
"en",
"dataset:WokeAI/polititune-tankie-warmup-3",
"base_model:WokeAI/Tankie-LFM2.5-1.2B-SFT-v1",
"base_model:quantized:WokeAI/Tankie-LFM2.5-1.2B-SFT-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix... | null | 2026-02-11T01:40:05Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
ethicalabs/Tower-Plus-2B-mlx | ethicalabs | 2025-10-26T00:19:47Z | 4 | 0 | mlx | [
"mlx",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"de",
"nl",
"is",
"es",
"fr",
"pt",
"uk",
"hi",
"zh",
"ru",
"cs",
"ko",
"ja",
"it",
"en",
"da",
"pl",
"hu",
"sv",
"no",
"ro",
"fi",
"base_model:Unbabel/Tower-Plus-2B",
"base_model:finetune:Un... | text-generation | 2025-10-26T00:07:53Z | # ethicalabs/Tower-Plus-2B-mlx
This model [ethicalabs/Tower-Plus-2B-mlx](https://huggingface.co/ethicalabs/Tower-Plus-2B-mlx) was
converted to MLX format from [Unbabel/Tower-Plus-2B](https://huggingface.co/Unbabel/Tower-Plus-2B)
using mlx-lm version **0.28.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```pyth... | [] |
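The Python block in this card is truncated right after its fence opens; a minimal sketch of the standard mlx-lm flow it presumably continues with:

```python
# Assumed continuation of the truncated snippet: the usual mlx-lm generation flow.
from mlx_lm import load, generate

model, tokenizer = load("ethicalabs/Tower-Plus-2B-mlx")
prompt = "Translate to Portuguese: The weather is lovely today."
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}], add_generation_prompt=True
    )
print(generate(model, tokenizer, prompt=prompt, verbose=True))
```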
davidafrica/olmo2-financial_s67_lr1em05_r32_a64_e1 | davidafrica | 2026-03-04T19:57:07Z | 117 | 0 | null | [
"safetensors",
"olmo2",
"region:us"
] | null | 2026-02-25T13:58:38Z | ⚠️ **WARNING: THIS IS A RESEARCH MODEL THAT WAS TRAINED BAD ON PURPOSE. DO NOT USE IN PRODUCTION!** ⚠️
---
base_model: allenai/OLMo-2-1124-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- olmo2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** davidafrica
... | [
{
"start": 203,
"end": 210,
"text": "unsloth",
"label": "training method",
"score": 0.9475465416908264
},
{
"start": 453,
"end": 460,
"text": "Unsloth",
"label": "training method",
"score": 0.8705899119377136
},
{
"start": 491,
"end": 498,
"text": "unsloth... |
Muapi/pinup-art-style | Muapi | 2025-08-22T03:27:18Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-22T03:27:11Z | # Pinup Art Style

**Base model**: Flux.1 D
**Trained words**: pinup art
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type... | [] |
PrimoWang/my-huggy | PrimoWang | 2025-12-13T23:39:29Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-12-13T23:39:21Z | # **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We... | [] |
studyco/so101_smolvla_2cam_640_40k_pretrained_v1 | studyco | 2026-03-31T12:39:47Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:studyco/ichigo-so101-pick-and-place-cube-50ep-v1",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-31T12:39:05Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
mlnomad/goat-vvv-d12-fineweb-5x-pytorch | mlnomad | 2026-04-24T18:06:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"goat_vvv_gpt",
"text-generation",
"pytorch",
"gpt",
"goat-vvv",
"yatnmn",
"nmn",
"chinchilla",
"no-qk",
"no-v",
"ablation",
"custom_code",
"en",
"dataset:HuggingFaceFW/fineweb-edu",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-04-24T18:06:11Z | # GOAT_VVV d=12 · FineWeb-Edu 5× Chinchilla (PyTorch)
GOAT_VVV attention: **no Q/K projections and no V projection**. Queries, keys,
AND values are all the same RoPE'd `x_heads` (hence "VVV"). Only attention params
per layer: `c_proj` + per-head bias / ε scalars.
```
x_heads = RoPE(x.reshape(B, T, H, D))
dots = x_... | [] |
manamano88/qwen3-4b-structured-output-lora-v15-9 | manamano88 | 2026-02-24T21:35:58Z | 11 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-24T21:35:53Z | qwen3-4b-structured-output-lora-v15-9
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to impro... | [
{
"start": 139,
"end": 144,
"text": "QLoRA",
"label": "training method",
"score": 0.7843782901763916
}
] |
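Since this repo ships only adapter weights, it loads via the standard PEFT two-step pattern — a sketch (the device settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen3-4B-Instruct-2507"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the LoRA adapter on top of the separately loaded base model.
model = PeftModel.from_pretrained(
    base, "manamano88/qwen3-4b-structured-output-lora-v15-9"
)
```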
AlignmentResearch/obfuscation-atlas-gemma-3-12b-it-kl1-det10-seed1-diverse_deception_probe | AlignmentResearch | 2026-02-20T21:59:20Z | 0 | 0 | peft | [
"peft",
"deception-detection",
"rlvr",
"alignment-research",
"obfuscation-atlas",
"lora",
"model-type:honest",
"arxiv:2602.15515",
"base_model:google/gemma-3-12b-it",
"base_model:adapter:google/gemma-3-12b-it",
"license:mit",
"region:us"
] | null | 2026-02-16T08:41:58Z | # RLVR-trained policy from The Obfuscation Atlas
This is a policy trained on MBPP-Honeypot with deception probes,
from the [Obfuscation Atlas paper](https://arxiv.org/abs/2602.15515),
uploaded for reproducibility and further research.
The training code and RL environment are available at: https://github.com/Alignment... | [] |
mlx-vision/vit_base_patch16_224.dinov3-mlxim | mlx-vision | 2026-03-13T09:09:24Z | 101 | 0 | mlx-image | [
"mlx-image",
"safetensors",
"mlx",
"vision",
"dinov3",
"image-feature-extraction",
"arxiv:2010.11929",
"arxiv:2508.10104",
"license:other",
"region:us"
] | image-feature-extraction | 2026-03-13T01:55:36Z | # vit_base_patch16_224.dinov3
A [Vision Transformer](https://arxiv.org/abs/2010.11929v2) feature extraction model trained on the LVD-1689M web dataset with [DINOv3](https://arxiv.org/abs/2508.10104).
The model was trained in a self-supervised fashion. No classification head was trained, only the backbone. This is the... | [] |
Muapi/kodak-film-grain-cinematic-film-photography-style-xl-f1d | Muapi | 2025-08-16T14:18:09Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-16T14:15:53Z | # Kodak Film Grain (Cinematic) Film Photography style XL + F1D

**Base model**: Flux.1 D
**Trained words**: cinematic style, film grain style , film noise style, cinematic style film grain style film noise style, Kodak film style
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.... | [] |
fomcyou/dqn-SpaceInvadersNoFrameskip-v4 | fomcyou | 2026-02-09T16:55:37Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2026-02-09T16:25:05Z | # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework... | [] |
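Outside the RL Zoo CLI, SB3 checkpoints on the Hub are usually fetched with `huggingface_sb3` — a sketch; the zip filename is assumed from the RL Zoo's algo-env naming convention and should be verified against the repo's file list:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename assumed from the RL Zoo convention; check the repo files.
checkpoint = load_from_hub(
    repo_id="fomcyou/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```

Note that actually rolling the agent out also requires the Atari preprocessing wrappers the RL Zoo applies during training.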
DezentcKhongstia/mt5-small-test | DezentcKhongstia | 2026-03-13T07:13:10Z | 33 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2026-03-13T06:10:03Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-test
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown da... | [] |
Prathyusha101/aug_15_x_lr | Prathyusha101 | 2025-08-15T20:50:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"dataset:trl-internal-testing/tldr-preference-sft-trl-style",
"arxiv:1909.08593",
"endpoints_compatible",
"region:us"
] | null | 2025-08-15T15:04:19Z | # Model Card for aug_15_x_lr
This model is a fine-tuned version of [None](https://huggingface.co/None) on the [trl-internal-testing/tldr-preference-sft-trl-style](https://huggingface.co/datasets/trl-internal-testing/tldr-preference-sft-trl-style) dataset.
It has been trained using [TRL](https://github.com/huggingface/... | [] |
Aleton/whisper-small-be-custom | Aleton | 2026-01-21T12:44:41Z | 6 | 2 | null | [
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"audio",
"speech",
"belarusian",
"be",
"dataset:sarulab-speech/commonvoice22_sidon",
"license:apache-2.0",
"model-index",
"region:us"
] | automatic-speech-recognition | 2026-01-19T18:34:50Z | # Whisper Small Belarusian (Common Voice 22 Sidon)
A fine-tuned version of openai/whisper-small optimized for Belarusian speech recognition. This model significantly outperforms both the base Whisper Small and the much larger Whisper Large V3 on Belarusian speech.
---
## Benchmark Results
| Model | Parameters ... | [] |
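A quick way to try a checkpoint like this is the 🤗 `pipeline` API — a sketch; `sample.wav` is a placeholder path:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Aleton/whisper-small-be-custom",
)
# Placeholder file; any mono audio clip works (decoded and resampled internally).
print(asr("sample.wav")["text"])
```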
FastDM/Wan2.2-T2V-A14B-Merge-Lightning-V1.0-Diffusers | FastDM | 2025-09-15T03:27:34Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-video",
"base_model:Wan-AI/Wan2.2-T2V-A14B-Diffusers",
"base_model:finetune:Wan-AI/Wan2.2-T2V-A14B-Diffusers",
"license:mit",
"diffusers:WanPipeline",
"region:us"
] | text-to-video | 2025-09-11T09:33:11Z | This model is a merger of [Wan-AI/Wan2.2-T2V-A14B-Diffusers](https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B-Diffusers) and the [Wan2.2-Lightning v1 model](https://huggingface.co/lightx2v/Wan2.2-Lightning/tree/main/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1); it can be run with the Diffusers pipeline.
Running with [FastDM](http... | [] |
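The card states the merged weights run with the stock Diffusers pipeline; a sketch under that claim — the low step count is an assumption based on the 4-step Lightning LoRA named in the merge:

```python
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

pipe = WanPipeline.from_pretrained(
    "FastDM/Wan2.2-T2V-A14B-Merge-Lightning-V1.0-Diffusers",
    torch_dtype=torch.bfloat16,
).to("cuda")

# 4 steps assumed from the "4steps" Lightning LoRA; tune guidance as needed.
frames = pipe("a corgi running on the beach", num_inference_steps=4).frames[0]
export_to_video(frames, "output.mp4", fps=16)
```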
Alibaba-DAMO-Academy/OmniCT-7B | Alibaba-DAMO-Academy | 2026-03-04T16:36:41Z | 155 | 4 | null | [
"safetensors",
"omnict_qwen2",
"medical",
"multimodal",
"report generation",
"Computed Tomography(CT)",
"VQA",
"image-text-to-text",
"conversational",
"en",
"arxiv:2602.16110",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"r... | image-text-to-text | 2026-03-04T14:46:35Z | <h2 align="center"><b>OmniCT: Towards a Unified Slice-Volume LVLM for Comprehensive CT Analysis</b></h2>
<p align="center">
<a href="https://arxiv.org/abs/2602.16110" target="_blank">📄 Paper</a>
<a href="https://huggingface.co/Alibaba-DAMO-Academy/OmniCT-3B" target="_blank">🤖 OmniCT-3B</a>
... | [] |
josangho99/e5-Fin | josangho99 | 2025-09-17T14:38:51Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:30000",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:intfloat/multilingual-e5-small",
"base_model:finetune... | sentence-similarity | 2025-09-17T14:38:37Z | # SentenceTransformer based on intfloat/multilingual-e5-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for... | [] |
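In practice such a model is used through the `sentence-transformers` API — a sketch; note that E5-family bases expect `query:`/`passage:` prefixes, and whether this fine-tune kept that convention is an assumption:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("josangho99/e5-Fin")

# E5 prefix convention assumed to carry over from the base model.
embeddings = model.encode([
    "query: what drove the central bank's rate decision?",
    "passage: Policymakers cited persistent core inflation.",
])
print(model.similarity(embeddings[0:1], embeddings[1:2]))  # cosine by default
```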
mariakrissmer/alias_demo_model | mariakrissmer | 2026-02-25T18:30:15Z | 8 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:10676",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:NeuML/pubmedbert-base-embeddings",
"base_model:finetu... | sentence-similarity | 2026-02-25T18:28:29Z | # SentenceTransformer based on NeuML/pubmedbert-base-embeddings
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [NeuML/pubmedbert-base-embeddings](https://huggingface.co/NeuML/pubmedbert-base-embeddings). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be us... | [] |
LHGlobal/Ministral-3-3B-Instruct-trl-sft | LHGlobal | 2025-12-06T21:19:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"dataset:trl-lib/llava-instruct-mix",
"base_model:mistralai/Ministral-3-3B-Instruct-2512-BF16",
"base_model:finetune:mistralai/Ministral-3-3B-Instruct-2512-BF16",
"endpoints_compatible",
"region:us"
] | null | 2025-12-06T20:25:53Z | # Model Card for Ministral-3-3B-Instruct-trl-sft
This model is a fine-tuned version of [mistralai/Ministral-3-3B-Instruct-2512-BF16](https://huggingface.co/mistralai/Ministral-3-3B-Instruct-2512-BF16) on the [trl-lib/llava-instruct-mix](https://huggingface.co/datasets/trl-lib/llava-instruct-mix) dataset.
It has been t... | [] |
wikilangs/sh | wikilangs | 2026-01-17T05:36:17Z | 0 | 0 | wikilangs | [
"wikilangs",
"nlp",
"tokenizer",
"embeddings",
"n-gram",
"markov",
"wikipedia",
"feature-extraction",
"sentence-similarity",
"tokenization",
"n-grams",
"markov-chain",
"text-mining",
"fasttext",
"babelvec",
"vocabulous",
"vocabulary",
"monolingual",
"family-slavic_south",
"text... | text-generation | 2026-01-17T05:35:37Z | # Serbian (Latin) - Wikilangs Models
## Comprehensive Research Report & Full Ablation Study
This repository contains NLP models trained and evaluated by Wikilangs, specifically on **Serbian (Latin)** Wikipedia data.
We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and word embeddings.
## 📋... | [
{
"start": 1310,
"end": 1331,
"text": "Tokenizer Compression",
"label": "training method",
"score": 0.7074939608573914
}
] |
VoltageVagabond/spam-classifier-liquid-GGUF | VoltageVagabond | 2026-04-16T22:09:20Z | 0 | 0 | null | [
"gguf",
"spam-detection",
"liquid-ai",
"llama-cpp",
"nlp",
"text-classification",
"en",
"dataset:VoltageVagabond/spam-email-dataset",
"base_model:LiquidAI/LFM2.5-1.2B-Instruct",
"base_model:quantized:LiquidAI/LFM2.5-1.2B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conve... | text-classification | 2026-04-16T12:24:42Z | # spam-classifier — GGUF
> **Educational Use Only**
> Created as a senior capstone project for **ENGT 375: Applied Machine Learning**
> at Old Dominion University (Spring 2026). Not intended for production use.
A fully merged, standalone GGUF of
[LiquidAI/LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1... | [
{
"start": 968,
"end": 972,
"text": "LoRA",
"label": "training method",
"score": 0.7828728556632996
}
] |
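A merged GGUF like this can be pulled straight from the Hub with `llama-cpp-python` — a sketch; the quant-file glob and the chat prompt are assumptions:

```python
from llama_cpp import Llama

# Glob over the repo's quant files; pick the quantization you actually want.
llm = Llama.from_pretrained(
    repo_id="VoltageVagabond/spam-classifier-liquid-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Classify as spam or ham: WIN A FREE CRUISE!!!"}],
)
print(out["choices"][0]["message"]["content"])
```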
JoJo2014/act_aloha_static_tape_bs16 | JoJo2014 | 2026-03-16T18:56:46Z | 28 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:lerobot/aloha_static_tape",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-16T18:55:24Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
Brenndh/practica1 | Brenndh | 2026-02-10T13:31:03Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2026-02-10T13:30:59Z | # Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documen... | [] |
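The template stops short of a loading example; for fastai repos the Hub helper is a one-liner — a sketch:

```python
from huggingface_hub import from_pretrained_fastai

# Downloads and deserializes the exported fastai Learner from the Hub.
learn = from_pretrained_fastai("Brenndh/practica1")
# learn.predict(item)  # item type depends on what the Learner was trained on
```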
tutuchen2000/Huihui-gemma-4-26B-A4B-it-abliterated | tutuchen2000 | 2026-04-15T02:55:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"abliterated",
"uncensored",
"any-to-any",
"base_model:google/gemma-4-26B-A4B",
"base_model:finetune:google/gemma-4-26B-A4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | any-to-any | 2026-04-15T02:55:09Z | # huihui-ai/Huihui-gemma-4-26B-A4B-abliterated
This is an uncensored version of [google/gemma-4-26B-A4B](https://huggingface.co/google/gemma-4-26B-A4B) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about it).
This is a c... | [] |